\section{Introduction}
\label{sec:intro}
\vspace{-10pt}
Composed of large collections of relatively simple components which autonomously combine to form predetermined structures, self-assembling systems provide a framework in which structures can grow from the bottom up, with precise placement of individual molecules. Natural self-assembling systems, the results of which include structures ranging from crystalline snowflakes to cellular membranes and viruses, have inspired a large body of research focused on both studying their properties and creating artificial self-assembling systems to mimic them. As experimental and theoretical research into self-assembly has increased in sophistication, particular attention has been focused upon the domain of \emph{algorithmic self-assembly}, which is self-assembly intrinsically directed by algorithms, or step-by-step procedures used to perform computations. An example of a model supporting algorithmic self-assembly is the abstract Tile Assembly Model (aTAM) \cite{Winf98}, which has spawned much research investigating its powers and limitations, and even more fundamentally those of algorithmic self-assembly in general.
In the aTAM, the fundamental components are square \emph{tiles} which have sticky \emph{glues} on the edges which allow them to bind with other tiles along edges sharing matching glues. Self-assembly begins from special \emph{seed} assemblies, and progresses as tiles attach one at a time to the growing assembly. As simple as the aTAM sounds, when initially introducing it in 1998 \cite{Winf98}, Winfree showed it to be capable of Turing universal computation, i.e. it can perform any computation possible by any computer. It was soon also shown that the algorithmic nature of the aTAM can be harnessed to build squares \cite{RotWin00} and general shapes \cite{SolWin07} with (information theoretically) optimal efficiency in terms of the number of unique kinds of tiles used in the assemblies. The rich set of results displaying the power of the aTAM (e.g. \cite{IUSA,jCCSA,jSADS} to name just a few), however, have appeared to be contingent upon a minimal value of $2$ for a system parameter known as the \emph{temperature}. The temperature of an aTAM system is the threshold which, informally stated, determines the total glue strength with which a tile must bind to a growing assembly in order to remain attached. Temperature-$2$ systems have the property that they can enforce \emph{cooperation}, in which the attachment of a tile requires it to correctly bind to at least two tiles already in the assembly (thus, those two tiles \emph{cooperate} to allow the new tile to attach). This cooperation allows each tile to effectively perform a primitive logical operation (e.g. \texttt{and}, \texttt{or}, \texttt{xor}, etc.) on the ``input'' values supplied by the tiles it binds to, and careful combination of these operations, just as with the gates in a modern electronic processor, allows complex computations to occur. In contrast, the requirement for cooperation cannot be enforced in temperature-$1$ systems, which only require one binding side, and it has thus been conjectured that temperature-$1$ aTAM systems are ``weak'' in the sense that they cannot perform universal computation or be guided algorithmically \cite{jLSAT1}. While this long-standing conjecture remains unproven in the general case of the aTAM, a growing body of work has focused on attempts to circumvent the limitations of temperature-$1$ self-assembly by making small variations to the aTAM. For instance, it has been shown that the following models are computationally universal at temperature $1$: the 3-D aTAM \cite{CooFuSch11}, aTAM systems which compute probabilistically \cite{CooFuSch11}, the restricted glue TAM (rgTAM) which allows glues with repulsive (rather than just attractive) forces \cite{SingleNegative}, the Dupled aTAM which allows tiles shaped like $2 \times 1$ rectangles \cite{Duples}, and the Signal-passing Tile Assembly Model \cite{Signals} which contains dynamically reconfigurable tiles.
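To make the temperature threshold concrete, the following minimal Python sketch implements the attachment rule described above; the encodings (tiles as maps from sides to label--strength glue pairs, assemblies as maps from positions to tiles) are our own illustrative choices rather than standard notation.
\begin{verbatim}
# Illustrative sketch of the aTAM attachment rule (encodings are ours).
OFFSETS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}
OPPOSITE = {'N': 'S', 'E': 'W', 'S': 'N', 'W': 'E'}

def binding_strength(assembly, pos, tile):
    """Total strength of glues matched between `tile` at `pos` and its
    neighbors; glues match on both label and strength."""
    total = 0
    for side, (dx, dy) in OFFSETS.items():
        neighbor = assembly.get((pos[0] + dx, pos[1] + dy))
        if neighbor is None:
            continue
        glue = tile.get(side)
        if glue is not None and neighbor.get(OPPOSITE[side]) == glue:
            total += glue[1]
    return total

def can_attach(assembly, pos, tile, temperature):
    """A tile may attach at an empty site iff its matched strength
    meets the temperature threshold."""
    return (pos not in assembly and
            binding_strength(assembly, pos, tile) >= temperature)

# Cooperation at temperature 2: a tile with two strength-1 glues needs
# both neighbors already present before it can attach.
a = {(0, 1): {'S': ('x', 1)}, (1, 0): {'W': ('y', 1)}}
t = {'N': ('x', 1), 'E': ('y', 1)}
assert can_attach(a, (0, 0), t, 2)                   # both cooperators
assert not can_attach({(0, 1): {'S': ('x', 1)}}, (0, 0), t, 2)  # one only
\end{verbatim}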
While such results may seem to indicate that those computationally universal models are as powerful as the temperature-$2$ aTAM, in \cite{IUNeedsCoop} it was shown that 3-D temperature-$1$ aTAM systems cannot possibly simulate the very basic ``glue cooperation'' exhibited in the temperature-$2$ aTAM, where a new tile actually binds to two already placed tiles. Essentially, the weaker form of cooperation exploited by the 3-D temperature-$1$ aTAM to perform computation does allow for the restriction of tile placements based on the prior placement of two other tiles, but that form of cooperation seems to be fundamentally restrictive and ``non-additive'', meaning that the previously placed tiles can only prevent certain future tile bindings, but cannot cooperate to support new binding possibilities. In fact, that lesser form of cooperation now appears to be the limit for those temperature-$1$ models which can compute (with perhaps the exception of the active signal-passing tiles), as it was shown in \cite{Duples} that the DaTAM also cannot simulate glue cooperation. It appears that the landscape of the relative powers of models across various parameters is more subtle and complicated than originally recognized, and the original notion of cooperative behavior requires further refinement.
The contributions of this paper are threefold. First, we show that the rgTAM is also not capable of simulating glue cooperation. Second, we introduce the Dupled restricted glue TAM (DrgTAM), which allows for both square tiles and ``duple'' tiles, which are simply pre-formed pairs of tiles joined along one edge before assembly begins, and which allows for glues with negative strength (i.e. those which exert repulsive force). However, like the rgTAM, it is restricted in that the magnitude of glue strengths cannot exceed $1$ (i.e. only strengths $1$ and $-1$ are allowed). Third, we show that the DrgTAM, created by combining two models (the rgTAM and the Dupled aTAM) which are computationally universal at temperature $1$ but which cannot independently simulate glue cooperation, is in some measures greater than the sum of its parts. That is, the resulting DrgTAM is capable of both universal computation \emph{and} the simulation of glue cooperation. This is the first such result for passive (i.e. non-active) tile assembly systems. In fact, we show the stronger result that there is a single tile set in the DrgTAM which can be configured to, in a temperature-$1$ system, simulate any arbitrary aTAM system, making it intrinsically universal for the aTAM. Coupled with the result in \cite{Duples} which proves that there are temperature-$1$ DaTAM systems (which are thus also DrgTAM systems) that cannot be simulated by the aTAM at temperature $2$, this actually implies that the DrgTAM is more powerful than the temperature-$2$ aTAM.
The paper is organized as follows. In Section~\ref{sec:prelims} we give high-level sketches of the definitions of the models and of the concepts of simulation used throughout the paper. In Section~\ref{rgTAS_cannot_coop} we prove that rgTAM systems cannot simulate the glue cooperation of temperature-$2$ aTAM systems, and in Section~\ref{sec:DrgTAM_sim} we present the proof that the DrgTAM can simulate the temperature-$2$ aTAM and in fact contains a tile set which is intrinsically universal for it. Due to space constraints, the formal definitions as well as all proofs can be found in the Appendix.
\section{Preliminaries}\label{sec:prelims}
\input{tam-informal}
\input{tam-formal}
\input{simulation_def}
\section{A temperature-$2$ aTAM system that cannot be simulated by any rgTAS}\label{rgTAS_cannot_coop}
\vspace{-10pt}
In this section we show that there exists a temperature-$2$ aTAM system that cannot be simulated by any rgTAM system. Here we give an overview of the TAS, $\mathcal{T}$, that we show cannot be simulated by any rgTAS, and an overview of the proof. For details of the proof, see Section~\ref{sec:negResProof} in the Appendix.
\vspace{-5pt}
\begin{theorem}\label{thm:rgTAScannotSIMaTAM}
There exists a temperature-$2$ aTAM system $\mathcal{T} = (T,\sigma, 2)$ such that $\mathcal{T}$ cannot be simulated by any rgTAS.
\end{theorem}
\vspace{-5pt}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.5in]{images/fingerFlagpole_overview}
\caption{(Figure taken from \cite{IUNeedsCoop}) (a) An overview of the tile assembly system $\mathcal{T} = (T,\sigma,2)$.~$\mathcal{T}$ runs at temperature 2 and its tile set $T$ consists of 18 tiles. (b) The glues used in the tile set $T$. Glues $g_{11}$ and $g_{14}$ are strength 1, all other glues are strength~2. Thus the keystone tile binds with two ``cooperative'' strength~1 glues. Growth begins from the pink seed tile $\sigma$: the top and bottom arms are one tile wide and grow to arbitrary, nondeterministically chosen, lengths. Two blue fingers grow as shown. (c) If the fingers happen to meet then the keystone, flagpole and flag tiles are placed; (d) if the fingers do not meet then growth terminates at the finger ``tips''.}
\label{fig:fingerFlagpole_overview}
\end{center}
\vspace{-25pt}
\end{figure}
Let $\mathcal{T} = (T, \sigma, 2)$ denote the system with $T$ and $\sigma$ given in Figure~\ref{fig:fingerFlagpole_overview}. The glues in the various tiles are all unique with the exception of the common east-west glue type used within each arm to induce non-deterministic and independent arm lengths. Glues are shown in part (b) of Figure~\ref{fig:fingerFlagpole_overview}.
Note that cooperative binding happens at most once during growth, when attaching the keystone tile to two arms of identical length. All other binding events are noncooperative and all glues are strength $2$ except for $g_{11}, g_{14}$ which are strength $1$.
The TAS $\mathcal{T}$ was used in~\cite{IUNeedsCoop} to show that there is a temperature-$2$ aTAM system that cannot be simulated by any temperature-$1$ aTAM system. To prove that there is no rgTAS that simulates $\mathcal{T}$, we use a proof similar to the one for aTAM systems; however, we must take special care to show that allowing a single negative glue does not give the model enough strength to allow for simulation of cooperative glue binding.
The proof is by contradiction. Suppose that $\mathcal{S} = (S,\sigma_S)$ is an rgTAS that simulates $\mathcal{T}$. We call an assembly sequence $\vec{\alpha} = (\alpha_0, \alpha_1, \dots)$ in an rgTAS \emph{detachment free} if for all $i\geq0$, $\alpha_{i+1}$ is obtained from $\alpha_i$ by the stable attachment of a single tile. The following lemma gives sufficient conditions for the existence of a detachment free assembly sequence.
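Because negative glues make stability a global property of the binding graph, it is worth making these two notions concrete. The following Python sketch (brute force, for intuition only; the encodings of assemblies and bonds are our own illustrative choices) checks stability and detachment freeness for small examples.
\begin{verbatim}
from itertools import combinations

def is_stable(assembly, strength, tau=1):
    """Every cut of the binding graph must have strength at least tau.
    `assembly` maps positions to tiles; `strength(assembly, p, q)` gives
    the (possibly negative) strength of the bond between positions p and
    q, and 0 if they are not adjacent or their glues mismatch."""
    positions = list(assembly)
    for r in range(1, len(positions)):
        for part in combinations(positions, r):  # all cuts, brute force
            side = set(part)
            cut = sum(strength(assembly, p, q)
                      for p in side for q in positions if q not in side)
            if cut < tau:
                return False
    return True

def is_detachment_free(sequence, strength, tau=1):
    """Each step adds exactly one tile, and every result is stable."""
    return all(set(a) <= set(b) and len(b) == len(a) + 1
               and is_stable(b, strength, tau)
               for a, b in zip(sequence, sequence[1:]))
\end{verbatim}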
\vspace{-5pt}
\begin{lemma}\label{lem:stable_assembly-main}
Let $\mathcal{S} = (S, \sigma_S)$ be an rgTAS and let $\alpha\in \prodasm{S}$ be a finite stable assembly. Furthermore, let $\beta$ be a stable subassembly of $\alpha$. Then there exists a detachment free assembly sequence $\vec{\alpha} = (\alpha_1, \alpha_2, \dots, \alpha_{n})$ such that $\alpha_1 = \beta$, and $\alpha_n=\alpha$.
\end{lemma}
\vspace{-10pt}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.7in]{images/bad-sim-overview-main}
\caption{An example assembly formed by $S$ simulating $\mathcal{T}$ -- (a) and (b), and the resulting producible assembly (c) constructed via a ``splicing'' technique that uses the window movie lemma. The assembly in (c) shows that $\mathcal{S}$ is incapable of valid simulation of $\mathcal{T}$.}
\label{fig:bad_sim_overview-main}
\end{center}
\vspace{-25pt}
\end{figure}
A corollary of this lemma is that if an rgTAS gives a valid simulation of $\mathcal{T}$, it can do so using detachment free assembly sequences. With detachment free assembly sequences in hand, it becomes possible to ``splice'' together subassemblies of producible assemblies of $\mathcal{S}$.
This splicing technique relies on a lemma referred to as the ``window movie lemma''. For aTAM systems, this lemma is shown in~\cite{IUNeedsCoop} (Lemma 3.1); here we give a version that holds for detachment free assembly sequences in the rgTAM. See Section~\ref{sec:negResProof} for the formal definitions of windows and window movies, and for a formal statement of the window movie lemma that we use. Figure~\ref{fig:bad_sim_overview-main} depicts the splicing technique. Using it, we show that if $\mathcal{S}$ can simulate $\mathcal{T}$, then it can also produce assemblies that violate the definition of simulation. In other words, we arrive at our contradiction and conclude that there is no rgTAS that can simulate $\mathcal{T}$.
\ifabstract
\later{
\section{Proof of Theorem~\ref{thm:rgTAScannotSIMaTAM}}\label{sec:negResProof}
Before we prove Theorem~\ref{thm:rgTAScannotSIMaTAM} we will give necessary conditions for any rgTAS system that can simulate $\mathcal{T}$. Let $\mathcal{S} = (S,\sigma_S)$ denote any rgTAS that simulates $\mathcal{T}$. We call an assembly sequence $\vec{\alpha} = (\alpha_0, \alpha_1, \dots)$ in an rgTAS \emph{detachment free} if for all $i\geq0$, $\alpha_{i+1}$ is obtained from $\alpha_i$ by the stable attachment of a single tile. The following lemma gives sufficient conditions for the existence of a detachment free assembly sequence.
\begin{lemma}\label{lem:stable_assembly}
Let $\mathcal{S} = (S, \sigma_S)$ be an rgTAS and let $\alpha\in \prodasm{S}$ be a finite stable assembly. Furthermore, let $\beta$ be a stable subassembly of $\alpha$. Then there exists a detachment free assembly sequence $\vec{\alpha} = (\alpha_1, \alpha_2, \dots, \alpha_{n})$ such that $\alpha_1 = \beta$, and $\alpha_n=\alpha$.
\end{lemma}
\begin{proof}
Let $W$ be the set of subassemblies of $\alpha$ such that $\eta\in W$ if and only if there exists a detachment free assembly sequence consisting of stable assemblies starting from $\beta$ with result $\eta$. Note that since $\alpha$ is finite, $W$ is finite. Therefore, we can let $\gamma$ denote an element of $W$ such that for any $\eta$ in $W$, $|\dom \gamma| \geq |\dom \eta|$. In other words, $\gamma$ is such that no other subassembly in $W$ has more tiles than $\gamma$. We will show that $\gamma = \alpha$.
For the sake of contradiction, assume that $\gamma \neq \alpha$. Then there is some tile of $\alpha$ that is not in $\gamma$. Consider the binding graph of $\alpha$ with the nodes corresponding to tiles of $\gamma$ removed, and call the resulting graph $G$. Notice that a connected component (possibly with edges corresponding to the negative glue) of $G$ corresponds to a subassembly, say $x$, of $\alpha$ such that no tile of $x$ is in $\gamma$. Now, since $\alpha$ is stable, the cut $c$ of the binding graph of $\alpha$ that separates $x$ from $\alpha$ must have strength greater than $0$. Since $x$ is taken to be a connected component of $G$, all of the edges defining the cut $c$ correspond to exposed glues of $\gamma$. Since the strengths of these edges sum to a positive value, at least one position must receive positive strength across the cut, and hence at least one tile of $x$ can stably bind to $\gamma$, resulting in $\gamma'$. Note that $\gamma'$ is in $W$ since it is obtained from $\gamma$ by a single tile addition. Finally, the fact that $|\dom \gamma'| = |\dom \gamma| + 1 > |\dom \gamma|$ contradicts our choice of $\gamma$.
\end{proof}
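The argument above is constructive and greedy. A hedged sketch of the corresponding procedure follows (the rgTAM stability test is abstracted as a predicate \texttt{can\_attach}, since the encodings are our own):
\begin{verbatim}
def detachment_free_sequence(beta, alpha, can_attach):
    """Greedy completion from the proof above: starting from the stable
    subassembly beta, repeatedly attach any tile of alpha that can
    stably attach, until alpha is reached. `can_attach(gamma, pos,
    tile)` abstracts the rgTAM stability test; assemblies map positions
    to tiles."""
    gamma = dict(beta)
    sequence = [dict(gamma)]
    while len(gamma) < len(alpha):
        candidates = [(pos, tile) for pos, tile in alpha.items()
                      if pos not in gamma and can_attach(gamma, pos, tile)]
        if not candidates:
            # The cut argument above guarantees this never happens when
            # beta is a stable subassembly of the stable assembly alpha.
            raise RuntimeError("no stably attachable tile")
        pos, tile = candidates[0]
        gamma[pos] = tile
        sequence.append(dict(gamma))
    return sequence
\end{verbatim}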
The following corollary states that if an rgTAS gives a valid simulation of $\mathcal{T}$, it can do so using detachment free assembly sequences.
\begin{corollary}\label{cor:detachmentfree}
Let $\mathcal{S} = (S, \sigma_S)$ be an rgTAS that simulates $\mathcal{T}$ under $R$, and let $\alpha$ be in $\prodasm{T}$. Then there exists a stable assembly $\alpha'' \in \prodasm{S}$ and a detachment free assembly sequence $\vec{\alpha}$ starting from $\sigma_S$ with result $\alpha''$ such that $\alpha''$ represents $\alpha$ under $R$.
\end{corollary}
\begin{proof}
Let $\alpha'$ be in $\prodasm{S}$ such that $\alpha'$ represents $\alpha$ under $R$.
We obtain $\alpha''$ from $\alpha'$ by allowing detachment to occur for each cut of $\alpha'$ with strength $<1$. In particular, there exists an assembly sequence $\vec{\alpha}_{d} = (\alpha_1, \alpha_2, \dots, \alpha_n)$ where $\alpha_1 = \alpha'$, $\alpha_n = \alpha''$, and $\alpha_{i+1}$ is obtained from $\alpha_i$ by the detachment along a strength $<1$ cut. The existence of $\vec{\alpha}_d$ follows from the fact that as detachment occurs in $\alpha_i$ along a cut $c$, one side of the cut must be an assembly that maps to $\alpha$ under $R$ (by the definition of simulation in Section~\ref{sec:simulation_def_formal}). We take this assembly to be $\alpha_{i+1}$.
Therefore, we have a stable assembly $\alpha''$ that represents $\alpha$ under $R$. Finally, since the seed $\sigma_S$ is a stable subassembly of $\alpha''$, by Lemma~\ref{lem:stable_assembly} there exists a detachment free assembly sequence $\vec{\alpha}$ with result $\alpha''$.
\end{proof}
To show that $\mathcal{T}$ cannot be simulated by an rgTAS, we will use the window movie lemma. This lemma was introduced in~\cite{IUNeedsCoop} (Lemma 3.1) and was used to show that there does not exist a temperature-$1$ aTAM system that can simulate $\mathcal{T}$. We will start by stating the definitions of a window and a window movie.
\begin{definition}
A \emph{window} $w$ is a set of edges forming a cut-set in the infinite grid graph.
\end{definition}
Often a window is depicted as a path (possibly closed) in the 2D plane. See Figure~\ref{fig:bad_sim_overview} for an example. Given a window and an assembly sequence, one can observe the order in which glues appear along the window over the course of the assembly sequence. This gives rise to the following definition.
\begin{definition}\label{def:windowMovie}
Given an assembly sequence $\vec{\alpha}$ and a window $w$, the associated {\em window movie} is the maximal sequence $M_{\vec{\alpha},w} = (v_{0}, g_{0}) , (v_{1}, g_{1}), (v_{2}, g_{2}), \ldots$ of pairs of grid graph vertices $v_i$ and glues $g_i$, given by the order of the appearance of the glues along window $w$ in the assembly sequence $\vec{\alpha}$.
Furthermore, if $k$ glues appear along $w$ at the same instant (this happens upon placement of a tile which has multiple sides touching $w$) then these $k$ glues appear contiguously and are listed in lexicographical order of the unit vectors describing their orientation in $M_{\vec{\alpha},w}$.
\end{definition}
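For intuition, the following hedged Python sketch extracts a window movie from a detachment free assembly sequence; the encodings (assemblies as dicts from positions to tiles, tiles as dicts from sides to glues, a window as a set of edges between adjacent positions) are our own illustrative choices.
\begin{verbatim}
OFFSETS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}

def window_movie(sequence, window):
    """Return the sequence of (vertex, glue) pairs in order of
    appearance along `window`, with simultaneous glues listed in
    lexicographical order of the unit vectors of their sides, as in
    the definition above. `window` is a set of frozenset({p, q})
    edges of the grid graph."""
    movie = []
    for prev, curr in zip(sequence, sequence[1:]):
        pos, = set(curr) - set(prev)  # the single tile placed this step
        tile = curr[pos]
        for side, (dx, dy) in sorted(OFFSETS.items(),
                                     key=lambda kv: kv[1]):
            nbr = (pos[0] + dx, pos[1] + dy)
            glue = tile.get(side)
            if glue is not None and frozenset({pos, nbr}) in window:
                movie.append((pos, glue))
    return movie
\end{verbatim}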
Now we can state the window movie lemma for detachment free assembly sequences.
\begin{lemma}[Window movie lemma]
\label{lem:windowmovie}
Let $\vec{\alpha} = (\alpha_i \mid 0 \leq i < l)$ and $\vec{\beta} = (\beta_i \mid 0 \leq i < m)$, with
$l,m\in\Z^+ \cup \{\infty\}$,
be \emph{detachment free} assembly sequences in $\mathcal{T}$ with results $\alpha$ and $\beta$, respectively.
Let $w$ be a window that partitions~$\alpha$ into two configurations~$\alpha_L$ and $\alpha_R$, and $w' = w + \vec{c}$ be a translation of $w$ that partitions~$\beta$ into two configurations $\beta_L$ and $\beta_R$.
Furthermore, define $M_{\vec{\alpha},w}$, $M_{\vec{\beta},w'}$ to be the respective window movies for $\vec{\alpha},w$ and $\vec{\beta},w'$, and define $\alpha_L$, $\beta_L$ to be the subconfigurations of $\alpha$ and $\beta$ containing the seed tiles of $\alpha$ and $\beta$, respectively.
Then if $M_{\vec{\alpha},w} = M_{\vec{\beta},w'}$, it is the case that the following two assemblies are also producible:
(1) the assembly $\alpha_L \beta'_R = \alpha_L \cup \beta'_R$ and
(2) the assembly $\beta'_L \alpha_R = \beta'_L \cup \alpha_R$, where $\beta'_L=\beta_L-\vec{c}$ and $\beta'_R=\beta_R-\vec{c}$.
\end{lemma}
Under the assumption that the assembly sequences in Lemma~\ref{lem:windowmovie} are detachment free, Lemma~\ref{lem:windowmovie} follows directly from the proof of the window movie lemma for aTAM systems (Lemma 3.1 in~\cite{IUNeedsCoop}).
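The splicing operation used with this lemma is itself straightforward; as a hedged sketch (assemblies as dicts from integer positions to tiles, an encoding of our own):
\begin{verbatim}
def splice(alpha_L, beta_R, c):
    """Form alpha_L beta'_R = alpha_L U (beta_R - c), where c = (cx, cy)
    is the translation between the windows w and w'. Matching window
    movies guarantee that the two halves agree along the window itself,
    so any overlap is consistent."""
    spliced = dict(alpha_L)
    for (x, y), tile in beta_R.items():
        spliced[(x - c[0], y - c[1])] = tile
    return spliced
\end{verbatim}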
We can also define a restricted form of a window movie. For windows $w$ and $w'$, and assembly sequences $\vec{\alpha}$ and $\vec{\beta}$, Lemma~\ref{lem:windowmovie} holds even if the window movies
$M_{\vec{\alpha},w}$ and $M_{\vec{\beta},w'}$ match only on specific \emph{submovies} (subsequences of the movies $M_{\vec{\alpha},w}$ and $M_{\vec{\beta},w'}$). We specify a particular submovie as follows.
Consider the window movie $M_{\vec{\alpha},w}$. Location-glue pairs are added to a window movie by observing tile placements given by $\vec{\alpha}$. Suppose that step $i$ of $\vec{\alpha}$ is the placement of a tile $t$ that adds a location-glue pair $(l,g)$ to the window movie. We call this tile placement \emph{non-window crossing} if the tile can stably bind even in the absence of any positive glue along the window $w$.
We also define a \emph{window crossing submovie} to be the subsequence of a window movie, $M$, that consists of all of the steps of $M$ except for the steps corresponding to the addition of a non-window crossing tile. We denote the window crossing submovie of $M$ by ${\cal W}(M)$. Note that every window movie has a unique window crossing submovie. Then, Corollary~\ref{cor:windowmovie} says that in certain cases, Lemma~\ref{lem:windowmovie} holds even if two window movies only match on their window crossing submovies.
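The window crossing test itself can be sketched directly from the definition; in the following hedged Python fragment, \texttt{bond\_strength} is an assumed helper returning the signed strength of the bond a tile forms on a given side (or $0$ on a glue mismatch):
\begin{verbatim}
OFFSETS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}

def is_window_crossing(assembly, pos, tile, window, bond_strength,
                       tau=1):
    """A placement is window crossing iff the tile could NOT stably
    bind with every positive glue along the window removed; negative
    glues along the window still count, matching the definition."""
    total = 0
    for side, (dx, dy) in OFFSETS.items():
        nbr = (pos[0] + dx, pos[1] + dy)
        if nbr not in assembly:
            continue
        s = bond_strength(tile, side, assembly[nbr])
        if s > 0 and frozenset({pos, nbr}) in window:
            continue    # discount positive glues along the window
        total += s
    return total < tau  # cannot bind without the window's help
\end{verbatim}
The window crossing submovie ${\cal W}(M)$ is then simply the subsequence of steps of $M$ whose placements satisfy this predicate.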
\begin{corollary}
\label{cor:windowmovie}
Suppose that the following two conditions hold.
\begin{enumerate}
\item[(1)] For all $(l,g)$ in $M_{\vec{\alpha},w}$ such that $(l,g)$ corresponds to the placement of a tile $t$ with north glue $g$,
if there exists a tile $t'$ in $\beta$ at location $l' = l + \vec{c} + (0,1)$ such that the south glue $g'$ of $t'$ and $g$ are the negative glue, then there exists a tile in $\alpha$ at location $l + (0,1)$ with south glue $g$. We also include the analogous conditions for $(l,g)$ in $M_{\vec{\alpha},w}$ where $g$ is a south, east, or west glue.
\item[(2)] For all $(l',g')$ in $M_{\vec{\beta},w'}$ such that $(l',g')$ corresponds to the placement of a tile $t'$ with north glue $g'$,
if there exists a tile $t$ in $\alpha$ at location $l = l' - \vec{c} + (0,1)$ such that the south glue $g$ of $t$ and $g'$ are the negative glue, then there exists a tile in $\beta$ at location $l' + (0,1)$ with south glue $g'$. We also include the analogous conditions for $(l',g')$ in $M_{\vec{\beta},w'}$ where $g'$ is a south, east, or west glue.
\end{enumerate}
Then, the statement of Lemma~\ref{lem:windowmovie} holds if the window movies $M_{\vec{\alpha},w}$ and $M_{\vec{\beta},w'}$ are replaced by their window crossing submovies ${\cal W}\left(M_{\vec{\alpha},w}\right)$ and ${\cal W}\left(M_{\vec{\beta},w'}\right)$.
\end{corollary}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=4.5in]{images/fingerFlagpole_overview}
\caption{(Figure taken from \cite{IUNeedsCoop}) (a) An overview of the tile assembly system $\mathcal{T} = (T,\sigma,2)$.~$\mathcal{T}$ runs at temperature 2 and its tile set $T$ consists of 18 tiles. (b) The glues used in the tile set $T$. Glues $g_{11}$ and $g_{14}$ are strength 1, all other glues are strength~2. Thus the keystone tile binds with two ``cooperative'' strength~1 glues. Growth begins from the pink seed tile $\sigma$: the top and bottom arms are one tile wide and grow to arbitrary, nondeterministically chosen, lengths. Two blue fingers grow as shown. (c) If the fingers happen to meet then the keystone, flagpole and flag tiles are placed; (d) if the fingers do not meet then growth terminates at the finger ``tips''.}
\label{fig:fingerFlagpole_overview_append}
\end{center}
\end{figure}
Condition (1) in Corollary~\ref{cor:windowmovie} says that when we attempt to assemble $\alpha_L \beta'_R$, we can rest assured that there are no negative glue interactions across the window $w$ between negative glues exposed by tiles of $\beta'_R$ and negative glues exposed by $\alpha_L$ that are not already present in the assembly $\alpha$. This implies that, using the assembly sequence $\vec{\alpha}$ to attach tiles from $\alpha_L$ and the assembly sequence $\vec{\beta}$ to attach tiles from $\beta'_R$, the assembly $\alpha_L \beta'_R$ can be assembled, since there are no negative glue interactions in $\alpha_L \beta'_R$ that are not present in $\alpha$ or $\beta$. Similarly, Condition (2) says the same for the assembly of $\beta'_L \alpha_R$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.5in]{images/bad-sim-overview}
\caption{An example of an assembly formed by $S$ simulating $\mathcal{T}$ and the identical window crossing submovies $w$ and $w'$ -- (a) and (b), and the resulting producible assembly constructed via Corollary~\ref{cor:windowmovie} (c). (d) shows the windows $w$ and $w'$ (which are equivalent up to shifting). The portions of these windows that are determined by $c_{\min}$ are labeled.}
\label{fig:bad_sim_overview}
\end{center}
\end{figure}
With Corollary~\ref{cor:detachmentfree} and Corollary~\ref{cor:windowmovie}, we are now ready to prove Theorem~\ref{thm:rgTAScannotSIMaTAM}. For the sake of contradiction, suppose that $\mathcal{S} = (S, \sigma_S)$ is an rgTAS that simulates the finger and flagpole system $\mathcal{T}$ with representation function $R: \mathcal{A}^{S} \rightarrow \mathcal{A}^T$ and scale factor $m\in \mathbb{N}$.
Now let $\alpha_d$ in $\termasm{T}$ be the assembly where the top and bottom arms are $d$ tiles long.
By Corollary~\ref{cor:detachmentfree}, we can find a detachment free assembly sequence $\vec{\alpha}'_d$ in $\mathcal{S}$ such that the stable result $\alpha'_d$ represents $\alpha_d$.
Now let $c$ be a set of edges in the binding graph $G$ of $\alpha'_d$ such that $c$ is a cut-set of the subgraph of $G$ corresponding to the subassembly, $\eta$, of tiles contained in the keystone macrotile, the flagpole macrotile, the flag macrotile, and the macrotiles immediately surrounding these macrotiles in $\alpha'_d$. Then let $C$ be the set of all such cuts $c$. Since $|C| < \infty$, we can find a cut $c_{\min}$ such that for any cut $c$ in $C$, the strength of $c_{\min}$ is less than or equal to the strength of $c$. In other words, $c_{\min}$ is a cut with minimal strength.
For the proof here, we must be more selective about our choice of assembly sequence $\vec{\alpha}'_d$ resulting in $\alpha'_d$. In this proof, we will use the window movie lemma for detachment free assembly sequences (Lemma~\ref{lem:windowmovie}). For some $d$ to be chosen later, the windows, $w$ and $w'$, that we will use for Lemma~\ref{lem:windowmovie} will be windows that cut an arm of $\alpha'_d$ vertically.
Note that we can also ensure that, other than the edges corresponding to bonds between tiles belonging to macrotiles of an arm, the only edges in $w$ or $w'$ are exactly the edges of $c_{\min}$. Moreover, without loss of generality, suppose that a tile in the flagpole region stably binds below the cut $c_{\min}$. We will choose the windows $w$ and $w'$ to cut the bottom arm of $\alpha_d'$. See Figure~\ref{fig:bad_sim_overview} for an example of such windows.
\begin{claim}
$\vec{\alpha}'_d$ can be chosen so that every location-glue pair of $M_{\vec{\alpha}'_d, w}$ or $M_{\vec{\alpha}'_d, w'}$ whose glues lie on $c_{\min}$ corresponds to a tile placement that is non-window crossing.
\end{claim}
For the moment, suppose that the claim holds and $\vec{\alpha}'_d$ is chosen as such. Then, let $g$ be the number of glues of tiles in $S$. We will show that $\mathcal{S}$ is capable of producing an assembly sequence that yields an invalid production for simulation. For any $d \in \mathbb{N}$, it must be the case that $\mathcal{S}$ can simulate the production of the assembly $\alpha_d$ in $\termasm{T}$ where the top and bottom arms of $\alpha_d$ are $d$ tiles long. Note that for every $d$, $\alpha_d$ is of the form depicted in (c) of Figure~\ref{fig:fingerFlagpole_overview_append}.
Figure~\ref{fig:bad_sim_overview} shows our choice of windows $w$ and $w'$ that cut an arm of some $\alpha_d'$ vertically. By the claim, we can assume that every location-glue pair of $M_{\vec{\alpha}'_d, w}$ and $M_{\vec{\alpha}'_d, w'}$ corresponds to non-window crossing tile additions forming $\eta$. Therefore, the window crossing submovies $\mathcal{W}(M_{\vec{\alpha}'_d, w})$ and $\mathcal{W}(M_{\vec{\alpha}'_d, w'})$ only contain location-glue pairs corresponding to the bindings of tiles belonging to the bottom arm of $\alpha_d'$ (i.e. location-glue pairs along the vertical portion of the windows).
Then, since $m$ (the macrotile size) and $g$ (the number of glues of tile types in $S$) are fixed constants, the number of distinct window crossing submovies along such vertical windows is bounded by a constant depending only on $m$ and $g$; hence, by the pigeonhole principle, for $d$ sufficiently large there exist two such windows $w$ and $w'$ such that $w'$ is a horizontal translation of $w$ and the window crossing submovies $\mathcal{W}(M_{\vec{\alpha}'_d, w})$ and $\mathcal{W}(M_{\vec{\alpha}'_d, w'})$ match. The top assemblies in Figure~\ref{fig:bad_sim_overview} give an example of two equivalent window movies. Notice that we can also choose $w$ and $w'$ so that the distance between them is at least $3m$. Then, $w$ (respectively $w'$) divides $\alpha_d'$ into configurations $\alpha_L$ and $\alpha_R$ (respectively $\beta_L$ and $\beta_R$). By Corollary~\ref{cor:windowmovie}, $\alpha_L\beta'_R$ (depicted in Figure~\ref{fig:bad_sim_overview}(c)) is a valid producible assembly in $\mathcal{S}$. Notice that $\alpha_L\beta'_R$ is stable and contains a tile in the flagpole macrotile region. This region lies outside of any permissible fuzz region. (See Section~\ref{sec:simulation_def_formal} for the definition of fuzz.) Therefore, the existence of the producible assembly $\alpha_L\beta'_R$ shows that $\mathcal{S}$ is not a valid simulation.
To finish the proof, we now prove the claim.
\noindent\textit{Proof of the claim.} Here we show that $\vec{\alpha}'_d$ as defined above can be chosen so that each glue lying on $c_{\min}$ corresponds to a tile placement that is non-window crossing.
The proof of this claim is similar to the proof of Lemma~\ref{lem:stable_assembly}.
First, let $W$ be the set of subassemblies of $\alpha_d'$ such that $\eta\in W$ if and only if there exists a detachment free assembly sequence consisting of stable assemblies starting from $\sigma_S$ with result $\eta$ in which every location-glue pair of $M_{\vec{\alpha}'_d, w}$ whose glue lies on $c_{\min}$ corresponds to a tile placement that is non-window crossing. (The proof is similar for $M_{\vec{\alpha}'_d, w'}$.)
Note that since $\alpha_d'$ is finite, $W$ is finite. Therefore, we can let $\gamma$ denote an element of $W$ such that for any $\eta$ in $W$, $|\dom \gamma| \geq |\dom \eta|$. In other words, $\gamma$ is such that no other subassembly in $W$ has more tiles than $\gamma$. We will show that $\gamma = \alpha_d'$.
For the sake of contradiction, assume that $\gamma \neq \alpha_d'$. Then there is some tile of $\alpha_d'$ that is not in $\gamma$. Consider the binding graph of $\alpha_d'$ with the nodes corresponding to tiles of $\gamma$ removed, and call the resulting graph $G$. Notice that a connected component (possibly with edges corresponding to the negative glue) of $G$ corresponds to a configuration of tiles, say $x$, in $\alpha_d'$ such that no tile of $x$ is in $\gamma$. Now, since $\alpha_d'$ is stable, the cut $c$ of the binding graph of $\alpha_d'$ that separates $x$ from $\alpha_d'$ must have strength greater than $0$. Since $x$ is taken to be a connected component of $G$, all of the edges defining the cut $c$ correspond to exposed glues of $\gamma$. Since the strengths of these edges sum to a positive value, either (1) at least one tile of $x$ can stably bind to $\gamma$, resulting in $\gamma'$ in $W$, or (2) no tile can stably bind to $\gamma$ without the added strength of binding to a glue corresponding to an edge of $c_{\min}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.5in]{images/rewire_cut}
\caption{A schematic picture of ``rewiring'' the cut $c_{\min}$ of $\eta$. On the left we see the cut $c$ as well as $c_{\min}$. $c_{\min}$ is a cut of strength $0$, and the only positive strength glue on the cut $c$ is labeled $g$ in the figure. On the right, we see $(c \setminus c_{\min}) \cup (c_{\min} \setminus c)$. Notice that this new cut has strength less than the strength of $c_{\min}$.}
\label{fig:rewire_cut}
\end{center}
\end{figure}
In Case (1), note that $|\dom \gamma'| = |\dom \gamma| + 1 > |\dom \gamma|$. This contradicts our choice of $\gamma$. In Case (2), it must be the case that the cut $c$ and the cut $c_{\min}$ share some edges with positive strength. This is because the only reason we cannot place a tile using a positive strength glue on $c$ is that this glue is also in $c_{\min}$, and we are not allowing window crossing tiles to attach across $c_{\min}$ in the assembly of $\gamma$. Then, notice that the strengths of the edges belonging to $c \setminus c_{\min}$ must sum to zero or less; otherwise a tile could be added along this cut, which would once again contradict our choice of $\gamma$. Then, note that the edges in $(c \setminus c_{\min}) \cup (c_{\min} \setminus c)$ form a cut of the subassembly $\eta$ (defined above the statement of the claim) with strength strictly less than the strength of $c_{\min}$. Intuitively, $(c \setminus c_{\min}) \cup (c_{\min} \setminus c)$ is a cut that is formed by ``rewiring'' $c_{\min}$ using $c \setminus c_{\min}$, and since the strength of $c \setminus c_{\min}$ is less than $1$ and the strength of $c \cap c_{\min}$ is greater than $0$, this rewiring results in a cut with less strength than $c_{\min}$. See Figure~\ref{fig:rewire_cut} for a schematic picture of this rewiring. This contradicts our choice of $c_{\min}$. Hence, in either Case (1) or (2), we arrive at a contradiction. Therefore, $\gamma = \alpha_d'$. This proves the claim.
}
\section{Simulation of the aTAM with the DrgTAM}\label{sec:DrgTAM_sim}
\vspace{-10pt}
In this section, given an aTAM system $\calT=(T,\sigma, 2)$, we describe how to simulate $\calT$ with a DrgTAS at temperature 1 with $O(1)$ scale factor and tile complexity $O(|T|)$. It will then follow from \cite{IUSA} that there exists a tile set in the DrgTAM at $\tau=1$ which is intrinsically universal for the aTAM at any temperature, i.e. it can be used to simulate any aTAM system of any temperature.
\vspace{-5pt}
\begin{theorem}\label{thm:DrgTAS-sim}
For every aTAM system $\calT=(T,\sigma, 2)$, there exists a DrgTAS $\mathcal{D} = (T_{\mathcal{D}}, S, D, \sigma', 1)$ such that $\mathcal{D}$ simulates $\calT$ with $O(1)$ scale factor and $|S\cup D| = O(|T|)$.
\end{theorem}
\vspace{-5pt}
We now provide a high-level overview of the construction. For the remainder of this section, $\calT=(T,\sigma, 2)$ will denote an arbitrary TAS being simulated, $\mathcal{D} = (T_{\mathcal{D}}, S, D, \sigma', 1)$ the simulating DrgTAS, and $R$ the representation function which maps blocks of tiles in $\mathcal{D}$ to tiles in $\calT$. The system $\calT$ is simulated by a DrgTAS through the use of macrotiles which consist of the components shown in Figure~\ref{fig:macro_labeled-main}. Note that macrotiles are not necessarily composed of all of the components shown in Figure~\ref{fig:macro_labeled-main}, but will consist of at least one of the subassemblies labeled probe.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/macro_labeled}
\caption{Macrotile probes, points of cooperation, and points of competition}
\label{fig:macro_labeled-main}
\end{center}
\vspace{-25pt}
\end{figure}
Informally, the subassemblies labeled probe, which we will now refer to as probes, ``simulate'' the glues of the tiles in $T$. If a probe is simulating a glue of strength $2$, then it does not require the assistance of any other probes in order to complete the macrotile containing it. On the other hand, if the glue which the probe is simulating is of strength $1$, then the probe cannot begin assembling a new macrotile until another probe arrives which simulates a glue with which it can cooperate to place a new tile in $\calT$. Before probes can begin the growth of a new macrotile, they must claim (i.e. place a tile in) one of the \emph{points of competition} (shown in red in Figure~\ref{fig:macro_labeled-main}), depending on the configuration of the macrotile. Once a special tile is placed in one of the points of competition, the representation function $R$ maps the macrotile to the corresponding tile in $T$, and the growth of the macrotile can begin.
We use the following conventions for our figures. All duples are shown in darker colors (even after they are broken apart) and singletons are shown in lighter colors. Negative glues are represented by red squares protruding from tiles, and positive glues are represented by all other colored squares protruding from tiles. We represent glue mismatches (a glue mismatch occurs when two different glues are adjacent or a glue is adjacent to a tile side that does not have a glue) by showing the mismatching glues receded into the tiles from which they would normally protrude. A red box enclosing a subassembly indicates that subassembly has total binding strength 0.
\vspace{-15pt}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/gad_acoop}
\caption{An assembly sequence of an adjacent cooperator gadget.}
\label{fig:gad_acoop-main}
\end{center}
\vspace{-25pt}
\end{figure}
The cooperator gadget is the underlying mechanism that allows for the DrgTAM to simulate the cooperative placement of a tile in a $\tau\ge2$ TAS. We consider two cases of cooperative tile placement: 1) the tiles that cooperatively contribute to the placement of a tile have adjacent corners (e.g. one is north of the location to be cooperatively tiled while the other is to the east or west), and 2) the tiles that cooperatively contribute to the placement of a tile are non-adjacent, that is there is a tile wide gap between the two tiles. We create a cooperator gadget for each of these two cases. Not surprisingly, we call the cooperator gadget that mimics the former case the \emph{adjacent cooperator gadget} and the cooperator gadget that mimics the latter case the \emph{gap cooperator gadget}. Each of these two gadgets is asymmetric in nature and consists of two parts: 1) a finger and 2) a resistor. The function of the resistor is to cause a duple that is attached to the finger gadget to break apart and expose the internal glue of the duple which can then be used for binding of another tile.
An adjacent cooperator gadget is shown in Figure~\ref{fig:gad_acoop-main}. Part (a) of this figure depicts the finger part of the gadget, and the subassembly labeled (b) is the resistor. Note that the only tiles which have the ability to bind to the exposed glues are duples with a negative glue that is aligned with the negative glue that is adjacent to the exposed glues. This means that neither subassembly can grow any further until its counterpart arrives. In Figure~\ref{fig:gad_acoop-main} parts (c) - (e) we see the assembly sequence showing the interaction between the two parts of the cooperator gadget. In this particular assembly sequence we have assumed that the resistor piece of the gadget has arrived first. In part (c), we see the arrival of a tile (presumably from a probe) which allows for the duple that is a part of the finger gadget to bind with total strength 1. The 0 strength cut that is induced by this binding event is shown by the red box in part (d) of the figure. Since the tile encapsulated in the red box is bound with total strength 0, it eventually detaches which leads us to part (e) of the figure. Notice that the dissociation event has caused a new glue to be exposed. This glue now allows for the binding of a duple as shown in part (e) of Figure~\ref{fig:gad_acoop-main}.
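One consistent reading of the strength bookkeeping in parts (c)--(e) can be checked directly; the bond layout below is our own guess from the figure, for illustration only (in the DrgTAM all glue strengths have magnitude at most $1$):
\begin{verbatim}
def cut_strength(bonds, piece):
    """Total strength of the bonds crossing the cut around `piece`.
    Bonds are triples (tile1, tile2, signed strength)."""
    return sum(s for (p, q, s) in bonds if (p in piece) != (q in piece))

bonds = [('arm', 'd1', 1),        # duple half d1 binds the finger's arm
         ('probe', 'd1', 1),      # newly arrived probe tile adds +1
         ('d1', 'd2', 1),         # internal duple glue
         ('resistor', 'd2', -1)]  # negative glue from the resistor

assert cut_strength(bonds, {'d1', 'd2'}) == 1  # duple binds: strength 1
assert cut_strength(bonds, {'d2'}) == 0  # the boxed half sits on a
# strength-0 cut, so it eventually detaches, exposing the internal glue
\end{verbatim}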
\begin{figure}[htp]
\begin{center}\includegraphics[width=4in]{images/gad_coop}
\caption{An assembly sequence of a gap cooperator gadget.}
\label{fig:gad_coop-main}
\end{center}
\vspace{-25pt}
\end{figure}
Figure~\ref{fig:gad_coop-main} shows a gap cooperator gadget which is a simple extension of the adjacent cooperator gadget. This extension of the adjacent cooperator gadget allows for a crosser gadget (described below) to grow a path of tiles in between the two parts of the gadget. This gadget allows a new glue to be exposed upon the arrival of a negative glue (Figure~\ref{fig:gad_coop-main} part (c)) which causes half of the duple to detach (shown in part (d) of the figure). This allows a duple to attach as shown in Figure~\ref{fig:gad_coop-main}(e) which depends on both of the glues exposed by the two parts of the gadget. Notice that the binding of this tile cannot occur unless both parts of the gadget are present.
The previous gadgets showed that in order for two probes to cooperate, they must be connected by a path of tiles. In order for other probes to cross in between these connected probes we utilize what we call a ``crosser gadget''. The assembly sequence for a crosser is shown in Figure~\ref{fig:gad_crosser-main}. Growth of the gadget begins with the placement of a singleton which is prevented from growing further. This singleton exposes glues which allow for duples to bind (Figure~\ref{fig:gad_crosser-main}(b) and (c)) that cause the path of tiles blocking the singleton's growth to detach (Figure~\ref{fig:gad_crosser-main}(d)). Note that the attachment of these duples cannot occur before the singleton arrives since they would only have total binding strength zero.
Section~\ref{sec:gadgets} offers a more in-depth description of the gadgets described above.
We can now use these gadgets to give a more complete description of the probes which are shown in Figure~\ref{fig:macro_labeled-main}. All of the numbered regions represent gadgets. Gadgets labeled 1-3 in the figure represent gap cooperator gadgets which allow for cooperation between the probes to which they are attached. The gadgets labeled 5-9 denote adjacent cooperator gadgets which allow for the potential of cooperation between the probes to which they are attached. Finally, the gadgets labeled 10 and 11 are cooperator gadgets which allow for Probe W to trigger the growth of the second arms of Probe N and Probe S. See Section~\ref{sec:probes} for more details about the structure of probes and their accompanying gadgets.
\vspace{-15pt}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.7in]{images/gad_crosser}
\caption{An assembly sequence of a crosser gadget.}
\label{fig:gad_crosser-main}
\end{center}
\vspace{-25pt}
\end{figure}
The output of the representation function for a particular macrotile depends on the three regions labeled 1-3 in Figure~\ref{fig:macro_labeled-main}. If a special tile is placed in region 1, then the macrotile region is mapped to the tile in $T$ that corresponds to the special tile regardless of the tiles in the other regions. Similarly, region 3 takes precedence over region 2. Finally, if a special tile has not been placed in either region 1 or 3, then the output of the representation function depends on the tile placed in region 2. For a more detailed explanation of the representation function and regions 1-3 see Section~\ref{sec:poc_repr}. For a case analysis of how our construction handles all possible binding scenarios, see Section~\ref{sec:case_analysis}.
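The precedence among the three regions can be summarized with a short hedged sketch (the function and argument names are ours; each region holds the special tile placed there, if any, and \texttt{tile\_for} maps a special tile to the tile of $T$ it encodes):
\begin{verbatim}
def representation(region1, region2, region3, tile_for):
    """Precedence of the points of competition: region 1 overrides
    all, then region 3 overrides region 2; an unclaimed macrotile
    maps to the empty tile (None)."""
    if region1 is not None:
        return tile_for(region1)
    if region3 is not None:
        return tile_for(region3)
    if region2 is not None:
        return tile_for(region2)
    return None
\end{verbatim}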
The seed of our simulator is formed from a set of tiles in $S \cup D$ which have been hardcoded. Section~\ref{sec:seed_form} gives a more detailed explanation about the construction of the seed in the simulator.
\ifabstract
\later{
\section{Gadgets: Cooperators and Crossers} \label{sec:gadgets}
We now introduce two gadgets which give the probes mentioned above the functionality needed to imitate cooperatively placing a tile. Furthermore, these gadgets will allow us to modularize the construction in the sections that follow. The first gadget that we introduce is called the cooperator gadget. As its name suggests, its purpose is to mimic the cooperation found in $\tau = 2$ TASs. The second gadget we describe, which we call the crosser gadget, allows for probes to cross in between each other. For example, a crosser gadget enables the east and west probes to grow through the north and the south probes.
A key observation to make during the description of these gadgets is that they are designed such that all tiles that detach from the assembly are singletons that were originally part of a duple, unless otherwise specified. We construct the duples such that the internal duple glue is unique, and consequently nothing can bind to the portion of the duple that fell off of the assembly except its counterpart, which remains attached to the assembly. Indeed, observe that any duple presents at most one negative glue. This implies that the same half is always the one which detaches, and consequently there are no tiles which may bind to it. This means that all of the tiles that detach from the assembly are inert (i.e. unable to bind to any tile in solution). If tiles that fell off the assembly were not inert, then it could be possible to grow assemblies which would invalidate the simulation (since the definition of simulation requires that so-called junk assemblies must never grow into assemblies which map to something other than the empty tile under $R$). Thus, it is necessary that we be careful about what we allow to detach from the assembly.
Throughout this section, we use the following conventions for our figures. All duples are shown in darker colors (even after they are broken apart) and singletons are shown in lighter colors. Negative glues are represented by red squares protruding from tiles, and positive glues are represented by all other colored squares protruding from tiles. We represent glue mismatches (a glue mismatch occurs when two different glues are adjacent or a glue is adjacent to a tile side that does not have a glue) by showing the mismatching glues receded into the tiles from which they would normally protrude.
\subsection{Cooperators}
The cooperator gadget is the underlying mechanism that allows for the DrgTAM to simulate the cooperative placement of a tile in a $\tau\ge2$ TAS. As in the aTAM at $\tau=2$, cooperator gadgets allow the attachment of tiles in one subassembly to trigger growth in another subassembly. We consider two cases of cooperative tile placement: 1) the tiles that cooperatively contribute to the placement of a tile have adjacent corners (e.g. one is north of the location to be cooperatively tiled while the other is to the east or west), and 2) the tiles that cooperatively contribute to the placement of a tile are non-adjacent, that is, there is a tile wide gap between the two tiles. We create a cooperator gadget for each of these two cases. Not surprisingly, we call the cooperator gadget that mimics the former case the \emph{adjacent cooperator gadget} and the cooperator gadget that mimics the latter case the \emph{gap cooperator gadget}. Each of these two gadgets is asymmetric in nature and consists of two parts: 1) a finger and 2) a resistor. The function of the resistor is to cause a duple that is attached to the finger gadget to break apart and expose the internal glue of the duple, which can then be used for binding of another tile.
An adjacent cooperator gadget is shown in Figure~\ref{fig:gad_acoop}. Part (a) of this figure depicts the finger part of the gadget, and the subassembly labeled (b) is the resistor. Note that the only tiles which have the ability to bind to the exposed glues are duples with a negative glue that is aligned with the negative glue that is adjacent to the exposed glues. This means that neither subassembly can grow any further until its counterpart arrives. In Figure~\ref{fig:gad_acoop} parts (c) - (e) we see the assembly sequence showing the interaction between the two parts of the cooperator gadget. In this particular assembly sequence we have assumed that the resistor piece of the gadget has arrived first. In part (c), we see the arrival of a tile (presumably from a probe) which allows for the duple that is a part of the finger gadget to bind with total strength 1. The 0 strength cut that is induced by this binding event is shown by the red box in part (d) of the figure. Since the tile encapsulated in the red box is bound with total strength 0, it eventually detaches which leads us to part (e) of the figure. Notice that the dissociation event has caused a new glue to be exposed. This glue now allows for the binding of a duple as shown in part (e) of Figure~\ref{fig:gad_acoop}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/gad_acoop}
\caption{An assembly sequence of an adjacent cooperator gadget.}
\label{fig:gad_acoop}
\end{center}
\end{figure}
We now present an example which demonstrates an adjacent cooperator gadget. Suppose that $T$ contains the subassembly shown in Figure~\ref{fig:gad_coop_exT} (a), and the only tiles which may bind to the west glue of tile $A$ are shown in part (b) of the figure. Observe that since we are in a system of temperature 2, only tile $C$ may bind to this subassembly. Tile $D$ cannot bind because its binding strength to this subassembly is 1. Part (c) of Figure~\ref{fig:gad_coop_exT} shows the subassembly after tile $C$ binds, which is the only binding event that can occur at that location. Figure~\ref{fig:gad_coop_exS} shows the assembly sequence of the adjacent cooperator gadget which simulates the binding event that occurs in Figure~\ref{fig:gad_coop_exT}. Note that the parts of the cooperator gadget lie in the macrotile region that eventually contains a macrotile which maps to tile $C$ under the representation function. Part (a) of this figure shows the two tiles which allow the growth of a macrotile to begin which maps to either tile $C$ or $D$ in $T$. Parts (b) and (c) show the assembly sequence which leads us to the subassembly shown in part (d). The subassembly in part (d) ensures that the tile placed where the arrows are pointing must have both of its glues match the two glues exposed by the gadget. This ensures simulation of the binding of the tile labeled $C$.
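For concreteness, the temperature-$2$ arithmetic behind this example can be checked with a short sketch; the glue labels below are illustrative stand-ins of our own, not the glues of the actual tile set:
\begin{verbatim}
TAU = 2

def attach_strength(candidate_glues, site_glues):
    """Sum the strengths of the candidate's glues matched by the site.
    Glues are label -> strength maps; a glue matches on both label
    and strength."""
    return sum(s for g, s in candidate_glues.items()
               if site_glues.get(g) == s)

site = {'w': 1, 'n': 1}    # two strength-1 glues exposed at the site
tile_C = {'w': 1, 'n': 1}  # matches both: strength 2 >= TAU, C binds
tile_D = {'w': 1}          # matches one: strength 1 < TAU, D cannot

assert attach_strength(tile_C, site) >= TAU
assert attach_strength(tile_D, site) < TAU
\end{verbatim}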
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/gad_coop_exT}
\caption{An example subassembly sequence in $\calT$.}
\label{fig:gad_coop_exT}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/gad_coop_exS}
\caption{Using an adjacent cooperator gadget to mimic the cooperative tile placement shown in Figure~\ref{fig:gad_coop_exT}.}
\label{fig:gad_coop_exS}
\end{center}
\end{figure}
Figure~\ref{fig:gad_coop} shows the finger part of the gap cooperator in part (a) and the resistor portion in part (b). Notice that the end of the finger portion of the gap cooperator gadget has the same structure as the finger portion of the adjacent cooperator, and the two resistor parts of the gadgets are equivalent as well. The only difference between the two gadgets is that the finger portion of the gap cooperator gadget consists of an extra three tiles which precede the duple exposed to the resistor part of the gadget. In the next section, we will see that these extra tiles are necessary in order for the crosser gadget to be implemented. Parts (c)-(e) of Figure~\ref{fig:gad_coop} show the assembly sequence of a gap cooperator when its two pieces interact.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.5in]{images/gad_coop}
\caption{An assembly sequence of a gap cooperator gadget.}
\label{fig:gad_coop}
\end{center}
\end{figure}
\subsection{Crossers}
As we saw in the previous section, the only way for probes to mimic cooperation requires them to be connected by a tile wide path. In order for other probes to cross in between these connected probes we utilize what we call a crosser gadget. The assembly sequence for a crosser is shown in Figure~\ref{fig:gad_crosser}. Growth of the gadget begins in part (a) of the figure, which shows a singleton arriving at a gap cooperator as described above. Upon the arrival of this singleton, two duples may be placed with total binding strength one, as shown in part (b). Note that the attachment of these duples cannot occur before the singleton arrives since they would only have total binding strength zero. The attachment of these two duples induces a strength zero cut which contains the subassembly inside of the red box shown in part (c) of Figure~\ref{fig:gad_crosser}. Since this cut of the binding graph has total strength zero, the subassembly inside of the red box will eventually detach, which leads to the assembly shown in part (d) of the figure. Now, it is possible for the single tile wide path to continue its growth to the other side of the probes.
Unlike the other gadgets we explored, this gadget allows for subassemblies which consist of more than just one half of a duple to detach from an assembly. Figure~\ref{fig:gad_junk} shows all of the subassemblies which can detach due to the crosser gadget. Notice that the ``junk'' in part (a) of this figure is inert since the two exposed glues are unique internal duple glues. This is the only thing that can detach in the situation that we explored above, where the finger portion of the gap cooperator gadget is bound to the resistor gadget with total strength 1. But it could be the case that the finger gadget is in the process of growing or that the probe to which the resistor gadget is attached never arrives. This situation gives rise to the junk shown in parts (b)-(e). Observe that the ``junk'' in parts (b)-(d) can only grow into (e), which is also inert. Thus, anything that the crosser gadget causes to detach is inert. Note that (b) and (c) do not grow into (a) since the bottom tile of the subassembly in (a) is half of a duple which was broken apart by the crosser gadget. This duple half cannot attach to any assembly without its counterpart; thus only a full duple can attach to the subassemblies (b) and (c), which causes them to grow into (e).
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.0in]{images/gad_crosser}
\caption{An assembly sequence of a crosser gadget.}
\label{fig:gad_crosser}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.0in]{images/gad_junk}
\caption{The subassemblies that can detach due to the crosser gadget. }
\label{fig:gad_junk}
\end{center}
\end{figure}
\subsection{Interfacing Gadgets}
Now, with these gadgets in our toolbox, we can discuss how they will be utilized in our construction. First, notice that we can orient these gadgets however we see fit by rotating or flipping the tiles from which they are assembled. In addition, for our construction we will not use the gap cooperator gadget which we described above, but rather the extension of it shown in Figure~\ref{fig:gad_coop_complex}(a). This is necessary since, as we will see, we need paths of tiles to cross through the gap cooperators from either direction. Consequently, we must add another set of three tiles which will allow for crossers coming from either direction to cross through the gadget. Part (b) of this figure shows a cooperatively placed tile growing a crosser gadget in order to begin growing its probes. Observe that whenever this occurs, a probe trying to grow southward will be prevented from growing due to the path of tiles which were laid down by the cooperatively placed tile. But, this is not an issue since at this point the tile in $T$ which the macrotile simulates has already been decided. Thus, there is no need for any other probes in the macrotile region to grow or cooperate with each other.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.0in]{images/gad_coop_complex}
\caption{The extension of the gap cooperator which our construction will use.}
\label{fig:gad_coop_complex}
\end{center}
\end{figure}
\section{Probe Configurations}\label{sec:probes}
Probes can take on multiple configurations depending on the strength of the glue they are simulating and the probes already in the macrotile region when they arrive. All probes consist of a single tile wide path of tiles to which the gadgets described above are attached. There are two types of fundamentally different probes: probes that grow from the north and south of the macrotile and probes that grow from the east and west of the macrotile. As shown in Figure~\ref{fig:macro_labeled}, probes that grow from the east and west are single arm probes, while probes that grow from the north and south potentially require two arms. Figure~\ref{fig:macro_labeled} also shows all of the probes along with their corresponding gadgets, which are marked as colored, numbered regions. Gadgets labeled 1-3 in the figure represent gap cooperator gadgets which allow for cooperation between the probes to which they are attached. The gadgets labeled 5-9 denote adjacent cooperator gadgets which allow for the potential of cooperation between the probes to which they are attached. Finally, the gadgets labeled 10 and 11 are cooperator gadgets which allow for Probe W to trigger the growth of the second arms of Probe N and Probe S.
\section{Points of Competition and the Representation Function}\label{sec:poc_repr}
Before probes can place tiles which begin the growth of a particular macrotile, the probes must grow paths which claim a point of competition. Once a point of competition is claimed by a special tile, the representation function can then map the macrotile to a tile in $T$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/macro_labeled}
\caption{A schematic picture of the probes, points of cooperation, and points of competition of a macrotile.}
\label{fig:macro_labeled}
\end{center}
\end{figure}
Figure~\ref{fig:macro_labeled} gives a schematic picture of the paths that probes take as they assemble as well as the location of the points of cooperation and points of competition. To simulate growth of $\tau=2$ aTAM systems, we must handle two cases: (1) a tile binds via a strength-$2$ glue, (2) a tile binds via the cooperation of two strength-$1$ glues.
When simulating case (1), as a macrotile assembles, a probe representing a strength-$2$ glue claims a point of competition (labeled $2$ in Figure~\ref{fig:macro_labeled}) by placing a special tile in a designated location before any other probe can place a tile at the same location. Once this special tile is placed, subassemblies form by single tile additions starting from a glue exposed by the special tile. These subassemblies output glues on the relevant sides of the macrotile; we call such subassemblies \emph{glue outputting} subassemblies. A glue outputting subassembly may attempt to present glues on a side of the macrotile where a probe has started to assemble, in which case the glue outputting subassembly simply crashes into the (possibly partially formed) probe. In Figure~\ref{fig:macro_labeled}, none of the glue outputting subassemblies are shown; only the various probes are shown. See Section~\ref{sec:case_analysis} for a detailed analysis of each case of simulating strength-$2$ binding.
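For concreteness, the following Python sketch illustrates how a representation function of this kind can be evaluated: the designated point-of-competition locations of a macrotile are scanned in a fixed order, and the macrotile maps to the tile type in $T$ encoded by the first special tile found, or to the empty tile if no point of competition has yet been won. The data structures and names (e.g. \texttt{POC\_LOCATIONS}, \texttt{decode}) are hypothetical and serve only to illustrate the idea; they are not part of the construction itself.
\begin{verbatim}
# A minimal sketch of a POC-based representation function.
# All names and coordinates here are hypothetical illustrations.

# Relative coordinates (within an m x m macrotile) of the special tile
# locations at each point of competition, scanned in a fixed priority
# order; the "last resort" region (*) would be scanned last.
POC_LOCATIONS = [(10, 10), (20, 10), (20, 20), (25, 25)]

def represent(macrotile, decode):
    """Map an m-block macrotile (a dict from (x, y) to tile names) to a
    tile type of the simulated system T, or None for the empty tile.
    `decode` maps a special tile name to the tile type in T it encodes,
    and returns None for ordinary construction tiles."""
    for loc in POC_LOCATIONS:
        tile = macrotile.get(loc)
        if tile is not None:
            t = decode(tile)
            if t is not None:   # a POC has been won: identity is fixed
                return t
    return None                  # no POC won yet: maps to the empty tile

# Example: a macrotile whose POC at (20, 10) holds the special tile 'win_t3'
example = {(20, 10): 'win_t3', (0, 10): 'probe_W'}
print(represent(example,
                lambda name: name[4:] if name.startswith('win_') else None))
# -> 't3'
\end{verbatim}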
More interesting cases arise when simulating case (2). We give a high-level description of each case of cooperation here; see Section~\ref{sec:case_analysis} for complete details. First, for each glue of $\calT$, the corresponding probe assembles using glues unique to that probe. Denote the glue that Probe D represents by $g_D$, where $D$ is one of $N$, $S$, $E$, or $W$. We will see that special duples attached to probes can be placed to win points of competition (specially designated tile locations of a macrotile). In winning these locations, these duples determine which tile is being simulated.
To simulate the cooperation of glues $g_N$ and $g_S$, Probe N and Probe S can win the point of cooperation at the region with label $1$ in Figure~\ref{fig:macro_labeled}. If these two probes indeed cooperate, then appropriate glue outputting subassemblies form. Notice that Probe W may occupy tile locations in region $1$ before Probe N and Probe S have a chance to cooperate. So that this does not prevent the simulation of cooperation of glues $g_N$ and $g_S$, when Probe W crosses region $1$ (using a crosser gadget), it uses adjacent cooperator gadgets to allow secondary probes to form from Probe N and Probe S. Note that these particular adjacent gadgets which trigger the growth of the second arm of the probe are generic. That is, all west probes, regardless of which glue they are simulating, present the same cooperator gadgets to trigger the growth of the second arms of the south and north probes. These secondary probes can then cooperate at region $3$. If they do, a glue outputting subassembly forms to present glues on the east side of the macrotile. Thinking of regions $1$ and $3$ as points of competition, when Probe N and Probe S win a special tile location in either region, the representation function maps the macrotile to a tile type in $T$ based on the special tile placed in either region $1$ or $3$.
Similarly, to simulate cooperation of glues $g_E$ and $g_W$, Probe E and Probe W can cooperate at the region labeled $2$ in Figure~\ref{fig:macro_labeled}. At this point, glue outputting subassemblies attempt to present glues to the north and south sides of the macrotile.
Simulation of cooperation of glues $g_W$ and $g_S$ is equivalent up to reflection to simulation of cooperation of glues $g_W$ and $g_N$. We will describe the cooperation of Probe W and Probe S. For Probe W and Probe S to cooperate, Probe W must first cross region $1$ and trigger the growth of secondary probes for Probe S. Using one of these secondary probes, Probe W and Probe S may cooperate at region $7$. Once cooperation has occurred in this region, a path of tiles assembles toward region $2$. If this path of tiles places a tile in a specially designated tile location (a point of competition) of region $2$, appropriate glue outputting subassemblies may form.
Finally, simulation of cooperation of glues $g_E$ and $g_S$ is equivalent up to reflection to simulation of cooperation of glues $g_E$ and $g_N$. Therefore, we only describe the cooperation of Probe E and Probe S. Probe E and Probe S may cooperate at region $5$ or region $9$. If cooperation occurs at region $5$, a path of tiles binds one tile at a time until the point of competition in region $2$ is won, at which point appropriate glue outputting subassemblies may form. Notice that Probe W may have triggered the growth of secondary probes from Probe S. If this is the case, these secondary probes may prevent the formation of the path of tiles that would otherwise be able to claim the point of competition in region $2$. For this reason, Probe S and Probe E may also cooperate in region $9$, at which point a path of tiles forms, claims a point of competition in region $2$, and glue outputting subassemblies form.
\section{Case Analysis of Tile Placements in $\calT$}\label{sec:case_analysis}
We now look at how our simulator is able to simulate every possible way a tile could attach in $\calT$. To accomplish this, we need only examine $18$ informative cases; the remaining cases follow from the symmetry of our construction. When tiles bind in $\calT$, they may do so either by attaching with a strength-$2$ glue or by the cooperation of two strength-$1$ glues. When simulating $\calT$, macrotiles that form must take into account the fact that some input glues are not used, due to either mismatching or overbinding (i.e. binding with strength greater than $\tau$). The supersides on which such input glues appear are called \emph{non-contributing input supersides}: input supersides that are not used to simulate tile binding. Mismatching supersides are one such example. We will describe how tile binding in $\calT$ is simulated using macrotiles and make special mention of the cases where there are non-contributing input supersides. Finally, for the remainder of this section, we denote the glue that Probe D represents by $g_D$, where $D$ is one of $N$, $S$, $E$, or $W$, and in the figures for the various cases, we denote the points of competition and points of cooperation in a region labeled $k$ by POC$k$, where $k\in \mathbb{N}$; whether POC$k$ is a point of competition or a point of cooperation will be clear from the context.
\subsection{One-sided binding}\label{sec:onesided_binding}
One-sided binding occurs in $\calT$ when a tile binds using a strength-$2$ glue. For example, Figure~\ref{fig:onesided} depicts the attachment of a tile due to the binding of a strength-$2$ east glue of the attaching tile.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.5in]{images/onesided}
\caption{Binding via a strength-$2$ glue with no non-contributing input supersides. The tile on the left binds to an assembly using a strength-$2$ east glue.}
\label{fig:onesided}
\end{center}
\end{figure}
To simulate this type of binding, when a strength-$2$ probe grows into an otherwise empty macrotile region (see Figure~\ref{fig:onesided_supertile} for an example of a probe grown from the east), it grows a path of tiles toward the point of competition labeled $2$ in Figure~\ref{fig:onesided_supertile}. If this probe wins this point of competition, it places a tile that determines which glues to output on the south, west, and north sides of the macrotile and grows these output glues toward their respective sides. Figure~\ref{fig:onesided_supertile} shows this growth.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.5in]{images/onesided_supertile}
\caption{Growth of a macrotile that simulates the binding in Figure~\ref{fig:onesided}.}
\label{fig:onesided_supertile}
\end{center}
\end{figure}
For strength-$2$ glues, there are $3$ other cases to consider that are all equivalent to the case in Figure~\ref{fig:onesided_supertile} up to rotation.
\subsection*{One-sided binding with non-contributing input sides}
Now we consider cases of one-sided binding with one or two non-contributing input sides. We consider the three cases of tile binding in $\calT$ depicted in Figure~\ref{fig:onesided_noncontributing} as the rest of the cases are similar to these cases. In each case of Figure~\ref{fig:onesided_noncontributing}, a strength-$2$ glue allows for a tile to bind while either a mismatch or overbinding occurs with the other glues.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/onesided_noncontributing}
\caption{Binding via a strength-$2$ glue with non-contributing input supersides.}
\label{fig:onesided_noncontributing}
\end{center}
\end{figure}
In the simulation of $\calT$, special care must be taken to ensure that the probes corresponding to glue mismatching or glue overbinding do not interfere with the growth of a macrotile that is simulating a strength-$2$ tile attachment. For example, when simulating the type of tile attachment shown in Figure~\ref{fig:onesided_noncontributing} part (a), probes enter the macrotile region from the east and west. If the probe from the south wins the point of competition labeled $2$ in Figure~\ref{fig:macro_labeled}, then the east and west probes should not prevent the output of a glue to the north side of the macrotile.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=5.0in]{images/onesided_noncontributing_supertile}
\caption{Growth of a macrotile that simulates the binding in Figure~\ref{fig:onesided_noncontributing}.}
\label{fig:onesided_noncontributing_supertile}
\end{center}
\end{figure}
In case (a) of Figure~\ref{fig:onesided_noncontributing_supertile}, when Probe S wins POC2, a tile is placed that determines the output glues to be grown to the east, north, and west sides of the macrotile. Since probes have begun growth from the east and/or west (labeled Probe E and Probe W), growth of subassemblies that present glues to the east and west sides of the macrotile will be interrupted by the growth of Probe E and Probe W. If Probe E and Probe W fully form but do not cooperate, they meet at a gap cooperator gadget; therefore, using a crosser gadget, Probe S can still cross these probes. A glue outputting subassembly that presents a glue to the north is allowed to assemble since the assembly sequence of this subassembly can be hardcoded to avoid the Probe E and Probe W subassemblies. Similarly, in cases (b), (c), and (d) of Figure~\ref{fig:onesided_noncontributing_supertile}, once the point of competition labeled $2$ is won by a strength-$2$ probe (Probe E in case (b) and Probe W in cases (c) and (d)), subassemblies form that output glues to the appropriate sides of the macrotile. Any probes forming from non-contributing input sides prevent the output of glues on those sides.
\subsection{Two-sided binding}
Now we consider the cases where a tile of $\calT$ binds to two tiles via the cooperation of two strength-$1$ glues. We will first consider the cases in Figure~\ref{fig:twosided}. The four cases in Figure~\ref{fig:twosided} cover all cooperative binding cases where there is no non-contributing side that forms. Technically there are two more cases where a north glue cooperates with a west glue (or east glue) to place a tile; however, these cases are equivalent to cases (c) and (d) in Figure~\ref{fig:twosided} since in these cases, the formation of a macrotile that represents a tile in $\calT$ is symmetric about the horizontal line through the center of the macrotile.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/twosided}
\caption{Binding via cooperation of two strength-$1$ glues.}
\label{fig:twosided}
\end{center}
\end{figure}
Figure~\ref{fig:twosided_supertile} gives a schematic image for the simulation of the four cases of Figure~\ref{fig:twosided}. In each case, as the macrotile forms, the strategy is essentially the same. When two probes meet at a \emph{point of cooperation}, a cooperator gadget is used to mimic the cooperation that occurs in aTAM systems. There are two types of cooperator gadgets; Section~\ref{sec:gadgets} gives a detailed description of how each cooperator gadget works.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/twosided_supertile}
\caption{Growth of a macrotile that simulates the binding in Figure~\ref{fig:twosided}.}
\label{fig:twosided_supertile}
\end{center}
\end{figure}
\noindent{\bf Case (a):}
In this case, Probe N and Probe S meet at POC1 in Figure~\ref{fig:twosided_supertile}(a). A \emph{gap cooperator} gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with north glue $g_N$ and south glue $g_S$. There is a unique duple for each such tile type in $T$, and the binding of one of these duples allows for the growth of subassemblies that output glues corresponding to the east and west glues of that tile type in $T$.
\newline
\noindent{\bf Case (b):}
Probe E and Probe W simulate cooperative binding when they meet at POC2 in Figure~\ref{fig:twosided_supertile}(b). A gap cooperator gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with east glue $g_E$ and west glue $g_W$. As in case (a), this duple determines which glue outputting subassemblies form to present glues on the north and south sides of the macrotile.
\newline
\noindent{\bf Case (c):}
Probe S and Probe W simulate cooperative binding as follows. First, Probe W wins POC1. After it wins this point, it grows a subassembly to the north and south of POC1 and to the east of where Probe S assembles. This subassembly uses an adjacent cooperator gadget to trigger Probe S to assemble secondary probes. One of these probes can cooperate with Probe W at POC7. An \emph{adjacent cooperator} gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with west glue $g_W$ and south glue $g_S$. Again, there is a unique duple for each such tile type in $T$, and the binding of one of these duples allows for the growth of subassemblies that output glues corresponding to the east and north glues of that tile type in $T$. It is at POC7 that a duple is placed that determines which east and north glues to present. Once this duple is placed, growth continues toward POC2. Upon winning POC2, glue outputting subassemblies form that present glues on the east and north sides of the macrotile.
\newline
\noindent{\bf Case (d):}
Probe E and Probe S simulate cooperative binding when they meet at POC5 in Figure~\ref{fig:twosided_supertile}(d). An adjacent cooperator gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with east glue $g_E$ and south glue $g_S$. This duple determines which glue outputting subassemblies form to present glues on the north and west sides of the macrotile. Once this duple is placed, tiles attach and race toward POC2. Upon winning this point of competition, the glue outputting subassemblies form.
\subsection*{Two-sided binding with non-contributing input sides}
Here we present eight different cases of two-sided binding with a non-contributing input side. In these cases, three probes grow within a macrotile, and we must take special care to ensure that the probes are coordinated enough to allow for the simulation of cooperative binding. The eight cases under consideration are given in Figure~\ref{fig:twosided_noncontributing}. In each case we assume that each glue is strength-$1$ and that two of these glues permit cooperative binding while the other glue mismatches or overbinds, whichever the case may be. In general, there are $13$ cases of two-sided binding with a non-contributing input side. Five of the eight cases presented here -- (b), (c), (d), (f), and (g) -- are equivalent up to reflection to the five cases not presented.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.0in]{images/twosided_noncontributing}
\caption{Binding via cooperation of two strength-$1$ glues with non-contributing input supersides.}
\label{fig:twosided_noncontributing}
\end{center}
\end{figure}
Figure~\ref{fig:twosided_noncontributing_supertile} gives a schematic image for the simulation of the eight cases of Figure~\ref{fig:twosided_noncontributing}. In each case, two probes meet at a point of cooperation and one of the two cooperator gadgets is used to mimic the cooperation that occurs in aTAM systems. To coordinate these probes, we also use the \emph{crosser} gadget. See Section~\ref{sec:gadgets} for a detailed description of how each of these gadgets works.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.5in]{images/twosided_noncontributing_supertile}
\caption{Growth of a macrotile that simulates the binding in Figure~\ref{fig:twosided_noncontributing}.}
\label{fig:twosided_noncontributing_supertile}
\end{center}
\end{figure}
\noindent{\bf Case (a):}
Probe N and Probe S meet at POC1 in Figure~\ref{fig:twosided_noncontributing_supertile}(a). A gap cooperator gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with north glue $g_N$ and south glue $g_S$. There is a unique duple for each such tile type in $T$, and the binding of one of these duples allows for the growth of subassemblies that output glues corresponding to the east and west glues of that tile type in $T$. Notice that since Probe E has started to assemble, the glue outputting subassembly that presents glues on the east side of the macrotile is halted when this subassembly meets the Probe E subassembly.
\newline
\noindent{\bf Case (b):}
Probe E and Probe W simulate cooperative binding when they meet at POC2 in Figure~\ref{fig:twosided_noncontributing_supertile}(b). A gap cooperator gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with east glue $g_E$ and west glue $g_W$. As in case (a), this duple determines which glue outputting subassemblies form to present glues on the north and south sides of the macrotile. Notice that Probe E and Probe W occupy POC2 and so have automatically won the point of competition at POC2. Therefore, even if Probe S can cooperate with one of the other probes, Probe E and Probe W determine the output glues of the macrotile. Finally, from POC2, subassemblies form to output appropriate glues. The subassembly outputting the south glue of the macrotile will crash into the (at least partially existing) subassembly Probe S.
\newline
\noindent{\bf Case (c):}
Probe S and Probe W simulate cooperative binding as follows. First, Probe W wins POC1 in Figure~\ref{fig:twosided_noncontributing_supertile}(c). After it wins this point, it grows a subassembly to the north and south of POC1 and to the east of where Probe S assembles. This subassembly uses an adjacent cooperator gadget to allow Probe S to assemble secondary probes. One of these probes can cooperate with Probe W at POC7.
An adjacent cooperator gadget is used to allow for the placement of a duple if and only if there exists a tile type in $T$ with west glue $g_W$ and south glue $g_S$. Again, there is a unique duple for each such tile type in $T$, and the binding of one of these duples allows for the growth of subassemblies that output glues corresponding to the east and north glues of that tile type in $T$. It is at POC7 that a duple is placed that determines which east and north glues to present. Once this duple is placed, growth continues toward POC2. Upon winning POC2, glue outputting subassemblies form that present glues on the east and north sides of the macrotile, and the subassembly presenting the east glue crashes into the subassembly Probe E.
\newline
\noindent{\bf Case (d):}
In this case two assembly sequences can lead to the simulation of this binding type. First, Probe E and Probe S can cooperate at POC5 in Figure~\ref{fig:twosided_noncontributing_supertile}(d), assemble toward POC2, and win POC2. This case is similar to case (d) in Figure~\ref{fig:twosided_supertile}; however, here the subassembly presenting glues on the west side of the macrotile is halted by Probe W, and only a glue on the north side of the macrotile is presented. Note that it could be the case that secondary probes of Probe S block the assembly that grows from POC5 to POC2. In this case, Probe E and Probe S should still be able to simulate cooperation. They achieve this by cooperating at POC9. At this point, once POC2 is won, glue outputting subassemblies form. This is the case presented in Figure~\ref{fig:twosided_noncontributing_supertile}(d).
\newline
\noindent{\bf Case (e):}
In this case, as in case (d), two assembly sequences can lead to the simulation of this binding type. First, Probe N and Probe S can cooperate at POC1 in Figure~\ref{fig:twosided_noncontributing_supertile}(e), race toward POC2, and win POC2. This case is similar to case (a) in Figure~\ref{fig:twosided_supertile}; however, here the subassembly presenting glues on the west side of the macrotile is halted by Probe W, and only a glue on the east side of the macrotile is presented. In the case that Probe W wins POC1, secondary probes form -- one set of secondary probes forms from Probe S and another from Probe N. In this case, Probe N and Probe S should still be able to simulate cooperation. They achieve this by cooperating at POC3. At this point, a glue outputting subassembly forms to present a glue on the east side of the macrotile, since it is known that all other sides have grown input probes. This is the case presented in Figure~\ref{fig:twosided_noncontributing_supertile}(e).
\newline
\noindent{\bf Case (f):}
In this case, first Probe W wins POC1 by assembling a crosser gadget to grow between Probe N and Probe S. In the case where Probe N and Probe S have formed a path of adjacent tiles from the north side of the macrotile to the south side, the crosser gadget detaches a section of this path so that Probe W can assemble. Then Probe W triggers the growth of secondary probes on both Probe N and Probe S. At POC7, an adjacent cooperator gadget assembled from Probe W and Probe S allows for the placement of a duple that determines which glues are output to the north and east sides of the macrotile. Assembly proceeds from POC7 to POC2. Upon winning POC2, an appropriate glue outputting subassembly forms that presents glues to the east side of the macrotile.
\newline
\noindent{\bf Case (g):}
This case is similar to case (d) in Figure~\ref{fig:twosided_supertile} except that the subassembly that presents glues on the north side of the macrotile in case (d) of Figure~\ref{fig:twosided_supertile} crashes into the (at least partially) existing subassembly Probe N.
\newline
\noindent{\bf Case (h):}
Up to this point, for simplicity, we have neglected the special point of competition which we examine in the case of two non-contributing sides. In this case, growth of the macrotile is similar to the case of one non-contributing side, except when $g_N$ and $g_S$ can cooperatively place a tile but no other glues can. In this particular case, the probes representing glues $g_E$ and $g_W$ may arrive before either Probe N or Probe S, which prevents Probe N and Probe S from cooperatively placing a tile. In order to handle this peculiar case, we enumerate all of the tiles which $g_N$ and $g_S$ can cooperatively place (there are at most $|T|$ of them) by a function $F$. In the region labeled $*$ in Figure~\ref{fig:twosided_noncontributing_supertile}(h), we always nondeterministically place a tile from a set $E \subset S$ which consists of tiles labeled $1$ through $|T|$. Let $n$ be the number of tiles which $g_N$ and $g_S$ can cooperatively place, and suppose $r$ is the value contained in the label of the tile placed in the $*$ region. The representation function maps the macrotile to the tile in $T$ given by $F(r \bmod n)$. Since, in this case, the macrotile being assembled is surrounded on all four of its sides, there is no need for any output subassemblies to be placed. Furthermore, it should be noted that this region is a ``last resort'' for the representation function: if there is an appropriate tile placed at any of the other POC regions, the output of the representation function depends on that tile.
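To make the arithmetic of this last-resort selection concrete, the following Python sketch shows the mapping $F(r \bmod n)$; the names are hypothetical, and the enumeration $F$ is represented as a simple list.
\begin{verbatim}
# A sketch of the "last resort" selection of case (h);
# all names are hypothetical and serve only as illustration.

def last_resort_tile(F, r):
    """F enumerates the tile types that g_N and g_S can cooperatively
    place; r is the label (1 through |T|) of the tile placed
    nondeterministically in the * region. The macrotile maps to
    F(r mod n), where n = |F|."""
    n = len(F)
    return F[r % n]

# With n = 3 cooperatively placeable tiles and |T| = 5 possible labels,
# every tile in F is selected by some label, so each outcome of the
# simulated nondeterministic attachment remains reachable.
F = ['t0', 't1', 't2']
print([last_resort_tile(F, r) for r in range(1, 6)])
# -> ['t1', 't2', 't0', 't1', 't2']
\end{verbatim}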
Figure~\ref{fig:macro_detailed} shows an assembled macrotile that simulates the binding which takes place in Figure~\ref{fig:twosided_noncontributing}(d) in the manner shown in Figure~\ref{fig:twosided_noncontributing_supertile}(d). The blue tiles are part of the subassembly which composes Probe W, the green tiles compose Probe S, and the pink tiles make up Probe E. All of these probes enter the macrotile region in the direction and location indicated by the arrows. The yellow tiles in the figure show tiles that are cooperatively placed by Probe S and Probe E. Notice that the growth of the yellow tile placed near the bottom of the figure has been blocked by arm 2 of Probe S, but the second yellow tile in the figure is placed and able to grow a path to POC2. The dark red tile placed in POC2 starts the growth of an outputting subassembly which grows a new probe into the region north of the macrotile.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.0in]{images/macro_detailed}
\caption{A detailed macrotile simulating the binding shown in Figure~\ref{fig:twosided_noncontributing}(d).}
\label{fig:macro_detailed}
\end{center}
\end{figure}
\section{Seed Formation}\label{sec:seed_form}
In order to complete our description of the simulation of $\calT$, it is necessary to describe the construction of $\sigma'$. For each tile $t$ in $\sigma$, we create a special output macrotile. An example $\sigma$ is shown in Figure~\ref{fig:seed_form_exT}, and the corresponding $\sigma'$ is shown in Figure~\ref{fig:seed_form_exS}. For these macrotiles, the tile upon which the representation function depends is in the center of the macrotile. Assembly begins with the macrotiles growing probes for each exposed glue. If a tile has a side which does not have a glue, then a probe is not grown on that side. Growth of the assembly then proceeds as described in Section~\ref{sec:poc_repr}.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=1.0in]{images/seed_form_exT}
\caption{An example seed in $\calT$.}
\label{fig:seed_form_exT}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.5in]{images/seed_form_exS}
\caption{The corresponding seed in $\mathcal{S}$ for the seed shown in Figure~\ref{fig:seed_form_exT}. The arrows indicate in which directions the probes will grow when assembly begins. The black squares represent macrotile regions.}
\label{fig:seed_form_exS}
\end{center}
\end{figure}
\section{Proof of Correctness}\label{sec:proof_of_correctness}
\begin{proof}
Let $\calT = (T, \sigma, 2)$ be an aTAM system and let $\mathcal{S} = (T_S, S, D, \sigma_S)$ be the DrgTAS system obtained from $\calT$ by the construction given in Section~\ref{sec:DrgTAM_sim}.
To show that $\mathcal{S}$ gives a valid simulation of $\calT$, we will use the representation function $R$ described in Section~\ref{sec:poc_repr} and denote the scale factor of this simulation by $m$. We will show the following. 1. Under $R$, $\mathcal{S}$ and $\calT$ have equivalent productions. This essentially follows from the construction. 2. Under $R$, $\mathcal{S}$ and $\calT$ have equivalent dynamics. To show this, we must show that when detachment occurs in $\mathcal{S}$ due to a cut of strength less than $1$, the two assemblies produced are such that one of them still maps correctly to a represented assembly in $\calT$, while the other maps to the empty tile. We must also show that any assembly sequence starting from the latter assembly yields an assembly that maps to the empty tile under $R$.
To show that $\mathcal{S}$ and $\calT$ have equivalent productions under $R$, we will first show that $R^*$ maps cleanly. To see this, note that the probes described in the construction (Section~\ref{sec:poc_repr}) can only be grown from adjacent macrotiles on sides where they placed an outputting subassembly. The probes grown in macrotile regions are never interpreted under $R$ as a tile in $T$ until a POC region is won (using cooperation if necessary). It follows from the construction that macrotile regions which map to the empty tile under $R$ will not grow any outputting subassemblies which can initiate probes in adjacent macrotile locations until a POC region is won and the macrotile first maps to a tile in $T$. Therefore, $R^*$ maps cleanly.
Now, to see that $\left\{R^*(\alpha') | \alpha' \in \prodasm{S}\right\} = \prodasm{T}$, let $\alpha'$ be in $\prodasm{S}$. Then by the construction, any $m$-block macrotile, $B$, in $\alpha'$ maps to the empty tile or to some tile type in $T$, and only maps to a tile in $T$ if $B$ is part of the seed of $\mathcal{S}$, or adjacent $m$-block macrotiles of $B$ in $\alpha'$ expose glues that allow for the growth of $B$. In the latter case, the construction shows that $B$ can only map to a tile type whose glues match the glues represented on the sides of adjacent $m$-block macrotiles. Since this holds for any $m$-block macrotile in $\alpha'$, $R^*(\alpha')$ is in $\prodasm{T}$. This shows that $\left\{R^*(\alpha') | \alpha' \in \prodasm{S}\right\} \subseteq \prodasm{T}$. Then, for $\alpha\in \prodasm{T}$ and an assembly sequence $\vec{\alpha}$ resulting in $\alpha$, the construction also shows that we can grow $m$-block macrotiles following the assembly sequence $\vec{\alpha}$ to obtain an $\alpha'$ in $\prodasm{S}$ such that $R^*(\alpha') = \alpha$. Therefore, we also have $\left\{R^*(\alpha') | \alpha' \in \prodasm{S}\right\} \supseteq \prodasm{T}$. Thus, $\left\{R^*(\alpha') | \alpha' \in \prodasm{S}\right\} = \prodasm{T}$.
To show that $\mathcal{S}$ and $\calT$ have equivalent dynamics, first note that the construction implies
that $\alpha' \rightarrow_{+}^\mathcal{S} \beta'$ if and only if $R^*(\alpha') \rightarrow R^*(\beta')$. To see this note that when a single tile (or duple) is added to $\alpha'$, if the tile (or duple) does not win a point of competition, then $R^*(\alpha') = R^*(\beta')$. On the other hand, if the tile (or duple) does win a point of competition, the macrotile is determined once and for all. Moreover, an assembly in a macrotile region cannot map to a tile type $t$ under $R$ unless some adjacent macrotile region (or regions in the case of simulation of cooperation) maps to a tile type with a glue (or glues) that allows for the placement of a tile with type $t$. Therefore, $R^*(\alpha') \rightarrow R^*(\beta')$.
What is left to show is that for $\alpha'$ in $\prodasm{S}$ such that $R^*(\alpha') = \alpha$, when a cut with strength less than $1$ exists in $\alpha'$, the two assemblies that lie on each side of the cut are such that one of the assemblies, $\beta'_1$ say, still represents $\alpha$, while the other assembly, $\beta'_2$, represents the empty tile. Moreover, we must show that the result of any assembly sequence starting from $\beta'_2$ must also represent the empty tile. To see this, note that in the only cases where there exists a cut of strength less than $1$ in any of the gadgets given in Section~\ref{sec:gadgets}, the cut separates the assembly into two configurations where one of the configurations is given as one of the assemblies in Figure~\ref{fig:gad_junk}. One can check that the configurations given in Figure~\ref{fig:gad_junk} quickly become terminal and represent the empty tile. The other configuration is an assembly that still represents $\alpha$ since there is never a cut of strength less than $1$ separating points of cooperation or points of competition from an assembly.
To see that the scale factor is $O(1)$, note that the lengths of the paths and sizes of gadgets which make up macrotiles are all fixed, independent of $\calT$. To see that the tile complexity $|S \cup D|$ is $O(|T|)$, notice that for each tile $t \in T$ there are a bounded number of ways to bind, which means that the number of different tile types that simulate the binding of $t$ is bounded. Furthermore, observe that for each of these potential ways of binding, the number of tiles required to assemble the macrotile which maps to $t$ under the representation function is constant, given the constant scale factor of macrotiles. Consequently, $|S \cup D| = O(|T|)$.
\end{proof}
}
\vspace{-5pt}
\begin{corollary}\label{cor:IU}
There exists a DrgTAM tile set $U$ which, at temperature-$1$, is intrinsically universal for the aTAM. Furthermore, the sets of singletons and duples, $S$ and $D$, created from $U$ are constant across all simulations.
\end{corollary}
\vspace{-5pt}
As mentioned above, this result follows from \cite{IUSA}. See Section~\ref{sec:IU_proof} for more details.
\ifabstract
\later{
\section{Proof of Corollary~\ref{cor:IU}} \label{sec:IU_proof}
\begin{proof}
To prove Corollary~\ref{cor:IU}, we let $\calT = (T,\sigma,\tau)$ be an arbitrary TAS in the aTAM. Let $U_{\calT} = (U,\sigma_{\calT},2)$ be an aTAM TAS which simulates $\calT$ using the tile set $U$, given in \cite{IUSA}, which is intrinsically universal for the aTAM. Now let $\mathcal{D} = (T_U, S, D, \sigma_{\calT}', 1)$ be a DrgTAS, constructed as given by the proof of Theorem~\ref{thm:DrgTAS-sim}, which simulates $U_{\calT}$. We now note that, regardless of $\calT$, the same tile set $U$ is used to simulate it in the aTAM, and that $T_U$, $S$, and $D$ depend only upon the tile set being simulated by $\mathcal{D}$ (i.e. the only thing that changes in $\mathcal{D}$ as $\calT$ changes is the seed $\sigma_{\calT}'$). Therefore, a single tile set $T_U$ suffices to simulate any arbitrary aTAM TAS, and thus $T_U$ is intrinsically universal for the aTAM.
\end{proof}
}
\subsection{Informal Definitions of Simulation}
\label{sec:simulation_def_informal}
\vspace{-5pt}
In this section, we present a high-level sketch of what we mean when saying that one system \emph{simulates} another. Please see Section~\ref{sec:simulation_def_formal} for complete, technical definitions, which are based on those of \cite{IUNeedsCoop}.
For one system $\mathcal{S}$ to simulate another system $\mathcal{T}$, we allow $\mathcal{S}$ to use square (or rectangular, when simulating duples) blocks of tiles called \emph{macrotiles} to represent the simulated tiles from $\mathcal{T}$. The simulator must provide a scaling factor $c$ which specifies how large each macrotile is, and it must provide a \emph{representation function}, which is a function mapping each macrotile assembled in $\mathcal{S}$ to a tile in $\mathcal{T}$. Since a macrotile may have to grow to some critical size (e.g. when gathering information from adjacent macrotiles about the simulated glues adjacent to its location) before being able to compute its identity (i.e. which tile from $\mathcal{T}$ it represents), it is possible for non-empty macrotile locations in $\mathcal{S}$ to map to empty locations in $\mathcal{T}$, and we call such growth \emph{fuzz}. We follow the standard simulation definitions (see \cite{IUNeedsCoop,2HAMIU,Signals3D,IUSA}), and restrict fuzz to being horizontally or vertically adjacent to macrotile positions in $\mathcal{S}$ which map to non-empty tiles in $\mathcal{T}$.
Given the notion of block representations, we say that $\mathcal{S}$ simulates $\mathcal{T}$ if and only if (1) for every producible assembly in $\mathcal{T}$, there is an equivalent producible assembly in $\mathcal{S}$ when the representation function is applied, and vice versa (thus we say the systems have \emph{equivalent productions}), and (2) for every assembly sequence in $\mathcal{T}$, an equivalent assembly sequence can be followed in $\mathcal{S}$ (modulo the application of the representation function), and vice versa (thus we say the systems have \emph{equivalent dynamics}). Thus, equivalent productions and equivalent dynamics yield a valid simulation.
\newcommand{\mathsf{REPR}}{\mathsf{REPR}}
\newcommand{\mathfrak{C}}{\mathfrak{C}}
We say that a tile set $U$ is \emph{intrinsically universal} for a class $\mathfrak{C}$ of tile assembly systems if, for every system $\calT \in \mathfrak{C}$, a system $\mathcal{U}_{\mathcal{T}}$ can be created for which: 1. $U$ is the tile set, 2. there is some initial seed assembly consisting of tiles in $U$ which is constructed to encode information about the system $\calT$ being simulated, 3. there exists a representation function $R$ which maps macrotiles in the simulator $\mathcal{U}_{\mathcal{T}}$ to tiles in the simulated system, and 4. under $R$, $\mathcal{U}_{\mathcal{T}}$ has equivalent productions and equivalent dynamics to $\calT$. Essentially, there is one tile set which can be used to simulate any system in the class, using only custom configured input seed assemblies. For formal definitions of intrinsic universality in tile assembly, see \cite{IUSA,IUNeedsCoop,Signals3D}.
\ifabstract
\later{
\section{Formal Definitions of Simulation}
\label{sec:simulation_def_formal}
In this section we formally define what it means for an rgTAS to simulate a TAS and what it means for a DrgTAS to simulate a TAS.
From this point on, let $T$ be a tile set, and let $m\in\Z^+$.
An \emph{$m$-block supertile} or \emph{macrotile} over $T$ is a partial function $\alpha : \Z_m^2 \dashrightarrow T$, where $\Z_m = \{0,1,\ldots,m-1\}$.
Let $B^T_m$ be the set of all $m$-block supertiles over $T$.
The $m$-block with no domain is said to be $\emph{empty}$.
For a general assembly $\alpha:\Z^2 \dashrightarrow T$ and $(x_0, x_1)\in\Z^2$, define $\alpha^m_{x_0,x_1}$ to be the $m$-block supertile defined by $\alpha^m_{x_0, x_1}(i_0, i_1) = \alpha(mx_0+i_0, mx_1+i_1)$ for $0 \leq i_0,i_1< m$.
For some tile set $S$, a partial function $R: B^{S}_m \dashrightarrow T$ is said to be a \emph{valid $m$-block supertile representation} from $S$ to $T$ if for any $\alpha,\beta \in B^{S}_m$ such that $\alpha \sqsubseteq \beta$ and $\alpha \in \dom R$, then $R(\alpha) = R(\beta)$.
For a given valid $m$-block supertile representation function $R$ from tile set~$S$ to tile set $T$, define the \emph{assembly representation function}\footnote{Note that $R^*$ is a total function since every assembly of $S$ represents \emph{some} assembly of~$T$; the functions $R$ and $\alpha$ are partial to allow undefined points to represent empty space.} $R^*: \mathcal{A}^{S} \rightarrow \mathcal{A}^T$ such that $R^*(\alpha') = \alpha$ if and only if $\alpha(x_0, x_1) = R\left(\alpha'^m_{x_0,x_1}\right)$ for all $(x_0,x_1) \in \Z^2$.
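As an illustration, the definitions of $\alpha^m_{x_0,x_1}$ and $R^*$ translate directly into the following Python sketch, in which assemblies are (hypothetically) encoded as dictionaries from points of $\Z^2$ to tile types; the sketch is illustrative only and is not part of the formal development.
\begin{verbatim}
# A sketch of m-block extraction and the induced assembly
# representation function R*; assemblies are dicts from points of Z^2
# to tile types, a hypothetical encoding chosen only for illustration.

def block(alpha, m, x0, x1):
    """The m-block supertile alpha^m_{x0,x1}: alpha restricted to the
    m x m region with corner (m*x0, m*x1), translated to Z_m^2."""
    return {(i0, i1): alpha[(m * x0 + i0, m * x1 + i1)]
            for i0 in range(m) for i1 in range(m)
            if (m * x0 + i0, m * x1 + i1) in alpha}

def R_star(R, alpha, m):
    """Apply a block representation function R to every m-block of
    alpha; R returns a tile type of T, or None for the empty tile."""
    coords = {(p[0] // m, p[1] // m) for p in alpha}
    result = {}
    for (x0, x1) in coords:
        t = R(block(alpha, m, x0, x1))
        if t is not None:
            result[(x0, x1)] = t
    return result
\end{verbatim}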
For an assembly $\alpha' \in \mathcal{A}^{S}$ such that $R^*(\alpha') = \alpha$, $\alpha'$ is said to map \emph{cleanly} to $\alpha \in \mathcal{A}^T$ under $R^*$ if for all non-empty blocks $\alpha'^m_{x_0,x_1}$, $(x_0,x_1)+(u_0,u_1) \in \dom \alpha$ for some $(u_0,u_1) \in \{-1,0,1\}^2$ such that $u_0^2 + u_1^2 \leq 1$, or if $\alpha'$ has at most one non-empty $m$-block~$\alpha'^m_{0, 0}$. In other words, $\alpha'$ may have tiles on supertile blocks representing empty space in $\alpha$, but only if that position is adjacent to a tile in $\alpha$. We call such growth ``around the edges'' of $\alpha'$ \emph{fuzz} and thus restrict it to be adjacent to only valid supertiles, but not diagonally adjacent (i.e.\ we do not permit \emph{diagonal fuzz}).
\subsection{rgTAS simulation of a TAS}\label{sec:rgTASsimTAS-formal}
To state our main results, we must formally define what it means for an rgTAS to ``simulate'' a TAS. Our definitions are similar to the definitions of simulation of a TAS by a TAS given in \cite{IUNeedsCoop}.
In the following definitions, let $\mathcal{T} = \left(T,\sigma_T,\tau_T\right)$ be a TAS, let $\mathcal{S} = \left(S,\sigma_S,\tau_S\right)$ be a TAS, and let $R$ be an $m$-block representation function $R:B^S_m \rightarrow T$.
\begin{definition}
\label{def-equiv-prod} We say that $\mathcal{S}$ and $\mathcal{T}$ have \emph{equivalent productions} (under $R$), and we write $\mathcal{S} \Leftrightarrow_R \mathcal{T}$ if the following conditions hold:
\begin{enumerate}
\item $\left\{R^*(\alpha') | \alpha' \in \prodasm{\mathcal{S}}\right\} = \prodasm{\mathcal{T}}$.
\item For all $\alpha'\in \prodasm{\mathcal{S}}$, $\alpha'$ maps cleanly to $R^*(\alpha')$.
\end{enumerate}
\end{definition}
\begin{definition}
\label{def-t-follows-s} We say that $\mathcal{T}$ \emph{follows} $\mathcal{S}$ (under $R$), and we write $\mathcal{T} \dashv_R \mathcal{S}$ if
(1) $\alpha' \rightarrow_{+}^\mathcal{S} \beta'$, for some $\alpha',\beta' \in \prodasm{\mathcal{S}}$, implies that $R^*(\alpha') \to^\mathcal{T} R^*(\beta')$, and
(2) $\alpha' \rightarrow_{-}^\mathcal{S} (\beta'_1, \beta'_2)$ for some $\alpha',\beta'_1,\beta'_2 \in \prodasm{\mathcal{S}}$, implies that either of the following holds.
\begin{enumerate}
\item[(a)] $\sigma_S\subseteq \beta'_1$ and $R^*(\alpha') = R^*(\beta'_1)$, and there is some $\gamma \in B^S_m$ such that $\beta'_2 \subseteq \gamma$ and $\gamma \not\in \dom R$, and moreover, if $\beta'_2 \rightarrow^\mathcal{S} \beta''_2$ for some $\beta''_2\in \prodasm{\mathcal{S}}$, then there is some $\gamma' \in B^S_m$ such that $\beta''_2 \subseteq \gamma'$ and $\gamma' \not\in \dom R$.
\item[(b)] $\sigma_S\subseteq \beta'_2$ and $R^*(\alpha') = R^*(\beta'_2)$, and there is some $\gamma \in B^S_m$ such that $\beta'_1 \subseteq \gamma$ and $\gamma \not\in \dom R$, and moreover, if $\beta'_1 \rightarrow^\mathcal{S} \beta''_1$ for some $\beta''_1\in \prodasm{\mathcal{S}}$, then there is some $\gamma' \in B^S_m$ such that $\beta''_1 \subseteq \gamma'$ and $\gamma' \not\in \dom R$.
\end{enumerate}
\end{definition}
Condition (2) in the definition above says that when a cut is made to an assembly $\alpha'\in\prodasm{S}$ that represents $\alpha\in\prodasm{T}$, the two assemblies that are produced are such that one of the assemblies, $\beta'_1$ say, still represents $\alpha$ and is identifiable by the fact that it contains the seed $\sigma_S$, while the other assembly, $\beta'_2$, represents the empty tile. In addition, the result of any assembly sequence starting from $\beta'_2$ must also represent the empty tile. Informally, ``junk'' that falls off of an assembly during simulation must represent the empty tile and cannot grow into anything other than an assembly that represents the empty tile.
\begin{definition}
\label{def-s-models-t} We say that $\mathcal{S}$ \emph{models} $\mathcal{T}$ (under $R$), and we write $\mathcal{S} \models_R \mathcal{T}$, if for every $\alpha \in \prodasm{\mathcal{T}}$, there exists $\Pi \subset \prodasm{\mathcal{S}}$ where $R^*(\alpha') = \alpha$ for all $\alpha' \in \Pi$, such that, for every $\beta \in \prodasm{\mathcal{T}}$ where $\alpha \rightarrow^\mathcal{T} \beta$, (1) for every $\alpha' \in \Pi$ there exists $\beta' \in \prodasm{\mathcal{S}}$ where $R^*(\beta') = \beta$ and $\alpha' \rightarrow^\mathcal{S} \beta'$, and (2) for every $\alpha'' \in \prodasm{\mathcal{S}}$ where $\alpha'' \rightarrow^\mathcal{S} \beta'$, $\beta' \in \prodasm{\mathcal{S}}$, $R^*(\alpha'') = \alpha$, and $R^*(\beta') = \beta$, there exists $\alpha' \in \Pi$ such that $\alpha' \rightarrow^\mathcal{S} \alpha''$.
\end{definition}
The previous definition essentially specifies that every time $\mathcal{S}$ simulates an assembly $\alpha \in \prodasm{\mathcal{T}}$, there must be at least one valid growth path in $\mathcal{S}$ for each of the possible next steps that $\mathcal{T}$ could make from $\alpha$ which results in an assembly in $\mathcal{S}$ that maps to that next step.
\begin{definition}
\label{def-s-simulates-t} We say that $\mathcal{S}$ \emph{simulates} $\mathcal{T}$ (under $R$) if $\mathcal{S} \Leftrightarrow_R \mathcal{T}$ (equivalent productions), $\mathcal{T} \dashv_R \mathcal{S}$ and $\mathcal{S} \models_R \mathcal{T}$ (equivalent dynamics).
\end{definition}
\subsection{Dupled rgTAS simulation of a TAS}\label{sec:DrgTASsimTAS-formal}
Here we formally define what it means for a DrgTAS to ``simulate'' a TAS. The definition of a DrgTAS lends itself to a simulation definition statement that is equivalent to the definition of simulation for a TAS simulating another TAS. Therefore, our definitions come from \cite{IUNeedsCoop}.
Now let $\mathcal{T} = \left(T,\sigma_T,\tau_T\right)$ be a TAS, let $\mathcal{U} = \left(U,S,D,\sigma_U\right)$ be a DrgTAS, and let $R$ be an $m$-block representation function $R:B^U_m \rightarrow T$. Then we may define \emph{equivalent production}, \emph{follows}, and \emph{models} for $\mathcal{U}$ and $\mathcal{T}$ (under $R$) exactly as defined in Section~\ref{sec:rgTASsimTAS-formal} and therefore define simulation as follows.
\begin{definition}
\label{def-d-simulates-t} We say that $\mathcal{U}$ \emph{simulates} $\mathcal{T}$ (under $R$) if $\mathcal{U} \Leftrightarrow_R \mathcal{T}$ (equivalent productions), $\mathcal{T} \dashv_R \mathcal{U}$ and $\mathcal{U} \models_R \mathcal{T}$ (equivalent dynamics).
\end{definition}
}
\section{Formal descriptions of the Tile Assembly Models}
We now give the formal definitions of the tile assembly models.
\subsection{Formal description of the abstract Tile Assembly Model}
\label{sec-tam-formal}
In this section we provide a set of definitions and conventions that are used throughout this paper.
We work in the $2$-dimensional discrete space $\Z^2$. Define the set
$U_2=\{(0,1), \\(1,0), (0,-1), (-1,0)\}$ to be the set of all
\emph{unit vectors} in $\mathbb{Z}^2$.
We also sometimes refer to these vectors by their
cardinal directions $N$, $E$, $S$, $W$, respectively.
All \emph{graphs} in this paper are undirected.
A \emph{grid graph} is a graph $G =
(V,E)$ in which $V \subseteq \Z^2$ and every edge
$\{\vec{a},\vec{b}\} \in E$ has the property that $\vec{a} - \vec{b} \in U_2$.
Intuitively, a tile type $t$ is a unit square that can be
translated, but not rotated, having a well-defined ``side
$\vec{u}$'' for each $\vec{u} \in U_2$. Each side $\vec{u}$ of $t$
has a ``glue'' with ``label'' $\textmd{label}_t(\vec{u})$--a string
over some fixed alphabet--and ``strength''
$\textmd{str}_t(\vec{u})$--a nonnegative integer--specified by its type
$t$. Two tiles $t$ and $t'$ that are placed at the points $\vec{a}$
and $\vec{a}+\vec{u}$ respectively, \emph{bind} with \emph{strength}
$\textmd{str}_t\left(\vec{u}\right)$ if and only if
$\left(\textmd{label}_t\left(\vec{u}\right),\textmd{str}_t\left(\vec{u}\right)\right)
=
\left(\textmd{label}_{t'}\left(-\vec{u}\right),\textmd{str}_{t'}\left(-\vec{u}\right)\right)$.
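Stated operationally, this binding condition reads as in the following Python sketch; the encoding of tile types as dictionaries from unit vectors to (label, strength) pairs is hypothetical and chosen only for illustration.
\begin{verbatim}
# A sketch of the binding condition between two adjacent tiles; the
# tile representation (dicts from unit vectors to (label, strength))
# is hypothetical and chosen only for illustration.

def neg(u):
    return (-u[0], -u[1])

def binding_strength(t, t_prime, u):
    """Tiles t at position a and t' at position a+u bind with strength
    str_t(u) iff their abutting glues agree in both label and strength;
    otherwise they bind with strength 0."""
    label, strength = t[u]
    if (label, strength) == t_prime[neg(u)]:
        return strength
    return 0

N, E, S, W = (0, 1), (1, 0), (0, -1), (-1, 0)
t1 = {N: ('a', 1), E: ('b', 2), S: ('', 0), W: ('', 0)}
t2 = {N: ('', 0), E: ('', 0), S: ('', 0), W: ('b', 2)}
print(binding_strength(t1, t2, E))  # 2: east glue of t1 matches west of t2
\end{verbatim}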
In the subsequent definitions, given two partial functions $f,g$, we write $f(x) = g(x)$ if~$f$ and~$g$ are both defined and equal on~$x$, or if~$f$ and~$g$ are both undefined on $x$.
Fix a finite set $T$ of tile types.
A $T$-\emph{assembly}, sometimes denoted simply as an \emph{assembly} when $T$ is clear from the context, is a partial
function $\pfunc{\alpha}{\Z^2}{T}$ defined on at least one input, with points $\vec{x}\in\Z^2$ at
which $\alpha(\vec{x})$ is undefined interpreted to be empty space,
so that $\dom \alpha$ is the set of points with tiles.
We write $|\alpha|$ to denote $|\dom \alpha|$, and we say $\alpha$ is
\emph{finite} if $|\alpha|$ is finite. For assemblies $\alpha$
and $\alpha'$, we say that $\alpha$ is a \emph{subassembly} of
$\alpha'$, and write $\alpha \sqsubseteq \alpha'$, if $\dom \alpha
\subseteq \dom \alpha'$ and $\alpha(\vec{x}) = \alpha'(\vec{x})$ for
all $x \in \dom \alpha$.
We now give a brief formal definition of the aTAM. See \cite{Winf98,RotWin00,Roth01,jSSADST} for other developments of the model. Our notation is that of \cite{jSSADST}, which also contains a more complete definition.
Given a set $T$ of tile types, an {\it assembly} is a partial function $\pfunc{\alpha}{\mathbb{Z}^2}{T}$. An assembly is {\it $\tau$-stable}
if it cannot be broken up into smaller assemblies without breaking bonds of total strength at least $\tau$, for some $\tau \in \mathbb{N}$.
Self-assembly begins with a {\it seed assembly} $\sigma$ and
proceeds asynchronously and nondeterministically, with tiles
adsorbing one at a time to the existing assembly in any manner that
preserves $\tau$-stability at all times. A {\it tile assembly system}
({\it TAS}) is an ordered triple $\mathcal{T} = (T, \sigma, \tau)$,
where $T$ is a finite set of tile types, $\sigma$ is a seed assembly
with finite domain, and $\tau \in \N$. A {\it generalized tile
assembly system} ({\it GTAS})
is defined similarly, but without the finiteness requirements. We
write $\prodasm{\mathcal{T}}$ for the set of all assemblies that can arise
(in finitely many steps or in the limit) from $\mathcal{T}$. An
assembly $\alpha \in \prodasm{\mathcal{T}}$ is {\it terminal}, and we write $\alpha \in
\termasm{\mathcal{T}}$, if no tile can be $\tau$-stably added to it. It is clear that $\termasm{\mathcal{T}} \subseteq \prodasm{\mathcal{T}}$.
An assembly sequence in a TAS $\mathcal{T}$ is a (finite or infinite) sequence $\vec{\alpha} = (\alpha_0,\alpha_1,\ldots)$ of assemblies in which each $\alpha_{i+1}$ is obtained from $\alpha_i$ by the addition of a single tile. The \emph{result} $\res{\vec{\alpha}}$ of such an assembly sequence is its unique limiting assembly. (This is the last assembly in the sequence if the sequence is finite.) The set $\prodasm{T}$ is partially ordered by the relation $\longrightarrow$ defined by
\begin{eqnarray*}
\alpha \longrightarrow \alpha' & \textmd{iff} & \textmd{there is an assembly sequence } \vec{\alpha} = (\alpha_0,\alpha_1,\ldots) \\
& & \textmd{such that } \alpha_0 = \alpha \textmd{ and } \alpha' = \res{\vec{\alpha}}. \\
\end{eqnarray*}
If $\vec{\alpha} = (\alpha_0,\alpha_1,\ldots)$ is an assembly sequence in $\mathcal{T}$ and $\vec{m} \in \mathbb{Z}^2$, then the $\vec{\alpha}$\emph{-index} of $\vec{m}$ is $i_{\vec{\alpha}}(\vec{m}) = \min\{ i \in \mathbb{N} \mid \vec{m} \in \dom \alpha_i\}$. That is, the $\vec{\alpha}$-index of $\vec{m}$ is the time at which a tile is first placed at location $\vec{m}$ by $\vec{\alpha}$. For each location $\vec{m} \in \bigcup_{i} \dom \alpha_i$, define the set of its input sides $\mathrm{IN}^{\vec{\alpha}}(\vec{m}) = \{\vec{u} \in U_2 \mid \mbox{str}_{\alpha_{i_{\vec{\alpha}}(\vec{m})}(\vec{m})}(\vec{u}) > 0 \}$.
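For concreteness, the $\vec{\alpha}$-index and input-side computations can be phrased as in the following Python sketch; the encodings (assembly sequences as lists of dictionaries, tiles as dictionaries from unit vectors to (label, strength) pairs) are hypothetical, and the input-side computation is one straightforward reading of the definition.
\begin{verbatim}
# A sketch of the alpha-index and input-side computations; the
# assembly sequence is a list of dicts from points to tiles, and tiles
# are dicts from unit vectors to (label, strength) -- a hypothetical
# encoding with all four sides present.

U2 = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def alpha_index(seq, m):
    """min{ i : m in dom(alpha_i) }: the step at which a tile is first
    placed at location m by the assembly sequence."""
    return min(i for i, alpha in enumerate(seq) if m in alpha)

def input_sides(seq, m):
    """One reading of IN(m): the sides on which the tile placed at m
    binds positively to tiles already present when it attaches."""
    i = alpha_index(seq, m)
    alpha, tile = seq[i], seq[i][m]
    sides = set()
    for u in U2:
        q = (m[0] + u[0], m[1] + u[1])
        label, strength = tile[u]
        if (q in alpha and strength > 0
                and alpha[q][(-u[0], -u[1])] == (label, strength)):
            sides.add(u)
    return sides
\end{verbatim}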
We say that $\mathcal{T}$ is \emph{directed} (a.k.a. \emph{deterministic}, \emph{confluent}, \emph{produces a unique assembly}) if the relation $\longrightarrow$ is directed, i.e., if for all $\alpha,\alpha' \in \prodasm{T}$, there exists $\alpha'' \in \prodasm{T}$ such that $\alpha \longrightarrow \alpha''$ and $\alpha' \longrightarrow \alpha''$. It is easy to show that $\mathcal{T}$ is directed if and only if there is a unique terminal assembly $\alpha \in \prodasm{T}$ such that $\sigma \longrightarrow \alpha$.
A set $X \subseteq \Z^2$ {\it weakly self-assembles} if there exists
a TAS ${\mathcal T} = (T, \sigma, \tau)$ and a set $B \subseteq T$
such that $\alpha^{-1}(B) = X$ holds for every terminal assembly
$\alpha \in \termasm{T}$. Essentially, weak self-assembly can be thought of
as the creation (or ``painting'') of a pattern of tiles from $B$ (usually taken to be a
unique ``color'') on a possibly larger ``canvas'' of un-colored tiles.
A set $X$ \emph{strictly self-assembles} if there is a TAS $\mathcal{T}$ for
which every assembly $\alpha\in\termasm{T}$ satisfies $\dom \alpha =
X$. Essentially, strict self-assembly means that tiles are only placed
in positions defined by the shape. Note that if $X$ strictly self-assembles, then $X$ weakly
self-assembles. (Let all tiles be in $B$.)
\subsection{Formal description of the restricted glue Tile Assembly Model}
\label{sec-rgtam-formal}
In this section we formally define the restricted glue Tile Assembly Model (rgTAM). Since the rgTAM is based on the aTAM, most of the formal definitions of Section~\ref{sec-tam-formal} apply here.
The rgTAM can be thought of as the aTAM where every system (rgTAS) in the rgTAM has the properties that $\tau = 1$ and glues may have strengths $-1, 0,$ or $1$. A system in the rgTAM is defined as an ordered pair $(T, \sigma)$ where $T$ is a set of tile types, and $\sigma$ is a stable seed assembly.
An assembly sequence in an rgTAS $\mathcal{T}$ is a (finite or infinite) sequence $\vec{\alpha} = (\alpha_0,\alpha_1,\ldots)$ of assemblies in which each $\alpha_{i+1}$ is obtained from $\alpha_i$
in one of two ways. First, $\alpha_{i+1}$ can be obtained from $\alpha_i$ by the addition of a single tile such that the sum of the strengths of the bound glues of this single tile in $\alpha_{i+1}$ is $\geq 1$. Second, $\alpha_{i+1}$ can be obtained from $\alpha_i$ if $\alpha_{i+1}$ lies on one side of a cut of $\alpha_{i}$ whose strength is $\leq 0$. Unlike an assembly sequence in the aTAM, an assembly sequence in the rgTAM may not have a unique limiting assembly, and therefore may not have a result. However, given an assembly $\alpha$ in an rgTAS and an assembly sequence $\vec{\alpha}$, if the limit of $\vec{\alpha}$ is $\alpha$, then we say that the \emph{result} (denoted $\res{\vec{\alpha}}$) of $\vec{\alpha}$ is $\alpha$. The notations used in Section~\ref{sec-tam-formal} apply to the rgTAM. In addition to these notations, we distinguish between tile attachment and assemblies produced by a cut of strength $\leq 0$ as follows.
\begin{eqnarray*}
\alpha \rightarrow_+ \alpha' & \textmd{iff} & \alpha' \textmd{ is obtained from } \alpha \textmd{ by a single stable tile addition } \\
\alpha \rightarrow_- (\alpha'_1, \alpha'_2) & \textmd{iff} & \alpha'_1 \textmd{ and } \alpha'_2 \textmd{ lie on each side of a cut of } \alpha \textmd{ such that the } \\
&&\textmd{ strength of this cut is } \leq 0
\end{eqnarray*}
\subsection{Formal description of the Dupled restricted glues Tile Assembly Model}
\label{sec-Drgtam-formal}
This section gives a formal definition of the Dupled restricted glues Tile Assembly Model (DrgTAM).
First, we define the dupled aTAM (DaTAM), which is a mild extension of Winfree's abstract tile assembly model \cite{Winf98}. Then, as in Section~\ref{sec-rgtam-formal}, we define the DrgTAM by restricting the temperature to $1$ and glue strengths to $-1$, $0$, or $1$.
Given $V \subseteq \Z^2$, the \emph{full grid graph} of $V$ is the undirected graph $G^\mathrm{f}_V=(V,E)$, where for all $\vec{x}, \vec{y}\in V$, $\left\{\vec{x},\vec{y}\right\} \in E \iff \| \vec{x} - \vec{y}\| = 1$; i.e., if and only if $\vec{x}$ and $\vec{y}$ are adjacent on the $2$-dimensional integer Cartesian space. Fix an alphabet $\Sigma$. $\Sigma^*$ is the set of finite strings over $\Sigma$. Let $\Z$, $\Z^+$, and $\N$ denote the set of integers, positive integers, and nonnegative integers, respectively.
A \emph{square tile type} is a tuple $t \in (\Sigma^* \times \N)^{4}$; i.e. a unit square, with four sides, listed in some standardized order, and each side having a \emph{glue} $g \in \Sigma^* \times \N$ consisting of a finite string \emph{label} and nonnegative integer \emph{strength}. Let $T\subseteq (\Sigma^* \times \N)^{4}$ be a set of tile types. We define a set of \emph{singleton types} to be any subset $S \subseteq T$. Let $t = ((g_N,s_N),(g_S,s_S),(g_E,s_E),(g_W,s_W)) \in T$, $d\in \{N,S,E,W\} = \mathcal{D}$, and write $Glue_d(t) = g_d$ and $Strength_d(t) = s_d$. A \emph{duple type} is defined as an element of the set
$\{ (x,y,d) \mid x,y\in T, \; d\in\mathcal{D}, \; Glue_d(x) = Glue_{-d}(y), \; \textmd{ and }Strength_d(x)=Strength_{-d}(y)\geq\tau \}$.
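The duple-type definition can be read as a simple enumeration over pairs of tile types, as in the following Python sketch; the tile encoding (dictionaries from unit vectors to (label, strength) pairs) is hypothetical and matches the one used in the earlier sketches.
\begin{verbatim}
# A sketch of duple-type construction from the definition; tiles are
# dicts from unit vectors to (label, strength), a hypothetical encoding.

N, E, S, W = (0, 1), (1, 0), (0, -1), (-1, 0)

def duple_types(T, tau):
    """All triples (x, y, d) whose abutting glues match in label and
    strength, with strength at least tau: x and y fused along d."""
    return [(x, y, d)
            for x in T for y in T for d in (N, E, S, W)
            if x[d] == y[(-d[0], -d[1])] and x[d][1] >= tau]

t1 = {N: ('a', 2), E: ('', 0), S: ('', 0), W: ('', 0)}
t2 = {N: ('', 0), E: ('', 0), S: ('a', 2), W: ('', 0)}
T = [t1, t2]
print([(T.index(x), T.index(y), d) for (x, y, d) in duple_types(T, 2)])
# -> [(0, 1, (0, 1)), (1, 0, (0, -1))]:
# the same duple listed in both orientations
\end{verbatim}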
A {\em configuration} is a (possibly empty) arrangement of tiles on the integer lattice $\Z^2$, i.e., a partial function $\alpha:\Z^2 \dashrightarrow T$.
Two adjacent tiles in a configuration \emph{interact}, or are \emph{attached}, if the glues on their abutting sides are equal (in both label and strength) and have positive strength.
Each configuration $\alpha$ induces a \emph{binding graph} $G^\mathrm{b}_\alpha$, a grid graph whose vertices are positions occupied by tiles, according to $\alpha$, with an edge between two vertices if the tiles at those vertices interact. An \emph{assembly} is a connected, non-empty configuration, i.e., a partial function $\alpha:\Z^2 \dashrightarrow T$ such that $G^\mathrm{f}_{\dom \alpha}$ is connected and $\dom \alpha \neq \emptyset$. The \emph{shape} $S_\alpha \subseteq \Z^2$ of $\alpha$ is $\dom \alpha$. Let $\alpha$ be an assembly and $B \subseteq \mathbb{Z}^2$. $\alpha$ \emph{restricted to} $B$, written as $\alpha \upharpoonright B$, is the unique assembly satisfying $\left(\alpha \upharpoonright B\right) \sqsubseteq \alpha$ and $\dom{\left(\alpha \upharpoonright B\right)} = B$.
Given $\tau\in\Z^+$, $\alpha$ is \emph{$\tau$-stable} if every cut of~$G^\mathrm{b}_\alpha$ has weight at least $\tau$, where the weight of an edge is the strength of the glue it represents. When $\tau$ is clear from context, we say $\alpha$ is \emph{stable}.
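For small assemblies, $\tau$-stability can be checked by brute force over all cuts of the binding graph; the sketch below (ours, exponential in the assembly size and meant only as an illustration) reuses the \verb|cut_strength| function sketched earlier.
\begin{verbatim}
from itertools import combinations

def is_tau_stable(assembly, tau, cut_strength):
    """True iff every cut of the binding graph has weight >= tau."""
    pos = list(assembly)
    for r in range(1, len(pos)):
        for part in combinations(pos, r):
            a = set(part)
            b = set(pos) - a
            if cut_strength(assembly, a, b) < tau:
                return False
    return True
\end{verbatim}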
Given two assemblies $\alpha,\beta$, we say $\alpha$ is a \emph{subassembly} of $\beta$, and we write $\alpha \sqsubseteq \beta$, if $S_\alpha \subseteq S_\beta$ and, for all points $p \in S_\alpha$, $\alpha(p) = \beta(p)$. Let $\mathcal{A}^T$ denote the set of all assemblies of tiles from $T$, and let $\mathcal{A}^T_{< \infty}$ denote the set of finite assemblies of tiles from $T$.
A \emph{dupled tile assembly system} (DTAS) is a tuple $\mathcal{T} = (T,S,D,\sigma,\tau)$, where $T$ is a finite tile set, $S \subseteq T$ is a finite set of singleton types, $D$ is a finite set of duple tile types, $\sigma:\Z^2 \dashrightarrow T$ is the finite, $\tau$-stable, \emph{seed assembly}, and $\tau\in\Z^+$ is the \emph{temperature}.
Given two $\tau$-stable assemblies $\alpha,\beta$, we write $\alpha \to_1^{\mathcal{T}} \beta$ if $\alpha \sqsubseteq \beta$ and $0 < |S_{\beta} \setminus S_{\alpha}| \leq 2$. In this case we say $\alpha$ \emph{$\mathcal{T}$-produces $\beta$ in one step}. The \emph{$\mathcal{T}$-frontier} of $\alpha$ is the set $\partial^\mathcal{T} \alpha = \bigcup_{\alpha \to_1^\mathcal{T} \beta} S_{\beta} \setminus S_{\alpha}$, the set of empty locations at which a tile could stably attach to $\alpha$.
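The frontier can be computed by scanning the empty neighbours of an assembly; the sketch below (ours) is written for temperature $1$ and singleton attachments only, so duple placements, which may add two positions at once, are omitted; \verb|can_attach| is the function sketched earlier.
\begin{verbatim}
def frontier(assembly, T, can_attach):
    """Empty neighbours of the assembly at which some tile type
    in T could stably attach (singleton attachments only)."""
    STEP = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    front = set()
    for (x, y) in assembly:
        for dx, dy in STEP:
            p = (x + dx, y + dy)
            if p not in assembly and any(can_attach(assembly, p, t)
                                         for t in T):
                front.add(p)
    return front
\end{verbatim}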
A sequence of $k\in\Z^+ \cup \{\infty\}$ assemblies $\alpha_0,\alpha_1,\ldots$ over $\mathcal{A}^T$ is a \emph{$\mathcal{T}$-assembly sequence} if, for all $1 \leq i < k$, $\alpha_{i-1} \to_1^\mathcal{T} \alpha_{i}$.
The {\em result} of an assembly sequence is the unique limiting assembly (for a finite sequence, this is the final assembly in the sequence).
If $\vec{\alpha} = (\alpha_0,\alpha_1,\ldots)$ is an assembly sequence in $\mathcal{T}$ and $\vec{m} \in \mathbb{Z}^2$, then the $\vec{\alpha}$\emph{-index} of $\vec{m}$ is $i_{\vec{\alpha}}(\vec{m}) = \min\{ i \in \mathbb{N} \mid \vec{m} \in \dom \alpha_i\}$. That is, the $\vec{\alpha}$-index of $\vec{m}$ is the time at which a tile is first placed at location $\vec{m}$ by $\vec{\alpha}$. For each location $\vec{m} \in \bigcup_{0 \leq i < k} \dom \alpha_i$, define the set of its input sides $\mathrm{IN}^{\vec{\alpha}}(\vec{m}) = \{\vec{u} \in U_2 \mid \mathrm{str}_{\alpha_{i_{\vec{\alpha}}(\vec{m})}}(\vec{u}) > 0 \}$.
We write $\alpha \to^\mathcal{T} \beta$, and we say $\alpha$ \emph{$\mathcal{T}$-produces} $\beta$ (in 0 or more steps) if there is a $\mathcal{T}$-assembly sequence $\alpha_0,\alpha_1,\ldots$ of length $k$ such that
(1) $\alpha = \alpha_0$,
(2) $S_\beta = \bigcup_{0 \leq i < k} S_{\alpha_i}$, and
(3) for all $0 \leq i < k$, $\alpha_{i} \sqsubseteq \beta$.
If $k$ is finite then it is routine to verify that $\beta = \alpha_{k-1}$.
We say $\alpha$ is \emph{$\mathcal{T}$-producible} if $\sigma \to^\mathcal{T} \alpha$, and we write $\prodasm{\mathcal{T}}$ to denote the set of $\mathcal{T}$-producible assemblies. An assembly $\alpha$ is \emph{$\mathcal{T}$-terminal} if $\alpha$ is $\tau$-stable and $\partial^\mathcal{T} \alpha=\emptyset$.
We write $\termasm{\mathcal{T}} \subseteq \prodasm{\mathcal{T}}$ to denote the set of $\mathcal{T}$-producible, $\mathcal{T}$-terminal assemblies. If $|\termasm{\mathcal{T}}| = 1$ then $\mathcal{T}$ is said to be {\em directed}.
We say that a DTAS $\mathcal{T}$ \emph{strictly (a.k.a. uniquely) self-assembles} a shape $X \subseteq \Z^2$ if, for all $\alpha \in \termasm{\mathcal{T}}$, $S_{\alpha} = X$; i.e., if every terminal assembly produced by $\mathcal{T}$ places tiles on -- and only on -- points in the set $X$.
Now, the DrgTAM is defined as in Section~\ref{sec-rgtam-formal}, and a DrgTAS is defined to be a system in the DrgTAM. Note that the glue binding two tiles that form a duple must have strength $1$, and the glues exposed by a duple may have strength $-1$, $0$, or $1$. Also notice that for an assembly $\alpha$ in a DrgTAS, a cut of strength $\leq 0$ may separate two nodes of the grid graph that correspond to the two tiles of a duple. Then, the two producible assemblies on each side of this cut each contain one tile from the duple.
\subsubsection{Informal description of the abstract Tile Assembly Model}
\label{sec-tam-informal}
A \emph{tile type} is a unit square with four sides, each consisting of a \emph{glue label}, often represented as a finite string, and a nonnegative integer \emph{strength}. A glue~$g$ that appears on multiple tiles (or sides) always has the same strength~$s_g$.
There is a finite set $T$ of tile types, but an infinite number of copies of each tile type, with each copy being referred to as a \emph{tile}. An \emph{assembly}
is a positioning of tiles on the integer lattice $\Z^2$, described formally as a partial function $\alpha:\Z^2 \dashrightarrow T$.
Let $\mathcal{A}^T$ denote the set of all assemblies of tiles from $T$, and let $\mathcal{A}^T_{< \infty}$ denote the set of finite assemblies of tiles from $T$.
We write $\alpha \sqsubseteq \beta$ to denote that $\alpha$ is a \emph{subassembly} of $\beta$, which means that $\dom\alpha \subseteq \dom\beta$ and $\alpha(p)=\beta(p)$ for all points $p\in\dom\alpha$.
Two adjacent tiles in an assembly \emph{interact}, or are \emph{attached}, if the glue labels on their abutting sides are equal and have positive strength.
Each assembly induces a \emph{binding graph}, a grid graph whose vertices are tiles, with an edge between two tiles if they interact.
The assembly is \emph{$\tau$-stable} if every cut of its binding graph has strength at least~$\tau$, where the strength of a cut is the sum of all of the individual glue strengths in the cut. When $\tau$ is clear from context, we simply say that a $\tau$-stable assembly is stable.
A \emph{tile assembly system} (TAS) is a triple $\calT = (T,\sigma,\tau)$, where $T$ is a finite set of tile types, $\sigma:\Z^2 \dashrightarrow T$ is a finite, $\tau$-stable \emph{seed assembly},
and $\tau$ is the \emph{temperature}.
An assembly $\alpha$ is \emph{producible} if either $\alpha = \sigma$ or if $\beta$ is a producible assembly and $\alpha$ can be obtained from $\beta$ by the stable binding of a single tile.
In this case we write $\beta\to_1^\calT \alpha$ (to mean~$\alpha$ is producible from $\beta$ by the attachment of one tile), and we write $\beta\to^\calT \alpha$ if $\beta \to_1^{\calT*} \alpha$ (to mean $\alpha$ is producible from $\beta$ by the attachment of zero or more tiles).
When $\calT$ is clear from context, we may write $\to_1$ and $\to$ instead.
We let $\prodasm{\calT}$ denote the set of producible assemblies of $\calT$.
An assembly is \emph{terminal} if no tile can be $\tau$-stably attached to it.
We let $\termasm{\calT} \subseteq \prodasm{\calT}$ denote the set of producible, terminal assemblies of $\calT$.
A TAS $\calT$ is \emph{directed} if $|\termasm{\calT}| = 1$. Hence, although a directed system may be nondeterministic in terms of the order of tile placements, it is deterministic in the sense that exactly one terminal assembly is producible (this is analogous to the notion of {\em confluence} in rewriting systems).
Since the behavior of a TAS $\calT=(T,\sigma,\tau)$ is unchanged if every glue with strength greater than $\tau$ is changed to have strength exactly $\tau$, we assume that all glue strengths are in the set $\{0, 1, \ldots , \tau\}$.
\vspace{-15pt}
\subsubsection{Informal description of the restricted glue Tile Assembly Model}
\label{sec-rgtam-informal}
The rgTAM was introduced in~\cite{SingleNegative} where it was shown that the rgTAM is computationally universal even in the case where only a single glue has strength $-1$. The definition used in~\cite{SingleNegative} and the definition given here are similar to the irreversible negative glue tile assembly model given in~\cite{DotKarMasNegativeJournal}.
The restricted glue Tile Assembly Model (rgTAM) can be thought of as the aTAM where the temperature is restricted to $1$ and glues may have strengths $-1, 0,$ or $1$. A system in the rgTAM is an ordered pair $(T, \sigma)$ where $T$ is the \emph{tile set}, and $\sigma$ is a stable \emph{seed assembly}. We call an rgTAM system an rgTAS. \emph{Producible} assemblies in an rgTAS can be defined recursively as follows. Let $\mathcal{T} = (T,\sigma)$ be an rgTAS. Then, an assembly $\alpha$ is producible in $\mathcal{T}$ if 1. $\alpha = \sigma$, 2. $\alpha$ is the result of a stable attachment of a single tile to a producible assembly, or 3. $\alpha$ is one side of a cut of strength $\leq 0$ of a producible assembly.
In~\cite{DotKarMasNegativeJournal}, Doty et al. give a list of the choices that can be made when defining a model with negative glues. These choices are (1) seeded/unseeded, (2) single-tile addition/two-handed assembly, (3) irreversible/reversible, (4) detachment precedes attachment/detachment and attachment in arbitrary order, (5) finite tile counts/infinite tile counts, and (6) tagged result/tagged junk. Here we have chosen the rgTAM to be a seeded, single-tile addition, irreversible model that uses infinite tile counts. We also assume that attachment and detachment in the model occur in arbitrary order; however, the results presented here also hold in the case where detachment precedes attachment. Finally, the definition of simulation (see Section~\ref{sec:simulation_def_informal}) implicitly enforces a notion of tagged result and tagged junk. In particular, if detachment occurs in a simulating system, of the two resulting assemblies one contains the seed and represents some assembly in the simulated system, while the other resulting assembly must map to the empty tile.
\vspace{-15pt}
\subsubsection{Informal description of the Dupled restricted glue Tile Assembly Model}
\label{sec-drgtam-informal}
The DrgTAM is an extension of the rgTAM which allows for systems with square tiles as well as rectangular tiles. The rectangular tiles are $2 \times 1$ or $1 \times 2$ rectangles which can logically be thought of as two square tiles which begin pre-attached to each other along an edge, hence the name \emph{duples}. A \emph{DrgTAM system} (DrgTAS) is an ordered 4-tuple $(T,S,D,\sigma)$ where, as in a TAS, $T$ is a tile set and $\sigma$ is a seed assembly. $S$ is the set of singleton (i.e. square) tiles which are available for assembly, and $D$ is the set of duple tiles. The tile types making up $S$ and $D$ all belong to $T$, with those in $D$ each being a combination of two tile types from $T$.
It should be noted that the glue binding two tiles that form a duple must have strength $1$, and the glues exposed by a duple may have strength $-1$, $0$, or $1$. Also notice that for an assembly $\alpha$ in a DrgTAS, a cut of strength $\leq 0$ may separate two nodes of the grid graph that correspond to two tiles of a duple. Then, the two producible assemblies on each side of this cut each contain one tile from the duple.
\section{Introduction}
The coupled-channels approach has been successful in describing the subbarrier enhancement of heavy-ion fusion cross sections~\cite{DHRS98,BT98}.
Conventionally, it takes into account the coupling between the relative motion
of the colliding nuclei and a few low-lying
collective excitations in the colliding
nuclei, as well as transfer channels,
which couple strongly to the ground state.
High-lying modes, such as giant resonances, and single-particle excitations
are not usually considered, since the former simply renormalize the
internucleus potential~\cite{THAB94}
and the latter are not coupled strongly to the
ground state.
However, in recent years,
many
experimental data have accumulated that suggest a need to go beyond the
conventional coupled-channels approach. The examples include
the surface diffuseness anomaly in the internuclear potential~\cite{N04},
the steep fall-off
of fusion cross sections at deep subbarrier energies~\cite{J02,D07,S08}, a
large smoothing of quasi-elastic barrier distribution~\cite{T95,P05}, and
the energy dependence of the $Q$-value spectra for quasi-elastic back
scattering~\cite{E08,L09}.
It has been a challenge to account for these new aspects of heavy-ion
fusion reactions simultaneously with the coupled-channels framework.
Recently, quasi-elastic scattering cross sections
for $^{20}$Ne+$^{90,92}$Zr systems at backward angles have been measured, which show a considerable difference
in the barrier distribution between the two systems~\cite{E09}: the barrier distribution with the $^{92}$Zr target is much more smeared than that with $^{90}$Zr. The
coupled-channels calculations, on the other hand, predict similar barrier distributions
for the two systems,
because the rotational excitations of $^{20}$Ne play a
predominant role. Since those coupled-channels calculations include
the collective excitations in the $^{90,92}$Zr nuclei, the experimental
data strongly indicate that the difference in the barrier distribution for the two
systems can be attributed to non-collective excitations in the target nuclei.
Notice that
the effect of single-particle excitations should be more important in $^{92}$Zr
than in the $N=50$ magic nucleus $^{90}$Zr.
In this
contribution, we shall
discuss the effect of single-particle excitations on heavy-ion reactions, which
has been ignored in the conventional coupled-channels approach.
To this end, we compute
the penetrability for a one dimensional
two-level system in the presence of a coupling to dissipative environment
described by a random matrix model.
The random matrix model was originally developed by Weidenm\"uller and his
collaborators in the late 70's in order to describe deep inelastic
collisions for massive systems~\cite{AKW77}.
Here we shall use a similar model,
and solve quantum mechanically
the coupled-channels equations of large dimension.
See Refs.~\cite{HT98,DT08,Z90} for
earlier attempts, which, however, did not use the random matrix model.
\section{One-dimensional barrier penetrability with random matrix model}
In the random matrix model, one considers an ensemble of
the coupling matrix elements, $V_{ij}(x)$, in the coupled-channels
equations,
which are assumed to follow the Gaussian Orthogonal Ensemble
(GOE)~\cite{AKW77}.
That is, they have a zero mean, $\overline{V_{ij}(x)}=0$,
and the second moment is
given by
\begin{equation}
\overline{V_{ij}(x)V_{kl}(x')}=
(\delta_{i,k}\delta_{j,l}+\delta_{i,l}\delta_{j,k})\,
\frac{w_0}{\sqrt{\rho(\epsilon_i)\rho(\epsilon_j)}}\,
e^{-\frac{(\epsilon_i-\epsilon_j)^2}{2\Delta^2}}\,
\cdot e^{-\frac{(x-x')^2}{2\sigma^2}}\cdot e^{-\frac{x^2+x'^2}{2\alpha^2}},
\end{equation}
where $\rho(\epsilon)$ is the level density.
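As an illustration of how such an ensemble can be realized numerically, the sketch below samples one member on a spatial grid; the factorized $x$-dependence of the second moment is handled with a Cholesky decomposition, the names $w_0$, $\Delta$, $\sigma$, $\alpha$ and the level density \verb|rho| refer to the parameters above, and the implementation choices are ours.
\begin{verbatim}
import numpy as np

def sample_couplings(x, eps, rho, w0, Delta, sigma, alpha, rng):
    """Return V[k, i, j]: coupling V_ij at grid point x[k]; x and eps
    are 1-D numpy arrays, rho is a callable level density."""
    nx, nc = len(x), len(eps)
    # spatial correlation:
    #   exp(-(x-x')^2/2 sigma^2) * exp(-(x^2+x'^2)/2 alpha^2)
    dx = x[:, None] - x[None, :]
    C = np.exp(-dx**2 / (2 * sigma**2))
    C *= np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * alpha**2))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(nx))
    V = np.zeros((nx, nc, nc))
    for i in range(nc):
        for j in range(i, nc):
            # channel-space variance; the GOE diagonal is doubled
            var = w0 / np.sqrt(rho(eps[i]) * rho(eps[j]))
            var *= np.exp(-(eps[i] - eps[j])**2 / (2 * Delta**2))
            if i == j:
                var *= 2.0
            v = np.sqrt(var) * (L @ rng.standard_normal(nx))
            V[:, i, j] = v
            V[:, j, i] = v
    return V
\end{verbatim}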
We apply this model to a one-dimensional system~\cite{KPW76}, in which
we consider a Gaussian potential barrier with a height of 100
MeV~\cite{DLW83,HB04}.
In order to take into account the quasi-continuum single-particle spectrum,
we discretize it~\cite{N79} from 3 MeV to 13 MeV with an energy spacing
of 0.05 MeV (in this way, we include 200 single-particle channels).
In order to take an ensemble average, we generate 20 random matrices and
perform the coupled-channels calculations 20 times for each energy.
In addition to the single-particle levels, we also consider a
collective level at 1 MeV, whose coupling form factor is given
by a Gaussian function~\cite{DLW83,HB04}.
The coupling strength to the collective state is set to be the same
for all the samples in the random ensemble.
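For definiteness, the numerical setup just described can be written down as in the following sketch (the barrier width is a placeholder, since it is not quoted here).
\begin{verbatim}
import numpy as np

V_B, width = 100.0, 3.0              # MeV, fm (width assumed)
barrier = lambda x: V_B * np.exp(-x**2 / (2.0 * width**2))

eps_sp = np.arange(3.0, 13.0, 0.05)  # 200 single-particle channels
eps_collective = 1.0                 # MeV; coupling fixed over samples
n_samples = 20                       # random matrices per energy

# constant level density for the equidistant discretization
rho = lambda eps: 1.0 / 0.05         # states per MeV
\end{verbatim}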
\begin{figure}[htb]
\begin{minipage}[t]{80mm}
\includegraphics*[width=15pc]{fig1.eps}
\caption{The penetrability (the top panel), the barrier distribution
(the middle panel), and the logarithmic slope of the penetrability (the
bottom panel) obtained with the random matrix model. }
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{75mm}
\includegraphics*[width=15pc]{fig2.eps}
\caption{The energy dependence of the $Q$-value distribution
defined with the reflected flux. The peaks at $Q$=0 and $-$1 MeV correspond
to the elastic scattering and the excitation of the collective state,
respectively. }
\end{minipage}
\end{figure}
The top panel of Fig. 1 shows the penetrability thus obtained (the solid line), in comparison to
that without the couplings to non-collective states (the dashed line). The dotted line shows
the result of a single-channel calculation. The effect of
the non-collective couplings mainly appears at energies above the barrier, where
the couplings hinder the penetrability. The middle panel shows the barrier distribution~\cite{DHRS98,BT98},
defined as the first derivative of the penetrability.
In the absence of the non-collective couplings, the barrier distribution has two peaks, corresponding to
the two eigen-barriers generated from superpositions of the ground and the collective states.
When the non-collective couplings are switched on, the higher peak is smeared significantly, although the structure
of the lower peak remains almost the same.
The bottom panel shows the logarithmic slope of the penetrability~\cite{J02}, which provides a useful means to
investigate its deep subbarrier behavior.
One can see that the logarithmic slope is not altered much by the non-collective couplings, indicating that
the steep fall-off of deep subbarrier fusion cross sections does not seem to be accounted for by the
present mechanism (see also Ref.~\cite{R09}).
Figure 2 shows the $Q$-value distribution obtained with the reflected flux
in the solution of coupled-channels equations.
We have smeared the discrete distribution with a Lorentzian function of width 0.2 MeV.
At energies below the barrier, only the elastic and the collective channels are important.
As energy increases, one can clearly see that the single-particle excitations gradually become important, in
accordance with the results shown in Fig. 1.
One can also define the $Q$-value distribution with the transmitted flux (not shown). Our calculation indicates
that the $Q$-value distribution obtained with the transmitted flux is much less sensitive to the non-collective
couplings compared with the $Q$-value distribution defined with the reflected flux, suggesting
that quasi-elastic scattering is more sensitive to the single-particle excitations than fusion.
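For reference, the smearing used for the $Q$-value spectra amounts to replacing each discrete channel contribution by a Lorentzian, as in the sketch below (variable names are ours; we read the quoted width of 0.2 MeV as the full width at half maximum).
\begin{verbatim}
import numpy as np

def smear_lorentzian(Q_channels, weights, Q_grid, gamma=0.2):
    """Sum of unit-normalized Lorentzians of FWHM gamma centred at
    the channel Q-values, weighted by the reflected (or transmitted)
    flux in each channel."""
    dist = np.zeros_like(Q_grid)
    for Q0, w in zip(Q_channels, weights):
        dist += w * (gamma / (2 * np.pi)) \
                / ((Q_grid - Q0)**2 + (gamma / 2)**2)
    return dist
\end{verbatim}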
\section{Summary}
We have applied the random matrix model to a one-dimensional barrier penetrability in order to discuss
the effect of single-particle excitations.
We have shown that non-collective excitations mainly affect the above-barrier behavior
of the penetrability, that is, they hinder the penetrability and smear the barrier distribution. On the
other hand, the low-energy behavior is not altered significantly.
The coupled-channels approach with the random matrix model enables one to compute the $Q$-value distribution.
We have shown that the single-particle excitation gradually becomes important as energy increases.
In this contribution, we have used a schematic one-dimensional model. It will be an interesting future
project to apply this model to realistic systems, and investigate the effect of non-collective excitations
on quasi-elastic barrier distributions. A quantum mechanical description of deep inelastic collisions
using the present model will also be
of interest.
\section*{Acknowledgement}
We thank E. Piasecki and V.I. Zagrebaev for useful discussions.
This work was supported by the Japanese
Ministry of Education, Culture, Sports, Science and Technology
through a Grant-in-Aid for Scientific Research under
program number 19740115.
\section{Introduction}
\label{sec:Introduction}
The dynamics of the growth of GDP (gross domestic product), where GDP is defined as the market value of all final goods and services
produced within a country in a given period of time \cite{Mankiw2011}, is arguably the most
scrutinised metric quantifying the overall economic development of an economy. A weak
annual growth rate of GDP, as has been characterising the US and Europe in the years following the
financial crisis of 2008, is interpreted as underperformance, which has called
for unorthodox monetary policies \cite{Erber2012} to attempt to fix it. In contrast, a strong growth of GDP is usually
lauded, because it usually reflects a rise of living standards and is generally
accompanied by decreasing unemployment. But what is meant by ``weak'' or ``strong'' growth?
Is there a ``natural'' growth rate? Do past growth rates of GDP imply future growth rates?
This last question is particularly relevant in the present context of small growth compared
with previous decades in developed countries and the argument by many that we may have
shifted to a ``new normal'' of slower intrinsic growth \cite{Dabla-Norris2015}.
A number of authors, e.g. \cite{Holden2012,Fernald2014}, have noted that a plot of the logarithm of the US GDP
as a function of (linear) time over the last one hundred years looks remarkably linear, as
shown by the continuous line and its dashed linear fitted line in figure \ref{fig:GDP_heatmap}:
the inflation adjusted GDP per capita exhibits a long term average growth of
1.9-2\% per year \cite{Fernald2014}. The occurrence of
such a near trend-stationary long run growth covering a period with two world wars,
the cold war and its associated proxy wars, the collapse of the Bretton Woods System in 1973, several large bubbles, crashes
and recessions and strong changes in interest rate policies, is truly remarkable.
It entices one to entertain the possibility of an equilibrium or natural growth rate, which
then could be extrapolated in the future. Gordon \cite{Gordon2012} questions this extrapolation on the basis
of the drags that are bound to impact growth, including demography, education, inequality,
globalization, energy/environment, and the overhang of consumer and government debts.
Fernald and Jones \cite{Fernald2014} point out the large uncertainties associated
with new technologies, inequality, climate change and the increasing shift of the economy
towards health care. Holden \cite{Holden2012} observes that there are large
medium frequency fluctuations around this linear trend of 1.9-2\% per year and presents
a model in which standard business cycle shocks \cite{Hansen1985,Merz1995,Bernanke1999}
lead to highly persistent movements around the long-term trend without significantly altering the trend itself,
due to a quasi-cancellation between the changes of new products and of new firms as a function of time.
Because of the wide spread effects that business
cycles have both in society and in fiscal policy making \cite{Barseghyan2013},
researchers have been urged to develop a more solid understanding of this stylized fact.
In recent years, the existence of out of equilibrium business cycles has been gaining
more acceptance in economic theory.
It is now understood that (out of equilibrium) business cycles, i.e. (excessive) periodic fluctuations in productivity,
have significant effects on the cost of external finance \cite{Mclean2014}, on inflation \cite{Altig2011},
on employment and on many other macroeconomic key indicators \cite{Michele2001}.
Business cycle-related market volatility has been shown to have predictive power
on expected market returns \cite{Kim2013}, which, in turn, play a central role in the capital asset pricing model
at the heart of finance. Noted professionals \cite{Dalio2015} view GDP growth as a long term trend, overlaid with both
short and long term debt cycles, suggesting that fluctuations associated with business cycles could occur at all scales.
Here, we present a quantitative characterisation of the fluctuations of US GDP growth at all scales,
using the natural tool for this, namely the wavelet transform. Adapting the analysing tool
to the quantification of local growth rates, our main finding is that
the distribution of GDP growths can be well approximated by a bimodal function.
One can thus represent the growth of the US economy per capita as an alternation
between regimes of strong growth rate $\rho_\text{high}$, associated with booms (or bubbles),
and regimes of low growth rate $\rho_\text{low}$ that include plateaus (and recessions).
These two types of regimes alternate, thus quantifying the business cycle and giving an effective long-term
growth rate $\rho_\text{lt}$ that is between $\rho_\text{low}$ and $\rho_\text{high}$.
While the existence of fluctuations around the long-term growth trend has been noted
by many others as mentioned above, to the best of our knowledge, this is the first time that it is shown that these
fluctuations can be classified in two broad families, suggesting two
well-defined economic regimes that completely exhaust the possible phases.
The existence of a well-characterised strong growth regime with average growth rate $\rho_\text{high}$
often leads to the misleading expectations that it is the normal that reflects a well-functioning economy, while
the other mode of low growth $\rho_\text{low}$ is considered
abnormal, often interpreted as due to a surprising shock, bringing considerable dismay and pain,
and leading to policy interventions. Our finding of a robust bimodal distribution of GDP growth rates
over the whole history of the US suggests that this interpretation is incorrect.
Rather than accepting the existence of the long-term growth rate as given, and interpreting the
deviations from it as perturbations, the bimodal view of the GDP growth suggests a completely
different picture. In this representation, the long-term growth rate is the result of a subtle
compensation between the high and low growth regimes that alternate continuously.
The overall growth dynamics that emerges is that the US economy is growing in a punctuated way \cite{Louzoun2003},
following phases of strong growth that are intrinsically unsustainable, followed by corrections or consolidation
until the next boom starts. In other words, the approximately long-term growth rate reflects an economy that
oscillates between booms and consolidation regimes. Because of the remarkable
recurrence of the strong regime and in view of its short-term beneficial effects,
economists and policy makers are tempted (and actually incentivised) to form their expectations
based on it, possibly catalysing or even creating it in a self-fulfilling prophecy fashion even when the real productivity
gains are no longer present, as occurred in the three decades before the 2008 crisis \cite{Sornette2014a}.
We suggest that the transient strong growth regimes can be rationalised within the framework
of the ``social bubble hypothesis'' \cite{Sornette2008,Gisler2009,Gisler2010,Gisler2011}, in the sense that they result from
collective enthusiasms similar to those developing during financial bubbles, which foster a collective attitude
towards more risk taking. The social bubble hypothesis claims that strong social interactions
between enthusiastic supporters weave a network of reinforcing feedbacks that lead to widespread endorsement and extraordinary commitment by those involved, beyond what would be rationalised by a standard cost-benefit analysis.
For a time, the economy grows faster than its long-term trend, due to a number of factors
that reinforce each other, leading to a phase of creative innovation (e.g. the internet dotcom bubble)
or credit based expansion (e.g. the house boom and financialisation of the decade before 2008). These regimes
then unavoidably metamorphose into a ``hangover'', the recovery and strengthening episode
until the next upsurge.
Performing a careful analysis at multiple scales and over different window sizes up to the largest one going back to 1800,
we also find that the long-term growth rate of real GDP per capita has actually not been perfectly constant, being lower at about
1.6\% from 1800 till the end of WWII and growing to 1.9-2\% from 1950 until 2007 and then slowing down to approximately
1.1\% over the last 8 years. Informed by the above proposition that the high growth regime has no reason
to be the norm, the slower growth since 2008 suggests for a different interpretation. Having exhausted
the measures that (somewhat artificially \cite{Sornette2014a}) boosted economic growth in the previous three
decades before the 2008 crisis
and notwithstanding the introduction of exceptional measures, broadly referred to as ``quantitative easing'',
the innovations and productivity gains seem unable to return to those of the ``thirty glorious years'' of 1950-1980, preventing
the recurrence of the strong boom regimes with $\rho_\text{high}$, but rather remain in a protracted
low growth regime $\rho_\text{low}$.
In the next section, we present the wavelet transform that we use to examine the GDP growth rate fluctuations
over different time scales. In section \ref{sec:wavelet_analysis_GDP}, we present our results concerning the
analysis of US GDP data and section \ref{sec:conclusions} concludes.
\section{The wavelet transform}
\label{sec:WT}
Originally developed in geophysics as a mathematical tool to analyze seismic signals \cite{Morlet1982,Goupillaud1984}, the wavelet
transform has proven useful for data analysis in a variety of fields such as image processing \cite{Antonini1992}, astrophysics \cite{Slezak1990},
turbulence \cite{Argoul1989} and generally whenever complicated interactions between events occurring at different scales appear \cite{Meyer1992}.
A $\psi$-wavelet transform $W_\psi$ is simply a projection of a signal $X(\tau)$ onto $t$-translated
and $s$-dilated versions of $\psi$ \cite{Goupillaud1984,Grossmann1984,Yiou2000}:
\begin{equation}
W_\psi[X](s,t) = \int \limits_{-\infty}^{\infty} d\tau~ \psi \left(\tau-t; s \right) ~X(\tau).
\label{eq:WT_definition}
\end{equation}
We call $s$ the scale and $t$ the time parameter. The analyzing function $\psi$, called the wavelet, has to be localized
in both the time and frequency domains. Depending on the application, the wavelets must be endowed with several additional properties;
see \cite{Daubechies1990,Daubechies1992,Debnath2002} for mathematical details. For our purposes, it is important for the wavelet to be properly normalized.
Assuming that $\psi(t;s)$ is approximately zero for values of $t$ outside the interval $[-s,s]$, the wavelet transform has then
the following intuitive interpretation: $W_\psi[X](s,t)$ is the weighted average of $X$ over the interval $[t-s,t+s]$. The
wavelet transform can thus be seen as a `mathematical microscope' that resolves local structures of $X$ at `position' (time) $t$ and at a `magnification' (scale) $s$.
Denoting by $\ast$ the convolution operator, expression \eqref{eq:WT_definition} can also be written compactly as $W_\psi[X](s,t) = [X(\tau) \ast \psi(\tau;s)](t)$,
or, for brevity, just $X \ast \psi$.
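On a uniformly sampled series, \eqref{eq:WT_definition} reduces to a discrete cross-correlation; the sketch below (our own minimal implementation, assuming the truncated kernel is not longer than the series) makes this explicit, and near the boundaries it suffers from the edge effects mentioned later.
\begin{verbatim}
import numpy as np

def wavelet_transform(X, t, psi, scales):
    """W[m, n] ~ integral psi(tau - t[n]; scales[m]) X(tau) dtau."""
    dt = t[1] - t[0]
    W = np.empty((len(scales), len(t)))
    for m, s in enumerate(scales):
        # truncate the kernel where it is effectively zero
        tau = np.arange(-5 * s, 5 * s + dt / 2, dt)
        kernel = psi(tau, s)
        # cross-correlation = convolution with the reversed kernel
        W[m] = np.convolve(X, kernel[::-1], mode='same') * dt
    return W
\end{verbatim}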
Replacing $\psi$ in \eqref{eq:WT_definition} by its $n$-th derivative $\psi^{(n)}$
corresponds to a $\psi$-analysis of the $n$-th derivative
of the time series $X(t)$ (up to a normalization factor), as a simple integration by parts derivation shows. In this context, $\psi = \psi^{(0)}$ is
also called the mother wavelet. Since the overall statistical characterization of complex structures depends only weakly on the choice of the mother wavelet \cite{Arneodo1993}, we will show here only results for the Gaussian mother wavelet $\psi(t;s) = \exp(- t^2 / 2 s^2) / \sqrt{2 \pi} s$. We have checked that other real-valued mother wavelets give similar results.
In this article, we use the wavelet transform to quantify
the pattern of local slopes (giving the local growth rates) of the analyzed time series (logarithm of the real US GDP per capita).
This amounts to replacing $\psi$ in \eqref{eq:WT_definition} by
the first derivative $\psi^{(1)}$ of the Gaussian mother wavelet, up to a normalization. The normalization
is chosen such that the wavelet transform of the test signal $X(t) = p t$ with constant slope $p$
gives exactly its slope $p$ for all times $t$ and all scales $s$. This leads to the following expression for our analyzing mother wavelet used
in expression in \eqref{eq:WT_definition}:
\begin{equation}
\psi^{(1)}(t;s) = \frac{t}{\sqrt{2 \pi} s^3} \exp \left( - \frac{1}{2} \left( \frac{t}{s} \right)^2 \right).
\label{eq:psi(1)}
\end{equation}
Note also that, by construction, the wavelet transform performed with $\psi^{(1)}(t;s)$ of a constant signal is zero.
This means that our implementation of the wavelet transform \eqref{eq:WT_definition} with \eqref{eq:psi(1)}
is insensitive to the absolute level and only quantifies precisely the local slope at a scale $s$.
In the remainder of this article, all figures are the result of the wavelet transform $X \ast \psi^{(1)}$ with $\psi^{(1)}$ given by \eqref{eq:psi(1)}.
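The analyzing wavelet \eqref{eq:psi(1)} and a quick numerical check of its normalization read, in a sketch of ours (grid choices arbitrary):
\begin{verbatim}
import numpy as np

def psi1(t, s):
    return t / (np.sqrt(2 * np.pi) * s**3) * np.exp(-0.5 * (t / s)**2)

# check: the transform of the test signal X(t) = p*t returns the
# slope p at any scale s (here evaluated at time t = 0)
p, s = 1.9, 4.0
tau = np.arange(-8 * s, 8 * s, 0.01)
slope = np.trapz(psi1(tau, s) * (p * tau), tau)
print(slope)   # ~ 1.9
\end{verbatim}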
\section{Wavelet analysis of the growth of real US GDP per capita}
\label{sec:wavelet_analysis_GDP}
\subsection{Analysed data: why real US GDP per capita?}
In this section, we analyze the real, i.e. adjusted for inflation, US GDP per capita (r-US-GDP-pc) as a measure for real
innovation and productivity gains. In contrast, the total nominal US GDP contains two additional
contributions to its growth: (i) population growth, including immigrants who are still an important component in the US;
(ii) inflation. In other words, the observed total US GDP can grow just from population change and/or the existence of inflation,
even at constant or decreasing real US GDP per capita, often leading to a mistaken illusion of improved wealth \cite{Miao2013}.
Given that the flow of immigrants and population increase has undergone many complex phases
during the history of the US, it is important to disentangle this population component of GDP growth and
work in units per capita. Similarly, inflation has been varying also significantly with bursts associated with wars,
the demise of the Bretton Woods System in 1973, the oil shocks and other financial crises.
The real US GDP per capita (r-US-GDP-pc) thus constitutes the standard gauge to evaluate
the evolution over time of the real wealth per person, i.e. the one that counts, is felt in real terms
and is truly associated with progress.
We thus analyze the quarterly r-US-GDP-pc data between 1947 and 2015.
In \ref{sec:annual_figures}, we present a similar analysis for the annual r-GDP-pc data
over the much larger extended period between 1800 and 2010.
\subsection{Hierarchical structure of the GDP growth rates revealed by wavelet analysis}
As mentioned in the introduction, plotting the r-US-GDP-pc in a semi-logarithmic plot (figure \ref{fig:GDP_heatmap})
shows, to a first approximation, a remarkably straight line, suggesting that the r-US-GDP-pc
grows exponentially as $\exp( \rho_\text{lt} t )$ with $t$ in units of years and
a long-term annual growth rate $\rho_\text{lt} \approx 2\%$ determined by an ordinary least squares (OLS) fit.
This value is often reported in the literature as the average long-term historical growth of real GDP per capita (e.g. \cite{Fernald2014}).
Beyond this long term average growth, one can see deviations that occur again and again.
Moreover, it is interesting to observe that the long-term growth rate $\rho_\text{lt}$ represented
by the slope of the straight dashed line seems to almost never describe the actual local growth rate of the r-US-GDP-pc.
In other words, the average growth rate does not seem to be a good description of the typical growth rates.
To quantify these qualitative observations, we perform a wavelet transform analysis of the logarithm of the r-US-GDP-pc
at different times $t$ and different scales $s$ to obtain the local growth rate at time $t$, averaged over a time interval $[t-s,t+s]$,
defined by
\begin{equation}
\rho(s,t) = \ln(\text{r-US-GDP-pc}) \ast \psi^{(1)}~.
\end{equation}
The results are encoded with the color scale for the annualized growth rates in figure \ref{fig:GDP_heatmap}
\begin{figure}[!htb]
\centering
\includegraphics[width = \textwidth]{US_GDP_heatmap_quarterly}
\caption{Wavelet transform $\ln(\text{r-US-GDP-pc}) \ast \psi^{(1)}$ of the logarithm of the quarterly real US GDP per capita data measured in chained 2009 US dollar over the period from 1947 to 2015 and represented by the continuous dark line (right vertical axis). An ordinary least squares fit determines a long-term annualized growth rate $\rho_\text{lt}$ of approximately $2\%$, shown as the dashed line.
The left vertical axis plots the scale $s$ of the wavelet analysis, corresponding
to an interval of analysis approximately equal to $2s$. The color scale encodes the value of the annualized growth rates at different times and
scales. The nonlinear conical shape of the envelop is due to edge-effects in the wavelet transform.}
\label{fig:GDP_heatmap}
\end{figure}
over the period from 1947 to 2015 shown on the horizontal axis. The left vertical axis plots the scale $s$ of the wavelet analysis, corresponding
to an interval of analysis approximately equal to $2s$. For scales at and lower than $s \approx 4$ years
(i.e. averaged over approximately 8 years), one can first observe a hierarchy of branches
associated with alternating warm (low or negative growth rates) and cold (positive and strong growth rates) colors.
As one goes to smaller and smaller time scales, more fine structures of alternating colors (growth rates) can be seen.
At the larger scales, $s \geqslant 4$ years, the color settles to the green value, recovering the known long-term growth $\rho_\text{lt} \approx 2\%$, also directly determined by OLS.
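Putting the pieces together, the growth-rate map $\rho(s,t)$ can be produced as in the following sketch (reusing the \verb|wavelet_transform| and \verb|psi1| sketches above; the input file name is hypothetical):
\begin{verbatim}
import numpy as np

dt = 0.25                              # quarterly data, in years
gdp = np.loadtxt('r_us_gdp_pc.txt')    # hypothetical input file
t = 1947.0 + dt * np.arange(len(gdp))
scales = np.arange(0.25, 8.0, 0.25)    # s from 3 months to 8 years

rho = wavelet_transform(np.log(gdp), t, psi1, scales)
# rho[m, n]: annualized growth rate at scale scales[m] and time t[n]
\end{verbatim}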
Because the continuous wavelet transform \eqref{eq:WT_definition} with \eqref{eq:psi(1)}
contains a lot of redundant information (a function $X(t)$ of one variable $t$
is transformed into a function $W_\psi[X](s,t)$ of two variables $t$ and $s$), it is standard to compress
the wavelet map shown in figure \ref{fig:GDP_heatmap}
into a so-called
``skeleton'' \cite{Arneodo2002,Mallat1992a,Mallat1992b}. The skeleton of $W_\psi[X](s,t)$
is the set of all local maxima of $\left| W_\psi[X](s,t) \right|$ considered as a function of $t$, for fixed scale $s$.
It is thus the set of all local maxima and minima of $W_\psi[X](s,t)$.
The skeleton forms a set of connected curves in the time-scale space, called the extrema lines.
Geometrically, each such skeleton line corresponds to either a crest or valley bottom of the three-dimensional
representation of the wavelet function $W_\psi[X](s,t)$. A crest can be viewed as the typical value
of the growth rate of a locally surging r-US-GDP-pc. The bottom of a valley is similarly the typical value
of the growth rate of a locally slowing down or contracting r-US-GDP-pc.
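Operationally, the skeleton can be obtained by keeping, for each scale, the local extrema of $W_\psi[X](s,t)$ along $t$; the following is a minimal sketch of ours.
\begin{verbatim}
import numpy as np

def skeleton(W):
    """Boolean mask of the strict local maxima and minima of each
    scale row W[m, :] along the time axis."""
    mask = np.zeros_like(W, dtype=bool)
    left, mid, right = W[:, :-2], W[:, 1:-1], W[:, 2:]
    mask[:, 1:-1] = ((mid > left) & (mid > right)) | \
                    ((mid < left) & (mid < right))
    return mask
\end{verbatim}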
The skeleton of figure \ref{fig:GDP_heatmap}
is shown in figure \ref{fig:GDP_skeleton}.
\begin{figure}[!htb]
\centering
\includegraphics[width = \textwidth]{US_GDP_skeleton_quarterly}
\caption{Skeleton structure of the wavelet transform $\ln(\text{r-GDP-pc}) \ast \psi^{(1)}$ for quarterly real US GDP per capita data measured in chained 2009 US dollar corresponding to figure \ref{fig:GDP_heatmap}.}
\label{fig:GDP_skeleton}
\end{figure}
One can observe
much more clearly the hierarchy of alternating growth regimes, which combine into an overall growth of $\approx 2\%$ at large scales.
Written along each skeleton line in the figure, we give the values of the local annualized growth rates
at four scale levels, 3 months, 6 months, 18 months
and 3 years. The structure of the skeleton lines, their colors and the values of the local annualized growth rates
confirm the existence of ubiquitous shifting regimes of slow and strong growths.
To validate the intuition that a crest (resp. valley bottom) of the skeleton can be viewed as the typical value
of the growth rate of a locally surging (resp. slowing down) r-US-GDP-pc, we check that
only knowing the growth rates on the skeleton structure retains the structural information of the full wavelet transform.
For this, we have constructed synthetic versions of the real GDP per capita
from the skeleton at different scales as explained in \ref{sec:pseudo_GDP}, and we have compared them
to the true r-US-GDP-pc.
\subsection{Evidence for a robust bimodal structure of distributions of US GDP growth rates}
The nature of the shifting regimes of slow and strong growths
can be quantified further by constructing the probability density distributions (pdf) of annualized
GDP growth rates at different fixed scales,
both from the entire wavelet transform (figure \ref{fig:GDP_heatmap})
and from the skeleton structure (figure \ref{fig:GDP_skeleton}).
The obtained pdf's for four different scales (6, 9, 15 and 30 months) are depicted in figure \ref{fig:GDP_distributions}.
\begin{figure}[!htb]
\centering
\includegraphics[width = \textwidth]{quarterly_distributions_bandwidth=0_002}
\caption{Gaussian kernel estimates (with width equal to $0.002$) of the probability density distributions (pdf) of the
local annualized growth rates of r-US-GDP-pc at four different scales indicated in the inset in the top-left.
The main panel represents the distributions extracted from the wavelet transform
shown in figure \ref{fig:GDP_heatmap}, while the top-right inset shows the pdf's obtained from the skeleton values
shown in figure \ref{fig:GDP_skeleton}.}
\label{fig:GDP_distributions}
\end{figure}
They have been obtained using Gaussian kernel estimation with width equal to $0.002$.
We have checked the robustness of these pdf's by changing the width of the kernels within a factor of two.
The pdf's extracted from the wavelet transform shown in figure \ref{fig:GDP_heatmap}
and from the skeleton values shown in figure \ref{fig:GDP_skeleton} exhibit the same structures.
First, the pdf's at the largest scale of 30 months peak at the annualized growth rate of $\approx 2\%$,
recovering the OLS value reported above (shown as the dashed line in figure \ref{fig:GDP_heatmap}).
Second, as we go down to smaller scales, already at the scale of 15 months, and more pronounced
at the scale of 9 and 6 months, a clear bimodal structure emerges (decorated by higher frequency structures,
associated with the width of the estimating kernel).
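For completeness, the kernel estimate underlying figure \ref{fig:GDP_distributions} is, in a sketch of ours (names hypothetical):
\begin{verbatim}
import numpy as np

def gaussian_kde(samples, grid, h=0.002):
    """Gaussian kernel density estimate with bandwidth h."""
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h
                                              * np.sqrt(2.0 * np.pi))

# e.g. pdf of the growth rates at one scale of the wavelet map:
# grid = np.linspace(-0.05, 0.08, 500)
# pdf = gaussian_kde(rho[m], grid)
\end{verbatim}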
Denoting the two main peaks of the bimodal density extracted from the full wavelet transform (the skeleton gives similar results)
at scale $s$ by $\rho_\text{low}(s)$ and $\rho_\text{high}(s)$ respectively, we obtain
\begin{equation}
\rho_\text{low}(6 \text{ months}) \approx 1\% \lesssim
\rho_\text{low}(9 \text{ months}) \approx 1.1\% \lesssim
\rho_\text{low}(15 \text{ months}) \approx 1.5\% \lesssim
\rho_\text{lt} \approx 2\%
\end{equation}
and
\begin{equation}
\rho_\text{high}(6 \text{ months}) \approx 3.1\% \gtrsim
\rho_\text{high}(9 \text{ months}) \approx 2.8\% \approx
\rho_\text{high}(15 \text{ months}) \approx 2.8\% \gtrsim
\rho_\text{lt} \approx 2\%.
\end{equation}
The pleasant stability of the estimates $\rho_\text{low}(6 \text{ months}) \approx \rho_\text{low}(9 \text{ months})$
and $\rho_\text{high}(6 \text{ months}) \approx \rho_\text{high}(9 \text{ months}) \approx \rho_\text{high}(15 \text{ months})$
suggests that real US GDP per capita can be modelled as an alternation of slow growth around a typical value
of $1\%$ and strong growth around a typical value of $3\%$, which bracket the long-term average growth rate
$\rho_\text{lt} \approx 2\%$. This constitutes the main result of our article.
\ref{sec:annual_figures} presents the wavelet transform, skeleton structure and growth rate
distributions for annual r-US-GDP-pc data starting in 1800 till 2010. The important conclusion is that
the previous observations presented above for quarterly data from 1950 to 2015 are broadly confirmed
when using annual data over this much longer period.
\section{Concluding remarks}
\label{sec:conclusions}
We have presented a quantitative characterisation of the fluctuations of
the annualized growth rate of the real US GDP per capita growth at many scales,
using a wavelet transform analysis of two data sets, quarterly data from 1947 to 2015
and annual data from 1800 to 2010. Our main finding is that
the distribution of GDP growth rates can be well approximated by a bimodal function
associated to a series of switches between regimes of strong growth rate $\rho_\text{high}$
and regimes of low growth rate $\rho_\text{low}$. The succession
of alternations of these two regimes compounds to produce a
remarkably stable long term average real annualized growth rate of 1.6\% from 1800 to 2010 and $\approx 2.0\%$ since 1950.
We thus infer that the robust constant growth rate since 1950 cannot be taken as
evidence for a purely exogenous ``natural'' rate of innovations and productivity growth.
It is rather a remarkable output of the succession of booms and corrections that punctuate
the economic history of the US since more than 200 years.
Our results suggest that alternating growth regimes
are intrinsic to the dynamics of the US economy and appear at all scales. These alternating
regimes can be identified as generalized business cycles, occurring at the
scale of the whole economy.
Such business cycles may be briefly rationalised as follows.
During the high growth regime, a number of positive feedback loops are in operation, such
as deregulation, enhanced credit creation, the belief in a ``new economy" and so on. This creates a transient boom,
perhaps accelerating itself and leading to financial and social bubbles
\cite{Geraskin2013,Sornette2014,Yukalov2015,Sornette2008,Gisler2009,Gisler2011}.
This overheating of the economy then turns out not to be sustainable and leads to a correction
and consolidation phase, the low growth regime. Then, the next strong growth regime starts
and so on.
Our findings suggest that strong growth cannot be dissociated from periods of
recessions, corrections or plateaus, which serve as a consolidation phase before
the next boom starts. However, because of the remarkable
recurrence of the strong regime and in view of its short-term beneficial effects,
economists and policy makers tend to form expectations of strong continuous growth.
Such a way of thinking may lead to conclusions that, we argue, have little merit. Consider the estimation
of the US Federal Reserve Bank of Dallas \cite{Atkinson2013} that
the cost of the 2008 crisis, assuming output eventually returning to its pre-crisis trend path,
is an output loss of \$6 trillion to \$14 trillion US dollars. These enormous numbers are
based on the integration of the difference between the extrapolation of hypothetical
GDP trajectories expected from a typical return to pre-crisis growth compared with the
realised GDP per capita. In the light of our findings, we argue that it is incorrect to
extrapolate to the pre-crisis growth rate, which is by construction abnormally high, and
much higher than the long term growth rate. In addition, one should take into account the
fact that the base rate after a crisis should be low or even negative, for the consolidation to work.
Moreover, the duration of the boom years may have a direct impact on that of the recovery period.
In this vein, Sornette and Cauwels \cite{Sornette2014a}
have argued that this 2008 crisis is special, as it is the culmination of a 30 year trend
of accelerating financialization, deregulation and debt growth. Our present results impel the
careful thinker to ponder what is the ``natural'' growth rate and avoid naive extrapolations.
Using a simple generic agent-based model
of growth, Louzoun et al. \cite{Louzoun2003} have identified
the existence of a trade off between either low and steady growth
or large growth associated with high volatility. Translating this insight to the US economy
and combining with the reported empirical evidence, the observed growth
features shown in the present paper seem to reveal a remarkable stable
relationship between growth and its fluctuations over many decades, if not centuries.
Perhaps, this is anchored in the political institutions as well as in the psychology
of policy makers and business leaders over the long term that transcend
the short-term vagaries of political power sharing and geopolitics. It is however
important to include in these considerations the fact that the US is unique compared with other developed countries, having
benefitted enormously from the two world wars in particular (compared with the
destruction of the UK and French empires and the demise of the economic dominance of European powers).
\bibliographystyle{unsrt}
\section{Introduction}
\label{section:intro}
In the last decade a significant step forward in the phenomenological
understanding of star formation in galaxies has been achieved, thanks
to many observational campaigns of nearby \citep[see,
e.g.,][]{Boissier07,Kennicutt07,Walter08} and distant
\citep[e.g.][]{Bouche07,Daddi10a,Genzel10} galaxies. In particular,
the relation between surface densities of gas and star formation rate
(hereafter SFR), the so-called Schmidt-Kennicutt relation
\citep[][hereafter standard SK]{Schmidt59,Kennicutt98}, has acquired a
higher degree of complexity, passing from a simple power law to a much
more structured and environment-dependent relation. Here by
environment we mean the local properties, averaged over $\sim$kpc
scale, of a galaxy patch that hosts star-forming molecular clouds. We
will call {\it standard}, {\it molecular}, {\it HI} and {\it
dynamical} SK the relations obtained by putting on the y-axis the
surface density of SFR $\Sigma_{\rm sfr}$, and on the x-axis,
respectively, the total cold gas surface density $\Sigma_{\rm cold}$,
the molecular gas surface density $\Sigma_{\rm mol}$, the HI gas
surface density $\Sigma_{HI}$ or the total cold gas surface density
divided by the orbital time-scale, $\Sigma_{\rm cold}/\tau_{\rm orb}$,
where $\tau_{\rm orb}=2\pi r/V(r)$ ($V(r)$ being the galaxy rotation
curve).
Wide consensus has recently been reached on the idea that the standard
SK is a reflection of the more ``fundamental'' molecular SK
\citep{Wong02,Blitz04,Blitz06}, while the HI SK is weak if not absent
\citep{Bigiel08,Bigiel10}. Indeed, in normal (and non-barred) spiral
galaxies the fraction of gas in molecular clouds increases toward the
galaxy center, while the HI gas surface density, $\Sigma_{HI}$,
saturates at a value of $\sim10$ {M$_\odot$ pc$^{-2}$} and declines in the inner
kpc. The old notion of a star formation threshold in disc galaxies
\citep{Martin01} has thus been revised to a steepening of the standard
SK at low surface densities \citep{Boissier07,Bigiel08}, driven by the
declining molecular fraction.
The gas surface density at which the transition from HI- to
molecular-dominated gas takes place, or equivalently the saturation
value of $\Sigma_{HI}$, has been proposed to be a function of galactic
environment (\citealt{Krumholz09b,Gnedin10,Papadopoulos10}; see also
\citealt{Schaye04}), with dwarf galaxies like the Magellanic Clouds
showing higher values (Bolatto et al. 2009; \citealt{Bolatto11}; see also references in
\citealt{Fumagalli10}). This is in line with the high-redshift evidence
of low efficiency of star formation in Damped Lyman-alpha systems at
$z\sim2$ \citep{Wolfe06}, that are thought to be the external,
gas-rich, metal-poor regions of young disc galaxies.
The slope of the molecular SK relation is debated. For THINGS spiral
galaxies, \cite{Bigiel08} report a slope of $1.0\pm0.2$ when measured on
a spatial grid of 750 pc bin size. An average value of $\sim$1 has been
confirmed very recently by \cite{Bigiel11}.
A steeper slope, more consistent with the
canonical 1.4 value of \citealt{Kennicutt98} is reported by
\cite{Kennicutt07} for star-forming regions in M51a.
\cite{Liu11} interpret this discrepancy as an effect of subtraction of
background emission in H$\alpha$ and dust, and claim that
a super-linear slope results when proper subtraction is performed.
At higher surface densities, a steeper relation is suggested by
observations of Ultra Luminous Infra-Red Galaxies (ULIRGs) and Sub-mm
Galaxies (SMGs) \citep{Bouche07}, but recent observations
\citep{Daddi10b,Genzel10,Boquien11} suggest that at $z\sim2$ ULIRGs and
SMGs, on the one hand, and non-IR-bright and BzK galaxies, on the
other, trace two parallel molecular SK relations with slope
$\sim1.4$ and separated by nearly one order of magnitude in
normalization (IR-bright galaxies having higher $\Sigma_{\rm sfr}$).
The interpretation of this phenomenological picture is still under
discussion. Based on observations, the declining molecular fraction
with gas surface density was proposed by \cite{Blitz04,Blitz06} to be
driven by external pressure, i.e. the midplane pressure of gas in
vertical hydrostatic equilibrium in a thin disk. However, this
relation has large scatter and is as scattered as other relations
with, e.g., disc mass surface density \citep{Leroy08}. Alternatively,
many authors \citep{Pelupessy06,Krumholz09a,Gnedin10,Dib11} have proposed
models where the molecular fraction is regulated by the equilibrium
balance between production of H$_2$ and destruction of the same
molecule by UV radiation from young stars (an assumption recently
criticized by \citealt{MacLow10}). Both creation and destruction
channels are regulated by dust abundance, because dust is both a
catalyst and a shield. As a consequence, the molecular fraction is
predicted to be driven by gas surface density (or column density) and
modulated by gas metallicity. Therefore, the scaling of molecular fraction
with gas surface density, or equivalently the saturation value of
$\Sigma_{HI}$, should be a function of metallicity. \cite{Fumagalli10} have
recently tested the two assumptions (pressure-driven or gas surface
density-driven molecular fraction) against data on nearby spiral and
dwarf galaxies, and report a marginal preference for the second
hypothesis.
The varying slope of the molecular SK, steepening toward high
$\Sigma_{\rm cold}$ from $\sim1.0$ to $\sim1.4-1.7$, has been
interpreted by \cite{Krumholz09b} as an effect of the decoupling of
molecular clouds in normal spiral galaxies from the rest of the
Inter-Stellar Medium (ISM). These authors argue that
molecular clouds are known to
have a roughly constant surface density and pressure \citep[and then
dynamical time, see][]{Solomon87}, so they are not in pressure
equilibrium with the rest of the ISM and their consumption time
$\Sigma_{\rm mol}/\Sigma_{\rm sfr}$ results to be $\sim2$ Gyr
\citep{Bigiel08}, irrespective of the molecular gas surface density
computed on $\sim$kpc scale. This last quantity is indeed to be
considered as a measure of the filling factor of molecular clouds.
The decoupling breaks at higher gas surface densities, where the ISM
is able to pressurize the molecular clouds so that their dynamical
time scales again with the inverse of the square root of the density
at $\sim$kpc scales. In this regime the slope of the molecular SK is
closer to the canonical value of 1.4.
The evidence of a double standard (or molecular) SK at high redshift
has only been proposed very recently. \cite{Teyssier10} have presented
a hydrodynamic simulation, performed with the AMR RAMSES code
\citep{Teyssier02}, of two merging galaxies resembling the Antennae.
They found that, when the force resolution reaches values as small
as 12pc, the predicted standard SK is boosted and reaches the relation
found by \cite{Daddi10b} to hold for ULIRGs and SMGs. However, the
authors do not show a simulation, run at the same resolution, of the
equivalent of a BzK galaxy that lies on a relation one order of
magnitude lower.
A second way of expressing a ``star formation law'' is by correlating
the surface density of SFR with $\Sigma_{\rm cold}$ divided by the gas
orbital time $t_{\rm orb}$. This dynamical SK was suggested by
\cite{Wyse89}, \cite{Silk97} and \cite{Elmegreen97} to be more
``fundamental'' than the standard SK, on the basis of the influence
that disc rotation and shearing exert on star-forming clouds. It has
attracted less attention than the standard SK, and in most cases it
has been reported as equally acceptable from the observational point
of view \citep[e.g.][]{Kennicutt98}. More recently, \cite{Tan10}
tested against observations of local galaxies the hypothesis of a {\em
linear} standard SK compared to several other proposals of star
formation ``laws''. He found that the data do not favour this linear
hypothesis. At higher redshift, \cite{Daddi10b} noticed that the
dichotomy in the molecular SK of normal and IR-bright galaxies
disappears when the dynamical SK is considered.
The nature of the SK relation and its
environmental dependence are of fundamental importance in the field
of galaxy formation, because SK-like relations are widely used in
models to predict the amount of stars formed in a star-forming galaxy.
In particular, in many N-body hydrodynamic simulations \citep[see,
e.g.][]{Katz96} the SFR of gas particles or cells is computed as
$\epsilon \times M_{\rm gas}/\tau_{\rm dyn}$, where $\epsilon$ is an
efficiency and the dynamical time is computed from the average density
of the particle. In many of the most recent simulations of galaxy formation it is
assumed that the SFR
obeys by construction a standard SK law with a cut at low density
\citep[e.g.][]{Springel03}. \cite{Schaye08} showed that this is equivalent to
assuming an effective equation of state for star-forming particles of
the kind $P \propto \rho^{\gamma_{\rm eff}}$. Only higher-resolution
simulations, resolving scales below 50 pc, are able to follow the
formation of molecules \citep{Pelupessy06, Pelupessy09, Robertson08,
Gnedin09} and thus to predict molecular fractions and kpc-averaged
SK relations, but the cost of these simulations makes it very hard,
with presently available facilities, to push even one of them to
$z=0$ in a full cosmological context.
One exception in this sense is given by the {{\sc muppi}} model
\citep[MUlti-Phase Particle Integrator;][hereafter paper
I]{Murante10}, based on a sub-resolution multi-phase treatment of
the star-forming gas particles and developed within the {\sc gadget}
TreePM+SPH code \citep{GADGET2}. In this model, the star formation
rate is {\em not} forced to follow a SK-like relation, so its
adherence to this law is a prediction of the model. The details of
the model are described below, but two points are worth mentioning.
First, inspired by the observational result of \cite{Blitz06},
the molecular fraction of the cold component of a gas particle is
scaled with the hydrodynamic pressure of the SPH particle.
Second, the multi-phase treatment allows thermal feedback
from SNe to efficiently heat the gas. In paper I, free parameters
were tuned to reproduce the observed SK relation in the case of an
isolated spiral, Milky Way-like galaxy. We noticed that the SK
relation traced by an isolated low surface brightness dwarf spiral
galaxy differs from the Milky Way one in the same way as spirals and
dwarfs differ in the data of \cite{Bigiel08}.
In this paper we show the SK relations resulting from a set of
{{\sc muppi}} simulations of isolated halos, including the ones used in
paper I. The value of this analysis is not only to present the
results of a specific model but also to understand how the SK
relations depend on galactic environment when the disc is heated by
feedback and the molecular fraction is scaled with pressure and does
not depend on metallicity. Section 2 describes the {{\sc muppi}} model and
the initial conditions used for our simulations. Section 3 presents
the resulting SK relations. Interpretation of results requires an
assessment of the vertical structure of simulated discs, presented in
Section 3.2. Section 3.3 is devoted to a discussion of the dynamical
SK relation. Section 4 gives a discussion of present results in
comparison with available literature, while Section 5 presents the
conclusions.
\begin{figure*}
\centering{
\includegraphics[width=0.32\linewidth]{map_MW_dens_xy_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_MW_dens_xz_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_MW_temp_xy_neg.jpg}
}
\centering{
\includegraphics[width=0.32\linewidth]{map_MW_HR_dens_xy_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_MW_HR_dens_xz_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_MW_HR_temp_xy_neg.jpg}
}
\centering{
\includegraphics[width=0.32\linewidth]{map_DW_dens_xy_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_DW_dens_xz_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_DW_temp_xy_neg.jpg}
}
\centering{
\includegraphics[width=0.32\linewidth]{map_SH_dens_xy_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_SH_dens_xz_neg.jpg}
\includegraphics[width=0.32\linewidth]{map_SH_temp_xy_neg.jpg}
}
\caption{Gas surface density (left column: face-on; middle column: edge-on)
and temperature (right column) maps of simulated galaxies. Color
coding is reported in the color bars; numbers refer to the logarithm of
surface density or temperature. The galaxies shown are, from
top to bottom, MW, MW\_HR, DW and SH.}
\label{fig:maps}
\end{figure*}
\section{Simulations}
\label{section:simulations}
\begin{table*}
\caption{
Basic characteristics of simulated galaxies.
Column 1: Simulation name.
Column 2: Gravitational Plummer-equivalent (P-e) softening for gas particles.
Column 3: Mass of DM halo.
Column 4: Mass of DM particle.
Column 5: Stellar mass.
Column 6: Mass of star particle.
Column 7: Half-mass radius of stars.
Column 8: Cold gas mass.
Column 9: Initial mass of gas particle (before spawning stars).
Column 10: Half-mass radius of cold gas.
Column 11: Gas fraction.
Notes (1): for MW, MW\_HR and DW stellar masses include both the
disc and bulge (only MW and MW\_HR) stars present in the initial
conditions and the newly formed stars, which are a minority. (2):
in the above cases we report the stellar mass particle of old
stars, because new stars (whose particle mass is $m_{\rm gas}/4$)
give a negligible contribution to the
disc. (3): The DM halo in the SH simulation is static.
}
\begin{tabular}{l|c|cc|ccc|ccc|c}
\hline\hline
Name & softening & $M_{\rm dm}$ & $m_{\rm dm}$ &$M_\star^{(1)}$&$m_\star^{(2)}$& $R_\star$ & $M_{\rm cold}$ & $m_{\rm gas}$ & $R_{\rm cold}$ & gas \\
& (kpc) & (M$_\odot$) & (M$_\odot$) & (M$_\odot$) & (M$_\odot$) & (kpc) & (M$_\odot$) & (M$_\odot$) & (kpc) & fraction \\
\hline
MW & 0.69 & $9.4\cdot10^{11}$ & $3.5\cdot10^6$ & $4.2\cdot10^{10}$ & $1.3\cdot10^6$ & $4.8$ & $3.3\cdot10^9$ & $7.4\cdot10^4$ & $5.6$ & $7.3$\% \\
MW\_HR & 0.41 & $9.4\cdot10^{11}$ & $6.9\cdot10^5$ & $4.2\cdot10^{10}$ & $2.6\cdot10^5$ & $4.4$ & $3.2\cdot10^9$ & $1.5\cdot10^4$ & $5.4$ & $7.1$\% \\
DW & 0.42 & $1.6\cdot10^{11}$ & $8.1\cdot10^5$ & $7.8\cdot10^9$ & $1.6\cdot10^5$ & $8.5$ & $1.9\cdot10^9$ & $3.9\cdot10^4$ & $8.3$ & $20$\% \\
SH & 0.042& $1.4\cdot10^{10}$ & $-^{(3)}$ & $1.4\cdot10^7$ & $2.2\cdot10^3$ & $0.77$& $1.4\cdot10^9$ & $8.7\cdot10^3$ & $5.2$ & $99$\% \\
\hline
\end{tabular}
\label{table:runs}
\end{table*}
The main properties of the simulated galaxies used in this paper are
reported in Table~\ref{table:runs}.
The first set of initial conditions is the one used in paper I, and
was generated following the procedure described in \cite{GADGET2}.
They are near-equilibrium distributions of particles consisting of a
rotationally supported disc of gas and stars and a dark matter halo.
For MW and MW\_HR a stellar bulge component is included. Bulge and
halo components are modeled as spheres with \cite{Hernquist90}
profiles, while the gaseous and stellar discs are modeled with
exponential surface density profiles. To start from a relaxed and
stable configuration, we first evolve the galaxy models for 10
dynamical times with non-radiative hydrodynamics. We then use the
configurations evolved after 4 dynamical times as initial conditions
for our simulations. To test the effect of resolution on the SK relations,
the MW galaxy is used also at a higher resolution (MW\_HR).
The second set of initial conditions has been used by
\cite{Springel03} for a resolution test of their star formation code.
Gas particles are embedded in a $1.39\times 10^{10}$ M$_\odot$ static
NFW \citep{Navarro96} halo ($10^{10}$ $h^{-1}$ M$_\odot$ with
$h=0.72$), and rotate at a speed corresponding to a spin parameter of
$\lambda=0.1$, radially distributed following \cite{Bullock01}. Gas
is initially in virial equilibrium with the halo. When cooling is
switched on, gas particles slowly coalesce into a rotating disc. With
this simple setting it is possible to use very high force resolution,
so that the vertical structure of the disc is well resolved. These
initial conditions are available at four resolutions. We show results
for only the highest one. We checked that results are very stable
when the resolution is degraded.
In all cases we forced the SPH smoothing length of gas particles not to drop
below $1/2$ of the Plummer-equivalent softening.
\subsection{The model}
\label{section:model}
{{\sc muppi}} has been developed within a non-public version of the {\sc
gadget}2 TreePM+SPH code \citep{GADGET2} that includes an
entropy-conserving formulation of SPH \citep{Springel02}. It has
already been ported into the more efficient {{\sc gadget}3} code.
All details are given in \cite{Murante10}, while we only describe here
some relevant features of the {{\sc muppi}} algorithm.
This model is inspired by the multi-phase analytic model of star
formation and feedback by \cite{Monaco04a}. A particle enters the
multi-phase regime when its temperature is lower than a threshold (set
to $5\times 10^4$ K) and its density, recast in terms of particle
number density (with a molecular weight of $\mu=0.6$), is higher than
0.01 cm$^{-3}$. This threshold is an order of magnitude lower than
the commonly used value of $\sim0.1$ cm$^{-3}$, which is typically
tuned to obtain a cut in the standard SK relation at a gas density
$\sim10$ {M$_\odot$ pc$^{-2}$}.
A multi-phase particle is assumed to be made up of three components:
two gas phases and a stellar phase. The two gas phases are assumed to
be in thermal pressure equilibrium. As in \cite{Springel03}, the cold
gas phase is assumed to have a temperature $T_c=1000$ K,
while the temperature of
the hot gas phase is set by the particle entropy. Upon entrance into
the multi-phase regime, all the mass is assumed to reside in the hot
phase; cooling does not lead to a lowering of the temperature but to a
deposition of mass in the cold phase. A fraction of the cold mass is
assumed to be in the molecular form. \cite{Blitz06} \citep[see
also][]{Leroy08} found that the ratio between molecular and $HI$ gas
surface densities correlates with the so-called external pressure,
which is the pressure expected at the galaxy midplane in the case of a
thin disc composed of gas and stars in vertical hydrostatic
equilibrium. This was estimated using a simplified version of the
expression proposed by
\cite{Elmegreen89}:
\begin{equation}
P_{\rm ext} \simeq \frac{\pi}{2} G \Sigma_{\rm cold} \left( \Sigma_{\rm cold}
+ R \Sigma_\star \right)\, .
\label{eq:Pext} \end{equation}
\noindent
Here $R=\sigma_{\rm cold}/\sigma_\star$ is the ratio between the
vertical r.m.s. velocity dispersions of cold gas and stars (here
$\sigma$ denotes velocity dispersion while $\Sigma$ denotes surface
density). The exponent $\alpha$ of the correlation $\Sigma_{\rm
mol}/\Sigma_{\rm HI}\propto P_{\rm ext}^\alpha$ was found to be
$0.9\pm0.1$. Inspired by this finding, and adopting an exponent
of 1 for simplicity, we use the following equation to estimate the
molecular fraction:
\begin{equation}
f_{\rm mol}(P) = \frac{1}{1 + P_0/P}\, .
\label{eq:fmol}
\end{equation}
\noindent
Here $P$ is the hydrodynamical pressure of the SPH particle (different
from the external pressure used in the observational correlation), and
we adopt $P_0/k=35000$ K cm$^{-3}$ as in \cite{Blitz06}.\footnote{We have
checked that our simulations produce relations similar to the
observational one (though with significant scatter) when pressure is
estimated as in \cite{Blitz06}.}
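Note for clarity that, since $f_{\rm mol}$ is the molecular fraction of the cold gas, equation~\ref{eq:fmol} is equivalent to
$$\frac{\Sigma_{\rm mol}}{\Sigma_{\rm HI}} = \frac{f_{\rm mol}}{1-f_{\rm mol}} = \frac{P}{P_0}\, ,$$
\noindent
i.e. to a relation of the observed form with exponent $\alpha=1$ (with the hydrodynamical pressure $P$ in place of $P_{\rm ext}$).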
The three components exchange mass through four mass flows: cooling,
star formation, restoration and evaporation. Cooling is computed with
a standard cooling function assuming zero metallicity.
Thermal energy resides in the hot phase, so cooling is computed using
its density, which is much lower than the average one. Star formation
takes place in the molecular phase, with a consumption time-scale
proportional to the particle dynamical time $t_{\rm dyn}$:
\begin{equation}
\dot{M}_\star = f_{\rm mol}(P)\, f_\star\, M_{\rm cold} / t_{\rm dyn}(n_c)\, .
\label{eq:sfr} \end{equation}
\noindent
Here $f_{\rm mol}$ is the pressure-dependent molecular fraction of
equation~\ref{eq:fmol}, $f_\star=0.02$ is a parameter of star
formation efficiency, determining the fraction of a molecular cloud that
is transformed into stars per dynamical time, and $M_{\rm cold}$ the
mass of the cold phase in the particle. The dynamical time $t_{\rm
dyn}$ is computed on the cold phase as soon as this collects 90 per
cent of the total gas mass, and is frozen until the particle exits
from the multi-phase regime (see paper I for a detailed discussion of
this hypothesis). One feature of equation~\ref{eq:sfr} is worth noticing:
because of the hypothesis of pressure equilibrium and constant
temperature of the cold phase, $n_c$ is proportional to pressure $P$;
at the same time, the fraction of gas mass in the hot phase is always
very low, so $f_{\rm mol}$ is very similar to the fraction of {\em
total} gas in molecular form. As a consequence, the particle star
formation rate is primarily regulated by gas pressure (with the
complication that $t_{\rm dyn}$ is computed at the beginning of a star
formation cycle and then kept frozen, while $f_{\rm mol}$ is computed
at each time-step).
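To make the prescription concrete, the following Python fragment is a schematic (and purely hypothetical) translation of equations~\ref{eq:fmol} and \ref{eq:sfr}; in particular, the free-fall expression used below for $t_{\rm dyn}$ is an illustrative assumption of ours, the actual definition being given in paper I.
\begin{verbatim}
import numpy as np

G_CGS  = 6.674e-8        # gravitational constant [cgs]
K_B    = 1.381e-16       # Boltzmann constant [erg/K]
P0     = 35000.0 * K_B   # normalization of f_mol [erg/cm^3]
F_STAR = 0.02            # star formation efficiency

def f_mol(P):
    # pressure-based molecular fraction
    return 1.0 / (1.0 + P0 / P)

def particle_sfr(P, M_cold, n_c):
    # particle SFR; M_cold in g, n_c in cm^-3, result in g/s;
    # t_dyn is illustrated here by the free-fall time of the
    # cold phase at number density n_c (mu = 0.6)
    rho_c = n_c * 0.6 * 1.673e-24
    t_dyn = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho_c))
    return f_mol(P) * F_STAR * M_cold / t_dyn
\end{verbatim}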
The SFR term of equation~\ref{eq:sfr} deposits a fraction $(1-f_{\rm
re})$ of the transformed mass into the stellar component, while a
fraction $f_{\rm re}$, restored from massive stars in an Instantaneous
Recycling Approximation (IRA), is given back to the hot phase. The
formed stellar component is accumulated within the particle, and
contributes to its inertia but not to its gas mass in all SPH
computations. The evaporation rate is assumed to be due to the
destruction of molecular clouds and amounts to a fraction $f_{\rm
ev}=0.1$ of the SFR.
Production of star particles is done according to the stochastic star
formation algorithm of \cite{Springel03} (see paper I for details).
We allow for 4 generations of star particles to be spawned by each
parent gas particle. Each new star particle
is produced at the expense of the stellar component and, if needed,
of the cold phase.
The three components of a multi-phase particle are of course assumed
to be subject to the same hydrodynamical forces. To alleviate the
effect of this unphysical assumption, and to mimic the destruction of
a star-forming cloud after a few dynamical times \citep{Monaco04b},
the code forces each particle to leave the multi-phase regime after
two dynamical times $t_{\rm dyn}$, computed as specified above.
One SN is generated for each $M_{\star,SN}=120\ {\rm M}_\odot$ of stars
formed, and each SN generates $10^{51}$ erg of energy. Of the energy
generated in the IRA, a small fraction $f_{\rm fb,i}=0.02$ is given to
the local hot phase to sustain its high temperature, while a fraction
$f_{\rm fb,o}=0.3$ is distributed to the hot phases of neighbour
particles in a 60-degree wide cone anti-aligned with the gas density
gradient. Energy contributions to particles are weighted by their
distance from the cone axis, to mimic the expansion of SN-driven
blasts along the least resistance path \citep{McKee77}. The present
version of the code distributes only thermal energy.
The assumption of a molecular fraction regulated by pressure
(equation~\ref{eq:fmol}) is very important because it makes the
evolution of the system intrinsically runaway: star formation
generates SNe, energy feedback from SNe pressurizes the hot phase, the
increase in pressure leads to an increase in molecular fraction and
thus to an increase in SFR. The runaway halts when the molecular
fraction saturates to unity. However, the dynamical response of the
pressurized particle is able to limit this runaway through the
expansion work done on neighbours. This intrinsic runaway behaviour
and the long cooling times are the main reasons for our
efficient thermal feedback.
\begin{figure}
\centering{\includegraphics[width=0.90\linewidth]{sknew_1.jpg}}
\caption{SK relations for simulated galaxies. The black contour levels
report the data from \cite{Bigiel08}, binned in bins of size
0.5 dex; levels correspond to 1, 2, 5 and 10 observational points
per bin, gas surface densities include helium. Triangles and thick
lines give results for the four simulations in 750pc bins and in
radial profiles in cylindrical coordinates; color coding is given in
each panel. The three panels give respectively the standard SK
relation with all gas, HI gas and molecular gas.}
\label{fig:skone}
\end{figure}
\section{The SK relations in simulated discs}
The first three simulations, the MW (at standard and high resolution)
and DW test cases of paper I, start from already formed disc galaxies
with 10 and 20 per cent gas fractions, and are shown at 0.5 Gyr, after
the initial transients due to the switching on of stellar feedback
have (almost) died out and while gas consumption is still negligible. In the
fourth simulation, SH, a disc forms in an isolated, static halo
filled with rotating gas initially in virial equilibrium; we analyze
the simulation at 0.7 Gyr, i.e. at the peak of its SFR, when the disc
is still largely gas-dominated. In all cases, conclusions are
unchanged when simulations are considered at other times. We show in
figure~\ref{fig:maps} face-on and edge-on maps of gas surface
densities and temperatures for the four simulations. It is interesting
to notice that in the regions affected by star formation the discs
(most of the MW and MW\_HR discs and the inner few kpc of SH) are relatively
hot and surrounded by thick coronae of gas heated by feedback and
circulating above or below the disc in a galactic fountain. The effect
of star formation on the DW galaxy is much less evident.
\subsection{The standard, HI and molecular SK relations}
\label{section:standard}
Figure~\ref{fig:skone} shows the standard,
HI and molecular
SK relations of the four simulations, compared with the data of
\cite{Bigiel08} for normal spiral galaxies.
Gas surface densities are always meant to include contribution from
helium. In the upper panel the thin line represents the fit proposed by
\cite{Kennicutt98}. Simulations have been processed as follows. Our
analyses are restricted to cold gas; in this paper by cold gas we mean
the cold phase of multi-phase particles plus all single-phase
particles colder than $10^5$ K. The molecular gas surface density is
computed using the molecular fraction of equation~\ref{eq:fmol},
applied only to multi-phase particles; HI gas is just cold minus
molecular gas. A galaxy frame is defined by the inertia tensor of
stars and cold gas, the z-axis corresponding to the largest
eigenvalue. The angular momentum of the same particles is always
found to be at most a few degrees off the z-axis. Then, radial
surface densities (in cylindrical coordinates) of (cold, HI,
molecular) gas and SFR are computed; these are reported in the figure
as colored thick lines. The same quantities are computed on a square
grid in the x-y plane, with bin size of 750 pc, as in the
\cite{Bigiel08} paper; these are reported as triangles, with the same
color as the corresponding thick lines.
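For reference, the grid-based measurement can be sketched with a few lines of purely illustrative Python (all names are ours; only the binning logic of the procedure described above is shown):
\begin{verbatim}
import numpy as np

def surface_density_map(x, y, m, half_size, bin_kpc=0.75):
    # project particle masses m [Msun] at disc-frame coordinates
    # (x, y) [kpc] onto a square grid of 750 pc bins and return
    # surface densities in Msun/pc^2
    edges = np.arange(-half_size, half_size + bin_kpc, bin_kpc)
    h, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=m)
    return h / (bin_kpc * 1.0e3) ** 2   # bin area in pc^2

# using SFRs [Msun/yr] as weights and dividing by the bin area
# in kpc^2 gives Sigma_sfr in Msun/yr/kpc^2 on the same grid
\end{verbatim}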
The following points are apparent from the figure: (i) the simulations
trace different SK relations when the total or HI gas are used. (ii)
These different relations correspond to different transitions from HI-
to molecular-dominated gas, or equivalently to different saturation
values of $\Sigma_{HI}$. (iii) The differences go in the direction
highlighted by \cite{Bigiel08}, \cite{Bolatto09,Bolatto11} and
\cite{Bigiel10} of a higher saturation $\Sigma_{HI}$ in dwarf
galaxies. (iv) The molecular SK relation is much tighter; also, it is
steeper than the \cite{Bigiel08} one and consistent with \cite{Liu11}.
(v) The scatter around the SK relation is generally smaller than the
data, especially for the SH simulation. (vi) As for the MW and MW\_HR
simulations, the SK relation is rather stable with resolution; this is
confirmed by applying the same analysis to the SH halo at lower
resolution.
\begin{figure}
\centering{\includegraphics[width=0.95\linewidth]{R.jpg}}
\caption{Molecular fraction as a function of cold gas surface density
for the four simulations. The dotted line marks the
$1/2$ value, where HI and molecular gas surface densities are
equal.}
\label{fig:R}
\end{figure}
To better quantify the difference between the simulations, in
figure~\ref{fig:R} we show the relation between surface density
and molecular fraction $f_{\rm mol}$, a quantity that is directly
comparable with data. The $\Sigma_{\rm cold}$ value at which the
molecular fraction is $1/2$ ranges from 10 {M$_\odot$ pc$^{-2}$} for MW to 35 {M$_\odot$ pc$^{-2}$}
for SH. This is in agreement with the measurements of the Large Magellanic
Cloud, though more extreme cases like the Small
Magellanic Cloud are not recovered (\citealt{Bolatto09,Bolatto11}; \citealt[see also references
in][]{Fumagalli10}).
\begin{figure*}
\centering{\includegraphics[width=0.95\linewidth]{P.jpg}}
\caption{
Relation between total cold gas surface density and hydrodynamical pressure
in simulations.
Triangles denote the relation in 750pc bins, while
darker continuous lines of the same color give the radial averages.
The bright
dot-dashed and dashed lines give radial estimates based respectively
on hydrostatic equilibrium (equation~\ref{eq:Pext}) and on
equation~\ref{eq:Pfit}. Each panel contains the results from one
simulation, while only the radial averages of hydrodynamical pressure from
the other simulations are reported in each panel as grey lines.}
\label{fig:P}
\end{figure*}
As pointed out by \cite{Fumagalli10}, if the molecular SK is the
``fundamental'' relation then a pressure law for the molecular
fraction results in an environment-dependent standard SK. This is
true because, as evident in equation~\ref{eq:Pext} for pressure in the
case of vertical hydrostatic equilibrium, this quantity depends not on
gas surface density alone (the quantity in the x-axis of the standard
SK) but on gas {\em and} stellar surface densities, the latter being
multiplied by the ratio $R$ of gas and star vertical velocity
dispersions \citep[see also][]{Shi11}. In other words, the cut in the standard SK is determined
by the (velocity-weighted) gas fraction. So our result is easy to
interpret as long as hydrodynamic pressure of our discs scales with
gas fraction in a similar way as the external one. It is worth
noticing at this point that our simulated galaxies have gas fractions
ranging from low to very high values, so they span most of the
interesting range for this quantity.
However, the validity of equation~\ref{eq:Pext}, obtained in the case
of gas in vertical hydrostatic equilibrium, must be checked in our
simulations, where energy is continually injected into discs. This is
done in the next sub-section.
\subsection{Vertical structure of simulated discs}
\label{section:vertical}
\begin{figure}
\centering{\includegraphics[width=0.95\linewidth]{corr_Q.jpg}}
\caption{Correlation of the ratio of the radial average of $P_{\rm
sim}/P_{\rm ext}$ with Toomre parameter $Q_{\rm tot}$ for the four
simulations. Color coding is reported in the panel; thick
continuous lines correspond to that part of the galaxy beyond two
softening lengths from the center and with significant SFR, thin
dashed lines refer to the rest of the galaxy. The dashed thick line
has slope $1/3$.}
\label{fig:corr_Q}
\end{figure}
In figure~\ref{fig:P} we report, for the four simulated galaxies, the
relation between pressure and $\Sigma_{\rm cold}$ as found in the
simulations. We show in each panel the hydrodynamical pressure found
in simulations, $P_{\rm sim}$, averaged in bins of the x-y grid as
colored triangles, and the radial profile of the average of the same
quantity as a line with a darker color. The external pressure $P_{\rm
ext}$ is computed using equation~\ref{eq:Pext} from the radial
profiles (colored dot-dashed lines). Moreover, to ease the comparison,
each panel reports, as grey thick lines, the radial $\Sigma_{\rm
cold}-P_{\rm sim}$ relations from the other simulations. To obtain
$P_{\rm ext}$, vertical velocity dispersions $\langle v_z^2 \rangle$
of cold gas and star particles have been computed in the galaxy frame.
For the stars, this quantity is equated to $\sigma_\star^2$, while for
the gas we use:
\begin{equation}
\sigma_{\rm cold}^2 = \langle v_z^2 \rangle + c_s^2 \, ,
\label{eq:scold}
\end{equation}
\noindent
where $c_s$ is the gas sound speed, computed on the average particle
temperature and density. Figure~\ref{fig:P} shows that the relation
between $P_{\rm sim}$ and $\Sigma_{\rm cold}$ varies in a very similar
way as that between $P_{\rm ext}$ and $\Sigma_{\rm cold}$. This
confirms the validity of our interpretation for the environmental
dependence of the standard SK. However, external and simulated
pressures show significant discrepancies that are worth addressing.
\begin{figure}
\centering{\includegraphics[width=0.95\linewidth]{sigma.jpg}}
\caption{Gas vertical velocity dispersions, including thermal and
kinetic energy (\ref{eq:scold}), for the four simulations. Color
coding is reported in the panel.}
\label{fig:sigma}
\end{figure}
These discrepancies may be related to the fact that we are comparing
the expected midplane pressure with the average one over the whole
disc height. While for barely resolved discs the two quantities cannot
differ much, the SH simulation allows us to test to what extent
midplane and average pressures are comparable. As a matter of fact,
particles in the midplane show a broad range of pressures, because the
accumulation of cold mass at the beginning of a star formation cycle
causes a depressurization by one order of magnitude, while successive
star formation re-pressurizes the star-forming particles. As a
result, pressure does not show a smooth decrease with height above the
disc plane, and the (mass-weighted) average of pressure over the disc height
is always very similar to the midplane one. A similar trend was found
by \cite{Tasker08}, using a much better resolution, when energy
feedback from SNe is considered.
We noticed that in our simulations the ratio of the true pressure $P_{\rm
sim}$, computed in radial bins, to $P_{\rm ext}$, computed from radial
profiles, correlates well with the Toomre parameter $Q$ of the disc,
computed at the same radius. For a disc made of a single component
with surface density $\Sigma$ and velocity dispersion $\sigma$, we have $Q(r)
= \sigma \kappa / \pi G \Sigma$, where $\kappa=(V(r)/r)\sqrt{2 + 2\,
d\ln V/d\ln r}$ is the epicyclic
frequency and $V(r)$ the rotation curve. Since our discs are composed of stars and gas, we compute the
disc total Toomre parameter $Q_{\rm tot}$ by using the simple
approximation of \cite{Wang94}, which is accurate for our discs
characterized by relatively high velocity dispersions \citep{Romeo11}:
\begin{equation} Q_{\rm tot} \simeq \left( \frac{1}{Q_{\rm cold}} +
\frac{1}{Q_\star}
\right)^{-1}
= \frac{\sigma_{\rm cold} \kappa}{\pi G (\Sigma_{\rm cold} + R \Sigma_\star)}\, .
\label{eq:Q} \end{equation}
\noindent
It is easy to show that $Q_{\rm tot}=\sigma_{\rm cold} \kappa
\Sigma_{\rm cold} / 2 P_{\rm ext}$.
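This follows by eliminating $\pi G (\Sigma_{\rm cold} + R \Sigma_\star)$ between the two equations:
$$Q_{\rm tot} = \frac{\sigma_{\rm cold} \kappa}{\pi G (\Sigma_{\rm cold} + R \Sigma_\star)} = \frac{\sigma_{\rm cold} \kappa \Sigma_{\rm cold}}{2 P_{\rm ext}}\, ,$$
\noindent
where the last step uses $\pi G (\Sigma_{\rm cold} + R \Sigma_\star) = 2 P_{\rm ext}/\Sigma_{\rm cold}$ from equation~\ref{eq:Pext}.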
\begin{figure}
\centering{\includegraphics[width=0.95\linewidth]{Q.jpg}}
\caption{Toomre Q parameters for cold gas (dotted lines), stars (dashed
lines) and total (continuous lines). Color coding is reported in
the panel.}
\label{fig:Q}
\end{figure}
Figure~\ref{fig:corr_Q} shows for our four simulations and at all
radii the relation between $Q_{\rm tot}$ and the ratio $P_{\rm
sim}/P_{\rm ext}$. The most interesting regions with $\Sigma_{\rm
sfr}>10^{-5}$ {M$_\odot$ yr$^{-1}$ kpc$^{-2}$} and radius larger than two softening lengths
have been highlighted as thick continuous lines. The two quantities
show a linear correlation that is
well fit by $P_{\rm sim} / P_{\rm ext} = Q_{\rm tot} / 3$; the round
number 3 adapts well to the SH simulation where the vertical structure
of the disc is fully resolved. Then, a much better fit to the $P_{\rm
sim} -\Sigma_{\rm cold}$ relation is given by:
\begin{equation}
P_{\rm fit} = P_{\rm ext} \times \frac{Q_{\rm tot}}{3}
= \frac{1}{6} \Sigma_{\rm cold} \sigma_{\rm cold} \kappa\, .
\label{eq:Pfit} \end{equation}
\noindent
The second equality is obtained using equations~\ref{eq:Pext} and
\ref{eq:Q}. $P_{\rm fit}$ is shown in figure~\ref{fig:P} as dashed
lines; it gives a much better approximation to the $P_{\rm sim} -
\Sigma_{\rm cold}$ relation, with some discrepancy for the DW galaxy
at $\Sigma_{\rm cold}<5$ {M$_\odot$ pc$^{-2}$}, where $\Sigma_{\rm sfr}$ is anyway
very low. We checked the validity of $P_{\rm fit}$ by comparing it to
the pressure profile of many versions of our galaxies, obtained in our
tests by varying model parameters and assumptions. In particular, we
tested changes in the computation of dynamical time used in
equation~\ref{eq:sfr} in several ways, e.g. by equating it to the
average one at entrance into the multi-phase regime or by recomputing it at each
time-step. We found in all cases that equation~\ref{eq:Pfit} always
gives a good fit to pressure, with significant discrepancies found
only at very low pressure. Therefore we consider this result to be
independent of details of our sub-resolution model. Presently,
we have no analytical explanation of why multiplying the external
pressure by $Q_{\rm tot}/3$ gives a better fit to the average SPH
pressure in our simulations. We interpret equation \ref{eq:Pfit}
as the quasi-equilibrium pressure of a disc with continuous injection
of energy from SNe: hotter discs, with high $Q_{\rm tot}$, are
characterized by higher pressure than $P_{\rm ext}$, while marginally
stable discs have a lower pressure by a factor up to $\sim3$ (a
similar result was found by \citealt{Koyama09}, see section 4 for more
details).
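For completeness, the second equality of equation~\ref{eq:Pfit} follows directly from equations~\ref{eq:Pext} and \ref{eq:Q}:
$$P_{\rm ext} \times \frac{Q_{\rm tot}}{3} = \frac{\pi}{2} G \Sigma_{\rm cold} \left( \Sigma_{\rm cold} + R \Sigma_\star \right) \times \frac{\sigma_{\rm cold} \kappa}{3 \pi G \left( \Sigma_{\rm cold} + R \Sigma_\star \right)} = \frac{1}{6} \Sigma_{\rm cold} \sigma_{\rm cold} \kappa\, .$$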
More insight on the vertical structure of the disc can be obtained by
analysing the quantities $\sigma_{\rm cold}$ and $Q_{\rm tot}$, which
enter the computation of $P_{\rm fit}$.
Figure~\ref{fig:sigma} shows the values of gas vertical velocity
dispersions $\sigma_{\rm cold}$ for the four galaxies, computed both
on the 750 pc grid and in radial profiles. Clearly the assumption of
a constant velocity dispersion, often made in the literature to
represent this quantity, is a poor approximation of our results.
Moreover, these velocities are much higher than the $\sim5-10$ {km s$^{-1}$}
usually assumed for galaxy discs.\footnote{This quantity includes
contribution from the thermal energy of the hot phase, so it is not
directly comparable to observations. We will return to this in section 4.}
The values of the Q parameters for
the four simulations are shown in figure~\ref{fig:Q}. $Q_{\rm tot}$
was computed as in equation~\ref{eq:Q}; the same parameter was also
computed separately for gas and stars as $Q_i = \sigma_i \kappa / \pi G
\Sigma_i$ (where $i$ is either cold or $\star$), and its values are
reported in the figure with dotted and dashed lines, respectively.
The MW and DW discs have $Q_{\rm tot}\sim 1$ in the central,
star-forming regions, while $Q_{\star}$ assumes higher values. $Q_{\rm cold}$
is very high, $\sim20$; this quantity cannot be directly compared with
observations of cold gas, because the main contribution to
$\sigma_{\rm cold}$ comes from the sound speed,
that is determined by the thermal
energy of the hot phase of multi-phase particles. If the sound speed is
neglected, we obtain $Q_{\rm cold}\sim5-10$. With this caveat,
these values of the
Toomre parameters are in very good agreement with what was found by
\cite{Leroy08} in nearby galaxies, where $Q_{\rm tot}$ is just above
unity while $Q_{\star}$ and especially $Q_{\rm cold}$ take higher
values. At the same time, the gas-dominated SH disc is much more
stable and kinematically hot. In conclusion, while
our stellar-dominated discs tend to self-regulate to keep
$Q_{\rm tot}\sim 1$, this is not a general rule.
As a further test, we ran our simulations starting from the same
four sets of initial conditions but using the effective model of
\cite{Springel03}, which is known not to provide efficient thermal
feedback. We obtained colder discs, with higher $Q_{\rm tot}$
parameters and hydrodynamical pressure not well fit either by $P_{\rm
ext}$ or by our equation~\ref{eq:Pfit}. The resulting standard SK
relation is in line with that originally found by \cite{Springel03}: it lies on the
\cite{Kennicutt98} relation above $\sim$5 {M$_\odot$ pc$^{-2}$} and cuts off below that
threshold, without following the mild steepening found in the data. This
surface density threshold depends somewhat on the galaxy: we find it at
3, 4.5 and 6 {M$_\odot$ pc$^{-2}$} for the MW, DW and SH simulations. This
dependence is easy to understand: the cut is determined by the imposed
volume density threshold for star formation. Since disc
temperatures do not vary much (especially in these colder discs) the
relation between volume and surface density of gas scales similarly to
that between pressure and surface density. Of course no prediction
can be made in this case for the HI and molecular SK. We conclude that
a simple volume density threshold for star formation, used in
conjunction with an ineffective feedback scheme, gives a standard SK
with environmental dependence that goes in the same direction as
the one presented in this paper, but cannot reproduce the data at the
same level of detail.
\subsection{The dynamical SK}
\label{section:origin}
\begin{figure}
\centering{\includegraphics[width=0.95\linewidth]{sknew_2.jpg}}
\caption{Dynamical SK relations for simulated galaxies. Points and
thick lines give results for the four simulations in 750pc bins and
in radial profiles in cylindrical coordinates; color coding is given
in each panel. Point size has been decreased to make the agreement
of the four relations more evident.}
\label{fig:sktwo}
\end{figure}
Figure~\ref{fig:sktwo} shows the dynamical SK of our four simulations.
Unfortunately, a dataset like that of \cite{Bigiel08} is unavailable
for this relation; most observations give global estimates of
galaxies. As a consequence, we do not compare this relation with data
in this paper.
Remarkably, the four simulations, which produced different standard
SK relations, now trace a unique, non-linear relation, with a scatter that is even lower
than that of the molecular SK.
To make this agreement more
evident, we decreased the point size of the grid-based relation to let
the radial profiles be more visible.
It is easy to show, using equations~\ref{eq:Pext} and \ref{eq:Q}, that a ``universal'' dynamical SK relation, valid
for all galaxies, can be obtained under the following simple
assumptions: (i) the disc is in vertical hydrostatic equilibrium, so
that pressure is well represented by $P_{\rm ext}$; (ii) the disc is
marginally stable, with a $Q_{\rm tot}$ Toomre parameter equal to 1;
(iii) the velocity dispersion of gas $\sigma_{\rm cold}$ is constant
for all galaxies; (iv) the rotation curve is flat so that $\kappa
\simeq \sqrt{2}/\tau_{\rm orb}$; (v) SFR is a function of pressure.
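Indeed, under hypotheses (i)--(iii), setting $Q_{\rm tot}=1$ in equation~\ref{eq:Q} gives $P_{\rm ext} = \sigma_{\rm cold} \kappa \Sigma_{\rm cold}/2$, and with (iv) this becomes $P_{\rm ext} \simeq (\sigma_{\rm cold}/\sqrt{2})\, \Sigma_{\rm cold}/\tau_{\rm orb}$; if $\sigma_{\rm cold}$ takes the same value in all galaxies, pressure, and through (v) the SFR, is then a function of $\Sigma_{\rm cold}/\tau_{\rm orb}$ alone.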
But we have shown above that the first three conditions do not hold for
our discs, so this simple interpretation cannot be valid here. On the
other hand, we noticed that our simulated discs are rather hot when we
take into account the effective temperature of gas particles, which is
determined by the thermal energy of the hot phase. In the following
we demonstrate that the dynamical SK can be interpreted as the result
of the balance between energy injection and dissipation, without
assuming that SFR is directly influenced by disc rotation.
Energy is continually injected by SN feedback and lost by radiative
cooling and viscous dissipation. The equilibrium among these
processes can be illustrated with the help of a simplified approach that
quantifies the energy made available to gas particles, in
both kinetic and thermal form, after
cooling has radiated away a part of it.
Injection of kinetic and
thermal energy can be expressed as (see also equation~14 of paper
I):
\begin{equation}
\dot{\Sigma}_{\rm inj} = \epsilon v^2_{\rm sn} \Sigma_{\rm sfr}\, ,
\label{eq:heating}
\end{equation}
\noindent
where the constant $v^2_{\rm sn}=(f_{\rm fb,i}+f_{\rm fb,o})\,
10^{51}\ {\rm erg}/M_{\star,SN}$ takes into account the energy made
available for feedback by {{\sc muppi}} and $\epsilon$ is an efficiency
parameter that quantifies the fraction of injected energy that is {\em
not} radiated away by cooling. This injected energy is then
transformed by expansion into kinetic energy and is later dissipated
by viscosity\footnote{
The fact that in our simulations viscosity is the numerical one
included in our SPH code is at this stage immaterial, as long as this
behaves similarly to dissipation of turbulence that would take place
if the resolution were adequate to describe it.
}; this keeps the disc in a quasi-stationary state. For a disc of
height $H_{\rm eff}$, disc energy will likely be dissipated over one
disc-height crossing time, $t_{\rm cross} = H_{\rm eff}/\sigma_{\rm
cold}$. Notably, this is the same rate at which turbulence is
dissipated, as long as the driving length of turbulence is the size of
the typical SN-driven bubble \citep[e.g.][]{MacLow03}, which must be
$\sim H_{\rm eff}$ if bubbles die by blowing out of the disc
\citep[see also][]{Monaco04a}.
Let us define the surface density of energy in the disc as:
\begin{equation}
\Sigma_{\rm E} = \frac{3}{2}\Sigma_{\rm cold}\sigma_{\rm cold}^2\, ,
\label{eq:Se}\end{equation}
\noindent
where we assume equipartition between the three translational degrees
of freedom. Then the energy dissipation rate is:
\begin{equation} \dot{\Sigma}_{\rm disp} = \frac{\Sigma_{\rm E}}{t_{\rm cross}}
= \frac{3\Sigma_{\rm cold} \sigma_{\rm cold}^2 }{2t_{\rm
cross}}\, .
\label{eq:diss}
\end{equation}
In the definition of effective disc height $H_{\rm eff} = \Sigma_{\rm
cold}/2\rho_{\rm cold}$ the midplane density can be substituted with
pressure using the equation of state $P=\rho_{\rm cold}\sigma_{\rm
cold}^2$. Using equation~\ref{eq:Pfit} for the pressure, it is easy
to show that:
\begin{equation}
H_{\rm eff} = 3 \frac{\sigma_{\rm cold}}{\kappa}\, .
\label{eq:Heff} \end{equation}
\noindent
It then follows that:
\begin{equation}
t_{\rm cross} = \frac{3}{\kappa} \simeq \frac{3}{\sqrt{2}} \tau_{\rm orb}\, .
\label{eq:tcross}
\end{equation}
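Explicitly: combining $P=\rho_{\rm cold}\sigma_{\rm cold}^2$ with equation~\ref{eq:Pfit} gives $\rho_{\rm cold}=\Sigma_{\rm cold}\kappa/6\sigma_{\rm cold}$, hence $H_{\rm eff}=\Sigma_{\rm cold}/2\rho_{\rm cold}=3\sigma_{\rm cold}/\kappa$ and $t_{\rm cross}=H_{\rm eff}/\sigma_{\rm cold}=3/\kappa$; the last approximate equality in equation~\ref{eq:tcross} then uses $\kappa\simeq\sqrt{2}/\tau_{\rm orb}$, valid for a flat rotation curve.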
In a stationary system, the effective energy injection will balance
dissipation, so that $\dot{\Sigma}_{\rm inj} = \dot{\Sigma}_{\rm
disp}$. This implies that:
\begin{equation}
\epsilon v^2_{\rm sn} \times \Sigma_{\rm sfr}
=\frac{1}{2} \Sigma_{\rm cold} \sigma_{\rm cold}^2 \kappa\, ,
\label{eq:epsilon}
\end{equation}
\noindent
or, using the approximated value of $\kappa\simeq \sqrt{2}/\tau_{\rm orb}$:
\begin{equation}
\Sigma_{\rm sfr}
\simeq \frac{\Sigma_{\rm cold}}{ \tau_{\rm orb}} \times \frac{\sqrt{2}}{2}\,\frac{\sigma_{\rm cold}^2}{\epsilon v^2_{\rm sn}}\, .
\label{eq:epsilon2}
\end{equation}
\noindent
This relation allows us to recast the interpretation of a unique
dynamical SK relation in terms of energy injection: it will hold as
long as, at fixed $\Sigma_{\rm sfr}$,
the post-cooling efficiency of energy injection $\epsilon$
scales with the square of the gas vertical velocity dispersion,
i.e. the gas specific energy. This can be seen in the other
direction: the specific energy of the gas $\sigma_{\rm cold}^2$ must
scale with the fraction of specific thermal energy that has time to
perform $PdV$ work before cooling radiates it away, $\epsilon v_{\rm
sn}^2$. This is a property that arises naturally from our
sub-resolution feedback model, at least for values of free
parameters that do not widely differ from the fiducial ones selected
in paper I; as in the case of pressure, we tested the validity of
this result with a large suite of {{\sc muppi}} simulations on the same
sets of initial conditions, and many combinations of parameters and
physical assumptions. The result of an environment-dependent standard
SK and a unique dynamical SK holds in all cases where a disc
efficiently heated by feedback and in a stationary state is
obtained.
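As a simple numerical illustration of equation~\ref{eq:epsilon2} (an example of ours; the reference values of $\sigma_{\rm cold}$ and $\epsilon$ below are arbitrary), the quantities involved can be evaluated as follows:
\begin{verbatim}
M_SUN = 1.989e33                 # g
E_SN  = 1.0e51                   # erg per SN
F_FB  = 0.02 + 0.3               # f_fb,i + f_fb,o
M_STAR_SN = 120.0 * M_SUN        # stellar mass formed per SN

v_sn2 = F_FB * E_SN / M_STAR_SN  # [cm^2/s^2]
print((v_sn2 ** 0.5) / 1.0e5)    # v_sn ~ 366 km/s

sigma = 30.0e5                   # example sigma_cold = 30 km/s
eps   = 0.01                     # example post-cooling efficiency
coeff = (2.0 ** 0.5 / 2.0) * sigma ** 2 / (eps * v_sn2)
print(coeff)                     # dimensionless factor ~ 0.5
\end{verbatim}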
According to our interpretation, the fact that our four simulations
all lie on the same dynamical SK is {\em no} evidence that SFR is
directly determined by galaxy rotation, at variance with the
motivation that has been used to introduce this relation. To make
this clearer, the dissipation rate of energy given in
equation~\ref{eq:diss} can be rewritten, using $H_{\rm eff}=\Sigma_{\rm cold}/2\rho_{\rm cold}$ and $P=\rho_{\rm cold}\sigma_{\rm cold}^2$, as
$\dot{\Sigma}_{\rm disp}=3P\sigma_{\rm cold}$. Pressure in our
simulations is well reproduced by equation~\ref{eq:Pfit}, which depends
on the kinematical state of the disc through $Q_{\rm tot}$, and hence
on $\kappa$. As a result, the dynamical stationarity of the disc
induces a dependence of SFR on the epicyclic frequency, i.e. on the
orbital time. So, the fact that our four simulations all lie on the
same dynamical SK is telling us that they are stationary discs kept
out of vertical hydrostatic equilibrium by SN feedback, and not that
their SFR is directly determined by rotation or shearing; it is only
indirectly influenced by it through the dependence of pressure on
$Q_{\rm tot}$. Indeed we noticed that some of our simulated discs
stay out of this dynamical SK, and this happens when, e.g., a bar
instability takes place. In this case the condition of stationarity is
violated until a new equilibrium configuration is reached. So these
discrepancies confirm our interpretation that simulations trace a
unique dynamical SK as long as they are in a stationary condition.
When run with the effective model of \cite{Springel03}, the three
galaxies (MW, DW, SH) show similar but not unique dynamical SK, the
differences being comparable to those in the standard SK
(section~\ref{section:vertical}). This is a result of
inefficient thermal feedback in this model: as long as pressure is
not well fit by equation~\ref{eq:Pfit} and the injection of energy is marginal,
we do not expect galaxies to lie on a unique dynamical SK.
\section{Discussion}
\label{section:discussion}
One result of this paper is that our pressure-based model of
star formation produces a standard SK with a knee at a gas surface
density that depends on gas fraction in a way that resembles the
metallicity dependence proposed by \cite{Krumholz09b} and
\cite{Gnedin10}. If the molecular fraction is regulated by pressure,
gas-rich galaxies become dominated by molecular gas at higher gas
surface densities. It is easy to show that the gas surface density at
which $\Sigma_{\rm mol}$ equates $\Sigma_{HI}$, which we call
$\Sigma_{\rm eq}$, scales with the gas fraction $\mu=\Sigma_{\rm
cold}/(\Sigma_{\rm cold} + \Sigma_\star)$, as:
\begin{equation}
\Sigma_{\rm eq} = \left[\frac{2P_0}{\pi G}\, \frac{\mu}{\mu + R (1-\mu)}\right]^{1/2}\, .
\label{eq:Seq}
\end{equation}
\noindent
Here $P_0$ is the normalization of equation~\ref{eq:fmol}. In most
chemical evolution models, $\mu$ is related to metallicity; for
instance, in the simple closed-box model, $Z=Y\ln (1/\mu)$, where $Y$ is
the metal yield per stellar generation. Then, a pressure-based
molecular fraction can mimic a metallicity dependence. But the
maximum value of $\Sigma_{\rm eq}$ is reached for $\mu=1$, in which
case $\Sigma_{\rm eq, max} = \sqrt{2P_0/\pi G} \simeq 34\ {\rm
M}_\odot\ {\rm pc}^{-2}$; this is the $\Sigma_{\rm eq}$ value of the
SH simulation, that has a 99 per cent gas fraction. According to
\cite{Bolatto09}, $\Sigma_{\rm eq}$ for the Large Magellanic Cloud
is close to 34 {M$_\odot$ pc$^{-2}$},
while for
the Small Magellanic Cloud it is $\sim$100 {M$_\odot$ pc$^{-2}$}. This is
confirmed by a quick comparison with the model of \cite{Krumholz09b}: an
increase in $\Sigma_{\rm eq}$ by a factor of 3, as large as the
difference between MW and SH, is obtained by a decrease of metallicity by
a similar factor, which is relatively modest. The same conclusion can
be reached by considering the saturation $\Sigma_{\rm HI}$ value, which
reaches $\sim20$ {M$_\odot$ pc$^{-2}$} for our SH disc while, for instance,
\cite{Fumagalli10} report higher values for a few low metallicity
dwarf galaxies. We conclude that a pressure-driven molecular fraction
cannot explain the whole observed range of variation of $\Sigma_{\rm
eq}$. Because the motivation for molecular fraction being directly
modulated by metallicity is very strong \citep[but see the comments
by][]{MacLow10}, it is entirely conceivable to construct mixed scenarios
where equation~\ref{eq:fmol} is valid and the normalization $P_0$
depends on metallicity.
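For reference, equation~\ref{eq:Seq} is obtained by setting $f_{\rm mol}=1/2$, i.e. $P=P_0$, in equation~\ref{eq:fmol}, and inserting $\Sigma_\star = \Sigma_{\rm cold}(1-\mu)/\mu$ into equation~\ref{eq:Pext}:
$$P_0 = \frac{\pi}{2} G\, \Sigma_{\rm eq}^2\, \frac{\mu + R(1-\mu)}{\mu}\, ,$$
\noindent
which, solved for $\Sigma_{\rm eq}$, gives the quoted scaling; for $\mu=1$ this reduces to the maximum value $\Sigma_{\rm eq,max}=\sqrt{2P_0/\pi G}$ used above.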
The search for a physical interpretation of our results led us to an
expression for gas pressure in star-forming discs given by
equation~\ref{eq:Pfit}. Discs subject to this pressure have some
interesting properties: for instance, neither $\Sigma_\star$ nor
$\sigma_\star$ appear in the expression of pressure (though the
gravity of stars enters in determining $\sigma_{\rm cold}$), gas
effective height is related to the ratio of gas velocity dispersion
and epicyclic frequency (equation~\ref{eq:Heff}) and the sound
crossing time of effective height is simply proportional to the
orbital time (equation~\ref{eq:tcross}). Comparison with data is not
straightforward, as $\sigma_{\rm cold}$ includes a major contribution
from the sound speed of the hot phase (we give further comments below)
and direct pressure estimates are hard to obtain. Nonetheless, these
predictions can in principle be tested against observations of the
Milky Way and nearby galaxies, so they may constitute a basis for a
theory of the non-equilibrium vertical structure of discs effectively
heated by feedback.
Our simulated discs show total vertical velocity dispersions
(figure~\ref{fig:sigma}) that are well in excess of the $\sim10$
{km s$^{-1}$} value that is usually assumed to hold for discs. However,
$\sigma_{\rm cold}$ in our simulations is dominated by the thermal
sound speed (equation~\ref{eq:scold}) computed at the particle
effective temperature, and this last quantity is determined by the
thermal content of the hot phase. This means that $\sigma_{\rm cold}$
cannot be compared with the velocity dispersion of cold clouds in real
galaxies. At the same time, vertical velocity dispersions of gas
particles (computed without the sound speed) of MW and DW simulations
show values that are in much better agreement with those measured in
THINGS galaxies by \cite{Tamburro09} (see figures 11 and 18 in paper
I). In our simulations, multi-phase particles are composed of two
phases at the sub-grid level, but are seen as a single entity by the
SPH code. This means that, as long as a multi-phase cycle goes on,
the hot phase is not free to leave the disc, while the cold phase is
pulled by the former. This artificial entrainment results, at the
macroscopic level, in a velocity dispersion of gas particles that is
realistic and, from a detailed comparison of MW and MW\_HR, rather
stable with resolution in that part of the disc that is well resolved
in both simulations. This means that our effective model is correctly
producing a gas disc that is thermally warm but kinematically colder.
It must be noticed that the hot coronae we produce around the discs
have temperatures $\sim3\times10^6$ K and densities $\sim10^{-2}$
cm$^{-3}$, so their presence is likely ruled out by X-ray observations
\citep[e.g.][]{Crain10}. However, insertion of metal cooling would
change this energy balance in favour of kinetic energy, so we expect
this hot corona to be less prominent when chemical evolution is
properly taken into account.
Our results thus suggest that a complete modeling of a disc heated by
feedback must fully take into account the multi-phase nature of the
ISM, where the $\sim$kpc-height corona of warm/hot gas surrounding a
spiral galaxy may have an important dynamical role. A step forward in
the modeling of a spiral disc subject to continuous production and
dissipation of energy was recently taken by \cite{Elmegreen11}, who
addressed the stability of a disc where energy is continually injected
and dissipated over some multiple of the crossing time of turbulence.
However, in that paper only radial and tangential perturbations were
considered and vertical equilibrium was assumed.
The results presented in this paper rely on how well the vertical
structure of the disc is resolved in simulations. In the MW and DW
cases effective disc heights are of the same order as the
gravitational softening (which was kept similar to the one
used in cosmological simulations with the same mass resolution), so
numerical convergence of the results should be demonstrated. We
showed results for the MW\_HR simulation, and found no clear
dependence on resolution in any of our results. Moreover, the light
disc forming out of the SH simulation has an effective height of
$\sim1$ kpc, with a gravitational softening of only 42 pc, so in this
case the vertical structure is well resolved. We conclude that,
although the vertical structure of the discs is barely resolved in some
of our simulations, the results should not be strongly affected by
resolution.
Due to the difficulty of performing measurements of gas pressure,
observers typically use equation~\ref{eq:Pext} to estimate pressure
(often making strong assumptions on velocity dispersions), but its
validity has rarely been tested. \cite{Koyama09} compared this
formula with simulations of turbulent ISM in a shearing disc. In
their simulations the gas disc is assumed to be much thinner than the
stellar one, heating terms take account of cosmic rays, X-rays and
H$_2$ formation and destruction, while radiative feedback from massive
stars is modeled as localized increases of heating rate, but no SNe
are present. Their spatial resolution is of order of $\sim$1 pc.
They found that midplane and mass-weighted pressures typically differ
by an order of magnitude, and that equation~\ref{eq:Pext} is very
close to the harmonic mean of the two. Interestingly, their midplane
pressure is a factor of three lower than $P_{\rm ext}$, which is what
we find for $Q_{\rm tot}=1$. Our results are not directly comparable to their
simulation, due to the vastly different resolution and to the much
hotter state of our discs (their discs have $\sigma_{\rm cold} \sim5$
{km s$^{-1}$}). They also proposed an improved analytic estimate of the midplane
pressure; we compared it with our $P_{\rm sim}$ and found that it does
not improve much with respect to $P_{\rm ext}$. Our simulations are
instead comparable to those of \cite{Tasker08} and \cite{Joung09},
which have resolutions of 25 and 2 pc respectively, include feedback
from SNe and show
multi-phase structures of the ISM in broad agreement with the
assumptions made to design {{\sc muppi}}. Unfortunately, they do not
explicitly address the question of whether their pressure is well
reproduced by the external one of \cite{Elmegreen89}.
\section{Conclusions}
\label{section:conclusions}
In this paper we have shown how simulations based on the MUlti-Phase
Particle Integrator ({{\sc muppi}}) model,
developed within the {{\sc gadget}3} TreePM+SPH code, are able to give
predictions on the various SK relations discussed in the literature.
{{\sc muppi}} is based on the assumption, suggested by observations of
\cite{Blitz04,Blitz06}, that the molecular fraction is modulated by
pressure. Moreover, it has the interesting feature of making thermal feedback
effective and able to heat discs. So, the interest of this paper does
not lie only in the test of a specific sub-resolution model for star
formation: it also allows us
to understand the testable consequences of a pressure-based
molecular fraction in discs efficiently heated by SN feedback.
Our main conclusions are the following.
(i) A pressure-based molecular fraction produces an environment-dependent standard SK
relation, owing to the fact that the relation
between pressure and gas surface density is modulated by gas fraction.
This variation is very similar to that found between spiral and dwarf
galaxies \citep{Bigiel08,Bigiel10}, and it could be interpreted as a metallicity
dependence, since gas fraction is typically related to metallicity in
most chemical evolution scenarios. However, the variation in the
quantity $\Sigma_{\rm eq}$, the gas density at which the molecular
fraction is 1/2, cannot be larger than a factor of 3 between normal
spirals and gas-dominated discs, and values of $\Sigma_{\rm eq}>34$
{M$_\odot$ pc$^{-2}$} cannot be obtained in this framework.
(ii) We analyzed in detail the vertical structure of our simulated
galaxies, and found that hydrodynamical pressure is not well recovered by the
vertical hydrostatic equilibrium value of \cite{Elmegreen89}, because
kinematically hotter discs show higher pressure. A better fit is
given by:
$$P_{\rm fit} = \frac{1}{6}\Sigma_{\rm cold} \sigma_{\rm cold} \kappa $$
\noindent
(equation~\ref{eq:Pfit}), that we interpret as the pressure of
a disc with continuous energy injection. This expression allows us to
connect the effective disc height with the gas velocity dispersion
(computed including kinetic and thermal energies) as $H_{\rm eff} = 3 \sigma_{\rm cold}/\kappa$
(equation~\ref{eq:Heff}).
(iii) Quite interestingly, our four simulated galaxies lie on the
same (non-linear) dynamical SK relation, independently of their gas fraction.
This is not a straightforward consequence of our assumptions, and
was not expected. It is worth recalling that a similar
phenomenology was found by \cite{Daddi10b} in the very different
context of $z\sim2$ star-forming galaxies. We interpret this result
as a manifestation of balance between energy injection from SNe and
energy dissipation. We have shown that, under the hypothesis that
gas energy is dissipated both by cooling and by viscosity, and that
the latter works on the timescale of one sound crossing time of the
disc effective height, we can obtain for stationary discs a unique
dynamical SK if the efficiency of energy injection after cooling
scales with gas specific energy. This is found to result from
energy balance in our multi-phase particles under a wide range of
cases.
(iv) Other results are of some interest. The model reproduces well
the standard, molecular and HI SK relations, with a tight
molecular SK that is a straightforward consequence of the model
assumptions. This molecular SK has a slope of $\sim1.4$,
marginally steeper than the 1.2 value found by \cite{Bigiel08}
but in agreement with \cite{Liu11}. The
scatter in the simulated relations is small, and this may hint that most
scatter is due to observational errors, if not to putting together
galaxies that lie on slightly different relations. However, our
sub-resolution model gives by construction the average properties of
the ISM on scales that are similar to the 750 pc scale used to bin the
data, and this may be the reason for the low scatter.
These simulations show that energy injection by SNe is fundamental in
determining the structure of star-forming discs, and that the
warm/hot phases created by stellar feedback may have an important
role in disc dynamics. Future observations will need to address
the issue of directly determining gas pressure in order to test
whether the usually assumed formula of \cite{Elmegreen89} or some
different estimates, like that provided by equation~\ref{eq:Pfit},
apply.
\section*{Acknowledgements}
We thank Frank Bigiel for providing his data on the SK relation of
spiral galaxies. Initial conditions for the simulations were kindly
provided by S. Callegari and L. Mayer (MW, MW\_HR, DW) and V. Springel
(SH). Simulations were run at ``Centro Interuniversitario del
Nord-Est per il Calcolo Elettronico'' (CINECA, Bologna), with CPU time
assigned under an INAF/CINECA grant and under an agreement between
CINECA and University of Trieste, and at CASPUR, with CPU time
assigned with the ``Standard HPC grant 2009'' call. We thank Anna
Curir, Bruce Elmegreen and Samuel Boissier for discussions. We
acknowledge partial support by the European Commission's FP7 Marie
Curie Initial Training Network CosmoComp (PITN-GA-2009-238356) and by
grants ASI-COFIS, PRIN-MIUR 2007, PD51-INFN and PRIN-INAF 2009, titled
``Towards an Italian network for computational cosmology''.
K.D. acknowledges support by the DFG Priority Programme 1177 and
additional support by the DFG Cluster of Excellence ``Origin and
Structure of the Universe''.
\bibliographystyle{mn2e}
\subsection{Technical Lemmas}
\indent We first show an equality of information densities between the nonfeedback channel $\mathcal{F}^n\rightarrow \mathcal{Y}^n$ and the original channel $\mathcal{X}^n\rightarrow \mathcal{Y}^n$.
\begin{lemma}
\begin{equation*}
i(F^n;Y^n)=i^R(X^n(F^n)\rightarrow Y^n)
\end{equation*}
where $i^R(X^n(F^n)\rightarrow Y^n)$ is defined as
\begin{equation*}
i^R(X^n(F^n)\rightarrow Y^n)=i(X^n\rightarrow Y^n)-i(X^n\rightarrow Y^n||F^n).
\end{equation*}
\label{lemma4_3}
\end{lemma}
\begin{proof}
\begin{equation*}
\begin{split}
i(F^n;Y^n)&=\log \frac{p(F^n,Y^n)}{p(F^n)p(Y^n)}\\
&=\log \frac{\prod_{i=1}^{n}p(F_i,Y_i|F^{i-1},Y^{i-1})}{p(F^n)p(Y^n)}\\
&=\log \frac{\prod_{i=1}^{n}p(Y_i|F^{i},Y^{i-1})p(F_i|F^{i-1},Y^{i-1})}{p(F^n)p(Y^n)}\\
&\stackrel{(a)}{=}\log \frac{\prod_{i=1}^{n}p(Y_i|F^{i},Y^{i-1})p(F_i|F^{i-1})}{p(F^n)p(Y^n)} \\
&=\log \frac{\vec{p}(Y^n|F^n,X^n)}{p(Y^n)}-\log \frac{\vec{p}(Y^n|F^n,X^n)}{\prod_{i=1}^{n}p(Y_i|F^{i},Y^{i-1})}\\
&=\log \frac{\prod_{i=1}^n p(Y_i|F^i,X^i,Y^{i-1})}{p(Y^n)}-\log \frac{\vec{p}(Y^n|F^n,X^n)}{\prod_{i=1}^n p(Y_i|Y^{i-1},F^i)}\\
&\stackrel{(b)}{=}\log \frac{\prod_{i=1}^n p(Y_i|X^i,Y^{i-1})}{p(Y^n)}-\log \frac{\vec{p}(Y^n|F^n,X^n)}{\vec{p}(Y^n|F^n)}\\
&=\log \frac{\vec{p}(Y^n|X^n)}{p(Y^n)}-\log \frac{\vec{p}(Y^n|F^n,X^n)}{\vec{p}(Y^n|F^n)}\\
&=i(X^n\rightarrow Y^n)-i(X^n\rightarrow Y^n||F^n) \\
&=i^R(X^n(F^n)\rightarrow Y^n)\\
\end{split}
\end{equation*}
where (a) follows from the fact that no feedback exists from $\mathcal{Y}$ to $\mathcal{F}$. Line (b) follows from the Markov chain $F^i - (X^i,Y^{i-1})- Y_i$.
\end{proof}
\indent In the next lemma, we show that there exists a suitable construction of $p(f^n)$ such that the induced channel input distribution equals the original channel input distribution. As we will see, this result allows us to work with channel input distributions instead of code-function distributions.
\begin{lemma}
Given a channel $\lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^n$, a feedback link $\lbrace p(z_i|y^{i},z^{i-1})\rbrace_{i=1}^n$, a channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$ and a sequence of code-function distributions $\lbrace p(f_i|f^{i-1})\rbrace_{i=1}^n$, the induced channel input distribution $\lbrace p_{ind}(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$ (induced by $\lbrace p(f_i|f^{i-1})\rbrace_{i=1}^n$) equals the original channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$ if and only if the sequence of code-function distributions $\lbrace p(f_i|f^{i-1})\rbrace_{i=1}^n$ is \textit{good with respect to} $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$. One choice of such a sequence of code-function distributions is as follows,
\begin{equation}
p(f_i|f^{i-1})=\prod_{z^{i-1}}p(f_i(z^{i-1})|f^{i-1}(z^{i-2}),z^{i-1}).
\label{equa4_6}
\end{equation}
\label{lemma4_2}
\end{lemma}
\indent We refer the readers to Definition $5.1$ and Lemmas $5.1$ and $5.4$ in \cite{Tati09} for the concept ``\textit{good with respect to}'' and the proof of the above lemma. According to Lemma \ref{lemma4_2}, it is straightforward to obtain the following result, which plays an essential role in the channel coding theorem.
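\indent To make the code-function formalism concrete, the following toy enumeration (a hypothetical binary example with $n=2$, added here purely for illustration) lists all deterministic code-functions: $f_1$ is a constant channel input $x_1$, and $f_2$ maps the single feedback symbol $z_1\in\lbrace 0,1\rbrace$ to a channel input $x_2$.
\begin{verbatim}
from itertools import product

# Toy enumeration of binary code-functions for n = 2 (illustration
# only): f_1 is a constant input x_1; f_2 maps z_1 in {0,1} to x_2.
F1 = [0, 1]
F2 = [dict(zip((0, 1), vals)) for vals in product((0, 1), repeat=2)]
code_functions = [(f1, f2) for f1 in F1 for f2 in F2]

print(len(code_functions))       # 2 * 2^2 = 8 code-functions in total
for f1, f2 in code_functions:
    print("x1 =", f1, "; x2(z1=0) =", f2[0], "; x2(z1=1) =", f2[1])
\end{verbatim}
A distribution over these eight objects, chosen according to equation (\ref{equa4_6}), then induces the desired channel input distribution.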
\begin{lemma}
For channels with noisy feedback,
\begin{equation*}
\begin{split}
&p(x^n,y^n,f^n)\\
=&\prod_{i=1}^{n} \prod_{z^{i-1}}\underbrace{p(f_i(z^{i-1})|f^{i-1}(z^{i-2}),z^{i-1})}_{\text{Encoding}}\sum_{z^{n}\in \lbrace\mathcal{Z}^{n}:x^n=f^n(z^{n-1})\rbrace}\prod_{i=1}^{n}\underbrace{p(z_i|y^i,z^{i-1})}_{\text{Feedback link}} \underbrace{p(y_i|f^i(z^{i-1}),y^{i-1})}_{\text{Channel}}\\
\end{split}
\end{equation*}
\label{lemma4_6}
\end{lemma}
The proof is given in the Appendix. This lemma implies that $\underline{I}^R(X(F)\rightarrow Y)$ depends only on the channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$.
\subsection{Channel Coding Theorem}
\indent Now we show a general channel coding theorem in terms of the residual directed information.
\begin{theorem}(\textit{Channel Coding Theorem})
For channels with noisy feedback,
\begin{equation}
C_{FB}^{noise}=\sup_{X}\underline{I}^R(X(F)\rightarrow Y)
\label{equ4_10}
\end{equation}
where $\sup_{X}$ means that the supremum is taken over all possible channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$.
\label{thm4_2}
\end{theorem}
\indent The proof follows along the lines of the proof of Theorem $4$ in \cite{Verdu94} and hence is presented in the Appendix. Theorem \ref{thm4_2} indicates that, besides capturing the effective information flow of channels with noisy feedback, the residual directed information is also beneficial for characterizing the capacity. Although formula (\ref{equ4_10}) may not be the only or the simplest characterization of the noisy feedback capacity, it provides benefits in many aspects. We present two of them as follows.
\begin{enumerate}
\item Measurements of Information Flows: Let $p^*$ be the optimal solution of formula (\ref{equ4_10}). Then we obtain that, when the channel is used at capacity, the total transmission rate in the forward channel is in fact $\underline{I}(X\rightarrow Y)|_{p^*}$\footnote{$\underline{I}(X\rightarrow Y)|_{p^*}$ denotes that the value is evaluated at the channel input distributions $p^*$.} instead of $C_{FB}^{noise}$, and the difference between them (i.e. the redundant transmission rate) is $\underline{I}(X\rightarrow Y|F)|_{p^*}$. Such numerical knowledge might be crucial in system design and evaluation.
\item Induced Computable Bounds: Let $q^*=\arg\sup_{X}\underline{I}(X\rightarrow Y)$, where the supremum is taken over all possible channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$. Since the code-function $F$ is not involved at this point, the computational complexity is significantly reduced. Based on Theorem \ref{thm4_2}, it is straightforward to obtain $\underline{I}(X\rightarrow Y)|_{q^*}$ and $\underline{I}^R(X(F)\rightarrow Y)|_{q^*}$ as upper\footnote{Note that $\underline{I}(X\rightarrow Y)|_{q^*}=\sup_{\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty}\underline{I}(X\rightarrow Y)\leq C_{FB}=\sup_{\lbrace p(x_i|x^{i-1},y^{i-1})\rbrace_{i=1}^\infty}\underline{I}(X\rightarrow Y)$ where $C_{FB}$ is the corresponding perfect feedback capacity. Therefore this upper bound is in general better than $C_{FB}$.} and lower bounds on the capacity, respectively. Further, the gap between the bounds is $\underline{I}(X\rightarrow Y|F)|_{q^*}$, which provides an evaluation of their tightness.
\end{enumerate}
\subsection{Computable Bounds on the Capacity}
\indent As shown above, the capacity characterization in Theorem \ref{thm4_2} is not computable in general due to the probabilistic limit and the code-functions. This motivates us to explore conditions under which the characterization can be simplified, or to look at computable bounds instead. Toward this end, we first introduce a strong converse theorem under which the ``probabilistic limit'' can be replaced by the ``normal limit''. We then characterize a pair of upper and lower bounds which are much easier to compute and are tight in certain practical situations.
\begin{definition}(\textit{Strong Converse})
A channel with noisy feedback capacity $C_{FB}^{noise}$ has a strong converse if for any $R>C_{FB}^{noise}$, every sequence of channel codes $\lbrace(n,M_n,\epsilon_n)\rbrace_{n=1}^{\infty}$ with
\begin{equation*}
\liminf_{n\rightarrow \infty}\frac{1}{n}\log M_n\geq R
\end{equation*}
satisfies $\lim_{n\rightarrow \infty}\epsilon_n =1$.
\end{definition}
\begin{theorem}(\textit{Strong Converse Theorem})
A channel with noisy feedback capacity $C_{FB}^{noise}$ satisfies the strong converse property if and only if
\begin{equation}
\sup_{X}\underline{I}^R(X(F)\rightarrow Y)=\sup_{X}\overline{I}^R(X(F)\rightarrow Y)\footnote{This condition can be alternatively expressed as $\sup_{X}\underline{I}(F;Y)=\sup_{X}\overline{I}(F;Y)$. Since the difference in computational complexity between the mutual information and the residual directed information has not been established, either condition may be checked. Note that how to check the strong converse property is beyond the scope of this paper.}
\label{equ4_3}
\end{equation}
Furthermore, if the strong converse property holds, we have
\begin{equation*}
C_{FB}^{noise}=\sup_{X}\lim_{n\rightarrow \infty}\frac{1}{n}I^R(X^n(F^n)\rightarrow Y^n).
\label{thm4_5}
\end{equation*}
\end{theorem}
\indent The proof directly follows from Chapter $3.5$ in \cite{bookhan03} by replacing $i(F^n;Y^n)$ with $i^R(X^n(F^n)\rightarrow Y^n)$. This theorem conveys an important message: for channels satisfying the strong converse property, we may compute the noisy feedback capacity by taking the normal limit instead of the probabilistic limit. How to further simplify the capacity characterization will be explored in the future.\\
\indent We next propose a computable upper bound on the noisy feedback capacity.
\begin{theorem}(\textit{Upper Bound})\footnote{As we will see from the proof, this upper bound holds for any finite-alphabet channel with or without the strong converse property. }
\begin{equation}
\bar{C}_{FB}^{noise}=\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)
\label{equa4_8}
\end{equation}
where $\bar{C}_{FB}^{noise}$ denotes the upper bound on the capacity and the supremum is taken over all possible channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$.
\label{thm_coding_aditive}
\end{theorem}
\begin{remark}
The computational complexity of formula (\ref{equa4_8}), which does not involve code-functions, is significantly reduced and is similar to that of the directed information. We conjecture that most algorithms for computing the directed information may be adapted to compute formula (\ref{equa4_8}). For example, for finite-state machine channels \cite{Yang05} with noisy feedback, formula (\ref{equa4_8}) may be computable by using a dynamic programming approach along the lines of \cite{Yang05}.
\end{remark}
\indent We need the following lemma before showing the proof of Theorem \ref{thm_coding_aditive}.
\begin{lemma}
\begin{equation*}
I(F^n;Y^n)=I^R(X^n(F^n)\rightarrow Y^n)=I(X^n\rightarrow Y^n||Z^n)-I(F^n;Z^n|Y^n)
\end{equation*}
\label{lemma4_7}
\end{lemma}
\begin{proof}
\begin{equation*}
\begin{split}
&I^R(X^n(F^n)\rightarrow Y^n)\\
\stackrel{(a)}{=}&I(F^n;Y^n)\\
=&I(F^n;(Y^n,Z^n))-I(F^n;Z^n|Y^n) \\
\stackrel{(b)}{=}&I(F^n \rightarrow (Y^n,Z^n))-I(F^n;Z^n|Y^n) \\
=&\sum_{i=1}^n I(F^i;(Y_i,Z_i)|Y^{i-1},Z^{i-1})-I(F^n;Z^n|Y^n)\\
=&\sum_{i=1}^n H(Y_i,Z_i|Y^{i-1},Z^{i-1})-H(Y_i,Z_i|Y^{i-1},Z^{i-1},F^i)-I(F^n;Z^n|Y^n)\\
=&\sum_{i=1}^n H(Z_i|Y^{i},Z^{i-1})+H(Y_i|Y^{i-1},Z^{i-1})-H(Z_i|Y^{i},Z^{i-1},F^i)-H(Y_i|Y^{i-1},Z^{i-1},F^{i})-I(F^n;Z^n|Y^n) \\
\stackrel{(c)}{=}&\sum_{i=1}^n H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},Z^{i-1},F^{i})-I(F^n;Z^n|Y^n)\\
\stackrel{(d)}{=}&\sum_{i=1}^n H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},X^{i},Z^{i-1},F^{i})-I(F^n;Z^n|Y^n)\\
\stackrel{(e)}{=}&\sum_{i=1}^n H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},X^{i},Z^{i-1})-I(F^n;Z^n|Y^n)\\
=&\sum_{i=1}^n I(X^i;Y_i|Y^{i-1},Z^{i-1})-I(F^n;Z^n|Y^n)\\
=&I(X^n\rightarrow Y^n||Z^{n})-I(F^n;Z^n|Y^n)\\
\end{split}
\end{equation*}
where (a) follows from Lemma \ref{lemma4_3}. Line (b) follows from the fact that there exists no feedback from $(Y^n,Z^n)$ to $F^n$ and thus the mutual information and directed information coincide. Line (c) follows from the fact that $H(Z_i|Y^{i},Z^{i-1})=H(Z_i|Y^{i},Z^{i-1},F^i)$ since $F^i-(Y^{i},Z^{i-1})-Z_i$ forms a Markov chain. Line (d) follows from the fact that $X^i$ can be determined by $F^i$ and the outputs of the feedback link $Z^{i-1}$. Line (e) follows from the Markov chain $F^{i}-(Y^{i-1},X^{i},Z^{i-1})-Y_i$.
\end{proof}
\indent Now we present the proof of Theorem \ref{thm_coding_aditive} as follows.
\begin{proof}
\indent Recalling Lemma A1 in \cite{Han93}, we have $\underline{I}(F;Y)\leq \liminf_{n\rightarrow \infty}\frac{1}{n}I(F^n;Y^n)$ for any sequence of joint distributions. That is, $\underline{I}^R(X(F)\rightarrow Y)\leq \liminf_{n\rightarrow \infty}\frac{1}{n}I^R(X^n(F^n)\rightarrow Y^n)$. Then by Lemma \ref{lemma4_7},
\begin{equation}
\begin{split}
C_{FB}^{noise}\leq &\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I^R(X^n(F^n)\rightarrow Y^n)\\
=&\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-I(F^n;Z^n|Y^n)) \\
\leq &\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)\\
\end{split}
\label{equa4_7}
\end{equation}
\end{proof}
\begin{corollary}
Assume an independent additive noise feedback link (Fig.\ref{figure5}). Then
\begin{equation*}
\bar{C}_{FB}^{noise}=\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n|V^n)
\end{equation*}
where $\sup_{X}$ means that the supremum is taken over all possible channel input distributions $\lbrace p(x_i|x^{i-1},y^{i-1}+v^{i-1})\rbrace_{i=1}^\infty$.
\label{coro_conditional_directed_info}
\end{corollary}
\begin{proof}
\begin{equation*}
\begin{split}
I(X^n\rightarrow Y^n||Z^n)
=&\sum_{i=1}^n I(X^i;Y_i|Y^{i-1},Z^{i-1})\\
=&\sum_{i=1}^n I(X^i;Y_i|Y^{i-1},V^{i-1})\\
=&I(X^n\rightarrow Y^n||V^n)\\
\stackrel{(a)}{=}&I(X^n\rightarrow Y^n|V^n)\\
\end{split}
\end{equation*}
where (a) follows from Remark \ref{remark_pre_01}. The proof is complete.
\end{proof}
\indent Next, we show a lower bound on the capacity for strong converse channels with additive noise feedback. In fact, any particular coding scheme induces a lower bound on the noisy feedback capacity. However, the lower bound proposed in the following has nice features and its own advantages.
\begin{theorem}(\textit{Lower Bound})
Assume that a channel with an independent additive noise feedback (Fig.\ref{figure5}) satisfies the strong converse property. A lower bound on the noisy feedback capacity is given by
\begin{equation*}
\underline{C}_{FB}^{noise}=\bar{C}_{FB}^{noise}-\bar{h}(V)
\end{equation*}
where
\begin{equation*}
\bar{h}(V)=\limsup_{n\rightarrow \infty}\frac{1}{n}H(V^n).
\end{equation*}
\label{thm_lowerbound}
\end{theorem}
\begin{proof}
\indent We need to show that, for any $\delta>0$, there exists a sequence of $(n,M,\epsilon_n)$ channel codes ($\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$) with transmission rate
\begin{equation*}
\begin{split}
R=&\bar{C}_{FB}^{noise}-\bar{h}(V)-\delta\\
=&\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n|V^n)-\bar{h}(V)-\delta.\\
\end{split}
\end{equation*}
\indent Now, for any fixed $\delta>0$, we take $\xi$ satisfying $0<\xi<\delta$ and let $X_\xi$ be a sequence of channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$ satisfying
\begin{equation}
\left(\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)\right)\bigg|_{X=X_\xi}\geq\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)-\xi
\label{equ4_4}
\end{equation}
where $\left(\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)\right)\vert_{X=X_\xi}$ denotes that $\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)$ is evaluated at $X=X_\xi$. According to the definition of supremum, the existence of $X_\xi$ is guaranteed. Since for strong converse channels we have
\begin{equation*}
C_{FB}^{noise}=\sup_{X}\lim_{n\rightarrow \infty}\frac{1}{n}I^R(X^n(F^n)\rightarrow Y^n),
\end{equation*}
we know that, for any $\delta>0$, there exists a sequence of $(n,M,\epsilon_n)$ channel codes ($\epsilon_n\rightarrow 0 $ as $ n\rightarrow \infty$) with transmission rate
\begin{equation*}
R=\left(\lim_{n\rightarrow \infty}\frac{1}{n}I^R(X^n(F^n)\rightarrow Y^n)\right)\bigg|_{X=X_\xi}-(\delta-\xi).
\end{equation*}
By Lemma \ref{lemma4_7},
\begin{equation*}
\begin{split}
R=&\left(\lim_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-I(F^n;Z^n|Y^n))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
=&\left(\lim_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-H(Z^n|Y^n)+H(Z^n|Y^n,F^n))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
\geq &\left(\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-H(Z^n|Y^n))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
=&\left(\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-\sum_{i=1}^n H(Z_i|Z^{i-1},Y^n))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
\geq &\left(\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-\sum_{i=1}^n H(Z_i|Z^{i-1},Y^i))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
\stackrel{(a)}{=}&\left(\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-\sum_{i=1}^n H(V_i|V^{i-1}))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
\geq &\left(\liminf_{n\rightarrow \infty}\frac{1}{n}(I(X^n\rightarrow Y^n||Z^n)-H(V^n))\right)\bigg|_{X=X_\xi}-(\delta-\xi) \\
\geq &\left(\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)\right)\bigg|_{X=X_\xi}+\liminf_{n\rightarrow \infty}-\frac{1}{n}H(V^n)-(\delta-\xi) \\
=&\left(\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)\right)\bigg|_{X=X_\xi}-\limsup_{n\rightarrow \infty}\frac{1}{n}H(V^n)-(\delta-\xi) \\
\stackrel{(b)}{\geq}&\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)-\xi-\bar{h}(V)-(\delta-\xi) \\
=&\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n||Z^n)-\bar{h}(V)-\delta\\
\stackrel{(c)}{=}&\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n|V^n)-\bar{h}(V)-\delta\\
\end{split}
\end{equation*}
where (a) follows from the fact that $Z_i=Y_i+V_i$ and the Markov chain $(Z^{i-1},Y^{i})-V^{i-1}-V_i$. Line (b) follows from equation (\ref{equ4_4}). Line (c) follows from Corollary \ref{coro_conditional_directed_info}.\\
\indent Since $\delta$ can be arbitrarily small, the proof is complete.
\end{proof}
\begin{remark}
This theorem reveals an important message: the gap between the proposed upper and lower bounds depends only on the feedback additive noise $V$ (i.e. it is independent of the forward channel). Further, if the entropy rate of the noise $V$ goes to zero\footnote{In many practical situations, the entropy rate of the feedback noise is small. For example, if the feedback link only suffers intersymbol interference as illustrated in Chapter $4$ of \cite{bookGallager68}, the entropy rate turns out to be approximately $0.0808$. Further, if the cardinality of $V^\infty$ is finite (yet the feedback is still noisy), the entropy rate is clearly zero.}, the proposed upper and lower bounds coincide and thus the capacity is known.
\end{remark}
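\indent As a small numerical illustration of this gap (a minimal sketch with illustrative noise parameters, not values taken from the references), the snippet below evaluates the entropy rate of an i.i.d. binary feedback noise with $Pr(V_i=1)=q$; for instance, $q=0.01$ gives approximately $0.0808$ bits per symbol.
\begin{verbatim}
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Gap between the proposed upper and lower bounds when the feedback
# noise V is i.i.d. binary with Pr(V_i = 1) = q (illustrative values).
for q in (0.001, 0.01, 0.1):
    print(f"q = {q}: entropy-rate gap = {H(q):.4f} bits/symbol")
\end{verbatim}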
\indent We end this section by investigating two examples of noisy feedback channels.
\begin{example}
\indent This example shows that, for a DMC with noisy feedback, the upper bound characterized above equals the open-loop capacity. This implies that the upper bound should be tight when the channel ``converges'' to a DMC. Besides, this example verifies the result (i.e. Theorem \ref{Thm_4_1}) in Section IV.\\
\indent Consider a binary symmetric channel (BSC) with a binary symmetric feedback link. Note that this is the simplest model of a noisy feedback channel, yet it captures most features of the general problem. We model both the noisy channel and the feedback link as additive noise channels as follows.
\begin{equation*}
Y_i=X_i+ U_i \quad (\text{mod}\quad 2) \quad \text{and} \quad Z_i=Y_i+ V_i \quad (\text{mod} \quad 2)
\end{equation*}
where we assume that $Pr(U_i=1)=1-Pr(U_i=0)=\alpha$ and $Pr(V_i=1)=1-Pr(V_i=0)=\beta$. It is known that the capacity of this noisy feedback channel equals the nonfeedback capacity $1-H(\alpha)$ where $H(\alpha)=-\alpha\log{\alpha}-(1-\alpha)\log{(1-\alpha)}$. Next, we show that maximizing the conditional directed information in Corollary \ref{coro_conditional_directed_info} provides the noisy feedback capacity. That is,
\begin{equation*}
\sup_{X}\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n|V^n)=1-H(\alpha).
\end{equation*}
\indent This can be done as follows.
\begin{equation*}
\begin{split}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^n\rightarrow Y^n|V^n)=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n I(X^i;Y_i|Y^{i-1},V^{i-1})\\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Y_i|Y^{i-1},V^{i-1})-H(Y_i|X^{i},Y^{i-1},V^{i-1}) \\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Y_i|Y^{i-1},V^{i-1})-H(Y_i|X_{i}) \\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Y_i|Y^{i-1},V^{i-1})-H(U_i) \\
\stackrel{(a)}{\leq} &\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Y_i|Y^{i-1})-H(U_i) \\
\stackrel{(b)}{\leq} &\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Y_i)-H(U_i)\\
\stackrel{(c)}{\leq} &1-H(\alpha)\\
\end{split}
\end{equation*}
where taking equality in $(a)$ implies $\liminf_{n\rightarrow \infty}\frac{1}{n}I(V^{n-1};Y^n)=0$, that is, the capacity-achieving encoder should be non-typical. This verifies Theorem \ref{Thm_4_1} in Section IV. Taking equalities in $(b)$ and $(c)$ implies that the capacity-achieving encoder should produce equal-probability channel outputs (i.e. a uniform distribution). Clearly such an optimal encoder exists, for which all the above equalities hold (e.g. an open-loop encoder producing i.i.d. uniform channel inputs).
\end{example}
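\indent As a numerical sanity check (a minimal Monte-Carlo sketch with illustrative parameter values), the following snippet confirms that a uniform i.i.d. open-loop encoder, which is non-typical since it ignores the feedback, attains $I(X;Y)=1-H(\alpha)$ per channel use on the forward BSC, independently of the feedback crossover probability $\beta$.
\begin{verbatim}
import math, random
from collections import Counter

def H(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Monte-Carlo estimate of I(X;Y) for a BSC(alpha) driven by uniform
# i.i.d. inputs (an open-loop, hence non-typical, encoder); the
# feedback crossover beta plays no role here.  Illustrative values.
alpha, n = 0.1, 10**6
rng = random.Random(0)
joint = Counter()
for _ in range(n):
    x = rng.randrange(2)
    y = x ^ (rng.random() < alpha)        # additive noise U_i
    joint[(x, y)] += 1
px = {a: sum(c for (i, _), c in joint.items() if i == a) / n
      for a in (0, 1)}
py = {b: sum(c for (_, j), c in joint.items() if j == b) / n
      for b in (0, 1)}
I = sum(c / n * math.log2((c / n) / (px[x] * py[y]))
        for (x, y), c in joint.items())
print(f"empirical I(X;Y) = {I:.4f},  1 - H(alpha) = {1 - H(alpha):.4f}")
\end{verbatim}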
\begin{figure}
\begin{center}
\includegraphics[scale=0.75]{sim_fig01_0.1.eps}
\caption{The upper bound on the capacity of a first-order moving average Gaussian channel with AWGN feedback.}
\label{sim_fig01_0.1}
\end{center}
\end{figure}
\begin{example} In this example, we consider a colored Gaussian channel with additive white Gaussian noise feedback and compute the proposed upper bound\footnote{Although the Gaussian channels are not finite-alphabet, the upper bound characterization still holds. The derivation of the upper bound follows exactly the same idea in this paper and can be found in \cite{Chong11_allerton}.}. Specifically, we assume the forward channel and the feedback link as follows.
\begin{equation*}
Y_i=X_i+W_i \quad \text{and} \quad Z_i=Y_i+V_i
\end{equation*}
where $W_i=U_i+0.1U_{i-1}$, $U_i$ is a white Gaussian process with zero mean and unit variance, and $V_i$ is a white Gaussian process with zero mean and variance $\sigma$. We take coding block length $n=30$ and power limit $P=10$ for computing the upper bound; see Fig. \ref{sim_fig01_0.1}. We refer the interested readers to \cite{Chong11_isit_bounds, Chong11_allerton} for the details of the computation and discussions. From the plot of the upper bound, we see that the noisy feedback capacity is very sensitive to the feedback noise, at least for certain Gaussian channels.
\end{example}
\subsection{Discrete Memoryless Channel and Typical Closed-Loop Encoder}
\begin{definition}(\textit{Discrete Memoryless Channel})
A discrete memoryless channel is a discrete channel satisfying
\begin{equation*}
p(y_i|x^i,y^{i-1})=p(y_i|x_i)
\end{equation*}
\end{definition}
\begin{definition}(\textit{Typical Closed-Loop Encoder })
Given a channel $\lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^\infty$, a noisy feedback link $\lbrace p(z_i|y^i,z^{i-1})\rbrace_{i=1}^\infty$, an encoder is defined as a typical closed-loop encoder if it satisfies
\begin{equation*}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)>0.
\end{equation*}
For the additive noise feedback case as shown in Fig.\ref{figure5}, the condition is equivalent to
\begin{equation*}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(V^{n-1}; Y^n)>0.
\end{equation*}
\label{def_typicalencoder}
\end{definition}
\begin{remark}
The equivalence is straightforward to check. That is,
\begin{equation*}
\begin{split}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(Y_i|Y^{i-1})-H(Y_i|Y^{i-1},Z^{i-1})\\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(Y_i|Y^{i-1})-H(Y_i|Y^{i-1},V^{i-1})\\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}I(V^{n-1}\rightarrow Y^n)\\
\stackrel{(a)}{=}&\liminf_{n\rightarrow \infty}\frac{1}{n}I(V^{n-1};Y^n).\\
\end{split}
\end{equation*}
where (a) follows from the fact that there is no feedback from $Y$ to $V$ and thus the mutual information and the directed information coincide.
\end{remark}
\begin{remark}
This definition implies that a typical closed-loop encoder should non-trivially use the feedback information $Z^{n-1}$ to produce the channel inputs $X^{n}$ over time. It is easy to verify that an encoder is non-typical if it discards all feedback information (i.e. an open-loop encoder) or only extracts feedback information at finitely many time instants.
\end{remark}
\begin{remark}
The typical closed-loop encoder is only well-defined under the assumption of typical noisy feedback (Definition \ref{def_typcialnoise}). Otherwise, for any encoder, we have
\begin{equation*}
\begin{split}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n I(Z^{i-1}; Y_i|Y^{i-1})\\
=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Z^{i-1}|Y^{i-1})-H(Z^{i-1}|Y^{i})\\
\leq &\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^n H(Z^{i-1}|Y^{i-1})\\
= &0.
\end{split}
\end{equation*}
\end{remark}
\indent Now, we present the main theorem of this section.
\begin{theorem}
The capacity $C_{FB}^{noise}$ of a discrete memoryless channel with noisy feedback equals the non-feedback capacity $C$. The capacity $C_{FB}^{noise}$ is not achievable by implementing any typical closed-loop encoder; equivalently, any capacity-achieving encoder is non-typical. Furthermore, the rate-loss incurred by implementing a typical closed-loop encoder is lower bounded by $\liminf_{n\rightarrow \infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)$.\footnote{The ``rate-loss'' refers to the gap between the capacity $C$ and the achievable rate $R$. Given a channel $\lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^\infty$ and a noisy feedback link $\lbrace p(z_i|y^i,z^{i-1})\rbrace_{i=1}^\infty$, the value of $I(Z^{n-1}\rightarrow Y^n)$ only depends on the channel input distributions $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$ induced by the implemented encoder.}
\label{Thm_4_1}
\end{theorem}
\begin{remark}
This negative result implies that it is impossible to find a capacity-achieving feedback coding scheme for a DMC with noisy feedback, whereas it is possible in the perfect feedback case (e.g. the Schalkwijk-Kailath scheme). For example, \cite{Martins08} has proposed a linear coding scheme for the AWGN channel with bounded feedback noise and \cite{Chance10} has proposed a concatenated coding scheme for the AWGN channel with noisy feedback. It is easy to check that both of these closed-loop encoders are typical and therefore both coding schemes cannot achieve the capacity unless, as discussed in \cite{Martins08,Chance10}, the feedback additive noise vanishes (i.e. non-typical noisy feedback).
\end{remark}
\begin{remark}
Theorem \ref{Thm_4_1} indicates that noisy feedback is unfavorable in the sense of achievable rate. However, using noisy feedback still provides many benefits, as mentioned in the Introduction. Furthermore, from a control theoretic point of view, (noisy) feedback is necessary for stabilizing unstable plants and achieving certain performance levels. Therefore, a tradeoff must be made when using noisy feedback.
\end{remark}
\indent Before moving to prove the main theorem, we need the following lemma.
\begin{lemma}
For any typical closed-loop encoder,
\begin{equation*}
\liminf_{n\rightarrow \infty}\frac{1}{n}I(X^{n}\rightarrow Y^n|W)>0.
\end{equation*}
\label{lem_dmc}
\end{lemma}
\begin{proof}
For any $1\leq i\leq n$, we have
\begin{equation*}
\begin{split}
I(W;Z_i|Y^i,Z^{i-1})=&H(Z_i|Y^i,Z^{i-1})-H(Z_i|Y^{i},Z^{i-1},W)\\
=&H(Z_i|Y^i,Z^{i-1})-H(Z_i|Y^{i},Z^{i-1})\\
=&0.\\
\end{split}
\end{equation*}
where the second equality follows from the Markov chain $W-(Y^i,Z^{i-1})-Z_i$ (the feedback link depends on the message only through $(Y^i,Z^{i-1})$). Then, using this equality to drop the terms $I(W;Z_i|Y^i,Z^{i-1})$,
\begin{equation*}
\begin{split}
I(W;(Y^n,Z^{n-1}))=&I(W;(Y^n,Z^n))-I(W;Z_n|Y^n,Z^{n-1})\\
=&\sum_{i=1}^{n}I(W;(Y_i,Z_i)|Y^{i-1},Z^{i-1})\\
=&\sum_{i=1}^{n}I(W;Y_i|Y^{i-1},Z^{i-1})+I(W;Z_i|Y^i,Z^{i-1})\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},Z^{i-1},W)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},Z^{i-1},W,X^i)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},X^i)\\
\end{split}
\end{equation*}
where the last two equalities use the fact that $X^i$ is a function of $(W,Z^{i-1})$ and the Markov chain $(W,Z^{i-1})-(X^i,Y^{i-1})-Y_i$. We next investigate another equality.
\begin{equation*}
\begin{split}
&I(X^{n}\rightarrow Y^n)-I(Z^{n-1}\rightarrow Y^n)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-H(Y_i|Y^{i-1},X^{i})-H(Y_i|Y^{i-1})+H(Y_i|Y^{i-1},Z^{i-1})\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1},Z^{i-1})-H(Y_i|Y^{i-1},X^{i})\\
\end{split}
\end{equation*}
Combining the above equalities, we have
\begin{equation*}
\begin{split}
I(Z^{n-1}\rightarrow Y^n)=&I(X^{n}\rightarrow Y^n)-I(W;(Y^n,Z^{n-1}))\\
\stackrel{(a)}{=}&I(W;Y^n)+I(X^n\rightarrow Y^n|W)-I(W;(Y^n,Z^{n-1}))\\
=&I(X^n\rightarrow Y^n|W)-I(W;Z^{n-1}|Y^n)\\
\end{split}
\end{equation*}
where $(a)$ follows from Theorem \ref{thm3_1}. Since $I(X^n\rightarrow Y^n|W)=I(Z^{n-1}\rightarrow Y^n)+I(W;Z^{n-1}|Y^n)\geq I(Z^{n-1}\rightarrow Y^n)$, the definition of a typical closed-loop encoder completes the proof.
\end{proof}
\indent Now we are ready to prove Theorem \ref{Thm_4_1}.
\begin{proof}
\indent Firstly, we prove that
\begin{equation*}
C_{FB}^{noise}=C=\max_{p(x)}I(X;Y)
\end{equation*}
\indent Since a nonfeedback channel code is a special case of a noisy feedback channel code, any rate that can be achieved without feedback can be achieved with noisy feedback. Therefore, we have $C_{FB}^{noise}\geq C$. Given a noisy feedback link, we clearly have $C_{FB}^{noise}\leq C_{FB}$ where $C_{FB}$ is the capacity of channels with perfect feedback. As $C=C_{FB}$ for DMC \cite{shannon56}, we have $C_{FB}^{noise}=C=\max_{p(x)}I(X;Y)$.\\
\indent Next, we show that, for any typical closed-loop encoder, the achievable rates $R$ are strictly less than $C$ and the gap is lower bounded by $\liminf_{n\rightarrow \infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)$. Let $W$ be uniformly distributed over $\lbrace 1,2,\cdots,2^{nR} \rbrace$ and $P_e^{(n)}=Pr(W\neq\hat{W})$ with $P_e^{(n)}\rightarrow 0$ as $n\rightarrow \infty$. Then
\begin{equation*}
\begin{split}
nR&=H(W)\\
&=H(W|\hat{W})+I(W;\hat{W})\\
&\stackrel{(a)}{\leq} 1+P_e^{(n)}nR+I(W;\hat{W})\\
&\stackrel{(b)}{\leq} 1+P_e^{(n)}nR+I(W;Y^n) \\
\end{split}
\end{equation*}
where (a) and (b) follow from Fano's inequality and the data-processing inequality, respectively.\\
\indent Next,
\begin{equation*}
\begin{split}
I(W;Y^n)&=I^R(X^n(W)\rightarrow Y^n)\\
&=I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n|W)\\
&=\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|X^i,Y^{i-1})-I(X^n\rightarrow Y^n|W)\\
&\stackrel{(c)}{=}\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|X_i)-I(X^n\rightarrow Y^n|W)\\
&\stackrel{(d)}{\leq}\sum_{i=1}^{n}H(Y_i)-\sum_{i=1}^{n}H(Y_i|X_i)-I(X^n\rightarrow Y^n|W)\\
&=\sum_{i=1}^{n}I(X_i;Y_i)-I(X^n\rightarrow Y^n|W)\\
&\leq nC-I(X^n\rightarrow Y^n|W)\\
\end{split}
\end{equation*}
where (c) follows from the definition of a DMC and (d) follows from the fact that conditioning reduces entropy.\\
\indent Putting these together, we have
\begin{equation*}
R\leq \frac{1}{n}+P_e^{(n)}R+C-\frac{1}{n}I(X^n\rightarrow Y^n|W)
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}
R&\leq \liminf_{n\rightarrow\infty}\lbrace\frac{1}{n}+P_e^{(n)}R+C-\frac{1}{n}I(X^n\rightarrow Y^n|W)\rbrace\\
&\leq C-\liminf_{n\rightarrow\infty}\frac{1}{n}I(X^n\rightarrow Y^n|W)\\
\end{split}
\end{equation*}
\indent According to the proof of Lemma \ref{lem_dmc}, we have
\begin{equation*}
\begin{split}
R&\leq C-\liminf_{n\rightarrow\infty}\frac{1}{n}(I(Z^{n-1}\rightarrow Y^n)+I(W;Z^{n-1}|Y^n))\\
&\leq C-\liminf_{n\rightarrow\infty}\frac{1}{n}I(Z^{n-1}\rightarrow Y^n)\\
\end{split}
\end{equation*}
\indent By the definition of typical closed-loop encoder, the proof is complete.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.75]{figure7.eps}
\caption{Binary codeword erasure channel/feedback}
\label{figure7}
\end{center}
\end{figure}
\subsection{Example}
\indent We give an example of communication through a DMC with typical noisy feedback, from which we may gain insight on how feedback ``noise'' reduces the effective transmission rate and how signaling helps rebuild the coordination between the transmitter and the receiver. Consider a binary codeword erasure channel (BCEC) with a noisy feedback link as shown in Fig.\ref{figure7}. The channel input is an $m$-bit codeword. This input codeword is reliably transmitted with probability $1-\alpha$ and erased with probability $\alpha$. Similarly, we assume a noisy feedback link with erasure probability $p$. It is obvious that the capacity of this channel is $C_{FB}^{noise}=m(1-\alpha)$. One simple but suboptimal encoding strategy is the following: use the first bit in every $m$-bit codeword as a signaling bit (i.e. $1$ refers to a retransmitted $m$-bit codeword while $0$ refers to a new one). If the output of the feedback link is $e$, the encoder retransmits the previous codeword with signaling bit ``1''; otherwise, it transmits the next codeword with signaling bit ``0''. Under this strategy, the decoder can recover the message with arbitrarily small error thanks to the signaling bit. Next, we analyze the achievable rate of this strategy. Assume that $n$ bits of information need to be transmitted and $n$ is sufficiently large. Then $\alpha n$ bits will be lost and $(1-\alpha)n$ bits will reliably get through. Due to the noisy feedback, the encoder will retransmit $b_1=\alpha n+p(1-\alpha)n$ bits. Similarly, $\alpha b_1$ bits will be lost and $(1-\alpha)b_1$ bits will get through. Then the encoder will retransmit $b_2=\alpha b_1+p(1-\alpha)b_1$ bits, and so on. After retransmitting $t$ times, letting $t\rightarrow \infty$, the achievable transmission rate $R$ is
\begin{equation*}
\begin{split}
R&=\lim_{t\rightarrow \infty} \frac{\log{2^{n}}}{\frac{1}{m-1}(n+b_1+b_2+\cdots+b_t)}\\
&=\lim_{t\rightarrow \infty} \frac{n(m-1)}{n+(\alpha n+p(1-\alpha)n)\frac{1-(\alpha+p(1-\alpha))^t}{1-(\alpha+p(1-\alpha))}}\\
&=\frac{n(m-1)}{n+\frac{\alpha n+p(1-\alpha)n}{1-(\alpha+p(1-\alpha))}}\\
&=(m-1)(1-p)(1-\alpha)\\
\end{split}
\end{equation*}
Then we have
\begin{equation*}
\frac{R}{C_{FB}^{noise}}=(1-p)(1-\frac{1}{m}).
\end{equation*}
\indent This shows that the loss of transmission rate is caused by two factors: uncertainty in the feedback link and signaling in the forward channel. If $p=0$ (i.e. perfect feedback) and $m\rightarrow \infty$ (i.e. the signaling bit can be neglected), we have $R=C_{FB}^{noise}$. Additionally, we note an interesting fact in this example: the fractional rate loss is independent of the noise in the forward channel.
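\indent The above rate can also be checked by direct simulation. The sketch below (with illustrative parameter values) sends each codeword until the feedback link reports a clean reception; since the feedback outputs $e$ whenever the forward channel erases, and with probability $p$ otherwise, the number of uses per codeword is geometric with success probability $(1-\alpha)(1-p)$.
\begin{verbatim}
import random

def simulate(m=8, alpha=0.1, p=0.05, codewords=200_000, seed=1):
    """Monte-Carlo check of the signaling strategy for the BCEC
    example (illustrative parameters).  Each channel use sends one
    m-bit codeword carrying m-1 payload bits; the encoder retransmits
    whenever the feedback link outputs the erasure symbol e."""
    rng = random.Random(seed)
    uses = 0
    for _ in range(codewords):
        while True:                                  # resend until ACK
            uses += 1
            y_erased = rng.random() < alpha          # forward erasure
            fb_e = y_erased or (rng.random() < p)    # feedback shows e
            if not fb_e:
                break
    return codewords * (m - 1) / uses                # bits per use

m, alpha, p = 8, 0.1, 0.05
print(simulate(), (m - 1) * (1 - p) * (1 - alpha))   # empirical vs. R
\end{verbatim}
Note that duplicates caused by feedback erasures are harmless: the decoder recognizes them from the signaling bit and discards them.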
\section{Introduction}
\indent The theory of feedback has been well studied \cite{Doyle92} for control systems but only partially investigated for communication systems. So far, a large body of work has looked at communication channels with perfect feedback and obtained many notable results; see \cite{Schalkwijk66,Schalkwijk66_2,cover89,Massey1990,Liu05,Ofer07,Tati09,Permuter09} and references therein. As an illustration, it is known that perfect feedback improves the error exponent and reduces the coding complexity \cite{Cover88}. For channels with memory, using perfect feedback can increase the capacity compared with the non-feedback case \cite{cover89}. However, only a few papers have studied channels with noisy feedback and many challenging problems are still open. Namely, how does noisy feedback affect the transmission rate in forward communication channels? Is noisy feedback helpful in improving the decoding error exponent or reducing the encoding complexity? More generally, is feedback beneficial for communication even though it is noisy? These questions are difficult because the noisy feedback induces a loss of coordination between the transmitter and the receiver. We can classify the results in the literature into two main categories. The first category studies the usefulness of noisy feedback by investigating reliability functions and error exponents. \cite{Draper06} shows that noisy feedback can improve the communication reliability by specifying a variable-length coding strategy. \cite{Kim07} derives upper and lower bounds on the reliability function of the additive white Gaussian noise channel with additive white Gaussian noise feedback. \cite{Burnashev08} considers a binary symmetric channel with a binary symmetric feedback link and shows that the achievable error exponent is improved under certain conditions. The second category focuses on the derivation of coding schemes, mostly for additive Gaussian channels with noisy feedback, based on the well-known Schalkwijk-Kailath scheme \cite{Schalkwijk66}. We refer interested readers to \cite{Omura68,Lavenberg71,Martins08,Chance10,Kumar} for details.\\
\indent Instead of concentrating on specific aspects or channels, in this paper, we study the noisy feedback problem in generality. We first focus on the effective information flow through channels with noisy feedback. We introduce a new concept, the residual directed information, which exactly quantifies the effective information flow through the channel and provides us with a clear view of the information flow in the noisy feedback channel. In light of this new concept, we show the failure of using the \textit{directed information} defined by Massey \cite{Massey1990} in noisy feedback channels, which is otherwise useful in the perfect feedback case. Furthermore, we investigate the DMC with \textit{typical noisy feedback} (Definition \ref{def_typcialnoise}) and prove that the capacity is not achievable by using any typical closed-loop encoder (Definition \ref{def_typicalencoder}). In other words, no encoder that typically (to be made more precise in the paper) uses the feedback information can achieve the capacity. This negative result is due to the fact that, by typically using noisy feedback, we need to sacrifice a certain rate for signaling in order to rebuild the cooperation between the transmitter and the receiver such that the message can be recovered with arbitrarily small probability of error. Next, we give a general channel coding theorem in terms of the residual directed information for channels with noisy feedback, which is an extension of \cite{bookhan03}. The main idea is to convert the channel coding problem with noisy feedback into an equivalent channel coding problem without feedback by considering code-functions instead of code-words \cite{Shannon58}, \cite{Tati09}. In fact, code-functions can be treated as a generalization of code-words. By explicitly relating code-function distributions and channel input distributions, we convert a mutual information optimization problem over code-function distributions into a residual directed information optimization problem over channel input distributions. Although the theoretical result is in the form of an optimization problem, computing the optimal solution is not feasible in general. We then turn to investigate computable bounds which are characterized by the causal conditional directed information. Since this new form is a natural generalization of the directed information, the computation is amenable to the dynamic programming approach proposed by Tatikonda and Mitter \cite{Tati09} for the perfect feedback capacity problem.\\
\indent The main contributions of this paper can be summarized as follows: $1)$ We propose a new information theoretic concept, the residual directed information, to identify and capture the effective information flow in communication channels with noisy feedback, and we then analyze the information flow in the forward channel. $2)$ We prove that, for a DMC with typical noisy feedback, no capacity-achieving closed-loop encoding strategy exists under certain reasonable conditions. $3)$ We show a general noisy feedback channel coding theorem in terms of the residual directed information. $4)$ We propose computable bounds on the noisy feedback capacity, which are characterized by the causal conditional directed information.\\
\indent Throughout the paper, capital letters $X,Y,Z,\cdots$ will represent random variables and lower case letters $x,y,z,\cdots$ will represent particular realizations. We use $x^n$ to represent the sequence $(x_1,x_2,\cdots,x_n)$ and $x^0=\emptyset$. We use $\log$ to represent logarithm base $2$.
\section{Technical Preliminaries}
\input{Preliminary}
\section{Residual Directed Information and Information Flow}
\input{RDI}
\section{Discrete Memoryless Channel With Noisy Feedback}
\label{sec_DMC_nfb}
\input{DMC_nfb}
\section{A Channel Coding Theorem and Computable Bounds on the Capacity}
\label{sec_channelcoding}
\input{Channelcoding}
\section{Conclusion}
\indent We proposed a new concept, the \textit{residual directed information} for characterizing the effective information flow through communication channels with noisy feedback, which extends Massey's concept of \textit{directed information}. Based on this new concept, we first analyzed the information flow in noisy feedback channels and then showed that the capacity of DMC is not achievable by using any typical closed-loop encoder. We next proved a general channel coding theorem in terms of the proposed residual directed information. Finally, we proposed computable bounds characterized by the causal conditional directed information.\\
\indent The results in this paper open up new directions for investigating the role of noisy feedback in communication systems. Furthermore, the new definitions, concepts and methodologies presented in the paper can potentially be extended to multiple access channels, broadcast channels or general multi-user channels with noisy feedback.
\section{Appendix}
\input{apend_nfb}
\bibliographystyle{IEEEtran}
\subsection{Noisy Feedback and Causality}
\indent According to Fig.\ref{figure1}, we model the channel at time $i$ as $p(y_i|x^i,y^{i-1})$. The channel output (without any encoding) is fed back to the encoder through a noisy link, which is modeled as $p(z_i|y^i,z^{i-1})$. At time $i$, the deterministic encoder takes the message $\mathcal{W}$ and the past outputs $Z_1,Z_2,\cdots,Z_{i-1}$ of the feedback link, and then produces a channel input $X_i$. Note that the encoder has access to the output of the feedback link with one time-step delay. At time $n$, the decoder takes all the channel outputs $Y_1,Y_2,\cdots,Y_{n}$ and then produces the decoded message $\hat{\mathcal{W}}$. We present the time ordering of these random variables below.
\begin{equation*}
W,X_1,Y_1,Z_1,X_2,Y_2,Z_2,\cdots,X_{n-1},Y_{n-1},Z_{n-1},X_n,Y_n,\hat{W}
\end{equation*}
\indent Note that all initial conditions (e.g. of the channel, the feedback link, the channel input, etc.) are assumed to be known a priori by both the encoder and the decoder. Before entering the more technical part of this paper, it is necessary to give a specific definition of ``noisy feedback''.
\begin{definition}(\textit{Noisy Feedback Link})
The feedback link is noisy if for some time instant $i$ there exists no function $g_i$ such that
\begin{equation}
g_i(X^{i},Z^{i},W)= Y^{i}.
\label{eqI_01}
\end{equation}
The feedback link is noiseless if it is not noisy.
\label{def_noisyfb}
\end{definition}
\begin{remark}
This definition states that, for noisy feedback links, not all the channel outputs can be exactly recovered at the encoder side and, therefore, the encoder and decoder lose mutual understanding. In other words, at time instant $i+1$, the encoder cannot recover the past channel outputs $Y^i$ from the information $(X^{i},Z^{i},W)$ when producing the channel input $X_{i+1}$. We refer to ``perfect (ideal) feedback'' as the case $Z^i=Y^i$ for all time instants $i$. Essentially, noiseless feedback is equivalent to perfect feedback since, in both cases, the encoder can access the channel outputs without any error.
\end{remark}
\begin{example}
Consider the feedback link $Z_i=Y_i+V_i$ where $V_i$ denotes additive noise at time instant $i$. If the channel outputs $Y_i$ only take values in a set of integers (e.g. $\pm 1, \pm 2, \cdots$) and $V_i$ only takes values in $\lbrace \pm 0.2, \pm 0.4 \rbrace$, then obviously the channel outputs can be exactly recovered at the encoder side. Thus, this feedback link is noiseless even though it is imperfect.
\label{exp01}
\end{example}
\indent Next, we give a definition of \textit{typical noisy feedback link} which will be studied in the next section.
\begin{definition}(\textit{Typical Noisy Feedback Link})
Given a channel $\lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^\infty$, the noisy feedback link $\lbrace p(z_i|y^i,z^{i-1})\rbrace_{i=1}^\infty$ is typical if it satisfies
\begin{equation}
\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(Z^{i-1}|Y^{i-1})>0
\end{equation}
for any channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^\infty$. The noisy feedback link is non-typical if it is not typical.
\label{def_typcialnoise}
\end{definition}
\begin{remark}
This definition implies that the noise in the feedback link must be active consistently over time (e.g. not physically vanishing). In practice, the typical noisy feedback link is the most interesting case for study.
\end{remark}
\begin{example}
Consider a binary symmetric feedback link modeled as $Z_i=Y_i\oplus V_i$ where the noise $V_i$ is i.i.d. and takes values in $\lbrace 0,1\rbrace$ with equal probability. Then we have
\begin{equation*}
\begin{split}
\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(Z^{i-1}|Y^{i-1})=&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(V^{i-1}|Y^{i-1})\\
\geq &\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(V_{i-1}|Y^{i-1})\\
\stackrel{(a)}{=}&\liminf_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}H(V_{i-1})\\
=&1\\
\end{split}
\end{equation*}
where (a) follows from the fact that $Y^{i-1}$ is independent of $V_{i-1}$ due to the one-step delay. Therefore, this noisy feedback link is typical.
\end{example}
\indent We summarize the family of feedback links in Fig.\ref{111}.\footnote{In the sequel, the term ``noisy feedback'' refers to ``typical noisy feedback'' unless specified otherwise.} We next define the achievable rate and capacity for channels with noisy feedback.
\begin{definition}(\textit{Channel Code})
Consider a message $\mathcal{W}$ which is drawn from an index set $\lbrace 1,2,\cdots,M\rbrace$ and a noisy feedback communication channel $(\mathcal{X}^n, \lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^{n}, \mathcal{Y}^n,\lbrace p(z_i|y^i,z^{i-1})\rbrace_{i=1}^{n},\mathcal{Z}^n)$ with the interpretation that $X_i$ is the input of the channel, $Y_i$ is the output of the channel and the input of the feedback link, and $Z_i$ is the output of the noisy feedback link at time instant $i$ ($1\leq i\leq n$). Then a $(M,n)$ channel code consists of an index set $\lbrace 1,2,\cdots,M\rbrace$, an encoding function $\lbrace 1,2,\cdots,M\rbrace\times \mathcal{Z}^{n-1}\rightarrow\mathcal{X}^n$, and a decoding function $\mathcal{Y}^n\rightarrow \lbrace 1,2,\cdots,M\rbrace$, where the decoding function is a deterministic rule that assigns a guess to each possible received vector.
\end{definition}
\begin{definition}(\textit{Achievable Rate})
The rate $R$ of a $(M,n)$ code is
\begin{equation*}
R=\frac{\log M}{n} \qquad \text{bits per channel use}
\end{equation*}
The rate is said to be achievable if there exists a sequence of $(2^{nR},n)$ codes\footnote{With a slight abuse of notation, we write $nR$ instead of $\lfloor nR\rfloor$ for convenience.} such that the maximal probability of error tends to zero as $n\rightarrow \infty$.
\end{definition}
\begin{definition}(\textit{Channel Capacity})
The capacity of a channel with noisy feedback is the supremum of all achievable rates.
\end{definition}
\indent When there is no feedback from the channel output to the encoder, the maximum of the mutual information (i.e. $\max_{p(x^n)}{I(X^n;Y^n)}$) characterizes the maximum information flow through the channel with arbitrarily small probability of decoding error. This quantity is defined as the capacity of the channel. When there is noiseless feedback, supremizing the directed information $I(X^n\rightarrow Y^n)$ over $\overrightarrow{p}(x^n|y^n)$ gives us the feedback capacity \cite{Tati09}, \cite{Kim08_capacity_fb}, \cite{Permuter09}. When there is noisy feedback, the appropriate measure/characterization of the effective information flow through the channel has been unknown until now. In the next section, we provide the missing measure.
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{111.eps}
\caption{Family of Feedback links in Communication systems. The ``typical noisy feedback'' is the case which we are interested in.}
\label{111}
\end{center}
\end{figure}
\subsection{Residual Directed Information}
\indent Based on the ``(causal conditional) directed information'', the \textit{residual directed information} and its density with respect to the message $W$ are defined as follows.
\begin{definition}(\textit{Residual Directed Information and Its Density})
\begin{equation}
I^R(X^n(W)\rightarrow Y^n)=I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n||W).
\end{equation}
Equivalently (since the message $W$ is available at the encoder before transmission begins, causal conditioning on $W$ coincides with ordinary conditioning),
\begin{equation}
I^R(X^n(W)\rightarrow Y^n)=I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n|W).
\label{RDI}
\end{equation}
The residual directed information density is defined as
\begin{equation*}
i^R(X^n(W)\rightarrow Y^n)=i(X^n\rightarrow Y^n)-i(X^n\rightarrow Y^n|W)
\end{equation*}
\end{definition}
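\indent As a simple illustration, if the encoder is open-loop and deterministic, then $X^n$ is a function of $W$ alone, so $H(Y_i|Y^{i-1},W)=H(Y_i|Y^{i-1},W,X^i)$ for every $i$. Hence
\begin{equation*}
I(X^n\rightarrow Y^n|W)=\sum_{i=1}^{n}I(X^i;Y_i|Y^{i-1},W)=0,
\end{equation*}
and the residual directed information reduces to the full directed information $I(X^n\rightarrow Y^n)$: without feedback, no redundant information flow is generated.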
\indent The following theorem shows that the residual directed information captures the mutual information between the message and the channel outputs, which we refer to as the \textit{effective information flow}.
\begin{theorem}
If $X^n$ and $Y^n$ are the inputs and outputs, respectively, of a discrete channel with noisy feedback, as shown in Fig.\ref{figure1}, then
\begin{equation*}
I(W;Y^n)=I^R(X^n(W)\rightarrow Y^n)=I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n|W).
\end{equation*}
\label{thm3_1}
\end{theorem}
\begin{proof}
\begin{equation*}
\begin{split}
&I(W;Y^n)\\
=&H(Y^n)-H(Y^n|W)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,X^i)-(\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,X^i))\\
\stackrel{(a)}{=}&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},X^i)-(\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,X^i)) \\
=&\sum_{i=1}^{n}I(X^i;Y_i|Y^{i-1})-\sum_{i=1}^{n}I(X^i;Y_i|Y^{i-1},W)\\
=&I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n|W) \\
\stackrel{(b)}{=}&I^R(X^n(W)\rightarrow Y^n) \\
\end{split}
\end{equation*}
where (a) follows from the Markov chain $W - (X^i,Y^{i-1})- Y_i$. Line (b) follows from the definition of residual directed information.\\
\end{proof}
\begin{remark} This theorem implies that, for noisy feedback channels, the directed information $I(X^n\rightarrow Y^n)$ captures both the effective information flow (i.e. $I(W;Y^n)$) generated by the message and the redundant information flow (i.e. $I(X^n\rightarrow Y^n|W)$) generated by the \textit{feedback noise} (dummy message). Since only $I(W;Y^n)$ is the relevant quantity for channel capacity, the well-known directed information clearly fails to characterize the noisy feedback capacity.
\end{remark}
\indent In the following corollary, we explore some properties of the residual directed information.
\begin{corollary}
The residual directed information $I^R(X^n(W)\rightarrow Y^n)$ satisfies the following properties:
\begin{enumerate}
\item $I^R(X^n(W)\rightarrow Y^n)\geq 0$ (with equality if and only if the message $W$ and the channel outputs $Y^n$ are independent).
\item $I^R(X^n(W)\rightarrow Y^n)\leq I(X^n\rightarrow Y^n)\leq I(X^n;Y^n)$.
\end{enumerate}
The first equality holds if the feedback is perfect. The second equality holds if there is no feedback.
\label{col3_1}
\end{corollary}
\begin{proof}
\indent 1). By Theorem \ref{thm3_1}, $I^R(X^n(W)\rightarrow Y^n)=I(W;Y^n)\geq 0$. The necessary and sufficient condition for $I^R(X^n(W)\rightarrow Y^n)=0$ is obvious by inspecting $I(W;Y^n)$.\\
\indent 2). Since $I(X^n\rightarrow Y^n|W)=\sum_{i=1}^n I(X^i;Y_i|Y^{i-1},W)\geq 0$ (equality holds for the perfect feedback case),
\begin{equation*}
\begin{split}
I^R(X^n(W)\rightarrow Y^n)&=I(X^n\rightarrow Y^n)-I(X^n\rightarrow Y^n|W)\leq I(X^n\rightarrow Y^n)\\
\end{split}
\end{equation*}
\indent The proof of the second inequality $I(X^n\rightarrow Y^n)\leq I(X^n;Y^n)$ is presented in \cite{Massey1990}.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.60]{figure5.eps}
\caption{Channels with additive noise feedback}
\label{figure5}
\end{center}
\end{figure}
\subsection{Information Flow in Noisy Feedback Channels}
\indent To gain more insight into the information flow of noisy feedback channels, we apply the new concept to channels with additive noise feedback and analyze the resulting information flow. See Fig.\ref{figure5}. We present the time ordering of the random variables below\footnote{$Z_i$ is not shown in the time ordering since $Z_i=Y_i+V_i$.}.
\begin{equation*}
W,X_1,Y_1,V_1,X_2,Y_2,V_2,\cdots,X_{n-1},Y_{n-1},V_{n-1},X_n,Y_n,\hat{W}
\end{equation*}
\begin{corollary}
If $X^n$ and $Y^n$ are the inputs and outputs, respectively, of a discrete channel with additive noise feedback, as shown in Fig.\ref{figure5}, then
\begin{equation*}
I(X^n\rightarrow Y^n)=I(W;Y^n)+I(V^{n-1};Y^n)+I(W;V^{n-1}|Y^n)
\end{equation*}
\label{col3_3}
\end{corollary}
\begin{proof}
We herein adopt a derivation methodology similar to the one in Theorem \ref{thm3_1}.
\begin{equation*}
\begin{split}
&I(W;Y^n)\\
=&H(Y^n)-H(Y^n|W)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,V^{i-1})-(\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,V^{i-1}))\\
\stackrel{(a)}{=}&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,Z^{i-1})-(\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,V^{i-1}))\\
=&\sum_{i=1}^{n}H(Y_i|Y^{i-1})-\sum_{i=1}^{n}H(Y_i|Y^{i-1},X^{i})-(\sum_{i=1}^{n}H(Y_i|Y^{i-1},W)-\sum_{i=1}^{n}H(Y_i|Y^{i-1},W,V^{i-1}))\\
=&\sum_{i=1}^{n}I(X^i;Y_i|Y^{i-1})-\sum_{i=1}^{n}I(V^{i-1};Y_i|Y^{i-1},W)\\
=&I(X^n\rightarrow Y^n)-I(V^{n-1}\rightarrow Y^n|W)
\end{split}
\end{equation*}
where (a) follows from the fact that $Z^{i-1}=Y^{i-1}+V^{i-1}$, and the subsequent step uses that $X^i$ is a function of $(W,Z^{i-1})$ together with the Markov chain $(W,Z^{i-1})-(X^i,Y^{i-1})-Y_i$. Next,
\begin{equation*}
\begin{split}
I(V^{n-1}\rightarrow Y^n|W)\stackrel{(b)}{=}&I(V^{n-1};Y^n|W) \\
=&H(V^{n-1}|W)-H(V^{n-1}|Y^n,W)\\
\stackrel{(c)}{=}&H(V^{n-1})-H(V^{n-1}|Y^n)+H(V^{n-1}|Y^n)-H(V^{n-1}|Y^n,W)\\
=&I(V^{n-1};Y^n)+I(W;V^{n-1}|Y^n)\\
\end{split}
\end{equation*}
where (b) follows from the fact that there exists no feedback from $Y^n$ to $V^{n-1}$ and (c) follows from the fact that the noise $V^{n-1}$ is independent of $W$. Putting the previous equations together, the proof is complete.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{figure2.eps}
\caption{The information flow of channels with additive noise feedback}
\label{figure2}
\end{center}
\end{figure}
\indent Corollary \ref{col3_3} allows us to explicitly interpret the information flow on a dependency graph (here for $n=3$). See Fig.\ref{figure2}. The solid lines from message $\mathcal{W}$ to sequence $X^3$ represent the dependence of $X^3$ on $W$. The dotted lines from additive noise $V^2$ to sequence $X^3$ represent the dependence of $X^3$ on $V^2$. The dependence of the channel inputs $X^3$ on the channel outputs $Y^2$ is not shown in the graph since the directed information only captures the information flow from $X^3$ to $Y^3$ \cite{Massey1990}. As shown in the zoomed circle, the directed information flow from $X^3$ to $Y^3$ (through cut $A-B$) implicitly contains three sub-information flows wherein the mutual information $I(W;Y^3)$ and $I(V^2;Y^3)$ measure the message-transmitting and the noise-transmitting information flows, respectively. The feedback noise $V^2$ is treated as a dummy message which also needs to be recovered by the decoder. The conditional mutual information $I(W;V^2|Y^3)$ quantifies the mixed information flow between the message-transmitting and noise-transmitting flows. Essentially, the second term in the residual directed information (i.e. $I(X^n\rightarrow Y^n|W)$) precisely captures the non-message transmitting information flows (i.e. $I(V^{n-1};Y^n)$ and $I(W;V^{n-1}|Y^n)$). Therefore, the residual directed information should be a proper measure to work with for channels with noisy feedback.\\
\indent Understanding the information flow in noisy feedback channels provides a deeper view of the noisy feedback problem and serves as the basis for the results developed in the remainder of the paper.
\subsection{Proof of Lemma \ref{lemma4_6}}
Before giving the proof, we need the following lemma.
\begin{lemma}
For channels with noisy feedback, as shown in Fig.\ref{figure1},
\begin{equation*}
p(x^n,y^n)=\sum_{z^{n}\in \mathcal{Z}^{n}}\prod_{i=1}^{n}\underbrace{p(z_i|y^i,z^{i-1})}_{\text{Feedback link}} \underbrace{p(x_i|x^{i-1},z^{i-1})}_{\text{Encoding}}\underbrace{p(y_i|x^i,y^{i-1})}_{\text{Channel}}
\end{equation*}
\label{lemma4_0}
\end{lemma}
\begin{proof}
\begin{equation*}
\begin{split}
p(x^n,y^n)&=\sum_{z^{n}\in \mathcal{Z}^{n}}p(x^n,y^n,z^n)\\
&=\sum_{z^{n}\in \mathcal{Z}^{n}}p(z_{n}|x^{n},y^{n},z^{n-1}) p(x^{n},y^{n},z^{n-1})\\
&=\sum_{z^{n}\in \mathcal{Z}^{n}}p(z_{n}|x^{n},y^{n},z^{n-1}) p(y_n|x^{n},y^{n-1},z^{n-1}) p(x^{n},y^{n-1},z^{n-1})\\
&=\sum_{z^{n}\in \mathcal{Z}^{n}}p(z_{n}|x^{n},y^{n},z^{n-1}) p(y_n|x^{n},y^{n-1},z^{n-1}) p(x_n|x^{n-1},y^{n-1},z^{n-1})\\
&\qquad \times p(x^{n-1},y^{n-1},z^{n-1})\\
&\stackrel{(a)}{=}\sum_{z^{n}\in \mathcal{Z}^{n}}p(z_{n}|y^{n},z^{n-1}) p(y_n|x^{n},y^{n-1}) p(x_n|x^{n-1},z^{n-1})\\
&\qquad \times p(x^{n-1},y^{n-1},z^{n-1}) \\
&=\sum_{z^{n}\in \mathcal{Z}^{n}}\prod_{i=1}^{n}p(z_i|y^i,z^{i-1}) p(x_i|x^{i-1},z^{i-1})p(y_i|x^i,y^{i-1}) \\
\end{split}
\end{equation*}
where $(a)$ follows from the Markov chains $x^{n}- (y^{n},z^{n-1})-z_{n}$, $z^{n-1}- (x^{n},y^{n-1})-y_{n}$ and $y^{n-1}-(x^{n-1},z^{n-1})-x_{n}$, and the last equality follows by applying the same expansion recursively to $p(x^{n-1},y^{n-1},z^{n-1})$.
\end{proof}
\indent Now, we are ready to give the proof of Lemma \ref{lemma4_6}.
\begin{proof}
\begin{equation*}
\begin{split}
&p(x^n,y^n,f^n)\\
&=p(x^n,y^n|f^n) p(f^n)\\
&\stackrel{(a)}{=}p(f^n) \sum_{z^{n}\in \mathcal{Z}^{n}}\prod_{i=1}^{n}p(z_i|y^i,z^{i-1},f^n) p(x_i|x^{i-1},z^{i-1},f^n) p(y_i|x^i,y^{i-1},f^n)\\
&=p(f^n) \sum_{z^{n}\in \lbrace\mathcal{Z}^{n}:x^n=f^n(z^{n-1})\rbrace}\prod_{i=1}^{n}p(z_i|y^i,z^{i-1},f^n) p(y_i|f^i(z^{i-1}),y^{i-1},f^n) \\
&\stackrel{(b)}{=}p(f^n) \sum_{z^{n}\in \lbrace\mathcal{Z}^{n}:x^n=f^n(z^{n-1})\rbrace}\prod_{i=1}^{n}p(z_i|y^i,z^{i-1}) p(y_i|f^i(z^{i-1}),y^{i-1}) \\
&\stackrel{(c)}{=}\prod_{i=1}^{n} \prod_{z^{i-1}}p(f_i(z^{i-1})|f^{i-1}(z^{i-2}),z^{i-1}) \sum_{z^{n}\in \lbrace\mathcal{Z}^{n}:x^n=f^n(z^{n-1})\rbrace}\prod_{i=1}^{n}p(z_i|y^i,z^{i-1})p(y_i|f^i(z^{i-1}),y^{i-1}) \\
\end{split}
\end{equation*}
where (a) follows from Lemma \ref{lemma4_0}. Line (b) follows from the Markov chains: $f^n - (y^i,z^{i-1})- z_i$ and $f^n -(f^i(z^{i-1}),y^{i-1})-y_i$. Line (c) follows from Lemma \ref{lemma4_2}.
\end{proof}
\subsection{Proof of Theorem \ref{thm4_2}}
\indent We now prove the channel coding theorem by combining the following converse theorem and achievability theorem.\\
a). \textit{Converse Theorem}\\
\indent The following is a generalization of Theorem $4$ in \cite{Verdu94}, which provides a lower bound on the block error probability.
\begin{lemma}
Every $(n,M,\epsilon)$ channel code satisfies
\begin{equation*}
\epsilon\geq Prob\lbrace\frac{1}{n}i^R(X^n(F^n)\rightarrow Y^n)\leq \frac{1}{n}\log M-\gamma\rbrace-2^{-\gamma n}
\end{equation*}
for every $\gamma>0$.
\label{lemma4_4}
\end{lemma}
\begin{proof}
We assume that the decoding sets $D$ are disjoint, i.e. $D_{w=i}\cap D_{w=j}=\emptyset$ if $i \neq j$. Under this restriction on the decoder, \cite{Verdu94} has shown that any $(n,M,\epsilon)$ channel code for the nonfeedback channel $\mathcal{F}^n\rightarrow \mathcal{Y}^n$ satisfies, for all $\gamma>0$,
\begin{equation*}
\epsilon\geq Prob\lbrace\frac{1}{n}i(F^n; Y^n)\leq \frac{1}{n}\log M-\gamma \rbrace-2^{-\gamma n}
\end{equation*}
By Lemma \ref{lemma4_3}, we have
\begin{equation*}
i(F^n;Y^n)=i^R(X^n(F^n)\rightarrow Y^n)
\end{equation*}
The proof is complete.
\end{proof}
\indent Note that this lemma holds independently of the decoder that one uses. The only restriction on the decoder is the disjointness of the decoding regions.
\begin{theorem}(\textit{Converse Theorem})
\begin{equation*}
C_{FB}^{noise}\leq \sup_{X}\underline{I}^R(X(F)\rightarrow Y)
\end{equation*}
\label{thm4_3}
\end{theorem}
\begin{proof}
Assume that there exists a sequence of $(n,M,\epsilon_n)$ channel codes with $\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$ and with transmission rate
\begin{equation*}
R=\liminf_{n\rightarrow \infty}\frac{1}{n}\log M.
\end{equation*}
By Lemma \ref{lemma4_4}, we know that for all $\gamma>0$,
\begin{equation*}
\epsilon_n \geq Prob\lbrace\frac{1}{n}i^R(X^n(F^n)\rightarrow Y^n)\leq \frac{1}{n}\log M-\gamma\rbrace-2^{-\gamma n}
\end{equation*}
As $n\rightarrow \infty$, the probability on the right-hand side must go to zero since $\epsilon_n\rightarrow 0$.
By the definition of $\underline{I}^R(X(F)\rightarrow Y)$, we have
\begin{equation*}
\limsup_{n\rightarrow \infty}\frac{1}{n}\log M-\gamma \leq \underline{I}^R(X(F)\rightarrow Y)
\end{equation*}
Since $\gamma$ can be arbitrarily small, we have
\begin{equation*}
R\leq \limsup_{n\rightarrow \infty}\frac{1}{n}\log M\leq \underline{I}^R(X(F)\rightarrow Y)\leq \sup_{X} \underline{I}^R(X(F)\rightarrow Y)
\end{equation*}
The proof is complete.
\end{proof}
b). \textit{Achievability Theorem}\\
\indent The following is a generalization of Feinstein's lemma \cite{Feinstein54} based on the residual directed information.
\begin{lemma}
Fix a positive integer $n$, $0<\epsilon<1$, a channel $\lbrace p(y_i|x^i,y^{i-1})\rbrace_{i=1}^n$ and a feedback link $\lbrace p(z_i|y^i,z^{i-1})\rbrace_{i=1}^n$. For every $\gamma>0$ and every channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$, there exists an $(n,M,\epsilon)$ channel code that satisfies
\begin{equation*}
\epsilon \leq Prob\lbrace\frac{1}{n}i^R(X^n(F^n)\rightarrow Y^n)\leq \frac{1}{n}\log M+\gamma\rbrace+2^{-\gamma n}
\end{equation*}
\label{lemma4_5}
\end{lemma}
\begin{proof}
Given a channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$, we generate a code-function distribution $\lbrace p(f_i|f^{i-1})\rbrace_{i=1}^n$ such that the induced channel input distribution equals the original one. Such a code-function distribution exists by Lemma \ref{lemma4_2}. In \cite{Verdu94}, it has been shown that for a nonfeedback channel $\lbrace p(y_i|f^i,y^{i-1})\rbrace_{i=1}^n$, a channel input distribution $\lbrace p(f_i|f^{i-1})\rbrace_{i=1}^n$ and every $\gamma>0$, there exists an $(n,M,\epsilon)$ channel code that satisfies
\begin{equation*}
\epsilon \leq Prob\lbrace\frac{1}{n}i(F^n; Y^n)\leq \frac{1}{n}\log M+\gamma\rbrace+2^{-\gamma n}
\end{equation*}
Recall that this result is proved by a random coding argument. Then, by Lemma \ref{lemma4_3}, we have
\begin{equation*}
i(F^n;Y^n)=i^R(X^n(F^n)\rightarrow Y^n)
\end{equation*}
The proof is complete after this simple substitution.
\end{proof}
\begin{theorem}(\textit{Achievability Theorem})
\begin{equation*}
C_{FB}^{noise}\geq \sup_{X}\underline{I}^R(X(F)\rightarrow Y)
\end{equation*}
\label{thm4_4}
\end{theorem}
\begin{proof}
Fix arbitrary $0<\epsilon<1$ and a channel input distribution $\lbrace p(x_i|x^{i-1},z^{i-1})\rbrace_{i=1}^n$. We shall show that $\underline{I}^R(X(F)\rightarrow Y)$ is an $\epsilon$-achievable rate by demonstrating that, for every $\delta>0$ and all sufficiently large $n$, there exists an $(n,M,2^{-\frac{n\delta}{4}}+\frac{\epsilon}{2})$ code with rate
\begin{equation*}
\underline{I}^R(X(F)\rightarrow Y)-\delta<\frac{\log M}{n}<\underline{I}^R(X(F)\rightarrow Y)-\frac{\delta}{2}
\end{equation*}
If, in Lemma \ref{lemma4_5}, we choose $\gamma=\frac{\delta}{4}$, then the right-hand side value in Lemma \ref{lemma4_5} becomes
\begin{equation*}
\begin{split}
&Prob\lbrace\frac{1}{n}i^R(X^n(F^n)\rightarrow Y^n)\leq \frac{1}{n}\log M+\frac{\delta}{4}\rbrace+2^{-\frac{n\delta}{4}}\\
\leq&Prob\lbrace\frac{1}{n}i^R(X^n(F^n)\rightarrow Y^n)\leq \underline{I}^R(X(F)\rightarrow Y)-\frac{\delta}{4}\rbrace+2^{-\frac{n\delta}{4}}\\
\leq&\frac{\epsilon}{2}+2^{-\frac{n\delta}{4}}\\
\end{split}
\end{equation*}
where the second inequality holds for all sufficiently large $n$ by the definition of $\underline{I}^R(X(F)\rightarrow Y)$. Therefore, $\underline{I}^R(X(F)\rightarrow Y)$ is an $\epsilon$-achievable rate.
\end{proof}
\indent The proof of Theorem \ref{thm4_2} is obtained by combining Theorem \ref{thm4_3} and Theorem \ref{thm4_4}.
\section*{Abstract}
We point out an omission in the paper ``Conditionally exactly soluble
class of quantum potentials'' by A. de Souza Dutra [Phys. Rev. A 47
(1993) R2435]. There, two strongly singular $s-$wave bound state
problems were claimed to be completely solvable in closed form.
Unfortunately, all the displayed wave functions represented merely
asymptotically correct (so-called ``Jost'') solutions and did not
satisfy the appropriate threshold boundary condition. We show that
the incorporation of this condition in its standard form only leads
to a very partial exact solvability, at a single energy and for
special couplings.
\newpage
\noindent For the two strongly singular $s-$wave potentials given,
in the units $\hbar = 2 \mu =1$, by the formulae
\begin{equation}
V_1(r)={A \over r} + {B \over r^{1/2} } +{G \over r^2},
\ \ \ \ \ \ \ \ G = G_0 = -{3 \over 16},
\label{pota}
\end{equation}
and
\begin{equation}
V_2(r)={A \, r^{2/3}} + {B \over r^{2/3} } +{G \over r^2},
\ \ \ \ \ \ \ \ G = g_0=
-{5 \over 36}
\label{potbe}
\end{equation}
A. de Souza Dutra \cite{Dutra} offered explicit elementary
wave functions as well as closed formulae for all their
bound-state energies. One of the three couplings is not free: this
led him to coin their ``conditionally'' exactly soluble (CES)
status. In what follows we intend to demonstrate that, in the sense
of Ushveridze's monograph \cite{Ushveridze}, both these forces
$V_{1,2}(r)$ only remain {\em partially} solvable at certain
specific values of the energies $E$ and couplings $B$.
Our present main point is that all the solutions presented in ref.
\cite{Dutra} still have to satisfy an appropriate and, for reasons
explained below, overlooked boundary condition at the origin.
Indeed, it is well known that for a central potential the
Schr\"{o}dinger equation $ -\triangle \Psi(\vec{r}) + V(|\vec{r}|)
\Psi(\vec{r})= E \Psi(\vec{r}) $ separates into an infinite set of
decoupled ordinary (often called radial) differential
equations
\begin{equation}
-\, \frac{{\rm d}^2}{{\rm d} r^2} \psi(r)
+\frac{\ell(\ell+1)}{r^2} \psi(r) + V(r) \psi(r)= E \psi(r), \ \ \
\ \ \ \ \ \ell = 0, 1, \ldots \label{SEr}
\end{equation}
for the separate angular-momentum components of the whole original
wave function. Newton's excellent review \cite{Newton}
summarizes the details. {\em Under the assumption of the
analyticity of $V(r)$ at the origin} it shows that, and why, the
standard physical requirement of normalizability of bound states $
||\Psi(\vec{r})|| < \infty $ is {\em strictly} equivalent to the
integrability of their partial waves,
\begin{equation}
\psi(r) \in L_2(0,\infty). \label{normaliz}
\end{equation}
For $\ell = 1, 2, \ldots$ the unphysical component
$\psi_{irregular}(r) \approx r^{-\ell}$ of the general threshold
solution of eq. (\ref{SEr}) is manifestly non-integrable near $r
\approx 0$.
In the $s-$wave with $\ell=0$ a more subtle argument is
needed \cite{Newton}. In practice, the subtlety is usually avoided
by the replacement of eq. (\ref{normaliz}) by the boundary
condition
\begin{equation}
\lim_{r \to 0}\psi(r) = 0.
\label{kveak}
\end{equation}
Even when we solve the ordinary harmonic oscillator, the latter
boundary condition at the origin offers a more straightforward
recipe for numerical calculations. Let us repeat: {\em for
analytic potentials}, eqs. (\ref{normaliz}) and (\ref{kveak}) are
equivalent, but the proof \cite{Newton} of their equivalence
immediately fails for the ``very next'' non-analytic $ V(r) \approx
G\,r^{-2} $, $r \approx 0$, say, in Kratzer's solvable
phenomenological model \cite{Kratzer} with $G \neq 0$, etc. One
must re-analyze the whole quantization procedure anew, even for
the harmonic oscillator in the limit $G\to 0$ \cite{Fluegge}.
For all the similar singular forces with the finite limit in eq.
(\ref{SEr}),
\begin{displaymath}
G =\lim_{r \to 0}\ r^2 V(r) \neq 0
\end{displaymath}
we have to re-define the dominant singularity $\ell(\ell+1) + G =
{\cal L}({\cal L} +1) $. The new parameter $ {\cal L} =\sqrt{\left
(\ell+\frac{1} {2}\right )^2 + G } -\frac{1}{2} $ enters then the
modified threshold solutions $\psi_{regular}(r) \approx r^{{\cal
L}+1}$ and $\psi_{irregular}(r) \approx r^{-{\cal L}}$. The
irregular one is eliminated as manifestly violating the
normalizability (\ref{normaliz}) at ${\cal L} \geq 1/2$.
The latter bound means $G\geq 3/4$ in the $s-$wave with $\ell=0$.
Below such a strength of repulsion the Hamiltonian {\em ceases to
be self-adjoint}. The conclusion is strongly counter-intuitive.
Mathematically, the problem is serious. First spotted and analyzed
by Case \cite{Case}, it means that at $G < 3/4$ the textbook
quantization of the Kratzer-like singular models {\em is not
unique at all}. A more detailed discussion may be found in the
literature (cf., e.g., \cite{Frank} or \cite{Reed}). In its light,
the physics community currently accepts a unique way of quantization
which is, mathematically speaking, a mere regularization. It is
often supported by various sufficiently robust {\it ad hoc}
arguments (cf., e.g., \cite{Fluegge} on pp. 157 and 167, or ref.
\cite{Stillinger}).
For our present purposes, in the physical language of textbook
\cite{Landau}, the correct recipe may be formulated as follows.
\begin{itemize}
\item
In the domain of weak repulsion we distinguish between the
physical $\psi_{regular}(r) \approx r^{{\cal L}+1}$ and the unphysical
$\psi_{irregular}(r) \approx r^{-{\cal L}}$. As long as both of
them remain normalizable, we impose an extra, {\em stronger}
boundary condition at the origin,
\begin{equation}
\lim_{r \to 0}\psi(r) = 0, \ \ \ \ \ \ \ \ G \in (0, 3/4).
\label{weak}
\end{equation}
It coincides with (\ref{kveak}), but its mathematical meaning is
different: it represents a convenient choice of the most plausible
self-adjoint extension.
\item
In the domain of weak attraction, both solutions
$\psi_{regular}(r) \approx r^{{\cal L}+1}$ and
$\psi_{irregular}(r) \approx r^{-{\cal L}}$ are compatible with
eq.~(\ref{weak}). In a sensible physical theory which
distinguishes between the two, the replacement of eq. (\ref{weak})
by an even stronger artificial constraint is needed,
\begin{equation}
\lim_{r \to 0}\psi(r)/\sqrt{r} = 0, \ \ \ \ \ \ \ \ G \in
(-1/4,0).
\label{weakdva}
\end{equation}
\item
Below the lower bound $G \leq -1/4$ one cannot prevent the
spectrum from collapsing by any means. Particles would definitely
fall into the origin.
\end{itemize}
\noindent We may summarize: In practice, bound state solutions of
the Schr\"{o}dinger differential eq. (\ref{SEr}) may be
constructed in two ways, namely,
\begin{itemize}
\item
{\bf [RS]}
as the regular solutions $\psi_{regular}(r)$
constrained by the asymptotic normalizability condition
\begin{displaymath} \psi_{regular}(R) = 0, \
\ \ \ \ \ R \to \infty; \label{Jost}
\end{displaymath}
\item
{\bf [JS]}
from the so called Jost solutions $\psi_{Jost}(r)$,
{\em always} exhibiting the square-integrable asymptotic decrease
by definition.
\end{itemize}
\noindent The former regular-solution approach [RS] proves useful
within the framework of the standard Taylor series method
\cite{Ince} and in non-numerical contexts \cite{chov}. For
potentials of the polynomial class (\ref{potaz}) displayed below,
the Schr\"{o}dinger equation (\ref{SEr}) becomes converted into
exactly solvable two-term recurrences at $q=0$ (harmonic
oscillator), into three-term recurrences at $q=1$ (sextic
forces), etc \cite{Classif}. Rather unexpectedly, for all the
positive integers $q=1, 2, \ldots$, a few bound states may still
appear in an exact polynomial (i.e., terminating Taylor-series)
form. An explicit construction of these exceptional elementary
states is based on the solution of the Magyari's nonlinear
algebraic equations \cite{Magyari}. They determine a few energy
levels exactly and restrict also the free variability of the
available couplings.
In the latter context, potentials $V_{1,2}(r)$ exhibit a certain
incomplete dynamical symmetry and play an exceptional role as
quasi-exactly solvable in a certain narrower sense (cf. ref.
\cite{Ushveridze} for more details). This would make the ambitious
conclusions of ref. \cite{Dutra}, if they were all true, even more
important.
Their analysis must be based on the alternative option [JS] which
requires the threshold boundary condition (\ref{kveak}),
(\ref{weak}) or (\ref{weakdva}) \cite{Frank}. This is the core of
our present message. For the particular forces (\ref{pota}) and
(\ref{potbe}) such an approach has already thoroughly been tested
numerically in ref. \cite{Stillinger}. The Liouvillean
\cite{Liouville} change of variables $r\to x = r^{const}$ and
$\psi(r) \to x^{const} \chi(x)$ has been employed there. Since
it leaves the form of the Schr\"{o}dinger equation unchanged,
it reduces all the bound-state problems with forces of the type
(\ref{pota}) and (\ref{potbe}) to their ``canonical" equivalents
with polynomial potentials
\begin{equation}
V_T(x) =a x^{-2}+{b \, x^2 } +{c\, x^4} + \ldots + y\,x^{4q} +
z\,r^{4q+2}, \ \ \ \ \ \ \ \ \ \ \ a > -1/4. \label{potaz}
\end{equation}
On this basis, we may easily deduce the leading-order solutions
(near the origin) also for our singular potentials $V_{1,2}(r)$ of
eqs. (\ref{pota}) and (\ref{potbe}),
\begin{displaymath}
\psi_{1,regular}(r) \sim r^{3/4},\ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \psi_{1,irregular}(r) \sim
r^{1/4}, \label{nn9}
\end{displaymath}
\begin{displaymath}
\psi_{2,regular}(r) \sim r^{5/6}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\psi_{2,irregular}(r) \sim r^{1/6}.
\end{displaymath}
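Indeed, with $\ell=0$, the formula $ {\cal L} =\sqrt{\left
(\ell+\frac{1} {2}\right )^2 + G } -\frac{1}{2} $ gives
\begin{displaymath}
G_0=-\frac{3}{16}: \ \ {\cal L}=\sqrt{\frac{1}{4}-\frac{3}{16}}-\frac{1}{2}=-\frac{1}{4},
\ \ \ \ \ \ \ \
g_0=-\frac{5}{36}: \ \ {\cal L}=\sqrt{\frac{1}{4}-\frac{5}{36}}-\frac{1}{2}=-\frac{1}{6},
\end{displaymath}
so that the respective exponents ${\cal L}+1$ and $-{\cal L}$
reproduce the values $3/4$, $1/4$ and $5/6$, $1/6$ quoted above.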
These exponents are to be compared with Dutra's wave functions:
say, for potential $V_1(r)$ we may quote equation Nr. (9) from
\cite{Dutra},
\begin{equation} \psi_1^{(D)}(r) = C\,r^{1/4}
\exp\left [- \frac{1}{2} \beta^2\, \left (
r^{1/2}-\frac{B}{2E}\right )^2\right ]
\ H_n \left [\beta\, \left (
r^{1/2}-\frac{B}{2E}\right )\right ]. \label{n9}
\end{equation}
Its energies $E = -\beta^4/4$ are parametrized by $\beta=\beta_n$
and numbered by an integer $n=0, 1, \ldots$. The formula also
contains a certain normalization constant $C = C_n$ and Hermite
polynomials $H_n(x)$. We immediately detect an inconsistency
between the latter two equations near the origin.
A similar observation is also made for $\psi_2(r)$ from equation Nr.
(13) in \cite{Dutra}: none of Dutra's wave functions satisfies
the physical boundary condition (\ref{weakdva}). An explanation of
this obvious misunderstanding is in fact not too difficult: the
solutions were constrained merely by the too weak (though, in
practice, much more frequently encountered) and, hence,
inapplicable threshold condition (\ref{kveak}). We may summarize
that such an inconsistent use of the boundary conditions would lead to
a physically absurd spectrum covering the whole real line, $E \in
(-\infty,\infty)$.
Dutra's ``non-anonymous'' (i.e., Hermite-polynomial) solutions
have already evoked a non-negligible response in the current
literature. As an example one might quote the paper \cite{Nag}.
Its authors relied on the physical correctness of Dutra's
argumentation and were misguided in their mathematical
appreciation of the role of supersymmetry in similar problems.
Still, the majority of their argument remains valid. Hence, let us
show, in conclusion, how Dutra's exceptional solutions
could be ``saved'' for similar applications.
Obviously, one simply has to incorporate the necessary constraint
(\ref{weakdva}). An inspection, say, of our sample eq. (\ref{n9})
reveals that $\psi^{(D)}_1(r)$ satisfies condition (\ref{weakdva})
{\em if and only if} its Hermite-polynomial component acquires an
exact nodal zero in the origin. In terms of the known numbers
$X=X(n,k)$ (calculated as the $k-$th nontrivial zero of $H_n(X)$,
cf. Table~1) this requirement, unfortunately, fixes the
non-Coulombic coupling as a function of the energy $E =
-\beta^4/4$,
\begin{equation}
B = \frac{1}{2}\,X\, \beta^3 \neq 0. \label{Be}
\end{equation}
This makes both these values coupled to the additional (in fact,
Magyari's \cite{Magyari}) constraint. As a cubic equation for the
energy $E$ it appears under Nr. (8) in ref. \cite{Dutra}. This
algebraic selfconsistency condition must be combined with eq.
(\ref{Be}). The resulting polynomial equation in $\beta$ (of
twelfth degree!) is easily factorized in closed form. The real
roots we need are
\begin{displaymath}
\beta=\beta(n, k) = 2\, \sqrt{\frac{-A}
{2n +1-X^2(n,k) }
}.
\end{displaymath}
They all exist for any $A < 0$. This is an important conclusion:
let us note that eq. Nr. (8) of ref. \cite{Dutra} re-appears as
equation Nr. (16) in ref. \cite{Nag}, etc.
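As a sample evaluation of these formulae (a mere consistency
check, not a quotation from Table~2), take $n=2$ with
$H_2(X)=4X^2-2$ and its positive zero $X(2,1)=1/\sqrt{2}$. At
$A=-1$ one gets
\begin{displaymath}
\beta=2\,\sqrt{\frac{1}{5-1/2}}=\frac{2\sqrt{2}}{3}, \ \ \ \ \ \
E=-\frac{\beta^4}{4}=-\frac{16}{81}, \ \ \ \ \ \
B=\frac{1}{2}\,X\,\beta^3=\frac{8}{27}.
\end{displaymath}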
For illustration, let us keep the scale fixed at $A=-1$ and display
the first few non-numerical specifications of the energies
$E=-\beta^4/4$ and their couplings (\ref{Be}) in Table~2. The same
parameters are to be used also in the definition (\ref{n9}) of the
correct bound-state wave function. {\it Mutatis mutandis}, the
entirely parallel ``return to validity'' applies also to
$\psi_2^{(D)}(r)$ in \cite{Dutra}. We omit the details here,
re-emphasizing only that both of Dutra's expressions
$\psi^{(D)}_{1,2}(r)$ are elementary and still satisfy the
Schr\"{o}dinger differential equation, exhibiting also the correct
asymptotic behaviour. Thus, we may return, say, to the paper by
Dutt et al \cite{Khare}, originally motivated by ref. \cite{Dutra}
as well. In the light of our present notes, the importance of the
latter paper increases: its authors have, inadvertently, found and
constructed {\em the first} CES example in one dimension!
\section*{Acknowledgements}
Years-long discussions of the subject with my colleagues in the Theory
Group of NPI in \v{R}e\v{z} and with authors of refs. \cite{Nag}
and \cite{Khare} contributed to this paper. An anonymous referee
attracted my attention {\it ad fontes} \cite{Case} and
\cite{Frank}. The reference to the highly relevant paper
\cite{Stillinger} was kindly communicated to me by A. de Souza
Dutra in his non-anonymous referee report. He also informed me
about his correspondence with F. H. Stillinger, the subsequent
private communication with whom is also acknowledged.
\newpage
\section{Introduction}
The robust prediction of dynamical system behaviour remains an open question in machine learning, and in engineering in general. The ability to make robust predictions is important not only for forecasting systems of interest like weather \citep{garg2021weatherbench, Ravuri2021SkilfulRadar} but even more so because it enables innovations in fields like system control, autonomous planning \citep{Hafner2018LearningPixels} and computer-aided engineering \citep{Brunton2020MachineMechanics}. In this context, the use of deep generative models has recently gained significant traction for sequence modelling \citep{Girin2020DynamicalReview}.
Robustness of machine learning models can be considered along two axes: long-term prediction and out-of-distribution (OOD) performance. Accurate long-term prediction can be notoriously difficult in many dynamical systems, because error accumulation can diverge in finite time \citep{Zhou2020Informer:Forecasting, Raissi2019Physics-informedEquations}, a problem that even traditional solvers can suffer from. More importantly, machine learning techniques are known to suffer from poor OOD performance \citep{Goyal2020InductiveCognition}, i.e. when they are employed in a setting they had not encountered during the training phase.
Before addressing the OOD problem, we must first define what constitutes OOD in dynamical systems. We start from the observation that even simple dynamical systems, e.g. the swinging pendulum or the $n$-body system, can have multiple continuous parameters that affect their evolution. These parameters can be manifested as differential equation coefficients, boundary or initial conditions, etc. Our starting point is to consider distinct ranges of those parameters as separate domains. Under this view, it becomes apparent why OOD prediction of dynamical systems can be hard: capturing the whole range of those parameters in a single training set is unrealistic \citep{Fotiadis2020ComparingPropagation} and further inductive biases are required \citep{Miladinovic2019DisentangledRepresentations, Bird2019CustomizingSystems, Barber2021JointSystems}.
We focus on the inductive bias of disentangled representations in which the dynamics are separated from the domain parameters. Many approaches based on neural networks try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting \citep{Bengio2012RepresentationPerspectives}. System identification can be used to extract parameters, but requires knowledge of the underlying system to be computationally effective \citep{ayad2019systid}. We, instead, leverage advances in Variational Autoencoders (VAEs) \citep{Kingma2014Auto-encodingBayes} that enable learning disentangled representations. Disentanglement enables different latent variables to focus on different factors of variation of the data distribution, and has been applied in the context of image generation \citep{Higgins2017Beta-VAE:FRAMEWORK, Kim2018DisentanglingFactorising}. This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective: among all the generative models which can have the same marginal distribution, identify the one with the true causal factors. To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation. Recent findings \citep{Locatello2018ChallengingRepresentations} emphasize the vital role of inductive biases from models or data for useful disentanglement. Unsupervised disentanglement, based on the assumption of domain stationarity, is a promising direction \citep{Miladinovic2019DisentangledRepresentations}. Nevertheless, this leaves unused a wealth of ground truth domain parameters, which can be cheaply collected in simulated datasets. This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks \citep{sarafianos2017adaptive, lee2018spigan}. We thus use supervised disentanglement \citep{Locatello2019DisentanglingLabels}, treating the ground truth domain parameters from simulations as privileged information, which, to the best of our knowledge, has not been applied to dynamical system prediction previously.
\textbf{Contributions}
Our work is the first, to the best of our knowledge, that explicitly treats domain parameters as factors of variation of the data distribution and uses privileged information from simulated data to disentangle those domain parameters from the dynamics in a supervised way. We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and in the high-dimensional video rendering of a swinging pendulum. Disentanglement has, in the past, been mostly applied to VAEs because they are easily amenable to it. The problem is that these models usually lack competitive performance. In the case of video sequences, we additionally apply disentanglement to a hybrid model with both stochastic and deterministic parts \citep{Hafner2018LearningPixels}. In doing so, we not only assess disentanglement on a generative model outside the boundaries of VAEs but furthermore do so on a model which is considered state-of-the-art in long-term video prediction \cite{Saxena2021ClockworkAutoencoders}. In all cases, the prediction performance is assessed both in-distribution and in OOD settings of increasing degrees of distribution shift. To the best of our knowledge, this is the first time such a rigorous OOD test has been performed. Our results in phase space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models, both in-distribution and OOD. In modelling dynamics in video sequences, results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction.
\textbf{Limitations} This work focuses on dynamical system prediction. While the results can potentially open up many applications in general time-series modelling, these are beyond the scope of this work. Furthermore, this work aims to be a rigorous empirical study focused on downstream task performance; inspecting the disentangled representations with appropriate metrics is likewise out of scope.
\section{Related Work}
\textbf{VAEs and disentanglement} Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components. This can be seen as finding the true causal model of the data. While supervised disentanglement in generative models is a long-standing idea \citep{Mathieu2016DisentanglingTraining}, information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs \citep{Higgins2017Beta-VAE:FRAMEWORK, Kim2018DisentanglingFactorising}. The impossibility result of \citet{Locatello2018ChallengingRepresentations} suggested that disentangled learning is only possible with inductive biases coming either from the model or the data. Hence, the focus shifted back to semi- or weakly-supervised disentanglement approaches \citep{Locatello2019DisentanglingLabels, Locatello2020Weakly-SupervisedCompromises}. While most of these methods focus on disentanglement metrics, we assess disentanglement directly on the downstream prediction task.
\textbf{Disentanglement in sequence modelling} While disentanglement techniques are mainly tested in a static setting, there is a growing interest in applying them to sequence dynamics. Using a bottleneck based on physical knowledge, \citet{Iten2018DiscoveringNetworks} learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions on OOD data \citep{Barber2021JointSystems}. Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics \citep{Fraccaro2017ALearning, Li2018DisentangledAutoencoder}, but focus mostly on modelling variations in the content, failing to take the dynamics into account. In hierarchical approaches \citep{Karl2017DeepData}, different layers of latent variables correspond to different timescales: for example, in speech analysis for separating voice characteristics and phoneme-level attributes \citep{Hsu2017UnsupervisedData}. In an approach similar to our work, \citet{Miladinovic2019DisentangledRepresentations} separate the dynamics from sequence-wide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way which disregards a wealth of cheap information, and they only assess OOD generalization in a limited way.
\textbf{Feed-forward models for sequence modelling} Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model \citep{Krishnan2015DeepFilters, Karl2017DeepData}. Feed-forward models, with necessary inductive biases, have been used for sequence modelling both in language \citep{Bai2018TrellisModeling} and also in dynamical systems \citep{Greydanus2019HamiltonianNetworks, Fotiadis2020ComparingPropagation}. Disentanglement has not been successfully addressed in these models; together with \citet{Barber2021JointSystems}, our work is an attempt in this direction.
\textbf{Privileged information for domain adaptation.} Using privileged information during training has been shown to help with domain adaptation in computer vision tasks. For example, using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain \citep{lee2018spigan}. Similarly, clip art data can help with domain transfer in an action recognition task \citep{sarafianos2017adaptive}.
\begin{figure}[t]
\centering
\begin{subfigure}{.45\linewidth}
\includegraphics[width=\linewidth]{images/model-vae.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.49\linewidth}
\includegraphics[width=\linewidth]{images/model-VAE-CNN.pdf}
\end{subfigure}
\caption{\textbf{The VAE-SD model (Left).} From an $n$-dimensional phase space input, an $o$-dimensional prediction of future time-steps is derived. The loss function has three parts: the reconstruction loss is replaced by a prediction loss, the KL-divergence enforces the prior onto the latent space, and the extra loss term enforces the supervised disentanglement of domain parameters in latent space. \textbf{The CNN-VAE-SD model (Right).} The input frames are first encoded to a low-dimensional space, analogous to phase space. Then, as before, the VAE-SD prediction scheme is applied recursively. The low-dimensional predictions are then decoded back to pixel space.}
\label{fig:model_sdvae}
\end{figure}
\section{Methods}
\subsection{Variational Autoencoders}
Variational autoencoders (VAEs) \citep{Kingma2014Auto-encodingBayes} offer a principled approach to latent variable modeling by combining a variational inference model $q_{\phi}(\bm{z}|\bm{x})$ with a generative model $p_{\theta}(\bm{x}|\bm{z})$. As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:
$$\begin{aligned}
\mathcal{L}_{\phi, \theta}(\bm{x}) &= \mathbb{E}_{q_{\phi}(\mathbf{z} \mid \mathbf{x})}[\log p_{\theta}(\mathbf{x} \mid \mathbf{z})]-D_{KL}(q_{\phi}(\mathbf{z} \mid \mathbf{x})|| p(\mathbf{z}))
\end{aligned}$$
The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior.
\textbf{Design choices for the model} We use an isotropic unit Gaussian prior $p(\bm{z}) =\mathcal{N}(\bm{z} \mid \mathbf{0}, \bm{I})$ which helps to disentangle the learned representation \citep{Higgins2017Beta-VAE:FRAMEWORK}. The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance $q_{\phi}(\bm{z} \mid \bm{x}) =\mathcal{N}\left(\bm{z} \mid \bm{\mu}_{z}, \bm{\Sigma}_{z}\right)$, allowing a closed form KL-divergence, while the decoder has a Laplace distribution $p_{\theta}(\bm{x} \mid \bm{z})=\mathrm{Laplace}\left(\bm{x} \mid \bm{\mu}_{x}, \gamma \bm{I}\right)$ with constant diagonal covariance $\gamma>0$, which is tuned empirically. This leads to an $L_1$ loss that provides improved results in some problems \citep{Mathieu2018DisentanglingAutoencoders} and empirically works better in our case. The parameters $\bm{\mu}_{z} \equiv \bm{\mu}_{z}(\bm{x} ; \phi), \bm{\Sigma}_{z} \equiv \operatorname{diag}\left[\bm{\sigma}_{z}(\bm{x} ; \phi)\right]^{2}$, and $\bm{\mu}_{x} \equiv \bm{\mu}_{x}(\bm{z} ; \theta)$ are computed via feed-forward neural networks.
\begin{figure}[t]
\centering
\begin{subfigure}{.35\linewidth}
\includegraphics[width=\linewidth]{images/datasets/distribution_PP_theta_omega.pdf}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{.35\linewidth}
\includegraphics[width=\linewidth]{images/datasets/distribution_PP_g_l.pdf}
\end{subfigure}
\hspace{1cm}
\caption{\textbf{Parameter distribution for the video pendulum test-sets.} The initial angle and angular velocity are drawn from the same uniform distribution for all test-sets. For the in-distribution test-set we draw the pendulum length and gravity from the same distribution as during training. The OOD test-sets represent distribution shifts of increasing magnitude.}
\label{fig:data-pp}
\end{figure}
\subsection{Disentanglement of domain parameters in latent space}\label{sec:supode}
Apart from the disentanglement that stems from the choice of prior $p(\bm{z})$, we explicitly disentangle part of latent space so that it corresponds to the domain parameters of each input sequence. We achieve this by using a regression loss term $\mathcal{L}_{\bm{\xi}}(\mathbf{z}_{1:k}, \bm{\xi})$ between the ground truth factors of the domain parameters $\bm{\xi} \in \mathbb{R}^k$ and the output of the corresponding latents, $\mathbf{z}_{1:k}$. We opted for an $L_1$ loss, corresponding to a Laplacian prior with mean $\bm{\xi}$ and unitary covariance. Previous methods have reported that binary cross-entropy works better than $L_2$ \citep{Locatello2019DisentanglingLabels} but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function $\mathcal{G}(\mu_{z_i})$ which linearly scales the $\mu_{z_i}$ between the min and max values of the corresponding factor of variation:
$$\mathcal{G}\left({\mu}_{z_i}\right) = \mu_{z_i} \cdot (\max(\xi_i) - \min(\xi_i)) + \min(\xi_i) $$
where $\xi_i$ denotes the $i$-th domain parameter, and $\min(\xi_i)$ and $\max(\xi_i)$ are its minimum and maximum values over the training set. In all cases, the regression term is weighted by a parameter $\delta$ which is empirically tuned. Plugging these choices in results in the following loss function:
{\small
\begin{align*}\mathcal{L}_{\phi, \theta}(\bm{x})=&
\frac{1}{n} \sum_{i=1}^{n}\left\{
\mathbb{E}_{q_{\phi}\left(\bm{z} \mid \bm{x}^{(i)}\right)}\left[\frac{1}{\gamma}\left\|\bm{x}^{(i)}-\bm{\mu}_{x}(\bm{z} ; \theta)\right\|_{1}\right]+d \log \gamma\right. && \text{(Reconstruction loss)} \\
&+\left\|\bm{\sigma}_{z}\left(\bm{x}^{(i)} ; \phi\right)\right\|_{2}^{2}-\log \left|\operatorname{diag}\left[\bm{\sigma}_{z}\left(\bm{x}^{(i)} ; \phi\right)\right]^{2}\right| +\left\|\bm{\mu}_{z}\left(\bm{x}^{(i)} ; \phi\right)\right\|_{2}^{2} && \text{(KL-Divergence)} \\
&+\delta \left\| \bm{\xi}^{(i)} -\left.\mathcal{G}\left(\bm{\mu}_{z_{1:k}}\left(\bm{x}^{(i)} ; \phi\right)\right) \right\|_{1}
\right\} && \text{(Sup. disentangl. loss)}
\end{align*}}
Using the reparameterization trick \citep{Kingma2014Auto-encodingBayes}, the loss is amenable to optimization by stochastic gradient descent, with batch size $n$. The model architecture can be seen in Figure \ref{fig:model_sdvae}(left).
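For concreteness, the three terms above can be combined in a few lines of code. The following PyTorch-style sketch is illustrative rather than our exact implementation: the constant $d\log\gamma$ term is dropped, and the tensors \texttt{xi\_min} and \texttt{xi\_max} (hypothetical names) hold the per-parameter training-set ranges used by $\mathcal{G}$.
\begin{verbatim}
import torch

def vae_sd_loss(x_pred, x_target, mu_z, logvar_z,
                xi_true, xi_min, xi_max, gamma=1.0, delta=1.0):
    # Prediction loss: L1 error of the Laplace decoder, scaled by
    # 1/gamma (the constant d*log(gamma) term is omitted).
    rec = torch.abs(x_pred - x_target).sum(dim=-1).mean() / gamma

    # KL divergence between N(mu_z, diag(sigma_z^2)) and N(0, I).
    kl = 0.5 * (mu_z.pow(2) + logvar_z.exp()
                - logvar_z - 1.0).sum(dim=-1).mean()

    # Supervised disentanglement: rescale the first k latent means to
    # the training-set range of each domain parameter, then L1 error.
    k = xi_true.shape[-1]
    mu_scaled = mu_z[..., :k] * (xi_max - xi_min) + xi_min
    sd = torch.abs(xi_true - mu_scaled).sum(dim=-1).mean()

    return rec + kl + delta * sd
\end{verbatim}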
\subsection{Disentanglement for video dynamics}\label{sec:methods-rssm}
We further investigate the effect of disentanglement on video sequence dynamics. To this end, two generative models are used. The first, called CNN-VAE, is derived from the VAE formulation of the previous section, with the addition of a convolutional encoder and decoder.
The encoder projects the input frames down to a low-dimensional space which can be thought of as equivalent to the phase space of the system. A VAE is applied to this projection to predict future coordinates in the ``phase space''. The decoder then maps the predictions of the VAE back to pixel space. The schematic of the model can be seen in Figure \ref{fig:model_sdvae}(right).
The second model we use is the Recurrent State Space Model (RSSM), which has been successfully used for planning \citep{Hafner2018LearningPixels}. Since RSSM is a hybrid model combining deterministic and variational components, it allows us to assess disentanglement outside the limited scope of VAEs. Furthermore, being a state-of-the-art model in long-term video prediction \citep{Saxena2021ClockworkAutoencoders}, it allows us to identify the limits of applying disentanglement in competitive models. The loss function we use shares the same formulation as in the original work of \citet{Hafner2018LearningPixels}, with the addition of the supervised disentanglement loss. Since in the RSSM formulation there are latent variables for each time-step, we apply an $L_2$ disentanglement loss on all of them:
\begin{align*}
\mathcal{L}_{RSSM-SD}=\sum_{t=1}^{T}&\left(\underbrace{\mathbb{E}_{q(\bm{s_{t}} \mid \bm{o_{\leq t}})}[\ln p(\bm{o_{t}} \mid \bm{s_{t}})]}_{\text{reconstruction}}-
\underbrace{
\mathbb{E}_{q(\bm{s_{t-1}} \mid \bm{o_{\leq t-1}})}\left[\mathrm{KL}[q(\bm{s_{t} }\mid \bm{o_{\leq t}}) \| p(\bm{s_{t}} \mid \bm{s_{t-1}})]\right]
}_{\text{prediction}} \right.\\
&\left.+\underbrace{\delta \mathbb{E}_{q(\bm{s_{t}} \mid \bm{o_{\leq t}})}\left[\left\|
\bm{\xi} - \bm{s_{t}}^{(1:k)} \right\|_{2}
\right]}_{\text{supervised disentanglement loss}}\right)
\end{align*}
where $\bm{o_t}$ denotes the observations, $\bm{s_t}$ the stochastic latent variables at time $t$, $\bm{\xi}$ the $k$-dimensional vector of domain parameters, and $\delta$ tunes the supervised disentanglement strength.
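In implementation terms the new term is a small addition: at every time-step, the first $k$ dimensions of the sampled stochastic state are penalised towards the (sequence-wide) domain parameters. A minimal sketch, with an assumed tensor layout, reads:
\begin{verbatim}
import torch

def rssm_sd_term(states, xi_true, delta=1.0):
    # states: (T, B, state_dim) posterior samples s_t per time-step.
    # xi_true: (B, k) domain parameters, constant over the sequence.
    k = xi_true.shape[-1]
    err = states[..., :k] - xi_true.unsqueeze(0)  # broadcast over time
    return delta * err.pow(2).sum(dim=-1).mean()
\end{verbatim}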
\begin{figure}[t]
\centering
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_vaes_system_pendulum-2_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_vaes_system_lv-3_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_vaes_system_3body-4_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.4\linewidth]{images/results/quant_legend_vaes.pdf}
\end{subfigure}
\caption{\textbf{Disentanglement scaling methods.} MAE for the first 200 time-steps. Boxplots show the top 5 models of each architecture. Both VAE-SD \& VAE-SSD outperform the VAE in all 3 systems. VAE-SSD better captures the parameter space of the original test-set, but in most cases VAE-SD generalizes better OOD.}
\label{fig:mae-vaesd}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_mlp-vae_system_pendulum-2_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_mlp-vae_system_lv-3_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/results/quant_mlp-vae_system_3body-4_n_time_200_nexps_5_useval_True.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{images/results/quant_legend_mlp-vae.pdf}
\end{subfigure}
\caption{\textbf{Disentanglement in VAE vs MLP.} MAE for the first 200 time-steps. Boxplots show the top 5 models of each architecture. While disentanglement in VAE-SD consistently improves results, disentanglement in MLP-SD does not always generalize well OOD, producing unstable predictions for the OOD Lotka-Volterra datasets.}
\label{fig:mae-mlpvsvae}
\end{figure}
\section{Experiment - ODE Phase Space Dynamics}
\subsection{Datasets}
In the phase-space experiments we compare the models on three well-studied dynamical systems: the swinging pendulum, the Lotka-Volterra equations used to model predator-prey populations, and the planar 3-body system:
\begin{center}
$\begin{array}{ll}
\begin{array}{l}
\textbf{Simple pendulum: } \ddot{\theta}+\frac{g}{\ell} \sin \theta=0 \\
\textbf{Lotka-Volterra:}
\begin{array}{l} \dot{x}=\alpha x-\beta x y \\
\dot{y}=\delta x y-\gamma y\end{array}
\end{array} &
\textbf{3-body system:} \begin{array}{c}\bar{m}_{i} \frac{d \vec{v}_{i}}{d t}=K_{1} \sum_{j \neq i} \frac{\bar{m}_{i} \bar{m}_{j}}{\bar{r}_{i j}^{3}} \overrightarrow{r_{i j}} \\ \frac{d \overrightarrow{\bar{x}}_{i}}{d \bar{t}}=K_{2} \vec{v}_{i}, \ i\in\{1,2,3\}\end{array}
\end{array}$
\end{center}
The systems were chosen for varied complexity in terms of degrees of freedom, number of ODE equations, and factors of variation. For the pendulum we consider one factor of variation, its length $l$; Lotka-Volterra has 4 factors of variation $\alpha, \beta, \gamma, \delta$; and the 3-body system also has 4 factors of variation $K_1, m_1, m_2, m_3$.
Factors are drawn uniformly from a predetermined range which is the same between the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and Hard, representing a smaller and a bigger shift from the original range. As a visual example, the distribution of the factors of variation for the Lotka-Volterra system is illustrated in Figure \ref{fig:data-lv} of the Appendix. The data were additionally corrupted with Gaussian noise. Dataset details can be found in Table \ref{tab:datasets} of the Appendix.
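To make the generation protocol concrete, the sketch below integrates the pendulum ODE with \texttt{scipy} and samples the factor of variation from disjoint ranges; the numerical ranges shown are placeholders rather than the exact values of Table \ref{tab:datasets}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def pendulum_trajectory(length, g=9.81, theta0=1.0, omega0=0.0,
                        t_max=20.0, n_steps=2000, noise_std=0.01):
    # Simple pendulum in phase-space coordinates (theta, omega).
    def rhs(t, y):
        theta, omega = y
        return [omega, -(g / length) * np.sin(theta)]
    t = np.linspace(0.0, t_max, n_steps)
    sol = solve_ivp(rhs, (0.0, t_max), [theta0, omega0],
                    t_eval=t, rtol=1e-8)
    traj = sol.y.T                                  # (n_steps, 2)
    return traj + np.random.normal(0.0, noise_std, traj.shape)

# Factor of variation sampled from disjoint ranges (placeholders).
train_lengths = np.random.uniform(0.5, 1.5, size=1000)  # in-distribution
ood_lengths   = np.random.uniform(1.5, 2.0, size=1000)  # OOD shift
\end{verbatim}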
\subsection{Models and training}
The main goal of these experiments is to assess whether OOD prediction can be improved by disentangling dynamical system parameters in the latent space of VAEs. We opt to use simple models to allow more experiments and comparisons.
Our main baseline is the VAE upon which we propose two enhancements that leverage supervised disentanglement.
The first VAE model, termed VAE-SD, uses supervised disentanglement without a scaling function, while the second model, termed VAE-SSD, uses the additional linear scaling function $\mathcal{G}(\mu_{z_i})$ for the latent variable means $\mu_{z_i}$, as described in Section \ref{sec:supode}. Another baseline is a multilayer perceptron (MLP) autoencoder, which allows comparison with a deterministic counterpart of the VAE. We also use supervised disentanglement on the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess if the privileged information can improve deterministic models. Lastly, we include an LSTM model, a popular choice for low-dimensional sequence modelling \citep{yu2020lstm}, as a representative recurrent method.
Early experiments revealed a significant variance in the performance of the models, depending on hyperparameters. Under these conditions, we took various steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some of which are shared, while others are model-specific. Thirdly, we conduct a thorough grid search on the hyperparameters to avoid undermining any model (details can be found in Tables \ref{tab:hp-pendulum}, \ref{tab:hp-lv} and \ref{tab:hp-3body} of the Appendix). Lastly, we train the same number of experiments for all models, which amounts to 1440 trained models in total, as summarized in Table \ref{tab:numexperiments} of the Appendix.
\begin{figure*}[t]
\begin{subfigure}{.32\linewidth}
\begin{tikzpicture}[spy using outlines={circle,gray,magnification=2.5,size=2.0cm, connect spies}]
\node {\includegraphics[width=\linewidth]{images/viz/viz_noise_pendulum-2_n_test_4_start_200_duration_1000_05.pdf}};
\spy on (-1.0,-0.75) in node [left] at (1.2,0.2);
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{.32\linewidth}
\begin{tikzpicture}[spy using outlines={circle,gray,magnification=2.5,size=2.0cm, connect spies}]
\node {\includegraphics[width=\linewidth]{images/viz/viz_noise_lv-3_n_test_4_start_200_duration_0600_02.pdf}};
\spy on (-1,-0.75) in node [left] at (1.5,0.2);
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{.32\linewidth}
\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=2.0,size=0cm}]
\node {\includegraphics[width=\linewidth]{images/viz/viz_noise_3body-4_n_test_4_start_500_duration_0300_10.pdf}};
\spy on (-0.2,-1.05) in node [left] at (1.7,-0.7);
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{images/viz/viz_legend_noise.pdf}
\end{subfigure}
\caption{\textbf{Model predictions in phase space}. Trajectories are taken from the OOD Test-Set Hard of each system. The model input is noisy. The circle and bold `$\times$' markers denote the start and end of the ground truth trajectories respectively.}
\label{fig:qual-all}
\end{figure*}
\subsection{Results}
For each dynamical system we focus on the performance on the three test-sets: the in-distribution test-set, which shares the same parameter distribution with the training set, and the two OOD test-sets (Easy and Hard), which represent an increasing parameter shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth for the first 200 time-steps. We consider this to be sufficiently long-term, as it is at least 20 times longer than the prediction horizon used during training. Long predictions are obtained by re-feeding the model predictions back as input (a schematic rollout is sketched below). This approach has been shown to work well in systems where the dynamics are locally deterministic \citep{Fotiadis2020ComparingPropagation}. A summary of the quantitative results can be found in Figures \ref{fig:mae-vaesd} \& \ref{fig:mae-mlpvsvae} and Table \ref{tab:resmae}. To account for the variability in the results, we present a summary of the best 5 runs of each model, selected by validation MAE. We generally observe that model performance is correlated with the distribution shift of the test-sets, and this is consistent for all systems and models. The MAE increases as we move from the in-distribution test-set to the OOD Easy and Hard test-sets. Nevertheless, not all models suffer equally from the OOD performance drop.
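The rollout is schematically the following loop, assuming a model that maps a window of \texttt{n\_in} past states to the next block of states (an illustrative sketch, not our exact code):
\begin{verbatim}
import torch

@torch.no_grad()
def rollout(model, init_window, n_total):
    # init_window: (B, n_in, dim) seed states; returns (B, n_total, dim).
    n_in = init_window.shape[1]
    window, preds, produced = init_window, [], 0
    while produced < n_total:
        next_states = model(window)                # (B, n_out, dim)
        preds.append(next_states)
        produced += next_states.shape[1]
        # Re-feed predictions: slide the input window forward.
        window = torch.cat([window, next_states], dim=1)[:, -n_in:]
    return torch.cat(preds, dim=1)[:, :n_total]
\end{verbatim}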
Comparing the VAEs (Figure \ref{fig:mae-vaesd}), we see that disentangled VAE models offer a substantial and consistent improvement over the VAE across all 3 dynamical systems. The improvement is more pronounced for the OOD test-sets where the distribution shift is greater, a strong indication that disentanglement of domain parameters is an inductive bias that can lead to better generalization. We also observe that VAE-SSD models the in-distribution data better than VAE-SD. This seems to come at a slight overfitting cost, because the VAE-SD provides better OOD extrapolation in most cases. This could be explained by the dependence of the scaling function on the min and max values of the factors in the training set. The extra information allows the model to better capture the training data but sacrifices some generalization capacity.
On the other hand, disentanglement results for the MLP are mixed. While in-distribution MLP-SD offers better results than the plain MLP, on the OOD test-sets MLP-SD only performs favourably on the pendulum data. Furthermore, in Lotka-Volterra, MLP-SD models are very unstable, a drawback that affects some VAE-SD models too (see Table \ref{tab:resnan} of the Appendix). Probabilistic models seem better suited to capture the variation in the data. The contrast between VAE-SD and MLP-SD illustrates that making use of privileged information and latent space disentanglement is not trivial, and more work is needed to help us understand what works in practice and why. Lastly, the LSTM (Figure \ref{fig:mae-all} \& Table \ref{tab:resmae} of the Appendix) is only comparable on the pendulum dataset and only for small OOD shifts. Qualitative predictions can be found in Figure \ref{fig:qual-all}.
\begin{figure}[t]
\centering
\caption*{\textbf{SSIM}}
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_ssim_dset_test_1_point.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_ssim_dset_test_2_point.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_ssim_dset_test_3_point.pdf}
\end{subfigure}
\end{figure}
\begin{figure}[t]
\centering
\caption*{\textbf{LPIPS}}
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_lpips_dset_test_1_point.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_lpips_dset_test_2_point.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=\linewidth]{images/pp_lines/pp_lines_best_colors2_lpips_dset_test_3_point.pdf}
\end{subfigure}
\caption{\textbf{Prediction quality on the video pendulum} as a function of the distance predicted into the future (x axis). For SSIM higher is better. For LPIPS lower is better.}
\label{fig:res-ssim-lpips-point}
\end{figure}
\section{Experiment - Video Sequence Dynamics}
In the first experiment we assessed supervised disentanglement for phase space prediction, where the states of the input trajectories are fully observable and only the domain parameters are unknown. This experiment extends the idea of supervised disentanglement to pixel-space input and output, where the physical states have to be inferred by the model.
\begin{figure}[t]
\centering
\includegraphics[width=0.94\hsize]{images/pp-qual/_pp_qual_diff_testset_1_3.pdf}
\caption{Absolute difference between ground truth and predictions on the test-set of the pendulum data set.}
\label{fig:qual-pendulum1}
\end{figure}
\subsection{Datasets}
The dynamical system we use in this experiment is the swinging pendulum, a common benchmark for modelling dynamics in video sequences \citep{Brunton2020MachineMechanics, Barber2021JointSystems}. We consider 4 factors of variation: the length $l$, gravity $g$, initial angle $\theta$, and initial angular velocity $\omega$. Factors are drawn uniformly from a predetermined range. As before, we create a test-set and two additional OOD test-sets (Easy and Hard). The OOD sets have length and gravity values outside of the original range, while the initial conditions $\theta, \omega$ are drawn from the same distribution. The distribution of the factors of variation for the test-sets is illustrated in Figure \ref{fig:data-pp}. The trajectories are first computed in phase space using a numerical simulator and then rendered as video frames of $64\times64$ pixels. More details about the dataset can be found in Section \ref{sec:dataset-pp} of the Appendix.
\subsection{Models and Training}
In this experiment we use two different models, CNN-VAE and RSSM. CNN-VAE is described in Section \ref{sec:methods-rssm} and architectural details can be found in Section \ref{sec:app:cnnvae}. During training of the CNN-VAE, the inner VAE is used recursively for prediction, with the number of recursions being a hyperparameter (Table \ref{tab:hp-vaecnn} of the Appendix). We found that this type of training leads to more stable long-term predictions. In total, 48 CNN-VAE models are trained, half of which use supervised disentanglement (CNN-VAE-SD). The RSSM model is a generative model including both a stochastic and a deterministic component. We only use supervised disentanglement on the stochastic part, and term that model RSSM-SD. Disentanglement is applied to all four factors of variation of the domain, despite only length and gravity varying between datasets. Detailed architectural and training details can be found in Section \ref{sec:app:rssm} of the Appendix.
\subsection{Results}
Figure \ref{fig:res-ssim-lpips-point} shows the quality of predictions on the video pendulum for two perceptual metrics, structural similarity (SSIM) and LPIPS \citep{Zhang2018}, as a function of the predicted time distance. We select the models which have the best cumulative metrics over the first 800 time-steps on a validation set.
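Both metrics are computed per predicted frame with standard packages; the following sketch assumes grayscale frames in $[0,1]$ and uses the reference \texttt{lpips} implementation:
\begin{verbatim}
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

lpips_fn = lpips.LPIPS(net='alex')  # pretrained perceptual metric

def frame_metrics(pred, gt):
    # pred, gt: numpy arrays of shape (T, H, W), values in [0, 1].
    ssim = np.array([structural_similarity(p, g, data_range=1.0)
                     for p, g in zip(pred, gt)])
    # LPIPS expects (N, 3, H, W) tensors scaled to [-1, 1].
    def to_tensor(a):
        t = torch.from_numpy(a).float()[:, None].repeat(1, 3, 1, 1)
        return t * 2.0 - 1.0
    with torch.no_grad():
        lp = lpips_fn(to_tensor(pred), to_tensor(gt)).flatten().numpy()
    return ssim, lp
\end{verbatim}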
For the CNN-VAE, the effects of disentanglement are more subtle. We observe that, in-distribution, the disentangled CNN-VAE-SD has very similar quality compared to the CNN-VAE. For the OOD datasets, though, disentanglement offers improved long-term predictions. The improvement is more noticeable in LPIPS, and when the distribution shift is bigger (OOD Test-set Hard), indicating that disentanglement can help with OOD robustness. For RSSM, we first note that both models perform significantly better than the CNN-VAE models, which is expected since they are considered competitive in long-term video prediction. Disentanglement in RSSM seems to produce a trade-off. The plain RSSM model performs better in short-term prediction, but its performance deteriorates with time, reaching CNN-VAE levels in all metrics. On the other hand, the RSSM-SD model provides the best long-term scores in all metrics and all datasets. Qualitative results in Figure \ref{fig:qual-pendulum1} show that almost all models produce accurate short-term predictions (approximately up to 200 time-steps). This further stresses the importance of disentanglement for long-term performance. In terms of OOD robustness, disentanglement also appears to be helping. While the RSSM-SD model lags in short-term prediction quality on the in-distribution test-set, this performance gap closes as the OOD test-sets get harder. More specifically, on the in-distribution test-set the RSSM-SD overtakes RSSM in SSIM after around 400 frames, while in the OOD Easy and Hard test-sets this happens around 350 and 250 time-steps respectively. This narrowing gap indicates that robustness improves with increasing distribution shifts. The above findings are corroborated by peak signal-to-noise ratio (PSNR) comparisons (Figure \ref{fig:res-psnr-point} and Table \ref{tab:res-pp-point} of the Appendix). Furthermore, the qualitative results show that all models accurately capture the appearance of the pendulum even long-term. Where they differ is in how well they capture the dynamics of the pendulum movement. This could offer an explanation of why disentangling the domain from the dynamics is important, and why in practice it offers better long-term and out-of-distribution performance.
Overall, experiments suggest that supervised disentanglement can be used to model dynamical systems in video sequences, resulting in improved long-term and OOD performance.
\section{Conclusions}
Using supervised disentanglement of domain parameters in generative models is a promising avenue for improving robustness. Our experiments show that it can improve both OOD generalization and long-term prediction of dynamical systems. This was demonstrated in phase-space with VAEs and also in video sequence modelling with state-of-the-art RSSMs.
By treating the domain parameters as factors of variation of the data and applying supervised disentanglement, several inductive biases are potentially enforced. First, in addition to prediction, the model also performs ``soft'' system identification, which acts as a regularizer. Second, it creates an implicit hierarchy such that some latent variables correspond to sequence-wide domain parameters while the rest capture the instantaneous dynamics. We speculate that this could additionally make the latent space more interpretable. Third, if the model can correctly extract the parameters, the prediction becomes conditioned on them, which is closer to how numerical integrators work, where the domain is known. All of these could lead the model to learn the correct causal structure of the data. Nevertheless, using privileged information for OOD robustness is not always straightforward and requires further exploration. This is evident from the results of the MLP autoencoders, which do not yield as consistent improvements. A criticism of our method could be that cheap privileged information is not always available and/or depends on using simulated data. However, training on simulations is an increasingly appealing option because it is a cheap way to generate data to begin with. This is also demonstrated by the many advances in techniques like sim2real \citep{peng2007sim2real} that try to bring models trained on simulated data to the real world. There thus seems to be no reason not to use the privileged information that comes with simulated data.
In that light, supervised disentanglement can provide a pathway to real-world applications where robustness in dynamical system prediction is critical. Applying the method to datasets with more complex dynamics could further increase its relevance. Sequence-wide parameters could also be exploited through self-supervision.
For reference, the objective optimized by the RSSM-SD model reads
\begin{align*}
\mathcal{L}_{RSSM-SD}=\sum_{t=1}^{T}&\left(\underbrace{\mathbb{E}_{q(\bm{s}_{t} \mid \bm{o}_{\leq t}, \bm{h}_{\leq t})}\left[\ln p(\bm{o}_{t} \mid \bm{s}_{t}, \bm{h}_{t})\right]}_{\text{reconstruction}}-
\underbrace{
\mathbb{E}_{q(\bm{s}_{t-1} \mid \bm{o}_{\leq t-1}, \bm{h}_{\leq t-1})}\left[\mathrm{KL}\left[q(\bm{s}_{t} \mid \bm{o}_{\leq t}, \bm{h}_{\leq t}) \,\|\, p(\bm{s}_{t} \mid \bm{s}_{t-1})\right]\right]
}_{\text{prediction}} \right)
\end{align*}
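As in standard RSSM training, the reconstruction term rewards accurate decoding of the observations from the latent state, while the prediction term keeps the transition prior close to the filtering posterior, the component used when rolling the model out over long horizons without observations.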
\end{document}
\section{Introduction}
The asymptotic behavior of the variance of the number of zeros of random trigonometric polynomials $\sum a_k\cos(kt) + b_k\sin(kt)$ with independent Gaussian coefficients has been established in \cite{Gra11}. Since then, the variances of numerous models have been studied: for instance, see \cite{Aza16} for the analogous model $\sum a_k\cos(kt)$, see \cite{Bal19, Do20} in the independent framework with non Gaussian coefficients, or more recently \cite{Lub21} for random orthogonal polynomials on the real line. In greater dimension, the asymptotic behavior of the variance of random nodal volume has been established in several kinds of random waves models, see for instance the survey \cite{Ros19} and the references therein.\jump
Roughly speaking, two distinct methods are used in the literature to study the asymptotic behavior of the variance: the Wiener chaos expansion of the number of zeros (see for instance \cite{Aza16, Mar20}), and the Kac--Rice formula (see \cite{Gra11,Lub21}). In this paper we adopt the second method to make explicit the asymptotics of the variance of the number of zeros of random trigonometric polynomials with dependent coefficients.\jump
This paper can be viewed as the natural continuation of \cite{Ang19} and \cite{Ang21}, in which the asymptotic behavior of the mean number of zeros of this model has been established. To the best of our knowledge, the question of the variance for dependent trigonometric polynomials has not been considered until now, and the aforementioned examples do not seem to cover this situation.\jump
Let us now detail our model. Let $\T = \R/2\pi\Z$ be the one-dimensional torus, which can be identified with a segment of $\R$ of length $2\pi$. For $s,t\in \T$ we define the distance
\[\dist(s,t)=d(s-t,2\pi\Z).\numberthis\label{eq:19}\]
For $s\in\T$ we define
\[X_n(s) = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\left(a_k\cos(ks) + b_k\sin(ks)\right),\]
with $(a_k)_k, (b_k)_k$ two independent stationary centered Gaussian processes with correlation function $\rho : \Z\rightarrow \R$. That is,
\[\forall k,l\geq0,\quad\E[a_ka_l] = \E[b_kb_l] = \rho(k-l).\]
Thanks to Bochner Theorem, the correlation function $\rho$ is associated with a spectral measure $\mu$ on the torus $\T$ via the relation
\[\rho(k) = \frac{1}{2\pi}\int_\T e^{-iku}\dd \mu(u).\]
We denote by $Z_{X_n}(I)$ the number of zeros of $X_n$ on a subinterval $I$ of the torus $\T$. Under suitable conditions on the spectral measure $\mu$, it has been shown in \cite{Ang19} and \cite{Ang21} that the expectation of the number of zeros of the process $X_n$ on $\T$ behaves like $\frac{2}{\sqrt{3}}n$, as in the independent framework.\jump
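A simple example to keep in mind is the geometric correlation $\rho(k) = \theta^{|k|}$ with $|\theta|<1$, which arises for instance when $(a_k)_k$ and $(b_k)_k$ are stationary Gaussian autoregressive sequences of order one. In that case the spectral measure is given by the Poisson kernel
\[\dd\mu(u) = \frac{1-\theta^2}{1-2\theta\cos(u)+\theta^2}\,\dd u,\]
whose density is continuous and positive on $\T$, so that the results below apply.\jump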
The first main theorem of this article is the following, and makes explicit the variance asymptotics for the number of zeros of the process $X_n$ on $\T$, as $n$ grows to infinity, under the assumption that the spectral measure $\mu$ has a positive continuous density.
\begin{theorem}
\label{theorem1}
We suppose that the spectral measure $\mu$ has a positive continuous density $\psi$ with respect to the Lebesgue measure on the torus $\T$. Then there is an explicit positive constant $C_\infty$ that does not depend on $\psi$, such that for any subinterval $J$ of the torus,
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty.\]
\end{theorem}
The constant $C_\infty\simeq 0.089$ is thus the same constant computed in \cite{Gra11} in the particular framework of independent Gaussian random variables. It is quite remarkable that the limiting value is universal with respect to the dependency of the Gaussian coefficients, as long as the spectral measure $\mu$ has a positive continuous density.\jump
Theorem \ref{theorem1} is in fact an application of the more general Theorem \ref{theorem2} below, which puts forward the principal ingredients necessary to obtain such a universal asymptotics for the variance. Let $(X_n)_n$ be a sequence of centered Gaussian random processes defined on an open subinterval $I$ of the torus $\T$, with covariance functions $(r_n)_n$ defined for $n\geq 0$ and $s,t\in I$ by
\[r_n(s,t) = \E[X_n(s)X_n(t)].\]
\begin{theorem}
\label{theorem2}
Let $J$ be a subinterval of $I$ such that $\overline{J}\subset I$. We suppose that the sequence of random processes $(X_n)_n$ satisfies the two following conditions
\begin{itemize}
\item[$(A1)$] There exists a continuous positive function $\psi$ on $I$ such that, for every compact subset $K$ of $\R$, uniformly on $x\in J$, $u,v\in K$, and $a,b\in \lbrace 0,1,2,3,4\rbrace$,
\[\lim_{n\rightarrow+\infty} \frac{1}{n^{a+b}} r_n^{(a,b)}\left(x+\frac{u}{n},x+\frac{v}{n}\right) = (-1)^b\psi(x)\sinc^{(a+b)}(u-v).\]
\item[$(A2)$] There is a constant $C$ and an exponent $\alpha > \frac{1}{2}$ such that for $a,b\in\lbrace 0,1\rbrace$,
\begin{align*}
\forall s,t\in J,\quad|r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b}}{(n\dist(s,t))^\alpha}.
\end{align*}
\end{itemize}
Then there is an explicit positive constant $C_\infty$ independent of the function $\psi$, such that
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty.\]
\end{theorem}
This last Theorem \ref{theorem2} covers Theorem \ref{theorem1}, as we will prove that the assumptions on the spectral measure $\mu$ in Theorem \ref{theorem1} ensure that the associated sequence of processes $(X_n)_{n}$ satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2}. In this particular case, we can take advantage of the fact that the covariance function $r_n$ of the model is a trigonometric polynomial of degree $n$, to show that the general statements of hypotheses $(A1)$ and $(A2)$ roughly follow from the case $a=b=0$.\jump
In fact, Theorem \ref{theorem2} covers various previously known results about the asymptotics of the variance of the number of zeros of general random trigonometric polynomials. Indeed, it also allows us to make explicit the variance asymptotics for the number of zeros, on any compact subinterval of $]0,\pi[$, of the process
\[\widetilde{X}_n(s)= \frac{1}{\sqrt{n}}\sum_{k=0}^n a_k\cos(ks+\theta),\quad\quad\theta\in\R,\numberthis\label{eq:33}\]
with $(a_k)_{k\geq 0}$ a stationary sequence of centered random Gaussian variables whose spectral measure has a positive continuous density. Note that the variance asymptotics for the number of zeros on $[0,\pi]$ for this process was established in the case of Gaussian iid $(a_k)_{k\geq 0}$ in \cite{Aza16}. In fact, a slight modification of our proof would prove the asymptotics for the variance of the number of zeros on the whole interval $[0,\pi]$. \jump
More recently, the authors in \cite{Lub21} established the variance asymptotics for the number of zeros of sums of real orthogonal polynomials (with respect to some compactly supported measure $\nu$) with independent Gaussian coefficients. The one to one mapping between $[-1,1]$ and the half-torus $[0,\pi]$ given by the relation $x =\cos(\theta)$ allows us to partially connect their results and Theorem \ref{theorem2}. Let $(P_n)_n$ be a sequence of real orthogonal polynomials with respect to a measure $\nu$ supported on $[-1,1]$. We suppose that $\nu$ has a continuous positive density $\phi$ on $]-1,1[$ which satisfies,
\[\int_0^\pi \log(\phi(\cos\theta))\dd \theta > -\infty\quand \int_0^1 \frac{\omega_\phi(x)}{x}\dd x <+\infty,\]
where $\omega_\phi$ denotes the modulus of continuity of $\phi$.
We define
\[\overline{X}_n(s) = \frac{1}{\sqrt{n}}\sum_{k=0}^n a_kP_k(\cos(s)),\]
where $(a_n)_{n\geq 0}$ are iid centered Gaussian random variables. Then the hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} are satisfied with $\psi= 1/\phi$ (see \cite{Lub09,Tot00}) and Theorem \ref{theorem2} implies \cite[Cor.~1.3]{Lub21}.\jump
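As a concrete illustration, for the arcsine density $\phi(x) = \frac{1}{\pi\sqrt{1-x^2}}$ on $]-1,1[$, the associated orthogonal polynomials are the Chebyshev polynomials of the first kind, which satisfy $T_k(\cos s) = \cos(ks)$. The process $\overline{X}_n$ then reduces to a cosine polynomial of the form \eqref{eq:33} with $\theta = 0$, so that the trigonometric model appears as the simplest instance of this orthogonal polynomial framework.\jump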
Section \ref{sec1} of the paper is devoted to the study of the Kac density, that is, the integrand in the Kac--Rice formula, for the second moment of the number of zeros of a general one-dimensional Gaussian process. We first recall standard formulas for Gaussian conditional expectation, and the integral expression for the second moment of the number of zeros of a smooth Gaussian process. Under suitable regularity assumptions on the covariance function, we then deduce the behavior of the Kac density near the diagonal and we give a non-singular formula for this integrand, valid in a neighborhood of the diagonal. Finally, we give an explicit bound for the Kac density far away from the diagonal.\jump
In Section \ref{sec3}, we make explicit the asymptotics of the Kac density associated with a sequence of processes satisfying the hypotheses of Theorem \ref{theorem2}. Hypothesis $(A1)$ implies in particular that as $n$ grows to infinity, the sequence $(X_n)_n$ has a local limit proportional to the $\sinc$ process on $\R$. Together with the results of Section \ref{sec1}, we deduce after a suitable scaling that the Kac density associated to the process $X_n$ uniformly converges towards the Kac density of the $\sinc$ process on $\R$, as $n$ grows to infinity. This fact leads to the proof of Theorem \ref{theorem2}. We then check that the model of trigonometric polynomials with dependent coefficients satisfying the hypotheses of Theorem \ref{theorem1}, also satisfies the hypotheses of Theorem \ref{theorem2}, from which immediately follows the conclusion of Theorem \ref{theorem1}.
\section{Kac--Rice formula for the second moment}\label{sec1}
In this section we give the two main lemmas that quantify the behavior of the Kac density along the diagonal and far from the diagonal, for a general centered Gaussian process satisfying some natural regularity assumptions. First, we recall some formulas for the conditional density of Gaussian vectors. For a Gaussian stochastic process $(Y(s))_{s\in I}$, where $I$ is a subinterval of $\R$, we denote by $r$ its covariance function defined for $s,t\in I$ by
\[r(s,t) = \E[Y(s)Y(t)],\]
and for integers $a,b\geq 0$, we denote by $r^{(a,b)}(s,t)$ the derivatives
\[r^{(a,b)}(s,t) = \partial_1^a\partial_2^b r(s,t) = \E[Y^{(a)}(s)Y^{(b)}(t)],\]
provided that $Y$ is sufficiently regular.
\subsection{Determinant formulas}
The Kac--Rice formula for the expectation and variance of the number of zeros of a centered Gaussian stochastic process $(Y(s))_{s\in I}$ involves the conditional Gaussian distributions
\[\gamma_0(s) \sim \mathrm{Law}\left((Y'(s)|Y(s)=0)\right)\;\,\text{and}\;\, (\gamma_{1}(s,t),\gamma_{2}(s,t))\sim\mathrm{Law}\left((Y'(s),Y'(t)|Y(s) = Y(t) = 0)\right).\]
We will need formulas for the distribution of $\gamma_0$, $\gamma_{1}$ and $\gamma_{2}$. For now we assume that the process $Y$ has $\CC^2$ sample paths and that the joint distribution of the Gaussian vector $(Y(s),Y(t))$ does not degenerate for $s\neq t$. Let
\[\Omega(s) := \Cov(Y(s),Y'(s)),\quand\Sigma(s,t) := \Cov(Y(s),Y(t),Y'(s),Y'(t)).\]
Then we have
\[
\Omega(s) = \begin{pmatrix}
r(s,s) & r^{(1,0)}(s,s) \\
r^{(1,0)}(s,s) & r^{(1,1)}(s,s)
\end{pmatrix},\]
and
\[\Sigma(s,t) = \begin{pmatrix}[cc|cc]
r(s,s) & r(s,t) & r^{(1,0)}(s,s) & r^{(0,1)}(s,t) \\
r(s,t) & r(t,t) & r^{(1,0)}(s,t) & r^{(1,0)}(t,t) \\
\hline
r^{(1,0)}(s,s) & r^{(1,0)}(s,t) & r^{(1,1)}(s,s) & r^{(1,1)}(s,t) \\
r^{(0,1)}(s,t) & r^{(1,0)}(t,t) & r^{(1,1)}(s,t) & r^{(1,1)}(t,t)
\end{pmatrix} = \begin{pmatrix}[c|c]
\Sigma_{11} & \Sigma_{12} \\
\hline
^T\Sigma_{12} & \vphantom{\int^\int}\Sigma_{22}\quad
\end{pmatrix}.
\]
Now, set
\[\omega(s) := r^{(1,1)}(s,s) - \frac{r^{(1,0)}(s,s)^2}{r(s,s)}\quand\Gamma := \Sigma_{22} - ^T\Sigma_{12}\Sigma_{11}^{-1}\Sigma_{12}.\numberthis\label{eq:30}\]
The matrix $\Gamma$ is the Schur complement of $\Sigma_{11}$ in $\Sigma$.
Let $x\in\R^4$ with $x = (x_1,x_2)$ and $x_1,x_2\in\R^2$. Let $y=(y_1,y_2)\in\R^2$. We define
\[m = \frac{r^{(1,0)}(s,s)}{r(s,s)}y_1\quand M = \Sigma_{12}\Sigma_{11}^{-1}x_1.\]
The identities
\[^Ty\Omega^{-1}y = \frac{y_1^2}{r(s,s)} + \frac{(y_2-m)^2}{\omega(s)}\quand\,^Tx \Sigma^{-1} x = \,^Tx_1\Sigma_{11}^{-1}x_1 + \,^T(x_2-M)\Gamma^{-1}(x_2-M)\numberthis\label{eq:07}\]
imply, according to the standard formula for the conditional density, the following lemma.
\begin{lemma}
\label{lemma7}
\[\E[\gamma_0]=0,\quad\E[\gamma_0^2] = \omega(s),\quand\E[\gamma_{1}]=0,\;\E[\gamma_{2}]=0,\quad\Cov(\gamma_{1},\gamma_{2}) = \Gamma(s,t).\]
\end{lemma}
By row reduction we have
\[\det(\Sigma) = \det(\Sigma_{11})\det(\Gamma)\quand \Sigma^{-1} = \begin{pmatrix}
* & * \\
* & \Gamma^{-1}
\end{pmatrix},\numberthis\label{eq:06}\]
and thus,
\[\det[\Cov(\gamma_{1},\gamma_{2})] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(s),Y'(t))\right]}{\det\left[\Cov(Y(s),Y(t))\right]}.\numberthis\label{eq:31}\]
The relation between the inverse of a matrix and its adjugate matrix implies
\[\E[\gamma_{1}^2] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(s))\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\quad \E[\gamma_{2}^2] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(t))\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\numberthis\label{eq:03}\]
\[\E[\gamma_{1}\gamma_{2}] = \frac{\det\left[\Sigma_{3,4}\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\quad\text{and also}\quad \E[\gamma_0^2] = \frac{\det\left[\Cov(Y(s),Y'(s))\right]}{\E[Y(s)^2]},\]
where $\Sigma_{3,4}$ is the matrix $\Sigma$ with the third row and the fourth column removed.\jump
From the definition of $\Gamma$ in Equation \eqref{eq:30} and the identity $\det(A)\leq \det(A+B)$ valid for any two positive semi-definite matrices $A$ and $B$, we also have the inequality
\[\det(\Gamma) \leq \det(\Sigma_{22}).\numberthis\label{eq:08}\]
\jump
Note that when $s=t$, the formulas for the distribution of $\gamma_0$ and $(\gamma_{1},\gamma_{2})$ are singular. We will make explicit in Lemma \ref{lemma8} below the behavior of these distributions near the diagonal, provided that the process $Y$ is sufficiently regular.
\subsection{Kac--Rice formula for the variance}
Let $(Y(t))_{t\in I}$ be a centered Gaussian process such that the joint distribution of the Gaussian vector $(Y(s),Y(t),Y'(s),Y'(t))$ does not degenerate as $s\neq t$. Denote by $p_s$ the density of $Y(s)$ and by $p_{s,t}$ the density of $(Y(s),Y(t))$. We define
\[Z_Y(I) = \Card\enstq{s\in I}{Y(s)=0}\]
the number of zeros of $Y$ in $I$. Kac--Rice formula (see \cite[Thm.~3.2]{Aza09}) then asserts that
\begin{align*}
\E[Z_Y] &= \int_I \rho_1(s)\dd s,
\end{align*}
with
\begin{align*}
\rho_1(s) &= \E\left[|Y'(s)|\;\middle|\; Y(s)=0\right]p_s(0)\;= \frac{1}{2\pi\sqrt{\det(\Omega)}}\int_\R |y|\exp\left(-\frac{y^2}{2\omega(s)}\right)\dd y,
\end{align*}
and
\[\E[Z_Y^2]-\E[Z_Y] = \iint_{I^2} \rho_2(s,t)\dd s\dd t,\]
with
\begin{align*}
\rho_2(s,t)&= \E\left[|Y'(s)||Y'(t)|\;\middle|\; Y(s)=0, Y(t)=0\right]p_{s,t}(0,0)\\
&=\frac{1}{(2\pi)^2\sqrt{\det(\Sigma(s,t))}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2}\,^T\!y\Gamma^{-1}(s,t)y\right)\dd y_1\dd y_2.
\end{align*}
Hence we deduce the integral representation
\begin{align*}
\Var(Z_Y) &= \left(\E[Z_Y^2] - \E[Z_Y] - \E[Z_Y]^2\right) + \E[Z_Y]\\
&=\left(\iint_{I^2} \left(\rho_2(s,t) - \rho_1(s)\rho_1(t)\right)\dd s \dd t\right) + \int_I\rho_1(t)\dd t.\numberthis\label{eq:29}
\end{align*}
\begin{remark}
\label{remark3}
$\rho_1$ and $\rho_2$ have explicit values given by
\[\rho_1(s) = \frac{1}{\pi}\frac{\sqrt{\vphantom{\int}\det(\Omega)}}{r(s,s)}\quand\rho_2(s,t)=\frac{1}{\pi^2\sqrt{\det(\Sigma_{11})}}\left(\sqrt{\det(\Gamma)}+\Gamma_{12}\arcsin
\left(\frac{\Gamma_{12}}{\sqrt{\Gamma_{11}\Gamma_{22}}}\right)\right),\]
but these explicit formulas will not be used throughout the proof.
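The expression for $\rho_2$ follows from the classical identity, valid for a centered Gaussian vector $(X,Y)$ with covariance matrix $\Gamma$,
\[\E[|XY|] = \frac{2}{\pi}\left(\sqrt{\det(\Gamma)} + \Gamma_{12}\arcsin\left(\frac{\Gamma_{12}}{\sqrt{\Gamma_{11}\Gamma_{22}}}\right)\right),\]
combined with the value $p_{s,t}(0,0) = \left(2\pi\sqrt{\det(\Sigma_{11})}\right)^{-1}$.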
\end{remark}
\subsection{Estimates for the singularity along the diagonal}
Our aim now is to derive the asymptotics of formulas \eqref{eq:31} and \eqref{eq:03} as $\tau = (t-s)$ goes to zero. In the case where the process $Y$ is sufficiently regular, a Taylor expansion around the point $s$ at sufficiently high order allows us to remove the singularity at $\tau=0$. This procedure is standard, see for instance \cite{Anc20} for a general treatment, or \cite[Prop.~5.8]{Aza09}.\jump
In the following, $(Y(s))_{s\in I}$ is a Gaussian process such that its covariance function is of class $\CC^4$ in each variable. We define the following quantities.
\[\xi(s) = \frac{1}{4}\det\left[\Cov(Y(s),Y'(s),Y^{(2)}(s))\right],\]
\[\Delta(s) = \frac{1}{144}\det\left[\Cov(Y(s),Y'(s),Y^{(2)}(s),Y^{(3)}(s))\right],\]
\[\zeta(s) = \frac{1}{36}\det\left[\Cov(Y(s),Y'(s),Y^{(3)}(s))\right].\]
The following lemma details the exact asymptotics for the distributions of $\gamma_0,\gamma_1$ and $\gamma_2$ as the parameter $\tau=(t-s)$ goes to zero.
\begin{lemma}
\label{lemma8}
Suppose that there is a constant $C$ such that for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C\quand\det[\Omega(s)]\geq \frac{1}{C}.\]
Then there is an explicit constant $T_\varepsilon$ depending only on the constant $C$, such that for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$,
\[\det[\Sigma_{11}(s,s+\tau)]=\tau^2\det[\Omega(s)] + \tau^3R_{\Sigma_{11}}(s,\tau),\]
\[\Gamma(s,s+\tau) = \frac{\tau^2}{\det[\Omega(s)]} \begin{pmatrix}
\xi(s) & -\xi(s) \\
-\xi(s) & \xi(s)
\end{pmatrix} + \tau^3R_\Gamma(s,\tau)\]
\[\det\left[\Sigma(s,s+\tau)\right] = \tau^8\Delta(s) + \tau^9R_\Sigma(s,\tau),\]
\[\E[(\gamma_1+\gamma_2)^2] = \tau^4\frac{\zeta(s)}{\det[\Omega(s)]} + \tau^5R_{\gamma}(s,\tau).\]
Moreover, the functions $\Omega,\xi,\Delta,\zeta,R_{\Sigma_{11}},R_\Gamma,R_\Sigma,R_\gamma$ are continuous functionals, with respect to the $\|.\|_\infty$ norm, of the covariance function $r$ and its derivatives up to order $4$.
\end{lemma}
\begin{proof}
By Taylor expansion with integral remainder, we have
\[Y(s+\tau) = \sum_{k=0}^n \frac{Y^{(k)}(s)}{k!}\tau^k + \frac{\tau^{n+1}}{n!}\int_0^1(1-u)^nY^{(n+1)}(s+\tau u)\dd u.\]
We will use this expression with $n=1,2,3$. For instance, by applying a unitary transformation on the covariance matrix and the above Taylor formula with $n=1$, we get
\begin{align*}
\det[\Sigma_{11}(s,t)]=\det\left[\Cov(Y(s),Y(t))\right] &= \det\left[\Cov(Y(s),Y(t)-Y(s))\right]\\
&= \tau^2\det\left[\Cov(Y(s),Y'(s))\right] + \tau^3 R_0(s,\tau)\\
&= \tau^2\det\left[\Omega(s)\right] + \tau^3 R_0(s,\tau).
\end{align*}
The remainder $R_0(s,\tau)$ is explicit and follows from the expansion of the determinant. Here we have
\begin{align*}
R_0(s,\tau) = ae-2bc - \tau c^2,
\end{align*}
with
\[a = \E[Y(s)^2] = r(s,s),\quad b = \E[Y(s)Y'(s)] = r^{(0,1)}(s,s),\quad d = \E[Y'(s)^2] = r^{(1,1)}(s,s),\]
\[c = \int_0^1 (1-u)r^{(0,2)}(s,s+\tau u)\dd u,\]
\[e= 2\int_0^1 (1-u)r^{(1,2)}(s,s+\tau u)\dd u + \tau\int_0^1\int_0^1 (1-u)(1-v)r^{(2,2)}(s+\tau u,s+\tau v)\dd u\dd v.\]
By our assumptions, the remainder $R_0(s,\tau)$ is a continuous functional of the covariance function $r$ and its partial derivatives up to order $2$. We deduce that there are explicit positive constants $C'$ and $T_\varepsilon$ depending only on $C$, such that for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$,
\[\frac{1}{\tau^2}\det[\Sigma_{11}(s,s+\tau)] \geq C'.\numberthis\label{eq:18}\]
We can do similar computations for the other determinants. In particular, we have
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(s))\right] &= \det\left[\Cov(Y(s),Y'(s),Y(t)-Y(s)-\tau Y'(s))\right]\\
&= \tau^4\xi(s) + \tau^5 R_{11}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(t))\right] = \tau^4\xi(s) + \tau^5 R_{22}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Sigma_{3,4}(s,t)\right] = -\tau^4\xi(s) + \tau^5 R_{12}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Sigma(s,t)\right] = \tau^8\Delta(s) + \tau^9 R_\Sigma(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(t)+Y'(s))\right] = \tau^6\zeta(s) + \tau^7 R_3(s,\tau).
\end{align*}
As for the computation of $R_0(s,\tau)$, the remainders $R_{11}$, $R_{12}$, $R_{22}$, $R_\Sigma$ and $R_3$ can be explicitly computed and are defined as sums of integrals (on a compact interval) of quantities that depend polynomially on $r^{(a,b)}(s+\tau_1,s+\tau_2)$ for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $\tau_1,\tau_2\in[0,T_\varepsilon]$. In particular, they are continuous functionals of the covariance function $r$ and its derivatives up to order $4$.\jump
The asymptotics for $\Gamma$ and $(\gamma_1+\gamma_2)$ follow from these identities, together with the fact that the common denominator in the expressions \eqref{eq:03} of the second moments of $\gamma_1$, $\gamma_2$ and $(\gamma_1+\gamma_2)$, namely $\det(\Sigma_{11})$, is bounded from below by a positive constant on $[-T_\varepsilon,T_\varepsilon]$, according to \eqref{eq:18}.
\end{proof}
\begin{remark}
\label{remark2}
From Lemma \ref{lemma8}, we have the following behavior near the diagonal for $\Gamma$.
\[\Gamma(s,s+\tau) \simeq \frac{\tau^2}{\det[\Omega(s)]} \begin{pmatrix}
\xi(s) & -\xi(s) \\
-\xi(s) & \xi(s)
\end{pmatrix}.\]
It means that at a scaling limit, the vector $(\gamma_1,\gamma_2)$ degenerates. If $X\sim \mathcal{N}(0,\xi(s)/\det[\Omega(s)])$, then we have approximately
\[\E[|\gamma_1||\gamma_2|] \simeq \tau^2\E[|X|\,|\!-\!X|] = \tau^2\E[X^2] = \tau^2\frac{\xi(s)}{\det[\Omega(s)]}.\numberthis\label{eq:04}\]
It will be convenient to define
\[(\widetilde{\gamma}_1,\widetilde{\gamma}_2) = \left(\frac{\gamma_1+\gamma_2}{\tau^2},\frac{\gamma_2}{\tau}\right),\quand \widetilde{\Gamma}\,(s,s+\tau) = \Cov(\widetilde{\gamma}_1,\widetilde{\gamma}_2).\]
We have then
\[\det[\Cov(\widetilde{\gamma}_1,\widetilde{\gamma}_2)] = \frac{\det[\Cov(\gamma_{1},\gamma_{2})]}{\tau^6} = \frac{\Delta(s)}{\det\left[\Omega(s)\right]}+\tau R_\Sigma(s,\tau).\]
This time, we have the following asymptotics
\[\widetilde{\Gamma}(s,s+\tau) \simeq \frac{1}{\det[\Omega(s)]}\begin{pmatrix}
\zeta(s) & \eta(s) \\
\eta(s) & \xi(s)
\end{pmatrix},\numberthis\label{eq:05}\]
with $\eta(s)$ some explicit function of $s$. The limit matrix is non degenerate, provided that $\Delta(s)>0$.
\end{remark}
The following lemma makes precise the behavior of $\rho_2(s,t)$ near the diagonal.
\begin{lemma}
\label{lemma3}
Suppose that there is a constant $C$ such that for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C,\quad \det[\Omega(s)]\geq \frac{1}{C} \quand \Delta(s)\geq \frac{1}{C}.\]
Then there exists a positive constant $T_\varepsilon$ depending only on the constant $C$, and continuous bounded functions $\kappa$ and $R$ such that
\[\forall \tau\in[-T_\varepsilon,T_\varepsilon],\quad \rho_2(s,s+\tau) = \tau\kappa(s) + \tau^2R(s,\tau).\]
Moreover, the functions $\kappa$ and $R$ are continuous functionals of the covariance function $r$ and its derivatives up to order $4$.
\end{lemma}
\begin{proof}
We use the results established in Lemma \ref{lemma8}. In order to obtain a relevant scaling, we follow Remark \ref{remark2} and we make the change of variable $(\gamma_1,\gamma_2)\rightarrow (\widetilde{\gamma_1},\widetilde{\gamma_2})$. By the previous asymptotics, we deduce
\begin{align*}
\rho_2(s,s+\tau)&= p_{s,s+\tau}(0,0)\E\left[|\gamma_1||\gamma_2|\right]\\
&=\frac{\tau^2}{2\pi\sqrt{\det[\Sigma_{11}(s,s+\tau)]}}
\E\left[|\tau\widetilde{\gamma_1}-\widetilde{\gamma_2}||\widetilde{\gamma_2}|\right]\\
&= \frac{\tau^2}{(2\pi)^2\sqrt{\det(\Sigma_{11})\det(\widetilde{\Gamma})}}\iint_{\R^2} |\tau x_1-x_2||x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_1\dd x_2\\
&= \frac{\tau^2}{(2\pi)^2\sqrt{\frac{1}{\tau^6}\det(\Sigma)}}\iint_{\R^2} |\tau x_1-x_2||x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_1\dd x_2\\
&= \frac{\tau}{(2\pi)^2\sqrt{\Delta(s) + \tau R_\Sigma(s,\tau)}}\int_\R(I_\tau (x_1) + J_\tau(x_1))\dd x_1,
\end{align*}
where
\[I_\tau(x_1) = \int_{\tau x_1}^\infty(x_2-\tau x_1)|x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_2,\]
and
\[J_\tau(x_1) = \int_{-\infty}^{\tau x_1}(\tau x_1-x_2)|x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_2.\]
By Remark \ref{remark2} and Equation \eqref{eq:05}, there are constants $T_\varepsilon$ and $C'$ such that the matrix $\widetilde{\Gamma}$ satisfies, for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$, the inequality $\det(\widetilde{\Gamma})\geq 1/C'$. The formula for the inverse matrix shows that the functions $\tau\mapsto I_\tau(x_1)$ and $\tau\mapsto J_\tau(x_1)$ are continuous functionals of the covariance function $r$ and its derivatives up to order $4$, as well as their derivatives with respect to the parameter $\tau$. It means that we can write
\[\rho_2(s,s+\tau) = \tau\kappa(s) + \tau^2R(s,\tau),\]
with $\kappa$ and $R$ some explicit continuous functional of the covariance function $r$ and its derivatives up to order $4$.
\end{proof}
\begin{remark}
\label{remark4}
As suggested by Remark \ref{remark2} and Equation \eqref{eq:04}, the proof of Lemma \ref{lemma3} shows that
\[\kappa(s) = \frac{\xi(s)}{2\pi\det[\Omega(s)]^{3/2}}.\]
\end{remark}
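As a sanity check of the above formula, consider a stationary process with covariance $\sinc$. Its spectral moments (see \eqref{eq:23} below) give $\det[\Omega] = \frac{1}{3}$ and
\[\xi = \frac{1}{4}\cdot\frac{1}{3}\left(\frac{1}{5}-\frac{1}{9}\right) = \frac{1}{135},\quad\text{so that}\quad \kappa = \frac{1}{135}\cdot\frac{3\sqrt{3}}{2\pi} = \frac{\sqrt{3}}{90\pi},\]
a value which can also be recovered from the explicit formulas of Remark \ref{remark3} by a direct Taylor expansion at $\tau=0$.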
\subsection{Decay estimates far from the diagonal}
In order to derive the asymptotics of the variance of the number of zeros, we need to estimate the quantity $(\rho_2(s,t)-\rho_1(s)\rho_1(t))$ in the Kac--Rice formula \eqref{eq:29} for the variance. When $s$ and $t$ are far enough from each other, the random vectors $(Y(s),Y'(s))$ and $(Y(t),Y'(t))$ are ``almost'' decorrelated. Heuristically, we should therefore have
\begin{align*}
\E\left[|Y'(s)||Y'(t)|\;\middle|\; Y(s)=0, Y(t)=0\right] \simeq \E\left[|Y'(s)|\;\middle|\; Y(s)=0\right]\;\E\left[|Y'(t)|\;\middle|\; Y(t)=0\right],
\end{align*}
and
\[p_{s,t}(0,0)\simeq p_s(0)p_t(0).\]
The two following lemmas make the above heuristic rigorous and quantify the error term. We define
\[M(\tau) := \sup_{s\in I}\sup_{a,b\in\lbrace 0,1\rbrace} |r^{(a,b)}(s,s+\tau)|.\numberthis\label{eq:27}\]
\begin{lemma}
\label{lemma9}
We assume the existence of a constant $C$ such that for $a,b\in\lbrace0,1\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C.\]
Then
\[|\det(\Sigma(s,s+\tau))-\det(\Omega(s))\det(\Omega(s+\tau))|\leq 20C^2 M(\tau)^2.\]
\end{lemma}
\begin{proof}
For a pair $(p,q)$ in $\lbrace1,2,3,4\rbrace$, with $p<q$ we denote by $(\tilde{p},\tilde{q})$ the complementary pair of $(p,q)$ in $\lbrace1,2,3,4\rbrace$. For instance, if $(p,q) = (2,3)$ then $(\tilde{p},\tilde{q})=(1,4)$. We set
\[\Sigma_{p,q} = \begin{pmatrix}
\Sigma_{1p} & \Sigma_{1q} \\
\Sigma_{3p} & \Sigma_{3q}
\end{pmatrix},\quand \widetilde{\Sigma}_{p,q} = \begin{pmatrix}
\Sigma_{2\tilde{p}} & \Sigma_{2\tilde{q}} \\
\Sigma_{4\tilde{p}} & \Sigma_{4\tilde{q}}
\end{pmatrix}.\]
The formula for Laplace expansion along rows $1$ and $3$ asserts that
\[\det(\Sigma) = \sum_{p<q}(-1)^{p+q}\det(\Sigma_{p,q})\det(\widetilde{\Sigma}_{p,q}).\]
Observe that
\[\Sigma_{1,3} = \Omega(s),\quand \widetilde{\Sigma}_{1,3} = \Omega(t),\]
and thus,
\[\det(\Sigma) = \det(\Omega(s))\det(\Omega(t)) + \sum_{\underset{(p,q)\neq (1,3)}{p<q}} (-1)^{p+q} \det(\Sigma_{p,q})\det(\widetilde{\Sigma}_{p,q}).\]
For each pair $(p,q)\neq (1,3)$, there is at least one column of the form $^T(r^{(a,b)}(s,s+\tau),r^{(c,d)}(s,s+\tau))$ in both $\Sigma_{p,q}$ and $\widetilde{\Sigma}_{p,q}$. For instance,
\[\Sigma_{1,4} = \begin{pmatrix}
r(s,s) & r^{(0,1)}(s,t) \\
r^{(1,0)}(s,s) & r^{(1,1)}(s,t)
\end{pmatrix},\quand \widetilde{\Sigma}_{1,4} = \begin{pmatrix}
r(t,t) & r^{(1,0)}(s,t) \\
r^{(1,0)}(t,t) & r^{(1,1)}(s,t)
\end{pmatrix}.\]
It means that for every pair $(p,q)\neq (1,3)$, we have the inequalities
\[|\det(\Sigma_{p,q})|\leq 2CM(\tau),\quand |\det(\widetilde{\Sigma}_{p,q})|\leq 2CM(\tau),\]
and thus
\[\left|\det(\Sigma(s,t)) - \det(\Omega(s))\det(\Omega(t))\right| \leq 20C^2M(\tau)^2.\]
\end{proof}
\begin{lemma}
\label{lemma4}
We assume the existence of a constant $C$ such that for $a,b\in\lbrace0,1\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C,\quad\quad \det(\Omega(s)) \geq 1/C.\]
There are positive constants $\varepsilon$ and $C'$ depending only on the constant $C$, such that for all $\tau$ satisfying $M(\tau)\leq \varepsilon$, we have
\[|\rho_2(s,s+\tau)-\rho_1(s)\rho_1(s+\tau)|\leq C'M(\tau)^2.\]
\end{lemma}
\begin{proof}
The hypotheses of Lemma \ref{lemma4} imply that
\[r(s,s)\geq \frac{\det(\Omega(s))}{r^{(1,1)}(s,s)}\geq \frac{1}{C^2},\quand \omega(s)=\frac{\det(\Omega(s))}{r(s,s)}\geq \frac{1}{C^2}.\numberthis\label{eq:12}\]
Set $\varepsilon = 1/(\sqrt{40}C^2)$. Lemma \ref{lemma9} above implies that for all $\tau$ such that $M(\tau)\leq \varepsilon$, we have the following inequality
\begin{align*}
\det(\Sigma(s,s+\tau))&\geq \det(\Omega(s))\det(\Omega(t))- 20C^2M(\tau)^2 \geq \frac{1}{2C^2}.\numberthis\label{eq:11}
\end{align*}
Moreover, Equations \eqref{eq:06} and \eqref{eq:08} imply that
\[\det(\Sigma)=\det(\Sigma_{11})\det(\Gamma)\leq C^2\det(\Gamma),\quand \det(\Sigma)\leq \det(\Sigma_{11})\det(\Sigma_{22})\leq C^2 \det(\Sigma_{11}).\]
Combining these inequalities with \eqref{eq:11}, we deduce the following inequalities for all $\tau$ satisfying $M(\tau)\leq \varepsilon$
\[\det(\Sigma_{11}(s,s+\tau)) \geq \frac{1}{2C^4}\,,\quand \det(\Gamma(s,s+\tau)) \geq \frac{1}{2C^4}.\numberthis\label{eq:10}\]
Now let $s,t\in I$ such that for $\tau = t-s$ we have $M(\tau)\leq \varepsilon$. We can express the difference
\begin{align*}
&\rho_2(s,s+\tau)-\rho_1(s)\rho_1(s+\tau) = R_1 + R_2,
\end{align*}
with
\[R_1 = \frac{1}{(2\pi)^2}\left[\frac{1}{\sqrt{\det(\Sigma)}}\!-\!\frac{1}{\sqrt{\det(\Omega(s))\det(\Omega(t))}}\right]\!\!
\left(\int_\R |x|\exp\left(-\frac{x^2}{2\omega(s)}\right)\dd x\right)\!\!\left(\int_\R |y|\exp\left(-\frac{y^2}{2\omega(t)}\right)\dd y\right),\]
\[R_2 = \frac{1}{(2\pi)^2\sqrt{\det(\Sigma)}}\iint_{\R^2}|x||y|\left[\exp\left(-\frac{1}{2}\,^T\!(x,y)\Gamma^{-1}\,(x,y)\right)-\exp\left(-\frac{1}{2}\left(\frac{x^2}{\omega(s)}+\frac{y^2}{\omega(t)}\right)\right)\right]\dd x\dd y.\]
We treat first the term $R_1$. Let
\[\mathrm{Denom} = \sqrt{\det(\Sigma)\det(\Omega(s))\det(\Omega(t))}\left(\sqrt{\det(\Sigma)}+\sqrt{\det(\Omega(s))\det(\Omega(t))}\right).\]
From Lemma \ref{lemma9} and Equation \eqref{eq:11}, there is an explicit constant $C'$ depending only on $C$ such that
\begin{align*}
\left|\frac{1}{\sqrt{\det(\Sigma)}}-\frac{1}{\sqrt{\det(\Omega(s))\det(\Omega(t))}}\right|& = \frac{\left|\det(\Sigma)- \det(\Omega(s))\det(\Omega(t))\right|}{\mathrm{Denom}}\\
& \leq C'M(\tau)^2.
\end{align*}
From the estimate \eqref{eq:12}, the quantity $\omega(s)$ is bounded from below by $1/C^2$. We deduce the existence of a constant $C''$ depending only on $C$ and such that
\[|R_1|\leq C''M(\tau)^2.\]
For the term $R_2$, we estimate the distance between $\Gamma$ and the diagonal matrix $\mathrm{diag}(\omega(s),\omega(t))$. First, we have
\[\left|\vphantom{\Sigma^2}\det\left[\Cov(Y(s),Y(t))\right] - r(s,s)r(t,t)\right| = r(s,t)^2 \leq M(\tau)^2.\numberthis\label{eq:13}\]
Developing the determinant $\det[\Cov(Y(s),Y(t),Y'(s))]$ along the second row, we also deduce
\[\left|\det\left[\Cov(Y(s),Y(t),Y'(s))\right] - r(t,t)\det(\Omega(s))\right|\leq 6CM(\tau)^2.\numberthis\label{eq:14}\]
If the parameter $\tau$ satisfies the inequality $M(\tau)\leq \varepsilon$, the inequalities \eqref{eq:13} and \eqref{eq:10} imply
\begin{align*}
r(s,s)r(t,t)\geq \det(\Sigma_{11}(s,s+\tau)) - M(\tau)^2 \geq \frac{1}{2C^4} - \frac{1}{40C^4} \geq \frac{1}{3C^4}.\numberthis\label{eq:15}
\end{align*}
Using the formulas \eqref{eq:03} for the coefficients of $\Gamma(s,t)$, we get
\begin{align*}
\left|\E[\gamma^2_1]-\omega(s)\right|=\left|\frac{\det\left[\Cov(Y(s),Y(t),Y'(s))\right]}{\det\left[\Cov(Y(s),Y(t))\right]}- \frac{r(t,t)\det(\Omega(s))}{r(s,s)r(t,t)}\right|\leq C'M(\tau)^2\numberthis\label{eq:16},
\end{align*}
where the last inequality is justified by the inequalities \eqref{eq:13} and \eqref{eq:14}, and the lower bounds for the denominators given by inequalities \eqref{eq:10} and \eqref{eq:15}.
Similarly,
\[\left|\E[\gamma^2_2] -\omega(t)\right| \leq C'M(\tau)^2\quand|\E[\gamma_{1}\gamma_{2}]| \leq C'M(\tau)^2.\numberthis\label{eq:17}\]
From Equation \eqref{eq:10}, $\det(\Gamma)$ is bounded from below by an explicit positive constant. Estimates \eqref{eq:16} and \eqref{eq:17} imply the existence of a constant $C'$ depending only on $C$ such that
\[\left\|\Gamma - \mathrm{diag}(\omega(s),\omega(t))\right\| \leq C'M(\tau)^2,\]
\[\left\|\Gamma^{-1} - \mathrm{diag}\left(\frac{1}{\omega(s)},\frac{1}{\omega(t)}\right)\right\| \leq C'M(\tau)^2.\]
In order to recover the estimate on $R_2$, we use the following inequality, valid for all $a,b\in\R$ and obtained from the mean value theorem since $e^c\leq e^{a}+e^{b}$ for any $c$ between $a$ and $b$:
\[|e^{b} - e^{a}|\leq |b-a|\left(e^{a} + e^{b}\right).\]
Since the quantities $\omega^{-1}$ and $\det(\Sigma)=\det(\Sigma_{11})\det(\Gamma)$ are uniformly bounded from below by an explicit positive constant, and $\Gamma^{-1}$ is bounded, we get
\begin{align*}
|R_2|&\leq \frac{C'M(\tau)^2}{\sqrt{\det(\Sigma)}}\iint_{\R^2} |x||y|(|x|+|y|)\left[\exp\left(-\frac{1}{2}\,^T\!(x,y)\Gamma^{-1}\,(x,y)\right)\right.\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\left.\exp\left(-\frac{1}{2}\left(\frac{x^2}{\omega(s)}+\frac{y^2}{\omega(t)}\right)\right)\right]\dd x\dd y.
\end{align*}
Since $\omega(s)$, $\det(\Gamma(s,s+\tau))$ and $\det(\Sigma(s,s+\tau))$ are bounded from below by an explicit constant depending only on $C$, we deduce the existence of a constant $C''$ (depending only on $C$) such that
\[|R_2|\leq C''M(\tau)^2.\]
Combining the estimates for $R_1$ and $R_2$ we obtain
\[|\,\rho_2(s,s+\tau)- \rho_1(s)\rho_1(s+\tau)\,|\leq C'M(\tau)^2,\]
where $C'$ is a constant depending only on the constant $C$.
\end{proof}
\section{Proof of the main theorems}\label{sec3}
\subsection{Asymptotics of the Kac density}
Let $(X_n)_n$ be a sequence of processes of the torus $\T$ satisfying the hypotheses of Theorem \ref{theorem2} and let $J$ be a subinterval of $I$ with $\overline{J}\subset I$. We define for $x\in J$
\[\forall s\in\R,\; Y_n(s) :=X_n\left(\frac{s}{n}\right)\quand\forall \tau\in\R,\; Y_n^x(\tau) := X_n\left(x+\frac{\tau}{n}\right).\]
Let $Y_\infty$ be a centered stationary Gaussian process with $\sinc$ covariance function. An explicit computation gives
\[\E[(Y_\infty'(0))^2] = \frac{1}{3},\quad \E[(Y_\infty^{(2)}(0))^2] = \frac{1}{5},\quand \E[(Y_\infty^{(3)}(0))^2] = \frac{1}{7}.\numberthis\label{eq:23}\]
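These values follow from the Taylor expansion of the cardinal sine at the origin: since
\[\sinc(\tau) = \sum_{k\geq 0}\frac{(-1)^k\tau^{2k}}{(2k+1)!},\quad\text{one has}\quad \sinc^{(2k)}(0) = \frac{(-1)^k}{2k+1},\]
and thus $\E[(Y_\infty^{(a)}(0))^2] = (-1)^a\sinc^{(2a)}(0) = \frac{1}{2a+1}$ for $a\in\lbrace 1,2,3\rbrace$.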
Let
\[q_n(s,t):=\E[Y_n(s)Y_n(t)]=r_n\left(\frac{s}{n},\frac{t}{n}\right),\quad\text{and thus}\quad q_n^{(a,b)}(s,t) = \frac{1}{n^{a+b}}r_n^{(a,b)}\left(\frac{s}{n},\frac{t}{n}\right).\]
Hypothesis $(A1)$ of Theorem \ref{theorem2} implies that for $a,b\in\lbrace 0,1,2,3,4\rbrace$, we have the following uniform convergence on $x\in J$ and on $\tau$ in compact subsets of $\R$
\[\lim_{n\rightarrow+\infty} q_n^{(a,b)}(nx,nx + \tau) = \psi(x)(-1)^b\sinc^{(a+b)}(\tau).\numberthis\label{eq:25}\]
It means that the finite dimensional distributions of the process $(Y_n^x(\tau))_{\tau\in\R}$ and its derivatives up to order $4$ converge towards the finite dimensional distributions of the process $\sqrt{\psi(x)}\,Y_\infty$ and its derivatives up to order $4$. Moreover, it implies that the diagonal quantities $q_n^{(a,a)}(nx,nx)$ are uniformly bounded for all $x\in J$ and $a\in\lbrace0,1,2,3,4\rbrace$ by some constant $C$. By the Cauchy--Schwarz inequality for covariance functions, we then deduce that for $s,t\in nJ$,
\[\left|q_n^{(a,b)}(s,t)\right|\leq \sqrt{q_n^{(a,a)}(s,s)}\sqrt{q_n^{(b,b)}(t,t)} \leq C.\numberthis\label{eq:35}\]
Denote by $\rho_{1,n}$ and $\rho_{2,n}$ (resp. $\rho_{1,\infty}$ and $\rho_{2,\infty}$) the Kac densities of the process $Y_n$ (resp. $Y_\infty$). Since the process $Y_\infty$ is stationary, the function $\rho_{1,\infty}$ is a constant and the function $\rho_{2,\infty}$ depends only on $t-s$.
Similarly, we will use the notation $\Omega_n,\Omega_\infty,\Gamma_n,\Gamma_\infty$, etc. as defined in Section \ref{sec1}. The following lemma gives the asymptotics of the Kac densities as $n$ grows to infinity.
\begin{lemma}
\label{lemma5}
We have the following uniform convergences, with $x\in J$ and $\tau$ in a compact set of $\R$
\[\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) = \rho_{1,\infty}\quand\lim_{n\rightarrow+\infty} \rho_{2,n}(nx,nx+\tau) = \rho_{2,\infty}(\tau).\]
\end{lemma}
\begin{proof}
We begin with the convergence of $\rho_{1,n}$. From Equation \eqref{eq:25} with $\tau=0$ and the fact that the function $\psi$ is bounded from below by a positive constant $C_\psi$, we deduce the following uniform convergence on $x\in J$
\[\lim_{n\rightarrow+\infty} \Omega_n(nx) = \psi(x)\Omega_\infty,\quand\lim_{n\rightarrow+\infty} \omega_n(nx) = \psi(x)\omega_\infty.\]
It follows from \eqref{eq:23} that $\Omega_\infty = \mathrm{diag}(1,1/3)$ and $\omega_\infty = 1/3$. It implies the existence of a rank $n_0$ independent of $x$ such that
\[\forall n\geq n_0,\forall x\in J,\quad\det[\Omega_n(nx)]\geq \frac{C_\psi^2}{4}\quand \omega_n(nx) \geq \frac{C_\psi}{4}.\numberthis\label{eq:24}\]
It implies the following uniform convergence on $x\in J$
\begin{align*}
\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) &= \lim_{n\rightarrow+\infty}\frac{1}{2\pi\sqrt{\det(\Omega_n(nx))}}\int_\R |y|\exp\left(-\frac{y^2}{2w_n(nx)}\right)\dd y\\
&=\frac{1}{2\pi\psi(x)\sqrt{\det(\Omega_\infty)}}\int_\R |y|\exp\left(-\frac{y^2}{2\psi(x)\omega_\infty}\right)\dd y\\
&=\frac{1}{2\pi\sqrt{\det(\Omega_\infty)}}\int_\R |u|\exp\left(-\frac{u^2}{2\omega_\infty}\right)\dd u\\
&= \rho_{1,\infty}.
\end{align*}
Now for the convergence of $\rho_{2,n}$, let $T_\varepsilon$ be some small positive constant and $K$ a large compact set of $\R$. Observe that the process $Y_\infty$ is stationary and the support of its spectral measure has an accumulation point. It implies (see \cite[Ex. ~3.5]{Aza09}) that for $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$, the covariance matrix of the Gaussian vector $(Y_\infty(s),Y_\infty(s+\tau),Y_\infty'(s),Y_\infty'(s+\tau))$ is nondegenerate and moreover, one can find an explicit positive constant $C$ such that
\[\forall \tau\in K\setminus[-T_\varepsilon,T_\varepsilon],\quad 1/C\leq \det(\Sigma_\infty)\leq C\quand 1/C\leq \det(\Sigma_{11,\infty}) \leq C.\]
From Equation \eqref{eq:06} we deduce that the matrices $\Gamma_\infty$ and $\Sigma_\infty$ are nondegenerate on $K\setminus[-T_\varepsilon,T_\varepsilon]$ and we have the following convergences, uniformly on $x\in J$ and $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$
\[\lim_{n\rightarrow+\infty} \Sigma_n(nx,nx+\tau) = \psi(x)\Sigma_\infty(\tau),\]
\[\lim_{n\rightarrow+\infty} \Gamma_n(nx,nx+\tau) = \psi(x)\Gamma_\infty(\tau).\]
Moreover, there exists a rank $n_0$ depending only on $T_\varepsilon$ such that
\[\forall n\geq n_0,\,\forall x\in J,\,\forall \tau\in K\setminus[-T_\varepsilon,T_\varepsilon],\quad\det(\Sigma_n(nx,nx+\tau))\geq \frac{C_\psi^4}{2C}\quand \det(\Gamma_n(nx,nx+\tau))\geq \frac{C_\psi^2}{2C^2}.\]
We deduce the following convergence, uniform on $x\in J$ and $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$
\begin{align*}
\lim_{n\rightarrow+\infty} \rho_{2,n}(nx,nx+\tau) &= \lim_{n\rightarrow+\infty} \frac{1}{(2\pi)^2\sqrt{\det(\Sigma_n)}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2}\,^T\!y\Gamma_n^{-1}y\right)\dd y_1\dd y_2\\
&= \frac{1}{(2\pi)^2\psi^2(x)\sqrt{\det(\Sigma_\infty)}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2\psi(x)}\,^T\!y\Gamma_\infty^{-1}y\right)\dd y_1\dd y_2\\
&= \frac{1}{(2\pi)^2\sqrt{\det(\Sigma_\infty)}}
\iint_{\R^2} |u_1||u_2|\exp\left(-\frac{1}{2}\,^T\!u\Gamma_\infty^{-1}u\right)\dd u_1\dd u_2\\
&= \rho_{2,\infty}(\tau).
\end{align*}
It remains to prove the uniform convergence in the case where $\tau$ lives in a neighborhood of the origin. To this end, we apply Lemma \ref{lemma3}. We check first that our process $Y_n$ fulfills the hypotheses of the lemma. From Equation $\eqref{eq:35}$, there is a constant $C$ such that for $n\geq0$, $x\in J$, $\tau\in \R$ and $a,b\in\lbrace 0,1,2,3,4\rbrace$,
\[ |q_n^{(a,b)}(nx,nx + \tau)|\leq C.\numberthis\label{eq:26}\]
Moreover, we have uniformly on $x\in J$
\[\lim_{n\rightarrow +\infty} \det(\Omega_n(nx)) = \psi^2(x)\det(\Omega_\infty) = \frac{\psi^2(x)}{3}.\]
Set
\[\Delta_n(nx) = \det[\Cov(Y_n(nx),Y_n'(nx),Y_n^{(2)}(nx),Y_n^{(3)}(nx))].\]
From identities \eqref{eq:23}, we have
\begin{align*}
\lim_{n\rightarrow +\infty} \Delta_n(nx) = \psi^4(x)\Delta_\infty = \psi^4(x)\begin{vmatrix}
1 & 0 & -\frac{1}{3} & 0 \\
0 & \frac{1}{3} & 0 & -\frac{1}{5} \\
-\frac{1}{3} & 0 & \frac{1}{5} & 0 \\
0 & -\frac{1}{5} & 0 & \frac{1}{7}
\end{vmatrix} = \frac{16}{23625}\psi^4(x).
\end{align*}
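The value of this determinant can be checked by reordering rows and columns by parity: the matrix then splits into the two blocks $\left(\begin{smallmatrix}1 & -1/3\\ -1/3 & 1/5\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}1/3 & -1/5\\ -1/5 & 1/7\end{smallmatrix}\right)$, whose determinants are $\frac{4}{45}$ and $\frac{4}{525}$ respectively, and $\frac{4}{45}\cdot\frac{4}{525} = \frac{16}{23625}$.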
In particular there is a rank $n_0$ such that uniformly on $x\in J$
\[\forall n\geq n_0,\quad \det(\Omega_n(nx))\geq \frac{C_\psi^2}{4}\quand \Delta_n(nx)\geq \frac{C_\psi^4}{2000}.\]
Hence, the hypotheses of Lemma \ref{lemma3} are satisfied: there exists a positive constant $T_\varepsilon$ such that for $n$ greater than $n_0$, there exist continuous functions $R_n$ and $\kappa_n$ such that
\[\forall \tau\in[-T_\varepsilon,T_\varepsilon],\quad \rho_{2,n}(nx,nx+\tau) = \tau \kappa_n(nx) + \tau^2R_n(nx,\tau),\]
and the functions $\kappa_n$ and $R_n$ are continuous functionals of $q_n$ and its partial derivatives up to order $4$. From the uniform convergence of $q_n(nx,nx+\tau)$ and its derivatives, we obtain the uniform convergence of $\kappa_n(nx)$, $R_n(nx,\tau)$ and thus $\rho_{2,n}(nx,nx+\tau)$ towards $\kappa_\infty$, $R_\infty(\tau)$ and $\rho_{2,\infty}(\tau)$ on $[-T_\varepsilon,T_\varepsilon]$.\jump
Gathering the uniform convergence of $\rho_{2,n}(nx,nx+\tau)$ towards $\rho_{2,\infty}(\tau)$ on $K\setminus[-T_\varepsilon,T_\varepsilon]$ and on $[-T_\varepsilon,T_\varepsilon]$, we have proved the uniform convergence of $\rho_{2,n}(nx,nx+\tau)$ towards its limit for $x\in J$ and $\tau\in K$.
\end{proof}
The following lemma establishes a decay property for the Kac density.
\begin{lemma}
\label{lemma6}
Let $\alpha$ be the exponent in Assumption $(A2)$ of Theorem \ref{theorem2}. There exists a constant $C$ and a rank $n_0$ independent of $x\in J$ such that for all $\tau$ with $x+\frac{\tau}{n}\in J$ and $n\geq n_0$, we have
\[\left|\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right| \leq \frac{C}{\dist(\tau,2\pi n\Z)^{2\alpha}}.\]
\end{lemma}
\begin{proof}
We check that the hypotheses of Lemma \ref{lemma4} are satisfied with the process $Y_n$ defined on the subinterval $nJ$ of $\R$. In virtue of Hypothesis $(A2)$, there is a constant $C$ such that for $a,b\in\lbrace 0,1\rbrace$, $n\in\N$ and $x\in J$,
\begin{align*}
\left|q_n^{(a,b)}(nx,nx+\tau)\right| = \frac{1}{n^{a+b}}\left|r_n^{(a,b)}\left(x,x+\frac{\tau}{n}\right)\right|\leq \frac{C}{\dist(\tau,2\pi n\Z)^\alpha}.
\end{align*}
It implies that the function $M(\tau)$ defined in \eqref{eq:27} satisfies,
\[\forall \tau\in\R,\quad M(\tau)\leq \frac{C}{\dist(\tau, 2\pi n\Z)^\alpha}.\]
From Equation \eqref{eq:35}, the function $q_n^{(a,b)}$ is uniformly bounded for $a,b\in\lbrace 0,1\rbrace$ by a constant $C$, and inequality \eqref{eq:24} states that for $n$ greater than a rank $n_0$ independent of $x$, and uniformly in $x\in J$, $\det[\Omega_n(nx)]\geq C_\psi^2/4$. For $n\geq n_0$, Lemma \ref{lemma4} implies the existence of positive constants $\varepsilon$ and $C'$ independent of $n$ and $x$, such that for all $\tau$ with $x+\frac{\tau}{n}
\in J$ satisfying $M(\tau)\leq \varepsilon$, we have
\[|\rho_{2,n}(nx,nx+\tau)-\rho_{1,n}(nx)\rho_{1,n}(nx+\tau)|\leq \frac{C'}{\dist(\tau,2\pi n\Z)^{2\alpha}}.\numberthis\label{eq:02}\]
If $M(\tau)\geq \varepsilon$ then
\[\dist(\tau,2\pi n\Z)\leq \left(\frac{C}{\varepsilon}\right)^{1/\alpha}.\]
In that case, Lemma \ref{lemma5} implies that the left-hand side of Equation \eqref{eq:02} is bounded by a constant $C_\varepsilon$ independent of $n$ and $x$. To conclude, note that Inequality \eqref{eq:02} remains valid for all $\tau\in\R$, with $C'$ replaced by $C' + C_\varepsilon(C/\varepsilon)^{2}$.
\end{proof}
\subsection{Proof of Theorem \ref{theorem2}}
Let $J$ be a subinterval of $I$ with $\overline{J}\subset I$. We identify $J$ with a segment $[a,b]$, such that $|b-a|\leq 2\pi$. Let $n\geq n_0$, where $n_0$ is the rank defined in Lemma \ref{lemma6}. We write
\begin{align*}
\Var(Z_{X_n}(J)) &= \Var(Z_{Y_n}(nJ))\\
&=n\int_{a}^{b}\!\rho_{1,n}(nx)\dd x + n\int_{a}^{b}\!\int_{n(a-x)}^{n(b-x)} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x.
\end{align*}
For the first term, Lemma \ref{lemma5} asserts that uniformly on $x\in J$, we have
\[\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) = \rho_{1,\infty},\quad\text{and thus}\quad \lim_{n\rightarrow+\infty} \int_{a}^{b}\rho_{1,n}(nx)\dd x = |b-a|\rho_{1,\infty}.\]
For the second term, we define
\begin{align*}
R_n = &\int_{a}^{b}\int_{n(a-x)}^{n(b-x)} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x\\
&\quad-\quad|b-a|\int_\R \left(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2\right)\dd \tau.
\end{align*}
Let $A$ be a large constant. We split $R_n$ into four parts:
\[R_n = R_{1,n}(A)- R_{2,n}(A)+ R_{3,n}(A) - R_{3,\infty}(A),\]
where,
\begin{align*}
R_{1,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\leq A} \left[\left(\rho_{2,n}(nx,nx+\tau)-\rho_{2,\infty}(\tau)\right)\right]\dd \tau\dd x,\\
R_{2,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\leq A} \left[\left(\rho_{1,n}(nx)\rho_{1,n}(nx+\tau)-\rho_{1,\infty}^2\right)\right]\dd\tau\dd x,\\
R_{3,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\geq A} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x,\\
R_{3,\infty}(A) &= |b-a|\int_{\R\setminus[-A,A]}(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2)\dd \tau.
\end{align*}
For the terms $R_{1,n}(A)$ and $R_{2,n}(A)$, Lemma \ref{lemma5} shows that the two integrands converge uniformly towards $0$ and thus,
\[\lim_{n\rightarrow +\infty} R_{1,n}(A) = 0\quand \lim_{n\rightarrow +\infty} R_{2,n}(A) = 0.\]
For the term $R_{3,n}(A)$, Lemma \ref{lemma6} and the inequality $\alpha>1/2$ gives
\begin{align*}
|R_{3,n}(A)|\leq \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\geq A} \frac{C}{\tau^{2\alpha}}\dd \tau\dd x\,\leq\, 2C|b-a|\int_{A}^\infty \frac{1}{\tau^{2\alpha}}\dd \tau \,\leq\, \frac{2C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.\numberthis\label{eq:32}
\end{align*}
For the term $R_{3,\infty}(A)$, we apply dominated convergence to $R_{3,n}(A)$. Equation \eqref{eq:32} implies in particular that the integrand is bounded by an integrable function that does not depend on the parameter $n$, and Lemma \ref{lemma5} shows that the integrand converges to the integrand of $R_{3,\infty}(A)$. Hence by Equation \eqref{eq:32},
\begin{align*}
|R_{3,\infty}(A)| = \lim_{n\rightarrow +\infty} |R_{3,n}(A)| \leq \frac{2C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.
\end{align*}
Gathering the estimates for $R_{1,n}(A)$, $R_{2,n}(A)$, $R_{3,n}(A)$ and $R_{3,\infty}(A)$, we deduce that
\[\limsup_{n\rightarrow+\infty} |R_n| \leq \frac{4C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.\]
Letting $A$ go to infinity, we deduce that $\lim_{n\rightarrow+\infty} R_n = 0$, from which follows the following convergence
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty,\]
where
\[C_\infty := \int_\R \left(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2\right)\dd \tau + \rho_{1,\infty}.\]
It remains to show the positivity of the constant $C_\infty$. A slight modification of the above proof would show that
\[C_\infty = \lim_{n\rightarrow +\infty} \frac{\Var(Z_{Y_\infty}[0,n])}{n}.\]
It has been shown by several methods (see for example \cite{Anc20,Slu91}) that $C_\infty>0$, not only for the process $Y_\infty$, but also for a large class of stationary Gaussian processes satisfying mild conditions on their covariance functions.
\begin{remark}
Using the explicit formulas given by Remark \ref{remark3}, we have
\[\rho_{1,\infty} = \frac{1}{\pi\sqrt{3}}\quand \rho_{2,\infty}(\tau)=\frac{1}{\pi^2\sqrt{\det(\Sigma_{11,\infty}(\tau))}}\left(\sqrt{\det(\Gamma_\infty(\tau))}+\Gamma_{12,\infty}\arcsin
\left(\frac{\Gamma_{12,\infty}}{\sqrt{\Gamma_{11,\infty}\Gamma_{22,\infty}}}\right)\right).\]
\end{remark}
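A numerical evaluation of this expression yields $C_\infty \simeq 0.089$, in accordance with the value obtained in \cite{Gra11} for independent Gaussian coefficients.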
\subsection{Proof of Theorem \ref{theorem1}}
In the following, we consider the sequence of trigonometric Gaussian polynomials $(X_n)_{n}$ defined in the introduction. Assuming the hypotheses of Theorem \ref{theorem1}, we show that this sequence of processes satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} with $I=\T$, from which the conclusion of Theorem \ref{theorem1} follows. Following \cite{Ang21}, the next computation gives an integral expression for the covariance function of $X_n$.
\begin{align*}
r_n(s,t) :\!&= \E[X_n(s)X_n(t)]\\
&= \frac{1}{n}\sum_{k,l=1}^n \rho(k-l)\cos(ks-lt)\\
&= \frac{1}{n}\sum_{k,l=1}^n\left(\frac{1}{2\pi}\int_0^{2\pi}e^{-i(k-l)u}\dd\mu(u)\right)\cos(ks-lt)\\
&=\frac{1}{2\pi n}\int_0^{2\pi}\mathrm{Re}\left(\sum_{k,l=1}^ne^{-i(k-l)u}e^{iks-ilt}\right)\dd\mu(u)\\
&=\frac{1}{2\pi n}\int_0^{2\pi}\mathrm{Re}\left(\left(\sum_{\,k=1}^ne^{ik(s-u)}\right)\left(\sum_{\,l=1}^ne^{-il(t-u)}\right)\right)\dd\mu(u)\\
&= \cos\left(\frac{n+1}{2}(s-t)\right)\frac{1}{2\pi}\int_0^{2\pi} K_n(s-u,t-u)\dd \mu(u),\numberthis\label{eq:36}
\end{align*}
where $K_n$ is the two-point Fejér kernel
\[K_n(s,t) = \frac{1}{n}\frac{\sin\left(\frac{ns}{2}\right)}{\sin\left(\frac{s}{2}\right)}\frac{\sin\left(\frac{nt}{2}\right)}{\sin\left(\frac{t}{2}\right)}.\]
In the case where $\rho(k-l) = \delta_{k,l}$, the measure $\mu$ is the normalized Lebesgue measure on $[-\pi,\pi]$, and we denote the associated covariance function by $r_{0,n}$. It has the following explicit expression.
\[r_{0,n}(s,t) = \frac{1}{2n}\left[\frac{\sin\left(\left(n+\frac{1}{2}\right)(s-t)\right)}{\sin\left(\frac{s-t}{2}\right)}-1\right].\numberthis\label{eq:01}\]
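As a side remark, not used in the proofs, the closed form \eqref{eq:01} is easily checked numerically against the defining sum $r_{0,n}(s,t) = \frac{1}{n}\sum_{k=1}^n \cos(k(s-t))$, for instance with the following Python snippet (the sample values are arbitrary).
\begin{verbatim}
import numpy as np

def r0n_sum(s, t, n):
    # defining sum with rho(k - l) = delta_{k,l}
    k = np.arange(1, n + 1)
    return np.cos(k * (s - t)).sum() / n

def r0n_closed(s, t, n):
    # closed form (eq:01), a shifted Dirichlet kernel
    x = s - t
    return (np.sin((n + 0.5) * x) / np.sin(0.5 * x) - 1.0) / (2.0 * n)

assert abs(r0n_sum(1.3, 0.4, 25) - r0n_closed(1.3, 0.4, 25)) < 1e-12
\end{verbatim}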
Let us now assume that the spectral measure $\mu$ has a continuous and positive density $\psi$ on $\T$. The following two lemmas show that the covariance function $r_n$ satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} with any exponent $\alpha\in]1/2,1[$. \jump
\begin{lemma}
\label{lemma2}
Let $a,b\in \lbrace 0,1,2,3,4\rbrace$. Uniformly for $s\in \T$ and $u,v$ in any compact subset of $\R$,
\[\lim_{n\rightarrow+\infty} \frac{1}{n^{a+b}}r_n^{(a,b)}\left(s+\frac{u}{n},s+\frac{v}{n}\right) = \psi(s)(-1)^b\sinc^{(a+b)}(u-v).\]
\end{lemma}
\begin{proof}
Let us first remark that the covariance function $r_n$ is here a trigonometric polynomial and can thus be extended to an analytic function on $\C$. We will prove that the conclusion of Lemma \ref{lemma2} holds when $u$ and $v$ belong to a compact subset of $\C$. In that case, it suffices to prove the lemma for $a=b=0$. The general case follows from the analyticity of the covariance function $r_n$ with respect to the parameters $u$ and $v$, and the uniform convergence on compact subsets of $\C$. We have
\begin{align*}
r_n\left(s+\frac{u}{n},s+\frac{v}{n}\right) &= I_n + \psi(s)r_{0,n}\left(s+\frac{u}{n},s+\frac{v}{n}\right),
\end{align*}
where
\[I_n = \cos\left(\frac{n+1}{2}\frac{u-v}{n}\right)\frac{1}{2\pi}\int_{-\pi}^{\pi} K_n\left(\frac{u}{n}-x,\frac{v}{n}-x\right)\left[\psi\left(x+s\right)-\psi(s)\right]\dd x.\]
Firstly, we have
\begin{align*}
r_{0,n}\left(s+\frac{u}{n},s+\frac{v}{n}\right) &= \frac{1}{2n}\left[\frac{\sin\left(\left(n+\frac{1}{2}\right)\frac{u-v}{n}\right)}{\sin\left(\frac{u-v}{2n}\right)}-1\right]\\
&=\sinc(u-v) + O\left(\frac{1}{n}\right),
\end{align*}
where the remainder is uniform in $s\in\R$ and $u,v$ in compact subsets of $\C$. It remains to prove that the quantity $I_n$ converges towards $0$ uniformly in $s\in\T$ and $u,v$ in compact subsets of $\C$. Let $K$ be a compact subset of $\C$ and $C(K) = 1+\sup_{u\in K}|\mathrm{Re}(u)|$. We have
\begin{align*}
|I_n| &\leq \frac{1}{2\pi n}\int_{-\pi}^{\pi} \left|\frac{\sin\left(\frac{u-nx}{2}\right)}{\sin\left(\frac{1}{2}\left(\frac{u}{n}-x\right)\right)}\frac{\sin\left(\frac{v-nx}{2}\right)}{\sin\left(\frac{1}{2}\left(\frac{v}{n}-x\right)\right)}\right|\omega_\psi(x)\dd x\\
&\leq \frac{1}{2\pi n^2}\int_{-n\pi}^{n\pi} \left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq R_1 + R_2,\\
\end{align*}
where
\[R_1 = \frac{1}{2\pi n^2}\int_{-C(K)}^{C(K)} \left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y,\]
and
\[R_2 = \frac{1}{2\pi n^2}\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y.\]
For the term $R_1$, we use the bound $|\sin(w)|\leq Cn\,|\sin(w/n)|$, valid for $w$ in a fixed compact subset of $\C$ and $n$ large enough, applied to $w=(u-y)/2$ and $w=(v-y)/2$, to get
\[R_1 \leq \frac{C^2}{2\pi}\int_{-C(K)}^{C(K)} \omega_\psi\left(\frac{y}{n}\right)\dd y.\]
Since the spectral density is continuous (hence uniformly continuous) on $\T$, we have $\omega_\psi(y/n)\leq \omega_\psi(C(K)/n)$ on the integration domain, so the quantity $R_1$ converges towards zero as $n$ goes to infinity, uniformly in $s\in\T$ and $u,v\in K$. For the term $R_2$ we use the following inequalities, valid for $\mathrm{Re}(z)\in[-5\pi/6,5\pi/6]$:
\[\frac{3}{5\pi}|\mathrm{Re}(z)|\leq |\sin(\mathrm{Re}(z))|\leq |\sin(z)|\numberthis\label{eq:22}.\]
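Indeed, $|\sin(x+iy)|^2 = \sin^2(x)+\sinh^2(y)\geq \sin^2(x)$, and by concavity of $\sin$ on $[0,\pi]$ its graph lies above the chord on $[0,5\pi/6]$, so that $\sin(x)\geq \frac{\sin(5\pi/6)}{5\pi/6}\,x = \frac{3}{5\pi}\,x$ there; the case of negative real part follows by oddness.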
There is a rank $n_0$ depending only on the compact subset $K$ such that, for all $n\geq n_0$, $u,v\in K$ and $y\in [-n\pi,n\pi]$,
\[-\frac{5\pi}{6}\leq \frac{u-y}{2n}\leq \frac{5\pi}{6}\quand -\frac{5\pi}{6}\leq \frac{v-y}{2n}\leq \frac{5\pi}{6}.\]
Define
\[y_K := \sup_{z\in K} |\mathrm{Im}(z)|\quand I(K) := \left[\sup_{|\mathrm{Im}(z)|\leq y_K}|\sin(z)|\right]^2.\]
It follows from the series of inequalities \eqref{eq:22} that
\begin{align*}
R_2 &\leq \frac{I(K)}{2\pi n^2}\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{\left|\sin\left(\frac{u-y}{2n}\right)\right|}\frac{1}{\left|\sin\left(\frac{v-y}{2n}\right)\right|}\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq \frac{50\pi}{9}I(K)\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{|\mathrm{Re}(u)-y||\mathrm{Re}(v)-y|}\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq \frac{50\pi}{9}I(K)\int_{-\infty}^\infty \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{\left(|y|-C(K)+1\right)^2}\,\omega_\psi\left(\frac{y}{n}\right)\dd y.
\end{align*}
By dominated convergence, the quantity $R_2$ also converges towards $0$ when $n$ goes to infinity, uniformly on $s\in\T$ and $u,v\in K$.
\end{proof}
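For illustration, in the case $\rho(k)=\delta_{k,0}$, where $r_n=r_{0,n}$, the scaling limit of Lemma \ref{lemma2} with $a=b=0$ can be observed numerically; a small Python sketch, with sample values of our choosing, follows (recall that $\sinc$ here denotes the unnormalized cardinal sine $\sin(x)/x$).
\begin{verbatim}
import numpy as np

def r0n(s, t, n):
    x = s - t
    return (np.sin((n + 0.5) * x) / np.sin(0.5 * x) - 1.0) / (2.0 * n)

sinc = lambda x: np.sin(x) / x   # unnormalized sinc, as in the text
s, u, v = 0.7, 1.1, -0.3
for n in (10, 100, 1000, 10000):
    print(n, r0n(s + u / n, s + v / n, n) - sinc(u - v))  # -> O(1/n)
\end{verbatim}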
\begin{lemma}
\label{lemma1}
Let $a,b\in\lbrace 0,1\rbrace$ and $0<\alpha<1$. There is a constant $C$ such that
\begin{align*}
\forall s,t\in \T,\quad |r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b}}{(n\dist(s,t))^\alpha}.
\end{align*}
\end{lemma}
\begin{proof}
Let $s,t\in \T$. If $\dist(s,t)\leq 4/n$ then, by Lemma \ref{lemma2}, there is a constant $C$ such that
\[|r_n^{(a,b)}(s,t)|\leq Cn^{a+b} \leq 4^\alpha C\frac{n^{a+b}}{(n\dist(s,t))^\alpha},\]
and the conclusion of Lemma \ref{lemma1} holds. It suffices then to prove Lemma \ref{lemma1} for $s,t\in \T$ such that $\dist(s,t)\geq 4/n$. Let $C(s,\varepsilon)$ denote the circle of center $s$ and radius $\varepsilon$. By the Cauchy integral formula,
\begin{align*}
|r_n^{(a,b)}(s,t)| &= \left|\frac{a!\,b!}{(2i\pi)^{2}}\int_{C\left(s,\frac{1}{n}\right)}\int_{C\left(t,\frac{1}{n}\right)}\frac{r_n(w,z)}{(s-w)^{a+1}(t-z)^{b+1}}\dd w\dd z\right|\\
&\leq \vphantom{\int^\int}n^{a+b}\!\!\!\sup_{w\in C\left(s,\frac{1}{n}\right)}\sup_{z\in C\left(t,\frac{1}{n}\right)} |r_n(w,z)|\\
&\leq n^{a+b}\sup_{|u|\leq 1}\sup_{|v|\leq 1} \left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right|.
\end{align*}
Let $u,v$ be complex numbers such that $|u|\leq 1$ and $|v|\leq 1$. Using the explicit formula \eqref{eq:36} for $r_n$ we obtain
\begin{align*}
r_n\!\left(s+\frac{u}{n},t+\frac{v}{n}\right) &= \frac{1}{2\pi n}\cos\left(\frac{n+1}{2}\left(s-t+\frac{u-v}{n}\right)\right)\!\!\int_{-\pi}^\pi\! \frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\psi(x)\dd x.
\end{align*}
Using the fact that the sine and cosine functions are bounded by some constant $C$ on the complex strip $\enstq{z\in\C}{|\mathrm{Im}(z)|\leq 1}$, we have
\begin{align*}
\left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right| &\leq \frac{C\|\psi\|_\infty}{2\pi n}\int_\T \left|\frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\right|\dd x.
\end{align*}
Let $\delta = \dist(s,t)/2$. Up to translating $s$ and $t$ by $\pm 2\pi$ and exchanging $s$ and $t$, we can assume that $\delta = \frac{t-s}{2}$. We then make the change of variable $x = y + \frac{t+s}{2}$ to obtain
\[\int_\T \left|\frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\right|\dd x =
\int_{-\pi}^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y.\]
This last integral splits into two integrals $I_1$ and $I_2$ defined by
\[I_1 = \int_{0}^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y\quand I_2 = \int_{-\pi}^0 \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y.\]
Both terms can be treated in exactly the same way. By Hölder's inequality with conjugate exponents $\frac{1}{1-\alpha}$ and $\frac{1}{\alpha}$, where $0<\alpha<1$,
\begin{align*}
I_1&\leq \left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\right|^\frac{1}{1-\alpha}\dd y\right)^{1-\alpha}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|^\frac{1}{\alpha}\dd y\right)^\alpha.\numberthis\label{eq:38}
\end{align*}
For the left integral in \eqref{eq:38}, we make use of the following inequalities, which are consequences of the inequalities \eqref{eq:22}, $|u|\leq 1$ and $\delta\geq 2/n$:
\[\left|\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)\right|\geq \frac{3}{10\pi}\left(\delta +y - \frac{\mathrm{Re}(u)}{n}\right)\geq\frac{3}{10\pi}\left(\frac{\delta}{2}+ y\right),\]
to get
\begin{align*}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\right|^\frac{1}{1-\alpha}\dd y\right)^{1-\alpha}&\leq
C\left(\int_0^\infty \frac{\dd y}{\left(y+\frac{\delta}{2}\right)^\frac{1}{1-\alpha}}\right)^{1-\alpha}\\
&\leq \frac{C'}{\delta^\alpha}.\numberthis\label{eq:37}
\end{align*}
For the right integral in \eqref{eq:38}, we make the change of variable $x=n(\delta-y)+\mathrm{Re}(v)$. We also use the inequality
\[|x+iy|\leq C|\sin(x+iy)|,\]
valid for $x\in [-2\pi/3,2\pi/3]$ and $y\in [-1/4,1/4]$, and some positive constant $C$.
\begin{align*}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|^\frac{1}{\alpha}\dd y\right)^\alpha&\leq
n^{-\alpha}\left(\int_{n(\delta-\pi)+\mathrm{Re}(v)}^{n\delta + \mathrm{Re}(v)} \left|\frac{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2}\right)}{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2n}\right)}\right|^\frac{1}{\alpha}\dd x\right)^\alpha\\
&\leq 2Cn^{1-\alpha}\left(\int_{-\infty}^\infty \left|\frac{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2}\right)}{x-i\,\mathrm{Im}(v)}\right|^\frac{1}{\alpha}\dd x\right)^\alpha\\
&\leq C'n^{1-\alpha},\numberthis\label{eq:39}
\end{align*}
where in the last inequality we used the fact that the integrand is uniformly bounded in a neighborhood of the origin, and that $\frac{1}{\alpha}>1$ so the integrand is also integrable near $\pm\infty$. Plugging estimates \eqref{eq:37} and \eqref{eq:39} into inequality \eqref{eq:38} we obtain
\[\left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right|\leq \frac{C\|\psi\|_\infty}{2\pi n}(I_1+I_2) \leq \frac{2C\|\psi\|_\infty}{2\pi n}\left(C'^2\frac{n^{1-\alpha}}{\delta^\alpha}\right) \leq \frac{C''}{(n\dist(s,t))^\alpha},\]
where $C''$ is a positive constant independent of $s$, $t$ and $n$.
\end{proof}
\begin{remark}
If we assume that the spectral density $\psi$ satisfies the following Dini--Lipschitz condition:
\[\int_0^1 \frac{\omega_\psi(x)}{x}\dd x<+\infty,\]
then we can take $\alpha = 1$ in Lemma \ref{lemma1}, and we have the following inequality
\[|r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b-1}}{\dist(s,t)}\left[\|\psi\|_\infty+\int_0^{2\pi} \frac{\omega_\psi(x)}{x}\dd x\right],\]
where $C$ is an explicit constant.
\end{remark}
\printbibliography
\end{document}
\section{Introduction}
Photoionization is a fundamental process in nature where absorption of a photon by an atom leads to the emission of a photoelectron and to the creation of a positive ion: $A+\gamma \to A^+ +e^-$. The photoelectric effect is possible only when the energy of the photon, $\hslash \omega$, is sufficiently large to overcome the electron binding energy of the atom, $I_\mathrm{p}$. Although the process is explained qualitatively as a one-electron transition from an occupied atomic orbital to a continuum of states, the quantitative photoionization rates are affected by electron-electron correlation effects in the atom, as evidenced in works based on many-body perturbation theory ~\cite{starace_theory_1982, Amusia1990,KheifetsPRA2013}.
Given photons with a wavelength that is larger than the size of the atom, $a_0=5.29\times10^{-11}$\,m, the electronic transitions can be simplified to follow the dipole {\it selection rules}, where the angular momentum of the electron must change by one unit: $\Delta\ell = \pm 1$. This reduces the complexity of the problem to a finite set of continua that can be reached by the photoelectron. As a complement to the dipole selection rules, Fano proposed a {\it propensity rule}, which states that absorption of light favours increasing the electron angular momentum, $\ell \to \ell+1$, over decreasing the angular momentum, $\ell \to \ell-1$ ~\cite{fano_propensity_1985}. This simple rule explains why in neon the probability to reach the $d$-wave from the initial $2p$ orbital is larger than the probability to reach the $s$-wave. As with most simple rules, there are also some notable exceptions, e.g.\ in photoionization from the $3p$ orbital in argon in the vicinity of the Cooper minimum, where the $d$-wave contribution becomes very small due to a vanishing dipole element for the transition~\cite{cooper:1962}.
In recent years, novel light sources have made it possible to study light--matter interactions in more extreme conditions where the atoms are subject to more intense short-wavelength fields~\cite{Seddon_2017}, multi-color fields~\cite{Allaria2013,LutmanPRL2013,GauthierPRL2016} and short pulses with duration on the femtosecond and attosecond time scale~\cite{KrauszRMP2009}.
One class of problems that has attracted attention is {\it laser-assisted photoionization}, where an atom is photoionized using radiation of short wavelength, typically extreme ultraviolet radiation (XUV), but with an additional long-wavelength laser field, typically in the infrared range (IR), which \textit{dresses} the atom: $A + \gamma_\mathrm{XUV} \pm q \gamma_\mathrm{IR} \to A^+ + e^-$.
In this case, the electron is ionized by the XUV field and then subsequently interacts with the IR field leading to laser-driven continuum--continuum transitions.
In the multi-cycle pulse limit, the resulting photoelectron spectrum includes a main peak at an energy given by the XUV photoelectric effect and a number of sideband peaks due to the increasing number of interactions with the IR field: $E_{\mathrm{kin}} = \hslash \omega_{\mathrm{XUV}} \pm q \hslash \omega_{\mathrm{IR}} - I_\mathrm{p}$.
In the case where the IR field is weak, the strength of the sidebands decreases with each order as expected from perturbation theory with a probability determined by the intensity power law in atomic units: $P_q \propto (I_\mathrm{IR})^{q}$.
In the opposite case, when the IR field is strong, laser-assisted photoionization can be interpreted using semi-classical electron trajectories~\cite{Dusterer_interference_2013, Dusterer_two-color_2019}.
Laser-assisted photoionization has been studied analytically using time-dependent Volkov states, which by their closed-form solution allow for efficient calculations of cross-sections for laser-assisted scattering and ionization~\cite{kroll_charged-particle_1973, MadsenAJP2005,AlvaroNJP2013}.
More accurate numerical studies have been performed by perturbation theory within the single-active electron (SAE) approximation \cite{TomaJPB2002,DahlstromCP2013} and by many-body perturbation theory at the level of one-photon Random Phase Approximation with Exchange (RPAE) with uncorrelated continuum--continuum transitions for closed shell atoms \cite{DahlstromPRA2012} and for photodetachment of negative ions \cite{LindrothPRA2017}.
Recently, a gauge-invariant two-photon RPAE approach has been demonstrated \cite{VinbladhPRA2019}.
Numerical simulations have also been performed in the time domain within the SAE \cite{ZhangPRA2010,NagelePRA2011,IvanovPRA2017,BrayPRA2018}, for helium \cite{PazourekPRL2012} and many-electron atoms, e.g.\ neon by R-matrix theory ~\cite{MoorePRA2011, Lysaght_time-dependent_2009} and argon by Density Functional Theory (DFT) \cite{Sato2018}.
Many-electron correlations in non-linear photoionization has also been studied by time-dependent theories based on Configuration Interaction Singles (CIS) ~\cite{Karamatskou_2014} and Multi-Configuration Self-Consistent Fields (MCSCF)~\cite{orimo_application_2019}.
Laser-assisted photoionization is an important process in attosecond science, where it is at the core of both pulse characterization techniques using the RABBIT technique~\cite{PaulScience2001} and for measurement of atomic delays in photoionization~\cite{SchultzeScience2010,isinger_photoionization_2017}.
Recently, atomic delay measurements have been performed with angular resolution~\cite{HeuserPRA2016,busto_fano_2019,cirelli_anisotropic_2018}.
This has evidenced that subtle differences in absorption and emission processes in the continuum--continuum transitions can lead to a strong dependency on the atomic delay with angle of emission, incomplete quantum interference in RABBIT measurements and to qualitatively different angular distributions of photoelectrons, as explained by Busto et al.\ by extending Fano's propensity rule to continuum--continuum transitions~\cite{busto_fano_2019}.
In this paper, we perform {\it ab-initio} simulations of laser-assisted photoionization by propagating the Time-Dependent Schrödinger Equation (TDSE) within the Configuration Interaction Singles approximation (TDCIS)~\cite{RohringerPRA2006, GreenmanPRA2010}.
This allows us to examine the angular distributions and propensity rules in laser-assisted photoionization of helium and neon atoms for both the first sideband and higher-order sidebands generated by absorption of multiple IR photons.
We find different angular distributions formed by absorption and emission processes in the continuum, and we are able to verify that the propensity rules can be extended to higher-order continuum--continuum transitions driven by the IR field.
The paper is structured as follows.
In Section~\ref{sec:method}, our method is described along the relevant laser parameters.
In Section~\ref{sec:results}, we present our results of the numerical simulations.
Finally, in Section~\ref{sec:conclusions}, we draw conclusions of the presented data and discuss potential topics for future studies.
Atomic units ($\hslash = e = m_e = 4\pi\varepsilon_0=1$) are used throughout this paper unless otherwise stated.
\section{Method}
\label{sec:method}
In this section we describe our method to compute laser-assisted photoionization from closed-shell atoms. In part A we describe the vector potential used to model the electromagnetic fields, in part B we review the TDCIS ansatz, in part C we present details of our t-SURFF implementation and in part D we give some more details on our numerical implementation.
\subsection{Field description}
\label{subsec:method-field-description}
The numerical experiments are carried out with Gaussian XUV- and IR-pulses, linearly polarized along the quantization axis $\hat{z}$, that are overlapped in time and defined by a vector potential given by
\begin{align}
A =& \left[A_0^{\mathrm{XUV}} \sin(\omega_{\mathrm{XUV}}t) + A_0^{\mathrm{IR}} \sin(\omega_{\mathrm{IR}}t)\right] \nonumber \\ \times& \exp\left[-2\ln(2) \frac{t^2}{\tau^2}\right],
\end{align}
where $A_0^{\mathrm{XUV}} = 0.005$\,a.u.\ and $A_0^{\mathrm{IR}} = 0.003$\,a.u., which yields a peak intensity of the IR pulse of $5.6\times10^9$\,W/cm$^2$.
This intensity implies only perturbative action by the IR-field.
The duration of the pulses is given by $\tau = 410 \text{\,a.u}.\,\approx 10$\,fs and the frequency of the IR field is given by $\omega_{\mathrm{IR}} \approx 1.55$\,eV to match a Ti:Sapphire laser system.
The fact that the XUV pulse duration is longer than the IR period, $\tau > 2\pi/\omega_\mathrm{\mathrm{IR}}$, implies that the photoelectron spectrum will consist of discrete peaks that correspond to interaction with $q$ photons in the continuum.
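For concreteness, the vector potential above is straightforward to set up numerically; the following Python sketch uses the quoted parameter values in atomic units (the 38\,eV XUV photon energy anticipates one of the values used in Section~\ref{sec:results}).
\begin{verbatim}
import numpy as np

EV = 1.0 / 27.211386              # eV -> hartree
A0_XUV, A0_IR = 0.005, 0.003      # a.u.
w_XUV, w_IR = 38.0 * EV, 1.55 * EV
tau = 410.0                       # a.u., ~10 fs FWHM

def A(t):
    # Gaussian-enveloped two-color vector potential along z
    env = np.exp(-2.0 * np.log(2.0) * t**2 / tau**2)
    return (A0_XUV * np.sin(w_XUV * t) + A0_IR * np.sin(w_IR * t)) * env
\end{verbatim}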
\subsection{TDCIS ansatz}
\label{subsec:method-tdcis}
The TDCIS ansatz~\cite{RohringerPRA2006} for the many electron wave function is
\begin{equation}
\label{eq:cis}
\ket{\Psi(t)} = \alpha_0(t) \ket{\Phi_0} + \sum_a^{\mathrm{occ}} \sum_p^{\mathrm{exc}} \alpha_a^p(t) \ket{\Phi_a^p},
\end{equation}
where $\ket{\Phi_0}$ is the Hartree-Fock ground state and the singly excited states $\ket{\Phi_a^p}$ are constructed using the framework of second quantization,
\begin{equation}
\ket{\Phi_a^p} = \frac{1}{\sqrt{2}} \{ \hat{c}_{p+}^\dagger \hat{c}_{a+} + \hat{c}_{p-}^\dagger \hat{c}_{a-} \}\ket{\Phi_0},
\label{eq:Phipa}
\end{equation}
where $\hat{c}_{p\sigma}^\dagger$ creates an electron in the virtual (exc) orbital $p$ with spin $\sigma$, $\ket{\phi_{p\sigma}}$, while $\hat{c}_{a\sigma}$ creates a hole in the initially occupied (occ) orbital $a$ with spin $\sigma$, $\ket{\varphi_{a \sigma}}$.
The ansatz in Eq.~\eqref{eq:Phipa} ensures that the spin {\it singlet} character of the closed-shell initial state is maintained also for excited states. Similarly, we adopt the {\it gerade} formulation of TDCIS to make full use of the symmetry in magnetic quantum numbers, $m_p=m_a$, for linearly polarized fields \cite{PabstPRA2012}.
In TDCIS the time dependence is found only in the complex amplitudes, $\alpha_0(t)$ and $\alpha^p_a(t)$, and the static orbitals are found by solving the mean-field HF problem without fields present. The time evolution of the complex amplitudes is found by projecting the ansatz in Eq.~\eqref{eq:cis} onto the TDSE with a laser-interaction Hamiltonian, as shown in Refs.~\cite{GreenmanPRA2010,PabstPRA2012}.
Here, we consider light-matter interaction within the dipole approximation given by $V_I(t)=A_z(t) \hat p_z$, where $E_z(t)=-\partial A_z(t)/\partial t$ is the electric field with linear polarization along the $\hat z$-direction.
Helium and neon are chosen on the basis of their different initial angular momentum states and, hence, the different continuum states accessible after the absorption of one XUV photon. In addition, both helium and neon are well described by the truncated basis of the TDCIS theory.
\subsection{Implementation of t-SURFF with TDCIS}
\label{subsec:implementation-tsurff-tdcis}
Within TDCIS theory, the excited many-body state can be expressed as one-electron time-dependent orbitals~\cite{RohringerPRA2006},
\begin{equation}
\chi_{a}(\mathbf{r},t) = \sum_{p}^\mathrm{exc}\alpha^p_a(t)\phi_p(\mathbf{r}),
\end{equation}
associated with each created hole, $a$.
The time-dependent orbitals can also be expanded as
\begin{equation}
\label{eq:td-orbitals-expansion}
\chi_{a}(\mathbf{r},t) = \frac{1}{r} \sum_{\ell_p} \psi_{\ell_p m_a}^a (r,t) Y_{\ell_p m_a}(\Omega_{\mathbf{r}}),
\end{equation}
where $\ell_p$ runs over all possible angular momenta attainable by the electron.
The t-SURFF method relies on knowledge of the photoelectron wavefunction, $\chi_{a}(\mathbf{r}_c,t)$, at a given radius, $r_{c}<r_{\mathrm{ecs}}$, at all times, $t$, and then makes use of the approximate Volkov states to account for field-induced dynamics of the photoelectron beyond $r_c$ \cite{TaoNJP2012}.
We use a modified Volkov Hamiltonian,
\begin{equation}
\label{eq:volkov-hamiltonian}
\hat{H}^{(V)}_a(t) = \frac{\hat p^2}{2} +{A_z}(t)\hat p_z - \varepsilon_a,
\end{equation}
where $\hat p^2=-\nabla^2$ and $\hat p_z=-i\partial/\partial z$, to model the dynamics of the time-dependent orbital in the region far from the ion, where Coulomb interactions can be neglected.
The energy of the photoelectron depends on the binding energy of orbital $a$ in accordance with Koopmans' theorem, $I_\mathrm{p}=-\varepsilon_a$.
The time-dependent orbitals that satisfy the TDSE with the Hamiltonian from Eq.~\eqref{eq:volkov-hamiltonian} are
\begin{align}
\label{eq:volkov-state}
\chi^{(V)}_{\mathbf{k},a}(\mathbf{r},t) = \frac{1}{(2\pi)^{3/2}}
\exp[i\mathbf{k}\cdot\mathbf{r}] \nonumber \\
\times \exp\left[ -i\int_{t_\mathrm{ref}}^t \dd t'\left\{ \frac{k^2}{2} +{A_z}(t'){k_z}-\varepsilon_a\right\}\right],
\end{align}
which are plane waves with a time-dependent phase.
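Numerically, the only nontrivial ingredient of Eq.~\eqref{eq:volkov-state} is the accumulated Volkov phase. It can be evaluated on the propagation time grid by simple quadrature; the following Python sketch (with names and the trapezoidal rule chosen by us, i.e.\ an illustration rather than a description of our production code) computes the phase integral from the first grid point to every later one.
\begin{verbatim}
import numpy as np

def volkov_phase(k, kz, eps_a, t, Az):
    # integrand of the phase in the Volkov state; Az is the
    # vector potential sampled on the time grid t
    f = 0.5 * k**2 + Az * kz - eps_a
    dphi = 0.5 * (f[1:] + f[:-1]) * np.diff(t)   # trapezoidal rule
    return np.concatenate(([0.0], np.cumsum(dphi)))
\end{verbatim}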
The spectral amplitudes for laser-assisted photoionization are found using a complex amplitude for the overlap of time-dependent orbitals in the outer region, $r>r_c$, using a radial Heaviside operator acting at $r_c$, $\hat{\theta}(r_c)$, defined as
\begin{equation}
\label{eq:bka}
b_{\mathbf{k},a}(t) = \mel{\chi^{(V)}_{\mathbf{k},a}(t)}{\hat{\theta}(r_c)}{\chi_{a}(t)}.
\end{equation}
The complex amplitude in Eq.~\eqref{eq:bka} becomes the scattering amplitude when evaluated at a sufficiently late time, $t=T$, after all external fields have ended and the photoelectron wave packet has propagated far away from the ion \cite{TaoNJP2012}.
We obtain a final expression for the scattering coefficients in t-SURFF given by
\begin{widetext}
\begin{align}
\label{eq:TSURFF-final}
\begin{split}
b_{\mathbf{k},a}(T)
&= i ~\sqrt[]{\frac{2}{\pi}} \int_{-\infty}^T \dd t~ \exp\left[i \int_{t_{\mathrm{ref}}}^t \dd \tau \left\{ \frac{k^2}{2} + A_z(\tau)k_z - \varepsilon_a \right\} \right] \\
& \times \sum_{\ell_p} \bigg{\{} (-i)^{\ell_p} \frac{1}{2} \left(kr_c j_{\ell_p}^\prime(kr_c) +
j_{\ell_p}(kr_c) \right) \psi_{\ell_p m_a}^a(r_c,t) Y_{\ell_p m_a}(\Omega_{\mathbf{k}}) \\
&- \frac{(-i)^{\ell_p}}{2} r_c j_{\ell_p}(kr_c) \psi_{\ell_p m_a}^{a \prime} (r_c,t) Y_{\ell_p m_a}(\Omega_{\mathbf{k}}) \\
& + \frac{i}{2~\sqrt[]{\pi}} r_c A_z(t) ~\sqrt[]{2\ell_p+1} \psi_{\ell_p m_a}^a(r_c,t)
\sum_{\ell=\ell_p \pm 1} (-i)^\ell \frac{j_{\ell}(kr_c)}{2\ell+1} C_{\ell_p m_a,10}^{\ell m_a} C_{\ell_p 0,10}^{\ell 0}
Y_{\ell m_a}(\Omega_{\mathbf{k}}) \bigg{\}},
\end{split}
\end{align}
\end{widetext}
where $j_{\ell}(kr)$ is the spherical Bessel function of order $\ell$ and $j_{\ell}^{\prime}(kr)$ is its derivative, both evaluated at $kr$.
Further $Y_{\ell,m}(\Omega_{\mathbf{k}})$ is the spherical harmonic of order $\ell$ and $m$, evaluated at angle $\Omega_{\mathbf{k}} \equiv (\theta_{\mathbf{k}}, \varphi_{\mathbf{k}})$, and $C_{\ell m, 10}^{\ell^\prime m}$ is the Clebsch-Gordan coefficient for a dipole transition with linear polarized light.
We note that the t-SURFF method is an approximate method for analysis of photoelectrons when applied to this problem with a long-range potential from the remaining ion~\cite{TaoNJP2012}. Single or multiphoton transitions that populate Rydberg states can also be expected to cause some problems in special cases due to their large radial extent, $\sim n^2$.
The angular distribution of the photoelectron can be described by a coherent superposition of partial waves with the corresponding angular momenta determined by dipole selection rules.
As an example, consider the case of two final angular momenta, expressed by the two spherical harmonics $Y_{\ell_>,m}$ and $Y_{\ell_<,m}$, with $\ell_>=\ell_0+1$ and $\ell_<=\ell_0-1$; the complex amplitude then has an angle dependence given by
\begin{equation}
\label{eq:fthetavarhpi}
f(\theta) =
\tilde a_{\ell_>}Y_{\ell_>,m}(\theta,\varphi)+
\tilde a_{\ell_<}Y_{\ell_<,m}(\theta,\varphi).
\end{equation}
To find these partial wave amplitudes from the general scattering amplitudes, $b_{\mathbf{k},a}(T)$, we solve a minimization problem,
\begin{equation}
\label{eq:minimization}
\tilde{a} = \argmin_{a} \sum_{i} \bigg{|}f_q(\theta_i) - \sum_{m}|\sum_{\ell} a_{\ell m} Y_{\ell m}(\theta_i, \varphi)|^2 \bigg{|}^2,
\end{equation}
for the general complex amplitudes $\tilde{a} = \{\tilde{a}_{\ell m}\}$. The magnetic quantum number of the photoelectron is linked to that of the hole, $m_p=m_a=m$, which is typically unresolved in experiments and, therefore, is summed over incoherently. In Eq.~\eqref{eq:minimization} the angular probability distribution of a given peak $q$ is computed by integrating over energy, that is
\begin{equation}
\label{eq:peak-integration}
f_q(\theta_i) = \frac{1}{2} \int_{E_q - \xi}^{E_q + \xi} \dd E ~ |b_{\mathbf{k}_i,a}(T)|^2,
\end{equation}
where $E_q$ is the energy at the center of the peak $q$, using Eq.~\eqref{eq:bka} for a given final momentum of the photoelectron, $k=|\mathbf{k}|$, evaluated at a set of polar angles $\theta_i$. This procedure allows us to extract partial wave amplitudes for all photoelectron peaks, $\pm q$, which we label by $\tilde{a}_{\ell m}^{\pm q}$, where the reference to $m$ is sometimes omitted for brevity.
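In practice, the minimization in Eq.~\eqref{eq:minimization} can be carried out with standard least-squares routines. As an illustration (a toy fit with invented amplitudes, not our production analysis), the following Python sketch recovers two $m=0$ partial-wave amplitudes, parametrized by two magnitudes and one relative phase, from a synthetic angular distribution.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm
from scipy.optimize import least_squares

theta = np.linspace(0.02, np.pi - 0.02, 300)   # polar angles
# scipy convention: sph_harm(m, l, azimuth, polar)
Y0 = sph_harm(0, 0, 0.0, theta).real
Y2 = sph_harm(0, 2, 0.0, theta).real

a_true = (0.4, 1.0, 0.7)   # |a_s|, |a_d|, relative phase
f_data = np.abs(a_true[0] * Y0
                + a_true[1] * np.exp(1j * a_true[2]) * Y2)**2

def residual(x):
    # x[0] is kept real, fixing the irrelevant global phase
    model = np.abs(x[0] * Y0 + x[1] * np.exp(1j * x[2]) * Y2)**2
    return model - f_data

fit = least_squares(residual, x0=[1.0, 1.0, 0.0])
print(fit.x)   # recovers (0.4, 1.0, 0.7) up to sign/phase ambiguities
\end{verbatim}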
\subsection{Numerical implementation}
\label{subsec:methods-numerical-implementation}
Our method is similar to that of Karamatskou et al.~\cite{Karamatskou_2014}, as it combines TDCIS for closed-shell atoms~\cite{GreenmanPRA2010} with the Time-Dependent Surface Flux (t-SURFF) method~\cite{TaoNJP2012}, but our method differs in a number of ways: (i) our numerical implementation is based on B-splines~\cite{deboor}, (ii) we use Exterior Complex Scaling (ECS) to handle the boundary conditions of the outgoing photoelectrons \cite{SimonPLA1979}, (iii) our implementation of t-SURFF, Eq.~\eqref{eq:TSURFF-final}, differs in its detailed derivation, as discussed in Appendix~\ref{apdx:tsurff}.
In this work we restrict the active space in excitation energy, $E^p_a=\varepsilon_p-\varepsilon_a<30 \text{\,a.u.\,} = 816.33$\,eV, and include electron angular momenta up to at least $\ell = 6$.
We also restrict the TDCIS calculation to the outermost valence orbital in the sum over occ in Eq.~\eqref{eq:cis} and, therefore, do not consider XUV-stimulated hole--hole transitions that can lead to further excitation of the ion within TDCIS~\cite{YouPRA2016}.
The latter restriction implies that we consider the $1s$ orbital in helium, but only excitation from the $2p$ orbital in neon.
The binding energies used are the Hartree-Fock binding energies $I_p = 0.918$\,a.u.\ for helium and $I_p = 0.850$\,a.u.\ for the $2p$ orbital in neon.
For the B-spline basis, we use 165 and 320 knotpoints in the inner region for helium and neon, respectively, and 30 knotpoints in the ECS region for both atoms. The B-spline polynomials are chosen to be of order 6. We use a knotpoint spacing of 0.4\,a.u.\ and an ECS angle of 25 degrees.
The use of ECS leads to a non-Hermitian Hamiltonian, where the virtual states are exponentially damped in time by complex eigenvalues. In space the electron wavefunctions remain physical within the radius $r_{\mathrm{ecs}} = 64$\,a.u.\ for helium and $r_{\mathrm{ecs}} = 120$\,a.u.\ for neon. Inside the ECS region, $r>r_{\mathrm{ecs}}$, the photoelectron wavefunction is damped radially, which helps to remove nonphysical reflections from the end point of the radial knotpoint sequence. The use of ECS restricts the propagation of TDCIS to the velocity gauge.
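For reference, the standard ECS contour underlying this damping can be written as $r(x)=x$ for $x\leq r_{\mathrm{ecs}}$ and $r(x)=r_{\mathrm{ecs}}+(x-r_{\mathrm{ecs}})e^{i\theta_{\mathrm{ecs}}}$ beyond; a short Python sketch of this generic map (an illustration of ECS itself, not an excerpt of our implementation) reads:
\begin{verbatim}
import numpy as np

def ecs_contour(x, r_ecs, theta_deg=25.0):
    # real radial coordinate up to r_ecs, rotated into the
    # complex plane by theta_deg beyond it
    th = np.deg2rad(theta_deg)
    return np.where(x <= r_ecs, x + 0.0j,
                    r_ecs + (x - r_ecs) * np.exp(1j * th))
\end{verbatim}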
\section{Results}
\label{sec:results}
In this section we present our results from laser-assisted photoionization simulations.
We present the numerically obtained photoelectron angular distributions (PAD) for helium in part~A and for neon in part~B.
The frequency of the XUV-photon, $\omega_{\mathrm{XUV}}$, is varied in order to study how the PAD depends on different final kinetic energies of the photoelectron.
In Fig.~\ref{fig:sketch}, the laser-assisted photoionization paths are shown for helium and neon respectively.
The main peak in the photoelectron spectrum originates from absorption of one XUV photon and is denoted $q=0$.
The sidebands corresponding to additional absorption $(+)$ and emission $(-)$ of $q$ IR photons are denoted by $\pm q$.
The photoelectron alters its orbital angular momentum by plus or minus one for each interaction event with the dressing IR-field.
The PAD results from different spherical harmonics in superposition, as shown for each value of $q$ in Fig.~\ref{fig:sketch}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.8, transform shape]
\node at (3.5,5.5) {\large $(a)$};
\node at (1.5,5.5) {\large He};
\node (peaks) at (-1,5.5) {\large Peaks (q)};
\node (p2) at (-1,5) {$2$};
\node (p1) [below of=p2] {$1$};
\node (c) [below of=p1] {$0$};
\node (m1) [below of=c] {$-1$};
\node (m2) [below of=m1] {$-2$};
\coordinate (O) at (0,0);
\coordinate (a) at (1,3);
\coordinate (aa) at (2,4);
\coordinate (ab) at (0,4);
\coordinate (aaa) at (3,5);
\coordinate (aab) at (1,5);
\draw[thick] (-1,0.5) -- (1,0.5);
\foreach \x in {-1,-0.9,...,1} {
\draw[-,help lines] (\x,0.5) -- (\x+0.1,0.6);
}
\draw (-0.5,0) -- (0.5,0);
\node[below] (O) {$s$};
\draw[->, color=blue] (O) -- (a) node[below, color=black] {$p$};
\draw (0.5,3) -- (1.5,3);
\draw[->, color=red] (a) -- (aa) node[below, color=black] {$d$};
\draw (-0.5,4) -- (0.5,4);
\draw[->, color=red, dashed] (a) -- (ab) node[below, color=black] {$s$};
\draw (1.5,4) -- (2.5,4);
\draw[->, color=red] (aa) -- (aaa) node[below, color=black] {$f$};
\draw (0.5,5) -- (1.5,5);
\draw[->, color=red] (ab) -- (aab) node[below, color=black] {$p$};
\draw (2.5,5) -- (3.5,5);
\draw[->, color=red, dashed] (aa) -- (aab);
\coordinate (ee) at (2,2);
\coordinate (eb) at (0,2);
\coordinate (eee) at (3,1);
\coordinate (eeb) at (1,1);
\draw[->, color=red, dashed] (a) -- (ee) node[below, color=black] {$d$};
\draw (-0.5,2) -- (0.5,2);
\draw[->, color=red] (a) -- (eb) node[below, color=black] {$s$};
\draw (1.5,2) -- (2.5,2);
\draw[->, color=red, dashed] (ee) -- (eee) node[below, color=black] {$f$};
\draw (0.5,1) -- (1.5,1);
\draw[->, color=red, dashed] (eb) -- (eeb) node[below, color=black] {$p$};
\draw (2.5,1) -- (3.5,1);
\draw[->, color=red] (ee) -- (eeb);
\end{tikzpicture}
\begin{tikzpicture}[scale=0.8, transform shape]
\node at (4.5,5.5) {\large $(b)$};
\node at (2,5.5) {\large Ne};
\node (peaks) at (-1,5.5) {\large Peaks (q)};
\node (p2) at (-1,5) {$2$};
\node (p1) [below of=p2] {$1$};
\node (c) [below of=p1] {$0$};
\node (m1) [below of=c] {$-1$};
\node (m2) [below of=m1] {$-2$};
\coordinate (ON) at (1,0);
\coordinate (0sN) at (0,3);
\coordinate (0dN) at (2,3);
\coordinate (1fN) at (3,4);
\coordinate (1pN) at (1,4);
\coordinate (2sN) at (0,5);
\coordinate (2dN) at (2,5);
\coordinate (2gN) at (4,5);
\draw[thick] (0,0.5) -- (2,0.5);
\foreach \x in {0,0.1,...,2} {
\draw[-,help lines] (\x,0.5) -- (\x+0.1,0.6);
}
\draw (0.5,0) -- (1.5,0);
\node at (1,-0.2) {$p$};
\draw[->, color=blue, dashed] (1,0) -- (0sN) node[below, color=black] {$s$};
\draw[dotted, thick] (-0.5,3) -- (0.5,3);
\draw[->, color=blue] (1,0) -- (0dN) node[below, color=black] {$d$};
\draw (1.5,3) -- (2.5,3);
\draw[->, color=red] (0sN) -- (1pN);
\draw[->, color=red] (0dN) -- (1fN) node[below, color=black] {$f$};
\draw (0.5,4) -- (1.5,4);
\draw[->, color=red, dashed] (0dN) -- (1pN) node[below, color=black] {$p$};
\draw (2.5,4) -- (3.5,4);
\draw[->, color=red, dashed] (1pN) -- (2sN) node[below, color=black] {$s$};
\draw[dotted, thick] (-0.5,5) -- (0.5,5);
\draw[->, color=red] (1pN) -- (2dN) node[below, color=black] {$d$};
\draw[->, color=red, dashed] (1fN) -- (2dN);
\draw (1.5,5) -- (2.5,5);
\draw[->, color=red] (1fN) -- (2gN) node[below, color=black] {$g$};
\draw (3.5,5) -- (4.5,5);
\coordinate (m1pN) at (1,2);
\coordinate (m1fN) at (3,2);
\draw[->, color=red, dashed] (0sN) -- (m1pN);
\draw[->, color=red] (0dN) -- (m1pN) node[below, color=black] {$p$};
\draw (0.5,2) -- (1.5,2);
\draw[->, color=red, dashed] (0dN) -- (m1fN) node[below, color=black] {$f$};
\draw (2.5,2) -- (3.5,2);
\draw[->, color=red, dashed] (m1fN) -- (4,1) node[below, color=black] {$g$};
\draw (1.5,1) -- (2.5,1);
\draw[->, color=red, dashed] (m1pN) -- (2,1) node[below, color=black] {$d$};
\draw (3.5,1) -- (4.5,1);
\draw[->, color=red] (m1fN) -- (2,1);
\draw[->, color=red] (m1pN) -- (0,1) node[below, color=black] {$s$};
\draw[dotted, thick] (-0.5,1) -- (0.5,1);
\end{tikzpicture}
\end{center}
\caption{Laser-assisted photoionization paths in (a) helium and (b) neon in the case of absorption or emission of IR-photons. In neon, the dotted $s$-states indicate that they are only reachable for the case of $m=0$. The propensity rule is illustrated with solid lines, as these transitions are relatively more probable than those drawn with dashed lines when comparing absorption and emission.}
\label{fig:sketch}
\end{figure}
\subsection{Helium}
\label{subsec:results-helium}
In Fig.~\ref{fig:raw-XUV38-He1s}~(a) we present our simulation of the angle-resolved photoelectron spectrum in helium on a logarithmic scale.
The main central line corresponds to absorption of one XUV-photon and the sidebands correspond to absorption or emission of $q$ IR-photons.
Alongside the spectrum, the PADs for sidebands, retrieved by Eq.~\eqref{eq:peak-integration}, are shown in Fig.~\ref{fig:raw-XUV38-He1s}(b) for the first absorption and emission peaks: $q=\pm 1$, and in Fig.~\ref{fig:raw-XUV38-He1s}(c) for the second absorption and emission peaks: $q=\pm 2$.
The main peak, $q=0$, is included for reference. The maxima of the angular distributions are normalized to unity in order to ease the comparison between the PADs of the different peaks.
The partial wave fitting, where only dipole-allowed spherical harmonics are included in the sum of Eq.~\eqref{eq:minimization}, is in excellent agreement with the simulated PAD for all peaks, $q$.
The PAD shows an asymmetry between absorption and emission of IR-photons in the continuum which is expressed by the different number of minima in the sideband peaks.
For example, in the first absorption and emission peaks, $q=1$ and $q=-1$, we observe two minima and one minimum, respectively.
\begin{figure*}
\centering
{\includegraphics[scale=0.29]{raw_He1s_XUV38_new.pdf}}
{\includegraphics[scale=0.29]{crest_He1s_peak1_XUV38.pdf}}
{\includegraphics[scale=0.29]{crest_He1s_peak2_XUV38.pdf}}
\caption{(a) Angle-resolved photoelectron spectrum in helium using an XUV-photon energy of 38\,eV. (b) PAD using $q=\pm 1$ and (c) PAD using $q=\pm 2$. The dots in (b,c) are fits to the data using Eq.~\eqref{eq:minimization}.}
\label{fig:raw-XUV38-He1s}
\end{figure*}
In Fig.~\ref{fig:gaffel-He1s}, we present the normalized PAD as a function of XUV-photon energy for the $q=1$ and $q=2$ peaks in helium. A uniform filter is applied to smooth out spurious oscillations (which are inherent to the t-SURFF method for Coulomb-like problems). Each panel corresponds to multiple laser-assisted photoionization simulations with all parameters fixed except the frequency of the ionizing XUV field. In the high kinetic energy limit, the multiple minima of the absorption peaks, $q\in[1,2]$, tend towards a polar angle of $90$ degrees.
On the contrary, in the case of emission, the position of the single minimum is independent of the kinetic energy and located at $90$ degrees (not shown).
\begin{figure}
\centering
{\includegraphics[scale=0.45]{gaffel_He1s_peak1.pdf}}
{\includegraphics[scale=0.45]{gaffel_He1s_peak2.pdf}}
\caption{PAD of helium peaks (a) $q=1$ and (b) $q=2$ as a function of XUV-photon energy.}
\label{fig:gaffel-He1s}
\end{figure}
\subsection{Neon}
\label{subsec:results-neon}
In Fig.~\ref{fig:raw-XUV38-Ne2p}, the obtained angle-resolved photoelectron spectrum in neon (a) is shown alongside the normalized PADs of the first (b) and second (c) absorption and emission peaks.
The fitted spherical harmonics match well with the angular distribution of the peaks of the sidebands.
Contrary to helium, we now have two possible intermediate channels, $\ell=0$ and $\ell=2$, after absorption of an XUV photon.
In neon we do not observe any qualitative difference in the PAD, comparing the first absorption and emission peaks, $q=\pm 1$.
Both peaks show one single minimum with a qualitatively similar angular distribution.
However, in the second absorption and emission peaks, $q=\pm 2$, there is a clear difference between $q=2$, which shows two distinct minima, and $q=-2$, which shows a single minimum.
Unlike helium, the angular distribution in neon results from an incoherent superposition of magnetic quantum numbers of the hole, $m=m_a$. Therefore, we complement our neon studies with $m$-resolved PADs.
Since we deal with systems of spherical symmetry, the positive $m=+1$ channel and the negative $m=-1$ channel will yield the same photoelectron angular distribution, and without loss of generality we can consider it one effective channel.
We denote this channel as the \textit{gerade} $m=\pm 1$ channel \cite{PabstPRA2012}.
\begin{figure*}
\centering
{\includegraphics[scale=0.29]{raw_Ne2p_XUV38_new.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak1_XUV38.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak2_XUV38.pdf}}
\caption{(a) Angle-resolved photoelectron spectrum in neon $2p$ using an XUV-photon energy of 38\,eV. (b) PAD using $q=\pm 1$ and (c) PAD using $q=\pm 2$. The dots in (b,c) are fits to the data using Eq.~\eqref{eq:minimization}.}
\label{fig:raw-XUV38-Ne2p}
\end{figure*}
In Fig.~\ref{fig:raw-Ne2p-XUV38-m}, the angle-resolved photoelectron spectra are presented in neon on a logarithmic scale for $m=0$ and $m=\pm 1$, respectively, alongside the normalized PAD of the absorption and emission peaks.
For the isolated $m=0$ channel, the PADs of $q=\pm 1$ show small differences. In absorption we find two shallow minima, while in emission there are instead two shoulders. Both absorption and emission have a deep minimum at 90 degrees. In total the $m=0$ case has three minima for $q=1$, and one minimum for $q=-1$.
In the $m=\pm 1$ channel, the difference between one-photon absorption and emission, $q=\pm 1$, is more distinct because the PAD shows four and three clear minima, respectively (including the minima at 0 and 180 degrees).
In $q=\pm 2$, there is a clear difference between absorption and emission of two IR photons for both $m=0$ and $m=\pm 1$.
In the $m=0$ channel, we identify two minima and two outer shoulders in the case of absorption, $q=2$, and a flat region with a single shallow minimum in the case of emission, $q=-2$.
Likewise, in $m=\pm 1$, we identify a clear difference between $q=2$ and $q=-2$. The $q=2$ peak has five minima, while the $q=-2$ peak has three minima (including the minima at the polar angle of 0 and 180 degrees).
\begin{figure*}
\centering
{\includegraphics[scale=0.29]{raw_Ne2p_m=0_XUV38.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak1_XUV38_m0.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak2_XUV38_m0.pdf}}
{\includegraphics[scale=0.29]{raw_Ne2p_m=1_XUV38.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak1_XUV38_m1.pdf}}
{\includegraphics[scale=0.29]{crest_Ne2p_peak2_XUV38_m1.pdf}}
\caption{Angle-resolved photoelectron spectrum and PADs in neon $2p$ with $m=0$ (a-c) and $m=\pm 1$ (d-f). The XUV-photon energy is 38\,eV. The dots in (b,c) and (e,f) are fits to the data using Eq.~\eqref{eq:minimization}.}
\label{fig:raw-Ne2p-XUV38-m}
\end{figure*}
In Fig.~\ref{fig:gaffel-Ne2p}, we present the normalized PAD of peaks $q=1$ and $q=2$ as a function of the XUV-photon energy for neon in the non-resolved $m$ case, and the two resolved $m=0$ and $m=\pm 1$ cases.
While the PADs change in shape with increasing XUV photon energy, they maintain their qualitative attributes. We note that the neon $m=\pm 1$ cases, shown in Fig.~\ref{fig:gaffel-Ne2p}~(c) and (f), are qualitatively similar to the helium $m=0$ case, shown in Fig.~\ref{fig:gaffel-He1s}~(a) and (b), with the exception of two stationary minima in neon at 0 and 180 degrees.
\begin{figure*}
\centering
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak1.pdf}}
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak1_m0.pdf}}
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak1_m1.pdf}}
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak2.pdf}}
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak2_m0.pdf}}
{\includegraphics[scale=0.29]{gaffel_Ne2p_peak2_m1.pdf}}
\caption{PAD of neon as a function of XUV-photon energy for peaks (a) $q=1$ unresolved in $m$, (b) $q=1$ with $m=0$, (c) $q=1$ with $m=\pm 1$, (d) $q=2$ unresolved in $m$, (e) $q=2$ with $m=0$ and (f) $q=2$ with $m=\pm 1$.}
\label{fig:gaffel-Ne2p}
\end{figure*}
\section{Discussion}
\label{sec:Discussions}
In the previous section we have shown that the PADs of sidebands due to absorption and emission of IR photons exhibit qualitative differences. In order to understand this difference we turn to a partial wave analysis that allows us to study the relative strength of different laser-assisted photoionization paths (see Fig.~\ref{fig:sketch}). A similar approach was recently used by Busto~et al.\ to study the first-order sidebands, $q=\pm1$~\cite{busto_fano_2019}.
Each transition between partial waves in the continuum is determined by a radial dipole integral as well as an angular dipole integral.
According to Fano's propensity rule for photoionization \cite{fano_propensity_1985}, the radial integral favours transitions to higher angular momentum. In the continuum the photoelectron can both absorb and emit IR photons, which respectively favours increasing and decreasing angular momentum \cite{busto_fano_2019}. This is a direct consequence of time-reversal symmetry for the two continuum processes.
In the high-energy limit, this radial effect vanishes, while the angular effects remain constant.
This implies that the branching ratio of different partial waves will be determined by the angular integrals for both absorption and emission sidebands in the high-energy limit.
In order to study unique partial-wave paths in the continuum to the first sidebands, $q=\pm 1$, we consider helium with $m=0$ and neon with $m=\pm 1$. These are special transitions because they only have one intermediate angular momentum that is reached after absorption of the XUV-photon. Therefore, it is easy to compare one-photon IR absorption and emission processes directly using the complex amplitudes of partial waves, $\tilde{a}_{\ell_{>/<}}^{\pm q}$, extracted by Eq.~\eqref{eq:minimization} for $q=\pm 1$. The notation, $\ell_{>/<}$, refers to increasing and decreasing angular momentum, respectively, as defined above Eq.~\eqref{eq:fthetavarhpi}.
In Fig.~\ref{fig:crest-peaks-compare} we present the absolute ratio of the complex amplitudes, $|\tilde{a}_{\ell_>}^{\pm q}/\tilde{a}_{\ell_<}^{\pm q}|$.
In the high-energy limit, we find that the ratios for $q=\pm 1$ approach a value determined by the angular part of the dipole matrix element, shown in Fig.~\ref{fig:crest-peaks-compare} as a gray dotted line.
In helium, for $q=\pm1$, the limit of the ratio is given by
\begin{equation}
\label{eq:one-photon-angular-ratio-he}
\bigg{|}\frac{\tilde{a}_{\ell_>}^{\pm 1}}{\tilde{a}_{\ell_<}^{\pm 1}} \bigg{|} \to \bigg{|} \frac{\mel{Y_{20}}{Y_{10}}{Y_{10}}}{\mel{Y_{00}}{Y_{10}}{Y_{10}}}\bigg{|} = \frac{2}{\sqrt{5}},
\end{equation}
which means that it is more probable to decrease angular momentum.
At low kinetic energies we find that the absorption process, $q=1$, favours increasing angular momentum due to an enhancement from the radial dipole contribution.
In neon, for $m=\pm1$ and $q=\pm 1$, the limit is
\begin{equation}
\label{eq:one-photon-angular-ratio-ne}
\bigg{|}\frac{\tilde{a}_{\ell_>}^{\pm 1}}{\tilde{a}_{\ell_<}^{\pm 1}}\bigg{|} \to \bigg{|} \frac{\mel{Y_{31}}{Y_{10}}{Y_{21}}}{\mel{Y_{11}}{Y_{10}}{Y_{21}}} \bigg{|} = \sqrt{\frac{8}{7}},
\end{equation}
which means that it is more probable to increase angular momentum.
At low kinetic energies we find that the emission process, $q=-1$, favours decreasing angular momentum due to an enhancement from the radial dipole contribution.
Qualitatively, we understand that the $q=\pm1$ ratios are close to one because there is one unique path to reach each final partial wave.
This is related to the comparable magnitude of dipole matrix elements from $\ell_0$ to $\ell_0 \pm 1$.
In the case of $q=\pm 2$ the physics is more complicated because there are two interfering paths leading to the lower angular momentum, while there is one unique path to the higher angular momentum for helium with $m=0$ and neon with $m=\pm 1$. The pair of interfering paths leading to the lower angular momentum is coined a \textit{diamond} due to its shape in the transition diagram.
In Fig.~\ref{fig:crest-peaks-compare} we show the ratios of absolute complex amplitudes between higher and lower final angular momentum for $q=\pm2$.
For helium the limit of the ratio is
\begin{equation}
\label{eq:two-photon-angular-ratio-he}
\bigg{|}\frac{\tilde{a}_{\ell_>}^{\pm 2}}{\tilde{a}_{\ell_<}^{\pm 2}}\bigg{|} \to \bigg{|} \frac{\mel{Y_{30}}{Y_{10}}{Y_{20}}\mel{Y_{20}}{Y_{10}}{Y_{10}}}{|\mel{Y_{10}}{Y_{10}}{Y_{20}}|^2 + |\mel{Y_{10}}{Y_{10}}{Y_{00}}|^2}\bigg{|} = \frac{2}{\sqrt{21}},
\end{equation}
while for neon the limit is
\begin{equation}
\label{eq:two-photon-angular-ratio-ne}
\bigg{|}\frac{\tilde{a}_{\ell_>}^{\pm 2}}{\tilde{a}_{\ell_<}^{\pm 2}}\bigg{|} \to \bigg{|}
\frac{\mel{Y_{41}}{Y_{10}}{Y_{31}}\mel{Y_{31}}{Y_{10}}{Y_{21}}}{|\mel{Y_{21}}{Y_{10}}{Y_{31}}|^2 + |\mel{Y_{21}}{Y_{10}}{Y_{11}}|^2}\bigg{|} = \frac{\sqrt{8}}{3 \sqrt{3}}.
\end{equation}
We note that these ratios are close to one half in both cases, which implies that the lower angular momentum is much favoured over the higher angular momentum in the $q=\pm 2$ peaks.
Although the matrix elements in the denominator of Eqs.~(\ref{eq:two-photon-angular-ratio-he},\ref{eq:two-photon-angular-ratio-ne}) are taken in absolute square, this should not be misunderstood as an incoherent summation over the paths in the diamond.
Instead, this indicates that the two coherent paths to the lower angular momentum add up constructively. The fact that the two paths in the diamond add up {\it in phase} with each other can be understood by considering the continuum--continuum phases acquired in laser-stimulated transitions, which only weakly depend on the angular momentum transitions, {\it c.f.} Ref.~\cite{DahlstromJPB2012}.
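The angular ratios above are straightforward to verify independently, for instance with SymPy's Gaunt coefficients; in the following check (separate from our TDCIS code, with the helper \texttt{dip} defined by us) the overall $\sqrt{3/4\pi}$ factor from $Y_{10}$ cancels in every ratio.
\begin{verbatim}
from sympy.physics.wigner import gaunt

def dip(lf, li, m):
    # |<Y_{lf m}|Y_{10}|Y_{li m}>| as a Gaunt integral
    return abs(float(gaunt(lf, 1, li, -m, 0, m)))

tol = 1e-12
# helium q = +-1 and neon (m = +-1) q = +-1
assert abs(dip(2, 1, 0) / dip(0, 1, 0) - 2 / 5**0.5) < tol
assert abs(dip(3, 2, 1) / dip(1, 2, 1) - (8 / 7)**0.5) < tol
# helium q = +-2 and neon (m = +-1) q = +-2
assert abs(dip(3, 2, 0) * dip(2, 1, 0)
           / (dip(1, 2, 0)**2 + dip(1, 0, 0)**2) - 2 / 21**0.5) < tol
assert abs(dip(4, 3, 1) * dip(3, 2, 1)
           / (dip(2, 3, 1)**2 + dip(2, 1, 1)**2)
           - 8**0.5 / (3 * 3**0.5)) < tol
\end{verbatim}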
In the high-energy limit, the value of the $q=\pm 2$ ratio between final angular momenta $(>/<)$ is explained by a constructive interference effect between different intermediate partial waves,
while it is the radial integrals that explain the difference between the PADs in absorption $(+)$ and emission $(-)$ at low energies.
The $q=2$ peak in neon $m=\pm 1$ is a good example of the importance of this interplay between angular and radial integral effects.
The paths leading to the final lower angular momentum go through increasing--decreasing or decreasing--increasing sequences in the continuum that are (radially) weaker than the path that twice increases the orbital angular momentum towards the larger final angular momentum. Yet, the constructive interference to the lower angular momentum $\ell=2$, results in a probability ratio strongly favouring the transition that lowers the angular momentum.
In other words: two average paths tend to overtake the one enhanced path. However, in the low energy limit the radial effect can dominate over the interference effect, as evidenced in Fig.~\ref{fig:crest-peaks-compare}~(b) for $q=2$, where the larger angular momentum amplitude is marginally greater than the smaller angular momentum amplitude.
We now turn to the question of how the weak radial effects can change the number of minima in the PADs.
The condition of a node in the angular distribution that consists of two partial waves, is found by setting $f(\theta)$ from Eq.~\eqref{eq:fthetavarhpi} to zero. This leads to the following relation:
\begin{equation}
\label{eq:fiszero}
\frac{\tilde a^{\pm q} _{\ell_>}}{\tilde a^{\pm q}_{\ell_<}}
= -
\frac{Y_{\ell_< m}(\theta,\varphi)}{Y_{\ell_> m}(\theta,\varphi)},
\end{equation}
where $\tilde a^{\pm q}_{\ell_{>/<}}$ are different for both absorption $(+)$ and emission $(-)$.
Interestingly, we have found that the ratios on the right hand side of Eq.~\eqref{eq:fiszero} are equal to
\begin{align}
\lim_{\theta\rightarrow\pi/2}-\frac{Y_{00}(\theta,\varphi)}{Y_{20}(\theta,\varphi)}&=\frac{2}{\sqrt{5}} \\
\lim_{\theta\rightarrow\pi/2}-\frac{Y_{11}(\theta,\varphi)}{Y_{31}(\theta,\varphi)}&=\sqrt{\frac{8}{7}} \\
\lim_{\theta\rightarrow\pi/2}-\frac{Y_{10}(\theta,\varphi)}{Y_{30}(\theta,\varphi)}&=\frac{2}{\sqrt{21}} \\
\lim_{\theta\rightarrow\pi/2}-\frac{Y_{21}(\theta,\varphi)}{Y_{41}(\theta,\varphi)}&=\frac{\sqrt{8}}{3\sqrt{3}},
\end{align}
in the limit of a polar angle equal to 90 degrees, which is equal to the corresponding ratios in Eqs.~(\ref{eq:one-photon-angular-ratio-he}--\ref{eq:two-photon-angular-ratio-ne}).
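These limiting values are easily confirmed numerically, e.g.\ with SciPy (note the argument order \texttt{sph\_harm(m, l, azimuth, polar)}); each printed pair agrees to high precision.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

th = np.pi / 2 - 1e-7      # polar angle just below 90 degrees
Y = lambda l, m: sph_harm(m, l, 0.0, th).real
print(-Y(0, 0) / Y(2, 0), 2 / 5**0.5)             # helium, q = +-1
print(-Y(1, 1) / Y(3, 1), (8 / 7)**0.5)           # neon m = 1, q = +-1
print(-Y(1, 0) / Y(3, 0), 2 / 21**0.5)            # helium, q = +-2
print(-Y(2, 1) / Y(4, 1), 8**0.5 / (3 * 3**0.5))  # neon m = 1, q = +-2
\end{verbatim}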
This implies that the condition for a node in Eq.~\eqref{eq:fiszero} is {\it just at the limit} for 90 degrees and, therefore, sensitive to small changes in the magnitude of the partial wave amplitudes.
In the case of absorption, the radial effect allows for nodes at angles close to 90 degrees due to the increased contribution of the higher angular momentum, while in emission this condition will not be satisfied.
For PADs in helium, shown in Fig.~\ref{fig:raw-XUV38-He1s}~(b,c), this effect explains the two sharp minima on either side of 90 degrees for both $q=1$ and $q=2$. The third sharp minimum at 90 degrees for $q=2$ arises due to the odd parity of the photoelectron after absorption of an odd number of photons from the helium ground state.
In contrast, there are no sharp minima for $q=-1$ and only a single sharp minimum for $q=-2$ due to odd parity in helium. This is because the condition for additional minima in Eq.~\eqref{eq:fiszero} is not met due to an increased contribution of the lower angular momentum. For $q=-1$ in helium we do observe a local minimum at 90 degrees, which does not fulfil the condition in Eq.~\eqref{eq:fiszero}.
For PADs in neon with $m=0$, shown in Fig.~\ref{fig:raw-Ne2p-XUV38-m}~(b,c), the condition for nodes is not satisfied for either $q=\pm 1$.
The $q=1$ case shows two shallow minima, while the $q=-1$ case shows two shoulders, which indicates that emission is further from the additional node condition in Eq.~\eqref{eq:fiszero}. There is a sharp minimum at 90 degrees for both $q=\pm 1$ in neon with $m=0$ due to odd parity after exchange of two photons. For $q=2$ we have three spherical harmonics that interfere. In this case we see two sharp minima, where the condition of a node is fulfilled, and two additional outer shoulders, where the node condition is not fully satisfied. For $q=-2$ the flat region comes from the fact that the condition for a node is not fully met at any of these four instances.
For neon with $m=\pm 1$,
shown in Fig.~\ref{fig:raw-Ne2p-XUV38-m}~(e,f), there are again two partial waves that interfere.
The condition for additional nodes is found in both $q=1$ and $q=2$, while it is not found for $q=-1$ or $q=-2$. The nodes at 0, 90 and 180 degrees are related to the static symmetry properties of spherical harmonics with $m=\pm 1$.
Finally, the absence of a qualitative difference between $q=1$ and $q=-1$ for neon with incoherent addition of both $m=0$ and $m=\pm 1$ can be understood by the fact that the $m=0$ channel has two small maxima at approximately the same angles where $m=\pm 1$ has nodes, as seen in Fig.~\ref{fig:raw-Ne2p-XUV38-m}~(b) and (e), respectively. This effect covers up the difference between absorption and emission processes in the first sideband of neon. This motivates precision experiments with resolution in the magnetic quantum number when studying laser-assisted photoionization to probe the propensity rules in atoms.
Alternatively, the consequence of additional nodes that come from propensity rule effects can be studied by angle-resolved atomic delay measurements, as shown by Busto~et al.~\cite{busto_fano_2019}. The strong importance of including incoherently both $m=0$ and $m=\pm 1$ contributions for atomic delay simulations in neon was shown by Ivanov and Kheifets~\cite{IvanovPRA2017}. Physically, this is due to the fact that only the absorption paths obtain additional nodes, a criterion formulated in Eq.~\eqref{eq:fiszero}. Each additional node is associated with $\pi$-shifts in absorption paths that leads to strong angle-dependence of atomic delays. Our extension of sideband studies to the second sideband motivates angle-resolved atomic delay experiments with higher-order sidebands, similar to that proposed by Harth et al.~\cite{HarthPRA2019}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{crest_compare_He1s_peak.pdf}
\includegraphics[scale=0.35]{crest_compare_Ne2pm1_peak.pdf}
\caption{Ratio of the magnitudes of the fitted spherical-harmonic coefficients of the photoelectron peaks, as a function of XUV-photon energy, in (a) helium and (b) neon, $m=\pm 1$.}
\label{fig:crest-peaks-compare}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
With our combined TDCIS and t-SURFF approach, we are able to simulate angle-resolved photoelectron spectra and identify qualitative differences between sidebands formed by laser-driven absorption and emission processes in the continuum.
First, we confirm the generalization of Fano's propensity rule to continuum--continuum absorption and emission processes for the first sideband peaks; we then show that the propensity rule also has consequences for the second-order sideband peaks. In addition to the propensity rule, we identify that interference of different intermediate partial waves plays an important role, an effect that is stronger than the propensity rule at high kinetic energies.
While Fano's propensity rule for absorption of photons states that an increase of the angular momentum is favoured over a decrease of angular momentum, the interference effect from multiphoton processes can strongly favour a decrease of the angular momentum for both laser absorption and emission processes in the continuum.
Finally, we find that the propensity rule can be used to explain the appearance of additional deep minima (nodes) in the angular distributions found in multiphoton absorption processes in the continuum in both the first and second sideband.
\subsection*{Acknowledgement}
JMD acknowledges support from the Olle Engkvist Foundation, the Knut and Alice Wallenberg Foundation and the Swedish Research Council.
\section{Introduction} \label{sec:intro}
Active galactic nuclei (AGN) are characterized by accretion onto the supermassive black hole at the center of the host galaxy. The accretion process is intrinsically turbulent, leading to stochastic variability observed across the electromagnetic spectrum. Occasionally, more drastic changes in AGN emission are observed in objects called ``changing-look AGN" (CLAGN), in which the source rapidly transitions from one spectral type to another through the appearance or disappearance of broad optical emission lines \citep[e.g.][]{Shappee2014,LaMassa2015,MacLeod2016,Ruan2016,MacLeod2019,Yang2018,Hon2020}. The appearance or disappearance of optical broad lines is also commonly accompanied by an increase or decrease in the UV/X-ray continuum flux, respectively. These continuum changes, along with significant changes in the IR fluxes, low levels of optical polarization, and complex multi-wavelength variability suggest that these changing-look events are driven by an intrinsic change in mass accretion rate \citep[e.g.][]{Sheng2017,Mathur2018,Stern2018,Hutsemekers2019}, rather than being a transient obscuration effect. However, the mechanisms which induce these rapid changes in mass accretion rate are still not well understood, with current theories including binary supermassive black holes \citep{Wang2020b}, state transitions similar to X-ray binaries \citep{Noda2018,Ruan2019,Ai2020}, magnetically elevated accretion in thick disks \citep{Dexter2019}, and instabilities in the accretion disk \citep{Ross2018,Sniegowska2019}.
1ES 1927+654 is a well-known nearby AGN ($z = 0.019422$) that was first discovered with the \textit{Einstein} satellite \citep{Elvis1992,Perlman1996} and has recently gone through an extreme changing-look event. Early X-ray observations of 1ES~1927+654 taken decades before the changing-look event showed little evidence for obscuration by dust along the line of sight with $N_H \approx 10^{21}$ cm$^{-2}$, yet no broad emission lines were detected in optical observations, even in polarized emission \citep{Boller2003,Tran2011}. This made 1ES~1927+654 part of the ``True Type 2" class of AGN, which seem to defy simple unified AGN models \citep[e.g.][]{Antonucci1993,Urry1995}, although recent studies have found that the broad lines seemingly missing in some True Type 2 AGN are simply too broad or too faint to be easily detected \citep[e.g.][]{Bianchi2019}. This classification prompted further X-ray observations of 1ES~1927+654, including observations with \textit{ROSAT} in the 1990s, \textit{Chandra} in 2001, and \textit{Suzaku} and \textit{XMM-Newton} in 2011. All of these observations showed that the X-ray spectrum was dominated by a steep power law ($\Gamma \approx 2.4$) and a soft excess with no evident narrow iron~K line \citep{Boller2003,Gallo2013}.
This unique source became even more interesting when the All-Sky Automated Survey for Supernovae \citep[ASAS-SN;][]{Shappee2014} reported a significant and rapid optical brightening of 1ES~1927+654 of at least two magnitudes in the V band on 2018 March 3 \citep[ASASSN-18el/AT2018zf;][]{Nicholls2018}. Archival data from the Asteroid Terrestrial-impact Last Alert System \citep[ATLAS;][]{Tonry2018} revealed that the outburst actually began around 2017 December 23. The discovery of this extreme optical transient in an existing AGN defied what would be expected for a typical AGN flare and prompted follow-up observations across the electromagnetic spectrum. Optical spectroscopy immediately following the detection of the optical outburst revealed a featureless, blue continuum, reminiscent of a quasar optical spectrum. Broad H$\alpha$ and H$\beta$ emission lines then appeared within a few months of the initial outburst detection, making 1ES~1927+654 a CLAGN and the first object to have been observed undergoing this transition in real time, over timescales of months \citep{Trakhtenbrot2019}. The change in optical state motivated high cadence X-ray monitoring of 1ES~1927+654 with \textit{XMM-Newton}, \textit{NuSTAR}, \textit{NICER}, and \textit{Swift} beginning in late May 2018.
The first follow-up X-ray observations revealed that the X-ray flux was near its pre-outburst level in 2011, but the spectrum was starkly different from its pre-outburst spectrum, dominated by an extremely soft, thermal component with very little emission above 3~keV \citep{Ricci2020}. The X-ray flux then dropped by over an order of magnitude, followed by an increase in X-ray flux of approximately four orders of magnitude to the Eddington limit of a $10^6$ M$_\odot$ black hole, over the span of a few hundred days \citep{Ricci2020}. One possible explanation for this extreme variability and change in the X-ray spectrum is that a tidal disruption event (TDE) occurred in 1ES~1927+654, causing the depletion of the inner accretion disk and cutting off the energy supply to the corona that produces the hard X-ray emission in AGN. In addition, the early X-ray data revealed a broad emission-like feature at 1~keV, which is very prominent, but not well understood \citep{Ricci2021}. These fascinating and unique early X-ray observations motivated extensive X-ray monitoring, spanning more than three years with seven simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observations and more than 500 \textit{NICER} observations. To date, 1ES~1927+654 has been observed for more than 1 Ms with \textit{NICER} and is currently the most observed AGN in the \textit{NICER} archive.
In this work, we present the X-ray evolution of the full outburst in 1ES~1927+654, extending from the initial X-ray observations taken in May 2018 to when it had decayed down to its pre-outburst level in June 2021. The first half of this monitoring campaign was first presented in \citet{Ricci2020,Ricci2021}, and in this work, we present new observations, including over 200 new \textit{NICER} observations and four new simultaneous \textit{XMM-Newton} and \textit{NuSTAR} observations, from the second half of the outburst, as outlined in Table \ref{tab:obs}. In Section \ref{sec:obs}, we discuss the data used in this analysis and the data reduction processes. We present the X-ray spectral evolution of the source in Section \ref{sec:evolution}. In Section \ref{sec:physmod}, we shift our focus to a physically-motivated reflection model to explain the prominent broad 1~keV line in the early X-ray observations. Lastly, we discuss the impact of our findings on our understanding of 1ES~1927+654 and other TDEs, CLAGN, and nuclear transients in Section \ref{sec:discussion} and summarize our results in Section \ref{sec:conclusion}. Throughout this paper we adopt a standard $\Lambda$CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda=0.73$, and $\Omega_M = 0.27$. All quoted errors are 90\% confidence ($\Delta \chi^2 = 2.706$ for one parameter of interest).
\section{Observations \& Data Reduction} \label{sec:obs}
1ES~1927+654 has been extensively monitored since its late 2017 optical outburst, with a particular focus on the dramatic evolution in the X-ray band. Together, \textit{XMM-Newton} \citep{Jansen2001} and \textit{NuSTAR} \citep{Harrison2013} have performed seven simultaneous pointed observations of 1ES~1927+654 with a roughly 4-6 month cadence. Details of the simultaneous \textit{XMM-Newton} and \textit{NuSTAR} observations are given in Table \ref{tab:obs}. Given how soft and bright 1ES~1927+654 has been, \textit{NICER} \citep{Gendreau2012,Arzoumanian2014} has observed 1ES~1927+654 with a much higher cadence of approximately one observation every two days.
Here we present the X-ray observations and data reduction for the follow-up campaign of 1ES~1927+654 up to June 2021, including the first half of the monitoring campaign presented in \citet{Ricci2020,Ricci2021} and the latter half of the monitoring campaign presented here for the first time. All subsequent X-ray spectral analysis was performed using XSPEC version 12.11.1 \citep{Arnaud1996} with $\chi^2$ fit statistics.
\begin{deluxetable*}{c c c c c c c c}
\caption{\textit{XMM-Newton} and \textit{NuSTAR} Observation Information} \label{tab:obs}
\tablehead{\colhead{Epoch} & \colhead{Date} & \colhead{Telescope} & \colhead{ObsID} & \colhead{Exposure} & \colhead{First Presented} & \colhead{Soft Count Rate}\tablenotemark{$\dagger$} & \colhead{Hard Count Rate}\tablenotemark{$\dagger$} \\
\colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(ksec)} & \colhead{} & \colhead{(cts s$^{-1}$)} & \colhead{(cts s$^{-1}$)}}
\startdata
1 & 2018-06-05 & \textit{XMM} & 0830191101 & 46.4 & \citet{Ricci2020,Ricci2021} & $7.76 \pm 0.02$ & $0.0077 \pm 0.0005$ \\
& & \textit{NuSTAR} & 90401625002 & 45.9 & & -- & -- \\
2 & 2018-12-12 & \textit{XMM} & 0831790301 & 59.3 & \citet{Ricci2020,Ricci2021} & $49.06 \pm 0.04$ & $0.719 \pm 0.004$ \\
& & \textit{NuSTAR} & 90401641002 & 64.7 & & $0.0336 \pm 0.0007$ & $0.0080 \pm 0.0004$ \\
3 & 2019-05-06 & \textit{XMM} & 0843270101 & 52.0 & \citet{Ricci2020,Ricci2021} & $52.34 \pm 0.04$ & $1.137 \pm 0.006$ \\
& & \textit{NuSTAR} & 90501618002 & 58.2 & & $0.066 \pm 0.001$ & $0.0254 \pm 0.0007$ \\
4 & 2019-11-02 & \textit{XMM} & 0843270201 & 53.5 & This work & $74.74 \pm 0.05$ & $2.216 \pm 0.008$ \\
& & \textit{NuSTAR} & 60502034002 & 50.7 & & $0.124 \pm 0.002$ & $0.046 \pm 0.001$ \\
5 & 2020-05-03 & \textit{XMM} & 0863230101 & 47.0 & This work & $40.83 \pm 0.04$ & $1.626 \pm 0.007$ \\
& & \textit{NuSTAR} & 60502034004 & 45.8 & & $0.133 \pm 0.002$ & $0.052 \pm 0.001$ \\
6 & 2020-09-16 & \textit{XMM} & 0863230201 & 49.8 & This work & $14.64 \pm 0.02$ & $1.046 \pm 0.006$ \\
& & \textit{NuSTAR} & 60602003002 & 36.1 & & $0.132 \pm 0.002$ & $0.137 \pm 0.002$ \\
7 & 2021-01-12 & \textit{XMM} & 0863230301 & 48.1 & This work & $2.21 \pm 0.01$ & $0.304 \pm 0.003$ \\
& & \textit{NuSTAR} & 60602003004 & 48.8 & & $0.048 \pm 0.001$ & $0.073 \pm 0.001$ \\
\enddata
\tablenotetext{\dagger}{Count rates are averaged over the entire observation, and uncertainties reflect the Poisson noise. For \textit{XMM-Newton}, the soft count rate is measured in the 0.3-2 keV range, and the hard count rate is measured in the 2-10 keV range. For \textit{NuSTAR}, the soft count rate is measured in the 3-5 keV range, and the hard count rate is measured in the 5-20 keV range.}
\end{deluxetable*}
\subsection{XMM-Newton} \label{subsec:xmm}
1ES~1927+654 was detected with each instrument on board \textit{XMM-Newton} during all seven observations. For the purposes of this work, we focus on data from the EPIC-pn camera, which has a higher effective area and is less sensitive to issues of pile-up than EPIC-MOS. Pile-up was particularly prevalent in the high-luminosity observations of 1ES~1927+654, motivating the use of only the EPIC-pn data. Given the extensive observations with \textit{NICER} as well as the EPIC-pn data, the results of this work are not statistics-limited and the EPIC-MOS data were not necessary. Likewise, we do not perform extensive modeling with the RGS data, as \citet{Ricci2021} found that although there was a weak ionized absorber in the soft spectrum, the 1~keV line that is the focus of this paper is still broad in the RGS spectrum. Thus, the main features studied in this work (the temperature evolution of the blackbody, the power law evolution, and the 1~keV feature) are all broad spectral components that are best constrained with broad-band modeling of EPIC-pn data. We do find that the best-fitting physical model for the Epoch 1 EPIC-pn spectrum (discussed in Section \ref{subsec:june2018}) provides a statistically acceptable fit to the RGS spectrum, even without including the weak ionized absorber found in \citet{Ricci2021}. Similarly, multi-wavelength analysis using the optical and UV data from the OM instrument on \textit{XMM-Newton} will be presented in a forthcoming paper focused on broad-band modeling of the spectral energy distribution (SED) of 1ES 1927+654 (Li et al. in prep.).
We reduced the EPIC-pn data from each \textit{XMM-Newton} observation using the \textit{XMM-Newton} Science Analysis System (SAS; version 18.0.0) with the latest calibration files. We followed standard data reduction procedures for EPIC-pn, including running \texttt{epproc} to process the raw data and make calibrated event lists. We made standard cuts on high energy count rates to avoid periods of background flaring, excluding any time where the 10-12~keV count rate was above 0.4 cts s$^{-1}$. As 1ES~1927+654 was extremely bright and soft during most observations, we used annular extraction regions for all epochs to minimize the effects of pile-up. In Epochs 1, 6, and 7, pile-up was present, but not as significant as in the other observations, so we adopted an inner (outer) extraction radius of 6" (40") for these data. For Epochs 2-5, the effects of pile-up were significant given the source brightness and hence, we used an inner (outer) extraction radius of 15" (40"). In addition, where possible, we compared the \textit{XMM-Newton} spectra to the \textit{NICER} spectra, which do not suffer from the effects of pile-up, and found good agreement in spectral shape between the two when using annular extraction regions for the \textit{XMM-Newton} data. We extracted a background spectrum from an off-source circular region on the same CCD chip with a radius of 35". We utilized all single and double events (PATTERN $\leq$ 4) when extracting spectra. Redistribution matrix files and ancillary response files for each observation were created using \texttt{rmfgen} and \texttt{arfgen}, respectively. Finally, the spectra were grouped to have a minimum of 25 counts per bin.
\subsection{NuSTAR} \label{subsec:NuSTAR}
1ES~1927+654 was simultaneously observed with \textit{NuSTAR} during all seven \textit{XMM-Newton} observations, but was undetected with \textit{NuSTAR} in the first observation \citep{Ricci2021}. For the remaining six observations, we reduced the \textit{NuSTAR} data using \textit{NuSTAR} Data Analysis Software (NuSTARDAS; version 2.0.0 in HEASoft version 6.28) with calibration files from \textit{NuSTAR} CALDB v20210104. We followed standard data reduction procedures for \textit{NuSTAR} data, processing the data with \texttt{nupipeline} and extracting spectra for both the FPMA and FPMB modules with \texttt{nuproducts}. Spectra were extracted from circular regions around the source with a radius of 50". Background spectra were extracted from a circular off-source region with a radius of 80". Given the softness of the source, pile-up was not evident in the \textit{NuSTAR} data. The spectra were again grouped to have a minimum of 25 counts per bin.
\subsection{NICER} \label{subsec:NICER}
High cadence X-ray observations of 1ES~1927+654 were taken with \textit{NICER}, with a typical time between observations of a few hours to a few days. \textit{NICER} monitoring of 1ES~1927+654 began on 22 May 2018, and in this work, we report the results of all observations taken up to 21 June 2021. We reduced \textit{NICER} observations using tools in the NICERDAS suite (HEASoft version 6.28) with the 2020 gain calibration. We exclude data from focal plane detector modules 14 and 34, which are known to be excessively noisy, and use an appropriately weighted ARF and RMF file for the remaining 50 detectors.
Unlike with \textit{XMM-Newton} and \textit{NuSTAR} observations where an X-ray background can be measured using off-source CCD pixels, \textit{NICER} background must be estimated using spectral parameters as there is no off-source region. We implement the 3C50 background model for our \textit{NICER} observations \citep{Remillard2022}, and filter on the background-subtracted spectra. Namely, we expect that below 0.2~keV and above 13~keV, the background-subtracted count rates should be sufficiently close to zero, since the detector has negligible effective area outside of 0.2-13~keV. Any strong deviation from net zero flux in these ranges indicates an issue with the background modeling. Thus, we exclude any spectra where the absolute value of the count rate in the 13-15~keV range is $> 0.1$ cts s$^{-1}$ or the absolute value of the count rate below $0.2$~keV is $> 10$ cts s$^{-1}$ from our fitting \citep[level 2 filtering as described in][]{Remillard2022}. All of the spectra were grouped on a per-ObsID basis and to a minimum of 25 counts per bin.
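As a concrete illustration, the level 2 count-rate cuts described above reduce to a few lines of code. The following is a minimal sketch, assuming the background-subtracted spectrum has already been produced by the 3C50 tools (the array and function names are illustrative, not part of the 3C50 software):
\begin{verbatim}
import numpy as np

def passes_level2_filter(energy_keV, net_rate):
    """Level 2 cuts on a 3C50 background-subtracted NICER spectrum.

    energy_keV : channel energies in keV
    net_rate   : background-subtracted count rate per channel (cts/s)
    """
    # Net rate in 13-15 keV should be consistent with zero
    hard = np.sum(net_rate[(energy_keV > 13.0) & (energy_keV < 15.0)])
    # Net rate below 0.2 keV should also be consistent with zero
    soft = np.sum(net_rate[energy_keV < 0.2])
    return (abs(hard) <= 0.1) and (abs(soft) <= 10.0)
\end{verbatim}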
\section{Spectral Evolution} \label{sec:evolution}
\begin{figure*}[t!]
\centering
\includegraphics[width=17.5cm]{1ES1927_evolution_wphases.eps}
\caption{Evolution of various aspects of the X-ray spectrum of 1ES~1927+654 from May 2018 through June 2021. In all panels, blue circles correspond to a single \textit{NICER} ObsID and red stars are measurements from simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observations. Where shown, the orange dashed line corresponds to the pre-outburst level reported in \citet{Gallo2013}. The vertical dashed black lines highlight three distinct evolutionary phases, which were identified based on properties of the X-ray light curve and changes in the X-ray spectral properties. \textit{Top panel:} X-ray light curve of 1ES~1927+654 showing the 0.3-10~keV luminosity. The green dot-dashed line shows the Eddington limit for a $10^6$ M$_\odot$ black hole, highlighting the plateau in the X-ray light curve. \textit{Second panel:} Equivalent width of the 1~keV Gaussian line. Only fits with $< 50\%$ uncertainty on the equivalent width measurement and with Gaussian line significance of $> 99.99\%$ are shown here. \textit{Third panel:} Temperature of the blackbody spectral component. \textit{Bottom panel:} Evolution of the photon index of the power law spectral component. We exclude the \textit{NICER} data points where $\Gamma$ was fixed at 3 or where the relative uncertainty on $\Gamma$ was greater than 20\% (see Section \ref{subsec:phenom_model}).}
\label{fig:lumin}
\end{figure*}
In Figure \ref{fig:lumin}, we show the evolution of the X-ray properties of 1ES~1927+654 over the course of the last three years with \textit{NICER}, \textit{XMM-Newton}, and \textit{NuSTAR} observations. The long-term light curve of 1ES~1927+654 (with X-ray luminosities measured from the modeling in Section \ref{subsec:phenom_model}), shown in the top panel, is unlike any accreting source we have ever observed before. The high-cadence \textit{NICER} observations show rapid variability on the X-ray rise, contrasted with an extremely smooth decline in X-ray flux. The orange dashed line indicates the pre-outburst X-ray luminosity reported by \citet{Gallo2013} based on the \textit{XMM-Newton} and \textit{Suzaku} observations from 2011. At late times, the X-ray luminosity seems to be trending back toward its pre-outburst state and the X-ray spectrum is remarkably similar to the pre-outburst X-ray spectrum, which we discuss further in Section \ref{subsec:comp2arch}. The 0.3-10 keV X-ray luminosity of 1ES~1927+654 appears to reach its maximum around the Eddington limit for a $10^6$ M$_\odot$ black hole as shown in the green dot-dashed line in the top panel of Figure \ref{fig:lumin}. As the X-ray spectrum of 1ES~1927+654 is unlike any AGN ever observed before, we cannot apply a simple bolometric correction to estimate the bolometric luminosity and accretion rate. However, multi-wavelength observations with high optical/UV luminosity at early times suggest an extended super-Eddington phase until near the point at which the X-ray luminosity decreases (Li et al. in prep).
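For reference, the plateau level can be compared directly to the standard Eddington luminosity for fully ionized hydrogen,
\begin{equation*}
L_\mathrm{Edd} = \frac{4 \pi G M m_\mathrm{p} c}{\sigma_\mathrm{T}} \approx 1.26 \times 10^{38} \left(\frac{M}{M_\odot}\right) \mathrm{erg\ s^{-1}},
\end{equation*}
which evaluates to $L_\mathrm{Edd} \approx 1.3 \times 10^{44}$ erg s$^{-1}$ for $M = 10^6$ M$_\odot$, consistent with the observed plateau at $L_{0.3-10\, \mathrm{keV}} \approx 1.2 \times 10^{44}$ erg s$^{-1}$.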
In Figure \ref{fig:lumin}, we also show three different evolutionary phases separated by vertical black dotted lines. These phases are identified based on the spectral properties and evolution of the light curve. The first phase corresponds to the early portion of the light curve, from the first X-ray observations to mid-2019 (MJD $<$ 58680), and is dominated by rapid X-ray variability and a weak power law component. We chose the second phase to start where the X-ray light curve became very stable with little variability. This phase extends until an MJD of 58900, at which point the X-ray luminosity starts to drop and the photon index of the power law begins to decrease.
\subsection{Phenomenological Modeling} \label{subsec:phenom_model}
\subsubsection{Phenomenological Modeling with NICER}
To assess the evolution of the X-ray spectral shape, we applied a simple phenomenological model to all of the \textit{NICER} data, including Galactic and source absorption, a power law, and a blackbody (i.e. \texttt{tbabs*ztbabs*(zbb+zpower)} in XSPEC notation). In addition to these simple continuum components, the early X-ray observations of 1ES~1927+654 also reveal a prominent emission-like excess around 1~keV. We explore a physically-motivated reflection model of this feature in Section \ref{sec:physmod}, but for the purposes of tracking the line evolution, we include a Gaussian emission line model in our simple phenomenological model. Thus, our overall phenomenological model is \texttt{tbabs*ztbabs*(zbb+zpower+zgauss)} in XSPEC notation. The line is only included in our modeling when it is significant at the 99.99\% level, using the F-test as an approximate measure of statistical significance. As discussed further in Section \ref{subsec:disapp_1keV}, we only included the line up until the end of Phase 1 (up to an MJD of 58680) when the source has reached its peak X-ray luminosity, at which point the line has either disappeared or is indistinguishable from a broad continuum component. Thus, we fit each \textit{NICER} observation with one of two models, either \texttt{tbabs*ztbabs*(zbb+zpower)} or \texttt{tbabs*ztbabs*(zbb+zpower+zgauss)}, depending on the above criteria for the inclusion of a Gaussian emission line.
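The chance probability associated with adding the Gaussian component can be estimated from the standard F-statistic for an additive model component (the calculation performed by XSPEC's \texttt{ftest} command); a minimal sketch in Python:
\begin{verbatim}
from scipy.stats import f as f_dist

def ftest_pvalue(chi2_simple, dof_simple, chi2_line, dof_line):
    # F-statistic for an additive component, as in XSPEC's ftest:
    # the improvement in chi^2 per extra parameter, relative to the
    # reduced chi^2 of the more complex model
    F = ((chi2_simple - chi2_line) / (dof_simple - dof_line)) \
        / (chi2_line / dof_line)
    return f_dist.sf(F, dof_simple - dof_line, dof_line)

# The Gaussian line is retained when ftest_pvalue(...) < 1e-4,
# i.e. when the line is significant above the 99.99% level.
\end{verbatim}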
When fitting, we adopted abundances from \citet{Wilms2000} and cross-sections from \citet{Verner1996}. The redshift of each component was fixed at the source redshift of $z = 0.019422$. Galactic absorption from the \texttt{tbabs} model was fixed at $N_H = 6.42 \times 10^{20}$ cm$^{-2}$ \citep{HI4PICollaboration2016}. The fit parameters for the base continuum model were the column density, blackbody temperature, blackbody normalization, photon index and power law normalization in each observation. When the Gaussian line was included in the model, the line energy, width, and normalization were all free in spectral fitting, but we bounded the Gaussian line energy to be between 0.8 and 1.2~keV and the Gaussian line width to be $< 0.3$~keV. These choices were made to ensure that the Gaussian line was picking up the 1~keV feature and not blurring into some additional continuum component.
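For concreteness, this setup can be reproduced in PyXspec roughly as follows. This is a sketch only: the input file name is a placeholder, and the component names \texttt{zTBabs}, \texttt{zbbody}, \texttt{zpowerlw}, and \texttt{zgauss} are the standard XSPEC equivalents of the shorthand used above.
\begin{verbatim}
from xspec import AllData, Fit, Model, Xset

Xset.abund = "wilm"   # Wilms et al. (2000) abundances
Xset.xsect = "vern"   # Verner et al. (1996) cross-sections

AllData.clear()
AllData("nicer_obsid_grouped.pha")   # hypothetical grouped spectrum
AllData.ignore("bad")

m = Model("TBabs*zTBabs*(zbbody + zpowerlw + zgauss)")
m.TBabs.nH.values = 0.0642           # Galactic N_H in 10^22 cm^-2
m.TBabs.nH.frozen = True
for comp in (m.zTBabs, m.zbbody, m.zpowerlw, m.zgauss):
    comp.Redshift.values = 0.019422
    comp.Redshift.frozen = True

# Bound the Gaussian so it picks up the 1 keV feature only
m.zgauss.LineE.values = "1.0, 0.01, 0.8, 0.8, 1.2, 1.2"
m.zgauss.Sigma.values = "0.1, 0.01, 0.0, 0.0, 0.3, 0.3"

Fit.statMethod = "chi"
Fit.perform()
\end{verbatim}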
When fitting \textit{NICER} data, we only considered energies where the source flux is higher than the background. This included data from 0.3~keV up to an upper bound $E_\mathrm{upper}$ which varied from less than 1~keV up to 5~keV over the observation period. For $E_\mathrm{upper} < 2.2$~keV, we found that the photon index was poorly constrained with a fractional uncertainty greater than 20\%. Hence, we chose to fix the photon index at $\Gamma = 3$ \citep[as in][]{Ricci2021} for any \textit{NICER} observations where $E_\mathrm{upper} < 2.2$~keV. We also tested a cutoff power law with \textit{NICER} observations, but found that there was a significant degeneracy between the photon index and the cutoff energy given the low values of $E_\mathrm{upper}$.
Given the large number of \textit{NICER} observations, we automated this fitting procedure and thus needed to apply some simple cuts to ensure that we are only including converged fits. We made a simple $\chi^2_\nu$ cut, keeping only the fits that have $\chi^2_\nu < 2$. This may have removed some observations with additional spectral complexity, but our goal with the phenomenological modeling was to provide a cohesive picture for the X-ray continuum evolution in 1ES 1927+654. In Section \ref{sec:physmod}, we perform more detailed physical modeling of the \textit{XMM-Newton}/\textit{NuSTAR} spectra, where we address these additional spectral complexities. Additionally, we noticed a few observations with poor background estimation, which showed up in the fitting procedure as artificially low photon indices in the \texttt{zpower} model (pegged at the minimum allowed $\Gamma$ of 1.4), and also excluded those observations from our results. These observations were sparsely and randomly spaced throughout the light curve, indicating that this likely was not an intrinsic change in the source properties. Finally, we only kept fits with sufficient constraints on the temperature of the blackbody and the photon index of the power law component, implementing a cut at 20\% fractional uncertainty for both of these parameters. These conservative cuts left us with 438 \textit{NICER} ObsIDs out of a total of 495 ObsIDs considered, totalling more than 1 Ms of observation time presented in this analysis. In Appendix \ref{sec:app1}, we show the phenomenological modeling for three \textit{NICER} observations, which were taken close to simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observations and are representative of the three evolutionary phases highlighted in Figure \ref{fig:lumin}.
\subsubsection{Phenomenological Modeling with XMM-Newton \& NuSTAR}
We followed a similar procedure for each simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observation, jointly fitting the two spectra for each observation period with a simple phenomenological model. Likewise, we followed the same procedure as with the \textit{NICER} analysis for deciding whether or not to include a Gaussian emission line in the modeling. For the \textit{XMM-Newton} data, we only consider data between 0.3~keV and wherever the spectrum becomes background dominated, which varied between 3~keV (Epoch 1) and 10~keV (Epoch 6). Similarly, for the six \textit{NuSTAR} observations in which the source was detected, we considered data between 3~keV and wherever the background dominated, which varied between 8~keV (Epoch 2) and 20~keV (Epoch 7). The addition of the \textit{NuSTAR} data up to higher energies allowed us to break the degeneracy between the photon index and cutoff energy, and revealed that a cutoff in the power law distribution is required to most accurately describe the spectrum. Hence, when jointly fitting the \textit{XMM-Newton}/\textit{NuSTAR} data, we used a cutoff power law instead of a simple power law (i.e. \texttt{tbabs*ztbabs*(zbb+zcutoffpl)} or \texttt{tbabs*ztbabs*(zbb+zcutoffpl+zgauss)} in XSPEC notation). We fixed the cutoff energy at 300~keV when it was poorly constrained, which occurred in Epoch 1 when the source was undetected with \textit{NuSTAR} and in Epoch 7 when the cutoff energy was outside of the \textit{NuSTAR} bandpass. We note that fitting with a cutoff power law instead of a simple power law gives slightly lower photon indices compared to the \textit{NICER} values (as seen in the bottom panel of Figure \ref{fig:lumin}), which is in line with what we would expect when neglecting a high energy cutoff in the \textit{NICER} data. We show the results of this phenomenological modeling for three of the \textit{XMM-Newton}/\textit{NuSTAR} observations in Appendix \ref{sec:app1}, with one observation from each evolutionary phase; these observations are used later in Section \ref{sec:physmod} for physically-motivated spectral modeling.
\subsection{Disappearance of the 1~keV Line} \label{subsec:disapp_1keV}
In Figure \ref{fig:1keV_eqw}, we show a ratio plot of the first four \textit{XMM-Newton} observations relative to the cutoff power law and blackbody phenomenological model. The base continuum model indicates an emission line at 1~keV and/or an absorption line between 1 and 2~keV, but the absorption residual is removed when including a 1~keV emission line in the spectral model. There is a clear trend in the strength of the 1~keV feature with time; the feature is strongest in the Epoch 1 \textit{XMM-Newton} observation and decreases in strength during the next two epochs. In the Epoch 4 spectrum, no strong 1~keV feature is present, although there are possibly additional broad continuum features that may arise from super-Eddington accretion during the peak X-ray luminosity phase.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_xmm_1keVratio.eps}
\caption{Ratio of data fit to a simple continuum model with a cutoff power law and blackbody for the first four \textit{XMM-Newton} observations. The 1~keV feature is evident in the first three observations, but is not as prominent in the fourth observation. The first observation shows the strongest 1~keV feature. Data have been binned for visual purposes only.}
\label{fig:1keV_eqw}
\end{figure}
To assess the evolution of the 1~keV line, we looked at the equivalent width of the Gaussian model as a function of time. For this analysis, we used the same phenomenological modeling previously discussed and froze the Gaussian line at 1~keV to avoid a degeneracy between a weak line around 1~keV and an extremely broad line at the extremes of the model. This is shown for both \textit{NICER} and \textit{XMM-Newton} data in blue circles and red stars, respectively, in the second panel of Figure \ref{fig:lumin}. There is a large amount of scatter in the \textit{NICER} data at early times, which can be partially attributed to the large variability in luminosity of the \textit{NICER} observations, as the observations with higher luminosity have lower equivalent widths.
Both sets of observations show the same general trends. Namely, the 1~keV line shows a very clear transition from large equivalent widths to extremely low equivalent widths ($\lesssim 20$~eV) around an MJD of 58680. This corresponds roughly to when the X-ray luminosity in the top panel of Figure \ref{fig:lumin} reaches its plateau at around $L_{0.3-10\, \mathrm{keV}} \approx 1.2 \times 10^{44}$ erg s$^{-1}$. To ensure that the decrease in equivalent width was not just the result of an increased continuum, we also checked the line flux, which also dropped slightly at high luminosities, indicating that both factors play a role in observing such small equivalent widths at peak X-ray luminosity. After the plateau, the X-ray luminosity drops back to its pre-outburst level (similar to that of Epoch 1), yet the 1~keV line does not reappear. Hence, for the remainder of the phenomenological modeling presented, we exclude the Gaussian component from the model beyond an MJD of 58680, which is shown as a vertical black dashed line (transition from Phase 1 to Phase 2) in Figure \ref{fig:lumin}. This boundary is near where the analysis performed by \citet{Ricci2021} ends, so we refer the reader to that work for a detailed description of the line evolution before it disappeared. In Section \ref{sec:physmod}, we present a physically-motivated reflection model to explain the 1~keV feature and discuss how the evolution could be the result of suppression due to over-ionization effects.
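For a Gaussian line on a smooth continuum, the equivalent width quoted here is, to a good approximation, the total line photon flux divided by the continuum photon flux density at the line energy. A minimal numerical sketch (the names are illustrative):
\begin{verbatim}
def gaussian_eqwidth_keV(line_norm, continuum_at_line):
    """Approximate equivalent width of a Gaussian emission line.

    line_norm         : total line photon flux (the zgauss norm),
                        in photons/cm^2/s
    continuum_at_line : continuum photon flux density at the fixed
                        1 keV line energy, in photons/cm^2/s/keV
    """
    return line_norm / continuum_at_line   # EW in keV

# e.g. a line with norm 5e-5 photons/cm^2/s on a continuum of
# 5e-3 photons/cm^2/s/keV gives EW = 0.01 keV = 10 eV
\end{verbatim}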
\subsection{Temperature Evolution} \label{subsec:temp_evol}
Shortly after the outburst began, the X-ray spectrum of 1ES~1927+654 was well-characterized by a relatively hot blackbody with a temperature of $kT_\mathrm{bb} \approx$ 0.1~keV, which increased to nearly 0.2~keV as the X-ray luminosity increased \citep{Ricci2021}. However, standard thin accretion disks around supermassive black holes are expected to be around $kT_\mathrm{bb} \lesssim 0.05$~keV. Thus, this hotter blackbody emission could be indicative of emission from a hot, super-Eddington inner accretion flow. The top panel of Figure \ref{fig:tempevol} shows the evolution of the blackbody temperature as a function of time over the entire observing period. Indeed, we find that the blackbody continues to be quite hot, plateauing around $kT_\mathrm{bb} \approx 0.2$~keV near peak X-ray luminosity and remaining close to this value during the X-ray decline.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_LTevol_phases.pdf}
\caption{\textit{Top}: Temperature of the blackbody as a function of time. The \textit{NICER} data points are shown as circles and the \textit{XMM-Newton}/\textit{NuSTAR} data points are shown as stars. The data points are colored based on the phase from Figure \ref{fig:lumin}, with blue, magenta, and orange corresponding to Phases 1, 2, and 3, respectively. The phase boundaries from Figure \ref{fig:lumin} are overplotted as black, vertical dotted lines. \textit{Bottom}: Luminosity of the blackbody as a function of the temperature. Again, the \textit{NICER} data points are shown as circles and the \textit{XMM-Newton}/\textit{NuSTAR} data points are shown as stars, which are again colored by the evolutionary phase. Dashed lines show the best fit to $L \propto T^n$ for Phases 1 and 3, with the shaded regions corresponding to the 90\% confidence intervals. The legend gives the exponent and uncertainty for each fit.}
\label{fig:tempevol}
\end{figure}
The trends of the blackbody temperature with luminosity can be a useful gauge of the nature of the blackbody component. Blackbodies with constant emitting area and standard thin accretion disks are expected to follow $L \propto T^4$ \citep{Shakura1973}, whereas advection-dominated disks are expected to deviate from this trend and follow $L \propto T^2$ \citep[e.g.][]{Watarai2000}. In contrast, the soft excess, both in typical AGN and in the pre-outburst spectrum of 1ES~1927+654, is roughly constant in temperature, with $kT_\mathrm{bb} \approx 100-150$~eV, independent of accretion rate and black hole mass \citep[e.g.][]{Gierlinski2004,Piconcelli2005,Miniutti2009}.
We utilize these differences in expected behavior of the blackbody temperature to roughly assess the changes in the accretion state of 1ES~1927+654 over the course of the outburst. In the bottom panel of Figure \ref{fig:tempevol}, we show the luminosity of the blackbody as a function of its temperature, where the luminosity is determined using the normalization of the blackbody in XSPEC, given by $K_\mathrm{bb} = L_{39} / (D_{10} (1 + z))^2$, where $L_{39}$ is the blackbody luminosity in units of $10^{39}$ erg s$^{-1}$ and $D_{10}$ is the distance to the source in units of 10 kpc. We color code the data by the phases in Figure \ref{fig:lumin} and fit the data in Phases 1 and 3 to the relationship $L \propto T^n$ using orthogonal distance regression (ODR). The results of fitting each individual group of data to $L \propto T^n$ are given in the legend of the bottom panel of Figure \ref{fig:tempevol}. We do not fit the data in Phase 2 as this corresponds to a relatively stable period where neither the luminosity nor the temperature of the blackbody component are changing enough to provide well-constrained fits to $L \propto T^n$. For the Phase 3 data, we excluded three data points at late times with low temperatures ($kT_\mathrm{bb} \approx 0.05$~keV) from the orange data points as these are likely to be from a different physical component of the system (the disk rather than the soft excess).
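The $L \propto T^n$ fits are linear in log space. A short sketch of the procedure with \texttt{scipy.odr}, with synthetic demonstration inputs standing in for the per-observation blackbody fit results:
\begin{verbatim}
import numpy as np
from scipy import odr

# Luminosity from the XSPEC blackbody normalization:
#   K_bb = L_39 / (D_10 * (1 + z))**2
#   =>  L = 1e39 * K_bb * (D_10 * (1 + z))**2   [erg/s]

def linear(beta, x):
    # log10 L = n * log10 T + c  <=>  L proportional to T^n
    return beta[0] * x + beta[1]

rng = np.random.default_rng(0)
logT = np.linspace(np.log10(0.08), np.log10(0.2), 40)  # synthetic
logL = 2.0 * logT + 44.5 + rng.normal(0.0, 0.05, logT.size)
logT_err = np.full_like(logT, 0.01)
logL_err = np.full_like(logL, 0.05)

data = odr.RealData(logT, logL, sx=logT_err, sy=logL_err)
result = odr.ODR(data, odr.Model(linear), beta0=[4.0, 44.0]).run()
n, n_err = result.beta[0], result.sd_beta[0]
print(f"L ~ T^({n:.2f} +/- {n_err:.2f})")
\end{verbatim}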
The data from Phase 1, shown in blue in Figure \ref{fig:tempevol}, are inconsistent with the standard $L \propto T^4$ picture for a thin accretion disk or constant emitting area blackbody. The slope is closer to the expected $L \propto T^2$ relationship for an advection dominated disk, although it is still shallower. In combination with the hot blackbody temperature, this is indicative of a hot super-Eddington inner accretion flow. On the other hand, the late-time Phase 3 data are fit with an extremely steep slope in the $L-T$ plane, indicative of a temperature that is close to constant with luminosity. This could be the result of a rapidly shrinking effective blackbody radius or could be a phenomenological manifestation of the soft excess in AGN, given the relatively constant, high temperature relative to standard accretion disks. However, in this period there are still more significant deviations in the blackbody temperature than typically seen in AGN soft excesses, which could be the result of continued build up of the soft excess. Further monitoring of the source in its post-outburst state is necessary to determine the nature of the late-time blackbody component.
\subsection{Return of the Power Law Component and Softer-when-Brighter Behavior} \label{subsec:powerlaw}
When 1ES~1927+654 was originally observed in X-rays following the optical transient, the power law component commonly seen in AGN was extremely weak and soon disappeared as the luminosity dropped \citep{Ricci2020,Ricci2021}. Using our phenomenological model for 1ES~1927+654, we tracked the evolution of the power law component as it returned to the spectrum during the X-ray rise. We show the evolution of the flux of the power law component in the top panel of Figure \ref{fig:fluxes}, and in the bottom panel of Figure \ref{fig:lumin}, we show the evolution of the photon index of the power law. In Figure \ref{fig:fluxes}, we also show the blackbody flux in the top panel and the ratio of power law to blackbody flux in the bottom panel to assess the evolution of the dominant component of the spectrum.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_fluxes_withxmm.eps}
\caption{\textit{Top:} Power law (blue) and blackbody (orange) flux evolution in 1ES~1927+654. Both fluxes are computed in the 0.3-10 keV band. The stars show the \textit{XMM-Newton}/\textit{NuSTAR} data and the circles are the \textit{NICER} data. For visual clarity, we do not plot any data points with an uncertainty greater than 20\% on either flux measurement. The black dotted lines show the phase boundaries defined in Figure \ref{fig:lumin}. \textit{Bottom:} Evolution of the ratio of power law to blackbody fluxes, with phase boundaries again overplotted. The stars show the \textit{XMM-Newton}/\textit{NuSTAR} data and the circles are the \textit{NICER} data. The pre-outburst flux ratio is shown with an orange dashed line.}
\label{fig:fluxes}
\end{figure}
During the rise in X-ray luminosity in Phase 1, the power law component began to reappear and increase drastically in flux, exhibiting rapid variability similar to the observed variability in the X-ray light curve in the top panel of Figure \ref{fig:lumin}. Despite significant changes in flux, when we can constrain the photon index during Phase 1 and Phase 2, it remained relatively constant with $\Gamma \approx 3.5$, which is much steeper than both the pre-outburst spectrum ($\Gamma \approx 2.4$) and the typical photon index in AGN \citep[$\Gamma \approx 1.8$; e.g.][]{Ricci2017}. Then, when the X-ray luminosity begins to drop at the beginning of Phase 3, both the power law flux and photon index decrease steadily, ultimately returning close to the pre-outburst flux and photon index from the 2011 \textit{XMM-Newton} spectrum. Given the negligible power law emission in the X-ray spectra of the early Phase 1 observations, this rapid evolution back to the pre-outburst state suggests that the formation of the X-ray corona in AGN can be a rapid process. We further discuss the implications of these findings for the evolution of the X-ray corona in Section \ref{subsec:disc_corona}.
Combining the decreasing photon index (i.e. the hardening of the X-ray spectrum) with the decreasing luminosity of 1ES~1927+654 during the Phase 3 observations leads to the standard ``softer-when-brighter" behavior that is typically exhibited by AGN \citep[e.g.][]{Shemmer2006,Sobolewska2009}. The top panel of Figure \ref{fig:hid} shows the luminosity versus hardness ratio, with the points colored by the phase from Figure \ref{fig:lumin}, highlighting the softer-when-brighter behavior at late times and the stark contrast to the early X-ray observations, which showed ``harder-when-brighter" behavior during the X-ray rise \citep{Ricci2021}. Interestingly, harder-when-brighter behavior is also a commonly observed trend during the flares of quasi-periodic eruptions \citep[QPEs; e.g.][]{Miniutti2019,Giustini2020,Arcodia2021,Chakraborty2021} and in some ultra-luminous X-ray sources \citep[ULXs; e.g. NGC 247 ULX-1;][]{D'Ai2021}. We discuss this association further in Section \ref{subsec:disc_compare}, and compare the rapid variability during the X-ray rise to QPE behavior.
To investigate the nature of the late-time softer-when-brighter behavior, we looked into the relationship between the photon index and luminosity, whose correlation in standard AGN indicates that this behavior is driven by the X-ray corona \citep[e.g.][]{Sobolewska2009}. The bottom panel of Figure \ref{fig:hid} shows the photon index versus luminosity relationship, colored again by the phases identified in Figure \ref{fig:lumin}. In the late-time Phase 3 data, a clear trend is found where the photon index decreases for decreasing luminosity, consistent with what other standard AGN studies have found.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_hardness_gamma_phases.eps}
\caption{\textit{Top:} Hardness-intensity diagram for 1ES~1927+654 computed using model-based fluxes in 2-10~keV for the hard band and in 0.3-2~keV for the soft band. The stars show the \textit{XMM-Newton}/\textit{NuSTAR} data and the circles are the \textit{NICER} data. Data points are colored based on the phases identified in Figure \ref{fig:lumin}, with blue, magenta, and orange corresponding to Phases 1, 2, and 3, respectively. For visual clarity, we do not plot any data points with an uncertainty greater than 20\% on the hardness ratio. We also exclude the data points with $\Gamma$ frozen (as detailed in Section \ref{subsec:phenom_model}) as their 2-10~keV flux is poorly determined by the modeling. The softer-when-brighter behavior associated with the corona is evident at late times in the Phase 3 data. \textit{Bottom:} Relationship between photon index and X-ray luminosity, colored again by evolutionary phase. The stars show the \textit{XMM-Newton}/\textit{NuSTAR} data and the circles are the \textit{NICER} data. We again exclude data points with $\Gamma$ frozen. The late-time data follow $\Gamma \propto L_X^{0.08}$, shown with the black dashed line, as found for standard AGN in \citet{Sobolewska2009}.}
\label{fig:hid}
\end{figure}
One possibility to explain the transition from harder-when-brighter to softer-when-brighter is that the dominant component of the spectrum has switched from thermal to Comptonized. The ratio of the power law to blackbody flux, shown in the bottom panel of Figure \ref{fig:fluxes}, is extremely low at early times, constant during much of the peak X-ray luminosity phase and the beginning of the X-ray decline, and increases later on in the X-ray decline. This implies that the early harder-when-brighter behavior is likely due to dominant blackbody emission and an increasing ratio of power law to blackbody flux. Once the source has reached its peak X-ray luminosity around an MJD of 58680, the ratio of the fluxes remains remarkably constant. Then at late times, both the decreasing photon index of the power law and the increase in the flux ratio of power law to blackbody flux likely contribute to the softer-when-brighter behavior.
\vspace{1cm}
\subsection{Similarity of Latest Observations to Pre-Outburst Observations} \label{subsec:comp2arch}
Despite significant changes in the spectral shape and X-ray luminosity over the course of the past 3 years, the most recent observations of 1ES~1927+654 look remarkably similar to the pre-outburst \textit{XMM-Newton} observations from May 2011. In Figure \ref{fig:ep07_compare}, we show a comparison between the two spectra, highlighting the similarity of their spectral shapes. Pre-outburst observations spanning from 1990-2011 with \textit{XMM-Newton}, \textit{Suzaku}, \textit{Chandra}, and \textit{ROSAT} all showed a relatively steep X-ray spectrum ($\Gamma \approx 2.4-2.7$), with a rather hot soft excess and little obscuration \citep{Boller2003, Gallo2013}. The best phenomenological fit in \citet{Gallo2013} to the pre-outburst 2011 observation includes a $\Gamma = 2.39\pm0.04$ power law and $kT = 170\pm5$~eV blackbody. Both relativistically blurred reflection and ionized outflows, two physical models for the soft excess, provide a good fit to the pre-outburst spectrum \citep{Gallo2013}. The narrow Fe K$\alpha$ line, which is a nearly ubiquitous feature in AGN, was notably not detected in the 2011 spectrum. The upper limit on the equivalent width was significantly below what would be expected based on the X-ray Baldwin effect, whereby the Fe K$\alpha$ equivalent width is inversely correlated with the X-ray luminosity \citep[e.g.][]{Iwasawa1993,Page2004,Bianchi2007,Ricci2014}, suggesting that the circumnuclear environment was devoid of gas and dust.
We find that the most recent simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observation of 1ES~1927+654 can also be fit well by a power law and blackbody model with very similar fit parameters, finding a photon index of $\Gamma = 2.35 \pm 0.03$ and a blackbody temperature of $kT_\mathrm{bb} = 183_{-6}^{+7}$~eV. This gives a satisfactory fit with $\chi^2_\nu = 1.09$ for $\nu = 655$ degrees of freedom, but this fit can be improved by including an additional low temperature blackbody that could be from the accretion disk with $kT_\mathrm{bb} = 40 \pm 4$~eV, giving $\chi^2_\nu = 1.00$ for $\nu = 653$ ($\Delta\chi^2 = 63$ for 2 additional degrees of freedom). We find that a number of physical models for the soft excess can provide a good fit to this spectrum, indicating that the hotter blackbody component is likely associated with the soft excess as in the pre-outburst spectrum. Details of the physical modeling of the soft excess are outlined further in Section \ref{subsec:jan2021}.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_ep07_compare.pdf}
\caption{Comparison of the spectra from the pre-outburst 2011 \textit{XMM-Newton} observation (orange) and the most recent joint \textit{XMM-Newton}/\textit{NuSTAR} observation taken in January 2021 (blue). \textit{NuSTAR} FPMA and FPMB spectra from the January 2021 observation are in lighter shades of blue. Both spectra are unfolded using a $\Gamma = 0$ power law and have been rebinned for visual purposes. \textit{Top:} Unfolded spectra for both observations. \textit{Bottom:} Unfolded spectra have been renormalized such that they have the same 0.3-10 keV flux.}
\label{fig:ep07_compare}
\end{figure}
Given the softness of the spectrum, the source is only dominant over the \textit{NuSTAR} background up to 20~keV, making it difficult to constrain the cutoff energy of the corona. The above phenomenological modeling suggests a cutoff energy of $E_\mathrm{cut} \gtrsim 15$~keV, although this value is somewhat dependent on the choice of model. We further explore the cutoff energy evolution and what this tells us about the formation and heating processes and timescales in the corona in Section \ref{subsec:disc_corona}, but note that the latest \textit{XMM-Newton}/\textit{NuSTAR} observations reveal a cutoff energy that is not unlike those of other AGN coronae, suggesting that the source has returned close to its pre-outburst AGN state. We also find that the \textit{XMM-Newton} residuals show a hint of a feature around 6.4~keV in this observation, but no such feature seems to exist in the \textit{NuSTAR} data with lower energy resolution. Adding an additional Gaussian line at 6.4~keV to an \textit{XMM-Newton}-only fit is significant at the 97\% confidence level (using an F-test to provide an approximation of the significance), but when including the \textit{NuSTAR} data we find that an additional Gaussian component at 6.4~keV does not provide any statistically-significant improvement to the fit.
Following the latest \textit{XMM-Newton}/\textit{NuSTAR} observations, the \textit{NICER} data reveal that 1ES~1927+654 brightened again by a factor of approximately four in 0.3-10~keV luminosity over the span of approximately two months. Archival flux values from \textit{ROSAT} and \textit{Chandra} observations in the 1990s and early 2000s reveal that this flux increase is not unprecedented for 1ES~1927+654 \citep[see Figure 9 from][]{Gallo2013}. This second rise is notably different from the initial X-ray rise in early 2018 as it is not rapidly variable, instead following the smooth nature seen in the X-ray decline, which could suggest that this change is more indicative of standard stochastic AGN variability. The latest \textit{NICER} data studied in this work are still fit well by the simple power law plus blackbody model discussed in Section \ref{subsec:phenom_model} with an average fit statistic of $\chi^2_\nu = 1.13$. Compared to the latest \textit{XMM-Newton}/\textit{NuSTAR} fits, the latest \textit{NICER} observations reveal that the average power law is steeper with $\Gamma \approx 2.9$, consistent with the typical softer-when-brighter behavior exhibited by AGN, and the average temperature of the blackbody is lower with $kT_\mathrm{bb} \approx 150$~eV.
\section{Physically-Motivated Modeling with Blurred Reflection} \label{sec:physmod}
One of the most striking features in the early spectra of 1ES~1927+654 was a broad emission-like excess around 1~keV. In Section \ref{subsec:disapp_1keV} we modeled the 1~keV feature as a simple Gaussian emission line and showed that the equivalent width drops when the source reaches its peak X-ray luminosity (near the Phase 1--Phase 2 boundary in Figure \ref{fig:lumin}). In this section, we explore a potential physical model for the 1~keV feature, namely reflection from a single-temperature blackbody irradiating spectrum, that can also account for broad residuals in the peak X-ray luminosity spectrum. We focus on three \textit{XMM-Newton}/\textit{NuSTAR} epochs which correspond to each of the three unique evolutionary phases highlighted in Figure \ref{fig:lumin}. Specifically, we use the data from \textit{XMM-Newton}/\textit{NuSTAR} Epochs 1, 4, and 7 as these are representative extremes in each of the regimes.
\subsection{June 2018 XMM-Newton Observation and the 1~keV Feature} \label{subsec:june2018}
The first \textit{XMM-Newton} observation of 1ES~1927+654 after the optical outburst began (June 2018, Epoch 1) shows the strongest 1~keV feature, as can be seen from the ratio plot in Figure \ref{fig:1keV_eqw}. The residuals resemble those of the ultrafast outflow in ASASSN-14li, although at a higher energy \citep{Kara2018}. To test this idea, we performed photoionized absorption modeling of the feature using an \textsc{xstar} absorption grid, which assumed that the ionizing spectrum was a blackbody with $kT_\mathrm{bb} = 100$ eV. However, we were unable to fit the data with a single absorber, finding a poor fit with $\chi^2_\nu > 2$. A model with two separate absorption components provides a slightly better fit ($\chi^2_\nu \approx 1.5$), but requires extreme parameters, including a blueshift of $z \approx -0.5$ for one of the absorbers, which would greatly exceed any observed outflow velocity seen in other accreting sources. An alternative explanation is that the feature is seen instead in emission. The width of the feature is on the order of 0.1~keV \citep[and even appears broad in the \textit{XMM-Newton} RGS data as seen in][]{Ricci2021}, suggesting emission from relatively close to the black hole and motivating the use of blurred reflection to model the 1~keV feature.
Reflection modeling in AGN usually assumes that the hot, optically thin corona irradiates the disk with a cutoff power law or thermally Comptonized irradiating spectrum \citep[e.g. \texttt{xillver}, \texttt{xillverCp};][]{Garcia2010,Garcia2013}. This is not applicable in the early X-ray observations of 1ES~1927+654 given the extremely weak power law component and dominant soft thermal component. To model reflection we therefore used a new model, \texttt{xillverTDE}, which models reflection from a single-temperature blackbody irradiating spectrum, reminiscent of the thermal continuum in the early X-ray spectra of 1ES~1927+654. The \texttt{xillverTDE} model is a new flavor of the \texttt{xillver} suite of models, which utilizes the largest atomic database and most accurate radiative transfer calculations for reflection. The model has seven free parameters, including the incident blackbody temperature, iron abundance, ionization parameter, disk density, inclination, redshift, and normalization.
Blackbody illuminating spectra are not only useful for modeling reflection in the soft thermal spectra of TDEs, but are also a key ingredient in modeling reflection from the surface or boundary layer of neutron stars and returning radiation in black hole accretion systems. Recently, a flavor of the \texttt{xillver} models called \texttt{xillverNS} has been developed to model the reflected emission from a single-temperature blackbody illuminating spectrum in neutron stars \citep{Garcia2022}. Developed for X-ray binary systems, this model covers hotter blackbody temperatures than \texttt{xillverTDE}, which spans cooler temperatures in the range $kT_\mathrm{bb} = 0.03-0.3$~keV. \texttt{xillverNS} has proven extremely valuable for probing neutron star properties through reflection modeling \citep[e.g.][]{Ludlam2018,Ludlam2019,Ludlam2020} and as a probe for returning radiation in black hole X-ray binaries \citep[e.g.][]{Connors2020,Connors2021}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{1ES1927_ep1_refl_schematic.pdf}
\caption{\textit{Left:} First \textit{XMM-Newton} observation fit to the model \texttt{tbabs*ztbabs*(zbb+zpower+gsmooth(xillverTDE))}. The top panel shows the unfolded spectrum with the three additive spectral components. The middle panel shows the ratio plot to a simple blackbody and power law fit without the \texttt{xillverTDE} component to highlight the vast improvement that the component has on the model. The bottom panel shows the ratio of the data to the model including \texttt{xillverTDE}. Data have been rebinned for visual purposes. \textit{Right:} Schematic showing the \texttt{xillverTDE} reflection modeling off of an optically-thick, super-Eddington outflow. }
\label{fig:xillverTDE}
\end{figure*}
To test whether reflection can provide a good description of the 1~keV emission in 1ES~1927+654, we utilize the \texttt{xillverTDE} model to fit the Epoch 1 \textit{XMM-Newton} data. In XSPEC notation, the model used is \texttt{tbabs*ztbabs*(zbb+zpower+gsmooth(xillverTDE))}. Recent simulations suggest that line profiles from super-Eddington disks are more symmetric and blueshifted compared to those from standard thin accretion disks \citep[e.g.][]{Thomsen2019,Thomsen2022}. Although the X-ray luminosity is about 10\% of the Eddington limit for a $10^6\,M_\odot$ black hole, multi-wavelength SED modeling suggests that the overall luminosity is super-Eddington (Li et al. in prep). Hence, we allowed the \texttt{xillverTDE} component to be blueshifted and smoothed the model with a constant velocity broadening to avoid the assumption of a standard thin accretion disk that is invoked in relativistic convolution models like \texttt{relconv} \citep{Dauser2010}. We also link the incident blackbody temperature of the \texttt{xillverTDE} model to the temperature of the continuum blackbody component. The disk may see a slightly modified blackbody due to strong gravity effects around the black hole, but freeing the blackbody temperature does not provide a significantly improved fit. Likewise, most of the reflected emission in the 1~keV line is likely coming from close to the black hole, and thus a single-temperature approximation for a multi-temperature disk blackbody should be reasonable. We freeze the iron abundance in \texttt{xillverTDE} to solar and the inclination at $i = 45^\circ$ as the fit is not sensitive to inclination without relativistic blurring included. Leaving the inclination free does not significantly improve the fit, although the inclination angle is constrained to be $\lesssim 50^\circ$. We fit for the ionization parameter, disk density, redshift, and normalization and report the results of our fits in Table \ref{tab:xillverTDE}.
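Schematically, and assuming that \texttt{xillverTDE} is distributed as an additive table model (the file name below is a placeholder, and the table parameter names depend on the FITS file, so this is an illustration rather than the exact setup), the Epoch 1 model can be built in PyXspec as:
\begin{verbatim}
from xspec import Fit, Model

# Assumes the Epoch 1 EPIC-pn spectrum is already loaded
m = Model("TBabs*zTBabs*(zbbody + zpowerlw"
          " + gsmooth(atable{xillverTDE.fits}))")

# Constant-velocity broadening: Gaussian width proportional to E,
# i.e. the gsmooth index is fixed at 1
m.gsmooth.Index.values = 1.0
m.gsmooth.Index.frozen = True

# The incident blackbody temperature of the table model is linked
# to the kT of the continuum zbbody component, the table redshift
# is left free to allow a blueshifted reflector, and the Fe
# abundance and inclination are frozen; schematically:
#   m.<table_component>.kTbb.link = "<zbbody kT parameter index>"

Fit.perform()
\end{verbatim}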
In the left panel of Figure \ref{fig:xillverTDE}, we show the results of using \texttt{xillverTDE} to model the 1~keV excess in the first \textit{XMM-Newton} observation. We find that a blueshifted reflection component with $z = -0.33$ provides a good fit to the data, giving $\chi^2_\nu/\nu = 1.30/216$ and significantly improving the residuals around 1~keV. We note that an equally good fit can be obtained using the \texttt{relconv} convolution model for relativistic blurring instead of Gaussian smoothing, but the fit requires a high iron abundance and a high inclination to achieve a significant blueshift on the \texttt{xillverTDE} model. In addition, the \texttt{relconv} fit requires an inner disk radius very close to the innermost stable circular orbit (ISCO), which is potentially inconsistent with an edge-on, geometrically thick super-Eddington inner accretion flow. In Section \ref{subsec:disc_phys_xillverTDE}, we discuss further the implications of this modeling and suggest that this could be reprocessed emission off of a geometrically thick accretion disk in a super-Eddington accretion flow. A similar scenario has been invoked for the emission lines in the high-resolution \textit{XMM-Newton} RGS spectrum of the rapidly accreting AGN 1H 1934-063 by \citet{Xu2022}.
\subsection{November 2019 XMM-Newton/NuSTAR Observation} \label{subsec:nov2019}
As shown in Figure \ref{fig:1keV_eqw}, the equivalent width of the 1~keV line significantly decreases when the source reaches its peak luminosity and does not strengthen when the luminosity starts to decrease. This can be attributed both to the increased continuum flux and to a slight decrease in the line flux. Despite this drop in equivalent width and line flux measured in the phenomenological modeling, in the November 2019 \textit{XMM-Newton} spectrum (Epoch 4, taken when the source was at its peak X-ray luminosity), there are some broad residuals around 1~keV and 2.5~keV in the simple continuum fit. We show the ratio plot for this spectrum relative to a simple cutoff power law and blackbody model in the top right panel of Figure \ref{fig:ep4}, which has notable residuals and provides a relatively poor fit to this high quality spectrum ($\chi^2_\nu / \nu = 1.45/736$).
The source is almost certainly super-Eddington during this period, given the high X-ray luminosity (see Figure \ref{fig:lumin}), and therefore likely still has a geometrically thick inner accretion flow. Thus, the blackbody-based reflection component from the Epoch 1 \textit{XMM-Newton} observation could still be present in the Epoch 4 spectrum. We test this by using the same \texttt{xillverTDE} model, with the blackbody temperatures linked, solar abundances, a free blueshift parameter, and a constant velocity broadening, as with the Epoch 1 spectrum. This model is able to reproduce the two broad residual components at roughly 1~keV and 2.5~keV and provides a significant improvement to the spectral fit with $\chi^2_\nu/\nu = 1.17/731$. In Figure \ref{fig:ep4}, we show the resulting ratio plot in the bottom right panel and the unfolded spectrum with the model components in the left panel. We also report the resulting fit parameters in Table \ref{tab:xillverTDE} and note that the density, blueshift, and broadening are comparable with the fit parameters from the Epoch 1 spectrum. The ionization parameter is larger than the Epoch 1 value, which is likely indicative of a higher ionizing luminosity and suggests that the change in the 1~keV feature seen in Figure \ref{fig:1keV_eqw} and discussed in Section \ref{subsec:disapp_1keV} may be an ionization effect. We note that, as with Epoch 1, a similarly good fit to the Epoch 4 spectrum can be obtained by blurring the \texttt{xillverTDE} model with \texttt{relconv}, with the same high inclination and high iron abundance required as in Epoch 1.
However, the Epoch 4 X-ray spectrum is now dominated by a relatively steep power law component, so there may be additional coronal reflection components that are not included in this modeling. Therefore, in addition to the reflection model with a blackbody irradiating spectrum, we also tried including a Gaussian-smoothed \texttt{xillverD} component, a standard reflection model with variable density and a power law ionizing spectrum \citep{Garcia2016}. We find that this model can also provide a good fit to the spectrum ($\chi^2_\nu/\nu = 1.15/729$), but that the addition of coronal reflection provides only marginal improvement in the spectral fit ($\Delta \chi^2 \approx 10$ for 2 fewer dof). Thus, we cannot rule out the possibility that the corona also produces reflection features, but our modeling does not statistically require the additional model component.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{1ES1927_ep4_mod.eps}
\caption{\textit{Left:} Unfolded spectrum from the November 2019 simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observation of 1ES~1927+654 fit to the velocity-broadened \texttt{xillverTDE} model. Data have been rebinned for visual purposes. \textit{Top Right:} Ratio plot to the simple cutoff power law and blackbody model used in phenomenological modeling, highlighting the need for additional components in the spectral model. \textit{Bottom Right:} Ratio plot for the reflection model, including the \texttt{xillverTDE} model from a blackbody irradiating spectrum. }
\label{fig:ep4}
\end{figure*}
\subsection{January 2021 XMM-Newton/NuSTAR Observation} \label{subsec:jan2021}
In Section \ref{subsec:comp2arch}, we showed with phenomenological modeling that 1ES~1927+654 had returned to a state very similar to its pre-outburst state from the 2011 \textit{XMM-Newton} observation \citep{Gallo2013}, with a strong soft excess and a power law component with $\Gamma \approx 2.4$. Given the success of fitting the Epoch 1 and 4 data with reflection models, here we explore whether relativistic reflection is sufficient to model the most recent simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observation (January 2021, Epoch 7). Despite the lack of a strong Fe K feature in the X-ray spectrum of 1ES~1927+654, relativistically blurred reflection is a popular physically-motivated model for the soft excess \citep[e.g.][]{Crummy2006,Fabian2009,Garcia2019}. Moreover, the similarity of the X-ray spectrum to its pre-outburst state with a strong power law component motivates the use of a standard reflection model from a power law ionizing spectrum and relativistic blurring from a thin accretion disk.
We model reflection using the high density model \texttt{relxillD}, a flavor of the \texttt{relxill} suite of models which includes relativistic blurring from a thin accretion disk \citep{Dauser2014,Garcia2014}. High density reflection models have been shown to produce a stronger soft excess due to free-free processes dominating heating and leading to an increase in the temperature of the top layer of the disk \citep{Garcia2016}. As with the previous reflection modeling, we leave the ionization parameter, disk density, and normalization as free fit parameters. We use relativistic blurring from a thin accretion disk instead of a constant velocity broadening, so we also fit for the inclination, emissivity index, and spin of the black hole. We fix the inner edge of the accretion disk to the ISCO, which scales with the spin of the black hole, and fix the outer radius to $R_\mathrm{out} = 400 R_g$. We find that the relativistic reflection modeling can provide a good fit to the soft excess with $\chi^2_\nu / \nu = 1.02/650$. We show the results of this modeling in Figure \ref{fig:ep7}, and report the fit values for the reflection model in Table \ref{tab:xillverTDE}.
The current \texttt{relxillD} models have a fixed cutoff energy of $E_\mathrm{cut} = 300$~keV, while our fits suggest a lower cutoff energy. We thus find a degeneracy between the strength of the reflection component near the Compton hump and the cutoff energy of the power law component. In Figure \ref{fig:ep7} and Table \ref{tab:xillverTDE}, we show the high density, low cutoff energy fit but note that this is degenerate with a lower density, higher cutoff energy fit ($n \approx 10^{16}$ cm$^{-3}$, $E_\mathrm{cut} \approx 50$ keV). To test this degeneracy, we also fit the data with \texttt{relxillDCp}, which uses a thermal Comptonization illuminating spectrum with a variable coronal temperature. With \texttt{relxillDCp} and a corresponding thermal Comptonization spectrum for the continuum (\texttt{nthcomp} model in XSPEC), we find a coronal temperature of $kT_e = 6.1_{-1.2}^{+1.6}$ keV and a relatively high density around $\log (n / \mathrm{cm}^{-3}) \approx 18$, which is consistent with the low cutoff energy that we report in Table \ref{tab:xillverTDE} for the \texttt{relxillD} modeling (with $E_\mathrm{cut} \approx 2-3\, kT_e$). Further hard X-ray observations will better constrain the coronal temperature and the nature of the potential Fe K line.
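As a quick consistency check, the measured coronal temperature implies
\[ E_\mathrm{cut} \approx (2-3)\,kT_e \approx 12-18~\mathrm{keV}, \]
which brackets the $E_\mathrm{cut} \approx 16$~keV obtained with \texttt{relxillD} in Table \ref{tab:xillverTDE}.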
The model suggests an edge-on geometry with a maximal inclination angle, which is somewhat in tension with the interpretation of the 1~keV line in the early X-ray spectra as emission off of a funnel geometry. However, inclination mismatches are not uncommon in X-ray spectral fitting, especially for rapidly accreting black holes under the assumption of thin accretion disks \citep[see, for example, Section 4.2 of][and references therein]{Mundo2020}, and the inner flow may be misaligned from the outer disk in the early X-ray observations (see Section \ref{subsec:disc_phys_xillverTDE}). Weak iron line features at $\sim6-7$~keV are not uncommon in sources with a strong soft excess \citep[e.g.][]{Mallick2018,Garcia2019,Xu2021} and are consistent with high density reflection models, which predict that the high density gas becomes hotter due to enhanced free-free heating, thus increasing the broadening of the line \citep{Garcia2016}. Likewise, the lack of a strong iron line feature in 1ES~1927+654 could also be explained by an extended corona that further blurs reflection from the inner accretion disk via a second Comptonization \citep[e.g.][]{Steiner2017}. This would only be relevant in a relatively low inclination system where the line of sight to the inner accretion flow intersects the corona, as suggested by our modeling of the broad 1~keV feature.
We also note that this relativistic reflection modeling is not the only possible model for the soft excess, the origin of which remains an open question in the AGN X-ray community. Another common model for the soft excess is a warm, optically thick corona \citep[e.g.][]{Mehdipour2011,Done2012}. Often these two models cannot be distinguished from one another by statistics alone \citep[e.g.][]{Garcia2019,Ghosh2021,Xu2021}, adding to the confusion surrounding this feature. We find that the soft excess can also be described by a warm corona model, using \texttt{nthcomp} with a low electron temperature for this component, which may alleviate the inclination mismatch, as the late-time high inclination constraints are strongly dependent on the reflection modeling of the soft excess. Likewise, we also test an ionized outflow model, which was found to be consistent with the pre-outburst soft X-ray spectrum of 1ES~1927+654 if two ionized absorbers were included \citep{Gallo2013}. We find that a single ionized absorber (modeled with \texttt{zxipcf}) with $N_H \approx 10^{24}$ cm$^{-2}$, $\log\xi \approx 2.8$, and $z \approx -0.3$ can improve the soft X-ray fit in the Epoch 7 observation of 1ES~1927+654 with $\chi^2_\nu/\nu = 1.09/653$. An additional absorber does not significantly improve the fit, but as with the phenomenological modeling presented in Section \ref{subsec:comp2arch}, we find significant improvement with an additional low temperature blackbody component ($\Delta\chi^2 = 48$ for 2 additional dof, $kT_\mathrm{bb} \approx 30$ eV). All three models for the soft excess produce similar fit statistics, as is commonly found in other sources. Distinguishing between soft excess models is beyond the scope of this work.
\begin{figure}[t!]
\centering
\includegraphics[width=8.6cm]{1ES1927_ep7_mod.eps}
\caption{\textit{Top:} Unfolded spectrum from the January 2021 simultaneous \textit{XMM-Newton}/\textit{NuSTAR} observation of 1ES~1927+654 fit to the cutoff power law and standard relativistic reflection model. \textit{Bottom:} Ratio plot for the same model. Data have been rebinned for visual purposes.}
\label{fig:ep7}
\end{figure}
\begin{deluxetable*}{c c c c c}
\caption{Fit Results for the Blurred Reflection Modeling with \texttt{xillverTDE}} \label{tab:xillverTDE}
\tablehead{\colhead{Model Component (XSPEC Model)} & \colhead{Parameter (Units)} & \colhead{Epoch 1\tablenotemark{a}} & \colhead{Epoch 4\tablenotemark{b}} & \colhead{Epoch 7\tablenotemark{c}}}
\startdata
Galactic absorption (\texttt{tbabs}) & $N_{H,\,\mathrm{gal}}$ ($10^{20}$ cm$^{-2}$) & 6.42\tablenotemark{f} & 6.42\tablenotemark{f} & 6.42\tablenotemark{f} \\
Intrinsic absorption (\texttt{ztbabs}) & $N_H$ ($10^{20}$ cm$^{-2}$) & $2.6_{-0.5}^{+0.6}$ & $7.7_{-0.7}^{+0.7}$ & $1.8_{-1.7}^{+1.4}$ \\
Blackbody (\texttt{zbb}) & $kT_\mathrm{bb}$ (eV) & $88_{-2}^{+2}$ & $205_{-2}^{+8}$ & -- \\
& $F_{0.3-10\,\mathrm{keV,\, bb}}$ ($10^{-11}$ erg cm$^{-2}$ s$^{-1}$) & $1.8_{-0.1}^{+0.1}$ & $5.8_{-0.9}^{+0.2}$ & -- \\
Cutoff Power Law (\texttt{zcutoffpl}) & $\Gamma$ & $3.4_{-0.7}^{+0.8}$ & $3.59_{-0.11}^{+0.12}$ & $2.30_{-0.10}^{+0.11}$ \\
& $E_\mathrm{cut}$ (keV) & --\tablenotemark{d} & $8.1_{-2.1}^{+4.5}$ & $16.1_{-6.7}^{+28.7}$ \\
& $F_{0.3-10\,\mathrm{keV,\, pl}}$ ($10^{-11}$ erg cm$^{-2}$ s$^{-1}$) & $0.06_{-0.03}^{+0.08}$ & $17.4_{-1.5}^{+1.2}$ & $0.67_{-0.04}^{+0.04}$ \\
Blackbody Reflection (\texttt{xillverTDE}) & $\log(\xi$/erg cm s$^{-1}$) & $2.48_{-0.14}^{+0.01}$ & $3.00_{-0.57}^{+0.03}$ & -- \\
& $\log (n$/cm$^{-3}$) & $18.9_{-0.6}^{+0.1}$ & $19.0_{-0.7}$\tablenotemark{e} & -- \\
& $i$ (deg) & $45$\tablenotemark{f} & $45$\tablenotemark{f} & -- \\
& $z$\tablenotemark{g} & $-0.33_{-0.01}^{+0.01}$ & $-0.33_{-0.01}^{+0.04}$ & -- \\
 & $F_{0.3-10\,\mathrm{keV,\, xillverTDE}}$ ($10^{-11}$ erg cm$^{-2}$ s$^{-1}$) & $0.45_{-0.03}^{+0.06}$ & $2.5_{-0.4}^{+0.6}$ & -- \\
Corona Reflection (\texttt{relxillD}) & $\log(\xi$/erg cm s$^{-1}$) & -- & -- & $1.24_{-0.09}^{+0.20}$ \\
& $\log (n$/cm$^{-3}$) & -- & -- & $18.3_{-0.4}^{+0.5}$ \\
 & $i$ (deg) & -- & -- & $87_{-3}$\tablenotemark{e} \\
& $q$ & -- & -- & $6.9_{-1.9}^{+3.1}$ \\
& $a$ & -- & -- & $0.93_{-0.10}^{+0.068}$ \\
& $F_{0.3-10\,\mathrm{keV,\, relxillD}}$ ($10^{-11}$ erg cm$^{-2}$ s$^{-1}$) & -- & -- & $0.25_{-0.07}^{+0.08}$ \\
Velocity Broadening (\texttt{gsmooth}) & $v$ $(c)$ & $0.12_{-0.01}^{+0.01}$ & $0.10_{-0.03}^{+0.09}$ & -- \\
Cross-Calibration (\texttt{const}) & $C_\mathrm{FPMA}$ & -- & $0.97_{-0.05}^{+0.05}$ & $1.04_{-0.06}^{+0.07}$ \\
& $C_\mathrm{FPMB}$ & -- & $1.01_{-0.05}^{+0.05}$ & $1.08_{-0.07}^{+0.07}$ \\
Fit Statistic & $\chi^2_\nu/\nu$ & 1.30/216 & 1.17/731 & 1.02/650 \\
\enddata
\tablenotetext{a}{June 2018, Model: \texttt{tbabs*ztbabs*(zbb+zpower+gsmooth(xillverTDE))}}
\tablenotetext{b}{November 2019, Model: \texttt{tbabs*ztbabs*(zbb+zcutoffpl+gsmooth(xillverTDE))}}
\tablenotetext{c}{January 2021, Model: \texttt{tbabs*ztbabs*(zcutoffpl+relxillD)}}
\tablenotetext{d}{No cutoff was included in this fit, given that the data only extend to 3~keV.}
\tablenotetext{e}{Parameter pegged at maximum value of the model.}
\tablenotetext{f}{Parameter was fixed when fitting.}
\tablenotetext{g}{Intrinsic blueshift, corrected for the cosmological redshift of the host galaxy.}
\end{deluxetable*}
\section{Discussion} \label{sec:discussion}
\subsection{A Relativistic Outflow Origin for the Broad 1~keV Line} \label{subsec:disc_phys_xillverTDE}
In Section \ref{sec:physmod}, we show that \texttt{xillverTDE}, a new flavor of the \texttt{xillver} reflection models with a single-temperature blackbody irradiating spectrum, can reproduce the broad 1~keV feature that is prominent in the early X-ray spectra of 1ES~1927+654. In this model, the 1~keV feature is primarily the result of velocity-broadened and blueshifted oxygen K-shell emission. The significant velocity broadening and blueshift required in our \texttt{xillverTDE} modeling suggest outflowing emission from close to the black hole. We show a schematic of how this emission could arise from the base of an outflow in a super-Eddington, geometrically thick inner accretion flow in the right panel of Figure \ref{fig:xillverTDE}. Simulations of super-Eddington accretion disks have shown that significant radiation pressure in the geometrically thick accretion disks causes optically thick winds to be launched from the disk \citep[e.g.][]{Ohsuga2009,Jiang2014,McKinney2014}. Likewise, some ULXs and TDEs, both of which are thought to be radiating close to or above the Eddington limit, have been shown to launch fast outflows \citep[e.g.][]{Middleton2014,Pinto2016,Walton2016,Kara2018,Kosec2018a,Kosec2018b,Pinto2021}. Thus, a geometrically thick accretion disk launching significant outflows is entirely plausible in 1ES~1927+654 and is supported by the modeling with \texttt{xillverTDE}.
Additionally, \citet{Ricci2020} first suggested that the outburst in 1ES~1927+654 was the result of a TDE in a pre-existing accretion disk. In this picture, the stellar debris hits the disk and produces shocks that cause the gas in the inner accretion flow to lose angular momentum and fall onto the black hole. This depletes the inner accretion flow and can thus cut off the energy supply to the corona. Hydrodynamic simulations of TDEs in AGN suggest that this depletion of the inner accretion flow happens in a super-Eddington manner, with a thick inner accretion disk formed \citep{Chan2019}. This optically and geometrically thick inner accretion disk could then easily irradiate itself, producing the reflected emission from a blackbody ionizing spectrum that we see in the early X-ray spectra as a broad 1~keV excess. In order to see this emission, however, we must have a sight line to the inner accretion flow. We suspect that to see the reflected emission off of this geometrically thick outflow we would require a relatively face-on accretion geometry, although the exact constraints on the inclination angle depend on the scale height $H/R$ of the disk, which is dependent on the Eddington ratio and hard to determine precisely.
Another possible way to obtain a sight line to the inner accretion flow is with a warped inner accretion disk. In TDEs, the misalignment between the angular momentum of the tidally disrupted star and the black hole, in combination with Lense-Thirring precession, has been shown theoretically to lead to warped accretion disks \citep[e.g.][]{Stone2012,Franchini2016}. These warped accretion disk models have been suggested to explain the quasi-periodic nature of the early X-ray light curve of the jetted TDE Swift J1644 \citep[e.g.][]{Reis2012,Lei2013}. Similar physics could be at play in 1ES~1927+654 with a TDE that is misaligned with an existing accretion disk, leading to an inner accretion flow that is warped relative to the outer accretion disk. This misalignment would persist until angular momentum transport aligned the disk and the black hole spin axis. Thus, this could allow the outer accretion disk to have a higher inclination angle, as suggested by reflection modeling of the pre-outburst and latest observations, while still allowing a view into the inner accretion flow, where the broad 1~keV line originates, during the early observations. The alignment timescale depends on the properties of the system, but for a $10^6 \, M_\odot$ black hole, solid body precession of the inner accretion flow could persist for roughly one year \citep[e.g.][]{Stone2012,Franchini2016}, similar to the time over which 1ES~1927+654 exhibits rapid variability.
The success of \texttt{xillverTDE} goes beyond just the first \textit{XMM-Newton} observation. As shown in Figure \ref{fig:ep4}, the fit to the peak X-ray luminosity spectrum is also greatly improved by including reflection of the blackbody emission. This could suggest that blackbody reflection is a key component of many super-Eddington accretors and super-soft X-ray sources where the thermal component dominates the X-ray continuum. Likewise, a similar scenario of soft reflection at the base of a wind has been invoked to explain emission features in the narrow line Seyfert 1 1H 1934-063, which is accreting near or above the Eddington limit \citep{Xu2022}. In addition to rapidly accreting AGN, TDEs around low mass supermassive black holes ($M \lesssim 10^7\, M_\odot$) are known to have super-Eddington fallback rates and very soft, thermal X-ray spectra, making them a perfect probe of this model. Broad soft X-ray features similar to the 1~keV line in 1ES~1927+654 are indeed present in other TDEs \citep[e.g.][]{Kara2018}, and work to test this model on other X-ray detected TDEs with broad lines in their X-ray spectra is currently in progress (Masterson et al. in prep).
\subsection{Limitations of the Reflection Modeling} \label{subsec:disc_limxillver}
Using \texttt{xillverTDE} to model the 1~keV line in the Epoch 1 and 4 \textit{XMM-Newton}/\textit{NuSTAR} spectra requires a high density ($n \gtrsim 10^{18}$ cm$^{-3}$) to produce significant oxygen K-shell features. High densities have been shown to produce significantly different reflection features compared to lower density models, especially in the soft X-ray band \citep{Garcia2016}. This is primarily due to increased temperatures in the illuminated slab arising from an increase in the bremsstrahlung emissivity, and it has been an important improvement for modeling the soft excess with relativistic reflection. The expected densities in the inner, radiation pressure dominated region of the accretion flow scale inversely with the mass of the black hole \citep[e.g.][]{Svensson1994}, and hence, high densities like those found in our fits with \texttt{xillverTDE} are expected around low mass supermassive black holes. However, with increased mass accretion rates and a geometrically thick inner accretion flow, the viscous timescales are short; combined with a high density, this implies an extremely high mass accretion rate.
The illuminating spectrum in \texttt{xillverTDE} is extremely soft compared to standard coronal reflection models, making it more difficult to ionize and excite the gas around the black hole. This can be enhanced with an increase in the temperature of the gas, which occurs with increased density \citep[see][]{Garcia2016}. Thus, a high density can help create the right conditions for oxygen K-shell transitions, which produce the significant soft spectral features we see with \texttt{xillverTDE}. However, in the inner accretion flow, a thick disk with $H/R \sim 1$, as shown in the right panel of Figure \ref{fig:xillverTDE}, would have a viscous timescale, scaling as $(H/R)^{-2}$, of order hours to days, which is much shorter than the time over which the feature is observed in the spectrum ($\sim$ 1-2 years).
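As a rough illustration, assuming a standard $\alpha$-viscosity prescription (an assumption we introduce here only for the estimate), the viscous inflow timescale at radius $R$ is
\[ t_\mathrm{visc} \sim \frac{1}{\alpha\,\Omega_K(R)} \left(\frac{H}{R}\right)^{-2} \approx 0.5 \left(\frac{\alpha}{0.1}\right)^{-1} \left(\frac{H/R}{1}\right)^{-2} \left(\frac{R}{10\,R_g}\right)^{3/2} \left(\frac{M_\mathrm{BH}}{10^6\,M_\odot}\right)~\mathrm{hr}, \]
consistent with the hours-to-days estimate quoted above.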
One possibility to explain the high densities required by \texttt{xillverTDE}, while maintaining the picture of a geometrically thick inner accretion flow, is that the emission comes from the base of the outflow. The base of the outflow is expected to be of higher density than the gas in the wind itself, and this also explains the blueshift and symmetry required by the model. Additionally, the observed density may be increased if the outflow is clumpy, which could also potentially explain some of the rapid variability exhibited at early times when the 1~keV feature is strongest.
1~keV features have been seen in other highly accreting compact objects, including ULXs, neutron star X-ray binaries, and high-Eddington AGN, and have been successfully modeled with a variety of radiative processes, including collisionally ionized emission, photoionized emission, and soft reflection \citep[e.g.][]{Pinto2021,Ludlam2022,Xu2022}. We therefore conclude that our reflection-based modeling with \texttt{xillverTDE} is an accurate description of the data, but ultimately may not be unique. Further deep X-ray observations of super-soft, super-Eddington sources are necessary to test these soft reflection models.
\subsection{Return to Typical AGN Corona} \label{subsec:disc_corona}
1ES~1927+654 is the first AGN in which we have witnessed the destruction of the corona and the disappearance of the power law X-ray spectral component. As the first source to undergo such drastic X-ray evolution, the extensive X-ray monitoring presented here provides a unique opportunity to study how the corona is formed and powered.
When the power law component returns to the X-ray spectrum, the photon index is $\Gamma \gtrsim 3$, which is much higher than what is seen in most AGN \citep[$\Gamma \approx 1.8-2.0$; e.g.][]{Nandra1994,Piconcelli2005,Winter2009,Ricci2017} and in the pre-outburst spectrum of 1ES~1927+654 \citep[$\Gamma \approx 2.4$;][]{Gallo2013}. The photon index remains roughly constant over the majority of the X-ray rise and plateau, despite rapid order-of-magnitude changes in the X-ray flux. The constancy of the photon index suggests a steady balance between heating and cooling in the corona, albeit one more dominated by cooling than in a standard AGN corona, given the abnormally high photon indices. At late times, the X-ray luminosity decreases steadily, and this is matched by a steadily decreasing photon index. This is more typical of what is seen in AGN X-ray observations, where AGN X-ray spectra appear softer when they are more luminous \citep[e.g.][]{Shemmer2006,Sobolewska2009}.
The inclusion of \textit{XMM-Newton}/\textit{NuSTAR} observations in our analysis allows us to track not only the photon index evolution, but also the evolution of the cutoff energy of the corona, a property which was inaccessible with \textit{NICER} given its low effective area at energies $\gtrsim 2$~keV and the softness of the spectrum of 1ES~1927+654. The cutoff energy of the X-ray spectrum of 1ES~1927+654 measured with \textit{XMM-Newton}/\textit{NuSTAR} during the X-ray rise and the plateau at peak luminosity is around $E_\mathrm{cut} \approx 3-8$~keV, which is extremely low compared to typical AGN values \citep[$E_\mathrm{cut} \approx 200$~keV; e.g.][]{Ricci2017}. Previous work has suggested that sources accreting at high Eddington ratios should have lower temperature coronae due to enhanced Compton cooling from the increased seed photon flux at high mass accretion rates \citep[e.g.][]{Pounds1995}. Indeed, this trend has been observed in large surveys of AGN \citep[e.g.][]{Ricci2018}, and many highly accreting sources have been shown to have steep X-ray spectra and low temperature coronae \citep[$kT_e \approx 20$~keV; e.g.][]{Kara2017}. However, a cutoff energy as low as that measured in 1ES~1927+654 is unheard of, suggesting that we are seeing the system in a unique evolutionary phase.
In addition, the coronal temperature in 1ES~1927+654 exhibits a cooler-when-brighter behavior, with the temperature increasing as the luminosity drops in the latter half of the observations, similar to the behavior observed in two other highly accreting systems \citep[e.g. Ark 564, Swift J2127.4+5654;][]{Barua2020,Kang2021}. However, other systems have recently been shown to exhibit the opposite behavior, with the temperature of the corona increasing as the sources brighten, despite an overall softer-when-brighter behavior observed in $\Gamma$ \citep[e.g.][]{Keek2016,Zhang2018}. This hotter-when-brighter coronal behavior has been suggested to be the result of changes in the coronal geometry, potentially related to an inflated corona during the X-ray bright phases \citep[e.g.][]{Wu2020}. The difference in the trend of coronal temperature with brightness has been suggested to depend on the nature of the corona, namely whether the corona is close to the pair-dominated regime in compactness-temperature ($l-\Theta$) space \citep{Kang2021}. Based on our observations of 1ES~1927+654, the behavior of the corona as it is being recreated is more similar to those rapidly accreting systems that are not dominated by pair production. This fits with the idea that an increase in the flux of seed photons can more effectively cool the corona, leading to a cooler observed temperature when the source is brighter.
By January 2021, the cutoff energy is $E_\mathrm{cut} \gtrsim 15$~keV, which, although poorly constrained given degeneracies and limitations of the reflection modeling, is lower than in most AGN, but comparable to some cool coronae in nearby AGN. Unfortunately, no hard X-ray observations were taken prior to the optical outburst in 1ES~1927+654. Hence, determining the difference in coronal temperatures between the pre-outburst and latest X-ray observations is impossible. However, the similarity of the photon indices and spectral shape suggests that the corona is back to near its pre-outburst state. Further observations with deeper hard X-ray coverage will be necessary to track the final state of the corona in 1ES~1927+654, but the current data suggest that the corona is approaching the more typical temperatures of AGN coronae.
This rapid return to an AGN-like spectrum suggests that the timescale for which the corona can be reheated is rather fast ($\sim$3-4 years). Due to the lack of X-ray observations between May 2011 and June 2018, we cannot constrain an exact timescale for recreation of the corona, but the optical/UV transient beginning in December 2017 suggests a catastrophic event that destroyed (or rapidly cooled) the corona and a timescale for corona reformation (or reheating) on the order of 3-4 years.
\subsection{Comparison to Other Nuclear Transients} \label{subsec:disc_compare}
1ES~1927+654 is a unique source, with X-ray properties that we have never observed before. However, some properties of 1ES~1927+654 resemble those of a number of other nuclear transients, such as CLAGN, TDEs, and QPEs. Below we discuss some comparisons to these sources and highlight the differences that make 1ES~1927+654 a unique rapid accretion event.
\subsubsection{CLAGN and TDEs}
1ES~1927+654 clearly satisfied the requirements for undergoing a so-called ``changing-look'' event, with the formation of broad optical emission lines \citep{Trakhtenbrot2019}. Despite this clear association with CLAGN at optical wavelengths, the X-ray observations clearly suggest some catastrophic change to the accretion flow that is not witnessed to such an extreme degree in other CLAGN \citep[e.g.][]{Parker2016,Parker2019,Wang2020a,Guolo2021}. \citet{Ricci2020} suggested that the first half of the X-ray observations of 1ES~1927+654 was consistent with a TDE in an existing AGN disk, based on the $t^{-5/3}$ dependence of the optical/UV flare and the extremely soft, nearly thermal X-ray spectrum. Similar claims of TDEs in existing AGN disks have been made recently for a handful of other CLAGN \citep[e.g.][]{Merloni2015,Blanchard2017,Liu2020}, although none show as dramatic an evolution in their X-ray spectra as 1ES~1927+654. One interesting analogy is to the transient PS16dtm, which \citet{Blanchard2017} suggested could be the result of an edge-on TDE that obscures the X-ray emitting regions of a relatively face-on pre-existing accretion disk in an AGN. If the outburst in 1ES~1927+654 is also the result of a TDE in a pre-existing AGN disk, then the stark contrast in X-ray emission with PS16dtm could be the result of an optimal viewing angle to the inner accretion flow in 1ES~1927+654 (although it is possible that the lack of X-ray emission observed from PS16dtm shortly after the transient event is the result of a dip in X-ray flux similar to that seen in the early observations of 1ES~1927+654). In addition to the spectral features and the optical/UV light curve, the peculiar disappearance of the power law component of the X-ray spectrum in 1ES~1927+654 is potentially consistent with recent simulations of TDEs in pre-existing AGN disks, which have shown that the inner accretion flow is rapidly depleted and would hence cut off the energy supply to the corona \citep{Chan2019}.
It is possible that the extreme changing-look event in 1ES~1927+654 was not driven by a TDE. Others have suggested that a magnetic flux inversion may have occurred in the accretion disk of 1ES~1927+654, whereby an advection event allowed magnetic flux with opposite polarity to propagate inward in the disk \citep{Scepi2021}. The arguments supporting this idea include a delayed X-ray dip relative to the UV peak, a corresponding dip in the radio, and the return to the pre-outburst state following this event \citep{Scepi2021,Laha2022}. This model is thus in line with the interpretation put forth in this work if the anomalous flux inversion event creates a geometrically thick inner accretion flow. We would also expect a geometrically thick inner accretion flow if a TDE occurred in an existing AGN disk, and likewise we would expect that, as in the magnetic flux inversion model, the system would return to its pre-outburst state after a TDE as well. The timescale for the return to a pre-outburst state would be set by a rather fast viscous timescale in the TDE case, given the high mass accretion rate and the suspected large scale height of the inner disk. We note that both the TDE and magnetic flux inversion models require a change in mass accretion rate and the destruction and recreation of the standard X-ray corona in 1ES~1927+654, although with different mechanisms triggering these changes.
\subsubsection{Quasi-Periodic Eruptions}
Additionally, the early X-ray observations of 1ES~1927+654, during the time when the light curve was rapidly variable, resemble a newly discovered class of X-ray transients called quasi-periodic eruptions \citep[QPEs;][]{Miniutti2019,Giustini2020,Arcodia2021,Chakraborty2021}. QPEs were first discovered in 2019, and currently only five are known. Hence, the mechanisms driving QPEs are not well understood, and there are many theories to explain their strange behavior, including interacting extreme mass ratio inspirals \citep[EMRIs; e.g.][]{King2020,Metzger2022}, self-lensing from binary black holes \citep{Ingram2021}, star-disk collisions \citep{Xian2021}, and tearing of warped accretion disks \citep[e.g.][]{Raj2021}.
The similarities between 1ES~1927+654 and QPEs include the super-soft X-ray spectra, rapid variability on short timescales, and harder-when-brighter behavior. Phase-resolved spectral analysis of QPEs revealed that their rapid variability is likely an intrinsic change in the accretion properties and not simply due to changes in obscuration. Similarly, in 1ES~1927+654, obscuration does not seem to be responsible for the rapid variability in Phase 1, as the column density of the neutral absorption in the host galaxy is always quite low ($N_H \lesssim 10^{21}$ cm$^{-2}$). However, the early X-ray light curves of 1ES~1927+654 do not exactly resemble the variable nature of QPEs, which exhibit short-duration peaks followed by periods of near quiescence with extremely low variability. In addition, the high flux spectra in QPEs often require a single additional component to account for the difference in their spectra during the flares, which, due to the similarity of this component to the typical AGN soft excess, led \citet{Miniutti2019} to argue that the QPE flares could be indicative of the formation of the soft excess in AGN. This is not the case for 1ES~1927+654, which we tested using rate-resolved spectroscopy during the highly variable \textit{XMM-Newton} observations (Epochs 2 and 3). Instead, 1ES~1927+654 requires an overall change in the spectral flux and shape, suggesting an intrinsic change in the mass accretion rate. Although 1ES~1927+654 and QPEs are not exactly the same in nature, their similarities are intriguing and motivate further follow-up of super-soft transient X-ray sources.
\section{Conclusions} \label{sec:conclusion}
1ES~1927+654 is one of the most peculiar X-ray transients discovered to date. In this paper, we have shown the X-ray spectral evolution beginning shortly after the optical/UV outburst in 2018 to June 2021 using extensive observations with \textit{NICER}, \textit{XMM-Newton}, and \textit{NuSTAR}. Our main findings are summarized below:
\begin{enumerate}[noitemsep,leftmargin=*]
\item Over the course of the past three years, the X-ray spectrum has transitioned from extremely soft and nearly thermal in the first observations back to its pre-outburst state by January 2021. The late X-ray spectra are fit well by a significant power law component and a soft excess, as in the pre-outburst spectrum.
\item We identify three distinct phases during the three-year evolution (see Figure \ref{fig:lumin}). During Phase 1, the source exhibits rapid variability and deviates significantly from standard AGN behavior, including exhibiting harder-when-brighter behavior, a weak power law component, and a broad 1~keV line. Then, in Phase 2, 1ES~1927+654 enters a more stable super-Eddington phase, where the variability drops and the X-ray luminosity plateaus near the Eddington limit for a $10^6 \, M_\odot$ black hole. Finally, in Phase 3, 1ES~1927+654 begins to return to its pre-outburst state, as the X-ray luminosity declines in a steady manner and the spectrum hardens.
\item We track the evolution of the power law component and find that the photon index remains relatively constant despite extreme luminosity changes on short timescales. When the X-ray luminosity starts to drop, the photon index begins to decrease and the spectrum hardens, following the typical softer-when-brighter behavior observed in AGN.
\item The temperature of the blackbody undergoes significant evolution over the observing period, unlike what is commonly witnessed in TDEs. We show that the blackbody deviates from the expected $L \propto T^4$ for a standard thin accretion disk during the first $\sim 1$ year of observations (i.e. Phase 1 in Figure \ref{fig:lumin}), which implies that the inner accretion flow is super-Eddington. Likewise, at late times, the blackbody has a roughly constant temperature, and this lack of temperature evolution with luminosity is consistent with observations of the soft excess in AGN.
\item We study the evolution of the 1~keV line that was a prominent feature in the early X-ray spectra. Using the \textit{NICER} observations, we find that the equivalent width of the 1~keV feature decreases as the source approaches peak luminosity, suggesting that either the line has disappeared or has broadened significantly into a component indistinguishable from the continuum.
\item We apply a new reflection model from a single-temperature blackbody ionizing spectrum, \texttt{xillverTDE}, to explain the 1~keV feature and find that it can fit the first \textit{XMM-Newton} spectrum well if the emission is blueshifted. We propose that this could be the result of reflected emission off of an optically thick outflow from a geometrically thick, super-Eddington inner accretion flow (see right panel of Figure \ref{fig:xillverTDE}).
\item The \texttt{xillverTDE} model can also fit additional curvature in the peak X-ray luminosity spectrum (see Figure \ref{fig:ep4}). We also show that standard relativistic reflection models can fit the soft excess in the latest \textit{XMM-Newton}/\textit{NuSTAR} observation, without the need for \texttt{xillverTDE}, once the source has returned to its pre-outburst state.
\end{enumerate}
Extensive X-ray monitoring of the extreme nuclear transient in 1ES~1927+654 has provided unique insight into the nature of the corona in AGN, changing-look AGN, and super-Eddington accretors. Thanks to many dedicated optical transient facilities, the number of intriguing transients is increasing rapidly and can be expected to continue growing in the coming years. Prompt X-ray follow-up of TDE-like optical light curves or extreme optical flares in existing AGN will provide valuable information about the nature of such transients and allow for more detailed studies of the population of nuclear transients. These sources are crucial probes of the inner accretion flow, disk instabilities, AGN corona formation, and many more fascinating open areas of research in accretion physics.
\medskip
\noindent We thank the anonymous referee for helpful comments that improved this manuscript. MM thanks Ruancun Li, Christos Panagiotou, and Brenna Mockler for their useful comments and discussions. EK thanks Daniel Proga and John Raymond for insightful discussions. MM and EK acknowledge support from NASA Grant 80NSSC21K0661. MM, EK, BT, and IA acknowledge support from the MISTI Global Seed Funds and the MIT-Israel Zuckerman STEM Fund. CR acknowledges support from the Fondecyt Iniciacion grant 11190831 and ANID BASAL project FB210003. RR acknowledges support from NASA Grant 80NSSC19K1287. IA is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 852097), from the Israel Science Foundation (grant number 2752/19), from the United States - Israel Binational Science Foundation (BSF), and from the Israeli Council for Higher Education Alon Fellowship.
\facilities{\textit{XMM-Newton}, \textit{NuSTAR}, \textit{NICER}}
\software{XSPEC \citep{Arnaud1996},
Astropy \citep{AstropyCollaboration2013, AstropyCollaboration2018},
\texttt{xillver} \citep{Garcia2010,Garcia2013},
\texttt{relxill} \citep[v1.4.0;][]{Garcia2014,Dauser2014}}
\section{Introduction}
Neural machine translation (NMT) has achieved impressive translation quality, thanks to the introduction of novel deep neural network (DNN) architectures such as the encoder-decoder model \cite{cho-etal-2014-learning,Sutskever2014Sequence} and self-attentional networks like the Transformer \cite{Vaswani2017Attention}. State-of-the-art NMT systems are now comparable with human translators in sentence-level performance.
However, a number of issues remain in document-level translation \cite{laubli-etal-2018-machine}. These include pronoun resolution across sentences \cite{guillou-etal-2018-pronoun}, which requires cross-sentential context. To incorporate such document-level contextual information, several methods for context-aware NMT have recently been proposed. Many of these works have focused on introducing new model architectures, such as multi-encoder models \cite{voita-etal-2018-context}, to encompass contextual text in the source language. These works have shown significant improvement in addressing discourse phenomena such as the anaphora resolution mentioned above, as well as moderate improvements in overall translation quality.
Despite some promising results, most existing works have trained the model by minimizing cross-entropy loss, so the model tends to exploit contextual information only implicitly, e.g., as a form of regularization \cite{kim-etal-2019-document, li-etal-2020-multi-encoder}. Data augmentation for context-aware NMT is also an important issue, although recent works have focused mainly on back-translation \cite{huo-etal-2020-diving}.
In this paper, we propose Coreference-based Contrastive Learning for context-aware NMT (CorefCL), a novel data augmentation and contrastive learning scheme leveraging coreference information. Cross-sentential coreference between the source sentence and its contextual sentences can be a good source of training signal for context-aware NMT, since it occurs when one or more expressions refer to the same entity and thus reflects dependencies between the source and contextual sentences.
CorefCL starts by automatically annotating coreference between the source and contextual sentences. Then, the referred mentions in the contextual sentences are corrupted by removing and/or replacing tokens to generate contrastive examples.
With those contrastive examples, we introduce a contrastive learning scheme equipped with a max-margin loss which encourages the model to discriminate between the original examples and the contrastive ones.
By doing so, CorefCL makes the model more sensitive to cross-sentential contextual information.
We experimented with CorefCL on three English-German corpora and one English-Korean document-level corpus, including WMT, IWSLT TED talk, and OpenSubtitles'18 English-German subtitles translation, and a web-crawled English-Korean subtitles translation. In all translation tasks, CorefCL consistently improves overall BLEU over baseline models without CorefCL. On experiments with three common context-aware model settings, we show that improvements by CorefCL are also model-agnostic. Finally, we show that the proposed method significantly improved the performance on ContraPro \cite{muller-etal-2018-large}, an English-German contrastive coreference benchmark.
\section{Related Works}
\subsection{Context-aware NMT}
Context-aware machine translation has been vigorously studied to exploit the crucial context information in surrounding sentences.
Recent works have shown that contextual information can help the model generate not only more consistent but also more accurate translations \cite{smith2017integrating,voita-etal-2018-context,muller-etal-2018-large,kim-etal-2019-document}.
In particular, \citet{voita-etal-2018-context} introduced a context-aware Transformer model which is able to induce anaphora relations, \citet{miculicich-etal-2018-document} showed that a model using cross-sentential contextual information significantly outperforms sentence-level models in document-level translation tasks, and \citet{Yun-Hwang2020Improving} showed that context-aware models achieve the best performance especially in spoken language translation tasks, where essential information tends to be spread across multiple sentences.
The simplest method for context-aware machine translation is to concatenate all surrounding sentences and treat the concatenated sequence as a single sentence \cite{tiedemann-scherrer-2017-neural}.
Although the concatenation strategy boosted Transformer architectures in multiple tasks \cite{tiedemann-scherrer-2017-neural,voita-etal-2018-context,Yun-Hwang2020Improving}, it lags in efficiency, as the Transformer architecture has a limited ability to model long-range dependencies \cite{tang-etal-2018-self}.
To improve the efficiency, an additional encoder module is introduced to encode only the context sentences \cite{voita-etal-2018-context,jean2017does}.
Additionally, hierarchical structures also have been introduced because the context sentences do not have the same significance as the input sentences \cite{miculicich-etal-2018-document,Yun-Hwang2020Improving}.
\subsection{Coreference and NMT}
The difference in coreference expressions among languages \cite{zinsmeister2017abstract, lapshinova-koltunski-etal-2020-coreference} poses a challenge to MT systems for pronoun translation. Several recent works have attempted to incorporate coreference information \cite{ohtani-etal-2019-context}. The closest work to ours is \cite{stojanovski-fraser-2018-coreference}, which also adds noise when creating a coreference-augmented dataset, while we do not add oracle coreference information directly to the training data.
\subsection{Data augmentation for NMT}
One of the most common methods for data augmentation in NMT is back-translation, which generates pseudo-parallel data from monolingual corpora using intermediate NMT models \cite{sennrich-etal-2016-improving}. Generally, back-translation is conducted at the sentence level; however, several works have proposed document-level back-translation \cite{sugiyama-yoshinaga-2019-data, huo-etal-2020-diving}.
On the other hand, sentence corruption by removing or replacing word(s) has also been widely used for improving model performance and robustness \cite{lample-etal-2018-phrase, voita-etal-2019-good}. Inspired by these works, we choose sentence corruption for contrastive learning.
\subsection{Contrastive Learning}
Contrastive learning aims to learn a representation by contrasting positive and negative (contrastive) examples. It has been successful in various machine learning fields, including computer vision \cite{chen2020simple} and natural language processing \cite{mikolov2013distributed, wu2020clear, Lee2021contrastive}.
Recently, several approaches to contrastive learning for NMT have also been studied. \citet{yang-etal-2019-reducing} proposed strategies for generating word-omitted contrastive examples and leveraging contrastive learning to reduce word omission errors in NMT. \citet{pan-etal-2021-contrastive} applied contrastive learning to multilingual MT and employed data augmentation to obtain both the positive and negative training examples.
While these works have been conducted on sentence-level NMT settings, we focus on extending contrastive learning on context-aware NMT.
\section{Context-aware NMT models}
In this section, we briefly review context-aware NMT methods and describe our baseline models, which are also commonly adopted in recent works.
Generally, a sentence-level (context-agnostic) NMT model takes an input sentence in a source language and returns an output sentence in a target language. On the other hand, a context-aware NMT model is designed to handle surrounding contextual sentences of source and/or target sentences. We focus on leveraging the contextual sentences of the source language.
Throughout this work, we consider Transformer \cite{Vaswani2017Attention} as a base model architecture by following the majority of the recent works on context-aware NMT. Transformer consists of a stack of self-attentional layers in which a self-attention module is followed by a feed-forward module for each layer. Here we list four Transformer-based configurations that we used in the experiments:
\begin{itemize}
\item \textbf{sent-level}:
As a baseline, we have experimented with the basic Transformer model which does not use any contextual sentences.
\item \textbf{concat}:
This is a straightforward approach to incorporating contextual sentences without modifying the Transformer model \cite{tiedemann-scherrer-2017-neural}. It concatenates all contextual sentences and the input sentence, with special tokens inserted between sentences.
\item \textbf{multi-enc}:
This has an extra encoder for encoding contextual sentences separately. We follow the model introduced in \cite{voita-etal-2018-context}, which obtains a hidden representation of the contextual sentences with a weight-shared Transformer encoder. The model combines the encoded source and context representations using a source-to-context attention mechanism and a gated summation (a minimal sketch of this combination step is given after this list).
\item \textbf{multi-enc-hier}:
To represent multiple contextual sentences effectively, hierarchical encoders for contextual sentences have been proposed \cite{miculicich-etal-2018-document,Yun-Hwang2020Improving}. In this configuration, the context representation is calculated at the token level first and then at the sentence level. We experimented with the model of \cite{Yun-Hwang2020Improving} in this paper.
\end{itemize}
All the model structures are described in Figure \ref{fig:models}.
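To make the \textbf{multi-enc} combination step concrete, the following PyTorch sketch shows one way the source-to-context attention and gated summation can be implemented; the module structure, tensor shapes, and the exact form of the gate are our illustrative assumptions rather than the precise formulation of \cite{voita-etal-2018-context}.
\begin{verbatim}
# Sketch of the multi-enc fusion: the encoded source attends over the
# encoded context, and a learned sigmoid gate mixes the two streams.
import torch
import torch.nn as nn

class GatedContextFusion(nn.Module):
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.src2ctx_attn = nn.MultiheadAttention(
            d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, src_enc, ctx_enc, ctx_pad_mask=None):
        # src_enc: (batch, src_len, d), ctx_enc: (batch, ctx_len, d)
        ctx_info, _ = self.src2ctx_attn(
            query=src_enc, key=ctx_enc, value=ctx_enc,
            key_padding_mask=ctx_pad_mask)
        g = torch.sigmoid(
            self.gate(torch.cat([src_enc, ctx_info], dim=-1)))
        return g * src_enc + (1.0 - g) * ctx_info  # gated summation
\end{verbatim}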
\begin{figure}[ht]
\resizebox{0.95\columnwidth}{!}{\includegraphics{fig_models-crop.pdf}}
\caption{The structure of compared context-aware NMT models.}
\label{fig:models}
\end{figure}
\section{Our Method: CorefCL}
In this section, we explain the main idea of CorefCL, a data augmentation and contrastive learning scheme that leverages coreference between the source and contextual sentences.
\subsection{Data Augmentation Using Coreference}
Generally, contrastive learning encourages a model to discriminate between ground-truth and contrastive (negative) examples. In existing works, a number of approaches have been studied for obtaining contrastive examples:
\begin{itemize}
\item Corrupting the sentence by randomly removing or replacing one or more tokens \cite{yang-etal-2019-reducing}.
\item Choosing an irrelevant example from the batch or dataset \cite{pan-etal-2021-contrastive}.
\item Perturbing the representation space, usually using the output vectors of the encoder or decoder \cite{Lee2021contrastive}.
\end{itemize}
\begin{figure*}[t]
\begin{center}
\resizebox{0.65\textwidth}{!}{\includegraphics{fig_augmentation-crop.pdf}}
\caption{Data augmentation process of CorefCL.}
\label{fig:augmentation}
\end{center}
\end{figure*}
CorefCL takes an approach similar to the first one, based on sentence corruption. However, unlike previous works that modify the source sentence, CorefCL modifies the contextual sentences to form contrastive examples. Specifically, we corrupt cross-sentential coreference mentions which occur between the source and its contextual sentences. This is based on the intuition that coreference is one of the core components of coherent translation.
More formally, steps to forming contrastive examples in CorefCL are as follows (see also Figure \ref{fig:augmentation}):
\begin{enumerate}
\item Annotate the source documents automatically. We use NeuralCoref\footnote{https://github.com/huggingface/neuralcoref} to identify coreference mentions between the source and its preceding sentences, which serve as contextual sentences
\item Filter the examples with cross-sentential coreference chain(s) between the source and contextual sentences. Around 20 to 30\% of the training corpus is annotated in this way. See Section \ref{exp:datasets} for details
\item For each coreference chain, mask every word in the antecedents with a special token. We also keep the original examples for training
\item Masked words are replaced randomly with other words in vocabulary (\textit{word replacement}), or omitted (\textit{word omission})
\end{enumerate}
In the experiments, we use both corruption strategies. Precisely, the masked words are removed with a probability of 0.5, or randomly replaced otherwise. We found that this method is more effective than using only one of the two corruption strategies. Please refer to the ablation study in Section \ref{exp:analysis} for more details.
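The corruption step can be summarized by the following Python sketch; the function and variable names are ours, and tokenization details are simplified for illustration.
\begin{verbatim}
# Sketch of contrastive-example generation in CorefCL: tokens inside
# antecedent mention spans in the context are removed with probability
# p_omit, or replaced with a random vocabulary token otherwise.
import random

def corrupt_context(ctx_tokens, mention_spans, vocab, p_omit=0.5):
    # ctx_tokens: tokens of the contextual sentences
    # mention_spans: (start, end) antecedent spans, end exclusive
    masked = set()
    for start, end in mention_spans:
        masked.update(range(start, end))
    corrupted = []
    for i, tok in enumerate(ctx_tokens):
        if i in masked:
            if random.random() < p_omit:
                continue                             # word omission
            corrupted.append(random.choice(vocab))   # word replacement
        else:
            corrupted.append(tok)
    return corrupted
\end{verbatim}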
\subsection{Contrastive Learning for Context-aware NMT}
Context-aware NMT models can implicitly capture dependencies between the source and contextual sentences. CorefCL introduces a max-margin contrastive learning loss to train the model to explicitly discriminate inconsistent contexts. This contrastive loss also encourages a model to be more sensitive to the contents of contextual sentences.
Formally, given the source $\mathbf{x}$, target $\mathbf{y}$, $n$ contextual sentences $C = [\mathbf{c}_1, \cdots, \mathbf{c}_n]$ in the data $\mathcal{D}$, we first train the model by minimizing a negative log-likelihood loss, which is a common MT loss:
\[ \mathcal{L}_{MT} = \sum_{(\mathbf{x},\mathbf{y},C) \in \mathcal{D}}
- \mathrm{log} P ( \mathbf{y} | \mathbf{x}, C ).\]
Once the model is trained with MT loss, we fine-tune the model with a contrastive loss. With a contrastive version of context $\tilde{C}$, our contrastive learning objective is minimizing a max-margin loss \cite{huang-etal-2018-large, yang-etal-2019-reducing}:
\begin{multline*}
\mathcal{L}_{CL} = \sum_{(\mathbf{x},\mathbf{y},C,\tilde{C}) \in \mathcal{D}}
\mathrm{max} \{ \eta + \mathrm{log} P ( \mathbf{y} | \mathbf{x}, \tilde{C} ) \\
- \mathrm{log} P ( \mathbf{y} | \mathbf{x}, C ), 0 \}.
\end{multline*}
Minimizing $\mathcal{L}_{CL}$ encourages the log-likelihood of the ground-truth to be at least $\eta$ larger than that of the contrastive examples. In our formulation, we want the model to be more sensitive to the subtle changes in the contextual sentences.
The contrastive loss is jointly optimized with the MT loss, since we empirically found that joint optimization yields better performance than minimizing the CL loss alone, similar to \cite{Yu2020thedeepmind}:
\[ \mathcal{L} = (1-\alpha)\mathcal{L}_{MT} + \alpha \mathcal{L}_{CL}, \]
where $\alpha \in [0,1]$ is a weight for balancing between contrastive learning and MT loss. For simplicity, we fixed $\alpha$ during fine-tuning.
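Putting the two objectives together, the training step can be sketched as follows; the model interface (returning sequence-level log-likelihoods) is an assumption made for illustration.
\begin{verbatim}
# Sketch of the joint objective: sequence-level log-likelihoods for
# the original and corrupted contexts feed a max-margin contrastive
# term that is mixed with the standard MT (NLL) loss.
import torch

def corefcl_loss(model, src, tgt, ctx, ctx_neg, eta=1.0, alpha=0.5):
    # model(...) is assumed to return log P(y | x, C) summed over
    # the target sequence, with shape (batch,)
    logp_pos = model(src, tgt, ctx)      # log P(y | x, C)
    logp_neg = model(src, tgt, ctx_neg)  # log P(y | x, C~)
    mt_loss = -logp_pos.mean()
    cl_loss = torch.clamp(eta + logp_neg - logp_pos, min=0.0).mean()
    return (1.0 - alpha) * mt_loss + alpha * cl_loss
\end{verbatim}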
\section{Experiments}
\subsection{Datasets}\label{exp:datasets}
\begin{table*}[ht]
\centering
\begin{tabular}{l|l|l|l|ll}
\hline
System & WMT & OpenSubtitles & IWSLT & \multicolumn{2}{c}{En-Ko Subtitles}\\
& & & & detok. & char. \\
\hline
sent-level & 22.7 & 27.6 & 29.3 & 8.6 & 19.2 \\
\hline
concat & 22.4 & 28.3 & 29.7 & 9.3 & 22.1 \\
+ CorefCL & 23.5 (+1.1) & 29.1 (+0.8) & 30.9 (+1.3) & \underline{10.9 (+1.6)} & \underline{24.9 (+2.8)} \\
multi-enc & 23.1 & 28.6 & 29.8 & 9.2 & 21.7 \\
+ CorefCL & \underline{24.3 (+1.2)} & \underline{29.8 (+1.4)} & \underline{31.1 (+1.3)} & \underline{10.8 (+1.6)} & 24.4 (+2.7) \\
multi-enc-hier & 24.4 & 29.1 & 30.0 & 10.3 & 23.1 \\
+ CorefCL & 25.4 (+1.0) & 30.2 (+1.1) & 31.1 (+1.2) & 11.7 (+1.4) & 25.7 (+2.6) \\
\hline
\end{tabular}
\caption{\label{table:overall-bleu}
Corpus-level BLEU scores of the compared models on different tasks. For the En-Ko subtitles task, we list both detokenized (detok.) and character-level (char.) scores. Improvements from CorefCL are given in parentheses. An underlined score indicates the largest BLEU improvement among the models on the same task.
}
\end{table*}
We experimented with CorefCL on various document-level parallel datasets: i) 3 English-German datasets including WMT document-level news translation\footnote{http://www.statmt.org/wmt19/translation-task.html} \cite{barrault-etal-2019-findings}, IWSLT TED talk \footnote{https://wit3.fbk.eu/home} \cite{Cettolo2017overview-iwslt}, OpenSubtitles'18\footnote{https://opus.nlpl.eu/OpenSubtitles-v2018.php} \cite{lison-etal-2018-opensubtitles2018}, and ii) our web-crawled English-Korean subtitles corpus.
For all tasks, we take the 2 preceding sentences as contextual sentences, and we only consider sentences within the same document (article, talk, movie, one episode of a TV program, etc.) as the source sentence. If a validation/test split is not provided with the data, we apply a document-based split to ensure that the training and validation/test data are well-separated. The datasets are as follows:
\textbf{WMT} We use the set of parallel corpora annotated with document boundaries released in the WMT'19 news translation task. Specifically, we combine Europarl v9, News Commentary v14, and MODEL-RAPID to form a training set containing 3.7$M$ examples, 0.85$M$ of which have cross-sentential coreferences. For the validation and test sets, we used newstest2013 and newstest2019, which contain 3.05$k$ and 2.14$k$ examples respectively.
\textbf{IWSLT} The IWSLT dataset consists of transcriptions of TED talks in a variety of languages. We used the 2017 version of the training set, a combination of dev2010, tst2010, tst2015 as a validation set, and tst2017 as a test set. The resulting dataset consists of 232$k$ (50.3$k$ with cross-sentential coreferences), 3.5$k$, 1.2$k$ examples of train, dev, test sets respectively.
\textbf{OpenSubtitles} We also choose the English-German pair of the OpenSubtitles2018 corpus. The raw corpus contains 24.4$M$ parallel sentences. We follow the filtering method of \cite{voita-etal-2019-good}, removing pairs whose subtitle frames have a time overlap of less than 0.9. We also use separate documents for the validation / test sets, resulting in 3.9$M$ (1.01$M$ with cross-sentential coreferences), 40.7$k$, and 40.5$k$ examples for the train / validation / test sets respectively.
\textbf{En-Ko Subtitles} For the English-Korean experiments, we first crawled approximately 6.1$k$ bilingual subtitle files from websites such as GomLab.com. Since the sentence pairs of these subtitles are already soft-aligned by their creators, we applied a simple time-code based heuristic to filter examples. The final data contains 1.6$M$ (0.24$M$ with cross-sentential coreferences), 155.6$k$, and 18.1$k$ examples of consecutive sentences in the training, validation, and test sets respectively.
For preprocessing, all English and German corpora are first tokenized with the Moses \cite{koehn-etal-2007-moses} tokenizer\footnote{https://github.com/moses-smt/mosesdecoder}. We then apply BPE \cite{sennrich-etal-2016-neural} using SentencePiece\footnote{https://github.com/google/sentencepiece} with approximately 16.5$k$ merge operations. We also put a special token [BOC] at the beginning of each contextual sentence to differentiate it from the source sentence.
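For illustration, the subword step might be scripted as follows (a sketch assuming the SentencePiece Python bindings; file names and the exact vocabulary size are placeholders):
\begin{verbatim}
import sentencepiece as spm

# train a BPE model; [BOC] is registered as a user-defined symbol
spm.SentencePieceTrainer.train(
    input="train.tok.txt", model_prefix="bpe",
    vocab_size=16500, model_type="bpe",
    user_defined_symbols=["[BOC]"])

sp = spm.SentencePieceProcessor(model_file="bpe.model")

def encode_with_context(context_sents, source_sent):
    # prepend [BOC] to each contextual sentence
    ctx = " ".join("[BOC] " + s for s in context_sents)
    return sp.encode(ctx, out_type=str), sp.encode(source_sent, out_type=str)
\end{verbatim}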
\subsection{Settings}
We use the same model hyperparameters, such as the hidden dimension and the number of hidden layers, as \texttt{transformer-base} \cite{Vaswani2017Attention}, since all of the compared models are based on the Transformer. Specifically, the hidden dimension is 512, the number of layers is 6, the number of attention heads is 8, and the dropout rate is set to 0.1.
All models are trained with ADAM \cite{kingma2014adam} with a learning rate tuned for each dataset. We employ early stopping when the MT loss on the validation set stops improving. We train each baseline model from scratch, with random initialization, on the document-level dataset. Note that none of the baseline models use the iterative training of \cite{zhang-etal-2018-improving, huo-etal-2020-diving}, which first trains the model on the sentence-level task and then on the document-level task.
All the evaluated models are implemented on top of the transformer\footnote{https://github.com/huggingface/transformers} framework.
We measure the translation quality by the BLEU score \cite{papineni-etal-2002-bleu}. For scoring BLEU, we use sacreBLEU \cite{post-2018-call}: case-sensitive, detokenized scores for En-De, and case-insensitive scores with the \texttt{intl} tokenizer for the En-Ko task. We also report case-insensitive character-level scores on En-Ko for comparison.
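The scoring setup can be reproduced with the sacreBLEU Python API along these lines (a sketch assuming the current API; the hypothesis and reference lists are placeholders):
\begin{verbatim}
from sacrebleu.metrics import BLEU

hyps = ["..."]        # detokenized system outputs
refs = [["..."]]      # one inner list per reference set

bleu_en_de = BLEU()                                   # case-sensitive, En-De
bleu_en_ko = BLEU(tokenize="intl", lowercase=True)    # case-insensitive, En-Ko

print(bleu_en_de.corpus_score(hyps, refs).score)
print(bleu_en_ko.corpus_score(hyps, refs).score)
\end{verbatim}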
\subsection{Overall BLEU Evaluation}
We display the corpus-level test BLEU scores of all compared models on the different tasks in Table \ref{table:overall-bleu}. Among the baseline systems, all context-aware models show moderate improvements over the sentence-level (sent-level) baseline. These results are comparable to those of \citet{huo-etal-2020-diving} on the IWSLT task, except for multi-enc-hier, and to those of \citet{Yun-Hwang2020Improving} on the OpenSubtitles task. One exception is the single-encoder model (concat) on the WMT task, whose drop seems to be due to the longer average sentence length.
We evaluated CorefCL by fine-tuning the context-aware models. Results show that models with CorefCL outperformed their vanilla counterparts, with BLEU gains of up to 1.4 on the En-De tasks and 1.6/2.8 (detokenized/char-level BLEU) on the En-Ko subtitles task.
We observed that while CorefCL consistently improves BLEU on all tasks, it achieves larger gains on the IWSLT and En-Ko subtitles tasks. Since the improvements on much larger datasets like WMT and OpenSubtitles are smaller, we conjecture that CorefCL also acts as a regularizer.
\subsection{Results on English-German Contrastive Evaluation Set}
\begin{table}[h]
\centering
\begin{tabular}{l|cc|cc}
\hline
System & \multicolumn{4}{c}{Trained on} \\
& \multicolumn{2}{c}{WMT} & \multicolumn{2}{c}{OpenSubtitles} \\
& BLEU & Acc. & BLEU & Acc. \\
\hline
sent-level & 19.3 & 47.9 & 29.6 & 48.4 \\
\hline
concat & 19.9 & 49.7& 30.5 & 54.4 \\
+ CorefCL & 20.3 & 51.2 & 32.3 & 57.9\\
multi-enc-hier & 20.4 & 50.9& 31.7 & 57.3\\
+ CorefCL & 21.9 & 52.4 & 33.6 & 60.5\\
\hline
\end{tabular}
\caption{\label{table:contrastive_set}
BLEU and pronoun resolution accuracies on ContraPro \cite{muller-etal-2018-large} En-De contrastive test set.
}
\end{table}
To assess in more detail how CorefCL improves pronoun-related translation, we evaluate our method on ContraPro\footnote{https://github.com/ZurichNLP/ContraPro}, a contrastive test suite for En-De pronoun translation introduced by \citet{muller-etal-2018-large}. The evaluation is done by letting the model score German sentences with correct and incorrect pronoun translations, given the source and contextual English sentences. The accuracy is the fraction of correctly scored examples (i.e. examples where the correct translation received a higher score than its incorrect counterpart).
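In code, the accuracy computation reduces to the following sketch, where \texttt{score} stands for the model's log-likelihood of a candidate translation given the English source and context (names are illustrative):
\begin{verbatim}
def contrapro_accuracy(examples, score):
    """examples: list of (context, source, correct_de, wrong_de) tuples
       score(context, source, target) -> log P(target | source, context)"""
    hits = sum(score(ctx, src, good) > score(ctx, src, bad)
               for ctx, src, good, bad in examples)
    return hits / len(examples)
\end{verbatim}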
We evaluate the models trained on the WMT and OpenSubtitles tasks. We also list BLEU scores of En-De translation using the English source text in ContraPro. As shown in Table \ref{table:contrastive_set}, CorefCL significantly improves the scoring accuracy for all models, by up to 5.5\%, along with slight improvements in BLEU.
One interesting finding is that CorefCL also achieves a substantial accuracy gain for the models trained on WMT. Since ContraPro is built from OpenSubtitles, WMT-trained models yield lower performance because of the domain shift between training and testing. Table \ref{table:contrastive_set} clearly shows the performance drop in BLEU; nevertheless, moderate improvements in accuracy are also observed for the WMT-trained models.
\subsection{Analysis} \label{exp:analysis}
\begin{table}[h]
\centering
\begin{tabular}{l|c|c}
\hline
System & BLEU & Accuracy \\
\hline
multi-enc-hier & 31.7 & 57.3\\
+ CorefCL & 33.6 & 60.5\\
- Word omission & 32.4 & 59.4 \\
- Word replacement & 32.3 & 58.6 \\
\hline
\end{tabular}
\caption{\label{table:analysis}
Ablation study on the coreference corruption strategies. All systems are trained on the OpenSubtitles English-German dataset and evaluated on ContraPro.
}
\end{table}
\textbf{Ablation Study} CorefCL uses two corruption strategies for generating contrastive coreference mentions: word omission and word replacement. To better understand the influence of these strategies, we evaluate CorefCL under different settings.
As shown in Table \ref{table:analysis}, using both types of corruption results in the best performance. Removing either of the two strategies slightly degrades both the pronoun resolution accuracy and BLEU. Although the difference is not significant, removing word replacement has a larger impact on accuracy. This suggests that a standard context-aware model, at least multi-enc-hier, is less sensitive to word substitutions; the word replacement strategy can compensate for this, as reflected in the improved performance.
\begin{figure}[h]
\resizebox{0.95\columnwidth}{!}{\includegraphics{fig_sample-crop.pdf}}
\caption{Example translation with and without CorefCL. }
\label{fig:sample}
\end{figure}
\textbf{Qualitative Example} We display in Fig. \ref{fig:sample} a sample from the ContraPro corpus and its translations by the multi-enc-hier model trained on the OpenSubtitles task. In this example, since "coat" is translated as \textit{Mantel}, which is a masculine noun, \textit{Er} would be the adequate translation of "It", instead of the feminine \textit{Sie}. While multi-enc-hier incorrectly translated "It" as \textit{Sie}, the model fine-tuned with CorefCL correctly resolved it to \textit{Er}.
In practice, context-aware models that do not leverage target-side context struggle to maintain this kind of coreference consistency \cite{muller-etal-2018-large, lapshinova-koltunski-etal-2019-analysing} because of the asymmetric nature of grammatical features and data distributions. Our results show that CorefCL can mitigate this limitation of source-only context-aware models.
\section{Conclusions and Future Work}
We have presented a data augmentation and contrastive learning scheme based on coreference for context-aware NMT. By leveraging coreference mentions between the source and target sentence, CorefCL effectively generates contrastive examples for applying contrastive learning on context-aware NMT models. In the experiments, CorefCL consistently improves the translation quality and pronoun resolution accuracy.
As future work, we plan to extend CorefCL to target-side contexts, since maintaining coreference consistency requires both the source and the target contexts. It would also be interesting to apply CorefCL when fine-tuning large pre-trained language models such as BART \cite{lewis-etal-2020-bart} or T5 \cite{raffel2020exploring} on downstream document-level MT tasks.
\section{Introduction}
All dynamical laws are affected by a deep problem \cite{BAR1}: they are formulated in terms of an extrinsic parameter, time, which is not itself an element of dynamics and hence is left unexplained.
One powerful way of addressing this problem is the \qq{timeless approach} to time. Its logic is elegant: both dynamics and time should emerge from more fundamental elements, chosen so that the dynamics satisfies certain criteria \cite{MITHWE}. For example, when applied to Newtonian physics \cite{BAR1, BAR2}, this approach leads to relational dynamics, where one selects a system as a reference clock, with a particular clock-variable, so as to ensure that Newton's laws hold when that observable is regarded as time (the so-called \qq{ephemeris time}). This picture, however, still requires motion to be assumed as primitive, thus leaving the appearance of dynamics itself unexplained.
The same problem as in classical physics arises in quantum theory: time appears as an extrinsic parameter in the equations of motion. In quantum theory there is also a deeper problem - a major obstacle for quantum gravity \cite{ZEH}: time is not a quantum observable, and yet quantum observables depend on it. What precisely is its status, and how can it be reduced to more fundamental elements?
Once again, the timeless approach provides an elegant way out: the Page and Wootters (PW) model, \cite{PAWO}. By analogy with classical physics, that approach aims at selecting a clock and an observable of the clock, so that the Schr\"odinger (or Heisenberg) equation holds on the rest of the universe, with respect to the variable $t$ labelling the eigenvalues of that observable. But since observables in quantum theory are operators, the implementation of this approach turns out to be rather different from its classical counterpart - with some advantages, but also, as we are about to recall, various problems.
The advantage is that, unlike in the classical scenario, in quantum theory motion does not have to be assumed as primitive: one assumes that the whole universe is in a {\sl stationary state} - i.e., it is an eigenstate of its Hamiltonian. Time and dynamics then emerge in a subsystem of the universe that is entangled with some suitably chosen clock, endowed with an appropriate observable that we shall call {\sl clock observable}. It is important to notice that this is {\sl not} a time operator, but simply an observable (such as, say, a component of the angular momentum) of the system chosen as a clock.
Specifically, by supposing that the Hamiltonian is sufficiently local, it is always possible to regard the universe as consisting of two {\sl non-interacting} subsystems, which we shall call \qq{the clock} and \qq{the rest}. A clock-observable $T$, conjugate to the clock's Hamiltonian, defines a basis of eigenvectors $\ket{t}\;:\;T\ket{t}=t\ket{t}$ (the hands of the clock), where $t$ is a real-valued label. Since $T$ does not commute with the total Hamiltonian of the universe, the overall static state of the universe must be a {\sl superposition} (or mixture, \cite{VED}) of different eigenstates of the clock observable $T$: as a result, a Schr\"odinger equation can be written for the relative state (in the Everett sense \cite{EVE}) of the rest of the universe (relative to $t$) whose parameter time is nothing but the label $t$ of the states of the clock. (An equivalent construction can be carried out in the Heisenberg picture \cite{PAWO}.)
As we said, nothing in this construction relies on defining a time operator. Thus, quantum theory provides the means to solve the problem of time via its most profound properties: having pairs of non-commuting observables (in this case, the Hamiltonian of the universe, and the clock observable $T$); and permitting entanglement between subsystems of the universe. Unlike in classical dynamics, there is no need to assume any underlying motion: both time {\sl and} motion are explained in terms of motionless entanglement contained in the state of the universe.
This elegant model leading to an \qq{evolution without evolution} \cite{PAWO} has promising features, such as its compatibility with quantum gravity \cite{XXX} and its operational nature, which bodes well for experimental techniques involving quantum clocks \cite{NIST}. Yet, it has never been developed beyond the toy-model stage. This is because it is affected by a number of problems, which, though superficially technical, have been regarded as invalidating the whole approach as a contribution to fundamental physics. For example, Kuchar pointed out problems about the possibility of constructing two-time propagators in this model, \cite{KU} -- these have been thoroughly addressed in \cite{LOY}. There are also conceptual problems, because the model seems to have serious ambiguities that do not arise in relational classical dynamics. Specifically, as pointed out by Albrecht and Iglesias, there seems to be a \qq{clock ambiguity}: there are several non-equivalent choices of the clock \cite{ALB}, which appear to produce an ambiguity in the laws of physics for the rest of the universe: different choices of the clock lead to different Hamiltonians, each corresponding to radically different dynamics in the rest of the universe. So, it would seem that the logic of the timeless approach cannot be applied as directly as in classical physics, because it does not lead to a unique Schr\"odinger equation for the rest of the universe.
In this paper we show that the clock ambiguities in fact do not arise. To see why they do not arise, one must appeal to the necessary properties for a subsystem to be a good clock -- in particular, that it must be weakly interacting with the rest. We also update the PW model, clarifying what constraints the state of the universe must satisfy in order for the model to be realistic, and how it accommodates an unambiguous notion of the flow of time. As a result of this update, the model becomes applicable to a number of open problems, including potential new applications.
\section{Evolution without evolution}
We shall now review the PW approach, by expressing explicitly what conditions must hold for it to be applicable -- namely:
{\bf Timelessness}. The first condition is that the Universe is \qq{timeless}, i.e., it is in an eigenstate $\ket{\psi}\in \cal{H}$ of its Hamiltonian $H$, which can always be chosen so that
\begin{equation}
H\ket{\psi}=0\;.\label{tot}
\end{equation}
This constraint is compatible with existing approaches to quantum gravity - e.g. the Wheeler-DeWitt equation in a closed universe \cite{WEDE}, but we regard it as the first of a set of sufficient conditions for a timeless approach to time in quantum theory. Note also that this assumption is compatible with observation, as argued in \cite{PAWO}, because it is impossible empirically to distinguish the situation where \eqref{tot} holds from that where the universe's state is not stationary, because the phases appearing in the state $\ket{\psi}$ are unobservable.
{\bf Good clocks are possible}. The second sufficient condition is that the Hamiltonian includes at least one good clock -- by which we mean a system with a large set of distinguishable states, which interacts only weakly with the rest of the universe; in the ideal case, it should not interact at all. \footnote{That a perfect clock must not interact with anything else is not in contradiction with the fact that for actual clocks synchronisation must occur - indeed the latter, since it requires interactions, is always carried out when the clock is {\sl not} being used as a clock.} So, the Hamiltonian must be such that there exists a tensor-product structure (TPS) $\cal{H}\sim \cal{H}_C\otimes \cal{H}_R$, where the first subsystem represents the clock and the second the rest of the universe, \cite{PER,PAWO}, such that this crucial {\sl non-interacting property} holds: $$H=H_C\otimes{\mathfrak I}+{\mathfrak{I}}\otimes H_R$$ where $\mathfrak{I}$ denotes the unit operator on each subspace.
In classical physics, the \qq{measurement of time} is always performed relative to some dynamical variable (e.g. a pointer on a clock dial). In quantum theory, a similar logic is valid \cite{PER}. For the ideal clock, the observable to choose as indicator is the {\sl conjugate} observable $T_C$ to the clock Hamiltonian, $[{H}_C,T_C]=i$, with $T_C\ket{t}=t\ket{t}$, where the values $t$ form a continuum, which represent the values to be read on the hands of the clock. Once more, note that $T_C$ is {\sl not} a time-operator. It is an observable of the clock subsystem.
That clocks are possible in reality means that the behaviour of the ideal clock can be approximated to an arbitrarily high accuracy: as pointed out in \cite{DEU}, the ideal clock can be approximated by systems with an observable $T$ that has a discrete spectrum of $n$ values $t_n$, where there is no limit to how well the sequence of $t_n$ can approximate the real line. In this paper we shall confine our attention to the ideal case, for simplicity of exposition.
{\bf Entanglement.} The third sufficient condition for the P-W construction to hold is that the clock and the rest of the universe are {\sl entangled}: as it will become clear in a moment, this is the feature that allows the appearance of dynamical evolution on the rest to be recovered out of no evolution at all at the level of the universe. Formally, this means that the state of the universe $\ket{\psi}$ must have this form:
\begin{equation}
\ket{\psi}=\sum_t\alpha_t\ket{t}\ket{\phi_t}\;\label{dec}
\end{equation}
for some appropriate $\ket{\phi_t}$ defined on the rest, with two or more of the $\alpha_t$ being different from zero. In practice, as we shall see, for this to produce a realistic dynamics, $\alpha_t\neq 0$ for a sufficiently large number of $t$'s. This is because all that happens in the rest is given once and for all in the state $\ket{\psi}$. By taking one of the clock eigenstates $\ket{0}$ as the initial time, whereby $\ket{t}=\exp{(-i H_C t)}\ket{0}$, the story of the rest of the universe is a sequence of events encoded in the various $\ket{\phi_0}, \ket{\phi_1}, ...,\ket{\phi_t}$.
Note that the rest and the clock must {\sl not} be in an eigenstate of their local Hamiltonians, otherwise the dynamics is trivial. In the basis $\ket{\epsilon_n}\ket{E_n}$ defined by the local Hamiltonians $H_C$ and $H_R$, the universe state is therefore $\ket{\psi}=\sum_{m,n}\psi_{m,n} \ket{\epsilon_m}\ket{E_n}$, where $\psi_{m,n}\neq 0$ only for pairs such that $H\ket{\epsilon_m}\ket{E_n}=(\epsilon_m+E_n)\ket{\epsilon_m}\ket{E_n}=0$. An elementary example will clarify this point. Consider a universe made of two qubits only, with Hamiltonian $H=\sigma_z\otimes{\mathfrak I}+{\mathfrak{I}}\otimes \sigma_z$, where $\sigma_z$ represents the z-component of the spinor $\left (\sigma_x, \sigma_y, \sigma_z\right)$, $[\sigma_i, \sigma_j]=2i\epsilon_{ijk}\sigma_k$ and $\{\sigma_i, \sigma_j\}=2\delta_{ij}{\mathfrak I}$. The clock observable can be $\sigma_x$, so that in the clock basis $\ket{+},\ket{-}$ the state of the universe can be written as $\ket{\psi}=\frac{1}{\sqrt{2}}\left (\ket{++}-\ket{--}\right)$. As required, the Hamiltonian of the clock generates the shift on the two clock \qq{hands}, $\exp(-i\sigma_z\frac{\pi}{2})\ket{+}=\ket{-}$ up to a global phase. In the energy basis (the basis of eigenvectors of $\sigma_z$) the state of the universe is $\ket{\psi}=\frac{1}{\sqrt{2}}\left (\ket{01}+\ket{10}\right)$, which indeed satisfies $H\ket{\psi}=0$.
Therefore for this construction to be compatible with a realistic dynamics there must be a high degree of degeneracy in the Hamiltonian $0$-eigenspace.
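This toy example is easy to verify numerically. A short numpy/scipy sketch (assuming the convention $\sigma_z\ket{0}=\ket{0}$) checks both the constraint \eqref{tot} and the unitary evolution of the relative state generated by $H_R=\sigma_z$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)      # sigma_z
I2 = np.eye(2, dtype=complex)
H = np.kron(sz, I2) + np.kron(I2, sz)          # H_C x I + I x H_R

# |psi> = (|01> + |10>)/sqrt(2): an entangled zero-energy state
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
assert np.allclose(H @ psi, 0)                 # timelessness

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
M = psi.reshape(2, 2)                          # rows: clock, cols: rest

def relative_state(t):
    clock_t = expm(-1j * sz * t) @ plus        # |t> = exp(-i H_C t)|+>
    phi = clock_t.conj() @ M                   # (<t| x I)|psi>
    return phi / np.linalg.norm(phi)

phi0 = relative_state(0.0)
for t in [0.3, 1.0, 2.5]:                      # Schroedinger evolution
    assert np.isclose(abs(np.vdot(relative_state(t),
                                  expm(-1j * sz * t) @ phi0)), 1.0)
\end{verbatim}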
\medskip
If the above conditions are satisfied, the evolution without evolution can be reconstructed as follows. The state of the rest of the universe when the clock reads $t$ is the Everett relative state, \cite{EVE}, defined as:
\begin{equation}
\rho_t=\frac{{\rm Tr_c}\{P_t^{(c)}\rho\}}{{\rm Tr}\{P_t^{(c)}\rho\}}=\ket{\phi_t}\bra{\phi_t}.
\end{equation}
Note that the projector in the definition of relative state has nothing to do with measurement and does not require one to be performed on the clock: rather, the relative states are a 1-parameter family $\ket{\phi_t}$ of states, labelled by $t$, each describing the state of the rest with respect to the clock ‘given that’ the latter is in the state $\ket{t}$. By using the constraint $\eqref{tot}$, the special, {\sl non-interacting} form of $H$, and the fact that $[H_C,T_C]=i$, one obtains that the relative state of the rest evolves according to the Schr\"odinger equation with respect to the parameter $t$:
\begin{equation}
\frac{\partial \rho_t}{\partial t}= i[\rho_t, H_R]\;.\label{SC}
\end{equation}
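For completeness, a one-line sketch of this step for the pure relative state $\ket{\phi_t}\propto(\bra{t}\otimes{\mathfrak I})\ket{\psi}$, using $\bra{t}=\bra{0}e^{iH_Ct}$ together with the constraint \eqref{tot}:
\begin{align*}
i\frac{\partial}{\partial t}\left(\bra{t}\otimes{\mathfrak I}\right)\ket{\psi}
&= -\left(\bra{t}H_C\otimes{\mathfrak I}\right)\ket{\psi}\\
&= -\left(\bra{t}\otimes{\mathfrak I}\right)\left(H-{\mathfrak I}\otimes H_R\right)\ket{\psi}
= H_R \left(\bra{t}\otimes{\mathfrak I}\right)\ket{\psi}\;,
\end{align*}
from which Eq. \eqref{SC} follows for $\rho_t=\ket{\phi_t}\bra{\phi_t}$.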
Thus, the logic that \qq{time can be said to exist if there is a description of the physical world such that the Schr\"odinger (or Heisenberg) equation holds on a subsystem of the universe}, seems to be applicable to quantum theory. The parameter $t$ is to be interpreted as time, and the evolution of the rest of the universe has been recovered out of no evolution at all.
Assuming that the eigenstates of the clock have the form $\ket{t}=\exp(-iH_Ct)\ket{0}$ may seem too strong a constraint: together with the fact that the clock and the rest are entangled, that constraint directly implies that the evolution on the rest has to have the same exponential form leading to the Schr\"odinger equation. However, the main point of the PW approach is to show that there exists at least one such choice. This is a rather remarkable property of unitary quantum theory, as it implies that it is consistent with there being the appearance of dynamics in a subpart of the universe, even when the whole universe is at rest.
Note also that this construction is compatible with a time-dependent Hamiltonian arising on a {\sl subsystem} of the rest, just like in ordinary quantum mechanics. The time-dependent Hamiltonian for the subsystem only is an approximate description, generated by the interactions (encoded in the time-independent hamiltonian $H_R$) between the subsystem and the environment, in the approximation where the environment can be treated semi-classically (see, e.g. \cite{JAB}).
\section{There is no ambiguity}
A problem seems to arise in the PW logic. Quantum theory provides infinitely many inequivalent ways of partitioning the total Hilbert space of the universe into a tensor-product structure (TPS); as a consequence, there would seem to be several choices of the clock by which unitary evolution can arise on the rest of the universe. If true, this would mean that given the same overall state $\ket{\psi}$ describing the universe, the PW approach leads to completely different dynamics on the rest of the universe. This is the so-called \qq{clock ambiguity}, \cite{ALB}. We are about to show that this ambiguity does not in fact arise: having fixed the total Hamiltonian and the overall state $\ket{\psi}$ of the universe, if there is \textit{one} tensor-product structure - i.e., one partition of the universe into a good clock and the rest - leading to a unitary evolution generated by a {\sl time-independent} Hamiltonian for the relative state, then it must be unique. The crucial property will be that the clock is not any old subsystem of the universe, but it must be, in the ideal case, a {\sl non-interacting} one.
Let us first summarise the clock ambiguity problem. By choosing a suitable orthonormal basis $|{k}\rangle $ in the overall Hilbert space ${\cal H}$, one can write: $$\ket{\psi} = \sum_k
\alpha_k |{k}\rangle\;,$$ where $\ket{k}\iff \ket{t}_C\ket{\phi_t}_R$ in a given tensor product structure ${\cal H}\sim {\cal H_C}\otimes {\cal H_R} $.
The clock ambiguity is
thus expressed: consider a different state of the universe, such as $$|\tilde
{\psi}\rangle = \sum_k \beta_k |{k}\rangle\;.$$
There is of course a unitary operator $W$ such that $|\tilde {\psi}\rangle = W|{\psi}\rangle$. Hence \begin{equation}|{\psi}\rangle =\sum_k \beta_k
W^{\dagger}|{k}\rangle =\sum_k \beta_k |\tilde{k}\rangle\label{AMB}\end{equation} where we have defined $\ket{\tilde k}=W^{\dagger}|{k}\rangle$.
Now, it is possible to choose a {\sl different} bi-partite tensor-product structure whereby: $ |\tilde{k}\rangle = |t\rangle_{\tilde C}|\phi_t\rangle_{\tilde R}$. The clock ambiguity is that there are countless such choices, and each would seem to give rise to a very different description of the evolution of the rest. In one, the rest would appear to evolve according to the sequence of relative states: $\ket{\phi_0}_R$,$\ket{\phi_1}_R, ...., \ket{\phi_t}_R$; in the other, it would go through the sequence of {\sl different} relative states: $\ket{\phi_0}_{\tilde R }, \ket{\phi_1}_{\tilde R },...., \ket{\phi_t}_{\tilde R }$.
In fact, the clock ambiguity does not arise, because the PW model has additional constraints. In short, a clock is a special subsystem of the universe, which must not interact with the rest, in the ideal case. So, let us assume that there exists a tensor-product structure $\cal{H}\sim \cal{H}_C\otimes \cal{H}_R$ where the clock and the rest are non-interacting: $H=H_C\otimes{\mathfrak I}+{\mathfrak{I}}\otimes H_R$ -- whereby, applying the PW argument, the relative state $\ket{\phi_t}_R$ of the rest evolves according to a unitary evolution generated by $\exp{(-iH_Rt)}$.
Formally, a tensor product structure is a unitary mapping $U$ whose elements $U_{a,b}^{k}$ have the property that, for any state $\ket{k}\in{\cal H}$ and some basis states $\ket{a}_C\ket{b}_R$: $$ |{k}\rangle = \sum_{a,b}U_{a,b}^{k}|a\rangle_C|b\rangle_R.$$
Two tensor product structures are {\sl equivalent} if and only if their elements $U_{i,j}^{k}$, $\tilde U_{a,b}^{k}$ are related by {\sl local} unitaries $P$, $Q$: $$ U_{i,j}^{k}= \sum_{a,b}P_{i}^{a}Q_{j}^{b}\tilde U_{a,b}^{k}\;.$$
Hence, the case where the new TPS is equivalent to the original one corresponds to $W=P\otimes Q$ in \eqref{AMB}, i.e., to choosing a different clock observable $P^{\dagger}T_CP$ from the optimal one $T_C$ (the conjugate observable to $H_C$). Therefore this case need not concern us any further, as it simply consists of choosing a poorer clock.
The case where the new TPS is not equivalent requires a little more explanation. In this case, the unitary $W$ in equation \eqref{AMB} has the form: $W=\exp\{-i\left(W_C+W_R+W_{CR}\right)\}$, for some Hermitean operators $W_C$, $W_R$ $W_{CR}$, where $W_{CR}$ operates as an interaction term between the two subsystems $C$ and $R$ of the original TPS. For two qubits, the most general form is: $$W_{CR}=\sum_{\alpha,\beta\in\{x,y,z\}}w_{\alpha,\beta}\;\sigma_{\alpha}\otimes\sigma_{\beta}\;$$ for real coefficients $w_{\alpha, \beta}$. The cases where $[H,W]=0$ or $[H,W_{CR}]=0$ also need not concern us any further, because in both cases $W$ would have a trivial, local action on $\ket{\psi}$.
The remaining case can be addressed as follows. $H$ is the sum of two non-interacting terms for $C$ and $R$ in the tensor-product structure defined by $U_{i,j}^{k}$. Therefore, in {\sl any} tensor product structure $\tilde U_{a,b}^{k}$ obtained via $W$ acting on the TPS defined by $U_{i,j}^{k}$, $H$ will have an interaction term between the new clock $\tilde C$ and the new rest $\tilde R$ :
\begin{equation}
H=H_{\tilde C}\otimes{\mathfrak{I}}+{\mathfrak{I}}\otimes H_{\tilde R}+V_{\tilde C}\otimes V_{\tilde R}\;
\end{equation}
because the transformation to the new TPS is generated by a non-local unitary transformation. As a consequence, in the new, non-equivalent, tensor-product structure, the evolution of the relative state as a function of the labels of the eigenstates of observable $T_{\tilde C}$ of the clock will \textit{not} be a unitary evolution generated by a \textit{time-independent} Hamiltonian. As pointed out in \cite{PAGE} it will have the form:
\begin{equation}
\frac{\partial \rho_t}{\partial t}= i[\rho_t, H_{\tilde R}]\;+ {\rm terms\; depending\; on\; t}.\label{rot}
\end{equation}
Hence, given $H$ and $\ket{\psi}$, if there is {\sl one} tensor product structure in which the clock is ideal (no interactions) {\sl and} a Schr\"odinger-type unitary evolution (generated by the time-independent hamiltonian $H_R$) arises on the relative state of the rest with respect to the labels $t$, then the TPS must be unique. In all other non-equivalent tensor product structures, although it is possible to write the overall state as
$|{\psi}\rangle =\sum_t \beta_t \ket{t}_{\tilde C}\ket{\phi_t}_{\tilde R}$, it must be $\ket{\phi_t}_{\tilde R}\neq \exp(-iH_{\tilde R}t)\ket{\phi_0}_{\tilde R}$, because the eq. \eqref{rot} holds instead of \eqref{SC} -- due to the interaction terms between the clock $\tilde C$ and the rest $\tilde R$. Thus, there is no clock ambiguity, as promised.
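A two-qubit numpy illustration of this point (a sketch; the CNOT gate serves here merely as an example of a non-local $W$): viewing the non-interacting $H=\sigma_z\otimes{\mathfrak I}+{\mathfrak{I}}\otimes \sigma_z$ in a $W$-rotated tensor-product structure produces an explicit interaction term.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

H = np.kron(Z, I2) + np.kron(I2, Z)       # non-interacting TPS
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
Hp = CNOT @ H @ CNOT.conj().T             # same H, W-rotated TPS

for a, A in paulis.items():
    for b, B in paulis.items():
        c = np.trace(np.kron(A, B) @ Hp).real / 4
        if abs(c) > 1e-12:
            print(a, b, c)   # prints Z I and Z Z: a V_C x V_R term appears
\end{verbatim}
The nonzero $\sigma_z\otimes\sigma_z$ coefficient is precisely the interaction $V_{\tilde C}\otimes V_{\tilde R}$ that spoils the time-independent Schr\"odinger evolution in the rotated structure.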
We conclude that in unitary quantum theory it is not ambiguous to apply the same logic as in the classical time-less approaches: the clock and the clock observable are to be chosen so that a Schr\"odinger-type dynamics arises on the rest of the universe, generated by a time-independent Hamiltonian. There can be only one such choice, for a given total Hamiltonian $H$ and a given total state of the universe.
{\bf The appearance of the flow of time.} It is worth pointing out that there is no flow of time in the PW picture. The PW approach shows that the Schr\"odinger equation generated by $H_R$ holds for the rest of the universe with respect to the labels $\{t\}$ of the eigenstates of a particular clock observable $T_C$, conjugate to $H_C$, $\ket{t}=\exp(-iH_Ct)\ket{0}$. But the {\sl flow of time} has to emerge as a result of there being subsystems of the rest of the universe that can perform measurements and store their results, thus constructing a {\sl history}. More specifically, let us consider a model where the rest is partitioned into three subparts: the \qq{observer} (which for simplicity we assume to be made of a memory only), the \qq{observed}, and a sequence of ancillas. As mentioned in section 2, by treating the ancillas semiclassically, it is possible to describe the observed and the observer as undergoing an effective evolution generated by a time-dependent Hamiltonian, in turn generated by the interactions with the ancillas as prescribed by $H_R$ -- where time here is the label $t$ of the eigenstates of the clock.
Let us suppose that effective evolution corresponds to a sequence of gates occurring on the observed and to the observer performing measurements on the observed to record what has happened. Specifically, suppose that the observer's memory starts in a blank state $\ket{b}^{\otimes N}$ at $t=0$ and the observed is in a state $ \ket{A_1}$ where $\ket{A_i}$, for $i=1...N$, is an eigenstate of some observable $A$. Suppose that the observer and the observed evolve under the effective time-dependent hamiltonian as follows: at time $t=1$ the observer measures the observable $A$, so that its state changes to $\ket{\text{Saw}A_1}\ket{b}^{\otimes N-1}\ket{A_1}$; then a local permutation happens on the observed, so that the state changes to $\ket{\text{Saw}A_1}\ket{b}^{\otimes N-1}\ket{A_2}$; then the observer measures $A$ again, so that the state is now $\ket{\text{Saw}A_1}\ket{\text{Saw}A_2}\ket{b}^{\otimes N-2}\ket{A_2}$ and so on, until the observer ends up recording a sequence of events $A_1, A_2, ...,A_N$ as prescribed by $H_R$. All these events, here described as sequential, are encoded statically in the overall state of the universe $\ket{\psi}$.
The beauty of the PW approach is that it is fully internally consistent. The observer cannot empirically tell the difference between a situation where the sequence of events it observes is really generated by the Hamiltonian $H_R$ on the rest and all the other possibilities, which include cases where the universe can be manipulated by some external entity. This is a general feature of any other picture where constructing a history of events is possible in the sense above, e.g. static pictures such as the Block Universe in general relativity. Imagine, for example, that an entity existing outside of the PW universe were able to decide which element in the PW wavefunction constitutes the \qq{now}, as defined with respect to some external coordinate time -- a sort of \qq{meta-time} existing outside of the PW universe. The entity is able to point at any one element $t_n$ declaring it the now. Then (again in the meta-time picture) it could point at another element $t_m$ as the next now. And so on. This corresponds to picking a different generator of the clock states than $\ket{t}=\exp(-iH_Ct)\ket{0}$, corresponding to a different dynamical law from the $H_R$-generated Schr\"odinger equation. The entity can choose any order it likes for the labels $t$; not necessarily the one corresponding to the sequence of events outlined above. What if the entity decides to point first at a state which in the original labeling appears \qq{later}, and then at an \qq{earlier} one? This may seem to make time flow backwards even from the point of view of the observer. But that is not the case.
The observer, from his own perspective, does not notice anything different, as his state only contains information about the \qq{previous times}. For example, the observer could never see any gaps if the entity decides to jump, say, from time $t=1$ to time $t=100$, because in the observer state corresponding to $t=100$ there will be recollections of all events $E_2...E_{100}$, from $t=2$ up to $t=100$. As far as the observer is concerned he perceives himself as coming to time $t=100$ directly from $t=99$. Therefore any experiment made by the observer would lead him to conclude that what he observes is consistent with the same Schr\"odinger equation as that corresponding to there being no jumps at all (in the meta-time), generated by the hamiltonian $H_R$. In other words, in the PW approach just like in any other dynamical theory, the existence of a meta-time is completely irrelevant from the observer's perspective, as it would not have any empirical consequences.
{\bf The arrow of time.} In the PW picture there is no arrow of time, just like in unitary quantum theory: the PW approach gives rise to a time-reversal symmetric dynamical law. Thus, the arrow of time has to be imposed by a separate postulate, requiring that under that dynamical law a given monotone always increases (or decreases). Namely, suppose again that the rest consists of two subsystems, \qq{the observer} and \qq{the observed}. For simplicity, let us approximate the \qq{observer} as simply consisting of a memory needed to keep track of what happens to the observed. The arrow of time can now be specified by the increase in entanglement between the observer and the observed, by selecting a measure of entanglement, and requiring that entanglement never decreases on the rest under the evolution generated by $H_R$. In other words, early times correspond to no entanglement between the observer and the observed and later times to more and more entanglement (as the observer learns more and more about the system). Since the relative states of the rest are pure states, when considering a bi-partition there is a unique measure, which consists of the relative entropy \cite{VE}. For a discussion of some explicit models, see \cite{PAGE}. The only ambiguities that might arise in this context are due to: 1) the possibility of picking different partitions into subsystems; and 2) the possibility of having a partition into n-subsystems, where $n\geq 3$: for, in this case, there is not a unique measure. However, these ambiguities are the same as those related to which coarse-graining to adopt in the usual statistical-mechanical picture. Hence this is no more problematic than any other coarse-graining approaches to irreversibility in statistical-mechanics.
\section{Conclusions}
We have shown that the PW timeless approach to time in quantum theory has no ambiguities, thus vindicating it as a viable proposal for the emergence of time in a time-less universe described by unitary quantum theory. The non-interacting property of the clock is crucial to establish that result: a good clock is not just a system with a large set of orthogonal states, such as a good memory; it must be non-interacting while it is used as a clock.
We have also updated the model so that it becomes possible to apply it to more general theories, including the successor of quantum theory. One possible development is to investigate under what conditions the PW logic could apply to theories other than quantum theory (e.g., generalised probabilistic theories \cite{BAR1} or constructor theory's super-information theories \cite{MA}). The challenge there is to understand what relative states would be, as well as in what form the clock ambiguity might appear. Another interesting application could be to recast this model in terms of pseudo-density matrices \cite{PSEUDO}, where time and space are treated in a unified framework. Finally, it is worth speculating about how the PW approach might provide observable consequences, when combined with cosmological models. For example, in the context of an expanding universe, one might use as clock observable the radius of the universe, whereby $\ket{\psi}=\sum_n\alpha_n\ket{t_n}\ket{\phi_n}\;$, where $t_n$ is the radius of the universe. In such models, \cite{REF}, $\langle t_n|t_m\rangle\sim \exp(-\gamma(t_n-t_m))$, where $\gamma$ is some parameter that can be fixed according to the particular cosmological model -- which means that different states of the clock get more and more distinguishable the more they are separated in \qq{time}. When applied in this scenario, the PW construction would lead to the conclusion that the relative state of the rest of the universe is no longer pure, but is a mixed state -- this is because the operators $P_t=\ket{t}\bra{t}$ involved in constructing the relative state are no longer orthogonal projectors. This fact might have observable consequences even at the present epoch, according to which particular cosmological model one chooses, provided that the accuracy for measuring time is high enough. We leave investigating all that to future work.
\section{Acknowledgements}
The authors thank David Deutsch for fruitful discussions and comments; Mario Rasetti for helpful suggestions. CM's research was made possible through the support of a grant from the Templeton World Charity Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. VV thanks the Oxford Martin School, Wolfson College and the University of Oxford, the Leverhulme Trust (UK), the John Templeton Foundation, the EU Collaborative Project TherMiQ (Grant Agreement 618074), the COST Action MP1209, the EPSRC (UK) and the Ministry of Manpower (Singapore). This research is also supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Competitive Research Programme (CRP Award No. NRF- CRP14-2014-02) and administered by Centre for Quantum Technologies, National University of Singapore.
\section{Introduction}
\label{section2}
The discovery that a peak in the density of states at certain frequencies where surface modes can be thermally excited will result in a significant enhancement in heat transfer when objects are brought in close proximity to each other has led to the development of applications relevant to energy systems such as near-field thermophotovoltaics \cite{narayanaswamy03a, dimatteo2001enhanced, laroche06b} and thermal rectification \cite{otey2010thermal}.%
This enhancement in heat transfer which can be several orders higher than that exchanged by objects when separated by large distances ($d \gg \lambda_T$ where $\lambda_T$ is the thermal wavelength of radiation) has been successfully explained using the theory of fluctuational electrodynamics developed by Rytov \cite{rytov59a} and first applied for analysing heat transfer between closely spaced bodies by Polder and Van Hove \cite{polder71}.
The theoretical framework of this procedure relies on the introduction of external microscopic thermal fluctuating currents or dipole moments whose correlation functions are related to the dielectric properties of the material via the fluctuation-dissipation theorem \cite{loomis94, pendry1999radiative, joulain05a}.
While the predictions from this theory have indeed been experimentally verified over the years for several geometries, including flat surfaces \cite{song2016radiative}, an STM tip over a substrate \cite{kim2015radiative, kloppstech2017giant}, and a sphere over a flat substrate \cite{rousseau2009radiative, shen2009surface}, a main assumption in this theory is the prevalence of local thermodynamic equilibrium in the objects, which restricts the cases to be analysed to stationary or quasi-stationary ones.
Recently, an alternate approach has been proposed which uses coupled harmonic oscillator theory to model the coupling between surface polaritons, enabling the analysis of near-field heat transfer both in dynamic situations, valid for time scales shorter than the relaxation time scales of surface polariton excitations, and in the steady state \cite{biehs2013dynamical, barton2015classical, sasihithludynamic}. In this work we adopt this approach to arrive at expressions for the steady-state near-field heat transfer for the configurations of two nanoparticles and of two planar systems, and compare these expressions with those derived using fluctuational electrodynamics. The approach we use in this study differs from that explained in Ref. \cite{biehs2013dynamical}, as detailed in Section \ref{nano}.
The advantage of this model is the simplicity and generality that it offers vis-a-vis the fluctuational electrodynamics theory. By modelling the coupling of surface modes using the harmonic oscillator model, the parameters thus derived will be useful to analyse not only heat transfer but also other phenomena where the resonant excitation of surface modes plays an important role, such as van der Waals forces, the decreased lifetime of molecules close to surfaces \cite{chance1978molecular}, and surface-enhanced Raman scattering \cite{le2008principles}, while also giving us additional insight into the mechanism and strength of coupling between surface modes.
The paper is arranged as follows: in Section \ref{section3} the theory of steady-state heat transfer between coupled harmonic oscillators is developed and key results are highlighted. In Section \ref{nano} this theory is applied to predict the heat exchange between nanostructures and compared with the predictions of fluctuational electrodynamics. In Section \ref{planar} we apply this theory to predict the near-field heat exchange between planar surfaces whose dielectric properties are given by the Drude model.
\section{Heat transfer between coupled harmonic oscillators}
\label{section3}
\begin{figure}[h]
\includegraphics[scale=0.45]{HO.png}
\caption{The coupled harmonic oscillator system which is analysed in Section \ref{section3}. The oscillators with spring constant $\omega_0^2$ are connected to two separate heat baths each maintained at a different temperature. }
\label{FigHO}
\end{figure}
We model the surface modes of the interacting systems as harmonic oscillators in contact with two separate heat baths maintained at constant temperatures, as shown in Fig.~\ref{FigHO}. Since the oscillators are in contact with heat baths, they can be modelled as force-driven. While the heat flux between the two heat reservoirs for such a system has been well studied \cite{dorofeyev2013coupled, ghesquiere2013entanglement, biehs2013dynamical, barton2015classical}, here we provide an alternate derivation based on Green's function theory to arrive at the expression for the heat transfer in such a system. A similar derivation has been given in Ref. \cite{pendry2016phonon} in the context of modelling the heat transfer due to coupling of Rayleigh modes in solids.
We begin with the equation of motion for such a system which in the frequency $\omega$ space can be written as:
\[
\omega^2 \hat{x}_1(\omega)= \omega_0^2 \hat{x}_1(\omega) - 2 i \omega \delta \hat{x}_1(\omega) + F(\omega, T)
\]
where $F(\omega, T)$ is the forcing function whose spectral density $S_F(\omega,T)$ is as yet unknown, and $\delta$ is the half-line width.
The Green's function for such a system defined such that $\hat{x}_1(\omega) =G(\omega)F(\omega, T)$ is given as:
\[
G(\omega) = \dfrac{1}{\omega^2 - \omega_0^2 + 2 i \omega \delta}
\]
Relating the density of states $\rho(\omega)$ to the imaginary part of the Green's function, and recognising that it peaks around $\omega \approx \omega_0$, gives:
\[
\rho(\omega) = \dfrac{4}{\pi}\dfrac{\omega_0^2 \delta}{(\omega^2 - \omega_0^2)^2 + 4 \omega_0^2 \delta^2}
\]
At equilibrium at temperature $T$ we have the total energy contained in the harmonic oscillator, $E$, to be:
\begin{equation}
E = \int_0^\infty \rho(\omega) \dfrac{\hbar \omega \, d\omega}{e^{\hbar \omega/(k_B T)} - 1}
\label{energy1}
\end{equation}
However, the energy contained in the oscillator is also given by:
\begin{equation}
E = \langle \omega_0^2 x^2 (t)\rangle = \omega_0^2 \frac{\pi}{\Theta} \int_{-\infty}^{\infty} |x(\omega)|^2 \, d\omega = 2 \omega_0^2 \int_{0}^{\infty} |G(\omega)|^2 \, S_F(\omega, T) d\omega
\label{energy2}
\end{equation}
where $\Theta$ is a large time interval and we make use of the definition of the spectral density $S_F(\omega, T) = (\pi/\Theta) |F(\omega, T)|^2$ from Ref. \cite{reif2009fundamentals}.
Comparing Eq. \ref{energy1} and Eq. \ref{energy2}, and noting that near resonance $|G(\omega)|^2 \approx \frac{\pi \rho(\omega)}{4 \omega_0^2 \delta}$, we obtain:
\begin{equation}
S_F(\omega, T) = \frac{2\delta}{\pi} \frac{\hbar \omega}{e^{\hbar \omega/k_B T} - 1}
\label{forcing}
\end{equation}
Now consider a system of two coupled oscillators with coupling constant $\gamma \omega^2_0$, such that one of the oscillators is in contact with a heat reservoir at temperature $T$ and the other with a reservoir at zero kelvin. The equations of motion can be written as:
\begin{eqnarray}
\omega^2 \hat{x}_1(\omega) &=& \omega_0^2 \hat{x}_1(\omega) - 2i \omega \delta \hat{x}_1(\omega) + \gamma \omega_0^2 \hat{x}_2(\omega)+ F(\omega, T) \nonumber \\
\omega^2 \hat{x}_2(\omega) &=& \omega_0^2 \hat{x}_2(\omega) -2 i \omega \delta \hat{x}_2(\omega) + \gamma \omega_0^2 \hat{x}_1(\omega)
\label{eqofmotion}
\end{eqnarray}
The eigenfrequencies of this coupled system in the absence of the forcing function, and for $\delta/\omega_0 \ll 1$, are given by:
\begin{equation}
\omega^2_\pm = \omega_0^2 (1 \pm \gamma ) - i \omega_0 \delta
\label{eqeigenHO}
\end{equation}
with the corresponding normalized eigenvectors given by: $ \frac{1}{\sqrt{2}}
\left[\begin{matrix} +1 \\ +1 \end{matrix} \right]$ and $ \frac{1}{\sqrt{2}}
\left[\begin{matrix} +1 \\ -1 \end{matrix} \right]$.
The new Green's function can be built from the eigenvectors as follows:
\begin{eqnarray}
\begin{split}
\bar{\bar{G}}(\omega) = \frac{1}{2}
\left[\begin{matrix} +1 \\ +1 \end{matrix} \right] \left[\begin{matrix} +1, +1 \end{matrix} \right] \dfrac{1}{\omega^2 - \omega_0^2 (1+ \gamma)+i \omega_0 \delta} +\\ \frac{1}{2}
\left[\begin{matrix} +1 \\ -1 \end{matrix} \right] \left[\begin{matrix} +1, -1 \end{matrix} \right] \dfrac{1}{\omega^2 - \omega_0^2 (1- \gamma) +i \omega_0 \delta}
\end{split}
\end{eqnarray}
Using this form of the Green's function we can write the response to be of the form:
\begin{eqnarray}
\left[\begin{matrix} \hat{x}_1(\omega) \\ \hat{x}_2 (\omega) \end{matrix} \right] = \dfrac{F(\omega, T)}{2} \left( \left[\begin{matrix} 1 \\ 1 \end{matrix} \right] \dfrac{1}{\omega^2 - \omega_0^2 (1+ \gamma)+i \omega_0 \delta} \right. \\ \left. +\left[\begin{matrix} 1 \\ -1 \end{matrix} \right] \dfrac{1}{\omega^2 - \omega_0^2 (1- \gamma)+i \omega_0 \delta} \right)
\end{eqnarray}
In steady state, the rate of energy transfer to the second oscillator via coupling with the first oscillator is equal to the rate of decay in the second oscillator via damping; this, in turn, equals the heat transferred to the sink. Thus we can write the expression for the heat transfer in the system shown in Fig.~\ref{FigHO} as:
\begin{equation}
P = - \omega_0^2 \frac{d}{dt} \langle x_2^2 (t) \rangle = 2 \delta \omega_0^2 \langle x_2^2 (t) \rangle \label{damping}
\end{equation}
Following a procedure similar to that used for Eq. \ref{energy2}, Eq. \ref{damping} further reduces to:
\begin{equation}
P = \delta \omega_0^2 \int_0^\infty S_F(\omega, T) \left|\frac{1}{(\omega^2 - \omega_0^2(1+\gamma) + i \omega_0 \delta)} - \frac{1}{(\omega^2 - \omega_0^2 (1- \gamma)+i \omega_0 \delta)} \right|^2 \, d\omega
\label{P1}
\end{equation}
Substituting the expression for $S_F(\omega, T)$ from Eq. \ref{forcing} and evaluating the integral, recognising that the integrand is sharply peaked around $\omega \approx \omega_0$, we get:
\begin{equation}
P = \frac{\hbar \omega_0}{e^{\hbar \omega_0/(k_B T) }-1} \dfrac{ \omega_0^2 \gamma^2 \delta}{(\omega_0^2\gamma^2 + \delta^2)}
\label{eqHO}
\end{equation}
This expression matches the form of the steady-state heat transfer for this system derived by other authors using the master equation approach \cite{biehs2013dynamical} and Langevin theory \cite{barton2015classical}.
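The approximations leading from Eq. \eqref{P1} to Eq. \eqref{eqHO} are easy to check numerically; a short Python sketch with purely illustrative parameters (not tied to any material) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbar, kB = 1.054571817e-34, 1.380649e-23
w0, delta, gamma, T = 1.0e14, 1.0e11, 1.0e-3, 300.0   # illustrative

def S_F(w):                                  # Eq. (forcing)
    return (2*delta/np.pi) * hbar*w / np.expm1(hbar*w/(kB*T))

def integrand(w):                            # integrand of Eq. (P1)
    Gp = 1.0/(w**2 - w0**2*(1+gamma) + 1j*w0*delta)
    Gm = 1.0/(w**2 - w0**2*(1-gamma) + 1j*w0*delta)
    return delta * w0**2 * S_F(w) * abs(Gp - Gm)**2

peaks = [w0*np.sqrt(1-gamma), w0, w0*np.sqrt(1+gamma)]
P_num, _ = quad(integrand, 0.5*w0, 1.5*w0, points=peaks, limit=500)
P_closed = (hbar*w0/np.expm1(hbar*w0/(kB*T))
            * w0**2*gamma**2*delta / (w0**2*gamma**2 + delta**2))
print(P_num / P_closed)     # close to 1 when delta, gamma*w0 << w0
\end{verbatim}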
\section{Nanoparticles}
\label{nano}
\begin{figure}[h!]
\includegraphics[scale=0.45]{dipoles.png}
\caption{Coupling between the fields due to the surface modes of spherical nanoparticles of diameter $D$ and center-to-center distance $d$ which is analyzed in Section \ref{nano}. Thick arrows indicate the direction of orientation of the dipole moments relative to the separation. The coupling strength between the fields is dependent on whether the dipoles are oriented a) side-by-side or b) head-to-tail. }
\label{FigNP}
\end{figure}
In this section we derive the parameters of the coupled harmonic oscillator model suitable for describing the coupling of surface plasmons between two nanoparticles, and hence arrive at the expression for the exchanged heat from Eq. \ref{eqHO}.
We only consider the heat transfer mediated by the coupling of surface modes and neglect contributions due to eddy currents \cite{chapuis2008radiative}, multipoles \cite{perez2008heat}, and many-body effects \cite{ben2011many}.
A similar attempt has been made to arrive at a coupled harmonic oscillator model to
analyze the near-field heat transfer between nanoparticles by Biehs and Agarwal \cite{biehs2013dynamical}.
The parameters in this model were arrived at by comparing the steady-state heat transfer prediction
in the coupled harmonic oscillator model with that predicted from Rytov's fluctuational
electrodynamics theory.
Here, we will outline an alternate approach based on an analysis of splitting of eigenmodes
to arrive at the parameters of the model.
The advantage of this approach over that followed by Biehs and Agarwal is that it can be
used in general to analyse coupling between quasi-particles when
the steady-state heat flux is not known beforehand.
To find the equivalent natural frequency $\omega_0$ and coupling constant $\gamma$ for the interaction between the surface modes across the vacuum gap we proceed as follows: we derive the eigenmodes of the configuration of two nanoparticles separated by a vacuum gap from Maxwell's equations, and compare the resulting expression with that for two coupled harmonic oscillators.
Consider first the case of a single nanoparticle with polarizability $\alpha(\omega)$. The resonant frequency is
determined from the poles of the polarizability of the nanoparticle. We will consider the particular case of
spherical metallic nanoparticles with diameter $D$ whose polarizability can be expressed as:
\begin{equation}
\alpha(\omega) = \frac{\pi D^3}{2} \frac{\varepsilon(\omega)-1}{\varepsilon(\omega)+2}
\label{alpha}
\end{equation}
When the permittivity $\varepsilon(\omega)$ of the particle is given by the Drude form:
\begin{equation}
\varepsilon(\omega) = \varepsilon_{\infty} \left(1 - \frac{\omega_L^2}{\omega (\omega + i \Gamma)} \right)
\label{dielectric}
\end{equation}
with $\omega_L$ being the plasma frequency and $\Gamma$ the damping parameter, the pole of Eq. \ref{alpha} gives the resonant frequency, in the limit of $\Gamma/\omega_L \ll 1$, to be of the form $\omega = \omega_0 - i \Gamma/2 $, where $\omega_0 = \omega_L \sqrt{\varepsilon_{\infty}/(\varepsilon_{\infty} +2)}$.
Now consider the two-particle system shown in Fig. \ref{FigNP}, with the particle centers separated by a distance $d$.
%
The presence of a second particle induces a change in the net polarizability of the two-particle system
to the form \cite{jain2006plasmon, jain2007universal}:
\begin{equation}
\alpha'(\omega) = \dfrac{\alpha(\omega)}{1 - \dfrac{\kappa \alpha(\omega)}{4 \pi \varepsilon_0 d^3}}
\label{alphan}
\end{equation}
where $\kappa$ is an alignment factor which depends on the possible polarization modes such that $\kappa=-1$ for dipoles aligned side-by-side (perpendicular polarization) as shown in Fig. \ref{FigNP}a and $\kappa=2$ for dipoles aligned head-to-tail (parallel polarization) as shown in Fig. \ref{FigNP}b.
In deriving Eq. \ref{alphan} it is assumed that the gap $d$ between the nanoparticles is such that the $1/d^3$ component of the electric field dominates over the other terms, which vary as $1/d^2$ and $1/d$.
The eigenmodes for this two-particle system, with the higher energy mode corresponding to the case with $\kappa = -1$ and the lower energy mode corresponding to the case with $\kappa = 2$, can be solved for by
substituting the expression for $\alpha$ in Eq. \ref{alpha} into Eq. \ref{alphan} and solving for the poles of Eq. \ref{alphan}. In the limit of dipole approximation ($d\gg D$) and under small losses ($\Gamma \ll \omega_L$) we get the eigenmodes to be of the form:
\begin{equation}
\omega_{\pm} = \omega_0^{\text{np}}\sqrt{1\pm \gamma^{\text{np}}} - i \frac{\Gamma}{2}
\label{eigenNP}
\end{equation}
where,
\begin{equation}
\omega_0^{\text{np}} = \omega_L \sqrt{\frac{\varepsilon_\infty}{\varepsilon_\infty+2}}
\label{omeganp}
\end{equation}
and
\begin{equation}
\gamma^{\text{np}}(d) = \frac{9}{16} \frac{D^3}{d^3} \frac{1}{\varepsilon_\infty+2}
\label{gammanp}
\end{equation}
For large gaps $d\gg D$, $\omega_0^{\text{np}} \approx \omega_0 $ as expected. Comparing Eq. \ref{eigenNP} with Eq. \ref{eqeigenHO} we can directly relate the parameters of the coupled harmonic oscillator model with that of the coupled nanoparticles as follows:
\begin{equation}
\omega_0 \rightarrow \omega_0^{\text{np}}; \, \, \gamma \rightarrow \gamma^{\text{np}}; \,\,\, \delta \rightarrow \frac{\Gamma}{2}
\label{parametersNP}
\end{equation}
Substituting these parameters from Eq. \ref{parametersNP} into Eq. \ref{eqHO}, we arrive at the expression for the heat transfer between two dipolar nanoparticles as:
\begin{equation}
P^{\text{np}}(d) = \frac{\hbar \omega^{\text{np}}_0}{e^{\hbar \omega^{\text{np}}_0/(k_B T) }-1} \left( \dfrac{ (\omega_0^{\text{np}})^2 \gamma^2 }{ (\omega_0^{\text{np}})^2 \gamma^2 + \Gamma^2/4} \right) \frac{\Gamma}{2}
\label{Pnp1}
\end{equation}
For analytical simplicity, considering the case where $k_B T \gg \hbar \omega_0$ (together with the weak-coupling condition $\omega_0^{\text{np}}\gamma^{\text{np}} \ll \Gamma/2$, which holds for sufficiently large gaps), the expression in Eq. \ref{Pnp1} can be shown to reduce to:
\begin{equation}
P^{\text{np}}(d) =\frac{81}{128} k_B T \left( \frac{D}{d}\right)^6 \frac{\varepsilon_\infty}{(\varepsilon_\infty+2)^3} \frac{\omega_L^2}{\Gamma}
\label{Pnp2}
\end{equation}
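The intermediate step in this reduction (a sketch): with $\hbar \omega_0^{\text{np}}/(e^{\hbar \omega_0^{\text{np}}/(k_B T)}-1) \rightarrow k_B T$ and $(\omega_0^{\text{np}}\gamma)^2 \ll \Gamma^2/4$, Eq. \ref{Pnp1} becomes
\[
P^{\text{np}}(d) \approx k_B T\, \frac{2\,(\omega_0^{\text{np}})^2 \gamma^2}{\Gamma}
= 2 k_B T \, \frac{\varepsilon_\infty \omega_L^2}{\varepsilon_\infty+2}
\left( \frac{9}{16}\frac{D^3}{d^3}\frac{1}{\varepsilon_\infty+2}\right)^{\!2} \frac{1}{\Gamma},
\]
which reproduces the prefactor $2 \times 81/256 = 81/128$ in Eq. \ref{Pnp2}.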
This expression can be compared with the expression for the heat transfer between two dipolar nanoparticles derived using fluctuational electrodynamics principles \cite{volokitin2001radiative, domingues2005heat, chapuis2008radiative, perez2008heat, dedkov2010radiative}. This, for small gaps $d/\lambda \ll 1$, reads \cite{domingues2005heat, chapuis2008radiative}:
\begin{equation}
P^{\text{np}}_{\text{FE}}(d) = \frac{3}{4 \pi^3 d^6} \int_0^\infty \frac{ \hbar \omega}{e^{\hbar \omega/(k_B T) }-1} \big[\text{Im}\,\alpha(\omega)\big]^2 \,\, d\omega
\label{PnpFEa}
\end{equation}
Considering the limit $k_B T \gg \hbar \omega$ and using the expression for $\alpha(\omega)$ from Eq. \ref{alpha} to evaluate the integral in Eq. \ref{PnpFEa} we get:
\begin{equation}
P^{\text{np}}_{\text{FE}}(d) =\frac{27}{128} k_B T \left( \frac{D}{d}\right)^6 \frac{\varepsilon_\infty}{(\varepsilon_\infty+2)^3} \frac{\omega_L^2}{\Gamma}
\label{PnpFE}
\end{equation}
The expression for the heat transfer in Eq. \ref{Pnp2} derived using the coupled harmonic oscillator model agrees with that derived using fluctuational electrodynamics principles in Eq. \ref{PnpFE} except for a numerical factor (the prefactor in Eq. \ref{Pnp2} is exactly $3$ times that in Eq. \ref{PnpFE}). There is currently no consensus in the literature regarding the constant numerical factor in Eq. \ref{PnpFE}: it is 27/128 in Refs. \cite{domingues2005heat, chapuis2008radiative}, 27/(32$\pi$) in Ref. \cite{volokitin2001radiative}, 27/256 in Ref. \cite{perez2008heat} and 27/8 in Ref. \cite{dedkov2010radiative}.
\section{Planar surfaces}
\label{planar}
\begin{figure}[h!]
\includegraphics[scale=0.45]{planar.png}
\caption{Coupling between two surface modes, each located at the interface between a planar surface and vacuum, which is analysed in Section \ref{planar}. The dielectric properties are taken to vary only with the frequency $\omega$, and the coupling constant $\gamma \omega_0^2$ between the surface modes is derived using Maxwell's equations.}
\label{FigPl}
\end{figure}
In this section we extend the procedure outlined in Sec. \ref{nano} for deriving the expression for near-field heat transfer between two nanoparticles to that between two planar surfaces. We follow the derivation shown in Refs. \cite{barton2015classical, sasihithludynamic} to derive the equivalent parameters of the coupled oscillator model, analyze the heat transfer between closely spaced metallic surfaces whose dielectric properties are given by the Drude form shown in Eq. \ref{dielectric}, and contrast the result with the parameters derived for the coupling between nanoparticles in Sec. \ref{nano}.
Consider two half-spaces separated by a gap of width $d$, occupying the regions $z<-d/2$ and $z>d/2$. The in-plane component $k_x$ and the $z$ component $k_z$ of the wavevector of a plane wave of frequency $\omega$ in the vacuum gap of this system are related by $k_x^2+k_z^2 = (\omega/c)^2$, with $c$ the velocity of light. From the dispersion relation for surface polaritons $\omega(k_x)$ \cite{maier2007plasmonics}, close to the surface polariton frequency the surface mode acquires an electrostatic character with $k_x \gg \omega/c$ (the group velocity $d\omega/d k_x \rightarrow 0$), and hence $k_z \approx ik_x$. In this electrostatic limit the symmetric and antisymmetric forms of the potential due to the surface mode can be written, for $|z|<d/2$, as \cite{sernelius2011surface, van1968macroscopic}:
\[
\phi_{\pm}=\Phi \, \text{exp }(i k_x x - i \omega t)\begin{cases}
e^{-k_x z}+e^{k_x z}\\
e^{-k_x z}-e^{k_x z}
\end{cases}
\]
and for $z>d/2$:
\[
\phi_{\pm}=\Phi \, \text{exp }(ik_x x - i \omega t)\begin{cases}
(e^{-k_x d/2}+e^{k_x d/2}) \, \text{exp}[-k_x (z-d/2)]\\
(e^{-k_x d/2}-e^{k_x d/2}) \, \text{exp}[-k_x (z-d/2)]
\end{cases}
\]
where $\Phi$ is an arbitrary constant. Such a form of the potential, which oscillates harmonically in time with frequency $\omega$, ensures continuity of the potential at the interfaces. Satisfying continuity of the perpendicular component of the displacement
field gives the condition for surface modes as:
\begin{eqnarray}
\varepsilon(\omega_\pm)=\begin{cases}
\left(1-e^{k_x d}\right)/\left(1+e^{k_x d}\right)\\
\left(1+e^{k_x d}\right)/ \left(1-e^{k_x d}\right)
\end{cases}
\label{condition}
\end{eqnarray}
where
$\varepsilon(\omega)$ is the dielectric function of the half-space. Using the Drude model for the dielectric function shown in Eq. \ref{dielectric},
and solving for $\omega$ in Eq. \ref{condition} in the low-loss limit $\Gamma \ll \omega_L$ we obtain the eigenfrequencies of the coupled surface modes to be of the form:
\begin{equation}
\omega_\pm =\omega_0^{\text{pl}}(k_x d) \sqrt{ 1 \pm \gamma^{\text{pl}}(k_x d) } - i \frac{\Gamma}{2}
\label{eigensurface}
\end{equation}
where,
\begin{equation}
\omega^{\text{pl}}_0(k_x d) =\sqrt{ \varepsilon_\infty} \omega_L \sqrt{\dfrac{ (\varepsilon_\infty + 1) - (\varepsilon_\infty - 1) e^{-2 k_x d} }{(\varepsilon_\infty + 1)^2 - (\varepsilon_\infty - 1)^2 e^{- 2 k_x d}}}
\label{omegap}
\end{equation}
and
\begin{equation}
\gamma^{\text{pl}}(k_x d) = \dfrac{2 e^{-k_x d} }{ (\varepsilon_\infty + 1) - (\varepsilon_\infty - 1) e^{-2 k_x d} }
\label{xip}
\end{equation}
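Spelling out the intermediate algebra (a sketch, in the lossless limit $\Gamma \rightarrow 0$ and writing $u = e^{-k_x d}$): Eq. \ref{condition} reads $\varepsilon(\omega_\pm) = -(1 \mp u)/(1 \pm u)$, and with the Drude form of Eq. \ref{dielectric} this gives
\[
\omega_\pm^2 = \frac{\varepsilon_\infty \, \omega_L^2 \,(1 \pm u)}{(\varepsilon_\infty+1) \pm (\varepsilon_\infty-1)u}
= \bigl[\omega_0^{\text{pl}}(k_x d)\bigr]^2 \bigl[1 \pm \gamma^{\text{pl}}(k_x d)\bigr],
\]
where the second equality follows on multiplying the numerator and denominator by $(\varepsilon_\infty+1) \mp (\varepsilon_\infty-1)u$; the damping then enters as $-i\Gamma/2$, exactly as in the single-interface case.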
In the limit of large gaps, i.e. $k_x d \rightarrow \infty$, we have $\omega_0^{\text{pl}}(k_x d) \rightarrow \omega_L \sqrt{\varepsilon_\infty/(\varepsilon_\infty+1)}$, which is the surface plasmon frequency for a single half-space, and $\gamma^{\text{pl}}(k_x d) \rightarrow 0$, as expected. From Eqs. \ref{eqeigenHO} and \ref{eigensurface} we have for planar surfaces:
\begin{equation}
\omega_0 \rightarrow \omega_0^{\text{pl}}(k_x d); \, \, \gamma \rightarrow \gamma^{\text{pl}}(k_x d); \,\,\, \delta \rightarrow \frac{\Gamma}{2}
\label{parametersPl}
\end{equation}
Comparing the coupling parameters $\omega^{\text{pl}}_0$ and $\gamma^{\text{pl}}$ derived in Eqs. \ref{omegap} and \ref{xip} for planar surfaces with those derived for nanoparticles, $\omega^{\text{np}}_0$ and $\gamma^{\text{np}}$ in Eqs. \ref{omeganp} and \ref{gammanp}, it is observed that in the former case the coupling parameters are functions of the in-plane wavevector $k_x$. We thus have a spectrum of oscillators (labelled by the modes $k_x$), and since only modes with the same in-plane wavevector interact across the vacuum gap, we can consider each of the oscillators to contribute independently to the heat transfer.
We can thus write the expression for the near-field heat transfer between planar surfaces from Eqs. \ref{eqHO} and \ref{parametersPl} to be of the form:
\begin{equation}
P^{\text{pl}}(d) = \frac{1}{4 \pi^2} \int \frac{\hbar \omega^{\text{pl}}_0 (k_x d)}{e^{\hbar \omega^{\text{pl}}_0(k_x d)/(k_B T) }-1} \left( \dfrac{ [\omega_0^{\text{pl}}(k_x d)]^2 [\gamma^{\text{pl}}(k_x d)]^2 }{ [\omega_0^{\text{pl}}(k_x d)]^2 [\gamma^{\text{pl}}(k_x d)]^2 + \Gamma^2/4} \right) \frac{\Gamma}{2} \,\, d^2 k_x
\label{Ppl1}
\end{equation}
For analytical simplicity, considering the case when $k_B T \gg \hbar \omega^{\text{pl}}_0$ and substituting the expressions for the coupling parameters from Eqs. \ref{omegap} and \ref{xip}, Eq. \ref{Ppl1} reduces to:
\begin{equation}
P^{\text{pl}}(d) = \frac{k_B T}{2 \pi d^2} \int_0^\infty \dfrac{8 \varepsilon_\infty \omega_L^2 }{e^{- 2 x} \Gamma( \varepsilon_\infty - 1)^3 + \Gamma e^{ 2 x} ( \varepsilon_\infty + 1)^3 - 2 \Gamma \varepsilon_\infty ( \varepsilon_\infty^2 - 1) + 16 \varepsilon_\infty \omega_L^2 \Gamma^{-1} } \,\, x \, d x
\label{Ppl2}
\end{equation}
This expression can be compared with the expression for the near-field heat transfer between two planar surfaces derived from fluctuational electrodynamics principles \cite{loomis94} which, in the limit $k_x d \rightarrow 0$ and $k_B T \gg \hbar \omega$, reads:
\begin{equation}
P^{\text{pl}}_{\text{FE}}(d)= \frac{k_B T}{\pi^2 d^2} \int_0^\infty \, d\omega \, \int_0^\infty \frac{ \left[\text{Im} \varepsilon(\omega)\right]^2 x e^{-x} \, dx}{|(\varepsilon(\omega)+1)^2 - (\varepsilon(\omega) -1)^2 e^{-x}|^2}
\label{QFD}
\end{equation}
Substituting the expression for the dielectric function $\varepsilon(\omega)$ from Eq. \ref{dielectric} and expanding, Eq. \ref{QFD} reduces to the form:
\begin{equation}
P^{\text{pl}}_{\text{FE}}(d)= \frac{k_B T}{\pi^2 d^2} \int_0^\infty x e^{-x} \, dx \int_0^\infty \frac{M(\omega)}{N(\omega)} \, d\omega
\label{cauchy}
\end{equation}
where $M(\omega)$ and $N(\omega)$ are complicated polynomial even functions of $\omega$. By carrying out the integral over $\omega$ in the complex plane using Cauchy's residue theorem, the expression in Eq. \ref{cauchy} reduces to that in Eq. \ref{Ppl2}.
To conclude, we have arrived at an expression for the heat transfer between two heat baths maintained at constant temperatures, mediated by a coupled harmonic oscillator system, using Green's function theory, and shown its equivalence to the expressions derived in the literature using Master equation and Langevin dynamics approaches. We use this expression to find the surface plasmon mediated heat transfer between two closely spaced nanoparticles, and also between two closely spaced planar surfaces whose dielectric properties are of the Drude form. In order to establish an equivalence between the coupled harmonic oscillator system and the coupled nanoparticle/planar surface configurations, we have compared the splitting in the eigenmodes of the two systems to arrive at the equivalent coupling parameters. The expression for the heat transfer thus obtained for both these configurations is shown to be consistent with that obtained using the established theory of fluctuational electrodynamics.
\label{section6}
\section*{Acknowledgements}
We acknowledge useful discussions with Prof. Girish Sarin Agarwal and with Dr. Svend-Age Biehs.
\section*{References}
\bibliographystyle{ieeetr}
|
2,877,628,088,467 | arxiv | \section*{Methods}
\subsubsection{Measurements}
We use a pulsed mid-IR quantum cascade laser (QCL; LaserScope from Block Engineering) that is linearly polarized and has a wavelength tuning range from $\lambda$ = 6.1 to 10 $\mu$m. We scan the device position with a motorized $xyz$-stage. We modulate the mid-IR laser with an optical chopper at 422 Hz and measure the photocurrent using a lock-in amplifier (Stanford Research). We focus the mid-IR light with a reflective objective with a numerical aperture (NA) of 0.5. We measure the mid-IR power using a thermopile detector from Thorlabs placed at the sample position.
\subsubsection{Responsivity and NEP calculation}
The external responsivity is given by: Responsivity = (I$_{\rm PTE}/\rm P_{\rm in}$)$\times$(A$_{\rm focus}$/A$_{\rm diff}$)\cite{tredicucci12, tredicucci14, Castilla2019}, where P$_{\rm in}$ is the power measured by the commercial power meter, A$_{\rm focus}$ is the experimental beam area at the measured wavelength and A$_{\rm diff}$ is the diffraction-limited spot size. We obtain the photocurrent I$_{\rm PTE}$ from the output signal of the lock-in amplifier $V_{\rm LIA}$ via I$_{\rm PTE} = \frac{2 \pi \sqrt{2}}{4G} V_{\rm LIA}$\cite{tredicucci12, tredicucci14}, where $G$ is the gain factor in V/A (given by the lock-in amplifier). We use the ratio A$_{\rm diff}/$A$_{\rm focus}$ for estimating the power reaching our photodetector, since $A_{\rm diff}$ is the most reasonable value one can attain when considering the detector together with an optimized focusing system (e.g. using a hemispherical lens), and it is widely used in the literature for comparing performances among photodetectors \cite{Castilla2019, tredicucci12, tredicucci14}. We typically have a ratio of A$_{\rm focus}$/A$_{\rm diff}$ $\approx$ 7. This ratio is given by A$_{\rm diff}/$A$_{\rm focus} = \frac{w^2_{\rm 0,diff}}{w_{\rm 0,x} w_{\rm 0,y}}$. In order to obtain $w_{\rm 0,x}$ and $w_{\rm 0,y}$, we use our experimental observation that the photocurrent is linear in laser power and measure the photocurrent while scanning the device in the $x$- and $y$-directions. Consequently, the photocurrent is described by Gaussian distributions $\propto e^{-2x^2/w^2_{\rm 0,x}}$ and $\propto e^{-2y^2/w^2_{\rm 0,y}}$, where $w_{\rm 0,x}$ and $w_{\rm 0,y}$ are the respectively obtained spot sizes (related to the standard deviation via $\sigma = w_0/2$ and to the full width at half maximum via FWHM = $\sqrt{2 \ln(2)}\,w_0$). We typically obtain $w_{\rm 0,x}$ = 5.05 $\mu$m and $w_{\rm 0,y}$ = 5.40 $\mu$m at $\lambda$ = 6.6 $\mu$m (see Fig. S14b). For the diffraction-limited spot, we consider $w_{\rm 0,diff} = \frac{\lambda}{\pi}$, with $\lambda$ the mid-IR laser wavelength. The diffraction-limited area is hence taken as A$_{\rm diff} = \pi w_{\rm 0, diff}^2 = \lambda^2/\pi$. Additionally, the noise-equivalent power (NEP), which characterizes the sensitivity of the photodetector, is defined as NEP $ = I_{\rm noise}/$Responsivity. Considering that our unbiased photodetector has a very low noise current that is limited by Johnson noise, we use a noise spectral density $I_{\rm noise} = \sqrt{\frac{4k_BT}{R_D}}$, where $k_B$ corresponds to the Boltzmann constant, $T$ is the operation temperature (300 K) and $R_D$ the device resistance.
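For concreteness, a minimal numerical sketch of this responsivity/NEP bookkeeping is given below (this script is not part of the original methods; the values of $V_{\rm LIA}$, $G$, $P_{\rm in}$ and $R_D$ in the example are hypothetical placeholders, while the spot sizes and wavelength are those quoted above):
\begin{verbatim}
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def photocurrent(v_lia, gain):
    # I_PTE = (2*pi*sqrt(2)/(4*G)) * V_LIA, with the gain G in V/A
    return (2 * math.pi * math.sqrt(2) / (4 * gain)) * v_lia

def responsivity(i_pte, p_in, w0x, w0y, wavelength):
    # External responsivity (A/W), rescaled by A_focus/A_diff
    w0_diff = wavelength / math.pi           # diffraction-limited beam radius
    area_ratio = (w0x * w0y) / w0_diff**2    # A_focus / A_diff (about 7 here)
    return (i_pte / p_in) * area_ratio

def nep(resp, r_device, temperature=300.0):
    # Johnson-noise-limited NEP (W/sqrt(Hz)): I_noise / responsivity
    i_noise = math.sqrt(4 * K_B * temperature / r_device)
    return i_noise / resp

i_pte = photocurrent(v_lia=1e-6, gain=1e6)   # placeholder lock-in reading
resp = responsivity(i_pte, p_in=1e-6,        # placeholder incident power
                    w0x=5.05e-6, w0y=5.40e-6, wavelength=6.6e-6)
print(resp, nep(resp, r_device=1e3))         # placeholder device resistance
\end{verbatim}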
\section*{Acknowledgments}
\small
The authors thank David Alcaraz-Iranzo, Jianbo Yin, Iacopo Torre, Hitesh Agarwal, Bernat Terrés and Ilya Goykmann for fruitful discussions. F.H.L.K. acknowledges financial support from the Spanish Ministry of Economy and Competitiveness, through the “Severo Ochoa” Programme for Centres of Excellence in R$\&$D (SEV-2015-0522), support by Fundacio Cellex Barcelona, Generalitat de Catalunya through the CERCA program, and the Agency for Management of University and Research Grants (AGAUR) 2017 SGR 1656. Furthermore, the research leading to these results has received funding from the European Union Seventh Framework Programme under grant agreement no.785219 and no. 881603 Graphene Flagship for Core2 and Core3. ICN2 is supported by the Severo Ochoa program from Spanish MINECO (Grant No. SEV-2017-0706). K.J.T. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 804349. S.C. acknowledges financial support from the Barcelona Institute of Science and Technology (BIST), the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya and the European Social Fund (L'FSE inverteix en el teu futur) – FEDER. T. S. and L.M.M. acknowledge support by Spain’s MINECO under Grant No. MAT2017-88358-C3-1-R and the Aragon Government through project Q-MAD.
\section*{Author contributions}
\small
F.H.L.K., R.H., M.A. and S.C. conceived the project. S.C. fabricated the device and performed the experiments. V.P. assisted in device fabrication and experiments. M.A. supported the device fabrication. I.V. and E.L. performed the simulations and developed the multiphysics model. S.C., T.S. and L.M.-M. assisted in the modelling. S.C., K.R., J.G. and M.A. performed preliminary optical simulations. S.C., I.V., E.L and F.H.L.K. wrote the manuscript. J.G., S.K. and K.-J.T. assisted with measurements and discussion of the results. K.W. and T.T. synthesized the hBN crystals. D.E., R.H., K.-J.T., E.L. and F.H.L.K. supervised the work and discussed the results. All authors contributed to the scientific discussion and manuscript revisions. S.C. and I.V. contributed equally to the work.
|
2,877,628,088,468 | arxiv |
\documentclass{amsart}
\usepackage{geometry}
\geometry{
paperwidth=6in,
paperheight=9in,
width=27pc,
height=45pc,
headsep=12pt,
foot=24pt,
inner=0.9375in,
top=0.625in,
includehead
}
\usepackage{amsmath, amsfonts, amssymb}
\usepackage{xspace}
\usepackage{graphicx,color}
\usepackage[curve]{xypic}
\usepackage{pifont}
\usepackage[parfill]{parskip}
\newtheorem{question}{Question}
\newtheorem{theo}{Theorem}[section]
\newtheorem{defi}{Definition}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{coro}{Corollary}[section]
\newtheorem{conj}{Conjecture}[section]
\newtheorem{lemme}[prop]{Lemma}
\newtheorem{objective}{Objective}
\newtheorem{task}{Task}
\newtheorem{rema}{Remark}
\newtheorem{exa}{Example}
\def={=}
\def{P_f}{{P_f}}
\def{P_g}{{P_g}}
\def{\mathbb C}{{\mathbb C}}
\def{\mathbb D}{{\mathbb D}}
\def{\mathbb H}{{\mathbb H}}
\def{\mathbb M}{{\mathbb M}}
\def{\mathbb N}{{\mathbb N}}
\def{\mathbb 0}{{\mathbb 0}}
\def{{\mathbb P}}{{{\mathbb P}}}
\def{\mathbb Q}{{\mathbb Q}}
\def{\mathbb R}{{\mathbb R}}
\def{\mathbb T}{{\mathbb T}}
\def{\mathbb Z}{{\mathbb Z}}
\def{\Sigma}{{\Sigma}}
\def \varepsilon{\varepsilon}
\def \varphi{\varphi}
\def\mathcal{\mathcal}
\def\mathrm{Teich}(\P^1,A){\mathrm{Teich}({{\mathbb P}}^1,A)}
\def\mathrm{Teich}(\P^1,{\Pf}){\mathrm{Teich}({{\mathbb P}}^1,{{P_f}})}
\def\mathrm{Mod}(\P^1,A){\mathrm{Mod}({{\mathbb P}}^1,A)}
\def\mathrm{Mod}(\P^1,{\Pf}){\mathrm{Mod}({{\mathbb P}}^1,{{P_f}})}
\def\mathrm{Teich}{\mathrm{Teich}}
\def\sigma_f{\sigma_f}
\def\circledast{\circledast}
\def{s}{{s}}
\def{\mu}{{\mu}}
\def{\nu}{{\nu}}
\def{\rm id}{{\rm id}}
\def{\hfill{$\square$}}{{\hfill{$\square$}}}
\newcommand{\kmp}[1]{\textcolor{blue}{\sf #1}}
\def\mathrm{PMCG}(\P^1, {\Pf}){\mathrm{PMCG}({{\mathbb P}}^1, {{P_f}})}
\newcommand{\vskip 0.2in}{\vskip 0.2in}
\def{\underset{f}{\leftarrow}}{{\underset{f}{\leftarrow}}}
\newcommand{\partial}{\partial}
\newcommand{\xb}[1]{\textcolor{green}{\sf #1}}
\begin{document}
\title{On Thurston's pullback map}
\author{Xavier Buff, Adam Epstein, Sarah Koch,
and Kevin Pilgrim}
\today
\maketitle
\begin{abstract} Let $f: {{\mathbb P}}^1 \to {{\mathbb P}}^1$ be a rational map with finite
postcritical set ${P_f}$. Thurston showed that $f$ induces a
holomorphic map $\sigma_f: \mathrm{Teich}(\P^1,{\Pf}) \to \mathrm{Teich}(\P^1,{\Pf})$ of the Teichm\"uller
space to itself. The map $\sigma_f$ fixes the basepoint
corresponding to the identity map $\mathrm{id}: ({{\mathbb P}}^1, {P_f}) \to
({{\mathbb P}}^1, {P_f})$. We give examples of such maps $f$ showing that
the following cases may occur:
\begin{enumerate}
\item the basepoint is an attracting fixed point, the image of $\sigma_f$ is open and dense
in $\mathrm{Teich}(\P^1,{\Pf})$ and the pullback map $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \sigma_f\bigl(\mathrm{Teich}(\P^1,{\Pf})\bigr)$
is a covering map,
\item the basepoint is a superattracting fixed point, the image of $\sigma_f$ is $\mathrm{Teich}(\P^1,{\Pf})$
and $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is a ramified Galois
covering map, or
\item the map $\sigma_f$ is constant.
\end{enumerate}
\end{abstract}
\section{Introduction}\label{intro}
In this article, ${\Sigma}$ is an oriented $2$-sphere. All maps ${\Sigma}\to {\Sigma}$
are assumed to be orientation-preserving. The map $f:{\Sigma}\to {\Sigma}$ is a
branched covering of degree $d\geq 2$. A particular case of interest
is when ${\Sigma}$ can be equipped with an invariant complex structure for
$f$. In that case, $f:\Sigma\to \Sigma$ is conjugate to a rational
map $F:{{\mathbb P}}^1\to {{\mathbb P}}^1$.
According to the Riemann-Hurwitz formula, the map $f$ has $2d-2$
critical points, counting multiplicities. We denote by $\Omega_f$ the
set of critical points and by $V_f= f(\Omega_f)$ the set of
critical values of $f$. The {\em postcritical set} of $f$ is the set
\[ {P_f} = \bigcup_{n>0}f^{\circ n}(\Omega_f).\]
The map $f$ is {\em postcritically finite} if ${P_f}$ is finite.
Following the literature, we refer to such maps simply as {\em
Thurston maps}.
Two Thurston maps $f:{\Sigma}\to {\Sigma}$ and $g:{\Sigma}\to {\Sigma}$ are {\em
equivalent} if there are homeomorphisms $h_0: ({\Sigma},{P_f}) \to
({\Sigma},{{P_g}})$ and $h_1: ({\Sigma},{P_f}) \to ({\Sigma},{{P_g}})$ for which $h_0\circ
f=g\circ h_1$ and $h_0$ is isotopic to $h_1$ through homeomorphisms
agreeing on ${P_f}$. In particular, we have the following commutative
diagram:
\[\xymatrix{
&({\Sigma},{P_f})\ar[d]_{f}\ar[r]^{h_1} &
({\Sigma},{P_g}) \ar[d]^{g} \\
&({\Sigma},{P_f})\ar[r]^{h_0} & ({\Sigma},{P_g}).}
\]
In \cite{dh}, Douady and Hubbard, following Thurston, give a
complete characterization of equivalence classes of rational maps
among those of Thurston maps. The characterization takes the
following form.
A branched covering $f: ({\Sigma}, {P_f}) \to ({\Sigma}, {P_f})$ induces a
holomorphic self-map
\[\sigma_f: {\rm Teich}({\Sigma},{P_f}) \to {\rm
Teich}({\Sigma},{P_f})\] of Teichm\"uller space (see Section
\ref{prelimsect} for the definition). Since it is obtained by
lifting complex structures under $f$, we will refer to $\sigma_f$ as
the {\em pullback map} induced by $f$. The map $f$ is equivalent to
a rational map if and only if the pullback map $\sigma_f$ has a
fixed point. By a generalization of the Schwarz lemma, $\sigma_f$
does not increase Teichm\"uller distances. For most maps
$f$, the pullback map $\sigma_f$ is a contraction, and so a fixed
point, if it exists, is unique.
In this note, we give examples showing that the contracting behavior
of $\sigma_f$ near this fixed point can be rather varied.
\begin{theo}\label{alt_thm1}
There exist Thurston maps $f$ for which $\sigma_f$ is contracting,
has a fixed point $\tau$ and:
\begin{enumerate}
\item the derivative of $\sigma_f$ is invertible at $\tau$, the image of $\sigma_f$ is open and dense
in $\mathrm{Teich}(\P^1,{\Pf})$ and $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \sigma_f\bigl(\mathrm{Teich}(\P^1,{\Pf})\bigr)$
is a covering map,
\item the derivative of $\sigma_f$ is not invertible at $\tau$, the image of $\sigma_f$ is equal to $\mathrm{Teich}(\P^1,{\Pf})$
and $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is a ramified Galois covering map,\footnote{A ramified covering is Galois if the group of deck transformations acts transitively on the fibers. }
or
\item the map $\sigma_f$ is constant.
\end{enumerate}
\end{theo}
In Section \ref{prelimsect}, we establish notation, define
Teichm\"uller space and the pullback map $\sigma_f$ precisely, and
develop some preliminary results used in our subsequent analysis. In
Sections \ref{pf1}, \ref{pf2}, and \ref{Xexamples}, respectively, we
give concrete examples which together provide the proof of Theorem
\ref{alt_thm1}. We supplement these examples with some partial
general results. In Section \ref{pf1}, we state a fairly general
sufficient condition on $f$ under which $\sigma_f$ evenly covers its
image. This condition, which can sometimes be checked in practice,
is excerpted from \cite{k} and \cite{k2}. Our example
illustrating (2) is highly symmetric and atypical; we are not aware
of any reasonable generalization. In Section \ref{sigmaconst}, we
state three conditions on $f$ equivalent to the condition that
$\sigma_f$ is constant. Unfortunately, each is extremely
difficult to verify in concrete examples.
{\bf{ Acknowledgements.}} We would like to thank Curt
McMullen who provided the example showing (3).
\section{Preliminaries}\label{prelimsect}
Recall that a Riemann surface is a connected oriented topological
surface together with a {\em complex structure}: a
maximal atlas of charts $\phi:U\to{\mathbb C}$ with holomorphic overlap maps.
For a given oriented, compact topological surface $X$, we denote the
set of all complex structures on $X$ by ${\mathcal{C}}(X)$. It is easily
verified that an orientation-preserving branched covering map $f:X\to
Y$ induces a map $f^*: {\mathcal{C}}(Y)\to {\mathcal{C}}(X)$; in particular,
for any orientation-preserving homeomorphism $\psi:X\to X$, there is an induced
map $\psi^*: {\mathcal{C}}(X)\to {\mathcal{C}}(X)$.
Let $A\subset X$ be finite. The Teichm\"uller space of $(X,A)$ is
\[{\rm Teich}(X,A)= {\mathcal{C}}(X)/{\sim_A}\]
where $c_1\sim_A c_2$ if and only if $c_1=\psi^*(c_2)$ for some
orientation-preserving homeomorphism $\psi:X\to X$ which is isotopic
to the identity relative to $A$. In view of the homotopy-lifting property, if
\begin{itemize}
\item $B\subset Y$ is finite and contains the critical value set
$V_f$ of $f$, and
\item $A\subseteq f^{-1}(B)$,
\end{itemize}
then $f^*:{\mathcal{C}}(Y)\to{\mathcal{C}}(X)$ descends to a well-defined
map $\sigma_f$ between the corresponding Teichm\"uller spaces:
\[
\xymatrix{
& {\mathcal{C}}(Y)\ar[d]\ar[rr]^{f^*} & & {\mathcal{C}}(X) \ar[d] \\
& \text{Teich}(Y,B)\ar[rr]^{\sigma_f} & & \text{Teich}(X,A) .}\]
This map is known as the {\em pullback map} induced by $f$.
In addition if $f:X\to Y$ and $g:Y\to Z$ are orientation-preserving
branched covering maps and if $A\subset X$, $B\subset Y$ and $C\subset
Z$ are such that
\begin{itemize}
\item $B$ contains $V_f$ and $C$ contains $V_g$,
\item $A\subseteq f^{-1}(B)$ and $B\subseteq g^{-1}(C)$,
\end{itemize}
then $C$ contains the critical values of $g\circ f$ and $A\subseteq
(g\circ f)^{-1}(C)$. Thus
\[\sigma_{g\circ f}:\text{Teich}(Z,C)\to
\text{Teich}(X,A)\] can be decomposed as $\sigma_{g\circ f} =
\sigma_f\circ \sigma_g$:
\[\xymatrix{
& \text{Teich}(Z,C)\ar[rr]^{\sigma_g} \ar@/_2pc/[rrrr]_{\sigma_{g\circ
f}} & & \text{Teich}(Y,B)\ar[rr]^{\sigma_f} & & \text{Teich}(X,A) .}
\]
For the special case of
${\text{Teich}}({{\mathbb P}}^1,A)$, we may use the Uniformization Theorem to
obtain the following description. Given a finite set $A\subset {{\mathbb P}}^1$ we may
regard ${\text{Teich}}({{\mathbb P}}^1,A)$ as the quotient of the
space of all orientation-preserving homeomorphisms $\phi :
{{\mathbb P}}^1\rightarrow {{\mathbb P}}^1$ by the equivalence relation $\sim$ whereby
$\phi_1\sim\phi_2$ if there exists a M\"obius transformation $\mu$
such that $\mu\circ\phi_1=\phi_2$ on $A$, and $\mu\circ\phi_1$ is
isotopic to $\phi_2$ relative to $A$. Note that there is a natural
basepoint $\circledast$ given by the class of the identity map on ${{\mathbb P}}^1$.
It is
well-known that ${\text{Teich}}({{\mathbb P}}^1,A)$ has a natural topology and
complex manifold structure (see \cite{h1}).
The {\em moduli space} is the space of all injections $\psi:
A\hookrightarrow {{\mathbb P}}^1$ modulo postcomposition with M\"obius
transformations. The moduli space will be denoted as $\mathrm{Mod}(\P^1,A)$.
If $\phi$ represents an element of ${\text{Teich}}({{\mathbb P}}^1,A)$, the restriction
$[\phi]\mapsto \phi |_A$ induces a universal covering $\pi:
{\text{Teich}}({{\mathbb P}}^1,A)\to{\text{Mod}}({{\mathbb P}}^1,A)$ which is a local
biholomorphism with respect to the complex structures on
${\text{Teich}}({{\mathbb P}}^1,A)$ and ${\text{Mod}}({{\mathbb P}}^1,A)$.
Let $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be a Thurston map with $|{P_f}|\geq 3$. For any
$\Theta\subseteq {P_f}$ with $|\Theta|=3$, there is an obvious identification of
$\mathrm{Mod}(\P^1,{\Pf})$ with an open subset of $({{\mathbb P}}^1)^{{P_f}-\Theta}$. Assume
$\tau\in \mathrm{Teich}(\P^1,{\Pf})$ and let $\phi:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be a homeomorphism
representing $\tau$ with $\phi|_{\Theta}={\rm id}|_{\Theta}$. By the
Uniformization Theorem, there exist
\begin{itemize}
\item a unique
homeomorphism $\psi:{{\mathbb P}}^1\to {{\mathbb P}}^1$ representing $\tau'=
\sigma_f(\tau)$ with $\psi|_{\Theta}={\rm id}|_{\Theta}$ and
\item a unique rational map $F:{{\mathbb P}}^1\to {{\mathbb P}}^1$,
\end{itemize}
such that the following diagram commutes:
\[\xymatrix{
&({{\mathbb P}}^1,{P_f})\ar[d]_{f}\ar[rr]^{\psi} &&
\bigl({{\mathbb P}}^1,\psi({P_f})\bigr) \ar[d]^{F} \\
&({{\mathbb P}}^1,{P_f})\ar[rr]^{\phi} && \bigl({{\mathbb P}}^1,\phi({P_f})\bigr). }
\]
Conversely, if we have such a commutative diagram with $F$
holomorphic, then
\[\sigma_f(\tau)=\tau'\]
where $\tau\in \mathrm{Teich}(\P^1,{\Pf})$ and $\tau'\in \mathrm{Teich}(\P^1,{\Pf})$ are the
equivalence classes of $\phi$ and $\psi$ respectively. In particular, if
$f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ is rational, then $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$
fixes the basepoint $\circledast$.
\section{Proof of (1)}\label{pf1}
In this section, we prove that there are Thurston maps $f:{\Sigma}\to {\Sigma}$
such that $\sigma_f$
\begin{itemize}
\item is contracting,
\item has a fixed point $\tau\in \mathrm{Teich}({\Sigma},{P_f})$ and
\item is a covering map over its image which is open and dense in $\mathrm{Teich}({\Sigma},{P_f})$.
\end{itemize}
In fact, we show that this is the case when ${\Sigma}={{\mathbb P}}^1$ and $f:{{\mathbb P}}^1\to
{{\mathbb P}}^1$ is a polynomial whose critical points are all periodic. The
following is adapted from \cite{k2}.
\begin{prop}\label{prop_periodicpoly}
If $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ is a polynomial of degree $d\geq 2$ whose
critical points are all periodic, then
\begin{itemize}
\item $\sigma_f\bigl(\mathrm{Teich}({{\mathbb P}}^1,{P_f})\bigr)$ is open and dense in
$\mathrm{Teich}({{\mathbb P}}^1,{P_f})$ and
\item $\sigma_f:\mathrm{Teich}({{\mathbb P}}^1,{P_f})\to \sigma_f\bigl(\mathrm{Teich}({{\mathbb P}}^1,{P_f})\bigr)$
is a covering map.
\end{itemize}
In particular, the derivative $D\sigma_f$ is invertible at the fixed
point $\circledast$.
\end{prop}
This section is devoted to the proof of this proposition.
Let $n=|{P_f}|-3$. We will identify $\mathrm{Mod}(\P^1,{\Pf})$ with an open subset of ${{\mathbb P}}^n$ as follows.
First enumerate the finite postcritical points as $p_0,\ldots,p_{n+1}$.
Any point of $\mathrm{Mod}(\P^1,{\Pf})$ has a
representative $\psi:{P_f} \hookrightarrow {{\mathbb P}}^1$ such that
\[\psi(\infty) = \infty \quad\text{and}\quad \psi(p_0) = 0.\]
Two such representatives are equal up to multiplication by a nonzero
complex number. We identify the point in $\mathrm{Mod}(\P^1,{\Pf})$ with the point
\[[x_1:\ldots :x_{n+1}]\in {{\mathbb P}}^n \quad\text{where}\quad
x_1= \psi(p_1)\in {\mathbb C},\ldots, x_{n+1}= \psi(p_{n+1})\in
{\mathbb C}.\] In this way, the moduli space $\mathrm{Mod}(\P^1,{\Pf})$ is identified with
${{\mathbb P}}^{n}-\Delta$, where $\Delta$ is the {\em forbidden locus}:
\[\Delta= \bigl\{[x_1:\ldots:x_{n+1}]\in {{\mathbb P}}^n~;~(\exists
i,~x_i=0)\text{ or }(\exists i\neq j,~x_i=x_j)\bigr\}.\] The
universal cover $\pi:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Mod}(\P^1,{\Pf})$ is then identified with a
universal cover $\pi:\mathrm{Teich}(\P^1,{\Pf}) \to {{\mathbb P}}^n-\Delta$.
Generalizing a result of Bartholdi and Nekrashevych \cite{bn}, the thesis \cite{k} showed that when $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ is a
unicritical polynomial there is an analytic endomorphism
$g_f:{{\mathbb P}}^n\to {{\mathbb P}}^n$ for which the following diagram commutes:
\[
\xymatrix{ & \mathrm{Teich}(\P^1,{\Pf})\ar[d]_{\pi}\ar[rr]^{\sigma_f} & & \mathrm{Teich}(\P^1,{\Pf}) \ar[d]^{\pi} \\
& {{\mathbb P}}^n & & {{\mathbb P}}^n.\ar @{->}[ll]_{g_f} }
\]
We show that the same result holds when $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ is a
polynomial whose critical points are all periodic.
\begin{prop}\label{prop_periodicpoly2}
Let $f:{{\mathbb P}}^1\rightarrow {{\mathbb P}}^1$ be a polynomial of degree $d\geq 2$
whose critical points are all periodic. Set $n= |P_f|-3$. Then,
\begin{enumerate}
\item there is an analytic
endomorphism $g_f:{{\mathbb P}}^n\to {{\mathbb P}}^n$ for which the following diagram
commutes:
\[
\xymatrix{ & \mathrm{Teich}(\P^1,{\Pf})\ar[d]_{\pi}\ar[rr]^{\sigma_f} & & \mathrm{Teich}(\P^1,{\Pf}) \ar[d]^{\pi} \\
& {{\mathbb P}}^n & & {{\mathbb P}}^n\ar @{->}[ll]_{g_f} }
\]
\item $\sigma_f$
takes its values in $\mathrm{Teich}(\P^1,{\Pf})-\pi^{-1}({\mathcal L})$ with ${\mathcal
L}= g_f^{-1}(\Delta)$,
\item $g_f(\Delta)\subseteq \Delta$ and
\item the critical point locus and the critical value locus of $g_f$ are contained in $\Delta$.
\end{enumerate}
\end{prop}
\medskip
\noindent{\em Proof of Proposition \ref{prop_periodicpoly} assuming
Proposition \ref{prop_periodicpoly2}:} Note that ${\mathcal L}$ is a codimension 1 analytic subset of ${{\mathbb P}}^n$, whence $\pi^{-1}({\mathcal L})$ is a codimension 1
analytic subset of $\mathrm{Teich}(\P^1,{\Pf})$. Thus, the complementary open sets are dense and connected.
Since $g_f:{{\mathbb P}}^n-{\mathcal L}\rightarrow {{\mathbb P}}^n-\Delta$ is a covering map, the composition
\[g_f\circ \pi :\mathrm{Teich}(\P^1,{\Pf})-\pi^{-1}({\mathcal L}) \to {{\mathbb P}}^n-\Delta\]
is a covering map. Moreover,
\[\pi(\circledast) = g_f\circ \pi \circ \sigma_f(\circledast) = g_f\circ
\pi(\circledast).\] By universality of the covering map
$\pi:\mathrm{Teich}(\P^1,{\Pf})\to {{\mathbb P}}^n-\Delta$, there is a unique map
$\sigma:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf}) - \pi^{-1}({\mathcal L})$ such that
\begin{itemize}
\item $\sigma(\circledast) = \circledast$ and
\item the following diagram commutes:
\[\xymatrix{ & \mathrm{Teich}(\P^1,{\Pf})\ar[d]_{\pi}\ar[rrr]^{\sigma} & & &\mathrm{Teich}(\P^1,{\Pf})-\pi^{-1}({\mathcal L}) \ar[dlll]^{g_f\circ \pi} \\
& {{\mathbb P}}^n -\Delta. }
\]
\end{itemize}
Furthermore, $\sigma:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf}) - \pi^{-1}({\mathcal L})$ is a
covering map. Finally, by uniqueness we have $\sigma_f= \sigma$. {\hfill{$\square$}}
\medskip
\medskip
\noindent{\em Proof of Proposition \ref{prop_periodicpoly2}:}\\ (1)
We first show the existence of the endomorphism $g_f:{{\mathbb P}}^n\to {{\mathbb P}}^n$. We start with the definition of $g_f$.
The restriction of $f$ to ${P_f}$ is a permutation which fixes
$\infty$. Denote by ${\mu}:[0,n+1]\to [0,n+1]$ the permutation
defined by:
\[p_{{\mu}(k)} = f(p_k)\]
and denote by ${\nu}$ the inverse of ${\mu}$.
For $k\in [0,n+1]$, let
$m_k$ be the multiplicity of $p_k$ as a critical point of $f$ (if
$p_k$ is not a critical point of $f$, then $m_k= 0$).
Set $a_0= 0$ and let $Q\in {\mathbb C}[a_1,\ldots,a_{n+1},z]$ be the homogeneous polynomial of
degree $d$ defined by
\[Q(a_1,\ldots,a_{n+1},z)= \int_{a_{{\nu}(0)}}^z \left( d\prod_{k=0}^{n+1} (w-a_k)^{m_k} \right){\rm
d}w.\] Given ${\bf a}\in
{\mathbb C}^{n+1}$, let $F_{\bf a}\in {\mathbb C}[z]$ be the monic polynomial defined
by
\[F_{\bf a}(z) = Q(a_1,\ldots, a_{n+1},z).\]
Note that $F_{\bf a}$ is the unique monic polynomial of degree $d$ which vanishes at $a_{{\nu}(0)}$ and whose critical points are exactly those points $a_k$ for which $m_k>0$, counted with multiplicity $m_k$.
Let $G_f:{\mathbb C}^{n+1}\to {\mathbb C}^{n+1}$ be the
homogeneous map of degree $d$ defined by
\[G_f\left(\begin{array}{c}a_1\\\vdots\\a_{n+1}\end{array}\right) = \left(\begin{array}{c}
F_{\bf a}(a_{{\nu}(1)})\\\vdots\\
F_{\bf
a}(a_{{\nu}(n+1)})\end{array}\right)=\left(\begin{array}{c}
Q(a_1,\ldots,a_{n+1},a_{{\nu}(1)})\\\vdots\\
Q(a_1,\ldots,a_{n+1},a_{{\nu}(n+1)})\end{array}\right).\]
We claim that $G_f^{-1}\bigl({\bf 0}\bigr) = \{{\bf
0}\}$ and thus, $G_f:{\mathbb C}^{n+1}\to {\mathbb C}^{n+1}$ induces an endomorphism $g_f:{{\mathbb P}}^n\to {{\mathbb P}}^n$.
Indeed, let us consider a point ${\bf a}\in
{\mathbb C}^{n+1}$. By definition of $G_f$, if $G_f({\bf a}) = {\bf 0}$, then
the monic polynomial $F_{\bf a}$ vanishes at $a_0,a_1,\ldots,a_{n+1}$.
The critical points of $F_{\bf a}$ are those
points $a_k$ for which $m_k>0$. They are all mapped to $0$ and thus,
$F_{\bf a}$ has only one critical value in ${\mathbb C}$. All the preimages
of this critical value must coincide and since $a_0=0$, they all
coincide at $0$. Thus, for all $k\in [0,n+1]$, $a_k=0$.
Let us now prove that for all $\tau\in \mathrm{Teich}(\P^1,{\Pf})$, we have
\[\pi (\tau)= g_f\circ \pi\circ \sigma_f(\tau).\]
Let $\tau$ be a point in $\mathrm{Teich}(\P^1,{\Pf})$ and set $\tau'=
\sigma_f(\tau)$.
We will show that there is a representative $\phi$ of $\tau$ and a representative $\psi$ of $\tau'$ such that $\phi(\infty)=\psi(\infty)=\infty$, $\phi(p_0)=\psi(p_0)=0$ and
\begin{equation}\label{eq_Gfproperty}
G_f\bigl(\psi(p_1),\ldots,\psi(p_{n+1})\bigr) = \bigl(\phi(p_1),\ldots,\phi(p_{n+1})\bigr).\end{equation}
It then follows that
\[g_f\bigl([\psi(p_1):\ldots:\psi(p_{n+1})]\bigr) = [\phi(p_1):\ldots:\phi(p_{n+1})]\]
which concludes the proof since
\[\pi(\tau') = [\psi(p_1):\ldots:\psi(p_{n+1})]\quad \text{and}\quad
\pi(\tau) = [\phi(p_1):\ldots:\phi(p_{n+1})].\]
To show the existence of $\phi$ and $\psi$, we may proceed as follows.
Let $\phi$ be any representative of $\tau$ such that
$\phi(\infty) = \infty$ and $\phi(p_0) = 0$. Then, there is a
representative $\psi:{{\mathbb P}}^1\to {{\mathbb P}}^1$ of $\tau'$ and a rational map
$F:{{\mathbb P}}^1\to {{\mathbb P}}^1$ such that the following diagram commutes:
\[
\xymatrix{ & {{\mathbb P}}^1 \ar[d]_{f}\ar[r]^{\psi} & {{\mathbb P}}^1 \ar[d]^{F} \\
& {{\mathbb P}}^1 \ar @{->}[r]^\phi & {{\mathbb P}}^1. }
\]
We may normalize $\psi$ so that $\psi(\infty) = \infty$ and
$\psi(p_0)=0$. Then, $F$ is a polynomial of degree $d$. Multiplying
$\psi$ by a nonzero complex number, we may assume that $F$ is a
monic polynomial.
We now check that these homeomorphisms $\phi$ and $\psi$ satisfy the required Property
(\ref{eq_Gfproperty}).
For $k\in [0,n+1]$, set
\[x_k= \psi(p_k)\quad\text{and}\quad y_k= \phi(p_k).\]
We must show that
\[G_f(x_1,\ldots,x_{n+1}) = (y_1,\ldots,y_{n+1}).\]
Note that for $k\in
[0,n+1]$, we have the following commutative diagram:
\[
\xymatrix{ & p_{{\nu}(k)} \ar@{|->}[d]_{f}\ar@{|->}[r]^{\psi} & x_{{\nu}(k)} \ar@{|->}[d]^{F} \\
& p_k \ar@{|->}[r]^\phi & y_k.}
\]
Consequently, $F(x_{{\nu}(k)}) =y_k$. In particular $F(x_{{\nu}(0)})=0.$
In addition, the critical points of $F$ are exactly those points $x_k$ for which $m_k>0$, counted with multiplicity $m_k$.
As a consequence, $F=F_{\bf x}$ and
\[G_f\left(\begin{array}{c}x_1\\\vdots\\x_{n+1}\end{array}\right)=
\left(\begin{array}{c}F_{\bf x}(x_{{\nu}(1)})\\\vdots\\F_{\bf x}(x_{{\nu}(n+1)})\end{array}\right)=
\left(\begin{array}{c}F(x_{{\nu}(1)})\\\vdots\\F(x_{{\nu}(n+1)})\end{array}\right)=
\left(\begin{array}{c}y_1\\\vdots\\y_{n+1}\end{array}\right).\]
(2) To see that $\sigma_f$ takes its values in
$\mathrm{Teich}(\P^1,{\Pf})-\pi^{-1}({\mathcal L})$, we may proceed by contradiction.
Assume
\[\tau\in\mathrm{Teich}({{\mathbb P}}^1,{P_f}) \quad \text{and}\quad
\tau'= \sigma_f(\tau)\in \pi^{-1}({\mathcal L}).\] Then, since $\pi
= g_f\circ \pi\circ \sigma_f$, we obtain
\[\pi(\tau)= g_f\circ \pi(\tau')\in\Delta.\]
But if $\tau\in\mathrm{Teich}({{\mathbb P}}^1,{P_f})$, then $\pi(\tau)$ cannot be in
$\Delta$, and we have a contradiction.
(3) To see that $g_f(\Delta)\subseteq \Delta$, assume \[{\bf
a}= (a_1,\ldots, a_{n+1})\in {\mathbb C}^{n+1}\] and set $a_0= 0$.
Set
\[(b_0,b_1,\ldots,b_{n+1})= \bigl(0,F_{\bf a}(a_{{\nu}(1)}),\ldots,F_{\bf a}(a_{{\nu}(n+1)})\bigr).\]
Then,
\[G_f(a_1,\ldots,a_{n+1}) = (b_1,\ldots, b_{n+1}).\]
Note that for every $k\in[0,n+1]$ we have $b_{{\mu}(k)} = F_{\bf a}(a_k)$ (for ${\mu}(k)=0$ this reads $0=F_{\bf a}(a_{{\nu}(0)})$, which holds by the definition of $F_{\bf a}$). Consequently,
\[a_i=a_j\quad\Longrightarrow\quad
b_{{\mu}(i)} = b_{{\mu}(j)}.\] In addition $[a_1:\ldots: a_{n+1}]$
belongs to $\Delta$ precisely when there are integers $i\neq j$ in
$[0,n+1]$ such that $a_i=a_j$. As a consequence,
\[[a_1:\ldots :a_{n+1}]\in \Delta\quad \Longrightarrow \quad
[b_1:\ldots:b_{n+1}]\in \Delta.\] This proves that
$g_f(\Delta)\subseteq \Delta$.
(4) To see that the critical point locus of $g_f$ is contained in
$\Delta$, we must show that ${\rm Jac}~G_f:{\mathbb C}^{n+1}\to {\mathbb C}$ does not
vanish outside $\Delta$. Since $g_f(\Delta)\subseteq \Delta$, we
then automatically obtain that the critical value locus of $g_f$ is
contained in $\Delta$.
Note that ${\rm Jac}~G_f(a_1,\ldots,a_{n+1})$ is a homogeneous
polynomial of degree $(n+1)\cdot (d-1)$ in the variables
$a_1,\ldots,a_{n+1}$. Consider the polynomial $J\in
{\mathbb C}[a_1,\ldots,a_{n+1}]$ defined by
\[J(a_1,\ldots,a_{n+1})= \prod_{0\leq i<j\leq n+1} (a_i-a_j)^{m_i+m_j}
\quad\text{with}\quad a_0= 0.\]
\begin{lemme}
The Jacobian ${\rm Jac}~G_f$ is divisible by $J$.
\end{lemme}
\begin{proof}
Set $a_0= 0$ and $G_0= 0$. For $j\in
[1,n+1]$, let $G_j$ be the $j$-th coordinate of
$G_f(a_1,\ldots,a_{n+1})$, i.e.
\[G_j= d\int_{a_{{\nu}(0)}}^{a_{{\nu}(j)}} \prod_{k=0}^{n+1} (w-a_k)^{m_k} {\rm
d}w.\] For $0\leq i<j\leq n+1$, note that setting
$w=a_i+t(a_j-a_i)$, we have
\begin{align*}
G_{{\mu}(j)} - G_{{\mu}(i)} &= d\int_{a_i}^{a_j}
\prod_{k=0}^{n+1} (w-a_k)^{m_k} {\rm d}w \\
&= d\int_{0}^{1} \prod_{k=0}^{n+1} (a_i+t(a_j-a_i)-a_k)^{m_k}\cdot
(a_j-a_i) {\rm d} t\\
&= (a_j-a_i)^{m_i+m_j+1} \cdot H_{i,j} \end{align*} with
\[H_{i,j}= d\int_0^1
t^{m_i}(t-1)^{m_j} \prod_{k\in [0,n+1]\atop k\neq i,j}
\bigl(a_i-a_k+t(a_j-a_i)\bigr)^{m_k} {\rm d}t.\] In particular,
$G_{{\mu}(j)} - G_{{\mu}(i)}$ is divisible by
$(a_j-a_i)^{m_i+m_j+1}$.
For $k\in [0,n+1]$, let $L_k$ be the row defined as:
\[L_k= \left[\frac{\partial G_k}{\partial a_1}\quad \ldots\quad
\frac{\partial G_k}{\partial a_{n+1}}\right].\] Note that $L_0$ is
the zero row, and for $k\in [1,n+1]$, $L_k$ is the $k$-th row of the
Jacobian matrix of $G_f$. According to the previous computations,
the entries of $L_{{\mu}(j)}-L_{{\mu}(i)}$ are the partial
derivatives of $(a_j-a_i)^{m_i+m_j+1}\cdot H_{i,j}$. It follows that
$L_{{\mu}(j)}-L_{{\mu}(i)}$ is divisible by $(a_j-a_i)^{m_i+m_j}$.
Indeed, $L_{{\mu}(j)}-L_{{\mu}(i)}$ is either the difference of two
rows of the Jacobian matrix of $G_f$, or such a row up to sign, when
${\mu}(i)=0$ or ${\mu}(j)=0$. As a consequence, $ {\rm Jac}~G_f$ is
divisible by $J$.
\end{proof}
Since $\sum m_j=d-1$, an easy computation shows that the degree of
$J$ is $(n+1)\cdot (d-1)$.
Since $J$ and ${\rm Jac}~G_f$ are homogeneous polynomials of the
same degree and since $J$ divides ${\rm Jac}~G_f$, they are equal up
to multiplication by a nonzero complex number. This shows that ${\rm
Jac}~G_f$ vanishes exactly when $J$ vanishes, i.e. on a subset of
$\Delta$.
This completes the proof of Proposition \ref{prop_periodicpoly2}.
{\hfill{$\square$}}
\medskip
\section{Proof of (2)}\label{pf2}
In this section we present an example of a Thurston map $f$ such
that the pullback map $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to\mathrm{Teich}(\P^1,{\Pf})$ is a ramified Galois covering
and has a fixed critical point.
Let $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be
the rational map defined by:
\[
f(z)=\frac{3z^2}{2z^3+1}.
\]
Note that $f$ has critical points at
$\Omega_f=\{0,1,\omega,\bar{\omega}\}$, where
\[\omega=
-1/2+i\sqrt3/2\quad\text{and}\quad \bar{\omega}= -1/2-i\sqrt3/2\] are cube roots
of unity. Notice that
\[f(0)=0,~f(1)=1,~f(\omega)=\bar{\omega}\text{ and }
f(\bar{\omega})=\omega.\] So, ${P_f}=\{0,1,\omega,\bar{\omega}\}$ and
$f$ is a Thurston map. We illustrate the critical dynamics of $f$
with the following {\it{ramification portrait}}:
\[
\xymatrix{0\ar@(ur,dr)^2}\qquad\xymatrix{1\ar@(ur,dr)^2}\qquad
\xymatrix{\omega\ar@/^1.1pc/[r]^2 & \bar{\omega}\ar@/^1.1pc/[l]^2}
\]
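As a quick verification of the ramification data (a computation not carried out in the text), one has
\[
f'(z) = \frac{6z(2z^3+1)-3z^2\cdot 6z^2}{(2z^3+1)^2} = \frac{6z(1-z^3)}{(2z^3+1)^2},
\]
so the critical points of $f$ are $z=0$ and the cube roots of unity, in agreement with $\Omega_f$; moreover $f(\omega)=3\omega^2/(2\omega^3+1)=\omega^2=\bar\omega$ since $\omega^3=1$, and likewise $f(\bar\omega)=\omega$.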
\begin{figure}[htbp]
\centerline{\scalebox{.25}{\includegraphics{critsigma.eps}}}
\caption{The Julia set of the rational map $f:z\mapsto
3z^2/(2z^3+1)$. The basin of $0$ is white. The basin of $1$ is light
grey. The basin of $\{\omega,\bar\omega\}$ is dark grey.}
\end{figure}
Since $|{P_f}|=4$, the Teichm\"uller space $\mathrm{Teich}(\P^1,{\Pf})$ has complex
dimension $1$.
Set $\Theta= \{1,\omega,\bar{\omega}\}\subset {P_f}$. We identify
the moduli space $\mathrm{Mod}(\P^1,{\Pf})$ with ${{\mathbb P}}^1-\Theta$. More precisely, if
$\phi:{P_f}\hookrightarrow {{\mathbb P}}^1$ represents a point in $\mathrm{Mod}(\P^1,{\Pf})$ with
$\phi|_\Theta={\rm id}|_\Theta$, we identify the class of $\phi$ in
$\mathrm{Mod}(\P^1,{\Pf})$ with the point $\phi(0)$ in ${{\mathbb P}}^1-\Theta$. The universal
covering $\pi:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Mod}(\P^1,{\Pf})$ is identified with a universal
covering $\pi:\mathrm{Teich}(\P^1,{\Pf})\to {{\mathbb P}}^1-\Theta$ and $\pi(\circledast)$ is
identified with $0$.
Assume $\tau\in \mathrm{Teich}(\P^1,{\Pf})$ and let $\phi:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be a
homeomorphism representing $\tau$ with
$\phi|_{\Theta}={\rm id}|_{\Theta}$. There exists a unique homeomorphism
$\psi:{{\mathbb P}}^1\to {{\mathbb P}}^1$ representing $\tau'= \sigma_f(\tau)$ and a
unique cubic rational map $F:{{\mathbb P}}^1\to{{\mathbb P}}^1$ such that
\begin{itemize}
\item $\psi|_{\Theta}={\rm id}|_{\Theta}$ and
\item the following diagram commutes
\[
\xymatrix{ & {{\mathbb P}}^1 \ar[d]_{f}\ar[r]^{\psi} &
{{\mathbb P}}^1 \ar[d]^{F} \\
& {{\mathbb P}}^1 \ar[r]^{\phi} & {{\mathbb P}}^1.}
\]
\end{itemize}
We set
\[y= \phi(0) = \pi(\tau) \quad\text{and}\quad
x = \psi(0) =\pi(\tau').\]
The rational map $F$ has the following properties:
\begin{enumerate}
\item[(P1)]\label{cond_Fcrit}{$1$, $\omega$ and $\bar \omega$ are critical points of $F$, $F(1)=1$, $F(\omega)=\bar{\omega}$, $F(\bar{\omega})=\omega$ and}
\item[(P2)]{$x\in {{\mathbb P}}^1-\Theta$ is a critical point of $F$ and $y=F(x)\in {{\mathbb P}}^1-\Theta$ is the corresponding critical value.}
\end{enumerate}
For $\alpha=[a:b]\in {{\mathbb P}}^1$, let $F_\alpha$ be the rational map
defined by
\[F_\alpha(z)= \frac{az^3+3bz^2+2a}{2bz^3+3az+b}.\]
Note that $f=F_0$.
We first show that $F=F_\alpha$ for some $\alpha\in {{\mathbb P}}^1$. For this
purpose, we may write $F=P/Q$ with $P$ and $Q$ polynomials of degree
$\leq 3$. Note that if $\widehat F=\widehat P/\widehat Q$ is another
rational map of degree $3$ satisfying Property (P1), then
$F-\widehat F$ and $(F-\widehat F)'$ vanish at $1$, $\omega$ and
$\bar \omega$. Since
\[F-\widehat F=\frac{P\widehat Q-Q\widehat P}{Q\widehat Q}\]
and since $P\widehat Q-Q\widehat P$ has degree $\leq 6$, we see that
$P\widehat Q-Q\widehat P$ is equal to $(z^3-1)^2$ up to
multiplication by a complex number.
A computation shows that $F_0$ and $F_\infty$ satisfy Property (P1).
We may write $F_0=P_0/Q_0$ and $F_\infty=P_\infty/Q_\infty$ with
\[P_0(z)=3z^2,\quad Q_0(z)=2z^3+1,\quad P_\infty(z)=z^3+2\quad\text{and}\quad
Q_\infty(z) = 3z.\] The previous observation shows that $PQ_0-QP_0$
and $PQ_\infty-QP_\infty$ are both scalar multiples of $(z^3-1)^2$,
and thus, we can find complex numbers $a$ and $b$ such that
\[a\cdot(PQ_\infty-QP_\infty)+b\cdot(PQ_0-QP_0)= 0\]
whence
\[P\cdot(aQ_\infty+bQ_0) = Q\cdot (aP_\infty+bP_0).\]
This implies that
\[F = \frac{P}{Q} = \frac{aP_\infty+bP_0}{aQ_\infty+bQ_0}=F_\alpha\quad\text{with}\quad
\alpha=[a:b]\in {{\mathbb P}}^1.\]
We now study how $\alpha\in {{\mathbb P}}^1$ depends on $\tau\in \mathrm{Teich}(\P^1,{\Pf})$. The
critical points of $F_\alpha$ are $1$, $\omega$, $\bar\omega$ and
$\alpha^2$. We therefore have
\[x = \alpha^2\quad\text{and}\quad y = F_\alpha(\alpha^2) = \frac{\alpha(\alpha^3+2)}{2\alpha^3
+1} = \frac{x^2+2\alpha}{2x\alpha+1}.\] In particular,
\[\alpha = \frac{x^2-y}{2xy-2}.\]
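For completeness, the elimination step (a one-line check): clearing denominators in $y = (x^2+2\alpha)/(2x\alpha+1)$ gives
\[2xy\,\alpha + y = x^2 + 2\alpha \quad\Longrightarrow\quad \alpha\,(2xy-2) = x^2-y.\]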
Consider now the holomorphic maps $X:{{\mathbb P}}^1\to {{\mathbb P}}^1$, $Y:{{\mathbb P}}^1\to {{\mathbb P}}^1$
and $A:\mathrm{Teich}(\P^1,{\Pf})\to {{\mathbb P}}^1$ defined by
\[X(\alpha)= \alpha^2,\quad Y(\alpha) = \frac{\alpha(\alpha^3+2)}{2\alpha^3
+1}\] and
\[A(\tau) = \frac{x^2-y}{2xy-2}
\quad\text{with}\quad y=\pi(\tau)\quad\text{and}\quad x = \pi\circ
\sigma_f(\tau).\] Observe that
\[X^{-1}\bigl(\{1,\omega,\bar\omega\}\bigr) = Y^{-1}\bigl(\{1,\omega,\bar\omega\}\bigr) =
\Theta'= \{1,\omega,\bar\omega,-1,-\omega,-\bar\omega\}.\]
Thus, we have the following commutative
diagram,
\[\xymatrix{ & \mathrm{Teich}(\P^1,{\Pf}) \ar[dd]_{\pi}\ar[rr]^{\sigma_f} \ar[dr]^A & &
\mathrm{Teich}(\P^1,{\Pf}) \ar[dd]^{\pi} \\
&&{{\mathbb P}}^1-\Theta' \ar[dl]_Y \ar[dr]^X &\\
& {{\mathbb P}}^1-\Theta & & {{\mathbb P}}^1-\Theta.}
\]
In this paragraph, we show that $\sigma_f$ has local degree two at
the fixed basepoint. Since $f=F_0$, we have $A(\circledast)=0$. In
addition, $\pi(\circledast)= \pi\circ\sigma_f(\circledast)= 0$. Since
$Y(\alpha)= 2\alpha + {\mathcal O}(\alpha^2)$, the germ $Y:({{\mathbb P}}^1,0)\to
({{\mathbb P}}^1,0)$ is locally invertible at $0$. Since $\pi:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Mod}(\P^1,{\Pf})$
is a universal covering, the germ
$\pi:\bigl(\mathrm{Teich}(\P^1,{\Pf}),\circledast\bigr)\to
\bigl(\mathrm{Mod}(\P^1,{\Pf}),\circledast\bigr)$ is also locally invertible at $0$.
Since $X(\alpha)=\alpha^2$, the germ $X:({{\mathbb P}}^1,0)\to ({{\mathbb P}}^1,0)$ has
degree $2$ at $0$. It follows that $\sigma_f$ has degree $2$ at
$\circledast$ as required.
Finally, we prove that $\sigma_f$ is a surjective Galois orbifold
covering. First, note that the critical value set of $Y$ is $\Theta$
whence $Y:{{\mathbb P}}^1-\Theta'\to {{\mathbb P}}^1-\Theta$ is a covering map. Since
$\pi=Y\circ A$ and since $\pi:\mathrm{Teich}(\P^1,{\Pf})\to {{\mathbb P}}^1-\Theta$ is a universal
covering map, we see that $A:\mathrm{Teich}(\P^1,{\Pf})\to {{\mathbb P}}^1-\Theta'$ is a covering map (hence a universal covering map).
Second, note that $X:{{\mathbb P}}^1-\Theta'\to {{\mathbb P}}^1-\Theta$ is a ramified Galois
covering of degree $2$, ramified above $0$ and $\infty$ with local
degree $2$. Let $M$ be the orbifold whose underlying surface is
${{\mathbb P}}^1-\Theta$ and whose weight function takes the value $1$
everywhere except at $0$ and $\infty$ where it takes the value $2$.
Then, $X:{{\mathbb P}}^1-\Theta'\to M$ is a covering of orbifolds and $X\circ
A:\mathrm{Teich}(\P^1,{\Pf}) \to M$ is a universal covering of orbifolds.
Third, let $T$ be the orbifold whose underlying surface is $\mathrm{Teich}(\P^1,{\Pf})$ and
whose weight function takes the value $1$ everywhere except at
points in $\pi^{-1}\bigl(\{0,\infty\}\bigr)$ where it takes the
value $2$. Then $\pi:T\to M$ is a covering of orbifolds. We have the
following commutative diagram:
\[\xymatrix{ & \mathrm{Teich}(\P^1,{\Pf}) \ar[drrr]_{X\circ A}\ar[rrr]^{\sigma_f} & & &
T \ar[d]^{\pi} \\
& & & & M.}
\]
It follows that $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to T$ is a covering of orbifolds
(thus a universal covering). Equivalently, the map
$\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is a ramified Galois covering,
ramified above points in $\pi^{-1}\bigl(\{0,\infty\}\bigr)$ with
local degree $2$.
Figure \ref{fig_orbifoldcovering} illustrates the behavior of the map $\sigma_f$.
\begin{figure}[htbp]
\centerline{\begin{picture}(330,150)(0,0) \put(0,0){
\scalebox{.3}{\includegraphics{hexagone.eps}}}
\put(160,80){$\overset{\sigma_f}\longrightarrow$}
\put(180,0){\scalebox{.3}{\includegraphics{triangle.eps}}}
\end{picture}
} \caption{For $f(z)=3z^2/(2z^3+1)$, the pullback map $\sigma_f$
fixes $0=\circledast$. It sends hexagons to triangles. There is a
critical point with local degree $2$ at the center of each hexagon
and a corresponding critical value at the center of the image
triangle. The map $X\circ A$ sends light grey hexagons to the unit
disk in ${{\mathbb P}}^1-\Theta$ and dark grey hexagons to the complement of
the unit disk in ${{\mathbb P}}^1-\Theta$. The map $\pi$ sends light grey
triangles to the unit disk in ${{\mathbb P}}^1-\Theta$ and dark grey triangles
to the complement of the unit disk in
${{\mathbb P}}^1-\Theta$.\label{fig_orbifoldcovering}}
\end{figure}
\pagebreak
\section{Proof of (3)}\label{pf3}
\subsection{Examples}\label{Xexamples}
Here, we give examples of Thurston maps $f$ such that
\begin{itemize}
\item ${P_f}$ contains at least $4$ points, so
$\mathrm{Teich}(\P^1,{\Pf})$ is not reduced to a point, and
\item $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to\mathrm{Teich}(\P^1,{\Pf})$ is constant.
\end{itemize}
The main result, essentially due to McMullen, is the following.
\begin{prop}\label{prop_constantsigma}
Let ${s}:{{\mathbb P}}^1\to{{\mathbb P}}^1$ and $g:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be rational maps with
critical value sets $V_{s}$ and $V_g$. Let $A\subset {{\mathbb P}}^1$ be
finite. Assume $V_{s}\subseteq A$ and $V_g\cup g(A)\subseteq
{s}^{-1}(A)$. Then
\begin{itemize}
\item $f= g\circ {s}$ is a Thurston map,
\item $V_g\cup g(V_{s})\subseteq {P_f}\subseteq V_g\cup g(A)$ and
\item the dimension of the image of
$\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is at most $|A|-3$.
\end{itemize}
\end{prop}
\begin{rema} If $|A|=3$ the pullback map $\sigma_f$ is
constant.
\end{rema}
\begin{proof}
Set $B:=V_g\cup g(A)$. The set of critical values of $f$ is the set
\[V_f= V_g \cup g(V_{s})\subseteq B.\]
By assumption,
\[f(B) = g\circ {s}(B) \subseteq g(A) \subseteq B.\]
So, the map $f$ is a Thurston map and $V_g \cup g(V_{s})\subseteq
{P_f}\subseteq B$.
Note that $B \subseteq {s}^{-1}(A)$ and $A\subseteq
g^{-1}(B)$. According to the discussion at the beginning of
Section \ref{prelimsect}, the rational maps ${s}$ and $g$ induce
pullback maps
\[\sigma_{s}:\mathrm{Teich}(\P^1,A)\to\mathrm{Teich}({{\mathbb P}}^1,B) \quad \text{and}\quad
\sigma_g:\mathrm{Teich}({{\mathbb P}}^1,B)\to \mathrm{Teich}(\P^1,A).\]
In
addition,
\[ \sigma_f = \sigma_{s}\circ \sigma_g.\]
The dimension of the Teichm\"uller space $\mathrm{Teich}(\P^1,A)$ is $|A|-3$. Thus,
the rank of $D\sigma_g$, and so that of $D \sigma_f$, at any
point in $\mathrm{Teich}(\P^1,A)$ is at most $|A|-3$. This completes the proof of
the proposition.
\end{proof}
Let us now illustrate this proposition with some examples.
\begin{exa} We are not aware of any rational map $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ of degree $2$ or
$3$ for which $|{P_f}|\geq 4$ and $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to\mathrm{Teich}(\P^1,{\Pf})$ is
constant. We have an example in degree $4$: the polynomial $f$
defined by
\[f(z) = 2i \left(z^2-\frac{1+i}2\right)^2.\]
This polynomial can be decomposed as $f=g\circ {s}$ with
\[{s}(z) = z^2\quad\text{and}\quad g(z) = 2i\left(z-\frac{1+i}2\right)^2.\]
See Figure \ref{fig:ex1}. The critical value set of ${s}$ is \[V_{s}=\{0,\infty\}\subset
A= \{0,1,\infty\}.\] The critical value set of $g$ is
\[V_g = \{0,\infty\}\subset \{0,\infty,-1,1\}={s}^{-1}(A).\]
In addition, $g(0)=-1$, $g(1) = 1$ and $g(\infty) = \infty$, so
\[g(A) = \{-1,1,\infty\}\subset {s}^{-1}(A).\]
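To verify these values, note that
\[
g(0)=2i\Bigl(\tfrac{1+i}{2}\Bigr)^{2}=2i\cdot\tfrac{i}{2}=-1
\qquad\text{and}\qquad
g(1)=2i\Bigl(\tfrac{1-i}{2}\Bigr)^{2}=2i\cdot\Bigl(-\tfrac{i}{2}\Bigr)=1.
\]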
According to the previous proposition, $f=g\circ {s}$ is a
Thurston map and since $|A|=3$, the map $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to
\mathrm{Teich}(\P^1,{\Pf})$ is constant.
Note that $V_f=\{0,-1,\infty\}$ and ${P_f}=\{0,1,-1,\infty\}$. The
ramification portrait for $f$ is:
\[ \xymatrix{ & \sqrt{\frac{1+i}{2}}\ar[dr]^2
\\ & & 0 \ar[r]^{2} &-1 \ar[r]
& 1\ar@(ur,dr)[] & & &\infty
\ar@(ur,dr)[]^4 \\ &-\sqrt{\frac{1+i}{2}}\ar[ur]_2} \]
\begin{figure}[htbp]
\centerline{\scalebox{.25}{\includegraphics{constantsigma1.eps}}}
\caption{The Julia set of the degree 4 polynomial $f:z\mapsto 2i
\left(z^2-\frac{1+i}2\right)^2$ is a dendrite. There is a fixed
critical point at $\infty$. Its basin is white. The point $z=1$ is a
repelling fixed point. All other critical points are in the backward orbit
of $1$.}
\end{figure}
\end{exa}
\begin{exa} We also have examples of rational maps $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ for which
$\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is constant and $|{P_f}|\geq 4$ is an
arbitrary integer. Assume $n\geq 2$ and consider ${s}:{{\mathbb P}}^1\to
{{\mathbb P}}^1$ and $g:{{\mathbb P}}^1\to {{\mathbb P}}^1$ the polynomials defined by
\[{s}(z) = z^n\quad\text{and}\quad g(z) = \frac{(n+1)z-z^{n+1}}{n}.\]
Set $A:=\{0,1,\infty\}$. The critical value set of ${s}$ is
$V_{s}=\{0,\infty\}\subset A$.
The critical points of $g$ are the $n$-th roots of unity and $g$
fixes those points; the critical values of $g$ are the $n$-th roots
of unity. In addition, $g(0)=0$. Thus
\[V_g\cup g(V_{s}) = V_g\cup g(A) = {s}^{-1}(A).\]
According to Proposition \ref{prop_constantsigma}, ${P_f}=
{s}^{-1}(A)$ and the pullback map $\sigma_f$ is constant. In
particular, $|{P_f}|=n+2$.
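For instance, when $n=2$ the composition is
\[
f(z)=g\bigl({s}(z)\bigr)=\frac{3z^{2}-z^{6}}{2}=\frac{z^{2}(3-z^{4})}{2},
\]
the degree $6$ polynomial of the figure below.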
This polynomial has the following ramification portrait:
\[ \xymatrix{ & i\ar[dr]^2
\\ & & -1 \ar[r]^2
& 1\ar@(ur,dr)[]^2 & &0 \ar@(ur,dr)[]^2 & &\infty
\ar@(ur,dr)[]^6 \\ &-i\ar[ur]_2} \]
\begin{figure}[htbp]
\centerline{\scalebox{.25}{\includegraphics{constantsigma2.eps}}}
\caption{The Julia set of the degree 6 polynomial $f:z\mapsto
z^2(3-z^4)/2$. There are superattracting fixed points at $z=0$,
$z=1$
and $z=\infty$. All
other critical points are in the backward orbit of $1$. The basin of
$\infty$ is white. The basin of $0$ is light grey. The basin of $1$ is
dark grey.}
\end{figure}
\end{exa}
\begin{exa}
Proposition \ref{prop_constantsigma} can be further exploited to
produce examples of Thurston maps $f$ where $\sigma_f$ has a {\em
skinny image}, which is not just a point.
For $n\geq 2$, let $A_n$ be the union of $\{0,\infty\}$ and the set
of $n$-th roots of unity. Let ${s}_n:{{\mathbb P}}^1\to {{\mathbb P}}^1$ and
$g_n:{{\mathbb P}}^1\to {{\mathbb P}}^1$ be the polynomials defined by
\[{s}_n(z) = z^n\quad\text{and}\quad g_n(z) = \frac{(n+1)z-z^{n+1}}{n}.\]
The critical points of $g_n$ are the $n$-th roots of unity and $g_n$
fixes those points; the critical values of $g_n$ are the $n$-th
roots of unity. In particular, $V_{g_n}\subset A_n$. In addition,
$g_n(0)=0$, and so,
\[g_n(A_n) = A_n.\]
Assume $n\geq 2$ and $m\geq 1$ are integers with $m$ dividing $n$,
say $n=km$. Note that
\[V_{{s}_k}\subset A_m\quad\text{and}\quad
V_{g_n}\cup g_n(A_n) = A_n = {s}_k^{-1}(A_m).\] It follows that
the polynomial $f:{{\mathbb P}}^1\to {{\mathbb P}}^1$ defined by
\[f:=g_n\circ {s}_k\]
is a Thurston map and
\[A_n=V_{g_n}\cup g_n(V_{{s}_k})\subseteq {P_f} \subseteq V_{g_n}\cup
g_n(A_n)=A_n,\quad\text{so}\quad P_f=A_n.\] In particular, the
dimension of the Teichm\"uller space $\mathrm{Teich}(\P^1,{\Pf})$ is $n-1$.
\noindent{\bf Claim.} {\em The dimension of the image of
$\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to \mathrm{Teich}(\P^1,{\Pf})$ is $m-1$. Thus, its codimension is
$(k-1)m$.}
\begin{proof}
On the one hand, since $g_n$ is a polynomial whose critical points
are all fixed, Proposition \ref{prop_periodicpoly} implies that
$\sigma_{g_n} : {\rm Teich}({{\mathbb P}}^1, A_n)\to {\rm Teich}({{\mathbb P}}^1, A_n)$
has open image. Composing with the forgetful projection \[{\rm
Teich}({{\mathbb P}}^1,A_n)\to {\rm Teich}({{\mathbb P}}^1,A_m),\] we deduce that
$\sigma_{g_n} : {\rm Teich}({{\mathbb P}}^1, A_n)\to {\rm Teich}({{\mathbb P}}^1, A_m)$
has open image.
On the other hand, since ${s}_k:{{\mathbb P}}^1- A_n\to {{\mathbb P}}^1- A_m$ is a covering map, it follows from general principles that
$\sigma_{{s}_k} : {\rm Teich}({{\mathbb P}}^1, A_m)\to
{\rm Teich}({{\mathbb P}}^1, A_n)$ is a holomorphic embedding with everywhere injective derivative.
Since $\sigma_f=\sigma_{{s}_k}\circ\sigma_{g_n}$, the image of $\sigma_f$ is the image under this embedding of an open subset of ${\rm Teich}({{\mathbb P}}^1, A_m)$, and so has dimension $m-1$.
\end{proof}
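For instance, with $n=4$, $m=2$ and $k=2$, the space ${\rm Teich}({{\mathbb P}}^1,A_4)$ has dimension $n-1=3$, the image of $\sigma_f$ has dimension $m-1=1$, and so the image has codimension $(k-1)m=2$.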
\end{exa}
\begin{question}If $f:{{\mathbb P}}^1\to{{\mathbb P}}^1$ is a Thurston map such that
the pullback map $\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to\mathrm{Teich}(\P^1,{\Pf})$ is constant, then is
it necessarily of the form described above? In particular, is there
a Thurston map $f:{{\mathbb P}}^1\to{{\mathbb P}}^1$ with constant
$\sigma_f:\mathrm{Teich}(\P^1,{\Pf})\to\mathrm{Teich}(\P^1,{\Pf})$, such that $\text{deg}(f)$ is prime?
\end{question}
\subsection{Characterizing when $\sigma_f$ is constant}\label{sigmaconst}
Suppose $f$ is a Thurston map with $|{P_f}|\geq 4$.
Let $\mathcal{S}$
denote the set of {free homotopy classes} of simple, closed,
unoriented curves $\gamma$ in ${\Sigma}-{P_f}$ such that each component of ${\Sigma}-\gamma$ contains at least two points of ${P_f}$. Let
${\mathbb R}[\mathcal{S}]$ denote the free ${\mathbb R}$-module generated by
$\mathcal{S}$. Given $[\gamma]$ and $[\widetilde{\gamma}]$ in $\mathcal{S}$,
define the {\em pullback relation} on $\mathcal{S}$, denoted
${\underset{f}{\leftarrow}}$, by defining $[\gamma] {\underset{f}{\leftarrow}} [\widetilde{\gamma}]$
if and only if there is a component $\delta$ of $f^{-1}(\gamma)$
which, as a curve in ${\Sigma}-{P_f}$, is homotopic to $\widetilde{\gamma}$.
The {\em Thurston linear map}
\[ \lambda_f: {\mathbb R}[\mathcal{S}] \to {\mathbb R}[\mathcal{S}]\]
is defined by specifying the image of basis elements $[\gamma] \in
\mathcal{S}$ as follows:
\[ \lambda_f\bigl([\gamma]\bigr) = \sum_{[\gamma] {\underset{f}{\leftarrow}} [\gamma_i]} d_i [\gamma_i].\]
Here, the sum ranges over all $[\gamma_i]$ for which $[\gamma]
{\underset{f}{\leftarrow}} [\gamma_i]$, and
\[ d_i = \sum_{f^{-1}(\gamma)\supset \delta \simeq \gamma_i}\frac{1}{|\deg(\delta \to \gamma)|}, \]
where the sum ranges over components $\delta$ of $f^{-1}(\gamma)$
homotopic to $\gamma_i$.
Let $\mathrm{PMCG}(\P^1, {\Pf})$ denote the pure
mapping class group of $({{\mathbb P}}^1, {P_f})$---that is, the quotient of the
group of orientation-preserving homeomorphisms fixing ${P_f}$
pointwise by the subgroup of such maps isotopic to the identity
relative to ${P_f}$. Thus,
\[\mathrm{Mod}(\P^1,{\Pf}) = \mathrm{Teich}(\P^1,{\Pf})/ \mathrm{PMCG}(\P^1, {\Pf}).\]
Elementary covering space theory and homotopy-lifting facts imply
that there is a finite-index subgroup $H_f \subset \mathrm{PMCG}(\P^1, {\Pf})$ consisting of
those classes represented by homeomorphisms $h$ lifting under $f$ to
a homeomorphism $\tilde{h}$ which fixes ${P_f}$ pointwise. This
yields a homomorphism
\[ \phi_f: H_f \to \mathrm{PMCG}(\P^1, {\Pf}) \]
defined by
\[\phi_f\bigl([h]\bigr)=[\tilde h]\quad \text{with}\quad h\circ f = f \circ \tilde h. \]
{Following \cite{bn}} we
refer to the homomorphism $\phi_f$ as the {\em virtual
endomorphism of $\mathrm{PMCG}(\P^1, {\Pf})$} associated to $f$.
\begin{theo}\label{tfae}
The following are equivalent:
\begin{enumerate}
\item ${\underset{f}{\leftarrow}}$ is empty
\item $\lambda_f$ is constant
\item $\phi_f$ is constant
\item $\sigma_f$ is constant
\end{enumerate}
\end{theo}
In \cite{BEKP}, there is a mistake in the proof that $(2)\implies (3)$. The assumption (2) is equivalent to the assumption that every curve, when lifted under $f$, becomes inessential or peripheral. Even if this holds, it need not be the case that every Dehn twist lifts under $f$ to a pure mapping class element. We give an explicit example after the proof of Theorem \ref{tfae}.
\begin{proof}
In \cite{BEKP} the logic was: $(1)\implies(2)\implies(3)\implies(4)$, and failure of $(1)$ implies failure of $(4)$.
Here is the revised logic: $(1)\iff (2)$, $(3)\iff (4)$, $(3)\implies (2)$, and failure of $(4)$ implies failure of $(1)$.
$\mathbf{(1)\iff (2)}$ This follows immediately from the definitions.
$\mathbf{(3)\implies (2)}$ We show that failure of $(2)$ implies failure of $(3)$. If $\lambda_f$ is not constant, then there exists a simple closed curve $\gamma$ which has an essential, nonperipheral simple closed curve $\delta$ as a preimage under $f$. Some power of the Dehn twist about $\gamma$ lifts under $f$ to a product of nontrivial Dehn twists, one of them about $\delta$. Since $\delta$ is essential and nonperipheral, this lifted map is homotopically nontrivial, so $\phi_f$ is not constant.
For the remaining implications, we will make use of the following facts.
First, recall that the deck group $\mathrm{PMCG}(\P^1, {\Pf})$ of $\pi: \mathrm{Teich}(\P^1,{\Pf}) \to \mathrm{Mod}(\P^1,{\Pf})$ acts by pre-composition properly discontinuously and biholomorphically on the space $\mathrm{Teich}(\P^1,{\Pf})$. For $h \in \mathrm{PMCG}(\P^1, {\Pf})$ and $\tau \in \mathrm{Teich}(\P^1,{\Pf})$ we denote by $h\cdot \tau$ the image of $\tau$ under the action of $h$. Since $H_f$ has finite index in $\mathrm{PMCG}(\P^1, {\Pf})$, the covering map $ \mathrm{Teich}(\P^1,{\Pf})/H_f \to \mathrm{Mod}(\P^1,{\Pf})$ is finite. Furthermore, the definitions imply
\[ \sigma_f(h\cdot \tau) = \phi_f(h)\cdot \sigma_f(\tau)\ \forall \ h \in H_f.\]
Second, {\em a bounded holomorphic function on a finite cover of $\mathrm{Mod}(\P^1,{\Pf})$ is constant}. To see this, recall that $\mathrm{Mod}(\P^1,{\Pf})$ is isomorphic to the complement of a finite set of hyperplanes in ${\mathbb C}^n$ where $n=|P_f|-3$. Let $L$ be any complex line not contained in the forbidden locus. The intersection of $L$ with $\mathrm{Mod}(\P^1,{\Pf})$ is isomorphic to a compact Riemann surface punctured at finitely many points. If $\widetilde{L}$ is any component of the preimage of $L\cap \mathrm{Mod}(\P^1,{\Pf})$ under the finite covering, then $\widetilde{L}$ is also isomorphic to a compact Riemann surface punctured at finitely many points. By Liouville's theorem, the function is constant on $\widetilde{L}$. Since $L$ is arbitrary, the function is locally constant, hence constant.
$\mathbf{(3)\implies(4)}$
Suppose (3) holds. Then $\sigma_f: \mathrm{Teich}(\P^1,{\Pf}) \to \mathrm{Teich}(\P^1,{\Pf})$ descends
to a holomorphic map
\[
\overline{\sigma}_f: \mathrm{Teich}(\P^1,{\Pf})/H_f \to \mathrm{Teich}(\P^1,{\Pf}).
\]
A theorem of Bers \cite[Section 6.1.4]{IT} shows that $\mathrm{Teich}(\P^1,{\Pf})$ is isomorphic to a bounded domain in ${\mathbb C}^n$. Each coordinate of $\overline{\sigma}_f$ is therefore a bounded holomorphic function on a finite cover of $\mathrm{Mod}(\P^1,{\Pf})$, hence constant by the second fact above; so $\sigma_f$ is constant.
$\mathbf{(4)\implies (3)}$ Suppose $h \in H_f$. If $\sigma_f \equiv \tau$ is constant, then the equivariance relation above gives $\phi_f(h)\cdot\tau=\tau$; the deck transformation defined by $\phi_f(h)$ fixes the point $\tau$, hence must be the identity. So $\phi_f$ is constant.
$\mathbf{\mbox{\bf not} (4) \implies \mbox{\bf not}(1)}$ We first prove a Lemma of perhaps independent interest.
\begin{lemme} Let $f:(S^2,P)\to (S^2,P)$ be a Thurston map. Then either the image of $\sigma_f$ is a point, or its projection to the moduli space ${\mathcal{M}}_P$ is unbounded.
\end{lemme}
\begin{proof}
The definitions imply that $\pi \circ \sigma_f$ descends to a holomorphic map
\[ \rho:\mathrm{Teich}(\P^1,{\Pf})/H_f\to \mathrm{Mod}(\P^1,{\Pf}) \hookrightarrow {\mathbb C}^n.\]
If the image is bounded, the map $\rho$ is constant by the second fact above. In that case $\pi\circ\sigma_f$ is constant; since the fibers of $\pi$ are discrete and $\mathrm{Teich}(\P^1,{\Pf})$ is connected, $\sigma_f$ is itself constant.
\end{proof}
\bigskip
Suppose now that $\sigma_f$ is not constant (i.e., failure of $(4)$). The Lemma implies that the image of $\pi \circ \sigma_f$ is unbounded; in particular, ${\mathcal{M}}_P':=\pi(\sigma_f(\mathrm{Teich}(\P^1,{\Pf})))$ is not contained in any compact subset of $\mathrm{Mod}(\P^1,{\Pf})$. This means that there exists a point $x\in {\mathcal{M}}_P'$ corresponding to a Riemann surface $X:={{\mathbb P}}^1-Q$ containing an annulus $A$ of large modulus. Because $x\in {\mathcal{M}}_P'$, there exists a rational map
\[
F: ({{\mathbb P}}^1,Q) \to ({{\mathbb P}}^1, R)
\]
such that the diagram in the definition of $\sigma_f$ commutes.
Let $X':=X-F^{-1}(R)$ and $Y:={{\mathbb P}}^1-R$, so that $F:X'\to Y$ is a holomorphic covering map. Let $A':=A\cap X'$. There is an embedded subannulus $B'\subseteq A'$ of large modulus, because we removed at most $d\cdot |P_f|$ points from $A$ to obtain $A'$. Hence in the hyperbolic metric on $X'$, the core curve of $B'$ is very short; call this core curve $\delta$. Consider $F(\delta)$. Since $F:X'\to Y$ is a local hyperbolic isometry, the length of $F(\delta)$ is at most $d$ times the length of $\delta$, so $F(\delta)$ is also very short. Let $\gamma$ be the geodesic in the homotopy class of $F(\delta)$. Since $\gamma$ is very short, it must be simple. Since $\delta$ is essential and non-peripheral, so is $\gamma$. We conclude that $[\gamma] {\underset{f}{\leftarrow}} [\delta]$, hence ${\underset{f}{\leftarrow}}$ is nonempty.
\end{proof}
Let $f=g\circ s$ be the quartic polynomial in Example 1.
Let $\gamma_0$ be the boundary of a small regular neighborhood $D$ of the segment $[0,1] \subset {\mathbb C}$. Let $h_0: {{\mathbb P}}^1 \to {{\mathbb P}}^1$ be the right Dehn twist about $\gamma_0$.
\noindent{\bf Claim.} {\em If $h_1: {{\mathbb P}}^1 \to {{\mathbb P}}^1$ satisfies $h_0 \circ f = f \circ h_1$ (i.e. $h_1$ is a lift of $h_0$ under $f$) then $h_1 \not\in \mathrm{PMCG}(\P^1, {\Pf})$. See Figure \ref{fig:ex1}.}
\begin{figure}[htbp]
\centerline{\scalebox{0.75}{\includegraphics{lifts4.eps}}}
\caption{The mapping properties of $f=g\circ s$ in Example 1. The points in grey are $-1, 0, +1$.\label{fig:ex1}}
\end{figure}
\begin{proof} We argue by contradiction.
We may assume $h_0$ is supported on an annulus $A_0$ surrounding a bounded Jordan domain $D_0$ whose boundary is $\gamma_0$, and an unbounded region $U_0$. Easy calculations show that the inverse image of $D_0$ under $f$ consists of two bounded Jordan domains $D_1^\pm$ each mapping as a quadratic branched cover onto $D_0$ and ramified at the points $c_\pm:=\pm \sqrt{\frac{1+i}{2}}$ (the positive sign corresponding to the root with positive real part), both of which map to the origin under $f$. The domain $D_1^+$ contains two preimages of the point $1$, namely $+1$ and $+\frac{1+i}{\sqrt{2}}$, while its twin $D_1^-$ also contains two preimages of the point $1$, namely $-1$ and $-\frac{1+i}{\sqrt{2}}$. The points $\pm 1 \in D_1^\pm$ belong to $P_f$, so if $h_1 \in \mathrm{PMCG}(\P^1, {\Pf})$ is a lift of $h_0$, then $h_1(1)=1$ and $h_1(-1)=-1$.
Since $f: D_1^\pm - \{c_\pm\} \to D_0 - \{0\}$ are both unramified coverings, and $h_0: D_0-\{0\} \to D_0-\{0\}$ is the identity map, we conclude $h_1: D_1^\pm - \{c_\pm\} \to D_1^\pm - \{c_\pm\}$ is a deck transformation of this covering fixing a point (namely $\pm 1$), hence is the identity on $D_1^\pm$.
The preimage of the annulus $A_0$ is a pair of disjoint, non-nested annuli $A_1^\pm$ with an inner boundary component $\gamma_1^\pm$ equal to $\partial D_1^\pm$. Since $f: A_1^\pm \to A_0$ is quadratic and unramified, and, by the previous paragraph, the restriction of $h_1$ to $\gamma_1^\pm$ is the identity, we must have $h_1 \neq {\rm id}$ on the outer boundary components of $A_1^\pm$; indeed, $h_1$ there effects a half-twist.
The preimage of $U_0$ under $f$ is a single unbounded region $U_1$, which is homeomorphic to the plane minus two disks and three points; it maps in a four-to-one fashion, ramified only at the origin. The restriction $f: U_1 - \{f^{-1}(-1)\} \to U_0-\{-1\}$ is an unramified covering map, so $h_1: U_1 - \{f^{-1}(-1)\} \to U_1 - \{f^{-1}(-1)\}$ is a deck transformation of this covering. By the previous paragraph, it is distinct from the identity.
We will obtain a contradiction by proving that $h_1: U_1 - \{f^{-1}(-1)\} \to U_1 - \{f^{-1}(-1)\}$ has a fixed point; this is impossible for deck transformations other than the identity. We use the Lefschetz fixed point formula. By removing a neighborhood of $\infty$ and of $-1$, and lifting these neighborhoods, we place ourselves in the setting of compact planar surfaces with boundary, so that this theorem will apply. Under $h_1$, the boundary component near infinity is sent to itself, as are the outer boundaries of $A_1^\pm$ and the boundary component surrounding the origin (since the origin is the uniquely ramified point of $f$ over $U_0$). The remaining pair of boundary components are permuted amongst themselves. The action of $h_1: U_1 - \{f^{-1}(-1)\} \to U_1 - \{f^{-1}(-1)\}$ on rational homology has trace equal to either $3$ or $5$. A fixed point thus exists, and the proof is complete.
\end{proof}
{\bf Remark:} There exists a lift $h_1$ of $h_0$ under $f$. First, there is a lift $h'$ of $h_0$ under $g$, obtained by setting $h' = {\rm id}$ on the preimage of $U_0$. This extends to a half-twist on the preimage $A_0'$ of $A_0$ under $g$, which then in turn extends to a homeomorphism fixing the preimage $D_0'$ of $D_0$ under $g$; inside $D_0'$, this homeomorphism interchanges the points $1, i$, which are the preimages of $1$. It is then straightforward to show that $h'$ lifts under $s$ by setting $h_1={\rm id}$ on $U_1$ and extending similarly over the annuli $A_1^\pm$ and the domains $D_1^\pm$.
\pagebreak
\section*{Introduction}
In his 1799 work \textit{The Vocation Of Man}, the German idealist
philosopher Johann Gottlieb Fichte wrote that ``you could not remove
a single grain of sand from its place without thereby {[}...{]} changing
something throughout all parts of the immeasurable whole''.\footnote{Quote is from the English translation \citet{fichte1848}.}
Fichte, to his great misfortune, died almost a century before the
invention of instrumental variables (IV) regression, but his quote
is of considerable relevance to IV estimation. Suppose Fichte is correct:
subtle and serpentine causal channels connect all things. Then instrumental
exogeneity is at best a close approximation of the truth. If we agree
with Fichte we must hope that a small deviation from instrumental
validity imparts only a small asymptotic bias in our IV estimates.
Similar arguments motivate analyses of parametric IV estimation when
instruments may be mildly invalid. \citet{Conley2008} propose (among other things) a partial identification approach
to estimation in linear IV models that is valid even if instrumental
validity fails. \citet{Andrews2017a} provide methods to analyze the
sensitivity of GMM estimates to misspecification of the moment conditions.
Recent work by \citet{Armstrong} explores optimal estimation in the GMM framework under misspecified
moment conditions.
Nonparametric instrumental variables (NPIV) estimation (\citet{Newey2003},
\citet{Ai2003} and others) is a flexible alternative to linear
IV. NPIV models relax the assumption of a linear causal relationship
between regressors and outcomes. In NPIV estimation the `structural
function', which describes this causal relationship, is treated nonparametrically.
We show that NPIV estimators of the structural function are more sensitive
to invalid instruments than parametric IV estimators and standard
nonparametric regression estimators. For a broad class of NPIV estimators,
an arbitrarily small deviation from instrumental validity can impart
a large, and in some cases arbitrarily large, asymptotic bias. This non-robustness
is an inherent feature of NPIV estimation. Any NPIV estimator that
is robust requires strong restrictions on the structural function
for consistency.
The non-robustness of NPIV estimators is closely linked with the `ill-posedness'
of NPIV estimation. NPIV estimation is ill-posed in that a tiny change
to the `reduced-form' components of the NPIV estimating equation can
induce a large jump in the solution (see, e.g., \citet{Darolles}). The reduced-form components are
estimated empirically, and therefore subject to error. To limit the
sensitivity to the estimation error one must `regularize' the estimating
equation. However, regularization generally imparts bias. As the sample
size grows, the reduced-form is estimated with greater precision,
and so the degree of regularization is reduced.
The presence of invalid instruments is akin to error in the reduced-form.
Misspecification perturbs the reduced-form away from the shape it
would take under instrumental validity. As the degree of regularization
is reduced, the influence of error due to misspecification grows.
Unlike the error due to sampling noise, the error from misspecification
does not decrease with the sample size. If the researcher has access
to a large sample and in response regularizes only weakly, then even
a small deviation from instrumental validity can impart a large bias.
To the best of our knowledge this observation is neither made nor
addressed in any of the existing literature.
In addition to our non-robustness results, we specify two important
cases in which misspecification-robust estimation is possible. Firstly,
suppose that the researcher is interested in estimation of a continuous
functional of the structural function (\citet{Ai2003}, \citet{Severini2012},
\citet{Ichimuraa}) rather than the structural function itself. This
estimation problem may not be ill-posed and our sensitivity results
may not apply. We provide necessary and sufficient conditions under
which a continuous linear functional can be estimated robustly.
Secondly, suppose it is known that the structural function
obeys some shape restrictions, say a smoothness condition. If the
restrictions are sufficiently strong then an estimator that imposes
them on the structural function may be robust to misspecification, without sacrificing consistency.
Two NPIV methods which impose strong smoothness
conditions are the sieve-type procedures of \citet{Newey2003} and
\citet{Blundell2007}.\footnote{\citet{Chetverikov2017} impose monotonicity
on the structural function in order to tackle the problem of ill-posedness. However, their analysis also assumes monotonicity in the reduced-form
relationship between the instruments and endogenous regressors. This falls outside the scope of our analysis.} However, smoothness conditions (and other shape restrictions) are
absent from a number of prominent NPIV methods.\footnote{The NPIV methods described in \citet{Chen2018}, \citet{Darolles},
\citet{Hall2005} and \citet{Horowitz2011} to name a few. Many analyses
of NPIV estimation fall into a third category in that they are general
enough to incorporate both estimators that do and do not impose smoothness,
for example \citet{Cheng}.}
Unfortunately, even with the imposition of nonparametric smoothness,
we show that NPIV estimators are more sensitive to misspecification
than parametric IV estimators or standard nonparametric regression
estimators. Moreover, if the structural function violates some smoothness
assumptions, then a procedure that imposes those assumptions cannot
be consistent, even if instruments are valid.
In sum, NPIV estimation under misspecification involves a trade-off.
Imposing strong restrictions on the structural function reduces the
sensitivity to a failure of instrumental validity but risks additional
bias. Ideally, a researcher would make the trade-off optimally and
evaluate a point-estimator with minimal worst-case asymptotic bias.
Moreover, the researcher would present error bands that directly account
for some degree of misspecification.
To this end we propose a new approach to estimation and empirical
sensitivity analysis in NPIV. Rather than assume correct specification,
we propose a method based on partial identification (\citet{Manski1989}
, \citet{Horowitz1995}).\footnote{\citet{Santos2012} and \citet{Freyberger2015} also propose partial identification approaches to NPIV.
However, their analyses assume instrumental validity.}
We replace the assumption of instrumental validity with a weaker assumption
that the deviation from instrumental validity is bounded in the supremum
norm. We also place a priori restrictions on the structural function
(e.g., a bound on its second derivatives). This yields a linear conditional
moment inequality model (\citet{Andrewsb}) in which the parameter
of interest is a function.
We provide a procedure to estimate the envelopes of the identified
set and to evaluate a point estimator with minimal worst-case asymptotic
bias. Our method is simple and computationally light. The first stage
amounts to standard non-parametric regression and the second stage
consists of linear programming. We derive uniform rates of convergence
in probability under both high-level and primitive conditions.
The estimation problem in our partially identified model is not ill-posed and
we show that our estimators can achieve the same uniform rate of convergence
as in standard series regression.
We apply our methods to the empirical setting shared by \citet{Blundell2007}
and \citet{Horowitz2011} and replicate the results of the latter.
\citet{Blundell2007} and \citet{Horowitz2011} use NPIV methods to
estimate shape-invariant Engel curves using data from The British
Family Expenditure Survey. We use our methodology to assess which
features of the structural Engel curve for food can be inferred robustly.
The rest of the paper is structured as follows. In Section 1 we provide
an overview of NPIV models and estimators in the context of full instrumental
validity. In Section 2 we consider the case in which instrumental
validity fails and analyze the asymptotic implications for NPIV estimators.
In Section 3 we present our partial identification approach to NPIV
estimation. We provide conditions for the uniform consistency and
convergence rate of the set estimator. In Section 4 we apply our methods
to the empirical setting of \citet{Horowitz2011}. Supplementary material
can be found in Appendix A and proofs in Appendix B.
\section{NPIV Estimation Under Correct Specification}
\citet{Newey2003} present the first detailed, published analysis
of nonparametric instrumental variables (NPIV) methods and their asymptotic
properties.\footnote{A brief account of NPIV and its asymptotic
properties appears as an example application in \citet{Newey1991a}.} They provide high-level conditions for identification of the structural
function in an NPIV model and describe a nonparametric two-stage procedure
for estimation of the structural function. In the years following
their foundational work many competing NPIV estimators have been introduced
and their asymptotic properties analyzed (e.g., \citet{Ai2003}, \citet{Cheng},
\citet{Darolles}, \citet{Hall2005}, \citet{Horowitz2011}).
NPIV analyses identify and estimate the `structural function', denoted
by $h_{0}$, from a conditional moment restriction of the following
form:
\begin{equation}
E[Y-h_{0}(X)|Z]=0\label{eq:esteq}
\end{equation}
$Y$ is a scalar dependent variable with finite first absolute moment,
$X$ is a vector of possibly endogenous regressors, $Z$ is a vector
of instruments. It should be understood that the equality holds `almost
surely' (i.e., with probability $1$). We assume throughout that draws
of the triple $(Y,X,Z)$ are independent and identically distributed.
Throughout we denote the support of $X$ and of $Z$ by $\mathcal{X}$
and $\mathcal{Z}$ respectively.
The structural function is treated nonparametrically. It is assumed
to lie in an infinite-dimensional set of functions $\mathcal{H}$
which is in turn a subset of a Banach space $\mathcal{B}_{X}$ with
norm $||\cdot||_{\mathcal{B}_{X}}$. We assume that for any $h\in\mathcal{B}_{X}$
the first absolute moment of $h(X)$ is finite.
It is useful to rewrite the moment condition in terms of functions
and linear operators. Let $g_{0}$ denote the reduced-form function,
that is $g_{0}(Z)=E[Y|Z]$. Again, the equality should be understood to hold almost surely.
We assume that $g_{0}$ lies in a Banach space $\mathcal{B}_{Z}$
with norm $||\cdot||_{\mathcal{B}_{Z}}$.
Let $A$ be the bounded linear operator that maps from a function
$h\in\mathcal{B}_{X}$ to the element of $\mathcal{B}_{Z}$ that is
almost surely equal to $E[h(X)|Z]$. The conditional moment restriction
(\ref{eq:esteq}) can then be expressed as the linear operator equation
$A[h_{0}]=g_{0}$.
\subsection{Standard Assumptions on the Joint Distribution of $X$ and $Z$}
Below we state two properties of the joint distribution of the instruments
and regressors, both of which are imposed throughout the NPIV literature.
The first of these assumptions, `completeness', is key to identification
of the structural function from the NPIV moment condition. Completeness
is a topic of intense discussion in the NPIV literature. For some
recent work see \citet{Andrews2017b}, \citet{Canay2013}, \citet{Chena},
\citet{DHaultfoeuille2011}, \citet{Freyberger2017a}, \citet{Hu2018}.
The second assumption is known as `compactness' of the operator $A$
defined above. Many useful results from functional analysis apply
to operators that are compact, and so the compactness assumption simplifies
analysis of the NPIV estimation problem. For some discussion of compactness
in NPIV (including primitive conditions that imply this property)
see, e.g., \citet{Florens2011} and \citet{Horowitz2011}.
\theoremstyle{definition}
\newtheorem*{A1.1}{Assumption 1.1 ($\mathcal{H}$-Completeness)}
\begin{A1.1}
For any $h\in\mathcal{H}$, $E[h(X)|Z]=0\iff h(X)=0$
\end{A1.1}
\newtheorem*{A1.2}{Assumption 1.2 (Compactness)}
\begin{A1.2}
The linear operator $A$ is a compact operator from $\mathcal{B}_{X}$
into $\mathcal{B}_{Z}$.
\end{A1.2}
\subsection{Ill-posedness and Regularization}
Under Assumption 1.1 the structural function is the unique
solution to the estimating equation (\ref{eq:esteq}) in the parameter
space $\mathcal{H}$. For now let us assume that $\mathcal{H}=\mathcal{B}_{X}$,
then the operator $A$ is invertible on its range.
Denoting the inverse
by $A^{-1}$ we have
$h_{0}=A^{-1}[g_{0}]$. The objects on the right-hand side are known functionals of the joint
distribution of observables. Thus this expression shows the structural
function is identified.
If $A$ is an infinite-dimensional and compact operator then the problem
$A[h]=g_{0}$ is `ill-posed'. In particular, the operator $A$ does
not have a closed range and the inverse $A^{-1}$ is discontinuous
everywhere on its domain. Let Assumption 1.2 hold, then:
\[
\sup_{g\in R(A):\,||g||_{\mathcal{B}_{Z}}\leq1}||A^{-1}[g]||_{\mathcal{B}_{X}}=\infty
\]
Where $R(A)\subset\mathcal{B}_{Z}$ is the range of $A$.
Suppose $A$ is known but $g_{0}$ is replaced with a consistent empirical
estimate $\hat{g}_{n}$. Because $A^{-1}$ is discontinuous, an estimate
$\hat{h}_{n}=A^{-1}[\hat{g}_{n}]$ need not converge in probability
to $h_{0}$. For this reason one employs a `regularization scheme'.
The researcher specifies a sequence of continuous functions $\{Q_{k}\}_{k=1}^{\infty}$
that converges pointwise to the discontinuous operator $A^{-1}$.\footnote{That is, for any fixed $g\in R(A)$, $||Q_{k}[g]-A^{-1}[g]||_{\mathcal{B}_{X}}\to0$.}
For a discussion of regularization in the context of NPIV see for
example \citet{Darolles}.
In economic applications the linear operator $A$ is not a priori
known and must be estimated from the data, correspondingly a regularized
inverse must also be estimated empirically. For each $k$ let $\hat{Q}_{n,k}$
estimate $Q_{k}$. Let $\hat{g}_{n}$ be an estimator of $g_{0}$.
A typical NPIV estimator $\hat{h}_{n}$ takes the following form:
\begin{equation}
\hat{h}_{n}=\hat{Q}_{n,K_{n}}[\hat{g}_{n}]\label{eq:standardestimator}
\end{equation}
Where $K_{n}$ is a sequence of natural numbers that grows to infinity
with the sample size.
The choice of $K_{n}$ controls the degree of regularization. If $K_{n}$
is large then the estimator is highly sensitive to error in $\hat{g}_{n}$.
Therefore $K_{n}$ must grow sufficiently slowly so that the increased
sensitivity is balanced by the increased precision in the estimate
$\hat{g}_{n}$. However, $K_{n}$ must grow to infinity because the
regularization itself may impart bias.
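To fix ideas, the following numerical sketch implements an estimator of
the form (\ref{eq:standardestimator}), taking the regularization scheme
$Q_{k}$ to be spectral cutoff (truncated singular value decomposition).
The discretized operator, the data generating process and the cutoff
values are hypothetical and serve only to make the snippet runnable;
nothing in our analysis depends on this particular scheme.
\begin{verbatim}
# A minimal sketch: spectral-cutoff regularization of A[h] = g.
# The operator A, h0 and the noise level are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
m = 50
x = np.linspace(0, 1, m)
A = np.exp(-np.subtract.outer(x, x) ** 2) / m   # smoothing, compact-like
h0 = np.sin(2 * np.pi * x)
g0 = A @ h0                                     # reduced form under validity

def spectral_cutoff_inverse(A, g, k):
    """Q_k[g]: invert A on its k leading singular directions only."""
    U, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

# A noisy estimate of g0: small cutoffs K_n stabilize the solution,
# large cutoffs track A^{-1} and amplify the error in g_hat.
g_hat = g0 + 1e-3 * rng.standard_normal(m)
for k in (5, 15, 40):
    h_hat = spectral_cutoff_inverse(A, g_hat, k)
    print(k, np.max(np.abs(h_hat - h0)))
\end{verbatim}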
For high-level conditions for consistency of an estimator of the form
above, see Theorem A.1 in Appendix A.
\section{NPIV Estimation Under Misspecification}
We now allow for the possibility that the moment condition (\ref{eq:esteq})
is misspecified, i.e., that $E[Y-h_{0}(X)|Z]\neq0$. Define a function
$u_{0}\in\mathcal{B}_{Z}$ by:
\[
u_{0}(Z)=E[Y-h_{0}(X)|Z]
\]
In terms of the notation developed in the previous section we have
that:
\begin{equation}
g_{0}=A[h_{0}]+u_{0}\label{eq:opeqendog}
\end{equation}
$u_{0}$ measures the deviation of the NPIV conditional moment from
zero. In keeping with our interpretation of misspecification as endogeneity
of the instruments, we sometimes refer to $u_{0}$ as the `instrumental
endogeneity'. It is natural to measure the degree of misspecification
by the norm of the function $u_{0}$. If the norm of $u_{0}$ is small,
then the NPIV conditional moment is close to zero with respect to
the norm. In the previous section we introduced a norm $||\cdot||_{\mathcal{B}_{Z}}$
on the function space that contains the reduced-form function, we
use this same norm to define the degree of misspecification.
\subsection{Asymptotic Bias With Endogenous Instruments}
To measure the sensitivity of an NPIV estimator to instrumental endogeneity,
we consider the largest possible asymptotic bias when the model is
perturbed away from correct specification. We consider perturbations
so that the degree of misspecification is bounded and parameters of
the model that are not directly related to misspecification are fixed.
Keeping these parameters fixed prevents us from perturbing the model
in such a way that the instruments become weak or the moments of the
reduced-form error become large. We call our measure of sensitivity
the `worst-case asymptotic bias'. It is a special case of the `maximum
bias' discussed in \citet{Huber2011}, albeit extended to estimators
whose values are functions rather than scalars.
We fix the true structural function $h_{0}$ and the joint probability
distribution of the regressors $X$, instruments $Z$ and the reduced-form
error $\eta$ defined by $\eta=Y-E[Y|Z]$. We denote this joint probability
by `$\mu_{XZ\eta}$'. Note that $\mu_{XZ\eta}$, $h_{0}$ and $u_{0}$
together determine the joint distribution of the observables $Y$,
$X$ and $Z$ and therefore the distribution of any NPIV estimator
at any sample size (recall that draws of $(Y,X,Z)$ are i.i.d.).
Let $\hat{h}_{n}$ be an estimator of $h_{0}$, the estimation error
of $\hat{h}_{n}$ is $||\hat{h}_{n}-h_{0}||_{\mathcal{B}_{X}}$. If
$\hat{h}_{n}$ converges in probability then we define the asymptotic
bias to be the probability limit of the estimation error.
With the structural function $h_{0}$ fixed, the asymptotic bias is
fully determined by $\mu_{XZ\eta}$ and the degree of misspecification
$u_{0}$. So we fix $\mu_{XZ\eta}$ and define the `worst-case asymptotic
bias' of $\hat{h}_{n}$ to be the largest asymptotic bias given $u_{0}\in R(A)$
and $||u_{0}||_{\mathcal{B}_{Z}}\leq b$. As a function of $b$ this
is:
\[
bias_{\hat{h}_{n}}(b)=\sup_{u_{0}\in R(A):\,||u_{0}||_{\mathcal{B}_{Z}}\leq b}\text{plim}_{n\to\infty}||\hat{h}_{n}-h_{0}||_{\mathcal{B}_{X}}
\]
The worst-case asymptotic bias of an estimator captures the sensitivity
of the estimator to misspecification in the form of instrumental endogeneity.
If $bias_{\hat{h}_{n}}(b)$ is small when the argument $b$ is small,
then a tight bound on the magnitude of the misspecification implies
that any asymptotic bias in the estimator $\hat{h}_{n}$ must be small.
Thus the behavior of $bias_{\hat{h}_{n}}(b)$ around $b=0$ captures
the robustness/non-robustness of the estimator to a small amount of
misspecification. If $bias_{\hat{h}_{n}}(b)$ converges to zero as
the argument $b$ goes to zero, we describe the estimator $\hat{h}_{n}$
as `robust' to misspecification in the form of invalid instruments.
Theorem 2.1 applies to any estimator
that is consistent whenever instruments are valid and the structural
function lies in the interior of the parameter space $\mathcal{H}$.
It states that if the structural function is indeed in the interior
of the parameter space $\mathcal{H}$, then the worst-case asymptotic
bias must be greater than some strictly positive constant, no matter
how small the degree of misspecification. In fact, if the parameter
space is the whole function space $\mathcal{B}_{X}$, then the worst-case
asymptotic bias must be infinite.
\theoremstyle{plain}
\begin{thm}
Fix $\mu_{XZ\eta}$ so that Assumptions 1.1 and 1.2 hold. Let $\hat{h}_{n}$ be an NPIV estimator that has a probability limit
and is consistent under instrumental validity whenever $h_{0}\in int(\mathcal{H})$.
That is, for any $h_{0}\in int(\mathcal{H})$, if $u_{0}=0$ then $\underset{n\to\infty}{\text{plim}}||\hat{h}_{n}-h_{0}||_{\mathcal{B}_{X}}=0$.
Then for any $h_{0}\in int(\mathcal{H})$ the estimator $\hat{h}_{n}$
is not robust. More precisely, if $h_{0}$ is at the center of an open ball $\mathcal{V}\subseteq\mathcal{H}$
with radius $r$, then for any $b>0$, $bias_{\hat{h}_{n}}(b)\geq r$. Furthermore, if $\mathcal{H}=\mathcal{B}_{X}$ then for any $b>0$, $bias_{\hat{h}_{n}}(b)=\infty$.
\end{thm}
To get an idea of the finite-sample effect of misspecification, suppose
for simplicity that $\mathcal{H}=\mathcal{B}_{X}$ and consider an
estimator of the form (\ref{eq:standardestimator}) discussed in Section
1. Below we give a lower bound on the error of such an estimator.
The bound (which follows by the reverse triangle inequality) contains
three parts: a) the error due to misspecification, b) the error due
to regularization (i.e., due to replacing $A^{-1}$ with $Q_{K_{n}}$),
and c) the estimation error in $\hat{g}_{n}$ and $\hat{Q}_{n,K_{n}}$:
\begin{align*}
||\hat{h}_{n,K_{n}}-h_{0}||_{\mathcal{B}_{X}}\geq & ||Q_{K_{n}}[u_{0}]||_{\mathcal{B}_{X}}\\
- & ||(A^{-1}-Q_{K_{n}})A[h_{0}]||_{\mathcal{B}_{X}}\\
- & ||\hat{Q}_{n,K_{n}}[\hat{g}_{n}]-Q_{K_{n}}[g_{0}]||_{\mathcal{B}_{X}}
\end{align*}
For the worst-case asymptotic bias we take the supremum over all $u_{0}\in R(A)$
such that $||u_{0}||_{\mathcal{B}_{Z}}\leq b$. The first term then
becomes $||Q_{K_{n}}||_{op}b$.
Where `$||\cdot||_{op}$' denotes the operator norm.\footnote{For an operator $\mathbb{T}:\mathcal{B}_{Z}\to\mathcal{B}_{X}$, $||\mathbb{T}||_{op}=\sup_{g\in\mathcal{B}_{Z}:\,||g||_{\mathcal{B}_{Z}}=1}||\mathbb{T}[g]||_{\mathcal{B}_{X}}$.}
For a given $k$, the operator norm of $Q_{k}$ is finite. However,
as we discuss in Section 1, consistency with valid instruments generally
requires $Q_{K_{n}}$ converge pointwise to $A^{-1}$. Under Assumptions
1.1 and 1.2 this necessarily implies that $||Q_{K_{n}}||_{op}\to\infty$.
And so if $b>0$ the first term in the worst-case asymptotic bias
grows to infinity. Consistency also typically requires that the remaining
two terms in the expansion above go to zero. So we see the worst-case
asymptotic bias is divergent.
\subsection{The Role of Smoothness}
The parameter space $\mathcal{H}$ plays a key role in Theorem
2.1. The theorem only applies when the structural function lies in
the interior of the parameter space $\mathcal{H}$. If the parameter
space has an empty interior then this condition is trivially false.
We consider two particular classes of sets in Assumption 2.1.
If $\mathcal{H}$ belongs to either class it must have an empty interior.
If the structural function is known to belong to a set in either class
then one can generally construct estimators that are consistent under
instrumental validity and are robust to the failure of instrumental
validity.
The two classes of sets that we consider are given below.
\theoremstyle{definition}
\newtheorem*{A2.1}{Assumption 2.1}
\begin{A2.1}
i. $\mathcal{H}$ is a compact infinite-dimensional subset
of $\mathcal{B}_{X}$. ii. $\mathcal{H}$ is a finite-dimensional
linear subspace of $\mathcal{B}_{X}$.
\end{A2.1}
Assumptions of the infinite-dimensional type are employed extensively
in the literature (e.g., in \citet{Newey2003}, \citet{Ai2003}, \citet{Blundell2007},
\citet{Freyberger2017a}, \citet{Santos2012}).
The compact function spaces employed in the NPIV literature generally correspond
to sets of smooth functions, e.g., functions in a given H{\"o}lder
ball. The finite-dimensional linear case corresponds to an IV model
with a parametric and linear-in-parameters second stage.
In either case we can replace the NPIV estimating equation with an
alternative estimating equation $A[h_{0}]=P_{Z}[g_{0}]$, where
$P_{Z}$ denotes a projection from $\mathcal{B}_{Z}$ to $A[\mathcal{H}]$
(the image of $\mathcal{H}$ under $A$). A projection onto $A[\mathcal{H}]$
is a function that maps elements of $\mathcal{B}_{Z}$ to elements
of $A[\mathcal{H}]$ and when applied to elements of $A[\mathcal{H}]$
leaves them unchanged.
If the instruments are valid and the structural function is in $\mathcal{H}$,
then the reduced-form $g_{0}$ lies in $A[\mathcal{H}]$. Consequently,
$P_{Z}[g_{0}]=g_{0}$. So if the structural function lies in $\mathcal{H}$
and instruments are valid then the alternative estimating equation
holds.
If Assumption 1.1 holds then under correct specification
there is a unique element $h_{0}$ of $\mathcal{H}$ that satisfies
this equation. Let $A_{\mathcal{H}}$ denote the restriction of the
operator $A$ to $A[\mathcal{H}]$. Then the unique solution in $\mathcal{H}$
to the alternative estimating equation can be written as $A_{\mathcal{H}}^{-1}P_{Z}[g_{0}]$.
If $\mathcal{H}$ satisfies either Assumption
2.1.i or 2.1.ii, then any estimator that converges to a unique
solution in $\mathcal{H}$ to the alternative estimating equation is robust to instrumental
endogeneity.
\theoremstyle{plain}
\begin{thm}
Fix $\mu_{XZ\eta}$ so that Assumption 1.1 holds and suppose
that either of Assumptions 2.1.i or 2.1.ii holds.
Suppose $P_{Z}$ is a uniformly continuous projection onto $A[\mathcal{H}]$
and the estimator $\hat{h}_{n}$ satisfies:
\[
||\hat{h}_{n}-A_{\mathcal{H}}^{-1}P_{Z}[g_{0}]||_{\mathcal{B}_{X}}\to^{p}0
\]
Then, if $h_{0}\in\mathcal{H}$ the estimator is robust. That is $\lim_{b\to0}bias_{\hat{h}_{n}}(b)=0$.
\end{thm}
Theorem 2.2 shows that the worst-case asymptotic bias goes
to zero with the tightness of the bound on $u_{0}$, but it does not
give a rate. We now show that Assumptions 2.1.i and 2.1.ii
have rather different implications. In the finite-dimensional linear
case the asymptotic bias goes to zero at the same rate as the
bound $b$. However, under weak additional conditions, in the compact
and infinite-dimensional case the rate is strictly slower. In short,
even if nonparametric smoothness is imposed in estimation, NPIV estimators
are still less robust than parametric linear IV estimators or standard
nonparametric regression estimators.
\theoremstyle{definition}
\newtheorem*{A2.2}{Assumption 2.2}
\begin{A2.2}
$\mathcal{H}$ is convex and symmetric.\footnote{A subset $\mathcal{H}$ of a vector space is symmetric if $h\in\mathcal{H}$
implies $-h\in\mathcal{H}$.} Furthermore there exists $\alpha\in(0,1)$ so that $\frac{1}{\alpha}h_{0}\in\mathcal{H}$.
\end{A2.2}
The conditions Assumption 2.2 places on $\mathcal{H}$ hold
for the compact infinite-dimensional spaces typically used in the
NPIV literature including all of those considered by \citet{Freyberger2017}.
The assumption that $\frac{1}{\alpha}h_{0}\in\mathcal{H}$ for some
$\alpha\in(0,1)$ is, loosely speaking, a requirement that the structural
function does not lie at the edge of the parameter space.
\theoremstyle{plain}
\begin{thm}
Fix $\mu_{XZ\eta}$ so that Assumptions 1.1 and 1.2
hold.
a. Suppose Assumption 2.1.ii holds and suppose $P_{Z}$ is a
bounded linear projection operator onto $A[\mathcal{H}]$ (such a
projection exists because $A[\mathcal{H}]$ is finite-dimensional). Suppose that for any $g_{0}\in A[\mathcal{H}]$
the estimator $\hat{h}_{n}$ satisfies:
\[
||\hat{h}_{n}-A_{\mathcal{H}}^{-1}P_{Z}[g_{0}]||_{\mathcal{B}_{X}}\to^{p}0
\]
Then for any $b>0$ there exists a constant $C$ (not dependent on
$h_{0}$) so that for any $h_{0}\in\mathcal{H}$, $bias_{\hat{h}_{n}}(b)/b\leq C$.
b. Suppose Assumptions 2.1.i and 2.2 hold. Suppose
the estimator $\hat{h}_{n}$ is consistent for $h_{0}$ whenever $h_{0}\in\mathcal{H}$
and $u_{0}=0$. Then $\lim_{b\to0}bias_{\hat{h}_{n}}(b)/b=\infty$.
\end{thm}
\subsection{Continuous Functionals}
The sensitivity results in Subsection 2.1 apply to estimation of the
structural function itself. If the object of interest is instead a
functional of the structural function then it may be possible to construct
estimates that a) are consistent under instrumental validity without
any a priori restrictions on the true structural function (i.e., for
any $h_{0}\in\mathcal{B}_{X}$), and also b) have asymptotic bias
under instrumental endogeneity that is at most proportional to the
magnitude of the endogeneity.
The estimation of functionals of the structural function in NPIV models
is analyzed extensively in the literature, for example in \citet{Ai2003}.
\citet{Severini2012} provide efficiency bounds for a class of linear
functionals of the structural function in some statistical inverse
problems, \citet{Ichimuraa} expand upon their results. Following
\citet{Severini2012}, we let the underlying function space $\mathcal{B}_{X}$
be $L_{2}(\mu_{X})$. The function space $\mathcal{H}$ is the whole
of $L_{2}(\mu_{X})$. `$\mu_{X}$' denotes the distribution of the
regressors.
\citet{Severini2012} consider linear functionals of the form $\gamma_{0}=E[w(X)h_{0}(X)]$,
where $w\in L_{2}(\mu_{X})$ is a known weighting function. By the
Riesz representation theorem, any continuous linear functional of
the structural function (continuous in the sense of $L_{2}(\mu_{X})$)
can be written in the form above for some $w$.
\citet{Severini2012} show that a linear functional of the form above
is estimable at rate $\sqrt{n}$ only if there exists a function $\alpha\in L_{2}(\mu_{Z})$
(where $\mu_{Z}$ is the probability measure of $Z$) so that:
\begin{equation}
w(X)=E[\alpha(Z)|X]\label{eq:condneed}
\end{equation}
Under this same condition, robust estimation of $\gamma_{0}$ is achievable.
If the condition fails, robust estimation is (under Assumptions
1.1 and 1.2) impossible without further restrictions to the parameter space.
Let $\hat{\gamma}_{n}$ be an estimator of $\gamma_{0}$ that has
some probability limit. Fix $h_{0}$ and $\mu_{XZ\eta}$. The worst-case asymptotic bias of $\hat{\gamma}_{n}$ given
$u_{0}\in R(A)$ and $||u_{0}||_{L_{2}(\mu_{Z})}\leq b$ is:
\[
bias_{\hat{\gamma}_{n}}(b)=\sup_{u_{0}\in R(A):\,||u_{0}||_{L_{2}(\mu_{Z})}\leq b}\underset{n\to\infty}{\text{plim}}\,|\hat{\gamma}_{n}-\gamma_{0}|
\]
In Theorem 2.4 we fully characterize those continuous
linear functionals that can be estimated robustly. As stated above,
the key condition is the existence of a function $\alpha$ that satisfies
(\ref{eq:condneed}).
\begin{thm}
Fix $\mu_{XZ\eta}$ so that Assumptions 1.1 and 1.2 hold. Suppose $\hat{\gamma}_{n}$
is consistent under instrumental validity whenever $h_{0}\in L_{2}(\mu_{X})$.
That is, if $u_{0}=0$ then $|\hat{\gamma}_{n}-\gamma_{0}|\to^{p}0$. Now fix the true structural function $h_{0}\in L_{2}(\mu_{X})$.
a. Suppose there exists $\alpha\in L_{2}(\mu_{Z})$ so that $w(X)=E[\alpha(Z)|X]$.
Then the estimator is robust. In particular:
\[
bias_{\hat{\gamma}_{n}}(b)=b\inf_{\begin{array}{c}
\alpha\in L_{2}(\mu_{Z})\\
w(X)=E[\alpha(Z)|X]
\end{array}}||\alpha||_{L_{2}(\mu_{Z})}
\]
b. If there is no $\alpha\in L_{2}(\mu_{Z})$ so that $w(X)=E[\alpha(Z)|X]$
then the estimator is non-robust and in fact for all $b>0$, $bias_{\hat{\gamma}_{n}}(b)=\infty$.
\end{thm}
\subsection{Discussion}
Theorems 2.1 and 2.3 may worry an empirical researcher.
Theorem 2.1 shows that NPIV estimation methods that do not
impose strong restrictions on the structural function can exhibit
highly aberrant asymptotic behavior, no matter how small the degree
of misspecification. If the researcher accepts that NPIV models are
always at least mildly misspecified, then imposing at least some smoothness
in estimation is paramount.
Theorem 2.2 shows that smoothness allows for robust estimation.
However, Theorem 2.3 shows that imposing nonparametric smoothness
still results in estimators that are less robust than those that impose
parametric restrictions.
Even if the researcher accepts a priori that, say,
the true structural function lies in a particular H{\"o}lder
ball, it may nonetheless be optimal to impose even stronger smoothness
restrictions in order to reduce the sensitivity to a failure of instrumental
validity. However, if the stronger restrictions do not hold then an estimator that imposes them is asymptotically biased even if instruments are valid.
In short, the researcher faces a trade-off between the sensitivity
to instrumental endogeneity and the asymptotic bias that results
from imposing overly strong smoothness restrictions. The methods we
present in the next section are motivated in part by this trade-off.
\section{The Partial Identification Approach}
The results in the previous section show that point estimation in
an NPIV model entails a trade-off between two different
sources of asymptotic bias. A researcher must impose some restrictions
(say, smoothness assumptions) on the structural function in order
to reduce the sensitivity of the estimates to the failure of instrumental
validity. But if the restrictions are too strong then imposing them imparts asymptotic bias.
In this
section we propose a partial identification approach to NPIV estimation
which explicitly accounts for both sources of bias. This set identification
strategy allows us to evaluate error bands that account for all misspecification
and allows us to achieve the smallest possible worst-case asymptotic bias
in point estimation. We use a priori bounds on the deviation from
instrumental validity and some restrictions on the structural function
(e.g., a bound on its second derivatives). Our approach is similar
in spirit to that of \citet{Conley2008} who perform partial identification
in the linear IV model allowing for some failure of instrumental validity.
The set estimation problem is (under weak conditions) well-posed.
This contrasts with the case of standard point estimation in NPIV models.
In our approach, the NPIV moment condition is replaced with an inequality constraint.
Our convergence rate results depend crucially on the existence of functions in the parameter space
for which this inequality does not bind. As such, our results do not extend to the point identified
case and differ fundamentally from those in, for example, \citet{Chen2018}.
We assume that the structural function $h_{0}$ lies in $\mathcal{B}_{X}$,
the Banach space of bounded functions on the support of $X$. To achieve
partial identification we assume that $h_{0}$ satisfies condition
\textbf{a. }below, which is expressed as a condition on an element
$h\in\mathcal{B}_{X}$:
\textbf{a.} $|E[Y-h(X)|Z]|\leq b$
The inequality is understood to hold with probability $1$ and the
bound $b$ is treated as a priori known. In the case of correct specification
the structural function satisfies \textbf{a.} with $b=0$, thus condition
\textbf{a.} weakens the NPIV moment restriction to allow for a
limited degree of misspecification.\footnote{To map the constraint \textbf{a. }on $h_{0}$ into the functional
notation developed in previous sections, let $\mathcal{B}_{Z}$ be
the space of essential-supremum bounded functions on the support of
$Z$ equipped with the essential-supremum norm. Then the almost sure
inequality can be written as $||g_{0}-A[h_{0}]||_{\mathcal{B}_{Z}}\leq b$.}
In addition, we assume that the structural function $h_{0}$ lies
in a set of functions $\mathcal{H}\subseteq\mathcal{B}_{X}$.
The results in Section 2 suggest that the space $\mathcal{H}$ should
be sufficiently restrictive for the identified set to be meaningful.
We assume that $\mathcal{H}$ can be expressed in terms of inequality
constraints as follows. Let $\mathbb{T}$ be a linear map from
$\mathcal{B}_{X}$ to $(\mathcal{B}_{X})^{d}$. That is, for any $h\in\mathcal{B}_{X}$,
$\mathbb{T}[h](x)$ is a length-$d$ column vector and each coordinate
of $\mathbb{T}[h]$ is a function in $\mathcal{B}_{X}$. Let $c$
be a function in $(\mathcal{B}_{X})^{d}$. Then $\mathcal{H}$ is
the set of functions in $\mathcal{B}_{X}$ that satisfy:
\textbf{b. }$\mathbb{T}[h](x)\leq c(x)\,\forall x\in\mathcal{X}$
Thus if $\mathbb{T}[h](x)$ is a vector of length $d$, then $c(x)$
is a column vector of the same length and the inequality is assumed
to hold component-wise. Technically, the constraint \textbf{b. }should
also require that $h$ is in the domain of $\mathbb{T}$ (which could
be a proper subset of $\mathcal{B}_{X}$); we leave this implicit
for ease of exposition.
In the empirical application in Section 4 the regressors are one-dimensional
and we take $\mathcal{H}$ to be the set of functions that map to
the unit interval and have second derivatives bounded by a given constant.
In a slight abuse of notation we simply denote the constant by $c$.
In this case $c(x)=(1,0,c,c)'$ and the operator $\mathbb{T}$ is
given by:
\[
\mathbb{T}[h](x)=\big(h(x),-h(x),\frac{\partial^{2}}{\partial x^{2}}h(x),-\frac{\partial^{2}}{\partial x^{2}}h(x)\big)'
\]
The conditions \textbf{a.} and \textbf{b.} define a linear conditional
moment inequality model. The moment inequality in condition \textbf{a.
}may not appear linear, but note that for any scalars $y$ and $b$,
the inequality $|y|\leq b$ is equivalent to the two linear inequalities
$y\leq b$ and $-y\leq b$.
We denote by $\Theta$ the set of functions in $\mathcal{H}$ that
satisfy the moment inequality in \textbf{a.}, or equivalently the
functions in $\mathcal{B}_{X}$ that satisfy constraints \textbf{a.}
and \textbf{b.}. We refer to $\Theta$ as the `identified set of functions'.
Let $\underline{\theta}$ and $\bar{\theta}$ denote the lower and
upper envelopes of the identified set of functions $\Theta$. Our
goal is to estimate these envelopes. For a given $x$ in the support
of $X$, let `$\Theta_{x}$' denote the identified set for the value
of the structural function at $x$. A value $\theta$ is in $\Theta_{x}$
if and only if $\theta=h(x)$ for some function $h\in\Theta$. Proposition
3.1 stated and proven in Appendix B shows that $\Theta_{x}$ is an
interval with end points $\underline{\theta}(x)$ and $\bar{\theta}(x)$.
This motivates our focus on estimation of the envelopes: consistent
estimation of the envelopes is equivalent to consistent estimation
of the identified set for $h_{0}(x)$ at each point $x$ in the support
of $X$.
Let $\hat{h}_{n}$ be a point estimator of the structural function
that converges pointwise in probability to a limit $h_{\infty}$.
Under the assumption that $h_{0}$ satisfies \textbf{a. }and \textbf{b.
}the pointwise worst-case asymptotic bias of $\hat{h}_{n}$ at some
$x\in\mathcal{X}$ is given by:
\begin{align*}
\sup_{h_{0}\in\Theta}\underset{n\to\infty}{\text{plim}}|\hat{h}_{n}(x)-h_{0}(x)| & =\max\big\{|h_{\infty}(x)-\underline{\theta}(x)|,|h_{\infty}(x)-\bar{\theta}(x)|\big\}
\end{align*}
If $\hat{h}_{n}$ converges uniformly in probability to $h_{\infty}$
then the uniform worst-case asymptotic bias is given by:
\begin{align*}
\sup_{h_{0}\in\Theta}\underset{n\to\infty}{\text{plim}}\sup_{x\in\mathcal{X}}|\hat{h}_{n}(x)-h_{0}(x)| & =\sup_{x\in\mathcal{X}}\max\big\{|h_{\infty}(x)-\underline{\theta}(x)|,|h_{\infty}(x)-\bar{\theta}(x)|\big\}
\end{align*}
Thus an estimator $\hat{h}_{n}$ that converges uniformly in probability
achieves minimal pointwise and uniform worst-case asymptotic bias
if and only if it converges uniformly to $\frac{1}{2}(\underline{\theta}+\bar{\theta})$.
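(This is the familiar midpoint property: for reals $a\leq b$, $\min_{c\in\mathbb{R}}\max\{|c-a|,|c-b|\}$ is attained at $c=(a+b)/2$, where it equals $(b-a)/2$; apply it pointwise with $a=\underline{\theta}(x)$ and $b=\bar{\theta}(x)$.)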
We can define $\underline{\theta}$ and $\bar{\theta}$ formally as
follows:
\begin{align*}
\underline{\theta}(x)= & \inf_{h\in\mathcal{B}_{X}}h(x)\\
& \text{subject to conditions \textbf{a. }and \textbf{b.}}\\
\bar{\theta}(x)= & \sup_{h\in\mathcal{B}_{X}}h(x)\\
& \text{subject to conditions \textbf{a. }and \textbf{b.}}
\end{align*}
The problems above cannot be solved in practice and therefore we refer
to these as the `infeasible' problems. Constraint \textbf{a. }involves
a conditional expectation and therefore depends on the distribution
of the data which is not a priori known. Furthermore, the optimization
is over the space $\mathcal{B}_{X}$ of bounded functions on $\mathcal{X}$
and so if $X$ is continuously distributed then $\mathcal{B}_{X}$
is infinite-dimensional. Finally, if $X$ and $Z$ are continuously
distributed then the inequalities in constraints \textbf{a.} and \textbf{b.
}must be enforced at an infinite set of values.
We describe a method for estimating the envelopes $\underline{\theta}$
and $\bar{\theta}$. Estimation requires that we replace the infeasible
problems above with feasible ones. Instead of optimizing over $\mathcal{B}_{X}$
we optimize over a finite-dimensional subspace whose dimension grows
with the sample size. We replace the conditional expectation in \textbf{a.
}by an empirical analogue constructed using non-parametric regression
in a first stage. We enforce the inequalities in each constraint only
on finite grids that become increasingly fine as the sample size grows.
\subsection{An Estimator of the Identified Set}
Let $\Phi_{n}$ be a length-$K_{n}$ column vector of basis functions
defined on $\mathcal{X}$, where the dimension $K_{n}$ grows with
the sample size. We assume that each component of $\Phi_{n}$ is in
the domain of $\mathbb{T}$. For example, in the case above, in which $\mathcal{H}$
is a set of functions with bounded second derivatives, $\Phi_{n}$ must
be twice differentiable. Let $\mathbb{T}[\Phi_{n}']$ denote the $d$-by-$K_{n}$
matrix whose $k^{th}$ column is $\mathbb{T}$ applied to the $k^{th}$
component of $\Phi_{n}$. Because $\mathbb{T}$ is linear $\mathbb{T}[\Phi_{n}'\beta]=\mathbb{T}[\Phi_{n}']\beta$.
The researcher estimates the reduced-form function $g_{0}(Z)=E[Y|Z]$
using standard non-parametric regression methods. The researcher also
estimates the length-$K_{n}$ column vector of regression functions
$\Pi_{n}(Z)=E[\Phi_{n}(X)|Z]$. Let $\hat{g}_{n}$
denote the estimate of $g_{0}$ and $\hat{\Pi}_{n}$ the estimate
of $\Pi_{n}$.
In our empirical application we use series regression for the first
stage. Let $\Psi_{n}$ be a length-$L_{n}$ column vector of basis
functions defined on $\mathcal{Z}$ with $L_{n}\to\infty$. Let $\hat{Q}_{n}=\frac{1}{n}\sum_{i=1}^{n}\Psi_{n}(Z_{i})\Psi_{n}(Z_{i})'$.
Then (assuming $\hat{Q}_{n}$ is non-singular) the series first-stage
regression functions are defined by:
\begin{eqnarray}
\hat{g}_{n}(z)=\Psi_{n}(z)'\hat{Q}_{n}^{-1}\frac{1}{n}\sum_{i=1}^{n}\Psi_{n}(Z_{i})Y_{i}\label{eq:regg}\\
\hat{\Pi}_{n}(z)=\Psi_{n}(z)'\hat{Q}_{n}^{-1}\frac{1}{n}\sum_{i=1}^{n}\Psi_{n}(Z_{i})\Phi_{n}(X_{i})'\label{eq:regpi}
\end{eqnarray}
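For concreteness, the first-stage regressions in (\ref{eq:regg}) and
(\ref{eq:regpi}) amount to a few lines of matrix algebra. The sketch
below, written in Python with \texttt{numpy}, is purely illustrative;
the function and variable names are ours and are not part of any code
accompanying this paper.
\begin{verbatim}
# Minimal sketch of the series first stage; psi and phi map a point
# to the length-L_n and length-K_n basis vectors respectively.
import numpy as np

def first_stage(Y, X, Z, psi, phi):
    n = len(Y)
    Psi = np.stack([psi(z) for z in Z])     # n x L_n design matrix
    Q_hat = Psi.T @ Psi / n                 # empirical analogue of Q_n
    Q_inv = np.linalg.inv(Q_hat)            # assumes non-singularity
    b_g = Q_inv @ (Psi.T @ Y / n)           # coefficients of g_hat
    Phi = np.stack([phi(x) for x in X])     # n x K_n
    B_pi = Q_inv @ (Psi.T @ Phi / n)        # L_n x K_n coefficients
    g_hat = lambda z: psi(z) @ b_g          # scalar-valued
    Pi_hat = lambda z: psi(z) @ B_pi        # length-K_n vector
    return g_hat, Pi_hat
\end{verbatim}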
Let $\mathcal{X}_{n}$ be a finite grid of points in the support of
$X$ and let $\mathcal{Z}_{n}$ be a grid of points in the support
of $Z$. The conditions \textbf{a.'} and \textbf{b.'} below are constraints
on a vector $\beta\in\mathbb{R}^{K_{n}}$:
\textbf{a.'} $|\hat{g}_{n}(z)-\hat{\Pi}_{n}'(z)\beta|\leq b,\,\forall z\in\mathcal{Z}_{n}$
\textbf{b.'} $\mathbb{T}[\Phi_{n}](x)'\beta\leq c(x),\,\forall x\in\mathcal{X}_{n}$\\
The estimators of $\underline{\theta}(x)$ and $\bar{\theta}(x)$
for a given $x$ in the support of $X$ are $\underline{\hat{\theta}}_{n}(x)$
and $\hat{\bar{\theta}}_{n}(x)$ respectively. These are defined as
the solutions to the following linear programming problems:
\begin{align*}
\hat{\bar{\theta}}_{n}(x) & =\max_{\beta\in\mathbb{R}^{K_{n}}}\Phi_{n}(x)'\beta\\
& \text{subject to conditions \textbf{a.'} and \textbf{b.'}}\\
\hat{\underline{\theta}}_{n}(x) & =\min_{\beta\in\mathbb{R}^{K_{n}}}\Phi_{n}(x)'\beta\\
& \text{subject to conditions \textbf{a.'} and \textbf{b.'}}
\end{align*}
Unlike the problems that define $\underline{\theta}(x)$ and $\bar{\theta}(x)$,
the problems above are feasible: they can be solved in practice. They are linear programming problems each with $K_{n}$ scalar parameters
and $2|\mathcal{Z}_{n}|+d|\mathcal{X}_{n}|$ linear constraints (where
$|\mathcal{X}_{n}|$ is the number of points in the grid $\mathcal{X}_{n}$
and $|\mathcal{Z}_{n}|$ is the number of points in $\mathcal{Z}_{n}$,
recall $d$ is the dimension of $\mathbb{T}[h](x)$).
Using the envelope estimators above we can evaluate a central estimator
$\hat{h}_{n}$ by setting $\hat{h}_{n}(x)=\frac{1}{2}\big(\underline{\hat{\theta}}_{n}(x)+\hat{\bar{\theta}}_{n}(x)\big)$. If the envelope estimators are uniformly consistent then this estimator
converges uniformly to $\frac{1}{2}(\underline{\theta}+\bar{\theta})$.
Thus it achieves the minimal pointwise and uniform worst-case
asymptotic bias under our assumptions on $h_{0}$. The envelopes can
be understood as error bounds on $\hat{h}_{n}$ which account for
the possibility of misspecification.
\subsection{Consistency and Convergence Rates}
Let us introduce some additional notation. `$||\cdot||_{2}$' denotes
the Euclidean norm. For any $h\in\mathcal{B}_{X}$, $|h|_{\infty}$
is the supremum norm of $h$, that is $|h|_{\infty}=\sup_{x\in\mathcal{X}}|h(x)|$.
In a slight abuse of notation, for any $g\in\mathcal{B}_{Z}$, $|g|_{\infty}$
is the essential supremum norm of $g$, i.e., the infimum of the real
numbers that exceed $|g(Z)|$ with probability $1$. We say that a
vector-valued function $f$ with domain $\mathcal{W}\subseteq\mathbb{R}^{k}$
is Lipschitz continuous with Lipschitz constant $\xi$ if and only
if:
\[
\sup_{w_{1},w_{2}\in\mathcal{W}: w_1\neq w_2}\frac{||f(w_{1})-f(w_{2})||_{2}}{||w_{1}-w_{2}||_{2}}=\xi
\]
Let $D_{1,n}$ denote the upper bound on the distance between any
point in $\mathcal{X}$ and the nearest gridpoint in $\mathcal{X}_{n}$. That is, $D_{1,n}=\sup_{x_{1}\in\mathcal{X}}\min_{x_{2}\in\mathcal{X}_{n}}||x_{1}-x_{2}||_{2}$. Similarly, define the sequence $D_{2,n}=\sup_{z_{1}\in\mathcal{Z}}\min_{z_{2}\in\mathcal{Z}_{n}}||z_{1}-z_{2}||_{2}$. Finally, define the sequence $C_{n}=\sup_{\beta\in\mathbb{R}^{K_{n}}}\frac{||\beta||_{2}}{|\Phi_{n}'\beta|_{\infty}}$.
The following assumptions provide high-level conditions for uniform
consistency and particular uniform convergence rates for our estimated
envelopes. We provide more primitive conditions later in this section.
\theoremstyle{definition}
\newtheorem*{A3.1}{Assumption 3.1}
\newtheorem*{A3.2}{Assumption 3.2}
\newtheorem*{A3.3}{Assumption 3.3}
\newtheorem*{A3.4}{Assumption 3.4}
\begin{A3.1}
$\mathbb{T}:\,\mathcal{B}_{X}\to(\mathcal{B}_{X})^{d}$ is linear,
$\mathbb{T}[h](x)\leq c(x)$ implies $|h(x)|\leq\bar{c}$ for some
$0<\bar{c}<\infty$, and for some $\underline{c}>0$, $c(x)\geq \underline{c}$
for all $x\in\mathcal{X}$.
\end{A3.1}
\begin{A3.2}
There is a sequence of positive scalars $a_{n}\to0$ so that:
\[
|\hat{g}_{n}-g_{0}|_{\infty}+\sup_{\beta\in\mathbb{R}^{K_{n}}:\,\Phi_{n}'\beta\in\mathcal{H}}|(\hat{\Pi}_{n}-\Pi_{n})'\beta|_{\infty}=O_{p}(a_{n})
\]
\end{A3.2}
\begin{A3.3}
There is a sequence of positive scalars $\kappa_{n}\to0$ so that for any $h\in\mathcal{H}$
there exists $\beta_{n}\in\mathbb{R}^{K_{n}}$ with $|\Phi_{n}'\beta_{n}-h|_{\infty}\leq\kappa_{n}$ and:
\begin{align*}
\mathbb{T}[\Phi_{n}'](x)\beta_{n} & \leq\mathbb{T}[h](x),\,\forall x\in\mathcal{X}
\end{align*}
\end{A3.3}
\begin{A3.4}
i. Both $\Phi_{n}$ and $\mathbb{T}[\Phi_{n}]$ are Lipschitz
continuous with constant at most $\xi_{n}$ and $c$ is Lipschitz continuous
with some constant. ii. With probability approaching
$1$ both $\hat{g}_{n}$ and $\hat{\Pi}_{n}$ are Lipschitz continuous
with constant at most $G_{n}$. iii. $D_{1,n},D_{2,n}\to0$, $C_{n}\xi_{n}D_{1,n}\to0$ and $C_{n}G_{n}D_{2,n}\to0$.
\end{A3.4}
Assumption 3.1 places conditions on $\mathbb{T}$ and therefore
on $\mathcal{H}$. It implies that elements of $\mathcal{H}$ are
bounded and that $\mathcal{H}$ is convex.
Assumption 3.2 allows us to control the effect of first-stage
estimation error in the constraints of the feasible problem. In Theorem 3.2 we establish
a rate for $a_{n}$ when $\hat{g}_{n}$ and $\hat{\Pi}_{n}$ are series
regression estimators and some primitive conditions hold. The rate
in Theorem 3.2 is uniform over all choices of the sequence $\{K_{n}\}_{n=1}^{\infty}$.
Assumption 3.3 allows us to control the error from the replacement
of the space $\mathcal{B}_{X}$ with the finite-dimensional space
of functions of form $\Phi_{n}'\beta$ in the feasible problem. In Theorem 3.3 below
we provide a rate for $\kappa_{n}$ for the setting in Section 4.
Assumption 3.4 allows us to control the error from the use
of finite grids $\mathcal{X}_{n}$ and $\mathcal{Z}_{n}$ in the constraints
of the feasible problem. If Assumption 3.4 fails
then the estimated envelopes may be too loose in large samples and
so the set estimates may be too conservative in the limit.
\theoremstyle{plain}
\begin{thm}
Suppose Assumptions 3.1, 3.2, 3.3 and 3.4
hold and there exists $h\in\mathcal{H}$ with $|E[Y-h(X)|Z]|<b$.
Then:
\begin{align*}
|\hat{\underline{\theta}}-\underline{\theta}|_{\infty} & =O_{p}(a_{n}+\kappa_{n}+C_{n}\xi_{n}D_{1,n}+C_{n}G_{n}D_{2,n})\\
& =o_{p}(1)
\end{align*}
\begin{align*}
|\hat{\bar{\theta}}-\bar{\theta}|_{\infty} & =O_{p}(a_{n}+\kappa_{n}+C_{n}\xi_{n}D_{1,n}+C_{n}G_{n}D_{2,n})\\
& =o_{p}(1)
\end{align*}
\end{thm}
Note that along with the Assumptions 3.1 to 3.4 we
also require that there exists $h\in\mathcal{H}$ with $|E[Y-h(X)|Z]|<b$.
If the identified set is non-empty then failure of the condition
is knife-edge: if $b$ were even slightly larger then the condition
would hold, while if $b$ were even slightly smaller then the identified
set would be empty.
Theorem 3.1 demonstrates the well-posedness of the set estimation
problem. The first-stage rate $a_{n}$ is not premultiplied by some
growing factor like a `sieve measure of ill-posedness' (\citet{Blundell2007}).
The final two terms in each rate in Theorem 3.1 depend on
$D_{1,n}$ and $D_{2,n}$ which capture the density of the grids $\mathcal{X}_{n}$
and $\mathcal{Z}_{n}$. The terms $a_{n}$ and $\kappa_{n}$ do not
depend on the grids, and so if the grids become dense sufficiently
quickly then the rates in Theorem 3.1 simplify to $a_{n}+\kappa_{n}$.
In practice the grid densities are limited by computational considerations.
If $K_{n}$ grows quickly to infinity then the approximation error
$\kappa_{n}$ converges rapidly to zero. However, the first-stage
error rate $a_{n}$ may depend on $K_{n}$ and so a faster rate for
$\kappa_{n}$ could result in a slower rate $a_{n}$. Below we provide
primitive conditions for a first-stage rate $a_{n}$ which is independent
of $K_{n}$. Therefore, under these conditions a faster rate for $K_{n}$
must lead to at least a weakly faster rate of convergence for the
estimates. If $K_{n}$ grows sufficiently quickly the term $\kappa_{n}$
is dominated by $a_{n}$. Again, in practice $K_{n}$ is restricted
by computational limitations.
The following primitive conditions allow us to derive a first-stage
rate $a_{n}$. Our analysis builds heavily on \citet{Belloni2015}:
we apply their results directly to get a rate for $|g_{0}-\hat{g}_{n}|_{\infty}$
and adapt steps in their proof to allow for uniformity over a set
of series regressions.
In the Assumptions below, we say a function $f: \mathcal{Z}\to \mathbb{R}$ is of H{\"o}lder smoothness class $s\in (0,1]$ with constant $\xi$ if and only if:
\[
\sup_{z_{1},z_{2}\in\mathcal{Z}: z_1 \neq z_2}\frac{|f(z_{1})-f(z_{2})|}{||z_{1}-z_{2}||_{2}^s}=\xi
\]
Let $\lfloor s \rfloor$ denote the largest integer less than $s$. We say $f$ is of H{\"o}lder smoothness class $s>1$ with constant $\xi$ if and only if all the derivatives of $f$ of order weakly less than $\lfloor s \rfloor$ are uniformly bounded by $\xi$ and all the derivatives of order exactly $\lfloor s \rfloor$ are of H{\"o}lder smoothness class $ s - \lfloor s \rfloor$ with constant $\xi$. If we wish to leave the constant unspecified we simply say $f$ is H{\"o}lder of smoothness class $s$.
Further, for any $\delta >0$, let $\mathcal{N}(\mathcal{H},|\cdot|_{\infty},\delta)$ denote the smallest
number of $|\cdot|_{\infty}$-balls of radius $\delta$ that can cover
$\mathcal{H}$.
Let $\dim (Z)$ denote the dimension of $Z$.
\theoremstyle{definition}
\newtheorem*{A3.5}{Assumption 3.5}
\newtheorem*{A3.6}{Assumption 3.6}
\newtheorem*{A3.7}{Assumption 3.7}
\begin{A3.5}
i. The eigenvalues of the matrix $Q_{n}=E[\Psi_{n}(Z_{i})\Psi_{n}(Z_{i})']$
are bounded uniformly above and away from zero. ii. $\mathcal{Z}$
is bounded and the distribution of $X$ given $Z$ admits a conditional
density $f_{X|Z}$ so that for any $x\in\mathcal{X}$ the
function $f_{X|Z}(x,\cdot)$ is of H{\"o}lder smoothness class $s>0$ with constant at most $\bar{\ell}$.
\end{A3.5}
\begin{A3.6}
i. For any $s>0$ there is a sequence $R_n(s)\to 0$ so that for any $g\in\mathcal{B}_{Z}$ that
is of H{\"o}lder smoothness class $s$ with constant $\xi$:
\[
\sup_{z\in\mathcal{Z}}|g(z)-\Psi_{n}(z)'Q_{n}^{-1}E[\Psi_{n}(Z)g(Z)]|\leq \xi R_n(s)
\]
ii. For all $z\in\mathcal{Z}$, $||\Psi_{n}(z)||_{2}\leq\bar{\xi}_{n}$.
The function $\alpha_{n}$ defined by $\alpha_{n}(z)=\frac{\Psi_{n}(z)}{||\Psi_{n}(z)||}$
is Lipschitz continuous with constant $\ell_{n}$. iii. $\mathcal{H}$ has finite Dudley entropy integral:
\[
\int_{0}^{1}\sqrt{\log\mathcal{N}(\mathcal{H},|\cdot|_{\infty},u)}\,du<\infty
\]
iv. $\frac{\bar{\xi}_{n}^{2}\log(L_{n})}{n}\to0$
\end{A3.6}
\begin{A3.7}
i. The function $g_{0}(Z)=E[Y|Z]$ is of H{\"o}lder smoothness class $s>0$. For $m>2$, $E\big[|Y-E[Y]|^{m}\big|Z\big]<\infty$,
$\bar{\xi}_{n}^{2m/(m-2)}\log(L_{n})/n=O(1)$, $L_{n}\log(L_{n})/(n^{1-2/m})=O(1)$
and $L_{n}^{2-2/\dim(Z)}/n=O(1)$. ii. $\log(\ell_{n})=O\big(\log(L_{n})\big)$, $\bar{\xi}_{n}=O(\sqrt{L_{n}})$ and $R_{n}(s)=O(L_{n}^{-s_0 (s)/\dim(Z)})$ for some function $s_0:\mathbb{R}_{++}\to \mathbb{R}_{++}$ and $R_n(s)=O(\sqrt{L_n})$.
\end{A3.7}
Assumption 3.5 restricts the joint distribution of the regressors
$X$ and instruments $Z$. The assumption on the eigenvalues of $Q_{n}$
is standard in the series estimation literature. Smoothness
of the conditional density ensures that for any $h\in\mathcal{B}_{X}$,
$A[h]$ is smooth.
Assumptions 3.6.i and 3.6.ii can be verified for commonly used basis functions. 3.6.iii is a condition
on the metric entropy of $\mathcal{H}$. Loosely speaking, it states that
$\mathcal{H}$ is sufficiently restrictive. Conditions on metric entropy are commonplace in the sieve estimation literature (see \citet{Chen2007a}). Spaces of sufficiently smooth functions typically obey the condition (see e.g., \citet{Wainwright2019}
Chapter 5). In our empirical application, the set $\mathcal{H}$ can be shown to contain functions on an interval that are uniformly Lipschitz, such a set of functions must satisfy the assumption.\footnote{Note that whether or not a set of functions obeys a condition like 3.6.iii is closely related to whether the set is a universal Donsker class (see, \citet{Dudley1987}).} 3.6.iv ensures the empirical analogue of $Q_{n}$
converges in an appropriate sense by Rudelson's law of large numbers for matrices (\citet{Rudelson1999}).
Assumption 3.7.i provides conditions on the joint distribution
of $Y$ and $Z$ that allow us to apply results from \citet{Belloni2015}
to derive a convergence rate for $\hat{g}_{n}$. In our empirical
application in Section 4 the dependent variable is bounded and so
we can set $m$ arbitrarily large. 3.7.ii gives rates for
some of the sequences mentioned in the other assumptions and can be
verified for a given choice of basis functions. In particular it holds
for the B-spline bases used in Section 4 with $s_0 (s)$ equal to the minimum of the smoothness class $s$ and the order of the splines (see e.g., \citet{Belloni2015}).
The following theorem gives a rate $a_{n}$ in terms of $L_{n}$ and
$n$. The key steps in the proof are Lemma 3.3 stated in
Appendix B, which builds on ideas in \citet{Belloni2015} and may
be of some independent interest, and Theorem 4.3 in \citet{Belloni2015}.
\theoremstyle{plain}
\begin{thm}
Suppose Assumptions 3.5, 3.6 and 3.7 hold.
Let $\hat{g}_{n}$ and $\hat{\Pi}_{n}$ be the series estimators defined
in (\ref{eq:regg}) and (\ref{eq:regpi}). Then uniformly over sequences
$\{K_{n}\}_{n=1}^{\infty}$:
\begin{align*}
&|\hat{g}_{n}-g_{0}|_{\infty}+\sup_{\beta\in\mathbb{R}^{K_{n}}:\,\Phi_{n}'\beta\in\mathcal{H}}|(\hat{\Pi}_{n}-\Pi_{n})'\beta|_{\infty} \\
=& O_{p}\bigg(\sqrt{\frac{L_{n}\log(L_{n})}{n}}+L_{n}^{-s_0 (s)/\dim(Z)}\bigg) = o_{p}(1)
\end{align*}
\end{thm}
If the conditions of the theorem hold, then setting $L_{n}$ optimally
we can achieve first-stage rate $a_{n}=\big( \frac{\log(n)}{n} \big) ^{s_0(s)/(2 s_0(s)+\dim (Z))}$.
In the case of $s_0 (s)=s$ (which holds for B-spline bases of order greater than $s$) this is the best uniform rate possible (see \citet{Belloni2015}).
The rate given in Theorem 3.2 is independent of the sequence
$\{K_{n}\}_{n=1}^{\infty}$. Therefore, if the conditions for Theorem
3.2 hold, the optimal rate in Theorem 3.1 is achieved by
letting $K_{n}$ grow as fast as possible.
Finally, we provide a rate for $\kappa_{n}$
in Assumption 3.3 for the setting in our empirical application.
\begin{thm}
Let $\mathcal{H}$ contain functions that map from a closed interval
$[a,b]$ to $[0,1]$ so that any $h\in\mathcal{H}$ is twice-differentiable
with $|\frac{\partial^{2}}{\partial x^{2}}h|_{\infty} \leq c$.
Let $\Phi_{n}$ be a vector of $s_0$-order B-spline basis functions
with $K_{n}$ knot points evenly spaced in $[a,b]$. If $s_0 \geq3$
then Assumption 3.3 holds with $\kappa_{n}=O(K_{n}^{-\frac{1}{2}})$.
\end{thm}
\section{An Empirical Application}
To demonstrate the usefulness of our partial identification approach
we revisit an existing application of NPIV methods. In particular,
we replicate the NPIV estimation results in Section 5.1 of \citet{Horowitz2011}
which estimates a shape-invariant Engel curve for food using data
from the British Family Expenditure Survey.\footnote{We made use of the data file that accompanies \citet{Horowitz2011}
and adapted the accompanying code in order to evaluate Horowitz's
estimator and B-spline bases for our own methods.} Horowitz's application is in turn based on \citet{Blundell2007}
who also carry out NPIV estimation of shape-invariant Engel curves
and use the same data.
From \citet{Horowitz2011}: ``The data are 1655 household-level observations
from the British Family Expenditure Survey. The households consist
of married couples with an employed head-of-household between the
ages of 25 and 55 years.''
\subsection{Shape-Invariant Engel Curves and the Case for Mild Misspecification}
\citet{Blundell2007} aim to estimate `structural' Engel curves. A
structural Engel curve measures the budget share that would be spent
on a good if the household's total expenditure were set exogenously
to some level. One can imagine an ideal randomized experiment in which
the household's weekly expenditure on nondurable goods is decided
at random (i.e., exogenously) by a researcher. The household then
decides how to allocate this budget across different classes of goods.
The relationship between total expenditure and the budget share allocated
to a good in such an experiment is a structural Engel curve.
In observational settings, the share of household wealth allocated
to expenditure on nondurables in a given period is decided by the
household. Therefore, it is likely associated with the household's
underlying preferences. In short, total household expenditure on nondurable
goods is endogenous.
\citet{Blundell2007} and \citet{Horowitz2011} hope to recover structural
shape-invariant Engel curves using household income as an instrument
for total expenditure. Suppose one controls for fixed household characteristics
like household size and socio-cultural make-up. Any remaining variation
in income likely reflects outside economic shocks that are unrelated
to variation in household tastes.
However, the household characteristics controlled for by \citet{Blundell2007}
and \citet{Horowitz2011} are limited to a small selection of coarsely
measured demographic variables.\footnote{In both papers, to control for the demographic information only a
sub-sample with homogeneous characteristics is analyzed. \citet{Blundell2007}
incorporate additional dummy variable controls when performing estimation
on the sub-sample.} There is certainly some remaining variation in household features
like the ages, ethnicities and education levels of each household's
constituents. If the remaining variation is small or is only weakly
associated with income or tastes, then income is only mildly endogenous.
Therefore, in this setting it is of interest to see what can be inferred
under some small failure of instrumental validity.
\subsection{Estimation}
Below we describe an application of our methods to the estimation
of $h_{0}$ the structural, shape-invariant Engel curve for food.
The dependent variable $Y$ is the share of total expenditure on non-durables
that a household spends on food. The endogenous variable $X$ is the
logarithm of the household's total expenditure and $Z$ is the logarithm
of household income.
Our methodology requires we select a vector of basis functions $\Phi_{n}$.
We follow \citet{Horowitz2011} and use spaces of fourth-order (cubic)
B-splines with evenly-spaced knot points.\footnote{See \citet{Boor2002} for a practical introduction to B-splines.}
Suppose we set $K_{n}$ equal to some $k>4$, the length-$k$ vector
of basis functions can then be defined as follows. Let $\{l_{k,j}\}_{j=1}^{k-4}$
be a sequence of scalars known as `knot points'. Then:
\[
\Phi_{n}(x)=M_{k}(1,x,x^{2},x^{3},|x-l_{k,1}|_{+}^{3},|x-l_{k,2}|_{+}^{3},...,|x-l_{k,k-4}|_{+}^{3})'
\]
for a particular non-singular $k$-by-$k$ matrix $M_{k}$. The function
$|\cdot|_{+}$ returns the positive part of its argument, that is
$|y|_{+}=1\{y\geq0\}|y|$. For the knot points we set:
\[
l_{k,j}=\frac{j}{k-3}x_{max}+\frac{k-3-j}{k-3}x_{min}
\]
where $x_{max}$ and $x_{min}$ are respectively the largest and
smallest observed values of the regressor $X$ in the data.
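As an illustration, the knot sequence and the truncated power basis
above can be generated as follows (a Python sketch under our own
naming conventions; in practice one may equally use a standard
B-spline routine such as \texttt{scipy.interpolate.BSpline}):
\begin{verbatim}
import numpy as np

def knots(k, x_min, x_max):
    # l_{k,j} = j/(k-3) x_max + (k-3-j)/(k-3) x_min, j = 1,...,k-4
    j = np.arange(1, k - 3)
    return (j * x_max + (k - 3 - j) * x_min) / (k - 3)

def phi(x, k, x_min, x_max):
    pos = lambda y: np.maximum(y, 0.0)  # |y|_+, the positive part
    l = knots(k, x_min, x_max)
    return np.concatenate([[1.0, x, x**2, x**3], pos(x - l)**3])
\end{verbatim}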
We carry out our first-stage estimates using series regression onto
cubic B-splines. If we set $L_{n}=k$ then the vector of basis functions
$\Psi_{n}$ is defined exactly as $\Phi_{n}$ above albeit with domain
$\mathcal{Z}$ rather than $\mathcal{X}$ and $x_{max}$ and $x_{min}$ replaced by the largest and smallest observed values of the instrument.
Our partial identification approach requires that we place an a priori
bound $b$ on the magnitude of $E[Y-h_{0}(X)|Z]$ and restrict the
structural function $h_{0}$ to lie in some space $\mathcal{H}$.
Here we take $\mathcal{H}$ to be the set of functions on $\mathcal{X}$
that take values in the unit interval and have second derivative bounded
in magnitude by a constant $c$. An Engel curve is an expected budget
share and must take values in $[0,1]$ by definition; however, a bound
on the second derivative does not clearly follow from the setting.
As such, we present results for a range of values both for the bound
$b$ on the deviation from instrumental validity and for the bound
$c$ on the second derivative.
To implement the methods detailed in Section 3, a researcher must choose
$K_{n}$ the number of basis functions and the grids $\mathcal{X}_{n}$
and $\mathcal{Z}_{n}$. Motivated by the results in Theorem
3.2 we set $K_{n}$ to be large; specifically, we let $K_{n}=10$.
The grid $\mathcal{X}_{n}$ consists of 100 evenly spaced points between
the smallest and largest observed values of $X$ (the same grid that
we used to plot the curves in Figure 4.2). The grid $\mathcal{Z}_{n}$
consists of 100 evenly spaced points between the $0.005$ and $0.995$
quantiles of $Z$; we make this truncation because the first-stage
regression functions are imprecisely estimated outside this region.
For our first-stage estimates we use nonparametric least squares regression
on cubic B-spline basis functions defined over the log income. To
estimate the reduced-form function $g_{0}$ we regress by least squares
the dependent variable $Y$ on the B-spline basis over the instruments
$Z$ described above with four interior knot points (this basis is six-dimensional). To estimate $\Pi_{n}$ we regress
$\Phi_{n}(X)$ on the same B-spline basis over $Z$ used to estimate
$g_{0}$.
The result of the reduced-form regression of the expenditure share
for food on income (that is, the estimate of $g_{0}$) is given below
in Figure 4.1. The dark line in each of the sub-figures is the estimated
reduced-form function $\hat{g}_{n}$ and the dotted lines are the
reduced-form regression plus or minus a value of the bound $b$. In
Sub-Figure 4.1.a, $b$ is equal to $0.005$, which represents a tight
bound on the deviation from instrumental validity. In Sub-Figure 4.1.b,
$b$ is set equal to $0.02$ and in Sub-Figure 4.1.c it is set to
$0.05$. The units here are budget shares and so a deviation of $0.05$
amounts to $5\%$ of the total household budget on non-durable goods.
Recall that the reduced-form is equal to the conditional expectation
of the structural function plus the deviation of the NPIV moment condition
from zero. That is $g_{0}(Z)=E[h_{0}(X)|Z]+u_{0}(Z)$. Therefore,
if the essential supremum norm of $u_{0}$ is indeed bounded by $b$, then ignoring
estimation error in $\hat{g}_{n}$, the dotted lines contain the hypothetical
reduced-form function were there no deviation from instrumental validity
(i.e., $u_{0}=0$) and everything else were held fixed.
\begin{figure}
\subfloat[]{
\includegraphics[scale=0.24]{reduced_form_plus_minus_b=0\lyxdot 005}}\subfloat[]{
\includegraphics[scale=0.24]{reduced_form_plus_minus_b=0\lyxdot 02}}\subfloat[]{
\includegraphics[scale=0.24]{reduced_form_plus_minus_b=0\lyxdot 05}}
\caption{The Reduced-Form Regression}
{\scriptsize{}
\noindent\begin{minipage}[t]{1\columnwidth}
{\scriptsize{}The results of the reduced-form regression. The dark
line in each of the sub-figures is the result of regressing the expenditure
share of food on cubic B-spline basis functions with four evenly spaced
interior knot points defined over log income. The dotted lines are
the reduced-form regression plus or minus a value of the bound $b$.}
\end{minipage}}{\scriptsize\par}
\end{figure}
Figure 4.2 below contains the results of our set estimation procedure.
The figure contains nine sub-figures each corresponding to a different
set of values for the bounds $b$ and $c$. In each sub-figure the
lower and upper dotted lines represent $\underline{\hat{\theta}}$
and $\hat{\bar{\theta}}$, the estimated lower and upper envelopes
of the identified set. The thick black line represents the central
estimator (the half-way point between the envelopes) described in
the previous section. As noted in that section, if the envelope estimates
are consistent and our assumptions on $h_{0}$ hold then this point-estimator
achieves smallest possible worst-case asymptotic bias. The thin black
line is the estimator evaluated by \citet{Horowitz2011} which we
include for comparison.
The sub-figures in the first row all correspond to the tight bound
on the magnitude of instrumental endogeneity, $b=0.005$. This is
the same value of $b$ shown in Sub-Figure 4.1.a above. The sub-figures
on the second row correspond to the looser bound $b=0.02$ as in Sub-Figure
4.1.b above and those in the final row correspond to $b=0.05$ as
in Sub-Figure 4.1.c. The three sub-figures in each column correspond
to the same bound $c$ on the second derivatives. The first column
contains the sub-figures with bound $1$ on the second derivatives,
the second column contains those with bound $c=2$ and the third column
sub-figures correspond to the bound $5$ on the second derivatives.
As a benchmark, the second derivatives of Horowitz's estimated structural
function are bounded in magnitude by $0.5$. Thus in the sub-figures below the
second derivatives are allowed to be either twice, four times or ten
times the magnitude of those in Horowitz's estimates.
\begin{figure}[!ht]
\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 005d=1}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 005d=2}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 005d=5}}\\
\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 02d=1}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 02d=2}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 02d=5}}\\
\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 05d=1}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 05d=2}}\subfloat[]{
\includegraphics[scale=0.24]{deriv_bound_b=0\lyxdot 05d=5}}
\caption{Set-Estimated Engel Curves}
{\scriptsize{}
\noindent\begin{minipage}[t]{1\columnwidth}
{\scriptsize{}The results of our set estimation procedure. Each sub-figure
corresponds to a different set of values for the bounds $b$ and $c$.
The values for these quantities are given above each sub-figure. In
each sub-figure, the lower and upper dotted lines represent $\underline{\hat{\theta}}$
and $\hat{\bar{\theta}}$, the estimated lower and upper envelopes
of the identified set. The thick black line represents the central
estimator (which is the mean of $\underline{\hat{\theta}}$ and $\hat{\bar{\theta}}$).
The thin black line is the estimate found in \citet{Horowitz2011}.}
\end{minipage}}{\scriptsize\par}
\end{figure}
The results in Section 2 suggest that if the bound $c$ on the second
derivatives is too loose then the identified set will be large, even
if the bound $b$ on the failure of instrumental validity is small.
Sub-Figure 4.2.c shows that the envelopes have non-negligible width
when the bound $c$ on the second derivatives is set to $5$ even
with the bound $b$ set to the low value of $0.005$, or $0.5\%$ of
total expenditure. More generally we see that for each $b$, the envelope estimates are looser the further right the sub-figure,
i.e., the looser the bound $c$ on the second derivatives.
Sub-Figures in the first two rows all show a general downwards slope
of the Engel curve for food at least for intermediate values of total
expenditure. That is, the estimated envelopes in these sub-figures
are fairly tight around a downward sloping central estimator. This
suggests food is a necessary good, which conforms to conventional
economic wisdom. We conclude, therefore, that the finding that the
Engel curve for food has a general downward slope is fairly robust
to misspecification. The data support the finding even allowing for
a failure of validity that amounts to $2\%$ of the total expenditure
on non-durables.
For sub-figures in the first row of Figure 4.2 (i.e., with the tight
$0.5\%$ bound on the failure of instrumental validity) the estimated
envelopes of the identified set are tight enough to discern some non-linearity
in the Engel curve (in the sense that the envelopes do not contain
a straight line). The sub-figures seem to show an increasing downward
slope for higher values of the log total expenditure. It is clear
then that one must believe that income is only a weakly endogenous
instrument in order to infer from the envelopes that the Engel curve
demonstrates some non-linear trend.
Note that none of the results in Figure 4.2 provide evidence in favor
of an upward sloping Engel curve for low values of total expenditure
as found by \citet{Horowitz2011}. However, only in Sub-Figure 4.2.a
do the estimated envelopes exclude Horowitz's estimates, and only
by a small margin and for a narrow set of values for the log
total expenditure. The envelopes are, in all sub-figures, wide for
low values of total expenditure, which suggests that the Engel curve
is poorly identified in this region. It seems then that the data do
not provide meaningful evidence either for or against some positive
slope in the Engel curve for low values of the expenditure on nondurables.
\section*{Conclusions}
We demonstrate that NPIV estimates of the structural function are
highly sensitive to misspecification in the form of invalid instruments.
We show that the imposition of strong restrictions on the structural
function can mitigate this problem, but can impart approximation bias.
This motivates a partial identification approach to NPIV that allows
a researcher to achieve point estimation with minimal worst-case asymptotic
bias and to evaluate error bounds (envelopes of the identified set)
that explicitly account for possible misspecification.
The development of simple uniform confidence bands for envelopes of the identified set in conditional moment inequality models
of the kind we study is an open question and is beyond the
scope of this paper. Our model has an infinite-dimensional parameter space and (to the best of our knowledge) the only general analysis of inference in models of this kind is a working paper by \citet{Chernozhukov2015}.\footnote{\citet{Chernozhukova} considers some conditional moment inequality
models with infinite-dimensional parameter spaces. However, these
models can be rewritten as a set of conditional moment inequalities
each with a finite-dimensional parameter space.} Their work may provide analytical tools for deriving valid confidence bands in our setting.
Future research may generalize our sensitivity results to a broader
class of conditional moment restriction models. The non-robustness
of NPIV estimators is tied to the ill-posedness of the NPIV estimating
equation. In fact, a range of other nonparametric conditional moment
restriction models are ill-posed. It seems likely then that estimation
in these models is also non-robust. It may be useful to characterize
precisely the class of nonparametric moment condition models associated
with non-robust estimation and to extend our partial identification
approach to these models.
\section{Introduction}
\label{s.introduction}
Several aspects of the evolution of galaxies have been
puzzling astronomers for decades. Firstly, star formation in galaxies turns out
to be efficiently quenched in galactic bulges despite the gas cooling
time being much shorter than the age of a given galaxy \citep{cowie+77,fabiannulsen-77}. Secondly, the
galaxy luminosity function features a sharp high-mass cutoff
in which the most massive systems are red, dead and elliptical,
inconsistent with the hierarchical growth of structure in the Universe
\citep{croton+06}. Explaining both phenomena requires additional
processes preventing gas from collapsing into stars and limiting the
mass of the central galaxies.
Supernova explosions and stellar winds return energy (provide
\textit{feedback}) to the
interstellar medium (ISM). Although these processes take place at
small scales, they are powerful enough to affect the evolution of the
whole galaxy. Without strong stellar feedback, gas inside galaxies
would cool
efficiently and collapse on a dynamical time, resulting in star
formation rates inconsistent with observations. As recently shown
by \citet{hopkins+14}, stellar feedback itself is enough to explain most of the
properties of galaxies, e.g., the relation between galaxy stellar mass
and halo mass, at stellar masses $M_*\lesssim 10^{11}M_\odot$.
Additional processes are needed to explain the formation of the most
massive galaxies. It is believed that almost every galaxy harbours a supermassive black hole
(SMBH) in its nucleus. Being extremely compact such objects can
liberate gravitational energy in large amounts. As
a black hole (BH) grows to 0.2\% of the bulge mass through accreting matter, it releases
nearly 100 times the gravitational binding energy of its host galaxy
\citep{fabian+09}. It is therefore reasonable to expect that, provided
the energy returned from accretion (the
\textit{black hole feedback}) is efficiently coupled to the ISM, the
central SMBHs can strongly affect the formation and the properties of the host
galaxies.
The feedback provided by SMBHs is therefore crucial for studying the
evolution of the Universe. It is often accounted for in large scale
simulations of galaxy formation, but the adopted models
are very simplistic. The large range of scales involved in such
simulations does not allow for detailed numerical (and simultaneous)
modeling of the BH accretion. Instead, the mass supply
rate is estimated (at most) at parsec scales, usually using the Bondi
model of spherical accretion, and simple formulae for the feedback
efficiency are applied. These are based partly on the standard thin
disc models \citep{ss73}, but (to be consistent with observed
properties of galaxies) involve additional factors
arbitrarily rescaling the feedback rate.
These factors reflect our lack of understanding of how accretion on
SMBHs works and the efficiency of the feedback it provides. There are two major
unknowns. Firstly, it is not clear how much matter
makes it to the BH, and how much is lost on the way. In other words,
what fraction of the gas attracted by the BH near the Bondi radius
ultimately crosses the BH horizon and efficiently liberates its
binding energy providing the energy source for the feedback \citep[see
discussion in][]{yuannarayan+14}. Secondly,
it is crucial to understand what fraction of this energy is returned
to the ISM.
In this paper we address the second question. The feedback efficiency
from an accretion flow is believed to be well established only for geometrically thin
discs, corresponding to moderate, sub-Eddington accretion rates,
$10^{-3}\dot M_{\rm Edd} \lesssim \dot M\lesssim \dot M_{\rm Edd}$ (for the definition of
$\dot M_{\rm Edd}$ see Eq.~\ref{e.medd}). In this case, the accretion
flow is radiatively efficient, and all the released binding energy of the gas\footnote{The accretion itself is
not the only energy source in an accreting system. If the accretion
manages to bring a significant amount of magnetic flux on the BH,
magnetic jets can extract rotational energy of the BH. Jets, however,
are collimated, and may not interact efficiently with the ISM.} goes into
radiation and is determined by the binding energy of the gas at the disc's
inner edge (e.g., it equals $5.7\%$ of the accreted rest mass energy,
$\dot M c^2$, for a non-rotating BH).
We address here the question of what amount of energy is extracted if
accretion flows are not geometrically thin, i.e., how efficient the BH
feedback is if an SMBH accretes either in the radio mode ($\dot
M\lesssim 10^{-3}\dot M_{\rm Edd}$), when one
expects an optically thin accretion flow and low radiative efficiency, or
above the Eddington accretion rate, in an optically thick disc. To
this purpose, we analyze a set of state-of-the-art,
three-dimensional simulations of the innermost region of BH accretion
performed with a general-relativistic,
radiative magnetohydrodynamical (MHD) code \texttt{KORAL}\, \citep{sadowski+koral}.
Our paper has the following structure. In Section~\ref{s.thin} we
discuss the energy transfer in the standard model of a thin disc. In
Section~\ref{s.enflowinsims} we give the details of the numerical
simulations and dicuss their properties. In Section~\ref{s.discussion}
we discuss their implications and several caveats. Finally, in
Section~\ref{s.summary} we summarize our findings.
\section{Energy flow in thin discs}
\label{s.thin}
We start by recapitulating the physics of energy transfer in the standard model
of a thin accretion disc \citep[e.g.][]{ss73,frank+book}. This will give us a good reference point
when discussing energy flows in numerical simulations of accretion
flows.
\subsection{Viscous dissipation}
The thin disc model assumes Keplerian azimuthal
motion, small vertical thickness of the disc, $h/r\ll 1$, and
radiative efficiency. Keplerian angular velocities imply differential
rotation and, in the presence of viscosity, non-zero transfer of
angular momentum between adjacent rings. The torque exerted by rings
on each other is \citep{frank+book},
\begin{equation}
T=2\pi r\nu \Sigma r^2 \der{\Omega}{r}=-3\pi \nu \Sigma r^2 \Omega,
\label{e.torque}
\end{equation}
where $\Sigma$ is the surface density at radius $r$,
$\Omega=\sqrt{GM/r^3}$ is the Keplerian
angular velocity, and $\nu$ is the local kinematic
viscosity coefficient corresponding to magnetically induced turbulence.
The torque results in transfer of angular momentum between the
rings. Conservation of angular momentum requires,
\begin{equation}
\der{}{r}\dot M \Omega r^2=-\der{T}{r},
\end{equation}
where $\dot M>0$ denotes the accretion rate.
Integrating between radius $r$ and the inner edge of the disc at
$r_{\rm in}$, we get,
\begin{equation}
-\dot M \left(\Omega(r) r^2-\Omega(r_{\rm in})r_{\rm
in}^2\right)=T(r)-T(r_{\rm in}).
\label{e.lcons}
\end{equation}
Following the standard assumption that the torque at the inner edge of
a thin disc vanishes \citep[see][]{paczynski-thindiscs} we get,
\begin{equation}
T = -\sqrt{GM}\dot M \left(\sqrt{r} -\sqrt{r_{\rm in}}\right).
\end{equation}
This torque not only transports angular momentum but also
dissipates mechanical energy, heating up
the gas. The dissipation rate (per unit radius) in the whole ring, which by the
assumption of radiative efficiency equals minus the local radiative energy gain $q_{\rm rad}$, is given by,
\begin{equation}
q_{\rm diss}=-q_{\rm rad}=T\der{\Omega}{r}=\frac{3GM\dot M}{2r^2}\left(1-\sqrt{\frac{r_{\rm in}}{r}}\right),
\label{e.qvisc}
\end{equation}
where the signs have been chosen such that a positive rate
corresponds to heating (a local gain of energy), and a negative rate to cooling.
Dividing by the surface area of both sides of the ring we get
the well known thin-disc surface radiative flux,
\begin{equation}
Q_{\rm rad}=-\frac{q_{\rm rad}}{4\pi r}=\frac{3GM\dot M}{8\pi r^3}\left(1-\sqrt{\frac{r_{\rm in}}{r}}\right).
\label{e.Frad}
\end{equation}
It is worth reiterating that the viscous dissipation rate (the
gas heating rate) does not depend on the particular form of the
viscosity (e.g., the $\alpha$-viscosity), but follows from the
assumptions of Keplerian motion, the zero-torque boundary condition and angular momentum conservation.
\subsection{Local energy budget}
In the previous paragraph we have shown that viscous dissipation in a
differentially rotating flow
results in heating of the gas, which is balanced by radiative losses
at the rate $-q_{\rm rad}$ (Eq.~\ref{e.qvisc}).
The energy required for this radiative emission may come from the gravitational
field -- gas approaching the BH liberates its own binding energy at the
rate,
\begin{equation}
q_{\rm bind}=\dot M\der{e_{\rm
bind}}{r}=\dot M\der{}{r}\left(-\frac{GM}{2r}\right)=\frac{GM\dot
M}{2r^2}.
\label{e.qbind}
\end{equation}
However, it is clear that,
\begin{equation}
q_{\rm bind}+ q_{\rm rad}\neq 0.
\end{equation}
This means that there must be another source or sink component in the
local budget of energy.
In the previous section we have seen that viscosity leads to the
transport of angular momentum and
dissipation of mechanical energy. However, viscosity transports not only angular momentum but also
rotational energy. The amount of energy transported in this way is,
\begin{equation}
L_{\rm visc}=-T\Omega,
\label{e.Lvisc1}
\end{equation}
and the resulting local heating or cooling rate per unit radius is given by,
\begin{equation}
q_{\rm visc}=\der{}{r}(T\Omega)=\frac{GM\dot M}{r^2}\left(1-\frac32\sqrt{\frac{r_{\rm in}}{r}}\right).
\label{e.qviscvisc}
\end{equation}
It is straightforward to verify that,
\begin{equation}
q_{\rm bind}+ q_{\rm rad}+q_{\rm visc}=0.
\end{equation}
The viscous energy transport redistributes energy released in the disc
and compensates for the imbalance between the local binding energy release
and the rate of radiative cooling.
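The identity above is easily verified symbolically. The following
short script (ours, using \texttt{sympy}) checks it under the sign
convention adopted here, in which positive rates denote a local gain
of energy:
\begin{verbatim}
import sympy as sp

r, r_in, GM, Mdot = sp.symbols('r r_in GM Mdot', positive=True)
Omega = sp.sqrt(GM / r**3)                     # Keplerian rotation
T = -sp.sqrt(GM) * Mdot * (sp.sqrt(r) - sp.sqrt(r_in))  # torque
q_bind = Mdot * sp.diff(-GM / (2 * r), r)      # binding energy release
q_rad = -T * sp.diff(Omega, r)                 # radiative losses (< 0)
q_visc = sp.diff(T * Omega, r)                 # viscous redistribution
print(sp.simplify(q_bind + q_rad + q_visc))    # prints 0
\end{verbatim}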
In Fig.~\ref{f.enfluxes_local_thin} we plot local heating/cooling
rates in a thin disc as a function of radius. The solid blue line
shows the energy gain from the change in the binding energy, $q_{\rm
bind}$ (Eq.~\ref{e.qbind}). This quantity is further
decomposed into the gravitational,
\begin{equation}
q_{\rm grav}=\dot M\der{e_{\rm
grav}}{r}=\dot M\der{}{r}\left(-\frac{GM}{r}\right)=\frac{GM\dot
M}{r^2}.
\label{e.qgrav}
\end{equation}
and kinetic,
\begin{equation}
q_{\rm kin}=\dot M\der{e_{\rm
kin}}{r}=\dot M\der{}{r}\left(\frac{GM}{2r}\right)=-\frac{GM\dot
M}{2r^2}.
\label{e.qkin}
\end{equation}
components. They are denoted by dashed and dotted blue lines,
respectively.
The orange line shows the radiative cooling rate, $q_{\rm rad}$
(Eq.~\ref{e.qvisc}). As expected, no emission comes from the inner edge
of the disc (located at $r_{\rm in}=6GM/c^2$, but we are using here the Newtonian approximation) and the most efficient emission takes place
from a ring located at $r \approx 8 r_{\rm g}$\footnote{The maximum is at $r = (49/6) \,r_{\rm g}$.}, where
$r_g=GM/c^2$.
The pink line reflects the energy redistribution rate by the viscosity, $q_{\rm visc}$
(Eq.~\ref{e.qviscvisc}). For
$r\lesssim 13 r_{\rm g}$ it is negative -- at these radii viscosity effectively
cools the disc and carries the energy outward. It is particularly
evident for the gas approaching the inner edge, where the release of
binding energy is large, but the no-torque boundary condition prevents
radiative emission. To maintain the energy balance, viscosity must
transport this locally liberated binding energy out.
For $r\gtrsim
13 r_{\rm g}$, $q_{\rm visc}$ becomes positive, which means that the
viscous energy flux decreases with increasing radius and locally deposits
energy, contributing to the local heating rate and increasing the
magnitude of radiative cooling beyond the rate at
which local binding energy is liberated. In the limit $r\gg r_{\rm
in}$ one has $q_{\rm rad}=-3q_{\rm bind}$, i.e., the local
rate of releasing energy in radiation is three times larger than the
change in binding energy. The extra contribution comes from the
viscous energy flux which deposit energy (and heats up gas) at a rate two times
larger than the gain from released binding energy.
\begin{figure}
\includegraphics[width=.95\columnwidth]{enfluxes_local_thin.png}
\caption{Local energy gain in its various forms in the standard thin
disc model described in Section~\ref{s.thin}.}
\label{f.enfluxes_local_thin}
\end{figure}
\subsection{Energy flow}
In the previous section we have looked into the local energy
balance. Now, let us look into the total amount of energy carried by
its various components from one radius to another.
The binding energy is carried by the flow at a rate,
\begin{equation}
L_{\rm bind}=-\dot M e_{\rm bind}=\frac{GM\dot M}{2r}>0,
\label{e.Lbindthin}
\end{equation}
where the positive sign reflects the fact that bound gas is falling
inward, thus effectively depositing energy at infinity. The luminosity in
binding energy may be again decomposed into the gravitational and
kinetic components,
\begin{equation}
L_{\rm grav}=-\dot M e_{\rm grav}=\frac{GM\dot M}{r}>0,
\label{e.Lgravthin}
\end{equation}
\begin{equation}
L_{\rm kin}=-\dot M e_{\rm kin}=-\frac{GM\dot M}{2r}<0.
\label{e.Lkinthin}
\end{equation}
Gravitational energy luminosity is positive, but the kinetic
luminosity is negative -- kinetic energy of the Keplerian motion is
brought inward by the gas.
The radiative cooling rate, $q_{\rm rad}$, is given by
Eq.~\ref{e.qvisc}. Photons are emitted from the disc surface and
leave the system. The total radiative luminosity at given radius,
$L_{\rm rad}$,
results from the emission inside that radius,
\begin{equation}
L_{\rm rad}=-\int_{r_{\rm in}}^r q_{\rm rad} \,{\rm d}r = \frac32
\frac{GM\dot M}{r}\left(\frac13\frac {r}{r_{\rm in}} + \frac
23\sqrt{\frac{r_{\rm in}}{r}}-1\right)>0.
\label{e.Lradthin}
\end{equation}
This quantity is zero at the inner edge ($r=r_{\rm in}$) and equals
$GM\dot M/2r_{\rm in}$ at infinity. The radiative luminosity of the
whole accretion disc is therefore equal to the binding energy of the
gas crossing the inner edge.
Finally, the amount of energy carried by viscosity from the inner region outward, $L_{\rm visc}$, is
(Eq.~\ref{e.Lvisc1}),
\begin{equation}
L_{\rm visc}=-T\Omega=\frac{GM\dot M}{r}\left(1-\sqrt{\frac{r_{\rm in}}{r}}\right)>0.
\label{e.Lvdiscthin}
\end{equation}
The various integrated energy fluxes introduced above are shown in
Fig.~\ref{f.enfluxes_thin}. Their magnitudes have been normalized to the
amount of accreted rest-mass energy. The blue lines show the
luminosities in the binding energy, $L_{\rm bind}$, and its
gravitational and kinetic components,
$L_{\rm grav}$ and $L_{\rm kin}$. At the inner edge ($r_{\rm in}=6r_g$), the amount of
binding energy carried by the gas is
\begin{equation}
L_{\rm bind,in}=\frac{GM\dot M}{2r_{\rm
in}}=\frac{1}{12}{\dot M c^2},
\end{equation}
which is, as we will discuss in detail in a moment, the total
efficiency of a thin disc in the Newtonian gravitational potential.
The orange line in Fig.~\ref{f.enfluxes_thin} shows the radiative
luminosity crossing a sphere of radius $r$. As no photons are emitted
from inside the inner edge, it starts from zero and
gradually grows, reaching finally $GM\dot M/2r_{\rm
in}$ at infinity -- in a thin disc, the whole energy extracted by the infalling
gas ultimately goes into radiation.
It is interesting to note that
$50\%$ of the radiation is emitted from outside $r\approx 25r_{\rm g}$. At the same
time, the gas infalling from infinity down to that radius has
extracted only roughly $25\%$ of the available binding energy. The
excess in radiative luminosity comes from the extra energy carried by
viscosity from the innermost region.
This component of the energy flux is denoted with the pink line at the same plot. The
amount of energy carried by viscosity grows rapidly just outside of
the inner edge -- at these radii
viscosity is transporting rotational kinetic energy outward. Outside
$r\approx 13r_{\rm g}$ the luminosity of viscous energy transport
drops down with radius and the energy taken away from the innermost
region is deposited by viscosity into the gas.
Summing up all the components of the energy transfer we get the total
luminosity,
\begin{equation}
L_{\rm tot}=L_{\rm bind}+L_{\rm rad}+L_{\rm visc},
\label{e.Ltotthin}
\end{equation}
which is the quantity that is fundamentally conserved in stationary
flows, i.e., is independent of radius and no energy accumulates at any
location. Indeed, the sum of the three components (red line in
Fig.~\ref{f.enfluxes_thin}) gives a constant value equal to the total
efficiency of accretion and the binding energy carried in by the gas
through the disc inner edge, $(1/12)\,\dot M c^2$. In the Schwarzschild metric this
efficiency would be $\sim 0.057\dot M c^2$.
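These relations are easy to check numerically. The snippet below (a
sketch of ours, in units $G=M=c=1$ with $\dot M=1$ and $r_{\rm in}=6$)
confirms that the three components sum to the constant $1/12$:
\begin{verbatim}
import numpy as np

r_in = 6.0
r = np.linspace(r_in, 1.0e3, 100000)    # radii in units of r_g
L_bind = 1.0 / (2.0 * r)
L_rad = 1.5 / r * (r / (3.0 * r_in)
                   + 2.0 / 3.0 * np.sqrt(r_in / r) - 1.0)
L_visc = 1.0 / r * (1.0 - np.sqrt(r_in / r))
L_tot = L_bind + L_rad + L_visc
assert np.allclose(L_tot, 1.0 / 12.0)   # = 1/(2 r_in)
\end{verbatim}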
\begin{figure}
\includegraphics[width=.95\columnwidth]{enfluxes_thin.png}
\caption{Luminosity in various forms of energy for the standard thin
disc model described in Section~\ref{s.thin}. The thick red line
denotes the total luminosity of the system which can be decomposed
into the luminosity in binding energy (solid blue line), in
radiation (orange) and luminosity transported by viscosity
(pink). The luminosity in binding energy is further decomposed into
gravitational (blue dashed) and kinetic (blue dotted)
components. All the luminosities are normalized with the accreted
rest-mass energy, $\dot M c^2$.}
\label{f.enfluxes_thin}
\end{figure}
\section{Energy flow in simulations of accretion flows}
\label{s.enflowinsims}
Having recapitulated how energy flows in a standard thin
disc, we are ready to study the energy redistribution in numerical simulations of accretion flows.
In the following Section we describe the numerical method used to
perform the simulations. In Section~\ref{s.energyfluxes} we introduce the
formalism used to study energy fluxes in numerical solutions. In Sections~\ref{s.adafs} and \ref{s.slim} we look in detail
into the energy flow in simulations of optically thin and thick discs,
respectively.
\subsection{Numerical setup}
\label{s.numerical}
The simulations analyzed in this paper were performed in three
dimensions with the general relativistic radiation magnetohydrodynamical (GRRMHD) code
\texttt{KORAL} \citep{sadowski+koral} which solves the conservation
equations in
a fixed, arbitrary spacetime using finite-difference methods. The
equations we solve are,
\begin{eqnarray}\label{eq.rhocons}
\hspace{1in}(\rho u^\mu)_{;\mu}&=&0,\\\label{eq.tmunucons}
\hspace{1in}(T^\mu_\nu)_{;\mu}&=&G_\nu,\\\label{eq.rmunucons}
\hspace{1in}(R^\mu_\nu)_{;\mu}&=&-G_\nu,
\end{eqnarray}
where $\rho$ is the gas
density in the comoving fluid frame, $u^\mu$ are the components of the gas four-velocity, $T^\mu_\nu$ is the
MHD stress-energy tensor,
\begin{equation}\label{eq.tmunu}
T^\mu_\nu = (\rho+u_{\rm g}+p_{\rm g}+b^2)u^\mu u_\nu + (p_{\rm g}+\frac12b^2)\delta^\mu_\nu-b^\mu b_\nu,
\end{equation}
$R^\mu_\nu$ is the stress-energy tensor of radiation, and $G_\nu$ is the radiative
four-force describing the interaction between gas and radiation \citep[see][for a more detailed description]{sadowski+koral2}. Here, $u_{\rm g}$ and $p_{\rm g}=(\Gamma-1)u_{\rm g}$ represent the internal energy and pressure of the
gas in the comoving frame and $b^\mu$ is the magnetic field 4-vector \citep{gammie03}.
The magnetic pressure is $p_{\rm mag}=b^2/2$ in geometrical units.
The magnetic field is evolved via the induction equation,
\begin{equation}
\label{eq.Maxi}
\partial_t(\sqrt{-g}B^i)=-\partial_j\left(\sqrt{-g}(b^ju^i-b^iu^j)\right),
\end{equation}
where $B^i$ is the magnetic field three-vector \citep{komissarov-99},
and $\sqrt{-g}$ is the metric determinant.
The divergence-free criterion is enforced using the flux-constrained
scheme of \cite{toth-00}.
The radiation field is evolved
through its energy density and flux, and the radiation stress-energy
tensor is closed by means of the M1 closure scheme
\citep{levermore84,sadowski+koral}. The energy exchange between gas
and radiation is by free-free emission/absorption as well as Compton scattering.
The latter is treated in the ``blackbody''
Comptonization approximation as described in \citet{sadowski+comptonization}.
We use modified Kerr-Shild coordinates with the inner edge of the
domain inside the BH horizon. The simulations are run with a
moderately high resolution of 252 grid cells spaced logarithmically in
radius, 234 grid cells in the polar angle, concentrated towards the
equatorial plane, and 128 cells in azimuth.
Three of the four simulations which we analyze in this work are
identical to the ones presented in \cite{sadowski+3d}. To have a
consistent optically thin version of an accretion flow we simulated an
additional model
with purely magnetohydrodynamical evolution, i.e., without radiation
field. This simulation (\texttt{h001}) corresponds to an optically thin,
advection dominated accretion flow (ADAF) believed to occur in
systems accreting well below the Eddington level \citep{yuannarayan+14}.
Parameters of the models are given in Table~\ref{t.models}.
In this work we adopt the following definition
for the Eddington mass accretion rate,
\begin{equation}
\label{e.medd}
\dot M_{\rm Edd} = \frac{L_{\rm Edd}}{\eta c^2},
\end{equation}
where $L_{\rm Edd}=1.25 \times 10^{38} M/M_{\odot}\,\rm ergs/s$ is the
Eddington luminosity, and $\eta$ is the radiative efficiency of a thin
disc around a black hole with a given spin $a_* \equiv a/M$. For zero BH spin,
$\dot M_{\rm Edd} = 2.48 \times 10^{18}M/M_{\odot} \,\rm g/s$.
Hereafter, we also use the
gravitational radius $r_{\rm g}=GM/c^2$ as the unit of length, and $r_g/c$
as the unit of time.
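As a quick arithmetic check of this normalization (an illustration of
ours; the exact value depends on the adopted $\eta$):
\begin{verbatim}
L_Edd_per_Msun = 1.25e38   # erg/s per solar mass
c = 3.0e10                 # cm/s
eta = 0.057                # thin-disc efficiency for a_* = 0
print(L_Edd_per_Msun / (eta * c**2))  # ~2.4e18 g/s per solar mass
\end{verbatim}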
In this study we consider simulation
output averaged over time. Therefore, whenever we write, e.g., $\rho u^r$, we mean the
average of the product, i.e., $\langle \rho u^r \rangle$, where
$\langle \rangle$ stands for time averaging.
\begin{table}
\centering
\caption{Model parameters}
\label{t.models}
\begin{tabular}{lcccc}
\hline
\hline
& \texttt{h001} & \texttt{r001} & \texttt{r003} & \texttt{r011}\\
\hline
& hydro & radiative & radiative & radiative\\
\hline
$M_{\rm BH}$ & $10 M_\odot$& $10 M_\odot$& $10 M_\odot$& $10 M_\odot$ \\
$\dot M/\dot M_{\rm Edd}$ & $\lesssim 10^{-3}$ & 10.0 & 175.8 & 17.4 \\
$a_*$ & 0.0 & 0.0 & 0.0 & 0.7 \\
$t_{\rm max}$ & 23,000 & 20,000 & 19,000 & 16,100 \\
\hline
\hline
\multicolumn{5}{l}{All models initiated as in \cite{sadowski+3d}.}\\
\multicolumn{5}{l}{$M_{\rm BH}$ - mass of the BH, $\dot M$ - average
accretion rate, }\\
\multicolumn{5}{l}{$a_*$ - nondimensional spin parameter,}\\
\multicolumn{5}{l}{$t_{\rm max}$ - duration of the simulation in units of $GM/c^3$ }
\end{tabular}
\end{table}
\subsection{Energy fluxes}
\label{s.energyfluxes}
\subsubsection{Fundamental quantities}
\label{s.fundamental}
In quasi-stationary state the accretion rate is constant in radius,
i.e., gas does not accumulate anywhere, but rather
flows towards the BH with constant rate. The accretion rate (the luminosity in rest-mass energy) is given by,
\begin{equation} \label{e.mdot}
\dot M = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,\rho u^r\,{\rm d}\phi \,{\rm d}\theta,
\end{equation}
where this and the following integrals are evaluated at a fixed
radius $r$; the minus sign makes $\dot M>0$ for inflow, for which $u^r<0$.
In a similar way we may define the luminosity in all forms of energy,
\begin{equation} \label{e.entot0}
L_{\rm tot,0} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,(T^r_t + R^r_t){\rm d}\phi {\rm d}\theta,
\end{equation}
where we integrate the radial flux of energy carried by gas ($T^r_t$)
and by radiation ($R^r_t$). This quantity, however, is not interesting
from the point of view of a distant observer. It contains the flux of
rest-mass energy which, even if deposited at infinity, will not have
observational consequences
(since at infinity rest-mass cannot be converted into other
forms of energy in a trivial way). Therefore, we define the \textit
{total luminosity}
by subtracting\footnote{Lower time index
introduces a negative sign in $T^r_t$, so to get rid of the
rest-mass component in $T^r_t$ one has to \textit{add} $\rho u^r$.} the rest-mass energy flux from the
previous definition,
\begin{equation} \label{e.entot}
L_{\rm tot} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,(T^r_t + R^r_t+\rho u^r){\rm d}\phi {\rm d}\theta.
\end{equation}
The sign has been chosen in such a way that $L_{\rm tot}$ is negative for
energy falling in the BH, and positive for energy leaving the
system.
In a stationary state the total
luminosity is independent of radius
(if it was not, energy would accumulate in some
regions). It is the
luminosity of the whole system, i.e., it is also the luminosity
as seen from infinity. Therefore, it determines the rate at which energy is
deposited in the interstellar medium or, in other words, $L_{\rm
tot}$ is the total power of \textit{feedback}.
\subsubsection{Decomposition}
\label{s.decomposition}
The total energy flux consists of multiple components. We decompose it
in a way which gives well-known Newtonian limits.
First, we single out
the radiative component and define the radiative luminosity,
\begin{equation} \label{e.enrad}
L_{\rm rad} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,R^r_t{\rm d}\phi {\rm d}\theta,
\end{equation}
which reflects energy carried by photons, either trapped in the
gas, or propagating freely.
To define other components, let us first write explicitly,
\begin{equation}
\label{e.Trt}
T^r_t+\rho u^r=\rho u^r(1+u_t)+(\Gamma u_{\rm g}+b^2)u^r u_t -b^r b_t.
\end{equation}
Here we remind the reader that in all the integrals we take averages of products,
e.g., $\rho u^r(1+u_t)$ is actually $\langle\rho u^r(1+u_t)\rangle$,
where the product is averaged over time. This particular
quantity is the average radial flux of binding energy. In detail, it is
the sum of advective, $\langle \rho u^r\rangle\langle
1+u_t\rangle$, and Reynolds (turbulent), $\langle \rho u^r(
1+u_t)\rangle-\langle \rho u^r\rangle\langle
1+u_t\rangle$, components. Similar decomposition applies to the other
components of the total energy flux. In this work we will not
discriminate between the turbulent and advective fluxes, but instead
focus on the net contribution.
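A minimal sketch of this splitting, applied to a stack of snapshots at a
fixed radius (array names are ours), could read:
\begin{verbatim}
import numpy as np

# Split <rho u^r (1+u_t)> into advective and Reynolds (turbulent) parts.
# rho, ur, ut : snapshot arrays of shape (Nt, Ntheta, Nphi) at fixed r.
def decompose_binding_flux(rho, ur, ut):
    total     = np.mean(rho * ur * (1.0 + ut), axis=0)
    advective = np.mean(rho * ur, axis=0) * np.mean(1.0 + ut, axis=0)
    reynolds  = total - advective
    return advective, reynolds
\end{verbatim}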
It is straightforward to define the luminosity in internal (thermal)
energy,
\begin{equation} \label{e.enint}
L_{\rm int} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,\Gamma u_{\rm g} u^r u_t {\rm d}\phi {\rm d}\theta,
\end{equation}
which, similarly, contains the advective and convective terms, and the luminosity
carried by the magnetic field,
\begin{equation} \label{e.enmagn}
L_{\rm magn} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\, (b^2 u^r u_t -b^r b_t)\,{\rm d}\phi {\rm d}\theta,
\end{equation}
which again includes both the advective
component and
the turbulent stress.
The remaining term (proportional to $(1+u_t)$) contains information about the gravitational and
kinetic energies. In the Newtonian limit it gives $-1/2r$ for
Keplerian motion. Therefore, we identify the corresponding integrated
energy flux as the luminosity in binding energy carried radially by gas,
\begin{equation} \label{e.enbind}
L_{\rm bind} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,\rho u^r(1+ u_t) {\rm d}\phi {\rm d}\theta.
\end{equation}
The gravitational component of the last expression can be singled out
by calculating the specific binding energy $(1+ u_t)$ for a stationary
observer. From $u^\mu=(u^t,\vec 0)$ and $u^\mu u_\mu=-1$ one gets (for
a diagonal metric)
$u_t=-\sqrt{-g_{tt}}$, and therefore, the luminosity in gravitational
energy carried by gas is,
\begin{equation} \label{e.engrav}
L_{\rm grav} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,\rho u^r(1-\sqrt{-g_{tt}}) {\rm d}\phi {\rm d}\theta.
\end{equation}
The remaining term reflects the luminosity in kinetic energy,
\begin{equation} \label{e.enkin}
L_{\rm kin} = L_{\rm bind} - L_{\rm grav}.
\end{equation}
To sum up, we have decomposed the total energy transfer rate into
binding, thermal, magnetic, and radiative components,
\begin{equation}
L_{\rm tot} = L_{\rm bind} + L_{\rm int} + L_{\rm magn} + L_{\rm rad}.
\end{equation}
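This decomposition also provides a simple numerical cross-check: the four
luminosities, each evaluated with its own integral, must reproduce
$L_{\rm tot}$ at every radius. A sketch of such a check (all inputs are
radial profiles):
\begin{verbatim}
import numpy as np

# Relative residual of the energy budget; ~0 if the decomposition closes.
def budget_residual(L_tot, L_bind, L_int, L_magn, L_rad):
    residual = L_tot - (L_bind + L_int + L_magn + L_rad)
    return np.max(np.abs(residual)) / np.max(np.abs(L_tot))
\end{verbatim}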
\subsubsection{Advective and viscous energy fluxes}
\label{s.magnetic}
In a viscous accretion flow energy is transported both by viscosity
and by the fluid which advectively carries energy with itself. One may
write,
$L_{\rm hydro}=L_{\rm adv}+L_{\rm visc}=\dot M Be - T\Omega$, where $Be$ is the
Bernoulli function of the fluid (which is not constant, because work
is done on gas on its way towards the BH), and $T\Omega$ reflects the viscous rate
of energy flow (Eq.~\ref{e.Lvdiscthin}). The hydrodynamical quantities
defined in the previous section ($L_{\rm bind}$, $L_{\rm int}$,
$L_{\rm magn}$) are based on time averaged quantities. The turbulence,
which provides effective viscosity, is averaged out and contributes to
the energy transfer rate. Therefore, as stated in the previous
Section, these luminosities include both terms,
the advective and the viscous one. It is beyond the scope of this
paper to decompose
them and single out the energy transfer rate solely due to viscosity.
We estimated the viscous component by calculating\footnote{If the
viscous stress is proportional to shear then it is orthogonal to the
gas velocity, $T_{\rm visc,\nu}^\mu u^\nu=0$. For purely
azimuthal motion, $u^\mu=(u^t,0,0,u^\phi)$, one finds that $T^r_\phi
\Omega=-T^r_t$. Therefore, $T^r_\phi\Omega$ indeed gives the radial
flux of energy carried by viscosity. However, in the simulations we
performed, the orthogonality and perfectly circular motion are not
enforced and these conditions are only approximately satisfied.},
\begin{equation}
L_{\rm visc,est} = -\int_{0}^\pi
\int_0^{2\pi}\sqrt{-g}\,(T^r_\phi \Omega){\rm d}\phi {\rm d}\theta.
\end{equation}
This quantity is plotted in Fig.~\ref{f.viscvsmagn_h001} with the pink
solid line. In the same figure, we plot the magnetic component of the
luminosity, $L_{\rm magn}$ (Eq.~\ref{e.enmagn}). The two have very
similar profiles and magnitudes. This should not be surprising,
because it is mostly magnetic field which mediates angular
momentum transfer \citep[local shearing sheet and global
simulations of magnetized accretion show that the magnetic stress
dominates over the Reynolds stress by a factor of $\sim4$, see][]{pessah+06,penna+alpha}. From now
on, we will consider the luminosity in the magnetic component, $L_{\rm
magn}$, as the counterpart of the viscous luminosity $L_{\rm
visc}$
introduced in Section~\ref{s.thin}. Such assignment is helpful, but not
crucial, for the following considerations.
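For reference, the viscous estimate above is evaluated with the same type
of surface integral as the other luminosities; a minimal sketch (array
names are ours):
\begin{verbatim}
import numpy as np

# Viscous-luminosity estimate at radial index ir.
# Trphi : time average <T^r_phi>; Omega : angular velocity
def L_visc_est(Trphi, Omega, gdet, dtheta, dphi, ir):
    integrand = gdet[ir] * Trphi[ir] * Omega[ir]
    return -np.sum(integrand * dtheta[:, None] * dphi)
\end{verbatim}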
Often in the literature \citep[e.g.][]{abramowicz+88,narayanyi-94} the energy balance is
written in the comoving frame in the following form,
\begin{equation}
\widehat q^{\rm heating}-\widehat q^{\rm cooling}=\widehat q^{\rm adv},
\end{equation}
where $\widehat q^{\rm heating}$ and $\widehat q^{\rm cooling}$ stand
for local comoving heating
and cooling rates, and $\widehat q^{\rm adv}$ describes the net amount of heat taken
away with the fluid or effectively brought in and locally
released. This particular decomposition
is not very helpful for the present study. However, we note that
for both the optically thin and thick discs, as will be discussed
below, the power advected with
the fluid (in thermal and radiative energies, respectively)
dominates the energy balance. Therefore, the flows discussed below
are indeed advection dominated.
\begin{figure}
\includegraphics[width=.95\columnwidth]{viscvsmagn_h001.png}
\caption{Total estimated viscous flux of energy (solid line), and the
energy carried by magnetic fields, $L_{\rm magn}$
(Eq.~\ref{e.enmagn}, dashed line).}
\label{f.viscvsmagn_h001}
\end{figure}
\subsection{Energy flow in optically thin ADAFs}
\label{s.adafs}
Let us now look at the energy flow in simulated, multi-dimensional
accretion flows. We start with an optically thin disc (ADAF, model
\texttt{h001}).
According to the standard model
\citep{narayanyi-94,abramowicz+adafs} for this mode of
accretion, energy locally dissipated does not have a chance to escape
because of low radiative efficiency
and is advected with the flow. This fact makes such discs very hot and
geometrically thick. As a result, the expected efficiency of accretion
is zero because all the binding energy gained by gas on its way towards the
BH is balanced by the thermal energy advected with it onto the BH.
This model, however, does not allow the gas to flow out of the
system. Such an outflow, in principle, can provide a path for the
liberated binding energy to escape from the system, and as a result
may increase the efficiency of accretion.
\subsubsection{Luminosities}
Figure~\ref{f.enfluxes_h001} presents the integrated radial fluxes
(luminosities) of
energy in various forms for the optically thin simulation
\texttt{h001}. The amount of binding energy (Eq.~\ref{e.enbind}) carried with the flow is
shown with the solid blue line. The closer the gas gets to the BH, the
more bound it is, and the more luminosity it extracts with respect to
infinity (once again, infalling bound gas effectively deposits energy
at infinity). It can be decomposed into the gravitational
(Eq.~\ref{e.engrav}, blue dashed line) and the kinetic
(Eq.~\ref{e.enkin}, blue dotted line) components. Because the flow is
only slightly sub-Keplerian
and the radial velocities involved are low,
these two components behave qualitatively in the same way as in the
case of the thin disc discussed in the previous Section.
The magnetic component (Eq.~\ref{e.enmagn}), which reflects the energy
carried by effective viscosity, also qualitatively agrees with the
thin disc prediction. It is zero at the inner edge (which is now at
the horizon, not at the innermost stable circular orbit (ISCO), because for thick discs the stress extends
down to the horizon), and becomes positive, which again reflects
the fact that turbulent viscosity takes energy out of the innermost
region (here from $r\lesssim 10$) and carries it outward. In contrast
to the thin disc model, however, there is no clear decrease in the
magnetic luminosity inside the convergence region of the simulation,
i.e., turbulent viscosity does not contribute there to the local
heating rate.
Because radiative cooling is not efficient, energy is not transferred
by radiation. The dissipated energy is trapped in the flow, heats up
the gas, and contributes to the thermal energy transport
(Eq.~\ref{e.enint}). This fact is reflected in the grey line profile
in Fig.~\ref{f.enfluxes_h001}. The luminosity in thermal energy is no
longer negligible, as it was in the thin disc case. A significant amount
of thermal energy is carried inward with the flow. Because gas becomes
hotter when approaching the BH horizon, the corresponding magnitude
increases.
If the studied accretion flow followed the standard model, i.e., all
the energy released were advected on to the BH, all the components
contributing to the energy transfer should sum up to zero total
efficiency. This is, however, not the case. The thick red line in
Fig.~\ref{f.enfluxes_h001} reflects the total luminosity defined
through Eq.~\ref{e.entot}, i.e., composed of binding, magnetic and
thermal components. It is flat to good accuracy inside $r\approx 25$,
proving that the flow has reached a quasi-stationary state in this
region. The efficiency of $\sim3\%\,\dot Mc^2$ reflects the amount of
energy extracted from the accretion flow\footnote{This value gives the
total energy of feedback, in contrast to values given in
\cite{sadowski+outflows} who gave only power in jet and wind components.}, and equals roughly 50\% of the
thin disc efficiency.
In the ideal case of an optically thin disc extending to infinity, this
amount of energy would be deposited at infinity. In practice, this is
the amount by which the BH system affects the ISM (the BH feedback). Because
simulations with inefficient radiative cooling are scale-free, this
efficiency is characteristic of an optically thin flow (ADAF) at
\textit{any} accretion rate for which such a solution exists with negligible
radiative cooling \citep[i.e., for $\dot M\lesssim
10^{-3}\dot M_{\rm Edd}$, see][]{yuannarayan+14}. The fate of the energy coming
out of the innermost region is discussed below in Section~\ref{s.fate}.
\begin{figure}
\includegraphics[width=.95\columnwidth]{enfluxes_h001.png}
\caption{Similar to Fig.~\ref{f.enfluxes_thin} but for a GRMHD
simulation of an optically thin disc (ADAF, model \texttt{h001}). Colors denote the same
components of luminosity. Additional gray line reflects the
luminosity in thermal (internal) energy. Zero BH spin was
assumed. For definitions see Section~\ref{s.adafs}. Vertical lines
denote the ISCO and the estimated convergence region of the simulation
at $r\approx 25$.}
\label{f.enfluxes_h001}
\end{figure}
\subsubsection{Angular distribution}
Figure~\ref{f.sims_h001} shows the spatial distribution of density
(top-most panel) and various
components of the energy flux (other panels) in the optically thin simulation
\texttt{h001}. The streamlines in the top-most panel reflect the
velocity of the gas. The second panel shows the corresponding rest-mass
energy flux, $\rho u^\mu$. Most of the accretion takes place near the
equatorial plane. Within radius $r=30$, gas at all polar angles falls
inward. Only outside this radius (and outside the converged region of
the simulation), there is a hint of outflows that may arise from the
accretion flow.
The third panel shows the magnitude (colors) and
direction on the poloidal plane of the total energy flux, $L_{\rm
tot}$ (Eq.~\ref{e.entot}). The total energy flows outward, in
agreement with the positive total efficiency of $3\%$ (see
Fig.~\ref{f.enfluxes_h001}). Most of the extracted energy flows into
the disc, and very little in the polar region.
The fourth panel shows the (negative) binding energy brought inward with
the gas. This effectively transports energy outward. This component is
more isotropic than the net energy flux, reflecting the fact that gas
falling in along the axis is more bound than gas accreting in the equatorial
plane.
The fifth panel shows the magnetic component, which, as
we argued in the previous section, corresponds roughly to the viscous
energy transfer rate. As in the case of a thin disc, viscosity
transports energy outward and redistributes it throughout the
disc. Most of this energy goes near the equatorial plane, where the
density is largest, and viscosity most efficient.
Finally, the bottom-most panel presents the flux of internal energy. Its
magnitude is significant (compare
Fig.~\ref{f.enfluxes_h001}) because the optically thin flow cannot cool
efficiently and becomes very hot. As expected, thermal energy is
brought inward with the gas, and the angular distribution is again
quasi-spherical -- although the accretion rate is highest near the
equatorial plane, the gas temperatures there are lower than in the
polar region.
\begin{figure}
\rotatebox{90}{\hspace{1.2cm}Density}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_rhovel_double_h001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Rest mass}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_mdot_double_h001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.4cm}Total}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_entot_double_h001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Binding}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enbind_double_h001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.cm}Magnetic/Visc.}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enmagn_double_h001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Thermal}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enint_double_h001.png}
\caption{Top panel: Distribution of averaged gas density in optically
thin disc (model
\texttt{h001}). Streamlines reflect direction of average gas
velocity. Second panel: Magnitude of the rest mass density flux
(colors) and its direction (streamlines). Third to sixth panels:
Magnitudes and directions of energy fluxes (total, binding, viscous
and thermal, respectively).}
\label{f.sims_h001}
\end{figure}
\subsection{Energy flow in optically thick super-critical discs}
\label{s.slim}
Accretion flows transferring gas at rates higher than the Eddington
limit are optically thick, but they are not as radiatively efficient
as thin discs. The vertical optical depth is so large that the
cooling time becomes comparable to or larger than the accretion time, and
such discs cannot cool efficiently. Instead, a significant fraction of
radiation may be advected on to a black hole. We now look closely at
the simulation \texttt{r001} of a mildly super-critical disc
($10\dot M_{\rm Edd}$) near a non-rotating BH.
\subsubsection{Luminosities}
Figure~\ref{f.enfluxes_r001} shows the luminosities in various
components of energy for the super-critical disc \texttt{r001}. It
can be directly compared to Fig.~\ref{f.enfluxes_h001} corresponding
to an optically thin disc. The two figures show qualitatively the same
behavior of the binding and magnetic components -- bound gas is
brought inward and effectively transports energy outward, similarly to the
magnetic/viscous energy flux which takes energy liberated in the innermost
region and redistributes it in the outer regions.
There is, however, a
significant difference in the thermal and radiative components. In the
case of an optically thin disc, the liberated energy heats up the
gas and results in significant inward flux of thermal energy (grey
line in Fig.~\ref{f.enfluxes_h001}). In the case of a radiative
super-critical disc, the flux of thermal energy is
negligible. Instead, the radiative component is now significant. It is
negative within $r\approx35r_g$, reflecting the fact that photons are
trapped in the optically thick flow
and transported with the gas to the BH. Only outside this radius does the amount of
radiative energy flowing out exceed the advected one. The fact that
the thermal energy is now subdominant with respect to the radiative
energy is consistent with the fact that super-critical discs are
radiation-pressure dominated, and therefore, inward motion advectively
carries radiative energy, not thermal energy.
Interestingly, the total luminosity of the optically thick disc is
again close to $3\%\dot M c^2$ (thick red line in
Fig.~\ref{f.enfluxes_r001}). This is the amount of energy returned
from the system to the ISM in the case of super-critical accretion on
to a non-rotating black hole. Noticeably, the magnitude of BH
feedback power from a geometrically thick disc near a non-rotating BH is not sensitive to its
optical depth.
\begin{figure}
\includegraphics[width=.95\columnwidth]{enfluxes_r001.png}
\caption{Total luminosity and its components for a GR radiative MHD
simulation of optically thick, super-critical disc accreting at
$\sim 10\dot M_{\rm Edd}$ on a non-rotating BH (model \texttt{r001}). The
colors have the same meaning as in Fig.~\ref{f.enfluxes_h001}.}
\label{f.enfluxes_r001}
\end{figure}
\subsubsection{Angular distribution}
The distribution of the energy fluxes on the poloidal plane for the
optically thick simulation \texttt{r001} is shown in
Fig.~\ref{f.sims_r001}. The density distribution (top-most panel)
shows much larger contrast between the equatorial plane and the polar
axis than in the case of an optically thin disc. This fact results
from radiative pressure exerted on the gas in the funnel --
gas is accelerated vertically and escapes along the axis. This is
clearly reflected in the velocity streamlines shown in that panel.
The total energy extracted from the system (third panel) looks
different from that in the previous case. This time the polar region is not
empty of outflowing energy. The optically thin radiation escaping
along the polar funnel dominates the energy budget
there.
The distributions of binding and magnetic energy fluxes are similar to
the optically thin case. Both transfer energy within the bulk of the
disc. For the case of the magnetic energy flux (which reflects the
effective viscous transport), this fact supports the conjecture that
this energy will dissipate at larger radii (it would not dissipate if
the magnetic energy had left the disc, e.g., along the axis).
Finally, the bottom-most panel shows the magnitude and direction of
the radiative flux. As already mentioned, radiation manages to escape
in the polar region. However, it is trapped in the optically thick
flow near the equatorial plane.
\begin{figure}
\rotatebox{90}{\hspace{1.2cm}Density}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_rhovel_double_r001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Rest mass}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_mdot_double_r001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.4cm}Total}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_entot_double_r001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Binding}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enbind_double_r001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.cm}Magnetic/Visc.}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enmagn_double_r001.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.25cm}Radiative}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enrad_double_r001.png}
\caption{Similar to Fig.~\ref{f.sims_h001} but for a super-critical,
optically thick disc (model \texttt{r001}). The bottom-most panel
now shows the flux of energy carried by radiation. The flux of
thermal energy is negligible.}
\label{f.sims_r001}
\end{figure}
\subsection{Higher accretion rates}
\label{s.highmdot}
Here, we briefly discuss how the picture described in the previous
Section changes when the accretion rate increases but the BH spin remains zero. Detailed
comparison of models \texttt{r001} (accreting at $10\dot M_{\rm Edd}$) and
\texttt{r003} ($176\dot M_{\rm Edd}$) was given in \cite{sadowski+3d}. The most
important points are as follows.
The total efficiency in both cases equals approximately
$3\%$. However, because of the larger optical depth in model
\texttt{r003}, photon trapping is more effective. In particular, the
polar region becomes optically thick. Inside $r\approx 30$ the gas is
dragged on to the BH even along the axis. As a
result, the radiative luminosity of the system goes
down. Fig.~\ref{f.radflux_all} shows the fractional contribution of
the radiative luminosity, $L_{\rm rad}$, to the total luminosity
$L_{\rm tot}$. Blue and orange lines correspond to simulations
\texttt{r001} and \texttt{r003}, respectively. It is clear that the
latter is less radiatively luminous, and that the effective trapping
radius moves outward. This fact, however, turns out not to change the
total efficiency.
\begin{figure}
\includegraphics[width=.95\columnwidth]{radflux_all.png}
\caption{Fractional contribution of the radiative luminosity to the
total luminosity in simulations \texttt{r011} (green), \texttt{r001}
(blue), \texttt{r003} (orange lines). The luminosities were obtained
by integrating corresponding fluxes over the whole sphere. The
radiative luminosity, in particular, includes both the radiation
trapped in the flow and escaping to infinity.}
\label{f.radflux_all}
\end{figure}
\subsection{Rotating black hole}
\label{s.rotating}
So far we have been discussing accretion flows around non-rotating BHs. In
this Section we briefly discuss what impact non-zero BH spin has on the
energy flow properties.
BH spin affects accretion flows in two ways. Firstly, BH rotation
modifies the spacetime geometry and for a given BH mass allows circular
orbits to get closer to the horizon with increasing BH spin. This results in an increased efficiency of accretion
-- the closer the inner edge of the disc, the more binding energy
is liberated. Secondly, BH rotational energy can be extracted in the
Blandford-Znajek process \citep{bz}. The power of the related jet
depends on the value of the BH spin and on the amount of the magnetic
flux that has been accumulated at the horizon. The latter is
determined by the geometry of the magnetic field in the accreted gas and
the efficiency of magnetic field dragging. It is known
not to exceed the value characteristic of the magnetically arrested
disc (MAD) state \citep{igu+03,narayan+mad,tch+11}.
In this paper we analyze one simulation (\texttt{r011}) of super-critical accretion
on a mildly-rotating (the non-dimensional spin parameter $a_*=0.7$)
BH. The average accretion rate in this run ($17\dot M_{\rm Edd}$) is comparable
to the fiducial simulation \texttt{r001}, and allows for direct
comparison. The amount of the magnetic flux accumulated at the BH (the
magnetic flux parameter $\Phi \approx 15$) is
far from the MAD limit of $\Phi \approx 50$, but is large enough to study the impact of
the extracted rotational energy. Optically thick super-critical accretion
flows in the MAD limit were studied in a recent work by
\cite{mckinney+madrad}.
The total efficiency of simulation \texttt{r011} is roughly $8\%$,
significantly higher than that of the comparable simulation on a
non-rotating BH ($3\%$). The increase in efficiency comes from both
factors mentioned above (modified spacetime geometry and the
extraction of the rotational energy). The latter by itself should
extract $\sim 6\%\dot M c^2$ for the accumulated amount of magnetic
flux and spin $a_*=0.7$, but decomposition of the total energy into
these two components is not straightforward.
In Fig.~\ref{f.sims_r011} we show the distribution of energy flux
components on the poloidal plane. The panels have the same meaning as
in the previously discussed Fig.~\ref{f.sims_r001}. There are a couple
of noticeable differences between the two. Most importantly, the amount
of total energy extracted into the funnel region is much higher for
the rotating BH case. This is expected, because the energy extracted
in the Blandford-Znajek process is known to go roughly along the axis
\citep{penna+membrane}. In the case of optically thin accretion on to a
rotating BH \citep[e.g.,][]{tch+12,sadowski+outflows}, the jet power is
extracted as magnetic Poynting flux gradually converting (if mass
loading is significant) into kinetic energy of gas. In the case of the
radiative flow studied here, this extra energy is carried mostly by
radiation already for $r\gtrsim 3r_g$. Magnetic flux is significant only
in a shell surrounding the funnel region. Fig.~\ref{f.sims_zoomin_r011} shows the magnitude and
direction of the radiative flux in the immediate vicinity of the
BH. As expected, radiative flux falls \textit{on the BH} in this innermost
region, and it is the magnetic
energy which is extracted at the
horizon. However, the latter is quickly converted into the radiative
energy. This is possible because the magnetic field efficiently pushes
hot and optically
thick gas along the axis. The gas, in turn, drags the radiation
upward.
At the risk of oversimplifying, it is possible to say that the
properties of the energy flow in the case of super-critical accretion on to a
rotating BH are a superposition of the disc component (discussed in Section~\ref{s.slim} for a non-rotating BH)
and the jet contribution coming from the Blandford-Znajek
process. The power of the latter depends on the BH spin and magnetic
flux threading the horizon, and may overwhelm the former in magnitude.
At the same time, the jet component is limited only to the polar
region. If the confinement provided by the
disc is strong enough, it is likely to stay collimated.
\begin{figure}
\rotatebox{90}{\hspace{1.2cm}Density}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_rhovel_double_r011.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Rest mass}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_mdot_double_r011.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.4cm}Total}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_entot_double_r011.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.3cm}Binding}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enbind_double_r011.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.cm}Magnetic/Visc.}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enmagn_double_r011.png}\vspace{-.3cm}
\rotatebox{90}{\hspace{1.25cm}Radiative}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enrad_double_r011.png}
\caption{Similar to Fig.~\ref{f.sims_r001} but for a model with
spinning BH (model \texttt{r011}).}
\label{f.sims_r011}
\end{figure}
\begin{figure}
\rotatebox{90}{\hspace{1.25cm}Radiative}\hspace{.05cm}\includegraphics[width=1.0\columnwidth]{sim_enrad_zoomin_r011.png}
\caption{Zoomed in magnitude and direction of radiative energy flux for a model with
spinning BH (model \texttt{r011}).}
\label{f.sims_zoomin_r011}
\end{figure}
\section{Discussion}
\label{s.discussion}
\subsection{The fate of the energy flow}
\label{s.fate}
We have shown so far that in geometrically thick discs, both optically
thin and thick, a significant
flux of energy is liberated in the accretion flow and flows out of the
system. Although the simulations we performed allowed us to study only the
innermost ($r\lesssim 25$ at the equatorial plane) region of the
flow\footnote{Obtaining an equilibrium solution in a larger domain is
computationally very demanding because of the increasing range of
timescales. An appropriate approach would be to divide the domain
into subregions which are simulated independently but coupled at the
boundaries \citep[e.g.][]{yuan+12}.},
we were able to infer the total luminosity of the system. This is because in a stationary state this
quantity is determined by the energy flux
crossing the BH horizon.
However, the adopted method does not allow us to study what happens to
the extracted energy outside the inflow/outflow equilibrium region
(i.e., for $r\gtrsim 25$) -- only for gas inside this region was the
duration of the simulations longer than the viscous time scale.
As Figs.~\ref{f.enfluxes_h001} and \ref{f.enfluxes_r001} show, the
total luminosity of the flows around a non-rotating BH (roughly $3\%\dot
Mc^2$) comprises three components at radius
$r=25$. The largest in magnitude is
the binding energy flux which effectively
deposits energy at infinity. The magnetic component (reflecting
the viscous energy transport) is also transporting energy outward in a
significant amount. The remainder goes either into the thermal (for the optically thin case) or the radiative (for
the optically thick case) component. In both cases the net effect is to advect energy
inward.
The binding energy consists of the gravitational and kinetic
components (plotted with dashed and dotted blue lines, respectively, in
Figs.~\ref{f.enfluxes_h001} and \ref{f.enfluxes_r001}). The former
goes to zero with increasing radius, and \textit{at infinity} no
gravitational energy is transported by the gas. The
kinetic component is negative inside the computational domain
reflecting the fact that gas flows inward and carries kinetic energy
of its rotational motion. However, when outflows are efficiently generated it may ultimately become positive outside the computational domain.
As we have shown above,
the radial flux of magnetic energy reflects the effective
viscous energy transfer. Viscosity not only
transports angular momentum and energy, but also leads to dissipation
of the latter. Therefore, one may expect that the amount of energy
carried by magnetic fields will dissipate sooner or later
outside the convergence region of the simulation, adding up to the
local heating rate in the same way as for thin discs discussed in
Section~\ref{s.thin}. Thus, ultimately no magnetic energy will be directly
deposited in the ISM.
The radiative energy transfer is important only for optically thick
accretion flows. Simulations of such flows described in this paper (models
\texttt{r001} and \texttt{r003}) show
significant photon trapping in the bulk of the disc which results in
negative net flow of radiative energy (see Figs.~\ref{f.enfluxes_r001}
and \ref{f.radflux_all}) in the inner region. However, as radiation
gradually diffuses out from the disc, the outflowing component finally
overcomes inward advection, and the net radiative luminosity
becomes positive. In particular, from the point of view of an observer
at infinity, radiation will only carry energy outward. In the
following Section we discuss how bright the accretion flow can be.
The thermal energy flux, which contributes
significantly to the energy transfer rate in optically thin accretion
flows, reflects both the advective and convective contributions. In
the inner part of the flow it is the
advective component which dominates and results in negative net energy
transfer rate -- hot gas is accreted and takes its thermal
energy with it. However, if outflows are present in the outer region,
the net effect may be opposite and the thermal energy
carried advectively outward with the outflowing gas may dominate.
Convection can carry energy against gravity without
transporting mass. This component is negligible in the simulations we
performed (thermal energy flows advectively inward), but in principle it may
become significant, or even dominant, in the outer region. Several
models of accretion flows which are dominated by convection
(convectively dominated accretion flows, CDAFs) have been formulated
\citep{quataert-cdafs,narayan-cdafs,narayan-cdafs2,abramowicz-cdafs}. Whether convection is
important is an open question. \cite{narayan+12} performed a set
of simulations similar to our model \texttt{h001} and showed that
optically thin flows are convectively stable within $r\lesssim
100$. On the other hand, \cite{yuan+15}, who studied
non-magnetized viscous flows on large
scales, found that the
accretion flow is actually convectively
unstable. Even if convection is important, it cannot transport
energy beyond the
outer edge of the disc (or beyond the Bondi radius). One may expect that ultimately all energy
transported by convection is released as radiation or generates outflow.
To sum up, geometrically thick accretion flows on to a non-rotating
BH deposit in the ISM roughly $3\%$ of the rest-mass energy
crossing the BH horizon. This energy may be transported outward with
the outflowing gas, radiation and convection. Which components
dominate is currently unclear. Ultimately, however, only radiation
and outflow can transport energy beyond the Bondi radius and
deposit it in the ISM. A separate jet component may be
present in case of a spinning BH which managed to accumulate
significant magnetic flux at its horizon. It will result in a
collimated, narrow outflow of mostly kinetic energy, unlikely to
interact efficiently with the ISM.
\subsection{Radiative luminosity}
\label{s.luminosity}
Radiation is one of the ways of extracting energy from accretion
flows. The total luminosity (accounting for all forms of energy) for a thick disc near a non-rotating
BH seems to be robust --
every simulation we have performed indicates that roughly $3\%$
of the accreted rest-mass energy is returned to ISM. The amount of
this energy that goes into radiation is not, however, easy to
estimate. Only when the photosphere is properly resolved can one
check if radiation reaches the observer at infinity. This is not
the case for any of the optically thick simulations discussed here. They were
run only for a time which allowed them to reach inflow/outflow
equilibrium state at the equatorial plane within $r\approx
25$. Because of large optical depths, photospheres are located at
large distances \citep{sadowski+dynamo}, significantly outside the
converged region. This fact makes it impossible to directly measure
the amount of radiation escaping the system. Only radiation
escaping along the optically thin funnel, if it exists, is guaranteed to
reach a distant observer.
Because of significant photon trapping in the
super-Eddington regime, the radiative luminosity of
the system is not proportional to the accretion rate. What is more,
the radiation coming from such an accretion flow must penetrate
the optically thick wind region. It therefore cannot be locally significantly
super-Eddington, because in such a case it would transfer its energy
and momentum to the gas, accelerating it. We are inclined to suggest that
the result will be similar to the effect of pure photon trapping
which results in logarithmic dependence
of the luminosity on the accretion rate (already anticipated by
\cite{ss73}, see also \cite{begelman-79}),
\begin{equation}
L_{\rm rad}\approx L_{\rm Edd}\left(1+\log \dot M/\dot M_{\rm Edd}\right).
\label{e.Lradestimate}
\end{equation}
A similar logarithmic behavior was found in early
works on super-Eddington Polish Doughnuts and
explained by Paczynski and collaborators as being
a consequence of the drop in efficiency
when, with increasing accretion rate, the inner edge of the
accretion disc moves from the ISCO to the innermost bound circular orbit (IBCO), where the efficiency
is zero \citep[see][for explanation and
references]{wielgus+15}.
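A numerical illustration of Eq.~\ref{e.Lradestimate} at the accretion
rates of our radiative models (assuming a natural logarithm) is:
\begin{verbatim}
import numpy as np

# Eq. (e.Lradestimate) in Eddington units, for mdot >= mdot_Edd.
def L_rad_over_Ledd(mdot_over_edd):
    return 1.0 + np.log(mdot_over_edd)

for m in (10.0, 17.4, 175.8):     # models r001, r011, r003
    print(m, L_rad_over_Ledd(m))  # ~3.3, ~3.9, ~6.2
\end{verbatim}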
\subsection{Outflow}
Our work shows that the existence of outflows is inevitable in the outer parts of thick
accretion discs. Avoiding them requires \textit{all} of the extracted
energy to be ultimately
transported outwards by radiation. However, thick discs
are radiatively inefficient (see
Eq.~\ref{e.Lradestimate}).
This conclusion is based on disc energetics --
a significant fraction of the accreted rest mass energy flows outward
through the disc which cannot generate
enough radiation to provide the efficient cooling required
to get rid of the energy surplus. If convection is not effective, at
least in the outermost region, outflow is the only possible way of taking
this excess energy out of the system.
We do not see strong outflows in the simulations we
performed (compare
topmost panels in Figs.~\ref{f.sims_h001}, \ref{f.sims_r001}, and
\ref{f.sims_r011}). Only in the funnel
region of the optically thick simulations \texttt{r003} and \texttt{r011} does one observe
that the radiative luminosity is converted gradually into kinetic
energy of the outflowing gas. However, the kinetic luminosity of such gas measured at the
outer boundary is still at most $\sim 10\%$ of the total
efficiency.
Therefore, one may expect that most of the outflow will be generated
at radii larger than those covered by the inflow/outflow equilibrium region
of the simulations, i.e., at $r\gtrsim 25r_g$. What will drive these
outflows? In principle, there are three acceleration mechanisms likely
to act in magnetized accretion flows -- magnetocentrifugal
\citep{blandfordpayne}, radiative and thermal. Magnetocenrifugal
driving is not effective in the simulated inner region of a non-MAD
accretion flow \citep[see
also][]{moller+15}, and there is little hope for it to
become effective further out, where magnetic field has no reason to
be more uniform on large scales. Radiative driving is seen in the
funnel region of the simulated super-critical discs, but does not
result in
significant outflow
at larger polar angles -- radiation diffusing out of the disc into the
optically thick wind region supports the disc against gravity,
and therefore cannot on average significantly exceed the local Eddington flux.
Thermal wind driving remains the only candidate to balance the energy
budget of the discs. It is especially reasonable if we consider the
energy flux redistributed through viscosity. When it finally
dissipates at larger radii, it will heat up the gas and make it more
prone to become unbound and likely to flow out of the system. This is
in agreement with the standard ADAF model which predicts positive
Bernoulli function for the inflowing gas in the self-similar regime \citep{narayanyi-94}. At
the same time, most of the observed outflows in BH accretion flows are
believed to be of thermal nature \citep[e.g.,][]{lee+02,ponti+12,neilsen-13}.
The total
luminosity will be ultimately carried by the outflow and
radiation. Thus, at infinity, the outflow will carry the amount of
energy equal to the difference between the total and radiative
luminosities. For a non-spinning BH one will have,
\begin{equation}
L_{\rm outflow,\infty}= 0.03\dot Mc^2-L_{\rm rad,\infty}.
\end{equation}
The latter term is obviously negligible for
optically thin discs, for which the whole extracted energy goes into
outflow.
\subsection{Extent of a thick disc}
In the considerations so far we assumed that the accretion flow
extends to infinity, and that the gas at infinity has negligible
energy, i.e., its Bernoulli function is zero. These conditions do not
have to be satisfied in reality. In particular, a thick disc is
expected to become thin in the outer region. For example, a
super-critical disc becomes radiatively efficient at a sufficiently
large radius where its optical depth is no longer large enough to prevent
locally generated radiation from diffusing out in an efficient
way. Somewhat similarly, optically thin discs (ADAFs) cannot exist
above a critical accretion rate, and this limit decreases with
radius. Therefore, for a fixed accretion rate at a BH, there is a
radius where thick disc must become thin, and radiatively
efficient. In such cases, the picture presented here would have to be
modified accordingly, e.g., if at the transition radius the outward
energy flux inside the disc (i.e., not in the outflow) is still significant, then one could
expect that the thin disc, extending from this radius outward, will
ultimately release all its energy as (relatively cold) radiation.
\subsection{Transition between accretion modes}
We did not simulate thin, sub-Eddington
accretion flows, for which the standard assumption of being radiatively
efficient is satisfied by construction. For such discs, the energy
transfer is expected to follow the characteristics described in
Section~\ref{s.thin}, i.e., the total efficiency of BH feedback equals
the thin disc efficiency, and ultimately all of it is carried by
radiation which is emitted over a wide solid angle. Geometrically thin discs
are unlikely to drag a significant amount of magnetic field on to the BH
(e.g., \cite{lubow+94}, \cite{ghosh+97}, \cite{guletogilvie-12,guletogilvie-13}, but see also
\cite{spruit+05}, \cite{roth+08} and \cite{beckwith+09})
and therefore one does not expect strong Blandford-Znajek jet
component.
The transition between optically thick but geometrically thin and
thick discs takes place near the Eddington accretion rate. In the past
it has been modeled with the so called slim disc model
\citep[e.g.,][]{abramowicz+88,sadowski.phd} which
generalizes the standard thin disc model to higher accretion
rates. Recently, numerical simulations similar to the ones that this
work is based on, have studied a number of super-Eddington accretion
flows \citep[e.g.,][]{sadowski+dynamo,jiang+14b}. Simulations of thin
discs, more demanding computationally, have not yet been
performed. Below, we
will describe the transition between geometrically thin and thick optically thick
discs with the help of arbitrary step functions, which make the
final formulae agree qualitatively with what we have learned from
numerical, multi-dimensional simulations.
The transition between optically thin and thick discs is even less well
understood and awaits numerical modeling. It is known that
radiatively inefficient optically thin flows cannot exist above some
critical accretion rate $\dot M_{\rm ADAF}\approx 10^{-3}\dot M_{\rm Edd}$ \citep[e.g.,][]{esin+97}. Whether
increasing the accretion rate above this threshold results in a dramatic
transition to a cold, optically thick disc, or rather the disc
takes a form similar to the luminous-hot accretion flow
\citep[LHAF,][]{yuan+01}, has still to be verified.
Below,
for simplicity, we assume that whenever the accretion rate is below $\dot
M_{\rm ADAF}$, accretion occurs in an optically thin disc, and that the
transition to optically thick discs (for $\dot
M>\dot M_{\rm ADAF}$) takes place instantaneously.
Having these considerations in mind, one may
approximate the total amount
of feedback luminosity coming from an accreting system as,
\begin{eqnarray}
\hspace{1cm}L_{\rm fb}=\frac12\eta_{\rm thin}\dot Mc^2 + P_{\rm BZ},
\label{e.Lfbthin}
\end{eqnarray}
\hspace{4cm}for $\dot M<\dot M_{\rm ADAF}$ (opt. thin),
\begin{eqnarray}
\hspace{1cm}L_{\rm fb}=\eta_{\rm thin}\left(1-\frac12f_\eta\right) \dot
Mc^2+f_{\rm BZ} P_{\rm BZ},\label{e.Lfb3}
\label{e.Lfbthick}
\end{eqnarray}
\hspace{4cm}for $\dot M>\dot M_{\rm ADAF}$ (opt. thick),
\vspace{.3cm}
\noindent where $\eta_{\rm thin}$ stands for the efficiency of a standard thin
disc with given spin, the $1/2$ factor reflects the two times smaller
efficiency of thick discs, and where we allow for the Blandford-Znajek
contribution, $P_{\rm BZ}$, for thick discs. $\dot M_{\rm ADAF}\approx
10^{-3}\dot M_{\rm Edd}$ is
the critical accretion rate above which radiatively inefficient optically thin accretion flows do
not exist. Functions
$f_\eta$ and $f_{\rm BZ}$,
\begin{equation}
f_\eta=\left(1+\left(\frac{3}{\dot
M/\dot M_{\rm Edd}}\right)^3\right)^{-1}
\label{eq.feta}
\end{equation}
\begin{equation}
f_{\rm BZ}=\left(1+\left(\frac{1}{\dot
M/\dot M_{\rm Edd}}\right)^5\right)^{-1}
\label{eq.fbz}
\end{equation}
were chosen to give (arbitrary) smooth transitions
between the sub- and super-Eddington regime for the efficiency and the
jet power, respectively. The Blandford-Znajek
term \citep[given here for
saturated magnetic field at the BH, i.e., for the MAD limit, see][]{tchekh15},
\begin{equation}
P_{\rm BZ}=1.3a_*^2\dot M c^2,
\label{e.PBZ}
\end{equation} is strongly damped for thin discs which are not likely
to drag the magnetic fields effectively.
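For reference, a direct transcription of Eqs.~\ref{e.Lfbthin} and
\ref{e.Lfbthick} into code (returning efficiencies per unit $\dot M c^2$;
the scaling of $P_{\rm BZ}$ below the MAD limit is an input parameter,
set here to the half-saturation value $1/4$ used in Fig.~\ref{f.Lfb})
may look as follows:
\begin{verbatim}
# Disc and jet feedback efficiencies, L_fb / (Mdot c^2).
# mdot is in Eddington units; eta_thin is the thin-disc efficiency.
MDOT_ADAF = 1e-3

def feedback_efficiency(mdot, eta_thin, a_star, phi_frac=0.25):
    p_bz = phi_frac * 1.3 * a_star**2          # Eq. (e.PBZ), scaled
    if mdot < MDOT_ADAF:                       # optically thin branch
        return 0.5 * eta_thin, p_bz
    f_eta = 1.0 / (1.0 + (3.0 / mdot)**3)
    f_bz  = 1.0 / (1.0 + (1.0 / mdot)**5)
    return eta_thin * (1.0 - 0.5 * f_eta), f_bz * p_bz

print(feedback_efficiency(10.0, 0.057, 0.0))   # disc ~0.029, jet 0.0
\end{verbatim}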
Fig.~\ref{f.Lfb} shows the
disc and jet components of the total black hole feedback, $L_{\rm fb}$
(Eqs.~\ref{e.Lfbthin} \& \ref{e.Lfbthick}), as a function of accretion
rate for BH spins $a_*=0.0$ and $0.7$. For the latter, we assumed
that magnetic field saturated
at the BH at half of the MAD limit (in this way the jet power was not
overwhelming the power of the disc component).
The solid lines show the
efficiency of the feedback coming from the disc. In the thin disc
regime ($\dot M_{\rm ADAF}\lesssim\dot M\lesssim\dot M_{\rm Edd}$), this
efficiency equals the standard thin disc efficiency -- $\eta=0.057$ and
$0.104$ for $a_*=0.0$ and $0.7$, respectively. For accretion rates
significantly exceeding the Eddington limit, this efficiency drops
down to roughly half of the thin disc efficiency, i.e., to $\eta=0.03$
for a non-rotating BH. The proposed formulae make the transition
between the two regimes smooth. In the limit of low accretion rates
$\dot M<\dot M_{\rm ADAF}$, one expects accretion flows to be optically
thin with a similar efficiency of $\eta=0.03$. The transition to the
thin disc limit is probably more violent, and we did not apply any
smoothing function there.
The dashed lines reflect the power of the jet feedback component. It
is non-zero only for the case of a rotating BH. For geometrically
thick discs jet production is efficient and given (for magnetic flux
at the BH saturated at half of the MAD limit)
by $1/4P_{\rm
BZ}$ (see Eq.~\ref{e.PBZ}). Thin discs are unlikely to
provide strong jet components and therefore we damp the jet
power in this regime. One has to keep in mind that the jet component
will be highly collimated and may not interact efficiently with the ISM.
\begin{figure}
\includegraphics[width=.985\columnwidth]{Lfb.png}
\caption{Total efficiency of the feedback (Eqs.~\ref{e.Lfbthin} \&
\ref{e.Lfbthick}) for $a_*=0.0$ (red) and $0.7$ (blue lines)
BHs. Solid and dashed lines represent the disc and jet components,
respectively. The jet component was calculated assuming magnetic
field saturation at half the MAD limit. $\dot M_{\rm ADAF}=10^{-3}\dot M_{\rm Edd}$
is an estimated transition between optically thin and thick
accretion flows.}
\label{f.Lfb}
\end{figure}
\section{Summary}
\label{s.summary}
In this paper we have studied the flow of energy in geometrically thick
discs, both optically thin and thick. We based our study on a set of
state-of-the-art, three-dimensional simulations of accretion flows
performed in the framework of general relativity. Our results are as
follows:
\begin{enumerate}
\item \textit{Total feedback:} Thick accretion flows on a non-rotating
BH show the same total efficiency of $3\% \dot M c^2$ (roughly
$50\%$ of the thin disc efficiency) independent
of the accretion rate. Both optically thin, ADAF-like flows, and
super-Eddington optically thick flows liberate energy at the same
rate. This energy is ultimately distributed between the energy
carried by outflow and radiation.
The efficiency of accretion flows onto a rotating BH is increased by the
modified spacetime geometry and the
rate at which BH rotational energy is extracted through the
Blandford-Znajek process.
\item \textit{Approximated formulae:} One may
approximate the total amount
of feedback coming from an accreting system using
Eqs.~\ref{e.Lfbthin} and \ref{e.Lfbthick}. These formulae assume that
the total efficiency of the feedback disc component, i.e., the amount
of energy extracted from the disc itself, and not from the jet, equals half of the thin disc
efficiency for
geometrically thick discs, as found in this work.
\item \textit{Energy in the outflow:} The energy outflowing from the system can be ultimately carried away
only by radiation or outflowing gas. If a disc cannot cool
efficiently, i.e., if it is not luminous (in radiation), then most of
the liberated energy must be carried by the outflow. This is
true both for optically thin discs and for optically thick discs
which at sufficiently high accretion rates efficiently trap radiation
in the gas. Therefore, we can
infer the existence of outflows even if they are not emerging strongly
within
the computational domain.
The amount of energy, either kinetic or
thermal, carried by the outflow equals,
\begin{equation}
L_{\rm outflow}=L_{\rm fb}- L_{\rm rad},
\end{equation}
where the radiative luminosity is zero for optically thin discs and
may be approximated as,
\begin{equation}
L_{\rm rad}\approx L_{\rm Edd}\left(1+\log \dot M/\dot M_{\rm Edd}\right),
\end{equation}
for super-critical discs.
\item \textit{Outflowing mass:} Our study is based on simulations
covering only the innermost region of BH accretion. We find that
a significant amount of energy flows out of that region and likely
results in mass outflow from larger radii. However,
despite the fact that we know how energetic the outflow can be, we
are not able to say how much gas is blown away. The
relation between the two depends on the Bernoulli function of the
outflowing gas, e.g., marginally bound gas will carry
virtually zero energy per unit mass. For similar reasons we
cannot determine the amount of momentum carried with the outflow. As \cite{begelman-12} points
out, accretion through thick, advective discs leads to either winds
or breezes. To find how much gas is lost on
the way towards the BH, one has to solve the problem consistently on
larger scales than covered by the simulations presented
here. Recently, significant progress in this direction has been
made by \cite{yuan+15} and \cite{bu+15}, who studied optically thin
accretion flows and found that gas is likely
lost between $r\approx40$ and the Bondi radius, and that the mass
loss rate
in the wind increases proportionally to radius according to $\dot
M_{\rm out}=\dot M_{\rm BH} (r/40)$.
\item \textit{Angular distribution of feedback:} In the case of optically thin accretion, the liberated energy can
flow out in two channels. The jet component related to the extraction
of BH rotational energy is collimated along the axis and ultimately
results in a narrow, relativistic magnetized jet. The accretion
component flows outward in the bulk of the disc and is responsible for
driving the outflows at large radii or ultimately leaves the system in
convective eddies. Such energy flows will have a
quasi-spherical distribution in space and will likely interact
efficiently with the ISM.
To some extent similar properties characterize outflows in the case of
super-critical, optically thick discs. The jet component is likely to
be collimated along the axis, while the outflow component covers a wide
range of angles. The radiation coming out of the system may have
initially a mildly collimated component in the funnel region \citep[the
radiative jet, see][]{sadowski+radjets}. However, it either converts
into kinetic jet (if there is enough coupling between radiation and
gas in the funnel), or ultimately diffuses when the funnel opens because of only mild
collimation of the photon beams \citep{jiang+14b,narayan+heroic}. Therefore, the radiation component
should be expected to cover large solid angle from the point of view of a
distant observer.
Thin accretion discs, which we did not study here, are expected to
produce largely isotropic radiative feedback.
\item \textit{Models of thick discs:} Our study shows that
outflows in some form, or convection, are inevitable for thick discs. This is not surprising because
advection dominated accretion involves fluid which is only weakly
bound to the BH \citep{narayanyi-94,adios}. The existence of outflows
or convection in principle rules out the well
known and celebrated models of thick accretion flows which assume
that gas is not lost on the way towards the BH, and which do not
allow for convection, i.e., optically thick slim discs
\citep{abramowicz+88} and optically thin ADAFs
\citep{narayanyi-94,abramowicz+adafs}. However, the outflow and
convective regions
do not extend all the way down to the BH. Therefore, the innermost
region \textit{can} be described with the use of these models.
In the outer region the situation is less clear because all the
proposed semi-analytic models for convection and winds suffer
from some problems, or they have been developed, as the recent
inflow-outflow solution by \cite{begelman-12}, in application to
the inner part of the flow. In particular, the models of convectively
dominated discs \citep{quataert-cdafs,narayan-cdafs,narayan-cdafs2,abramowicz-cdafs}
in optically thin flows are self-similar. The \cite{dotan+11} model for slim discs with winds uses sophisticated
descriptions of the disc and the wind separately, but assumes
an ad hoc wind launching mechanism. Finally, the simple and widely
used ADIOS model \citep{adios} makes strong assumptions which have been
criticized by \cite{abramowicz+00}.
\end{enumerate}
\section{Acknowledgements}
AS acknowledges support
for this work
by NASA through Einstein Postdoctoral Fellowship number PF4-150126
awarded by the Chandra X-ray Center, which is operated by the
Smithsonian
Astrophysical Observatory for NASA under contract NAS8-03060. AS thanks
the Harvard-Smithsonian Center for Astrophysics for its hospitality.
This research was supported by the Polish NCN grants UMO-2013/08/A/ST9/00795 and DEC-2012/04/A/ST9/00083.
JPL was supported in part by a grant from the French Space Agency CNES.
RN was
supported in part by NSF grant AST1312651 and NASA grant TCAN
NNX14AB47G.
The authors acknowledge computational support from NSF via XSEDE resources
(grant TG-AST080026N), and
from NASA via the High-End Computing (HEC) Program
through the NASA Advanced Supercomputing (NAS) Division at Ames
Research Center.
\bibliographystyle{mn2e}
\section{Introduction}\label{sec:introduction}
Deep learning is the foundation for many of today's applications, such as computer vision, natural language processing, and speech recognition. After AlexNet \cite{alexnet} made a breakthrough in 2012 by significantly outperforming other object detection solutions and winning the ILSVRC competition \cite{islvrc}, CNNs gained a well-deserved popularity for computer vision applications. This energized the research community to architect models capable of achieving higher accuracy (which led to the development of many higher-accuracy models, including GoogleNet \cite{googlenet} and ResNet \cite{resnet}), increased the demand and research for hardware platforms capable of fast execution of these models \cite{NcdNpeAmirzaei,nesta}, and created a demand for lower-complexity models \cite{icnn,icnntecs,exploit} capable of reaching high levels of accuracy.
Even though the evolution of their model structure and the improvement in their accuracy have been very promising in recent years, it has been shown that convolutional neural networks are prone to adversarial attacks through simple perturbations of their input images \cite{fgsm,bim,mim,deepfool}. The algorithms proposed in \cite{fgsm,bim,mim,deepfool} have demonstrated how easily normal images can be perturbed by adding a small noise in order to fool neural networks. The main idea is to add a noise vector containing small values to the original image, in the opposite or same direction as the gradient calculated by the target network, to produce adversarial samples \cite{fgsm,bim}.
The wide-spread adoption of CNNs in various applications and their unresolved vulnerability to adversarial samples have raised many safety and security concerns and motivated a new wave of deep learning research. To defend against adversarial attacks, the concept of adversarial training was proposed in \cite{fgsm} and was further refined and explored in \cite{bim,mim}. Adversarial training is a data augmentation technique in which, by generating a large number of adversarial samples and including them with correct labels in the training set, the robustness of the network against adversarial attacks improves. Training an adversarial classifier to determine if the input is normal or adversarial, and using an autoencoder (AE) to remove the input image noise before classification, are some of the other approaches taken by \cite{fgsm} and \cite{intriguing}. Finally, \cite{distillation} utilizes distillation as a defense method against adversarial attacks, in which a network of a similar size to the original network is trained in a way that hides the gradients between the softmax layer and its predecessor.
In this work, we combine denoising and classification into a single solution and propose the Code-Bridged Classifier (CBC). We illustrate that the CBC is 1) more robust against adversarial attacks than a similar CNN solution protected by a denoising AE, and 2) substantially less computationally complex than such models.\par
\section{Background and Related Work}\label{sec:background}
The vulnerability of deep neural networks to adversarial examples was first investigated in \cite{intriguing}. Since this early work, many new algorithms for generating adversarial examples, and a variety of solutions for defending against these attacks, have been proposed. The following is a summary of the attack and defense models related to our proposed solution:
\subsection{Attack Models}
Many effective attacks have been introduced in the literature. Some of the most notable attacks include the Fast Gradient Sign Method (FGSM) \cite{fgsm}, the Basic Iterative Method \cite{bim}, the Momentum Iterative Method \cite{mim}, DeepFool \cite{deepfool}, and Carlini \& Wagner~\cite{cw}; each method is described below.
\subsubsection {FGSM attack} In \cite{fgsm}, a simple method is suggested to add a small perturbation to the input to create an adversarial image. The adversarial image is obtained by:
\begin{equation}\label{eq_fgsm}
x'= x + \epsilon sign(\nabla_x J(\theta, x, y))
\end{equation}
in which $x$ is the input image, $y$ is the correct label, $\theta$ denotes the network parameters, and $J$ is the loss function. $\epsilon$ defines the magnitude of the noise: the larger the $\epsilon$, the higher the possibility of misclassification. Figure \ref{fig:fgsm} illustrates how such an adversarial perturbation can change the classifier's prediction.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/fgsm_example_one_row-cropped.pdf}
\caption{The FGSM attack is used to add adversarial noise to the original image. The adversarial perturbation remains imperceptible to the human eyes but causes the neural network to misclassify the input image.}
\label{fig:fgsm}
\end{figure}
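To make the attack concrete, the following is a minimal PyTorch sketch of the FGSM update (PyTorch is the framework used in our implementation, but the function below is an illustrative sketch rather than our exact code; it assumes inputs normalized to $[0,1]$ and a model that outputs logits):
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Sketch of x' = x + epsilon * sign(grad_x J(theta, x, y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # J(theta, x, y)
    loss.backward()                           # gradient w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range
\end{verbatim}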
\subsubsection {Basic Iterative Method (BIM) attack \cite{bim}}
Also known as the Iterative-FGSM attack, the BIM attack iterates the FGSM attack, increasing its effectiveness. The BIM attack can be expressed as:
\begin{equation}\label{eq-bim}
x'_0=x ,~x'_n = x'_{n-1} + \epsilon sign(\nabla_x J(\theta, x'_{n-1}, y))
\end{equation}
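Since BIM simply iterates the FGSM step, it can be sketched by reusing the \texttt{fgsm\_attack} function above (a hedged sketch; practical implementations usually also clip $x'_n$ to an $\epsilon$-ball around the original $x$, a detail omitted here):
\begin{verbatim}
def bim_attack(model, x, y, epsilon, n_iter):
    # Iterated FGSM: x'_0 = x; x'_n = FGSM step applied to x'_{n-1}.
    x_adv = x
    for _ in range(n_iter):
        x_adv = fgsm_attack(model, x_adv, y, epsilon)
    return x_adv
\end{verbatim}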
\subsubsection {Momentum Iterative attack \cite{mim}}
In the Momentum Iterative attack, momentum is also considered when calculating the adversarial perturbation; it is expressed as:
\begin{equation}\label{eq-mim}
\begin{aligned}
g_0 = 0, ~g_{n}=\mu g_{n-1}+\frac{\nabla_x J(\theta, x'_{n-1}, y)}{||\nabla_x J(\theta, x'_{n-1}, y)||_1}\\
\\
x'_n = x'_{n-1} + \epsilon sign(g_n)
\end{aligned}
\end{equation}
in which $\mu$ is the momentum factor, and $||\nabla_x J(\theta, x'_{n-1}, y)||_1$ is the $L_1$ norm of the gradient.
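A hedged sketch of this update in PyTorch follows; for simplicity, the $L_1$ norm is taken over the whole gradient tensor, whereas a batched implementation would normalize per sample:
\begin{verbatim}
import torch
import torch.nn.functional as F

def mim_attack(model, x, y, epsilon, mu, n_iter):
    # Accumulate an L1-normalized gradient with decay mu,
    # then step in the direction of its sign.
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)
        x_adv = (x_adv + epsilon * g.sign()).clamp(0.0, 1.0).detach()
    return x_adv
\end{verbatim}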
\subsubsection {Deepfool \cite{deepfool}}
The DeepFool attack is formulated to find adversarial examples that are more similar to the original images. It assumes that neural networks are completely linear and that classes are distinctly separated by hyper-planes. Under these assumptions, it derives an optimal solution for finding adversarial examples. However, because neural networks are nonlinear, the solution step is iterated. We refer to \cite{deepfool} for the details of the algorithm.
\subsubsection { Carlini \& Wagner (CW) \cite{cw}}
Finding adversarial examples in the CW attack is an iterative process that is conducted against multiple defense strategies. The CW attack uses the Adam optimizer and a specific loss function to find adversarial examples that are less distorted than those of other attacks; for this reason, the CW attack is much slower. Adversarial examples can be generated under the $L_0$, $L_2$ and $L_{\infty}$ norms. The attack introduces an auxiliary variable $w$ and defines the perturbation as:
\begin{equation}\label{cost-cw}
\delta_i = \frac{1}{2}(tanh(w_i)+1)-x_i
\end{equation}
Then, considering the $L_2$ norm, this perturbation is optimized with respect to $w$:
\begin{equation}\label{cw-opt}
\min_w||\delta||_2^2 + c\cdot f(\delta + x)
\end{equation}
in which function $f$ is defined as follows:
\begin{equation}\label{cw-f}
f(x') = \max(\max\{Z(x')_i : i \neq t\} - Z(x')_t, -\kappa)
\end{equation}
In the above equation, $Z(x')_i$ is the pre-softmax output (logit) for class $i$, $t$ represents the target class, and $\kappa$ is a parameter controlling the confidence of the misclassification.
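The margin function $f$ can be written directly in code. A sketch for a single sample, where \texttt{logits} is the pre-softmax vector $Z(x')$ (the names are ours, for illustration only):
\begin{verbatim}
import torch

def cw_f(logits, t, kappa):
    # f(x') = max(max{Z(x')_i : i != t} - Z(x')_t, -kappa)
    others = logits.clone()
    others[t] = float('-inf')   # exclude the target class
    return torch.clamp(others.max() - logits[t], min=-kappa)
\end{verbatim}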
\subsection{Transferability of Adversarial Examples}
All previously described attacks are carried out in a white-box setting, in which the attacker knows the architecture, hyperparameters, and trained weights of the target classifier, as well as the existing defense mechanism (if any). It is very hard to defend against white-box attacks because the attacker can always use this information to produce new, working adversarial inputs. However, adversarial attacks can also be considered in two other settings: gray-box and black-box attacks. In a gray-box attack, the attacker knows the architecture but has access neither to the parameters nor to the defense mechanism. In a black-box attack, the attacker knows neither the architecture, nor the parameters, nor the defense method.
Unfortunately, it has been shown that adversarial examples generalize well across different models. In \cite{intriguing}, it was shown that many of the adversarial examples that are generated for (and are misclassified by) the original network are also misclassified by a different network that is trained from scratch with different hyperparameters or on disjoint training sets.\par
The findings of \cite{intriguing} were confirmed by later works: in \cite{universal}, universal perturbations were successfully found that generalize not only across images but also across deep neural networks. These perturbations can be added to any image, and the resulting adversarial example is transferable across different models. The works in \cite{transferability,practical} show that adversarial examples that cause one model to misclassify can have the same influence on another model trained for the same task. Therefore, an attacker can train her own substitute model, craft adversarial images on it, and rely on the transferability of the adversarial examples, confident that there is a high chance of fooling the target classifier. We argue that our proposed solution can effectively defend against black-box attacks.\par
\subsection{Defenses}
Several works have investigated defense mechanisms against adversarial attacks. In \cite{fgsm}, adversarial training is proposed to enhance the robustness of the model. In \cite{comparative,magnet} autoencoders are employed to remove the adversarial perturbation and reconstruct a clean input. In \cite{distillation} distillation is used to hide the gradients of the network from the attacker. Other approaches are also used as a defense mechanism \cite{gradreg,detection,protecting}. In this section, we explore the ideas for defending against adversarial examples.
\subsubsection{Adversarial Training}
The basic idea of adversarial training \cite{fgsm} is to train a robust classifier by adding many adversarial examples (generated using different attacks) to the training dataset \cite{mim,deepfool,minmax}. The problem with this approach is that it only makes the network robust against known (and trained-for) attacks. It also increases the training time significantly.
\subsubsection{Defensive Distillation}
Distillation was originally proposed as a way to train a smaller student model from a larger teacher model, with the objective that the smaller network predict the probabilities produced by the bigger network. The distillation technique takes advantage of the fact that a probability vector contains more information than class labels alone and is, hence, a more effective means for training a smaller network. In defensive distillation, the second network is the same size as the first network \cite{distillation}. The main idea is to hide the gradients between the pre-softmax and softmax layers to make the attacker's job more difficult. However, it was illustrated in \cite{cw} that this defense can be defeated by using the pre-softmax layer outputs in the attack algorithm and/or choosing a different loss function.
\subsubsection{Gradient Regularization}
Input gradient regularization was first introduced in \cite{gradreg2} to improve the generalization of training in neural networks through a double backpropagation method. \cite{distillation} mentions double backpropagation as a defense, and \cite{gradreg} evaluates the effectiveness of this idea for training a more robust neural network. This approach intends to ensure that if there is a small change in the input, the change in the KL divergence between the predictions and the labels will also be small. However, this approach is sub-optimal because of the blindness of the gradient regularization.
\subsubsection{Adversarial Detection}
Another approach to making neural networks more robust is to detect adversarial examples before feeding them to the network \cite{detection,detection2}. \cite{detection} tries to find a decision boundary that separates adversarial and clean inputs. \cite{detection2} exploits the fact that the perturbation of pixel values by an adversarial attack alters the dependence between pixels. By modeling the differences between adjacent pixels in natural images, deviations due to adversarial attacks can be detected.
\subsubsection{Autoencoders}
\cite{comparative} analyzes the use of normal and denoising autoencoders as a defense method. Autoencoders are neural networks that encode the input and then try to reconstruct the original image as their output. \cite{magnet}, as illustrated in Fig. \ref{fig:magnet}, uses a two-level module with autoencoders to detect and reform adversarial images before feeding them to the target classifier. However, this method may alter clean images and also adds computational overhead to the whole defense-classifier module. To improve the method introduced in \cite{magnet}, \cite{sabokrou2019self} presents an efficient autoencoder with a new loss function which is learned to preserve the local neighborhood structure on the data manifold.
\begin{comment}
\begin{table}[t]
\caption{Comparison of Defense Methods}
\label{tab:defense}
\scalebox{0.8}{
\begin{tabular}{|l|l|}
\hline
\textbf{Defense} & \textbf{Disadvantage}\\
\hline
Adversarial Training \cite{fgsm, mim,deepfool,minmax} & Prone to unseen adversarial examples\\
\hline
Defensive Distillation \cite{distillation}& Beaten by attacks that use the pre-softmax layer\\
\hline
Random Ensemble \cite{protecting} & Computational complexity \\
\hline
Gradient Regularization \cite{gradreg} & Sub-optimal due to blindness of gradient regulation\\
\hline
Adversarial Detection \cite{detection,detection2} & Not effective for large size inputs\\
\hline
Autoencoders \cite{comparative,magnet}& Computational overhead\\
\hline
\end{tabular}
}
\end{table}
\end{comment}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/magnet.pdf}
\caption {The Magnet defense in \cite{magnet} is a two-stage defense: the first stage tries to detect the adversarial examples. The images that pass the first stage are denoised using an autoencoder in the second stage and then fed to the classifier.}
\label{fig:magnet}
\end{figure}
\section{Problem statement}\label{sec:problem}
An abstract view of a typical Auto-Encoder (AE) and a Denoising Auto-Encoder (DAE) is depicted in Fig. \ref{ae}. An AE is comprised of two main components: 1) the \textit{encoder}, $\varphi(X)$, that extracts the corresponding latent space for input $X$, and 2) the \textit{decoder}, $\zeta(\varphi(X))$, that reconstructs a representation of the input image from its compressed latent space representation. Ideally, the decoder can generate the exact input sample from the latent space, and the relation between the input and output of an AE can be expressed as $\zeta(\varphi(X)) = X$. However, in reality, the output of an AE is to some extent different from the input. This difference is known as the reconstruction error and is defined as $E_R = |\zeta(\varphi(X)) - X|$ \cite{reconstruct1}. When training an AE, the objective is to minimize $E_R$.
A DAE is similar to an AE but is trained using a different process. As illustrated in Fig. \ref{ae}.b, the input space of a DAE consists of the noisy input samples, $X+\epsilon$, and their corresponding latent space is generated by $\varphi(X+\epsilon)$. Unlike in an AE (in which $E_R$ is defined as the difference between the input and output of the AE), the $E_R$ of a DAE is defined as $E_R = |\zeta(\varphi(X+\epsilon)) - X|$ \cite{reconstruct1}. In other words, the reconstruction error is the difference between the output of the decoder, $\zeta(\varphi(X+\epsilon))$, and the clean input samples. An ideal DAE removes the noise $\epsilon$ from the noisy input and generates the clean sample $X$.
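A minimal PyTorch sketch of one DAE training step is given below; the additive Gaussian corruption and the MSE reconstruction loss are illustrative assumptions, not necessarily the exact choices used in our experiments:
\begin{verbatim}
import torch
import torch.nn.functional as F

def dae_train_step(encoder, decoder, x_clean, noise_std, optimizer):
    # Corrupt the input, reconstruct it, and minimize the reconstruction
    # error E_R = |zeta(phi(X + eps)) - X| (measured here with MSE).
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    x_rec = decoder(encoder(x_noisy))          # zeta(phi(X + eps))
    loss = F.mse_loss(x_rec, x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}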
This refining property of DAEs makes them an appealing defense mechanism against adversarial examples. More precisely, by placing one or more DAEs at the input of a classifier, the added adversarial perturbations are removed and a refined input is fed into the subsequent classifier. The effectiveness of this approach highly depends on the extent to which the underlying DAE is close to an ideal DAE (one that completely refines the perturbed input). Although a well-trained DAE refines the perturbed input to some extent, it also imposes reconstruction noise on it. As an example, assume that $\epsilon$ in Fig. \ref{ae}.b is zero, meaning the input $X$ is a clean image. In this case the output is $X+E_R$. If the size of $E_R$ is large enough, it can move the input $X$ over the classifier's decision boundary. This, as illustrated in Fig. \ref{adv}, results in the input $X$ being predicted as a member of class $X^*$. In this scenario, the DAE not only fails to defend against adversarial examples, but also generates noise that could lead to the misclassification of clean input images.
The other problem of using an AE or DAE as a pre-processing unit to refine the image and combat adversarial attacks is the added computational complexity. Adding an autoencoder as a pre-processor to a CNN increases 1) the energy consumed per classification, 2) the latency of each classification, and 3) the number of parameters of the overall model.
In the following section, we propose a novel solution for protecting the model against adversarial attacks that addresses both the computational complexity problem and the reconstruction error issue of using an AE as a pre-processor.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\columnwidth]{figures/dae.png}
\caption {An abstract view of a) a typical Auto-Encoder, and b) a Denoising Auto-Encoder. The two major components of both structures are 1) the encoder, $\varphi(.)$, which extracts the latent space of input samples, and 2) the decoder, $\zeta(.)$, which reconstructs input samples from the latent space.}
\label{ae}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{figures/adv.PNG}
\caption {Reconstruction error ($E_R$) of a decoder can also result in misclassification if the features extracted from the reconstructed image $X^*$ are pushed outside of the classifier's learned decision boundary.}
\label{adv}
\end{figure}
\section{Proposed Method}\label{sec:method}
Using DAEs to refine perturbed input samples before feeding them into the classifier is a typical defense mechanism against adversarial examples \cite{comparative,magnet}. A general view of such a defense is illustrated in Fig. \ref{pdae}.(top). In this figure, $\varphi(.)$ and $\zeta(.)$ are the encoder and decoder of the DAE, respectively, $\varphi'(.)$ represents the first few CONV layers of the CNN classifier, and $C(.)$ represents the later CONV stages. In this defense, the DAE and the CNN are trained separately: the DAE is trained to minimize the reconstruction error, while the CNN is trained to reduce a predetermined loss function (e.g., the $L1$ or $L2$ loss). An improved version of this defense trains the two serially: in the first stage the DAE is trained, and then the CNN classifier is trained using the output of the DAE as its input samples. Note that the second solution tends to reach a higher classification accuracy. Regardless of the training choice, the addition of a DAE to the CNN classifier adds to its complexity. Aside from the added computational complexity, the problem with this defense mechanism is that AEs can act as a double agent: on one hand, refining the adversarial examples is an effective means of removing the adversarial perturbation (noise) from the input image and is a valid defense mechanism; on the other hand, the reconstruction error, $E_R$, could force the misclassification of clean input images. To correct the behavior of the DAE, we propose the concept of the Code-Bridged Classifier (CBC), aiming to 1) eliminate the impact of the reconstruction error of the underlying DAE, and 2) reduce the computational complexity of the combined DAE and classifier to widen its applicability.
\begin{figure}[t]
\centering
\includegraphics[width=0.90\columnwidth]{figures/proposed_error.pdf}
\caption {(Top) the defense proposed in \cite{magnet}, where a DAE filters the noise in the input image before feeding it to a classifier. (Bottom) the CBC model, in which the decoder of the DAE and the first few CONV layers of the base classifier are removed. Note that the decoder in the CBC is only used for training and is removed after training (for evaluation). In this figure, $X$, $X'$, $X''$ are, respectively, the clean input sample, the noisy input sample, and the output of the DAE. $Y$ is the corresponding ground truth, and $E_R$ and $E_C$ are the reconstruction error and classification error, respectively.}
\label{pdae}
\end{figure}
Fig. \ref{pdae}.(bottom) illustrates our proposed solution, in which the encoder $\varphi(.)$ of a trained DAE and a part of the original CNN, $C(.)$, are combined to form a hybrid yet compressed model. In this model, the decoder $\zeta(.)$ of the DAE and the first few CONV layers of the CNN model, $\varphi'(.)$, are eliminated. In the CBC, $\zeta(.)$ and $\varphi'(.)$ are eliminated with the intuition that, together, they act as an Auto-Decoder (AD). As opposed to an AE, the AD translates the latent space to an image and back to another latent space (the intermediate representation of the image in the CNN, captured by the output channels of $\varphi'(.)$). This is problematic because 1) the decoder $\zeta(.)$ is not ideal and introduces reconstruction error to the refined image, and 2) decoding and re-encoding the image (the first few CONV layers act as an encoder) only translates the image from one latent space to another without adding any information to it. Such code translation (latent space to latent space) can therefore be eliminated, and the code at the output of $\varphi(.)$ can be used directly for classification. This allows us to eliminate the redundant AD (the decoder $\zeta(.)$ and the first few CONV layers of the original CNN that act as an encoder), which not only reduces the computational complexity of the overall model but also improves its accuracy by eliminating the noise related to the image reconstruction of the decoder $\zeta(.)$.
The training process for the CBC is serial: we first train a DAE and separate the encoder section of the model. The trained encoder is then paired with a CNN that is smaller than the original model. One way to build a smaller model is to remove the first few CONV layers of the original model and adjust the width of the DAE and the partial CNN to match the filter sizes. The rule of thumb for the elimination of layers is to remove as many CONV layers as there are in the encoder of the AE. The next step is to train the partial CNN while fixing the weights of the encoder, allowing backpropagation to alter only the weights in the classifier $C(.)$.
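In PyTorch, this assembly step amounts to freezing the trained encoder and stacking it with the truncated classifier; the sketch below uses illustrative module names:
\begin{verbatim}
import torch.nn as nn

def build_cbc(trained_encoder, truncated_cnn):
    # The DAE's decoder zeta(.) and the first CONV layers phi'(.) of the
    # base CNN are dropped; the classifier consumes the code directly.
    for p in trained_encoder.parameters():
        p.requires_grad = False   # freeze the denoising encoder
    return nn.Sequential(trained_encoder, truncated_cnn)
\end{verbatim}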
\section{Implementation Details} \label{sec:framework}
In this section, we investigate the effectiveness of our proposed solution against adversarial examples prepared for the FashionMNIST \cite{FashionMNIST} and CIFAR-10 \cite{cifar} datasets. To be able to compare our work with the prior work in \cite{magnet}, we build our CBC solution on top of the CNN models described as \textit{Base} in Tables \ref{tab:classifier1} and \ref{tab:classifier2}. In these tables, the DAE-CNN columns represent the solution proposed in \cite{magnet}, in which a full autoencoder pre-processes the input to the CNN model, and the CBC columns describe the modified model corresponding to our proposed solution.
The DAE, as described in Tables \ref{tab:classifier1} and \ref{tab:classifier2}, includes 2 convolutional layers for encoding and 2 transposed-convolution layers for decoding. Its input is a (possibly noisy) image, and its output is an image of the same size reconstructed by the autoencoder.
To build the CBC classifier, we stack the trained encoder of the DAE with an altered version of the target classifier in which some of the CONV layers are removed. The trade-off in the number of layers to be removed is discussed in the next section. Considering that the encoder quickly reduces the size of the input image to a compressed latent-space representation, the CNN following the latent space is not wide. For this reason, we also remove the max-pooling layers, making sure that the number of parameters of the CBC classifier when it reaches the softmax layer is equal to that of the base architecture. In our implementation, all the attacks and models are implemented using the PyTorch \cite{pytorch} framework. To train the models we use only clean samples, freezing the weights of the encoder part and training the remaining layers. The training parameters of the target classifier and the proposed architecture are listed in Table \ref{tab:train}. We evaluated our proposed solution against the FGSM \cite{fgsm}, Iterative \cite{bim}, DeepFool \cite{deepfool}, and Carlini \& Wagner \cite{cw} adversarial attacks.
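Because the encoder weights are frozen, only the remaining layers are handed to the optimizer. A sketch consistent with the Adam settings of Table \ref{tab:train} (the \texttt{build\_cbc} helper and module names come from the sketch in the previous section):
\begin{verbatim}
import torch

cbc = build_cbc(trained_encoder, truncated_cnn)
optimizer = torch.optim.Adam(
    (p for p in cbc.parameters() if p.requires_grad),  # classifier only
    lr=0.001)   # FashionMNIST learning rate
\end{verbatim}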
\begin{table}[t]
\centering
\caption{Architecture of the FashionMNIST Classifiers}
\label{tab:classifier1}
\scalebox{0.70}{
\begin{tabular}{|l|l|l||l|l||l|l|}
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{2}{|c|}{Base} &
\multicolumn{2}{|c|}{DAE-CNN} &
\multicolumn{2}{|c|}{CBC} \\
\cline{2-7}
& Type & Size & Type & Size & Type & Size \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Defense}} & & & Conv.ReLU & $4 \times 4 \times 16$ & Conv.ReLU & $4 \times 4 \times 16$\\
& & & Conv.ReLU & $4 \times 4 \times 48 $ & Conv.ReLU & $4 \times 4 \times 48$\\
& & & ConvTran.ReLU & $4 \times 4 \times 48$ & & \\
& & & ConvTran.ReLU & $4 \times 4 \times 16$ & & \\
\hline
\multirow{8}{*}{\rotatebox[origin=c]{90}{CNN}}&Conv.ReLU & $3 \times 3 \times 32$ & Conv.ReLU & $3 \times 3 \times 32$ & Conv.ReLU & $3 \times 3 \times 64$ \\
& Conv.ReLU & $3 \times 3 \times 32$ & Conv.ReLU & $3 \times 3 \times 32$ & Conv.ReLU & $3 \times 3 \times 64$\\
&Max Pool & $2 \times 2 $ &Max Pool & $2 \times 2$ & FC.ReLU & $4096 \times 200$\\
&Conv.ReLU & $3 \times 3 \times 64$ &Conv.ReLU & $3 \times 3 \times 64$ & FC.ReLU & $200 \times 200$\\
&Conv.ReLU & $3 \times 3 \times 64$ &Conv.ReLU & $3 \times 3 \times 64$ & Softmax & 10\\
&FC.ReLU & $4096 \times 200$ &FC.ReLU & $ 4096 \times 200$ & & \\
&FC.ReLU& $200 \times 200$ &FC.ReLU & $200 \times 200$ & & \\
&Softmax & 10&Softmax & 10 & & \\
\hline
\end{tabular}
}
\end{table}
\begin{table}[t]
\centering
\caption{Architecture of the CIFAR-10 Classifiers}
\label{tab:classifier2}
\scalebox{0.70}{
\begin{tabular}{|l|l|l||l|l||l|l|}
\hline
\multicolumn{1}{|c|}{} &
\multicolumn{2}{|c|}{Base} &
\multicolumn{2}{|c|}{DAE-CNN} &
\multicolumn{2}{|c|}{CBC} \\
\cline{2-7}
&Type & Size & Type & Size & Type & Size \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Defense}}& & &Conv.ReLU & $4 \times 4 \times 48$ & Conv.ReLU & $4 \times 4 \times 48$\\
& & &Conv.ReLU & $4 \times 4 \times 72$ & Conv.ReLU & $4 \times 4 \times 72$\\
& & &ConvTran.ReLU & $4 \times 4 \times 72$ & & \\
& & &ConvTran.ReLU & $4 \times 4 \times 48$ & & \\
\hline
\multirow{13}{*}{\rotatebox[origin=c]{90}{CNN}}&Conv.ReLU & $3 \times 3 \times 96$ &Conv.ReLU & $3 \times 3 \times 96$ & Conv.ReLU & $3 \times 3 \times 96$ \\
&Conv.ReLU & $3 \times 3 \times 96$ &Conv.ReLU & $3 \times 3 \times 96$ & Conv.ReLU & $3 \times 3 \times 192$\\
&Conv.ReLU & $3 \times 3 \times 96$ &Conv.ReLU & $3 \times 3 \times 96$ & Conv.ReLU & $3 \times 3 \times 192$\\
&Max Pool & $2 \times 2 $ & Max Pool & $2 \times 2$ & Conv.ReLU & $3 \times 3 \times 192$\\
&Conv.ReLU & $3 \times 3 \times 192$ &Conv.ReLU & $3 \times 3 \times 192$ & Conv.ReLU & $3 \times 3 \times 192$\\
&Conv.ReLU & $3 \times 3 \times 192$ &Conv.ReLU & $3 \times 3 \times 192$ & Conv.ReLU & $3 \times 3 \times 192$\\
&Conv.ReLU & $3 \times 3 \times 192$ &Conv.ReLU & $3 \times 3 \times 192$ & Conv.ReLU & $1 \times 1 \times 192$\\
&Max Pool & $2 \times 2$ &Max Pool & $2 \times 2$ & Conv.ReLU & $1 \times 1 \times 192$\\
&Conv.ReLU & $3 \times 3 \times 192$ &Conv.ReLU & $3 \times 3 \times 192$ & Conv.ReLU & $1 \times 1 \times 192$\\
& Conv.ReLU & $1 \times 1 \times 192$ & Conv.ReLU & $1 \times 1 \times 192$ & Avg Pool & \\
& Conv.ReLU & $1 \times 1 \times 192$ &Conv.ReLU & $1 \times 1 \times 192$ & Softmax & 10 \\
&Avg Pool & &Avg Pool & & & \\
&Softmax & 10&Softmax & 10 & & \\
\hline
\end{tabular}
}
\end{table}
\begin{table}[t]
\centering
\caption{Training Parameters}
\label{tab:train}
\scalebox{0.8}{
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Dataset & Optimization Method & Learning Rate & Batch Size & Epochs\\
\hline
FashionMNIST & Adam & 0.001 & 128 & 50\\
CIFAR-10 & Adam & 0.0001 & 128 & 150
\\
\hline
\end{tabular}
}
\end{table}
\section{Experimental Results}\label{sec:results}
By adopting the training flow described in Table \ref{tab:train}, the top-1 accuracies of the base classifiers (in Tables \ref{tab:classifier1} and \ref{tab:classifier2}) that we trained for FashionMNIST and CIFAR-10 are 95.1\% and 90\%, respectively. For evaluation purposes, we trained denoising autoencoders with different noise values for both datasets. The structure of the DAEs is shown in Tables \ref{tab:classifier1} and \ref{tab:classifier2}. The reconstruction error of the DAEs was around 0.24 and 0.54 for the FashionMNIST and CIFAR-10 datasets, respectively.
\subsection{Selecting altered CNN architecture:}
As discussed previously, the removal of the decoder of the DAE should be paired with removing the first few CONV layers from the base CNN and training the subsequent CONV layers to use the code (latent space) generated by the encoder as input. The number of layers to be removed was determined by a sweeping experiment in which the accuracy of the resulting model and its robustness against various attacks were assessed. Figure \ref{fig:sw_layer} shows the accuracy of CBC networks as the number of convolutional layers in the altered classifier is reduced compared to the base classifier. The experiment is repeated for both the FashionMNIST and CIFAR-10 datasets, and the robustness of each model against CW \cite{cw}, DeepFool \cite{deepfool}, and FGSM \cite{fgsm} with $\epsilon=0.5$ is assessed. As illustrated in Fig. \ref{fig:sw_layer}, the models remain insensitive to the removal of the first few layers (2 in FashionMNIST, and 5 in CIFAR-10), with a negligible ($\sim$1\%) change in accuracy for each completely removed CONV layer, until they reach a tipping point. The FashionMNIST model, being smaller, reaches that tipping point when 2 CONV layers are removed, whereas the CIFAR-10 model (being larger) is only slightly impacted even after 5 CONV layers are removed.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/sweep_layers_5-cropped.png}
\caption{The change in the accuracy of the CBC model for a) FashionMNIST and b) CIFAR-10 classification with respect to the number of removed CONV layers from the base CNN model.}
\label{fig:sw_layer}
\end{figure}
\subsection{CBC accuracy and comparison with prior art}
Fig. \ref{fig:dae_all_results} captures the results of our simulations, in which the robustness and accuracy of the base CNN, the solution in \cite{magnet} (in which a DAE refines the input for the base CNN model), and our proposed CBC are compared. For the CNN protected with a DAE, we provide two sets of results: 1) DAE-CNN model accuracy, where the DAE and CNN are trained separately and paired together; and 2) Retrained-DAE-CNN model accuracy, where the CNN is incrementally trained using the refined images produced at the output of the DAE. The comparison is done for the classification of both original and adversarial images. Results for the FGSM, DeepFool, and CW adversarial attacks are reported. For completeness, we also capture the robustness of each solution when the DAE is trained with different noise values.
\begin{figure*}[t]
\centering
\includegraphics[width=0.94\textwidth]{figures/fashion_mnist_cifar_dae_results_revised4-cropped.png}
\caption{Comparing the accuracy of the base CNN model, the DAE-protected CNN model (with and without retraining), and the CBC model when classifying benign images and adversarial images generated by different attack models: (left) FashionMNIST models, (right) CIFAR-10 models.}
\label{fig:dae_all_results}
\end{figure*}
As illustrated in Fig. \ref{fig:dae_all_results}, the base model is very sensitive to adversarial examples, and its accuracy in the presence of adversarial examples drops (depending on the attack type) from over 90\% to the range of 0\% to 20\%. The DAE-CNN model also performs very poorly, even on benign images. This is because of the reconstruction error introduced by the decoder, which severely affects the accuracy of the base CNN model. The Retrained-DAE-CNN model (representing the solution in \cite{magnet}) performs well in classifying benign images and also exhibits robustness against adversarial images. As illustrated, the robustness improves when it is paired with a DAE that is trained with high noise. The best solution, however, is the CBC: regardless of the attack type, the benchmark, and the noise level of the DAE, the CBC model outperforms the other solutions in both the classification accuracy on benign images and the robustness against adversarial examples. This clearly illustrates that the CBC model, by eliminating the reconstruction error, is a far more robust solution than DAE-protected CNN models.
\subsection{Reduction in model size and computational complexity}
In a CBC model, the DAE's decoder and the first few CONV layers of the base CNN model are removed; hence, a CBC model has a significantly smaller FLOP count (computational complexity). Table \ref{tab:parameter} captures the number of model parameters and the FLOP count of each of the CBC classifiers described in Tables \ref{tab:classifier1} and \ref{tab:classifier2}. Note that the majority of computation in a CNN model is related to its CONV layers, while a CONV layer has a small number of parameters. Hence, removing a few CONV layers may result in a small reduction in the number of parameters, but the reduction in the FLOP count of the CBC models is quite significant. As reported in Table \ref{tab:parameter}, the FLOP count of the FashionMNIST model is reduced by 1.8x and 2.8x compared to the base and DAE-protected models, while the parameter count is reduced by 0.37\% and 2.69\%, respectively. This saving is more significant for the CIFAR-10 CBC model, whose computational complexity is reduced by 3.1x and 3.3x compared to the base and DAE-protected models, respectively, while the number of parameters is reduced by 5.8\% and 13.4\%. The reduction in the FLOP count of the CBC model, as illustrated in Table \ref{tab:parameter}, also reduces the model's execution time. The execution time reported in Table \ref{tab:parameter} is that of each model over the validation set of each dataset (FashionMNIST and CIFAR-10) when executed on a Dell PowerEdge R720 with Intel Xeon E5-2670 (16-core) processors. As reported in Table \ref{tab:parameter}, the execution time of the CBC is even less than that of the base CNN. Note that the CBC also reduces the processing unit's energy consumption proportionally to the reduction in the FLOP count. Hence, the CBC not only resists adversarial attacks but (being significantly smaller than the base model) also reduces the execution time and the energy consumed per classification.
\begin{table}[t]
\centering
\caption{Comparison of the number of parameters, computational complexity and execution time of CBC and the base model with AE and without AE protection.}
\label{tab:parameter}
\scalebox{0.88}{
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Dataset & Model & Flops & Parameters & Execution time \\
\hline
\multirow{3}{*}{FashionMNIST}& Base CNN & 9.08 MMac & 926.6 K & 463.4 s \\
&AE-CNN\cite{magnet} & 14.3 MMac & 951.81 K & 562.3 s\\
&CBC & 5.04 MMac & 926.25 K & 293.7s \\
\hline
\multirow{3}{*}{CIFAR-10} & Base CNN& 0.59 GMac & 1.37 M & 1673.0 s\\
&AE-CNN \cite{magnet} & 0.63 GMac & 1.49 M & 1749.7 s \\
&CBC & 0.19 GMac & 1.29 M & 1191.6 s \\
\hline
\end{tabular}
}
\end{table}
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose the Code-Bridged Classifier (CBC) as a novel and extremely efficient means of defense against adversarial learning attacks. The resiliency and complexity reduction of the CBC result from directly using the code generated by the encoder of a DAE for classification. For this purpose, during the training phase, a decoder is instantiated in parallel with the model to tune the denoising encoder by computing and back-propagating the image reconstruction error. At the same time, the code is used for classification by a lightweight classifier. Hence, the encoder is trained for both feature extraction (contributing to the depth of the classifier and low-level feature extraction) and denoising. The parallel decoder is removed once the model is fully trained. This allows the CBC to achieve high accuracy by avoiding the reconstruction error of the DAE's decoder, while reducing the computational complexity of the overall model by eliminating the decoder and a few CONV layers from the trained model.
\renewcommand{\IEEEbibitemsep}{0pt plus 0.5pt}
\makeatletter
\IEEEtriggercmd{\reset@font\normalfont\fontsize{7.0pt}{6.5pt}\selectfont}
\makeatother
\IEEEtriggeratref{1}
\bibliographystyle{IEEEtran}
2,877,628,088,472 | arxiv |
\section{Introduction}
Unlike traditional recruitment methods, such as employee referrals, CV screening, and face-to-face interviews, AI is able to find patterns unseen by the human eye. It could be used to find the right person for the perfect role faster and more efficiently than ever before. In order to rapidly improve talent management and take full advantage of the power and potential AI offers, we need to shift our focus from developing more ethical HR systems to developing more ethical AI. The McKinsey Global Institute's model predicts that approximately 70 percent of companies will adopt some form of AI by 2030. When it comes to identifying talent or potential, most organizations still play it by ear: recruiters spend just a few seconds looking at a resume before deciding whom to “weed out” \cite{erecruit_cv_form}. When hiring, it is also very important to know the current strengths of the organization; a candidate hired with these in mind is more likely to be a good fit \cite{sel_soc_acc}. There is increasing evidence that AI could overcome this trade-off by deploying more dynamic and personalized scoring algorithms that are as sensitive to fairness as to accuracy for an organization.
AI has the power to provide deep hiring efficiencies and increase talent mobility, and it can ensure that the scores that come out of the hiring process are maximally predictive of outcomes that matter to employers, free from all types of bias, and indicative of the best-fitting candidate for the organization's work environment. AI and ML have immense potential to provide unique solutions in domains ranging from robotic automation to the biochemical industry \cite{Micro_nanorobots} \cite{Ai_ml_fornano_tech}.
Recent interesting work has performed real-time heart rate measurement from facial video using face detection techniques. This approach could also be implemented in the HR tech industry to interview candidates and learn more about their behavior \cite{Singh2017ContactlessAH}.
In this report, we present an analysis of the recruitment problem, followed by a proposition of the model. In Section 2, we cover a general understanding of the recruitment field, the problem statement of this project, and the implementation.
According to several surveys, the recruiting field is one of the main concerns of many CEOs \cite{hiring_approach, ceo_survey_talent}. In fact, according to the Society for Human Resource Management \cite{hiring_approach}, employers spend an enormous amount on hiring: an average of \$4,129 per job in the United States.
We observe that the recruiting process is not an easy task; it contains several stages. The average time to fill an open position is approximately 42 days \cite{avg_hire_cost}, and even with this long process, recruiters are often not sure that they have chosen the right candidate \cite{hiring_approach}.
To overcome this issue, many innovations have arisen recently in the recruiting field, such as video interview analysis, accurate CV parsers, AI personality tests, and AI candidate recommendation, among others. According to an analysis made by the LinkedIn talent blog in 2018 \cite{g_recruiting_trends}, there exist four trends that will shape the future of recruiting:
\begin{list}{--}{}
\item \textbf{Diversity}: Refers to the fact that changing demographics are diversifying communities, shrinking talent pools for companies that don't adapt. This trend is hard to ignore, since diverse teams are more productive, more innovative, and more engaged.
\item \textbf{New interviewing tools}: These tools try to improve ineffective traditional ways of interviewing. New tools concentrate on online soft-skills assessments, job auditions, and casual interviews, among others.
\item \textbf{Data}: Refers to data informing talent decisions, such as the prediction of hiring outcomes or smarter recruiting decisions based on data analysis.
\item \textbf{Artificial Intelligence}: Focused on automated candidate searches that quickly find prospects matching specific criteria. There are also technologies that help screen candidates before even speaking to them, and chatbots that can respond to candidate questions so recruiters don't have to.
\end{list}
While sourcing candidates is the process of contacting as many candidates as possible, screening candidates refers to the problem of selecting a candidate based on their CV. It makes sense that screening is one of the fields where innovation is needed, since profile/CV matching is a multi-dimensional task: the human eye alone cannot precisely compare many CVs across multiple dimensions.
\section{Problem Identification and Objectives}
\subsection{Problem}
Recruitment can be a very demanding and tough process for a company and its recruiters. Many times, recruiters end up hiring a not-so-competent candidate, which eventually renders all the effort put into the recruitment process a waste \cite{algo_recruiter}. Finding a perfect fit for a job position is tough, and the entire process of finding one can be very demanding. It is very important for a recruiter to pick a candidate whose competencies match the current strengths of the organization. In addition, there are many difficulties that candidates face while searching for their dream job. From finding a trusted platform, to searching for job roles, to tracking their applications and receiving feedback, the entire process has many roadblocks that make it very time-consuming and frustrating \cite{web_based_recruiting}. The root problem is that profile matching is multi-dimensional, and it is very difficult for an individual to cover all dimensions, select the best candidate, and justify the reasons for the selection and rejection of candidates.
As a way out, candidates and recruiters often make use of third parties, which have teams dedicated to a more manual approach to matching. This eventually results in a major chunk of a candidate's salary being lost to the third-party facilitators, while targeted problems remain unsolved.
Every organization works according to its unique values and strengths, and it is very hard to generalize a common matching solution that fits every organization. Many solutions exist in the market to automate matching, such as recommendation systems based on keyword matching, which often yield poor recommendations. There are also many AI-related solutions to the problem; however, if a candidate is recruited without considering the organization's values and strengths, it becomes hard for the candidate to thrive and give their best to the organization.
\subsection{Current market solution}
Table \ref{table:2} lists AI-driven talent platforms that have been assisting enterprises, along with details of their features.
\begin{table}[h!]
\caption{AI-driven talent platforms}
\label{table:2}
\centering
\begin{tabular}{p{3cm} p{8cm}}
\hline\hline
Platform & Description \\
\hline
Text kernel &
ML (DL) for document understanding, Web Mining external sources, Synonyms,Software understands \& searches unstructured data, Fuzzy text matching through OCR, Ontology Mining, Machine-learned ranking(MLR).\\[.5\normalbaselineskip]
CVScan &
Free service; scans a CV and a job description and compares keywords \& frequencies \& match rate; includes top skills per industry (weighted).\\[.5\normalbaselineskip]
Untapt &
Talent-matching based on Natural Language (not keywords), Identify future leaders based on custom data analysis, white label solution or branded, AI-driven hiring decisions.\\[.5\normalbaselineskip]
Google Talent Solution &
Talent Solution uses ML technology to better understand job content and jobseeker intent, Talent Solution can interpret the vagueness of any job description, job search query, or profile search query., includes military occupational specialty code translation (MOS, AFSC, NEC).\\[.5\normalbaselineskip]
Zoho Recruit &
A candidate’s match score is calculated using their skills and qualifications, contact the matched person through the platform, semantic search, radius search (location), integrates with Linkedin, parse CV, large CV database\\[.5\normalbaselineskip]
DaXtra &
Offered as a component deployment or hosted service, Rich structured data output, Skills taxonomy extraction, Geographical and multilingual coverage, Social media awareness, Highly accurate, Continually updated\\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\subsection{Supplementary method used for matching}
Many platforms in the market already provide this service, each with a different business strategy \cite{res_recomm_online}.
Existing online market tools provide services to businesses in various ways, as shown in Fig. \ref{fig:onlineplatforms_here}.
\begin{figure}[h!]
\centering
\includegraphics[width=120mm]{Images/OnlineTalentPlatforms.png}
\caption{Existing online talent platforms}
\label{fig:onlineplatforms_here}
\end{figure}
\subsection{General and specific aims}
With the intervention of AI, the recruitment process may be completely disrupted, leading to a new revolution.
Fig. \ref{fig:toptrends} shows the statistics of the main trends observed in the last part of Section 1. It is clear that Artificial Intelligence is still not widely adopted. We see this as an opportunity to leverage AI in the HR tech industry.
\begin{figure}[h!]
\centering
\includegraphics[width=90mm]{Images/RecruitingTopTrends.png}
\includegraphics[width=90mm]{Images/AIRecruiting.png}
\caption{\textbf{(a)} Four trends were identified based on numerous expert interviews and a survey of 9,000 talent leaders and hiring managers across the globe \cite{recruitment_def} \textbf{(b)} Details on hiring trends.}
\label{fig:toptrends}
\end{figure}
Our proposed solution is to create a personalized recruitment product for an organization based on its workforce strengths, with which we can provide a score for each candidate along with feedback (acceptance/rejection) purely driven by AI. The product should have the ability to reason multi-dimensionally, taking care of all the aspects between the candidate and the job post, to help complete the entire recruitment process with more efficiency, more effectiveness, and ultimately a better fit between potential candidates and recruiters.
\section{Background Research}
\subsection{What is important in a CV ?}
At first glance, the recruiter will already be evaluating the quality of a CV and its organization, a fact that could help or harm the overall result. However, we will not take this aspect of the first encounter between CV and recruiter into account; instead, we are going to extract the content and study it.
Regarding the content, a CV is a structured document that can be separated into several sections. A CV may or may not include each of these parts, depending on the candidate's experience, the type of applicant (researcher, private/public sector), and simply whether or not they follow a standard structure. But whatever the received CV looks like, the recruiter will objectively search for some specific parts in order to understand who the applicant is and, if the applicant passes this first filter, proceed to reading, understanding, and making sense of the CV.
In order to understand the profile, each section should be understood. The different sections are: contact information, personal details, skills, professional experience, academic experience, projects, recognitions and awards, publications, certifications, and references. An overall review of spelling and grammar is important too.
\subsection{Regular structure on a CV - Metadata}
As mentioned before, the list of expected inputs is limited; each one has its own role in helping define the candidate's profile.
\begin{list}{--}{}
\item Contact information: To contact the candidate.
\item Personal details like birthday, nationality, social networks, blogs or GitHub: In order to go further than the CV if desired (further work could be web crawling to discover some traits of the applicant).
\item Skills: Have an overview of the candidate’s values and personal characteristics.
\item Professional experience: Understand the relevance of the professional path regarding the offer and the enterprise.
\item Academic experience: Extract the basis of the human capital and see if it’s pertinent and use it as an indicator.
\item Projects: Depending on the type, professional or academic, they tell about experience or motivation.
\item Recognitions and awards: In order to differentiate from the others.
\item Publications: In a research context, their role is to describe the potential of a candidate.
\item Certifications: If necessary for the job, otherwise, a recognition
\item References: Get further feedback from the candidate, and speak about candidate’s values.
\end{list}
\subsection{Recruiter points of views}
We consulted a recruiter from Thales' human resources department, who has been involved in recruitment in the engineering industry for more than 15 years, in order to gain a better understanding of the recruitment process.
\begin{figure}[h!]
\centering
\includegraphics[width=120mm]{Images/recruitrMath.jpeg}
\caption{Basic matching performed by a recruiter}
\label{fig:onlineplatforms}
\end{figure}
As an initial observation, she told us that this approach was her strategy and that it is used most of the time in the recruiting field.
They divide the work into two parts, extracting features from the job post and extracting features from the CV, and then perform one-to-one matching between the two. Certain aspects are always taken into account, e.g., school and degree.
Interestingly, we find that recruiters' points of view can vary depending on the country and the company. Some workers selected by one company may not fit a similar post at another company. We thus observe a sign that a company's culture is an important parameter in the recruiting process.
\subsection{Related work on Parsing and Matching}
The domain of job matching has been researched for decades. AI has become the talk of the hour among many researchers and business enterprises. Researchers are creating new algorithms in the field of talent acquisition that can help businesses find the best candidates without introducing algorithmic bias \cite{resumatcher}.
In the literature, the research is sparse, and not many domain-specific studies have been done. For instance, a study on “how to parse a CV and match/recommend/rank it against a job posting” is not available at all. This type of study would have been interesting because a CV is a structured document from which information of different categories could be extracted and analysed in parallel, yielding more accurate results for each sub-structure. An approach similar to our desired project is a recommendation model implementing a genetic algorithm that uses recruitment records to establish the users' demand model \cite{resumatcher}. From the research perspective, the matching problem has been tackled in different ways. For example, recommender systems are broadly accepted in various areas to suggest products, services, and information items to potential customers. Yi et al. used structured relevance models (SRM) to match résumés and jobs \cite{resumatcher}. Drigas et al. presented an expert system to match jobs and job seekers and to recommend the unemployed to positions; the expert system used neuro-fuzzy rules to evaluate the matching between user profiles and job openings. They also proposed a fuzzy-logic-based expert system (FES) tool for online personnel recruitment, which uses a fuzzy distance metric to rank candidates' profiles in order of their eligibility for the job \cite{resumatcher}.
\subsection{Most commonly used Methodology for job matching}
The primary step for a recruiter before matching a CV to a job post is understanding the job post. It is essential for the recruiter to understand what is expected from the particular job post. In this process, each job post is evaluated based on certain defined criteria, and candidates are assessed against those criteria. The most popular criteria used by recruiters are described below.
\subsubsection{HAY criteria}
The HAY system is based on measuring the job against three elements which are deemed to be common to all jobs \cite{hay_guide_chart, hay_methodology}.
These elements are:
\begin{list}{--}{}
\item KNOW HOW - This measures the range of technical, planning, organising, controlling and communicating/influencing skills required in order to be able to perform the job competently.
\item PROBLEM SOLVING - This measures the degree of complexity involved in carrying out the job.
\item ACCOUNTABILITY - This measures the influence that the job has and the decisions made in achieving the end result.
\end{list}
Each job is measured against these three elements. A numeric score for each is calculated, using charts provided by HAY Management Consultants. The total of the three scores (job units) identifies the grade into which the job falls.
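As a hedged illustration of these mechanics (the actual HAY charts are proprietary, so the band thresholds below are invented for the example), the grading step reduces to summing the three element scores and looking the total up in a band table:
\begin{verbatim}
def hay_grade(know_how, problem_solving, accountability, grade_bands):
    # grade_bands: ascending (upper_threshold_in_job_units, grade) pairs,
    # e.g. [(200, "Grade 1"), (400, "Grade 2")] -- purely illustrative.
    total = know_how + problem_solving + accountability   # job units
    for threshold, grade in grade_bands:
        if total <= threshold:
            return grade
    return "above top band"
\end{verbatim}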
\subsection{Open Source Knowledge Bases}
Many countries have made their data on job openings, hires, and separations open source, providing an assessment of the availability of unfilled jobs and information to help assess the presence or extent of labor shortages.
\subsubsection{Rome}
In France, ROME (Répertoire Opérationnel des Métiers et des Emplois) is a tool for professional mobility and for matching offers and candidates. ROME was built by the Pôle emploi teams with the contribution of a large network of partners (companies, branches and professional unions, AFPA...), based on a practical approach: an inventory of the most common job titles/jobs, an analysis of activities and skills, and a grouping of jobs according to a principle of equivalence or proximity.
\subsubsection{O-NET}
The O*NET Program is the primary source of occupational information in the United States. Valid data are essential to understanding the rapidly changing nature of work and how it impacts the workforce and the U.S. economy. From this information, applications are developed to facilitate the development and maintenance of a skilled workforce \cite{onet_center}.
\subsubsection{ESCO}
European Skills, Competences, Qualifications and Occupations (ESCO) is a multilingual classification of European Skills, Competences, Qualifications and Occupations. ESCO is part of the Europe 2020 strategy. The ESCO classification identifies and categorises skills, competences, qualifications and occupations relevant for the EU labour market and education and training. It systematically shows the relationships between the different concepts. ESCO has been developed in an open IT format, is available for use free of charge by everyone and can be accessed via the ESCO portal.
\subsection{Strategy/Plan}
\subsubsection{Summary generated from CV’s}
Each recruiter has to search each part and highlight the important points in the CV in order to get an overview of facts that would help them understand whether the candidate is adequate for the role; automatically generating a summary of these facts would save the recruiter considerable time.
\subsubsection{Feedback from CV’s to the candidate}
We propose, as a further step, to give the applicants feedback about their CVs: for instance, if the role they are searching for is not very suitable for them, propose alternative roles. Also, tell them where they stand with respect to the job's needs, whether they should improve a particular skill, and how adequate they are with respect to similar job offers.
\subsubsection{Recommendation}
In order to do the scoring and matching, we need to understand how we are going to do it. From research and recruiter feedback, we have come up with some metrics to extract from the CV. One thing to take into account is that, for several metrics, their presence is not guaranteed, so this fact must be accounted for. A “must have” flag will then be proposed in order to mitigate possible missing values that are not mandatory, while still giving some importance to their presence when available.
\subsubsection{Stages, algorithm flow}
In order to create the final proof of concept, we want to follow a recruiter-based evaluation logic in order to optimize the process. This permits the flow to drop and rank CVs. These are the two main stages of the recommendation.
When doing the whole recommendation process, we would already know what the recruiter is searching for, so we would be able to apply the HAY job evaluation criteria in order to do two things: drop irrelevant CVs and score the relevant ones. The HAY criteria offer us a way to see the immediate relationship between two roles and to understand how adequate the candidate's experience is for the job role the company is proposing.
As a first step, "Drop". We could search for the minimum requirements the recruiter is searching for, the “must have” ones in order to do a direct match with the job posting and drop candidates who don’t have these minimal skills. Then, we would apply the HAY criteria in order to know how much related the job position is to the experience and roles the candidate has had. If we are not able to extract this information from the candidate we would return as feedback for him to add it to the CV and reapply. If the information is “blurry”, we would simply not delete the candidate but assign a high score to the dropping criteria. Also, as a further step, we would require the recruiter feedback in order to improve this analysis when “blurry”.
As a second step, "Score". Once all relevant profiles have been selected, we could use the HAY evaluation done as a first input to the classifying algorithm. Then, we would use the metrics in order to do a classification among the candidates. For this, we would also apply the recruiters point point of view in order to give higher or smaller scores to the metrics results.
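The following sketch summarizes this two-stage flow in simplified Python. All names (e.g., \texttt{must\_haves}, \texttt{score\_fn}, the \texttt{"skills"} field) are hypothetical and only illustrate the logic; they do not correspond to a finished implementation.
\begin{verbatim}
# Illustrative sketch of the two-stage flow: "drop" CVs that miss
# must-have requirements, then "score" the remaining profiles.
def recommend(cvs, job, must_haves, score_fn):
    kept, dropped = [], []
    for cv in cvs:
        missing = [req for req in must_haves if req not in cv["skills"]]
        if missing:
            # CV stays in the list, but marked as not qualified
            dropped.append((cv, "not qualified, reason: %s" % missing))
        else:
            kept.append(cv)
    # second stage: HAY-based evaluation plus the metrics below
    ranked = sorted(kept, key=lambda cv: score_fn(cv, job), reverse=True)
    return ranked, dropped
\end{verbatim}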
\subsubsection{Metrics}
Tables \ref{tab:table12}, \ref{tab:table13} and \ref{tab:table14} list the different metrics which recruiters look for when matching a CV to a job post.
\begin{table}[H]
\begin{center}
\caption{Personal Experience}
\label{tab:table12}
\begin{tabular}{p{4cm} p{6cm} p{4cm}}
\textbf{Metric } & \textbf{Description} & \textbf{Type}\\
\hline
\multicolumn{3}{c}{\textbf{Periods}}\\
\hline
total\_years\_of\_experience &
from the first job to last job &
number \\[.5\normalbaselineskip]
experience\_occupation \_percentage &
percentage of the time of experience that actually has been invested in working &
number [0, 1] \\[.5\normalbaselineskip]
experience\_shifts\_behavior &
note describing the pause behavior between jobs: is it random? is it one year each time? has it reduced over time? &
number [ -1, 1 ] (don’t take into account if 0 or less) \\[.5\normalbaselineskip]
experience\_total\_occupation \_time\_jobs\_ratio &
ratio of time per job &
number [0, 1]\\[.5\normalbaselineskip]
experience\_gap\_limit\_ repetitions &
count how many times the pauses between jobs were bigger than 9 months (tolerance + 5 days) &
number \\[.5\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Company}}\\
\hline
activity\_sector &
activity sector (civil engineering, computer science ...) &
name \\[.5\normalbaselineskip]
country &
country of the company &
name\\[.5\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Activities}}\\
\hline
experience\_action\_words \_list &
Keep list for job matching relevance evaluation (Created,received,deployed …) &
action\_list \\[.5\normalbaselineskip]
experience\_important\_words \_list &
Keep important words list for further usage in job matching (Managed Optimised Reduced Developed Increased Supported Negotiated Presented Resolved Improved...) &
important\_list\\[.5\normalbaselineskip]
experience\_activities\_skills\_list &
Deduce type of activities from overall skills: management, abstraction, scientific framework.... &
skill\_list (all types)\\[.5\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Role}}\\
\hline
experience\_role\_type &
title of the role (would work to see the distance to the actual job position) &
name \\[.5\normalbaselineskip]
career\_continuity &
were the successive roles related? &
mapping of career sectors\\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{Academics details}
\label{tab:table13}
\begin{tabular}{p{4cm} p{7cm} p{4cm}}
\textbf{Metric } & \textbf{Description} & \textbf{Type}\\
\hline
\multicolumn{3}{c}{\textbf{Academic Experience}}\\
\hline
academic\_institution\_title &
Name &
name \\[.5\normalbaselineskip]
academic\_institution\_country &
country &
name\\[.5\normalbaselineskip]
academic\_experience\_period &
period &
date\\[.5\normalbaselineskip]
academic\_experience\_total &
cumulated years &
number\\[.5\normalbaselineskip]
academic\_degree &
degree &
number\\[.5\normalbaselineskip]
academic\_major &
major &
name\\[.5\normalbaselineskip]
academic\_grades &
grades &
number\\[.5\normalbaselineskip]
academic\_institution\_score &
score &
number\\[.5\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Academic Projects}}\\
\hline
aca\_project\_types &
purpose of the project: academic, professional, or entrepreneurial &
name \\[.5\normalbaselineskip]
aca\_project\_subjects &
Distance measuring between role activities, and type can be done. &
name \\[.5\normalbaselineskip]
aca\_project\_duration\_list &
projects duration &
number \\[.5\normalbaselineskip]
aca\_project\_start \_date\_lists &
projects start date &
date \\[.5\normalbaselineskip]
aca\_project\_name\_list &
list of the project names &
name \\[.5\normalbaselineskip]
aca\_project\_count &
number of projects &
number \\[.5\normalbaselineskip]
aca\_project\_count\_if\_relevant &
relevant projects count &
number \\[.5\normalbaselineskip]
aca\_skills\_list &
skills list obtained from the project if any &
skill\_list (all types) \\[.5\normalbaselineskip]
aca\_action\_words &
action words list &
action\_list \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{ Candidate Information}
\label{tab:table14}
\begin{tabular}{p{4cm} p{8cm} p{4cm}}
\textbf{Metric } & \textbf{Description} & \textbf{Type}\\
\hline
\multicolumn{3}{c}{\textbf{Personal details}}\\
\hline
cand\_name &
name, last name &
name \\[.3\normalbaselineskip]
cand\_picture &
picture &
blob \\[.3\normalbaselineskip]
cand\_linkedin &
linkedin &
name \\[.3\normalbaselineskip]
cand\_github &
github &
name \\[.3\normalbaselineskip]
cand\_facebook &
facebook &
name \\[.3\normalbaselineskip]
cand\_twitter &
twitter &
name \\[.3\normalbaselineskip]
cand\_blog &
blog page / web page &
name \\[.3\normalbaselineskip]
cand\_nationality &
nationality &
name \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Contact Information}}\\
\hline
cand\_mail &
mail &
name \\[.3\normalbaselineskip]
cand\_phone &
phone &
name \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Skills}}\\
\hline
soft\_skills &
soft skills &
list \\[.3\normalbaselineskip]
transversal\_skills &
transversal &
list \\[.3\normalbaselineskip]
language\_skills &
languages &
list \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Recognitions/awards}}\\
\hline
award\_type &
type &
name \\[.3\normalbaselineskip]
award\_year &
year &
number \\[.3\normalbaselineskip]
award\_name &
name &
name \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Publications}}\\
\hline
pub\_type &
type/subject &
name \\[.3\normalbaselineskip]
pub\_year &
year &
number\\[.3\normalbaselineskip]
pub\_name &
name &
name \\[.3\normalbaselineskip]
pub\_magazine &
magazine &
name \\[.3\normalbaselineskip]
pub\_impact &
impact [local, national, international] &
name \\[.3\normalbaselineskip]
pub\_coworkers &
coworkers &
name \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Certifications}}\\
\hline
cert\_type &
type (matched if any certification is needed) &
name \\[.3\normalbaselineskip]
cert\_name &
name &
name\\[.3\normalbaselineskip]
cert\_year &
year &
number \\[.3\normalbaselineskip]
cert\_date\_validity &
until year &
number \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{References}}\\
\hline
ref\_job &
correspondent job &
name/number \\[.3\normalbaselineskip]
ref\_tone &
tone (negative, positive, can’t say) &
$\{-1, 0, 1\}$\\[.3\normalbaselineskip]
ref\_match\_job &
correspondence with activities &
$\{1, 0\}$ \\[.3\normalbaselineskip]
ref\_skills &
skills &
list \\[.3\normalbaselineskip]
ref\_name &
name &
name \\[.3\normalbaselineskip]
ref\_contact\_info &
contact info &
name \\[.3\normalbaselineskip]
\hline
\multicolumn{3}{c}{\textbf{Candidate’s summary}}\\
\hline
content &
free-form summary, highly variable and unstructured; for the moment, we just identify it and pass it as-is to the recruiter &
name \\[.3\normalbaselineskip]
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Ontology}
Leveraging NLP to build an HR ontology, which consists of thirteen modular ontologies (among them competence, education, job offer, job seeker, language, occupation, skill and time), can play a very important role. The main sub-ontologies are Job Offer and Job Seeker, which are intended to represent the structure of a job posting and of a CV, respectively. While these two sub-ontologies were built taking some HR-XML \cite{hr_xml} recommendations as a starting point, the other sub-ontologies were derived from the available international standards (like NACE, ISCO-88 (COM), FOET, etc.) and ES classifications and international codes (like ISO 3166, ISO 6392, etc.) that best fit the European requirements.
\begin{figure}[H]
\centering
\includegraphics[width=120mm]{Images/Main_ad-hoc_relationships_between_the_modular_ontologies.png}
\caption{Main ad-hoc relationships between the modular ontologies. }
\end{figure}
Details of this ontology are explained in “Reusing Human Resources Management Standards for Employment Services” \cite{hr_mgmt_stdrs}.
Within the scope of our project, we build a basic job-skill ontology based on openly available resources, following the approach described in \cite{taxo_skills_job_ad}. The flow charts shown below depict how to build basic taxonomies, which can then be converted into ontologies.
\begin{figure}[h!]
\centering
\includegraphics[width=110mm]{Images/flowchart_skillsgraph.png}
\caption{Flow chart for building and preparing the skills graph}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=110mm]{Images/flowchart_hierarchical_skillsgraph.png}
\caption{ Flow chart for detecting hierarchical communities in the skills graph}
\end{figure}
\subsection{ Model Proposition (Input/Output)}
\subsubsection{No data model proposal}
The approach we take is divide and conquer combined with expert dependence. Since the recruitment process is particular to each sector, and even to each enterprise or recruiting framework, attempting a fully general approach remains exhausting and even impossible to tune. An alternative way to attack this issue is to give the recruiting area of the enterprise (our final customer) the freedom to tune some metrics that are fed into our final algorithm. This would permit a broader impact on the market, since, behind the scenes, the algorithm would remain the same without any need for us to adapt it to the largest possible number of use cases. Of course, this could lead to a “difficult product”, so the number of tuning parameters should remain small. This is the most delicate issue from the customer's point of view, since with a data-based model they would not need to do any tuning; the objective would therefore be to highlight the possibilities and advantages of this approach.
In order to solve each small challenge, each of the metrics will be taken into account. “Very important”, “important” and “not important” categories are proposed for the main categories, and possibly for each sub-category if the recruiter needs it. Thus, recruiters can adapt the system in order to comply with the needs of the role to be filled, including all the cultural and organizational characteristics that the company may be seeking.
\begin{figure}[H]
\includegraphics[width=150mm]{Images/flowchart_for_concepts.png}
\caption{ Example, flowchart for concepts treatment}
\end{figure}
As shown in the figure above, each type of metric (numbers, names, concepts) is processed by an algorithm that extracts a value to be fed into the “main algorithm” responsible for the scoring.
First, a filtering step is applied: there is a list of mandatory requirements, and whenever the values found, extracted and matched fall short of what is required, the treatment of the CV stops with a justification, so that it still appears in the list of candidates' CV's, but as “not qualified, reason: X”.
Next, the extraction of the metrics continues and feeds the “main algorithm”. The “main algorithm” builds a skill graph by using the ontology in order to match the CV against the job post. The key concepts are identified in the text using text processing and are matched with the nodes of the ontology in order to get detailed information related to each identified concept. In addition, we can take into account any additional requirements from the organization that reflect its work culture, for example if the organization places strong emphasis on soft skills, creativity and other culture-fit criteria. The algorithm matches every section extensively: experience, skills, soft skills, academic projects, and motivation. For example, for the skill section, it generates a skill graph for both the job post and the CV and measures the similarity between them. Finally, a multi-criteria approach such as MR-sort is applied to obtain a final score.
\section{Implementation}
\emph{For the implementation, we propose a minimal viable product. As discussed in \textbf{Model Proposition}, we rely on existing ontologies and embeddings in order to work out the functionalities related to the proposed model, since building an ontology exclusively for every section is a time-consuming process. Here, we focus on matching technical skills as well as cultural features between CV and job post.
Functionalities like the parser and further evaluation dimensions had to be left out in order to complete a first functional product. Moreover, the data was created using existing software that already organizes all CV data in structured form, so that we could exploit it directly.}
\subsection{Creating the ontologies}
The structure of ontologies borrows a lot from graph theory. For instance, when considering competencies, each competency is a ‘node’ and each relationship between competencies is an ‘edge’. Ontologies are represented as undirected graphs.
In order to create an ontology for skills, we surveyed the ontologies available online and chose to work with the CSO ontology. We also manually created a domain-specific ontology from crawled job posts.
\subsubsection{Technical Skill ontology}
The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated using the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science\footnote{\url{https://cso.kmi.open.ac.uk/home}}. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. The ontology also covers related areas such as linguistics and geometry. The current version of CSO includes 26K topics and 226K semantic relationships.
It includes five semantic relations:
\begin{list}{--}{}
\item relatedEquivalent, which indicates that two topics can be treated as equivalent for the purpose of exploring research data (e.g., Ontology Matching and Ontology Mapping). For the sake of avoiding technical jargon, in the CSO Portal this predicate is referred to as \textit{alternative label of}.
\item skos:broaderGeneric, which indicates that a topic is a super-area of another one (e.g., Semantic Web is a super-area of Linked Data). This predicate is referred to as \textit{parent of} in the portal. The inverse relation (\textit{child of}) is instead implicit.
\item contributesTo, which indicates that the research output of one topic contributes to another. For instance, research in Ontology Engineering contributes to Semantic Web, but arguably Ontology Engineering is not a sub-area of Semantic Web – that is, there is plenty of research in Ontology Engineering outside the Semantic Web area.
\item rdf:type, this relation is used to state that a resource is an instance of a class. For example, a resource in our ontology is an instance of a topic.
\item rdfs:label, this relation is used to provide a human-readable version of a resource’s name.
\end{list}
\begin{figure}[h!]
\centering
\includegraphics[width=150mm]{Images/cso_ontology.png}
\caption{CSO ontology overview }
\end{figure}
\subsubsection{CSO generation}
The Klink-2 algorithm \cite{klink} takes as input a set of keywords and investigates their relationships with the set of their most co-occurring keywords. The algorithm tries to find the semantic relationship between keywords $x$ and $y$ by means of three metrics, capturing hierarchical relationships, temporal relationships and similarity. The first two are used to detect skos:broaderGeneric and contributesTo relationships, while the latter is used to infer relatedEquivalent relationships.
\subsubsection{Domain Skill ontology}
In order to create a domain skill ontology, we collected the job posts of a particular domain; for example, we focused on creating an ontology for the domain of data science. This helps to find the key domain-related terms that appear in such job posts; the CSO ontology, for instance, lacks terms such as algorithms or tools that are explicitly tied to a particular domain. Here we build a hierarchy-based ontology where nodes of the same type have special semantics for defining parent/child relationships, as this is a very common relationship, necessary to express existing child-parent frameworks. A node defined as a parent is generally a broader version of all of its children, sharing many attributes with them. For instance, ESCO defines an ‘advanced nurse practitioner’ and a ‘specialist nurse’ as both being children of ‘nursing professionals’. These occupations understandably share many competencies, and it is easy to imagine experience in any nursing professional occupation as being broadly applicable to other nursing professional occupations. This parent/child hierarchy is necessary, but not by itself sufficient, for defining a rich ontology capable of expressing the relationships between the remaining nodes.
\subsubsection{Domain skill ontology generation}
We created a large text corpus by collecting job posts in the data science domain. The job posts were crawled from the Dice platform\footnote{\url{https://www.dice.com/}}; we collected 10,000 job posts in total. To generate the ontologies, we employ machine learning methods such as word embeddings and clustering algorithms.
All stop words in the job post corpus are removed, and concepts are then created based on the number of occurrences of word n-grams in the whole corpus. We used the word\_cloud library\footnote{\url{https://github.com/amueller/word_cloud}} to select the top 200 resulting concepts. After creating vectors using an NLTK-based vectorization model, clusters are formed using the k-means algorithm. After thoroughly exploring the obtained clusters, we create a basic ontology using the Protégé software\footnote{\url{https://protege.stanford.edu}}. The ontology can be accessed at \url{http://owlgred.lumii.lv/online_visualization/lli4/}.
Figure \ref{fig:dsontology} shows a brief overview of the data science domain ontology; a simplified sketch of the generation pipeline is given after the figure.
\begin{figure}[h!]
\centering
\includegraphics[width=150mm]{Images/DataScienceSkill.PNG}
\caption{Data science domain skill ontology}
\label{fig:dsontology}
\end{figure}
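As a rough illustration of this generation pipeline, the following sketch extracts frequent n-gram concepts and clusters them. It is a simplified, hypothetical version of the process (we substitute scikit-learn's TF-IDF vectorizer for the NLTK-based vectorization mentioned above, and all names and parameter values are only indicative).
\begin{verbatim}
# Sketch: n-gram concept extraction over the cleaned job-post corpus,
# followed by k-means clustering of the vectorized concepts.
from collections import Counter
from nltk.corpus import stopwords
from nltk.util import ngrams
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def top_concepts(docs, n_max=3, k=200):
    stops = set(stopwords.words("english"))
    counts = Counter()
    for doc in docs:
        tokens = [t for t in doc.lower().split() if t not in stops]
        for n in range(1, n_max + 1):          # uni-, bi-, trigrams
            counts.update(" ".join(g) for g in ngrams(tokens, n))
    return [c for c, _ in counts.most_common(k)]

def cluster_concepts(concepts, n_clusters=10):
    X = TfidfVectorizer().fit_transform(concepts)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return dict(zip(concepts, labels))         # concept -> cluster id
\end{verbatim}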
\subsubsection{Cultural values ontology}
In order to understand the culture that a text conveys about an enterprise, we turned to theory that characterizes organizations at a cultural level. To situate the problem, let us make some remarks: we are treating CV's and job postings, so both general and domain-specific terms will be found, and the context is an enterprise-like environment, i.e., an organization, whether public or private. These kinds of documents are not very expressive texts, since they do not contain many complete sentences that would require a deeper semantic analysis to obtain better results. With this in mind, we now explain how we approached the extraction of what we call "organizational culture" from CV's and job postings.
To start with, we based our understanding on organizational culture theory \cite{org_cult}, in which several researchers propose ways of characterizing an organization; we use a global view of this characterization. As a first step, four levels can characterize culture in an organization, \{\textit{Symbols, Heroes, Rituals, Values}\}, of which we are only interested in the values, since they are the only ones that can be extracted from text. The other levels imply abstraction or inside-company behaviors and traditions that cannot readily be extracted from a CV or a job posting. As a second step, the values have been distinguished into six different concepts, \{\textit{Power Distance, Individualism, Uncertainty Avoidance, Masculinity \& Femininity, Long Term Orientation, Indulgence Vs Restraint}\}, and each of these concepts is subdivided into two "antonym" sets of values describing its parent \textit{(shown in the next table)}.
\begin{table}[h!]
\caption{Organizational Culture Dimensions}
\centering
\begin{tabular}{p{4cm} p{4cm}}
\hline\hline
Organizational Culture Dimension & Concepts Comparison \\
\hline
Power Distance &
Small $\leftarrow$ - $\rightarrow$ Large \\
Individualism &
Individualism $\leftarrow$ - $\rightarrow$ Collectivism \\
Uncertainty Avoidance &
Weak $\leftarrow$ - $\rightarrow$ Strong \\
Masculinity \& Femininity &
Masculinity $\leftarrow$ - $\rightarrow$ Femininity \\
Long Term Orientation &
Short $\leftarrow$ - $\rightarrow$ Long \\
Indulgence Vs Restraint &
Indulgence $\leftarrow$ - $\rightarrow$ Restraint \\
\hline
\end{tabular}
\label{table:cultdims}
\end{table}
Each of these "antonym" concepts has other concepts that describe it (for more information see \cite{org_cult}). For example, for Power Distance, we have the next descriptors: \textit{decentralization/centralization, management by experience/management by rules, autonomy of employee/order directed employee, pragmatic superior relationships/emotional superior relationships, no privileges/privileges}.
Thus, in order to make this information useful from a practical point of view and to create CV and job posting profiles at the cultural level, we propose a graph. The approach was to develop a directed graph that handles a multi-level division of concepts down to the very end, where terms describe concepts, following the logic \{Culture $\rightarrow$ Values $\rightarrow$ Organizational Culture Dimensions $\rightarrow$ Concepts Comparison ("antonyms") $\rightarrow$ Concept Descriptors $\rightarrow$ terms (descriptors' terms)\}. The result is a tree-like graph in which each descriptor has terms that describe it; let us call them "descriptors' terms". These descriptors' terms were proposed by us, so improvements can be made. The terms were assigned by compiling a limited, definition-directed list of terms referring to the parent node, searching for definitions and extracting the most coherent and related terms.
\begin{figure}[H]
\centering
\includegraphics[width=150mm]{Images/culture_graph_h2.png}
\caption{3 First levels of culture graph}
\end{figure}
Furthermore, the idea to use a directed graph with no inter-related terms or concepts rests on a practical decision, where simplicity and the nature of the task intervene. Since the task, "extract a culture profile", means that the vocabulary to use is general and not domain specific, already trained models could be employed. To this end, an embedding model has been chosen: the GloVe model \cite{glove}. This model was chosen over the word2vec model for four reasons. First, it has been stated that GloVe better preserves analogies. Second, in some tests simulating the application to terms (as would happen with CV's and job postings), it returned congruent and meaningful terms. Third, the available word2vec model was trained on a set of news texts (Google News), and could therefore be news-specific, whereas GloVe was trained on a Wikipedia corpus, covering a broader set of contexts. Fourth, when searching for similarities of terms against a set of words that we want to be antonyms, the antonyms tended to be rated as more similar by the word2vec model than by the GloVe model. Still, the model can easily be exchanged; it simply has to respect the \textit{gensim} \cite{gensim} word2vec interface. A short usage sketch is given after the table.
\begin{table}[H]
\centering
\begin{tabular}{p{4cm} p{4cm}}
\hline\hline
GloVe & Word2Vec \\
\hline
'centralised', 'decentralized', 'hierarchical', 'decentralised', 'bureaucracy' & 'decentralized', 'centralizing', 'centralize', 'Centralized', 'centrally\_managed' \\
\hline
\end{tabular}
\label{table:glovew2v}
\caption{Example of results similar to "centralized"}
\end{table}
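For illustration, the following sketch shows how such pre-trained GloVe vectors can be queried through gensim's word2vec-compatible interface (the file names are placeholders, and the conversion helper is only needed for the original GloVe text format):
\begin{verbatim}
# Sketch: load GloVe vectors into gensim and query term similarities,
# as done when profiling texts against the culture graph.
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec

glove2word2vec("glove.6B.300d.txt", "glove.w2v.txt")  # one-off step
kv = KeyedVectors.load_word2vec_format("glove.w2v.txt")

print(kv.most_similar("centralized", topn=5))      # cf. the table above
print(kv.similarity("decentralized", "autonomy"))  # descriptor term
\end{verbatim}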
Finally, now that the structure of the graph and the reasons behind it have been explained, we can conclude by saying that the idea of this ontology is simply a list of terms (descriptors' terms) that represent concepts (descriptors). These descriptors are grouped into sets that are part of a main concept definition, and the ensemble of these main concept definitions forms pairs of antonyms that belong to wider concepts (organizational culture dimensions), which in turn describe the values of the culture. In the end, the part of the graph that participates in the matching consists of the leaves (the descriptors' terms).
\begin{figure}[h!]
\centering
\includegraphics[width=130mm]{Images/structure_of_cultural_graph.png}
\caption{Structure of Cultural Graph}
\end{figure}
\subsection{Matching}
In order to do a fair matching between the job post and the CV's (the code is available on GitHub \cite{Rudresh2020}), we create a graph with the help of the above-created ontologies for both the job post and the CV, for each section (general skill, domain skill and culture).
A similarity matrix is calculated between the corresponding section graphs obtained from the job post and the CV, and the obtained matrix is normalized into a matching score. The library used to measure the similarity between two graphs is GMatch4py, which implements the graph edit distance by combining Hausdorff matching and greedy assignment \cite{GMatch4py}. After we receive the scores from the different sections (general skill match, domain skill match and cultural match), we aggregate them into a common score using the MR-sort algorithm.
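A minimal usage sketch of this graph comparison step is shown below; it assumes GMatch4py's documented interface, and the edit-cost parameters and toy graphs are purely illustrative:
\begin{verbatim}
# Sketch: compare two skill graphs via graph edit distance and turn
# the result into a normalized similarity score.
import networkx as nx
import gmatch4py as gm

g_job, g_cv = nx.Graph(), nx.Graph()       # skill graphs built earlier
g_job.add_edges_from([("python", "machine learning")])
g_cv.add_edges_from([("python", "deep learning")])

ged = gm.GraphEditDistance(1, 1, 1, 1)     # node/edge ins/del costs
result = ged.compare([g_job, g_cv], None)  # pairwise distance matrix
similarity = ged.similarity(result)        # normalized similarities
print(similarity[0][1])                    # job post vs. CV score
\end{verbatim}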
\subsubsection{Creating Skill Graph from Ontologies}
In order to create a graph, the algorithm takes the job description or the candidate's work experience text as input and outputs a list of relevant concepts from the job post or CV. For the skill graph generation, we followed an approach similar to that of the CSO classifier \cite{CSO}.
\begin{figure}[h!]
\centering
\includegraphics[width=150mm]{Images/cso_workflow.png}
\caption{Workflow of CSO Classifier }
\end{figure}
It consists of two main components: (i) the syntactic module and (ii) the semantic module.
The syntactic module parses the input documents and identifies concepts that are explicitly referred to in the document. The semantic module uses part-of-speech tagging to identify promising terms and then exploits word embeddings to infer semantically related topics. Finally, the classifier combines the results of these two modules and enhances them by including relevant super-areas.
\paragraph{Syntactic Module}
The syntactic module maps n-gram chunks in the text to concepts. The algorithm removes the stop words and collects the unigram, bigram and trigram chunks. Then, for each n-gram, it computes the Levenshtein similarity with the labels of the topics in the ontologies. The minimum similarity level can be set manually and has been set to 0.94; this value allows us to recognize many variations between the concepts in the text and the ontology labels.
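A sketch of this matching step is given below, using the python-Levenshtein package; the function and variable names are our own and only illustrate the described logic:
\begin{verbatim}
# Sketch: map n-gram chunks to ontology topic labels whose
# Levenshtein similarity is at least 0.94.
import Levenshtein
from nltk.util import ngrams

def syntactic_concepts(tokens, topic_labels, threshold=0.94):
    found = set()
    for n in (1, 2, 3):                     # uni-, bi- and trigrams
        for gram in ngrams(tokens, n):
            chunk = " ".join(gram)
            for label in topic_labels:
                if Levenshtein.ratio(chunk, label) >= threshold:
                    found.add(label)
    return found
\end{verbatim}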
\paragraph{Semantic Module}
The semantic module was designed to find topics that are semantically related to the text but are not explicitly mentioned in it. It requires the word embeddings produced by word2vec to compute the semantic similarity between the terms in the text and the ontologies.
It follows the steps below.
\begin{list}{--}{}
\item Entity extraction.
\item Ontology concept identification.
\item Concept ranking.
\item Concept selection.
\item Combined generated graph
\end{list}
The word embedding model was created by CSO using the word2vec model, trained on text collected from technical research papers in the domain of computer science.
\paragraph{Entity extraction}
The concepts can be represented either by nouns or by adjectives followed by nouns. The classifier tags each word according to its part of speech (e.g., nouns, verbs, adjectives, adverbs) and then applies a grammar-based chunk parser to identify the chunks of words expressed by the grammar.
\paragraph{Concept identification}
The concepts extracted in the entity extraction stage are decomposed further into n-grams. Then, the similarity between the n-grams and the ontology is measured, and the top 10 most similar words are identified as concepts.
\paragraph{Concept ranking}
The previous step may extract many topics from the ontology via the n-gram similarity to its nodes, including concepts that are not related to the topic we are dealing with; that is, many of the identified topics may be unrelated. In order to choose the concepts that are really important, a relevance score is calculated as the product of the number of times a concept was identified (frequency) and the number of unique n-grams that led to it (diversity). If a concept is directly available in the ontology, its score is set to the maximum score.
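This scoring rule is straightforward to express in code; the sketch below assumes that, for each identified concept, we have kept the list of n-grams that triggered it (one entry per identification event):
\begin{verbatim}
# Sketch: relevance score = frequency x diversity.
def relevance_scores(identified):
    scores = {}
    for concept, grams in identified.items():
        frequency = len(grams)        # times the concept was identified
        diversity = len(set(grams))   # unique n-grams leading to it
        scores[concept] = frequency * diversity
    return scores
\end{verbatim}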
\paragraph{Concept selection}
Once the relevance score has been computed for all generated topics, the distribution of scores is examined and the elbow method \cite{elbow} is applied in order to select the top relevant topics, which are the ones useful for the matching.
\paragraph{Combined generated graph}
The topics obtained from the semantic and syntactic modules are combined. The classifier then expands the topics by inferring all of their direct super-topics, exploiting the superTopicOf relationship within the ontology. For instance, when the classifier extracts the topic “machine learning”, it will also infer “artificial intelligence”. All the topics returned by both modules are stored in a dictionary, which is further converted into a graph with the help of the networkx library; a sketch of this step is given after the figure.
\begin{figure}[h!]
\centering
\includegraphics[width=150mm]{Images/SkillGraph.png}
\caption{Generated Skill graph from job post/CV }
\end{figure}
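The conversion into a graph can be sketched as follows (assuming the super-topic lookup is available from the ontology; all names are illustrative):
\begin{verbatim}
# Sketch: merge the identified topics, add their direct super-topics,
# and build the skill graph with networkx.
import networkx as nx

def build_skill_graph(topics, super_topic_of):
    g = nx.Graph()
    g.add_nodes_from(topics)
    for t in list(topics):
        for parent in super_topic_of.get(t, []):
            # e.g. "machine learning" -> "artificial intelligence"
            g.add_edge(t, parent)
    return g
\end{verbatim}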
\subsubsection{Cultural Match}
As a reminder, to describe or understand the cultural profile we have Organizational Culture Dimensions, each of which has two antonym concepts, each of which has several descriptors, each of which in turn has terms. In order to do the matching, a profile of the CV or job posting is built with the help of the cultural graph, so that each text has a cultural graph profile; these profiles are then compared.
In order to better understand, we will explain the procedure by steps:
\begin{itemize}
\item Calculate the cosine similarities between the descriptors' terms and the text (this is done by using the embedded model)
\begin{itemize}
\item \textit{ex: decentralized organization = 0.62, centralized organization = 0.52}
\end{itemize}
\item The mean of the similarities values of the descriptors' terms will be stored in each of the antonym concepts, this way, we'll know the belonging of each of the antonym concepts to the text, or, in other words, we will have the text profiled in the cultural graph.
\begin{itemize}
\item \textit{ex: small power distance = 0.86, large power distance = 0.56}
\end{itemize}
\item So, we can obtain several cultural graphs that profile different CV's or job postings
\item To compare two profiles, the Euclidean distance between them is measured.
\end{itemize}
This is how the similarity between a CV (or several CV's) and a job posting is computed; a sketch of the procedure follows.
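The sketch below illustrates the procedure under the assumption that a gensim-style embedding model \texttt{kv} (cf. the GloVe model above) is available; all function names are our own:
\begin{verbatim}
# Sketch: profile a text on the culture graph (mean cosine similarity
# of each antonym concept's descriptor terms to the text), then take
# the Euclidean distance between two profiles.
import numpy as np

def culture_profile(text_tokens, antonym_terms, kv):
    # antonym_terms: antonym concept -> list of descriptor terms
    profile = {}
    for concept, terms in antonym_terms.items():
        sims = [kv.similarity(t, w) for t in terms for w in text_tokens
                if t in kv and w in kv]
        profile[concept] = float(np.mean(sims)) if sims else 0.0
    return profile

def culture_distance(p_cv, p_job):
    keys = sorted(p_cv)                   # same concepts in both
    a = np.array([p_cv[k] for k in keys])
    b = np.array([p_job[k] for k in keys])
    return float(np.linalg.norm(a - b))   # Euclidean distance
\end{verbatim}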
\subsubsection{Education Match}
For the education match between the job post and the CV's, the Sovren parser extracts the degree and the name of the school from both documents. A lookup dictionary is created with all the degrees equivalent to a particular degree; for example, MSc, Master and BAC+5 belong to the same category. The match is done by a keyword lookup in the dictionary, where the degree required by the job post is searched among the degrees found in the candidate's CV.
If the candidate's degree is inferior to the degree required by the job post, the candidate is rejected and not processed in further stages.
\subsubsection{Required Skill Match}
All the required skills parsed from the job post are collected and matched against the skills/concepts found in the candidate's skill graph. A score is assigned based on the number of skills from the job post that are matched among the skills obtained from the candidate's CV. For example, if 3 out of 4 skills are matched in the candidate's CV, the calculated score is 0.75.
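Both hard-match steps can be sketched as follows; the degree lookup table is a small illustrative example, not the one actually used:
\begin{verbatim}
# Sketch: degree filtering via a lookup of equivalent degrees, and
# required-skill coverage (e.g. 3 of 4 matched -> 0.75).
DEGREE_LEVELS = {"bachelor": 1, "bsc": 1, "bac+3": 1,
                 "master": 2, "msc": 2, "bac+5": 2}  # illustrative

def passes_education(cv_degree, job_degree):
    return (DEGREE_LEVELS.get(cv_degree.lower(), 0)
            >= DEGREE_LEVELS.get(job_degree.lower(), 0))

def required_skill_score(job_skills, cv_skills):
    required = {s.lower() for s in job_skills}
    matched = required & {s.lower() for s in cv_skills}
    return len(matched) / len(required) if required else 1.0
\end{verbatim}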
\subsection{Multiple CVs to Job Post matching}
One task is to match a CV with a job post, where the goal is to compare different points of view on the texts, such as culture, domain, education, and others. Another task is to compare several CV's to a job posting; this is actually the task most commonly performed by recruiters in order to understand how well candidates' profiles match the job post they are promoting. In order to do this, we follow a two-step process: \textit{filtering and matching}.
\subsubsection{Filtering}
In order to accomplish this task, we use the theory seen in the HAY criteria, where some requirements must be met while others may not be mandatory.
As one of the proposals, the user would be able to "tune" the mandatory fields in order to configure this filtering process themselves. This could be done in a second iteration over the proposed solution since, for now, the proposed solution has a limited number of analysis axes, and developing this part would be excessive compared to the current functionality. For the moment, as a first approach, some filtering concepts have been fixed, and CV's are filtered according to them.
In this iteration, only the education axis is taken into account: the required education level is compared against the candidate's actual education level, and only the candidates who meet or exceed the requirements are passed on.
To this end, the Sovren software helps a lot by already providing the required skills of a job posting.
\subsubsection{Sorting}
For the matching part, a multi-criteria algorithm is used to accomplish the task. When handling several CV's, even hundreds, the help of a sorting algorithm is very valuable. So, how do we sort? As recruiters may have their own ideas about the aspects they want to favor, depending on the situation, the sector and the needs of the company, we propose a user-friendly sorting procedure that lets the recruiters "tune" the parameters.
The parameters could be tuned in much more detail if the user needed it, meaning that for each main axis of study, different sub-parameters could be adjusted according to the needs. For example, if a team needed someone with a soft and friendly character because it is full of strong and difficult characters (a true story), the recruiter should be able to tune this part of the sorting algorithm. Mainly, however, the user should be able to tune at most five to seven dimensions since, according to research and common recommendations, that is the maximum number of dimensions a person can handle. So, even if the tuning could be expanded, as a first proposal seven is our maximum, and the inputs are asked from the user directly.
The sorting dimensions are integers between zero and three (inclusive) with the meanings \{ 0-not interested, 1-poorly interested, 2-interested, 3-very interested \}. In this way, the user can express their interest in each of the axes on a scale from zero to three. The algorithm used is the multi-criteria majority-rule sorting algorithm (MR-sort) \cite{mrsort_pm}.
This way the user is able to tune the dimensions, indicating which ones interest them most, with $n^{n-1} \cdot (n-1)$ different combinations to adjust, $n$ being the number of dimensions to study. In our case, we propose for the moment, as part of the first iteration, four dimensions: \{skills, domain skills, culture, required skills\}.
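To illustrate the majority-rule principle behind MR-sort, the following highly simplified sketch checks whether a CV "outranks" a category profile when the user-weighted coalition of criteria on which it meets the profile reaches a majority threshold (the real MR-sort model and our parameterization are richer; all names here are illustrative):
\begin{verbatim}
# Sketch: simplified MR-sort-style assignment test.
def outranks(scores, profile, interests, lam=0.5):
    # scores/profile: criterion -> value in [0, 1]
    # interests: criterion -> user weight in {0, 1, 2, 3}
    total = sum(interests.values())
    if total == 0:
        return False
    coalition = sum(w for c, w in interests.items()
                    if scores.get(c, 0.0) >= profile.get(c, 0.0))
    return coalition / total >= lam   # majority threshold lambda
\end{verbatim}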
\section{Evaluation}
In order to evaluate our system, we have implemented two different use cases. In the first use case, we test the \textbf{ManyToOne matching} function, which allows filtering CV's and giving a score to the selected ones. The second use case evaluates the \textbf{OneToOne matching} function, which allows seeing the level of correspondence between one CV and one job post.
The CV's used to evaluate this tool were downloaded from the Naukri database \cite{freesearchNaukri} using the following filters: ``Search by Keywords: data'', ``Total Experience: 0 to 2 years'' and ``Candidate Age: 20 to 30 years''. The job posts used were obtained from LinkedIn; the search for these job posts focused on data science internships with 0 to 2 years of experience. Both the CV's and the job posts were parsed using the demo version of the Sovren parser tool \cite{sovrentool}. We collected 120 CV's and 11 job posts, and this data is stored in our Mongo database.
For this evaluation, we have selected 5 different CV's covering different profiles, which we can characterize as technical and business-oriented profiles. We have also selected two different job posts, a technical and a business-oriented one. The test dataset was the following:
\begin{table}[h!]
\centering
\begin{tabular}{p{1cm} p{4cm}}
\hline\hline
ID & MongoID\\
\hline
CV1 & 5e60f5895a90883323e38bcc \\[.5\normalbaselineskip]
CV2 & 5e60f58a5a90883323e38bdc \\[.5\normalbaselineskip]
CV3 & 5e60f58a5a90883323e38bcf \\[.5\normalbaselineskip]
CV4 & 5e60f58b5a90883323e38bf1 \\[.5\normalbaselineskip]
CV5 & 5e60f58a5a90883323e38bdb \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{p{3cm} p{6cm} p{6cm}}
\hline\hline
ID & MongoID & Job Offer\\
\hline
BusinessOrientedJob & 5e60f5895a90883323e38bcc &
Intern Data Science - Product Analytics at Criteo
\\[.5\normalbaselineskip]
TechnicalOrientedJob & 5e64cbef837ba015d90abc79 &
Intern Data Science at Multivac
\\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\subsection{Use case 1 - One To Many Matching}
\subsubsection{Result from test 1: 5 CVs and Business Oriented Job Post}
\begin{list}{--}{}
\item \textbf{Input weights to define the priority of sections:} DomainSkillsMatch: 2, SkillsMatch: 2, CultureMatch: 2
\begin{table}[h!]
\centering
\begin{tabular}{p{1cm} p{3cm} p{2cm} p{2cm} p{2cm} p{2cm}}
\hline \hline
ID & DomainSkillsMatch & SkillsMatch & CultureMatch & MRValues \\
\hline
CV1 & 1.428 & 1.420 & 0.944 & 0.743 \\[.5\normalbaselineskip]
CV2 & 1.415 & 1.418 & 0.913 & 0.739 \\[.5\normalbaselineskip]
CV3 & 1.420 & 1.417 & 0.889 & 0.736 \\[.5\normalbaselineskip]
CV4 & 1.427 & 1.431 & 0.845 & 0.730 \\[.5\normalbaselineskip]
CV5 & 1.414 & 1.419 & 0.828 & 0.728 \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\newpage
\item \textbf{Input weights to define the priority of sections:} DomainSkillsMatch: 3, SkillsMatch: 3, CultureMatch: 1
\begin{table}[h!]
\centering
\begin{tabular}{p{1cm} p{3cm} p{2cm} p{2cm} p{2cm} p{2cm}}
\hline \hline
ID & DomainSkillsMatch & SkillsMatch & CultureMatch & MRValues \\
\hline
CV1 & 1.428 & 1.420 & 0.944 & 0.774 \\[.5\normalbaselineskip]
CV2 & 1.415 & 1.418 & 0.913 & 0.772 \\[.5\normalbaselineskip]
CV3 & 1.420 & 1.417 & 0.889 & 0.771 \\[.5\normalbaselineskip]
CV4 & 1.427 & 1.431 & 0.845 & 0.769 \\[.5\normalbaselineskip]
CV5 & 1.414 & 1.419 & 0.828 & 0.768 \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\end{list}
\subsubsection{Result from test 2: 5 CVs and Technical Oriented Job Post}
\begin{list}{--}{}
\item \textbf{Input weights to define the priority of sections:} DomainSkillsMatch: 2, SkillsMatch: 2, CultureMatch: 2
\begin{table}[H]
\centering
\begin{tabular}{p{1cm} p{3cm} p{2cm} p{2cm} p{2cm} p{2cm}} \hline \hline
ID & DomainSkillsMatch & SkillsMatch & CultureMatch & MRValues \\
\hline
CV3 & 1.418 & 1.432 & 0.940 & 0.743 \\[.5\normalbaselineskip]
CV4 & 1.427 & 1.426 & 0.928 & 0.741 \\[.5\normalbaselineskip]
CV5 & 1.414 & 1.420 & 0.924 & 0.740 \\[.5\normalbaselineskip]
CV2 & 1.414 & 1.417 & 0.891 & 0.736 \\[.5\normalbaselineskip]
CV1 & 1.429 & 1.435 & 0.849 & 0.731 \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\item \textbf{Input weights to define the priority of sections:} DomainSkillsMatch: 3, SkillsMatch: 3, CultureMatch: 1
\begin{table}[H]
\centering
\begin{tabular}{p{1cm} p{3cm} p{2cm} p{2cm} p{2cm} p{2cm}} \hline \hline
ID & DomainSkillsMatch & SkillsMatch & CultureMatch & MRValues \\
\hline
CV3 & 1.418 & 1.436 & 0.940 & 0.774 \\[.5\normalbaselineskip]
CV4 & 1.429 & 1.434 & 0.928 & 0.774 \\[.5\normalbaselineskip]
CV5 & 1.414 & 1.420 & 0.924 & 0.773 \\[.5\normalbaselineskip]
CV2 & 1.414 & 1.417 & 0.891 & 0.772 \\[.5\normalbaselineskip]
CV1 & 1.432 & 1.443 & 0.849 & 0.769 \\[.5\normalbaselineskip]
\hline
\end{tabular}
\end{table}
\end{list}
From these results we can observe that CV3, CV4 and CV5 are more likely to be selected for a technical job offer, while CV1 and CV2 are more likely to be selected for a business-oriented job.
We also observe that changing the weights used to evaluate each section does not change the sorting of the CVs; however, the MR scores do change.
\subsection{Use case 2 - One To One Matching}
Since CV1 and CV3 were the CV's with the highest scores in the two respective cases, let us analyze why they obtain these results.
\subsubsection{Result from test 3: CV1 and Business Oriented Job Post}
Fig.~\ref{fig:result1} shows the result.
\begin{figure}[htbp]
\centering
\includegraphics[width=150mm]{Images/result1.png}
\caption{System output explaining the correlation between CV1 and a job post}
\label{fig:result1}
\end{figure}
\subsubsection{Result from test 4: CV3 and Technical Oriented Job Post}
Fig.~\ref{fig:result2} shows the result.
\begin{figure}[h!]
\centering
\includegraphics[width=150mm]{Images/result2.png}
\caption{System output explaining the correlation between CV3 and a job post}
\label{fig:result2}
\end{figure}
\section{Conclusions}
Regarding the process, a clear algorithm for recommendation could be implemented. The possible models for the CV parsing and recommendation processes are varied, and no complete approach can be found in the research literature. We implemented a divide-and-conquer methodology for the model: each sub-problem can be approached and solved with the best-suited tools, such as ontologies, embeddings, direct matching, expert evaluation and machine learning, and an algorithm for the whole process can be developed according to whether data is available or not.
Regarding the product, it would reduce the time spent in the recruiting process, save money, free recruiters for more productive activities so as to increase the retention and productivity of the team, and decrease recruiters' bias. Besides, it would encourage a better fit of candidates with the organization, which would in turn increase the company's value.
\section{Future Prospective}
The algorithm clearly shows promising results given the time frame available to complete the project. However, many improvements are possible. Within the scope of this project, we mainly explored the domain of skill graph matching; another important domain left unattended is that of job titles and past experience. Adding the dimension of job titles to the current skill ontology could help to leverage a deeper match between candidates and job positions. Moreover, training a word2vec model on a domain-related dataset would greatly help to explore the different concepts of a particular domain more closely and to capture the similarities and dissimilarities among them. Within the scope of this project, we did not consider the work experience of the candidates, as we focused only on matching candidates at a beginner level; we wish to improve this further in order to adapt the system to experienced professionals. Most importantly, in order to deal with the cultural perspective of an organization, we lacked data from the organizations themselves; we believe that the CV's of candidates accepted within an organization best describe its cultural values, and by training the model on such data we could achieve better results from a cultural perspective.
\section{Acknowledgment}
This work was supported by the researchers of IMT Atlantique, Brest. We would like to thank our professors Yannis Haralambous and Nicolas Julien, who helped us with this study and whose comments helped to improve this paper.
\bibliographystyle{unsrt}
|
2,877,628,088,473 | arxiv | \section{Introduction}
Estimating depth in monocular images constitutes a problem of practical importance when aiming to understand the geometry of a scene, e.g., in autonomous driving systems or for augmented reality applications. Due to its ill-posed nature, methods approaching this problem nowadays typically incorporate complex models, trained on large amounts of data using machine learning methods.
The majority of existing approaches tackles depth estimation (whether per-pixel or per-object) as a regression problem, i.e., as the problem of learning a model to predict a (pseudo-)metric map (e.g., \cite{Alhashim2018,eigen2014depth, Laina2016DeeperDP, Lee2019FromBT}).
However, on the one hand, accurate prediction of metric depth actually depends on the intrinsic camera parameters, which are often not available. On the other hand, instead of predicting absolute depth, it is often enough to predict the \emph{relative} depth of pixels or higher level concepts (such as objects), that is, to sort them from closest to farthest away from the camera.
One may then argue that regression is solving an unnecessarily difficult task, and rather advocate a formalization of depth estimation as a \emph{ranking} task \cite{pl11}. So-called ``learning-to-rank'' methods can be used to minimize suitable performance metrics based on relative errors.
As absolute depth measurements are not necessarily needed, ranking has the additional advantage that it potentially allows for learning from weaker training information.
This includes depth annotations that are not metric but can be regarded as pseudo-metric data, e.g., disparity maps constructed from stereo images or videos \cite{Chen2019LearningSD,monodepth2, Li2018MegaDepthLS}, or human-annotated data \cite{NIPS2016_6489,DBLP:conf/cvpr/ChenQFKHD20}.
Without the need for metric RGB-D data produced by depth sensors, the diversity of training datasets can be drastically increased due to cheaper data acquisition \cite{Xian2018MonocularRD}.
Existing ranking methods are essentially based on pairwise comparisons of the form ``object A is closer to the camera than B'' \cite{NIPS2016_6489, Xian2018MonocularRD, Xian2020StructureGuidedRL,Zoran2015LearningOR}. Pairwise relations of that kind are sampled from a depth map as training information, and predictive models are induced by minimizing pairwise ranking losses. While these approaches have proven effective, the quadratic number of possible pairs that can be constructed renders them rather inefficient and necessitates sophisticated sampling strategies to eliminate less informative pairs \cite{Xian2020StructureGuidedRL}. Besides, breaking a linear order into pairwise comparisons necessarily comes with a certain loss of information. In particular, information about the transitivity of order relations, which is implicitly contained in a linear order, will be lost.
To avoid these drawbacks, so-called ``listwise ranking'' \cite{xia2008listwise} has been proposed as an alternative to pairwise methods. In the listwise approach, higher order rankings of arbitrary length can be considered as training information. In this paper, we elaborate on the use of listwise ranking for depth estimation in images. More specifically, we propose a listwise ranking method based on the well-known Plackett-Luce (PL) model \cite{Luce59,Plackett1975TheAO}, which allows for learning probability distributions over rankings from pseudo-metric data. Moreover, taking advantage of the representation of PL as a random utility model \cite{DBLP:conf/nips/SoufianiPX12}, we suggest a natural way to recover translation-invariant approximations of the underlying metric depth information. Along with that, we propose a state-of-the-art neural network architecture as a backbone, together with a simple sampling strategy to construct training examples from raw pseudo-depth data.
In a zero-shot evaluation, where we compare models on data not considered for training, we study the cross-dataset performance of our model and compare it with state-of-the-art approaches. Thereby, we demonstrate that listwise ranking is an effective approach for rank-based error minimization, and our model constitutes an appropriate choice for the prediction of depth orders in unseen scenes, as well as providing promising results in recovering metric depth.
\section{Related Work}
In learning to rank, the goal is to infer ranking models from training data in the form of rankings (permutations) of individual items. According to a rough categorization of methods, one can distinguish between pointwise, pairwise, and listwise approaches \cite{Liu2010LearningTR}. While single items are considered as training examples in pointwise learning-to-rank methods, relations between items are typically used as training examples in the other categories, either relations of order two (pairwise) or arbitrary length (listwise). In the case of pointwise learning-to-rank, examples are usually annotated by a score that determines their individual usefulness, from which, for instance, regression models can be induced. For pairwise approaches, where examples are typically given as single relations among two items, existing methods range from SVM-based classifiers \cite{rankingsvm} to boosting methods \cite{rankboost} and ranking networks \cite{RankNet}. Similarly, several listwise ranking methods have been proposed, in which examples are represented by higher order (potentially partial) item rankings. One of the most well-known representative is ListMLE \cite{xia2008listwise}, a maximum likelihood estimation method to infer Plackett-Luce probability distributions over rankings.
Several approaches to tackle the problem of estimating depth in images using relative depth information for training have been proposed. Among the first, Zoran et al.\ \cite{Zoran2015LearningOR} classify individual point pairs from an image, which are then combined into a global solution for a complete dense map over all image pixels. Following a similar motivation, Chen et al.\ \cite{NIPS2016_6489} train a deep neural network architecture by using a pairwise ranking loss, directly predicting a dense map in an end-to-end fashion. This approach has also been adopted in subsequent works and improved in various directions, for example by using a different model architecture \cite{Xian2018MonocularRD}, additional data \cite{Chen2019LearningSD}, or an improved sampling strategy \cite{Xian2020StructureGuidedRL}. Furthermore, Ewerth et al.\ \cite{Ewerth2017EstimatingRD} propose a method to estimate relative depth using a RankBoost model. Alternative approaches also exploit ordinal depth information \cite{Li2018MegaDepthLS}, either directly or to pretrain regression models \cite{cao2020monocular}.
To learn models that work well for arbitrary scenes, e.g., in both indoor and outdoor scenarios, diversity of training data is crucial. Commonly used metric data produced by depth sensors typically provide limited diversity, e.g., NYUD-v2 \cite{NYUDV2} with indoor-only or KITTI \cite{Geiger2013IJRR} with only street scenes. Since maximal depth capacities of sensors constrain the recognizable depth, they fail to capture scenes ``in the wild''. This is why Chen et al.\ \cite{NIPS2016_6489} propose a human-annotated dataset with pairwise point samples, for which the ``closer-to-camera'' relation is captured. However, as it provides ground truth information for only two points in each image, and the human annotation process is quite costly, other strategies aiming to automatically extract depth information have been proposed. For instance, stereo images \cite{Xian2018MonocularRD} or sequences of images in videos \cite{Chen2019LearningSD} have been facilitated to predict structural disparity maps from the motion of elements. Combinations of such methods have been considered, too \cite{Wang2019WebSV}. As none of them delivers metric information per pixel, the information produced must be considered as pseudo-depth, which, as previously explained, is still sufficient for depth relations. Although scale-invariant regression methods are also capable of learning from such data \cite{Li2018MegaDepthLS,ranftl2020towards}, their ability to generalize to new datasets with structurally different scenes is fairly limited, at least for the task of depth ordering, as our empirical evaluation will confirm later on.
\section{Plackett-Luce Model for Depth Estimation}
In the following, we introduce our proposal of a Plackett-Luce model for depth estimation as illustrated in Fig.\ \ref{fig:plmodel}, along with a description of the model architecture and sampling strategy to construct training examples from raw depth data.
\begin{figure}[!thbp]
\centering
\includegraphics[width=0.9\linewidth]{gfx/PLDepth_gfx.pdf}
\caption{Overview of our method: The PL model incorporates a deep neural network to predict scores for each pixel in an input image, which are then turned into probabilities for rankings of queried image locations. For training, we sample rankings from images annotated by pseudo depth.}
\label{fig:plmodel}
\end{figure}
\subsection{Problem Formulation}
We assume training information in the form of RGB images $I$ together with (pseudo-)depth annotations $D$, i.e., tuples $(I,D) \in \mathbb{R}^{h \times w \times 3} \times \mathbb{R}^{h \times w}$, where $h$ and $w$ denote the image height and width, respectively. Moreover, $D[l]$ denotes the (pseudo-)depth of a position $l \in \{1, \dots, h\} \times \{1, \dots, w\}$ identified by a height and width coordinate. Without loss of generality, lower values $D[l]$ encode shorter distances to the camera.
We are mainly interested in the order relation of the locations in an image $I$ as induced by the (pseudo-)depth $D$. Formally, the relation between $n$ locations $M=\{l_1, l_2, \dots, l_n\}$ can be represented in terms of a permutation $\pi$ of $[n] := \{1, \ldots, n\}$ such that $D[l_{\pi(i)}] < D[l_{\pi(i+1)}]$ for $i \in \{1, \dots, n-1\}$. This permutation encodes the ranking
$l_{\pi(1)} \succ l_{\pi(2)} \succ \dots \succ l_{\pi(n)}$, i.e., location $l_{\pi(1)}$ is closest, then $l_{\pi(2)}$, etc.
At query time, when $I$ is given but $D$ is not, the task of a rank-based depth estimation model is to predict the ``closer-to-camera'' relation $\succ$, that is, to produce an accurate order-preserving estimate of $D$. Formally, this estimate can again be represented in terms of a permutation, which is then compared to the ground truth permutation $\pi$.
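For illustration, the following minimal sketch (numpy-based; the function name is ours) turns the (pseudo-)depth values at $n$ sampled locations into the ground-truth permutation used for training:
\begin{verbatim}
# Sketch: derive the ground-truth ranking pi from a depth map D at a
# set of sampled locations (closest location first).
import numpy as np

def ranking_from_depth(D, locations):
    # D: (h, w) array; locations: list of (row, col) tuples
    depths = np.array([D[r, c] for (r, c) in locations])
    return np.argsort(depths, kind="stable")  # ascending depth
\end{verbatim}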
\subsection{Listwise Depth Ranking}
\label{sec:method:listwise}
We model information about rankings in a \emph{probabilistic} way, which has several advantages, especially from a learning point of view (for example, it makes the problem amenable to general inference principles such as maximum likelihood estimation).
A well-known probability model on rankings is the Plackett-Luce (PL) model, which is parameterized by a vector $\vec{v} = (v_1, \dots, v_K) \in \mathbb{R}^K_+$, where $K$ is the number of items (length of the ranking). Referring to the interpretation of a ranking in terms of a preferential order, the value $v_i$ is also called the (latent) utility of the $i^{th}$ item\,---\,subsequently, we shall use the more neutral notion of PL score or parameter. The probability of a permutation $\pi$ of $[K]$ is then given by
\begin{equation}\label{eq:plm}
P(\pi \, | \, \vec{v}) = \prod_{i=1}^{K-1} \frac{v_{\pi(i)}}{\sum_{k=i}^K v_{\pi(k)}} \, ,
\end{equation}
where $\pi(i)$ is the index of the item on the $i^{th}$ rank. One easily verifies that, the larger the score $v_i$, the higher the probability that the $i^{th}$ item will show up on a top rank. Moreover, the mode of the distribution, i.e., the ranking with the highest probability, is obtained by sorting the items in decreasing order of their scores.
The PL model has the appealing property that each marginal of a PL model is again a PL model (with the same parameters). More specifically, if $J = \{ j_1, \ldots , j_k \} \subseteq [K]$ is a subset of the $K$ items, then the corresponding marginal of (\ref{eq:plm}) is a PL model with parameters $v_{j_1}, \ldots , v_{j_k}$. This property greatly facilitates learning and inference from possibly incomplete rankings that do not comprise all $K$ items. In fact, learning to rank with the PL model essentially comes down to estimating the score vector $\vec{v} = (v_1, \dots, v_K)$.
In the case of depth estimation, items correspond to the pixels of an image, and the task of the learner is to predict the scores of these pixels. To make this possible, we assume that the score of a pixel can be expressed as a function of its context on the image. Thus,
a parameter $v_i$ is defined through a function $\phi_i : \mathcal{X} \to \mathbb{R}$ on an input space $\mathcal{X}$ \cite{cheng2010label}, where $\mathcal{X} = \mathbb{R}^{h \times w \times 3}$ corresponds to the space of all possible images of size $h \times w$. Assuming all images to have the same size, we set the overall number of alternatives $K$ to $h \times w$.
In the domain of depth estimation, the most obvious way to represent the functions $\phi_1, \ldots , \phi_K$ is to model them as a (joint) deep convolutional neural network. Thus, each function $\phi_i$ is represented in terms of a set of network parameters $\vec{w}_i$, a subset of the parameters $\vec{w}$ of the entire (joint) network. In the experimental section, different state-of-the-art model architectures will be assessed for that purpose.
For an image $\vec{x} \in \mathcal{X}$, let $\vec{w}(\vec{x})$ denote the output of the neural network under parameterization $\vec{w}$ and
\begin{equation}\label{eq:vw}
(v_1, \ldots , v_K) = (\phi_1(\vec{x}) , \ldots , \phi_K(\vec{x})) = \exp(\vec{w}(\vec{x}))
\end{equation}
the induced (non-negative) PL parameters. Thus, the entire PL model for the image $\vec{x}$ is eventually specified by the network parameters $\vec{w}$. Given a ranking $\pi$ of (a subset of) the pixels of $\vec{x}$ as training information, one can thus determine the probability $P(\pi \, | \, \vec{x}, \vec{w})$ of that ranking under $\vec{w}$ according to (\ref{eq:plm}). More generally, given training information in the form of a collection of images with rankings, $\{ (\vec{x}_i , \pi_i ) \}_{i=1}^L$, learning an optimal model can be realized as maximum likelihood estimation \cite{xia2008listwise}:
\begin{equation}
\vec{w}^* \in \operatorname*{arg\,min}_{\vec{w}} - \sum_{i=1}^L \log P(\pi_i \, | \, \vec{x}_i, \vec{w}) \, .
\end{equation}
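For a single training ranking, the summand of this objective can be evaluated stably in log-space; the following NumPy sketch illustrates the computation (the automatic-differentiation machinery of an actual training pipeline is omitted):
\begin{verbatim}
import numpy as np

def pl_nll(w_out, pi):
    # w_out: raw network outputs w(x), so that v = exp(w(x));
    # pi: ground-truth ranking of pixel indices, closest first.
    w = np.asarray(w_out, dtype=float)[list(pi)]
    # Stable log of the tail sums  log sum_{k >= i} exp(w_k).
    log_tails = np.logaddexp.accumulate(w[::-1])[::-1]
    return float(np.sum(log_tails - w))
\end{verbatim}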
\subsection{Metric Depth Estimation}
\label{sec:method:metric}
Going beyond the prediction of rankings, one may wonder whether there is any possibility to recover metric depth information from a learned PL model. At first sight, this would be surprising, because the model is only trained on qualitative information in the form of rankings, and predicts probabilities instead of metric depth. Yet, the PL model also comprises a quantitative part, namely the scores $v_i$, which, as will be explained in the following, are in direct correspondence with the underlying metric information.
The PL model is a specific random utility model (RUM) \cite{Mcfadden1980EconometricMF}. In this class of models, it is assumed that the true order $z_1 < z_2 < \ldots < z_n$ of $n$ real numbers\,---\,think of them as the true depth values of the pixels in an image\,---\,is ``randomized'' through (independent) measurement noise: Each value $z_i$ is replaced by the measurement $X_i = z_i + \epsilon_i$, where $\epsilon_i$ is an error term, and what is observed as a ranking is the order of the measurements $X_1, \ldots , X_n$. In particular, the true order relation $z_i < z_j$ between two items is reversed if the corresponding error terms satisfy $\epsilon_i - \epsilon_j > z_j - z_i$, and the smaller the distance $| z_i - z_j|$, the more likely such a mistake is going to happen. Thus, the probability of a ranking error is indicative of the distance between $z_i$ and $z_j$.
The PL model is obtained for the special case where the error terms $\epsilon_i$ follow a Gumbel distribution with fixed shape parameter \cite{DBLP:conf/nips/SoufianiPX12}.
More specifically, the so-called Thurstone model with parameters $z_1, \ldots , z_n$ is equivalent to the PL model (\ref{eq:plm}) with parameters $v_i = \exp( z_i)$, $i=1 , \ldots , n$. In the context of depth estimation, the model can thus be interpreted as follows: The true depth of the $i^{th}$ image object (pixel) is given by $z_i$, but due to measurement noise, these distances are not observed precisely. Accepting the assumption of a Gumbel distribution\footnote{This distribution looks similar to the normal distribution. Even if not provably correct, it is certainly not implausible.}, a PL model fitted to the observed (noisy) rankings of image objects yields estimates $\hat{v}_i$ of $v_i = \exp( z_i)$. Thus, a natural estimate of the underlying metric depth is given by $\hat{z}_i = \log ( \hat{v}_i )$.
We note that, since the PL model (\ref{eq:plm}) is invariant toward multiplicative scaling (i.e., $P(\pi \, | \, \vec{v}) \equiv P(\pi \, | \, \lambda \vec{v})$ for $\lambda > 0$), the parameter $\vec{v}$ can only be determined up to a multiplicative factor. Correspondingly, the parameter $\vec{z}$ can only be determined up to an additive constant. This is indeed plausible: Assuming that the probability of reversing the order of two image objects only depends on their true distance $| z_i - z_j|$, this probability will not change by shifting the entire scene (i.e., moving the camera closer or farther away). In addition to this shift invariance, there is also a scaling effect, albeit of a more indirect nature. This effect is caused by fixing the shape parameter of the Thurstone model to 1. Therefore, instead of a simple log-transformation, we shall use an affine transformation of the form $\hat{z} = s \log( \hat{v}) + t$, with $s,t \in \mathbb{R}$ fitted to the image at hand.
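To illustrate the recovery step, the following sketch fits $s$ and $t$ by ordinary least squares, assuming a set of reference depths is available for the image at hand (as is the case in our evaluation protocol); the helper name is ours:
\begin{verbatim}
import numpy as np

def recover_metric_depth(v_hat, z_ref):
    # v_hat: predicted PL scores; z_ref: reference depths of the
    # same pixels, used only to fix the scale s and shift t.
    x = np.log(np.asarray(v_hat, dtype=float))
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, np.asarray(z_ref), rcond=None)
    return s * x + t
\end{verbatim}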
\subsection{Model}
\label{sec:method:model}
Regarding the underlying neural network, taking an image $\vec{x}$ as input and producing $\vec{w}(\vec{x})$ as used in (\ref{eq:vw}) as output, we suggest two variants of our listwise ranking approach. The first one, dubbed \textit{PLDepthResNet}, uses the same model architecture as suggested by Xian et al. \cite{Xian2018MonocularRD}. As a second model, informed by recent neural architecture research, we propose \textit{PLDepthEffNet} as a closely related architecture relying on EfficientNet \cite{Tan2019EfficientNetRM} as backbone. Unless otherwise noted, the variant EfficientNetB5 is used as encoder, while the decoder part is a stack of repeating convolutional, BatchNormalization, ReLU and bilinear upsampling layers until the original shape is recovered. Similar to the model in \cite{Xian2018MonocularRD}, different scale features from the encoder branch are fed into the corresponding levels of the decoder part. Instead of fusing these features by addition, we concatenate at the respective layers. As a result, we obtain a model with approximately $45$ million parameters for PLDepthEffNet, which is similar to the size of PLDepthResNet with $42$ million parameters, while improving predictive performance at the same time (cf.\ the empirical evaluation).
For both PLDepthResNet and PLDepthEffNet, we use encoders pretrained on ImageNet. Consequently, we standardize input images to match the preprocessing on ImageNet.
During training, we freeze the encoder part and only allow the BatchNormalization layers to adjust to the new input data as typically done in transfer learning.
\subsection{Sampling}
\label{sec:method:sampling}
In the past, different strategies to construct pairwise relations from raw depth data have been proposed, including superpixel sampling \cite{Zoran2015LearningOR}, random sampling \cite{NIPS2016_6489}, and combinations of multiple structure-guided strategies \cite{Xian2020StructureGuidedRL}. According to Xian et al. \cite{Xian2020StructureGuidedRL}, random sampling of pairwise relations from raw depth data may harm the model's performance, due to training on uninformative or even misleading examples. Even worse, due to imprecision in the ground truth data, the risk of incorrectly ordered items increases with larger samples.
To address these issues, we propose a random sampling strategy that is almost as simple as pure random sampling, and which allows for incorporating the depth structure of the given image while leading to a relatively low training complexity. For $R$ $n$-ary rankings to be queried per training tuple $(I, D)$, $N \cdot R$ item sets $M$ with $n$ individual image locations are sampled, where $N > 1$ is a parameter. For each ranking set $M$, we order all image locations $l$ by $D[l]$ to construct a ground truth permutation $\pi$. Given $\pi$, we sum up all pairwise depth differences
$| D[l_{\pi(i)}] - D[l_{\pi(i + 1)}]|$, $i \in [n-1]$.
Afterwards, we sort all $N \cdot R$ rankings per image in decreasing order according to this sum of depth differences and select the top $R$ rankings as training examples. This way, we consider those rankings that seem to be most informative, since their relative depth values are maximized among the samples. Other strategies, such as the minimum among all pairwise depth differences in a ranking, are of course also possible as a proxy of the amount of information.
It is worth mentioning that the Plackett-Luce model does not support partial rankings, i.e., it allows for neither ties nor incomparability between items. Thus, as opposed to strategies incorporating equality relations, such as \cite{NIPS2016_6489}, such relations are not explicitly considered here.
To avoid sampling point pairs that are almost equally far away from the camera, we add a penalty of $-10$ to the depth difference sum for each compared image location pair $l_1$ and $l_2$ if their depth difference is such that
$\max\left\{ \frac{D[l_1]}{D[l_2]}, \frac{D[l_2]}{D[l_1]} \right\} < 1 + \tau$,
where the parameter $\tau$ is set to $\tau = 0.03$ in our experiments.
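A sketch of the overall sampling procedure, under one plausible reading of the penalty rule and assuming strictly positive (pseudo-)depth values, is given below; the consistency mask of HR-WSI enters as a boolean map of valid pixels:
\begin{verbatim}
import numpy as np

def sample_rankings(D, mask, R, N, n=5, tau=0.03, penalty=-10.0):
    # Draw N*R random n-ary rankings and keep the R most
    # informative ones according to the summed depth differences.
    valid = np.flatnonzero(mask)
    rng = np.random.default_rng()
    candidates = []
    for _ in range(N * R):
        locs = rng.choice(valid, size=n, replace=False)
        order = locs[np.argsort(D.flat[locs])]  # ground-truth pi
        d = np.sort(D.flat[locs])
        score = float(np.sum(np.diff(d)))
        for i in range(n - 1):
            if max(d[i+1] / d[i], d[i] / d[i+1]) < 1 + tau:
                score += penalty  # nearly equal pair
        candidates.append((score, order))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [order for _, order in candidates[:R]]
\end{verbatim}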
\section{Experiments}
To demonstrate the effectiveness of our method, we conduct an exhaustive empirical evaluation on several benchmark datasets. Before presenting the results, we first introduce the datasets, followed by a brief description of the baseline methods and metrics used for assessment.
\subsection{Datasets}
To train our models, we use the recently introduced pseudo-metric ``High-Resolution Web Stereo Image'' ($\textit{HR-WSI}$) dataset \cite{Xian2020StructureGuidedRL}. It consists of $20,378$ diverse, high resolution images annotated with pseudo-depth maps generated from flow predictions. For hyperparameter optimization, a separate set of $400$ images was used. Since the flow predictions provided as depth annotation failed for some image regions, a consistency mask is attached to each prediction to allow for sampling only from pixels that provide a reasonable depth value. To this end, a forward-backward flow consistency check has been applied. Furthermore, the annotations have been preprocessed to also assign a constant depth value to sky regions. Despite its relatively small size, we found this dataset to provide highly informative image and depth pairs to learn from.
In the experiments, we compare our model to various baselines in a ``zero-shot'' generalization study on datasets that were not used within the training processes. Thus, we follow the basic evaluation scheme by Ranftl et al. \cite{ranftl2020towards}. As datasets, we consider \textit{Ibims} \cite{Koch2018EvaluationOC}, \textit{Sintel} \cite{Butler2012ANO}, \textit{DIODE} \cite{Vasiljevic2019DIODEAD}, and \textit{TUM} \cite{Sturm2012ABF}. In the supplementary material, we detail the characteristics of each dataset, such as their data diversity.
With this choice of benchmark targets, we capture indoor, outdoor, and computer generated scenes, which provides a good basis for assessing the generalization performance of different models, and their ability to predict depth orders in a wide variety of applications.
\subsection{Baselines}
We compare our PL-based approach to state-of-the-art depth estimation models using depth relations as training information. To this end, we consider the ResNet-based model trained on ``Relative Depth from Web'' (ReDWeb), ``Depth in the Wild'' (DIW), and YouTube3D as described by Chen et al. \cite{Chen2019LearningSD}, hereinafter referred to as YouTube3D, and the same model as used by Xian et al. \cite{Xian2020StructureGuidedRL} trained on HR-WSI (referred to as Xian 2020). Both approaches have shown compelling generalization performance, corroborating our motivation to use relative data for supervision.
Besides models trained on relative depth information, regression models are obviously also capable of inferring rankings, simply by sorting the image locations based on their values in a predicted dense depth map. Therefore, we consider state-of-the-art (pseudo-)regression methods as additional baselines, namely, DenseDepth~\cite{Alhashim2018}, BTS~\cite{Lee2019FromBT}, MegaDepth~\cite{Li2018MegaDepthLS}, MannequinChallenge (MC)~\cite{li2019learning}, and MiDaS~\cite{ranftl2020towards}. Furthermore, we also evaluated MonoDepth2 \cite{monodepth2} as a completely unsupervised, or more precisely self-supervised, method.
While we considered most baselines as described in the related work, let us note that the authors of MiDaS provide a model trained on approximately $2$ million examples, which is far more than most of the other methods we compare with. To account for this, we re-implemented their approach and retrained the model on HR-WSI for a fairer comparison. For a complete overview of all baselines, including a categorization of the respective training data diversity, we refer to the supplementary material.
\subsection{Metrics}
To evaluate our models, we report the ``ordinal error'' on sampled point pairs as done by Xian et al. \cite{Xian2020StructureGuidedRL}. For two points $l_1$ and $l_2$ sampled from an example $(I, D)$, with $I$ being the image and $D$ a dense (pseudo-)depth map as specified before, the ground truth ordinal relation $r(l_1, l_2, D)$ is given by $+1$ for $D[l_1] > D[l_2]$, $-1$ for $D[l_2] > D[l_1]$ and $0$ otherwise.
The ordinal error is then given by
\begin{equation}
ord(\mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{(I, D, l_1, l_2) \in \mathcal{D}} \!\!\!\!\!\!\!\mathbbm{1}\big( r(l_1, l_2, D) \neq r(l_1, l_2, f(I)) \big),
\end{equation}
where $f$ is the function predicting depth or, in the case of a PL model, scores for each pixel of the input image $I$, resulting in a dense depth map just as given by $D$, and $\mathcal{D}$ denotes the set of all point pairs sampled from the test dataset images and depth maps.
As already noted, we omit all equal pairs, i.e., relations with $r(\cdot, \cdot, \cdot)=0$. Hence, we report $ord$ on unequal pairs only without any equality thresholding.
Thus, there is no need to rely on re-scaling and -translating as done in \cite{ranftl2020towards} and \cite{Xian2020StructureGuidedRL} to identify reasonable equality thresholds, which comes with additional complications for the evaluation process.
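For reference, the ordinal error on a set of sampled location pairs can be computed as follows; equal ground-truth pairs are simply skipped, in line with the evaluation protocol described above:
\begin{verbatim}
import numpy as np

def ordinal_error(pred, D, pairs):
    # pred: predicted dense map (depths or PL scores);
    # D: ground-truth depth map; pairs: flat index pairs (l1, l2).
    errors, total = 0, 0
    for l1, l2 in pairs:
        r_gt = np.sign(D.flat[l1] - D.flat[l2])
        if r_gt == 0:
            continue  # unequal pairs only
        total += 1
        if np.sign(pred.flat[l1] - pred.flat[l2]) != r_gt:
            errors += 1
    return errors / max(total, 1)
\end{verbatim}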
Often, depth orders have varying priorities, i.e., closer elements are more critical for correct ordering than elements far away from the camera. For example, an autonomous vehicle has less time to react to elements very close to the car and must rely on valid input for safe interactions. This is reflected by metrics like the discounted cumulative gain (DCG), which measures the usefulness of rankings by accumulating graded relevances of ranking items, discounted with increasing rank position. More precisely, for every image location $l$ associated with a dense depth map $D$, we set the relevance score of $l$ in $D$ to $rel(l, D) = \frac{1}{D[l] + 1}$. Given these scores, we can specify the DCG score for a ranking $l_{\pi(1)} \succ l_{\pi(2)} \succ \dots \succ l_{\pi(n)}$ by
\begin{equation}\label{eq:dcg}
DCG(\pi, D) = \sum_{i = 1}^{n} \frac{rel(l_{\pi(i)}, D)}{\log_2 (i +1)} \, .
\end{equation}
For our experiments, we used the normalized DCG (nDCG), which divides (\ref{eq:dcg}) by the best DCG
possible on $D$.
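A direct implementation of the (normalized) DCG for a single predicted ranking might look as follows:
\begin{verbatim}
import numpy as np

def ndcg(pred_order, D):
    # pred_order: flat pixel indices as ranked by the model,
    # closest first; relevance rel(l, D) = 1 / (D[l] + 1).
    rel = 1.0 / (D.flat[np.asarray(pred_order)] + 1.0)
    discount = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float(np.sum(rel * discount))
    idcg = float(np.sum(np.sort(rel)[::-1] * discount))
    return dcg / idcg
\end{verbatim}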
For the metric comparison, we assess the root-mean-square error (RMSE) between the dense ground truth and predicted depth maps, as well as the percentage of predictions $\hat{z}$ for which $\delta := \max\left(\frac{\hat{z}}{z}, \frac{z}{\hat{z}}\right) > 1.25$, where $z$ denotes the ground truth depth. To calculate the metrics, we normalized the given ground truth scores by the maximum depth capacity of the corresponding dataset (cf. the dataset characteristics in the supplement) to obtain error values on a similar scale.
\subsection{Results}
To show the effectiveness of our proposed method,
we first compare different losses using the same model architecture and training dataset, followed by a comparison of our method to the baselines.
Every reported result is the average of three runs with different randomization seeds.
\subsubsection{Loss Comparison}
There are many experimental studies in the literature showing improved performance of a method, but not isolating the key factors contributing to the improvement, e.g., the neural network architecture, loss function, training procedure, training data, etc. To assess the influence of a listwise approach to ranking more clearly, we evaluate three methods trained on the same data and with the same neural network architecture, namely (scale-invariant, SI) regression, pairwise, and listwise ranking. It is true that the model, loss, and data may strongly interact with each other (i.e., a loss might work well with a certain architecture on a particular dataset, while the same architecture may harm the performance of a different method). Nevertheless, we found that the ResNet-based architecture as proposed by Xian et al.~\cite{Xian2018MonocularRD} and subsequently also used in~\cite{ranftl2020towards} serves as a good basis for a fair comparison.
For our experiments, we re-implemented the SI mean-squared error loss as also used in MiDaS and the pairwise ranking loss as described in~\cite{NIPS2016_6489} and~\cite{Xian2018MonocularRD}. As training information, we used HR-WSI as a state-of-the-art diverse pseudo-depth dataset. We refer to the supplement for a detailed description of all hyperparameters.
All three methods require different sampling strategies: While the SI-regression uses the complete (masked) image, pair- and listwise methods involve different amounts of sampled points selected per ranking. For a fair comparison, we adapted the number of sampled rankings in the listwise case to the number of drawn pairwise relations, such that one approach does not see many more points than the other during training. In the case of pairwise rankings, we randomly sampled $1$k point pairs per image and epoch, resulting in a maximum of $2$k seen points per image and epoch. For our listwise approach, we found a ranking size of $5$ to achieve a good trade-off between highly informative rankings and efficient training. Hence, we sampled $400$ rankings of size $5$ per image and epoch. Here, we explicitly stick to random-only sampling to alleviate side effects.
Table \ref{tab:experiments:results:losses} presents the results of the method comparison on $50$k randomly sampled location pairs per image. As can be seen, the relative models outperform the SI-regression method, suggesting that ranking losses serve as a better surrogate for optimizing the ordinal error. Moreover, our listwise approach seems to perform slightly better than the pairwise approach, although the difference does not appear to be significant.
\begin{table}[t]
\centering
\caption{Ordinal errors on $50$k randomly sampled pairs per loss, using the architecture from \cite{Xian2020StructureGuidedRL} trained on HR-WSI (lower is better).}
\label{tab:experiments:results:losses}
\resizebox{0.93\columnwidth}{!}{
\begin{tabular}{l|cccc|c}
\toprule
Loss & Ibims & Sintel & DIODE & TUM & Avg. Rank \\
\midrule
SI-Regression & 0.308 & 0.311 & 0.334 & 0.222 & 3 \\
Pairwise & 0.281 & 0.299 & 0.291 & \textbf{0.192} & 1.75 \\
Listwise & \textbf{0.273} & \textbf{0.289} & \textbf{0.285} & 0.218 & \textbf{1.25} \\
\bottomrule
\end{tabular}}
\end{table}
\subsubsection{Ordinal Prediction}\label{sec:ordinal}
After having compared the loss function on a shared model and data level, we now analyze individual depth estimation models with regard to their ordinal error and nDCG performance as trained by the respective authors,
who made an attempt at optimizing the interplay between data, network architectures, and training procedures.
For the baseline models, we used the best pretrained models provided by the authors or, if official implementations were not available, popular and carefully tested re-implementations. For our PL models, we kept most of the training hyperparameters the same (see supplementary for more details). Within our sampling strategy, we set the factor $N=5$ (cf.\ Section \ref{sec:method:sampling}). For reasons of fairness, we also used our proposed EfficientNet-based architecture for MiDaS, as it delivers superior performance compared to the formerly used architecture. Here, as opposed to the version of MiDaS within the loss comparison, where we primarily focused on comparing different problem formulations, we employ the trimmed absolute deviation loss, which provides the best performance among the regarded alternatives (cf.~\cite{ranftl2020towards}).
Table \ref{tab:experiments:results:ord} reports the individual ordinal errors on unequal relations for the four benchmark datasets, again on $50$k randomly sampled location pairs per image. As can be seen, our PLDepthEffNet achieves the lowest averaged rank over all datasets, while outperforming the other methods on half of the datasets at the same time, demonstrating the effectiveness of the listwise ranking approach in optimizing the ordinal error metric. Supporting the observations made in the previous experiment, the ability of MegaDepth, another scale-invariant regression method, to correctly rank elements is fairly limited, even though it has access to over $600$k diverse training instances. Moreover, in agreement with the previous results, the ranking approaches are consistently among the best models, suggesting ranking losses to be the preferred choice as surrogates for ordinal error minimization.
\begin{table}[t]
\centering
\caption{Ordinal errors on benchmark datasets with $50$k randomly sampled relations for each image (lower is better).}
\label{tab:experiments:results:ord}
\resizebox{0.93\columnwidth}{!}{
\begin{tabular}{l|cccc|c}
\toprule
Model & Ibims & Sintel & DIODE & TUM & Avg. Rank \\
\midrule
DenseDepth & 0.208 & 0.384 & 0.317 & 0.224 & 5.75 \\
MegaDepth & 0.297 & 0.324 & 0.316 & 0.227 & 7.5 \\
BTS & \textbf{0.190} & 0.384 & 0.323 & 0.251 & 6.25 \\
MC & 0.272 & 0.387 & 0.378 & 0.206 & 7.25 \\
MiDaS & 0.269 & 0.278 & 0.263 & 0.207 & 3.75 \\
\midrule
MonoDepth2 & 0.375 & 0.425 & 0.407 & 0.336 & 9.75 \\
\midrule
YouTube3D & 0.272 & 0.292 & 0.288 & 0.199 & 4.75 \\
Xian 2020 & 0.225 & 0.278 & 0.263 & \textbf{0.184} & 2.25 \\
\midrule
PLDepthResNet & 0.245 & 0.284 & 0.277 & 0.213 & 4.75 \\
PLDepthEffNet & 0.213 & \textbf{0.272} & \textbf{0.256} & 0.204 & \textbf{2} \\
\bottomrule
\end{tabular}}
\end{table}
Additionally, Table \ref{tab:experiments:results:ndcg} reports the results for nDCG as performance metric on $100$ randomly sampled rankings of size $500$ per image. In accordance with the ordinal errors, ranking methods are well suited to optimize this metric. Here, the top-$3$ models are all of that kind, with PLDepthEffNet performing slightly better than Xian 2020.
\begin{table}[t]
\centering
\caption{nDCG on benchmark datasets with $100$ randomly sampled rankings of size $500$ for each image (higher is better).}
\label{tab:experiments:results:ndcg}
\resizebox{0.93\columnwidth}{!}{
\begin{tabular}{l|cccc|c}
\toprule
Model & Ibims & Sintel & DIODE & TUM & Avg. Rank \\
\midrule
DenseDepth & 0.916 & 0.986 & 0.821 & 0.986 & 4.75 \\
MegaDepth & 0.911 & 0.989 & 0.815 & 0.983 & 7.5 \\
BTS & \textbf{0.918} & 0.986 & 0.825 & 0.983 & 4.75 \\
MC & 0.908 & 0.986 & 0.828 & 0.987 & 5.5 \\
MiDaS & 0.913 & 0.991 & 0.806 & 0.987 & 6.25 \\
\midrule
MonoDepth2 & 0.896 & 0.981 & \textbf{0.836} & 0.961 & 7.75 \\
\midrule
YouTube3D & 0.911 & 0.993 & 0.816 & 0.988 & 4.75 \\
Xian 2020 & 0.916 & 0.993 & 0.817 & \textbf{0.990} & 2.75 \\
\midrule
PLDepthResNet & 0.914 & 0.993 & 0.817 & 0.985 & 5 \\
PLDepthEffNet & 0.916 & \textbf{0.994} & 0.819 & 0.988 & \textbf{2.5} \\
\bottomrule
\end{tabular}}
\end{table}
\subsubsection{Metric Prediction}
\begin{table*}[!t]
\centering
\caption{Evaluation results on benchmark datasets with regard to metric depth error measures (lower is better in both cases).}
\label{tab:experiments:results:metric}
\resizebox{0.805\textwidth}{!}{
\begin{tabular}{l|cc|cc|cc|cc|cc}
\toprule
\multirow{2}{*}{Model} & \multicolumn{2}{c|}{Ibims} & \multicolumn{2}{c|}{Sintel} & \multicolumn{2}{c|}{DIODE} & \multicolumn{2}{c|}{TUM} & \multicolumn{2}{c}{Avg. Rank} \\
& RMSE & $\delta > 1.25$ & RMSE & $\delta > 1.25$ & RMSE & $\delta > 1.25$ & RMSE & $\delta > 1.25$ & RMSE & $\delta > 1.25$ \\
\midrule
DenseDepth & \textbf{0.016} & 20.9 & 0.128 & 39.6 & 0.110 & 53.5 & 0.084 & 69.7 & 5.25 & 4.5 \\
MegaDepth & 0.020 & 35.9 & 0.119 & 35.5 & 0.094 & 55.3 & 0.082 & 70.8 & 6 & 7 \\
BTS & \textbf{0.016} & \textbf{18.9} & 0.133 & 41.8 & 0.112 & 54.4 & 0.089 & 72.4 & 7 & 6.25 \\
MC & 0.018 & 31.3 & 0.128 & 38.8 & 0.120 & 58.7 & \textbf{0.074} & \textbf{67.8} & 5.25 & 5.5 \\
MiDaS & 0.019 & 33.2 & \textbf{0.091} & \textbf{27.7} & \textbf{0.081} & 53.5 & 0.085 & 71.1 & 4 & 4.75 \\
\midrule
MonoDepth2 & 0.023 & 42.6 & 0.143 & 43.8 & 0.122 & 61.1 & 0.088 & 72.5 & 9.75 & 10 \\
\midrule
YouTube3D & 0.019 & 31.8 & 0.101 & 31.1 & 0.096 & 54.5 & 0.077 & 68.4 & 4.75 & 5.25 \\
Xian 2020 & 0.018 & 31.5 & 0.096 & 30.5 & 0.085 & \textbf{51.4} & 0.080 & 69.4 & \textbf{3} & \textbf{3.25} \\
\midrule
PLDepthResNet & 0.019 & 30.9 & 0.099 & 30.7 & 0.092 & 53.1 & 0.084 & 71.9 & 5 & 4.75 \\
PLDepthEffNet & 0.017 & 29.1 & 0.093 & 29.3 & 0.085 & 52.7 & 0.083 & 71.6 & \textbf{3} & 3.5 \\
\bottomrule
\end{tabular}}
\end{table*}
As motivated theoretically in Section~\ref{sec:method:metric}, our method provides an interface to recover metric depth information approximated from observed rankings. Here, we compare our model to the baselines with regard to the two metric error measures RMSE and $\delta > 1.25$ using the same models as in Section \ref{sec:ordinal}. As all benchmark datasets have different scales and might be shifted arbitrarily, we rescale and shift the predictions to the resolution of the ground truth as described in \cite{ranftl2020towards} by optimizing a least-squares criterion.
The results are given in Table \ref{tab:experiments:results:metric}. As can be seen, although our model was solely trained on rankings, it is capable of recovering the underlying depth structure relatively precisely. Notably, it is superior to all regression baselines and on a par with Xian 2020 for RMSE, although this ranking baseline additionally incorporates a smooth gradient loss term for sharp boundaries, directly accessing the metric depth information at training time. While Xian 2020 delivers the best $\delta > 1.25$ results, our approach still proves to be very competitive in this regard.
Fig.\ \ref{fig:preds} shows exemplary predictions of our model. Obviously, the model is able to capture the tiniest object details, such as tree branches in the image from DIODE, and to predict sharp object boundaries. This shows that, even with simple sampling strategies, listwise ranking is able to reflect and predict such small details, without any need for very complex strategies based on the depth structure of an image.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{gfx/preds.pdf}
\caption{Sample predictions given by the reconstructed metric scores of the PLDepthEffNet model as used in the experiments.}
\label{fig:preds}
\end{figure}
\section{Conclusion}
We have proposed to tackle the problem of depth ordering in images as a listwise ranking problem, for which we employed a Plackett-Luce model tailored to the domain of monocular depth estimation. Thus, compared to estimating the exact depth values, we solve an arguably simpler problem, at least if the goal is to minimize an ordinal error metric. Besides, compared to precise numerical data required by regression models for training, a ranking approach allows for leveraging weaker and more diverse training data. Although not directly trained on metric data, our model is capable of providing precise (shift-invariant) depth predictions, essentially by exploiting the relationship between the (latent) distance between image objects and the probability of reversing their order in a ranking.
Through an exhaustive zero-shot cross-dataset evaluation, we showed that our approach, combined with a state-of-the-art neural network as backbone, achieves superior ranking performance compared to previous approaches. In particular, it improves upon existing pairwise ranking methods, in spite of using a much simpler and more efficient sampling technique. Remarkably, our model also performs very competitively on metric error measures.
Motivated by these promising results, we plan to elaborate on further improvements of the listwise ranking approach. This includes an investigation of the effect of varying the ranking size, as well as an extension toward learning from partial rankings and incorporating equality relations. In addition, as we only applied random sampling so far, we plan to develop more sophisticated sampling strategies leading to more informative rankings to learn from.
\noindent \textbf{Acknowledgement.} This work was supported by the German Research Foundation (DFG) under Grant\ 3050231323. Moreover, computational resources were provided by the Paderborn Center for Parallel Computing (PC$^2$).
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction} \label{Introduction}
A {\em $k$-pair network} $\mathcal N=(V, A, S, T)$ consists of a directed acyclic graph (DAG) $D=(V, A)$, a set of $k$ vertices $S=\{s_1, s_2, \dots, s_k\}$ with zero in-degree, called sources (senders) and a set of $k$ vertices $T=\{t_1, t_2, \dots, t_k\}$ with zero out-degree, called sinks (receivers). For convenience only, we assume that any $k$-pair network considered in this paper does not have vertices whose in-degree and out-degree are both equal to $1$. Roughly put, the {\em multiple unicast conjecture}, also known as the {\em Li-Li conjecture}~\cite{Li042}, claims that for any $k$-pair network, if information can be transmitted from all the senders to their corresponding receivers at rate $(r_1, r_2,\dots, r_k)$ via network coding, then it can be transmitted at the same rate via undirected fractional routing. One of the most challenging problems in the theory of network coding~\cite{Yeung06}, this conjecture has been doggedly resisting a series of attacks~\cite{CH15, CH16, CH17, CH18, Harv06, Jain06, Langberg09, Zongpeng12, Xiahou12, Yang14} and is still open to date.
A $k$-pair network $\mathcal{N}$ is said to be {\em fully reachable} if there exists an $s_i$-$t_j$ directed path $P_{s_i, t_j}$ for all $i, j$; and {\em strongly reachable} if, in addition, the paths $P_{s_1, t_j}, P_{s_2, t_j}, \cdots, P_{s_k, t_j}$ are {\em edge-disjoint} for any $j$; and {\em extra strongly reachable} if, furthermore, for any $j$ and all $i \neq i'$, $P_{s_i, t_j}$ and $P_{s_{i'}, t_j}$ do not share any vertex other than $t_j$. Throughout the paper, we will reserve the notations $\mathbf{P}_{t_j}$ and $\mathbf{P}$ and define
$$
\mathbf P_{t_j}:=\{P_{s_i, t_j}: i = 1, 2, \dots, k\}, \quad \mathbf{P} := \cup_{j=1}^k \mathbf P_{t_j}.
$$
For notational convenience, we may refer to a path from $\mathbf P_{t_j}$ as a $\mathbf P_{t_j}$-path, or simply a $\mathbf{P}$-path, and moreover, to an arc on a path of $\mathbf{P}_{t_j}$ as a $\mathbf{P}_{t_j}$-arc. Note that an arc can be simultaneously a $\mathbf{P}_{t_j}$-arc and a $\mathbf{P}_{t_{j'}}$-arc for $j \neq j'$.
The following {\em Langberg-M\'{e}dard multiple unicast conjecture}~\cite{Langberg09}, which deals with strongly reachable $k$-pair networks, is a weaker version of the Li-Li conjecture. Note that for a strongly reachable network, each source is able to multicast at rate 1 to all the receivers, e.g., the classic butterfly network of two-unicast is such a case.
\begin{conj}\label{conj-1}
For any strongly reachable $k$-pair network, there exists a feasible undirected fractional multi-flow with rate $(1,1,\dots,1)$.
\end{conj}
\noindent It turns out that Conjecture~\ref{conj-1} is equivalent to the following conjecture, with ``strongly reachable'' replaced by ``extra strongly reachable''.
\begin{conj}\label{conj-2}
For any extra strongly reachable $k$-pair network, there exists a feasible undirected fractional multi-flow with rate $(1,1,\dots,1)$.
\end{conj}
\noindent To see the equivalence, note that Conjecture~\ref{conj-1} trivially implies Conjecture~\ref{conj-2}, and the reverse direction follows from the fact that a strongly reachable $k$-pair network can be transformed to an extra strongly reachable $k$-pair network with a feasible undirected fractional multi-flow mapped to one with the same rate.
The Langberg-M\'{e}dard multiple unicast conjecture was first proposed in 2009~\cite{Langberg09}. In the same paper, the authors constructed a feasible undirected fractional multi-flow with rate $(1/3, 1/3, \dots, 1/3)$ for a strongly reachable $k$-pair network. Recently, we have improved $1/3$ to $8/9$ for a generic $k$ in~\cite{CH15} and to $11/12$ for $k=3, 4$ in~\cite{CH18}.
A strongly reachable $k$-pair network $\mathcal N$ is said to be {\em stable} if the choice of each $P_{s_i, t_j}$, $i, j=1, 2, \dots, k$, is unique, and {\em unstable} otherwise (see Fig.~\ref{Minimal Networks}); here we remark that $\mathcal{N}$ is stable only if it is extra strongly reachable. In this paper, we will establish Conjecture~\ref{conj-1} for stable $3$-pair networks by establishing Conjecture~\ref{conj-2} for the same family of networks. Our treatment is based on classification of stable $3$-pair networks according to their network topologies. Related work on topological analysis of strongly reachable networks can be found in~\cite{Langberg06} and~\cite{Han09}.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.5cm]{fig1.1}~~~~~~~~~~~~~~~~~~
\includegraphics[width=3.5cm]{fig1.2}
\caption{A stable network $(a)$ and an unstable network $(b)$. In $(b)$, $\mathbf P_{t_2}$ is not unique since $P_{s_2, t_3}$ can be chosen as either $[s_2, D, F, I, K, t_3]$ or $[s_2, E, G, I, K, t_3]$. Here and hereafter, for stable networks, {\bf an arc that belongs to only one $\mathbf P$-path is colored red, green or blue, respectively, depending on the fact that the $\mathbf{P}$-path is a $\mathbf P_{t_1}$-, $\mathbf P_{t_2}$- or $\mathbf{P}_{t_3}$-path; and an arc that belongs to two or more $\mathbf{P}$-paths' is colored black}.}\label{Minimal Networks}
\end{figure}
The rest of the paper is organized as follows. In Section 2, we introduce some basic notions, facts and related tools. In Section 3, we characterize stable $k$-pair networks and subsequently show that there exists an efficient algorithm to determine the stability of a given $k$-pair network. In Section 4, we investigate the topological structure of stable $3$-pair networks, for which we settle the Langberg-M\'{e}dard conjecture in Section 5. Finally, the paper is concluded in Section 6.
\section{Preliminaries}
Throughout this section, we consider a {\em fully reachable} $k$-pair network $\mathcal{N}$ and adopt all the related notations defined in Section~\ref{Introduction}.
\subsection{Undirected Fractional Multi-Flow}\label{subsection-multi-flow-basics}
For an arc $a=[u,v]\in A$, we call $u$ and $v$ the {\em tail} and the {\em head} of $a$ and denote them by $tail(a)$, $head(a)$, respectively. For any $s, t \in V$, an $s$-$t$ {\em flow}~\footnote{The flow or multi-flow defined for directed graph in this paper, which can be negative, is equivalent to the flow defined in~\cite{Schrijver03} for undirected graphs, which has to be non-negative.} is a function $f: A \rightarrow \mathbb{R}$ satisfying the following {\em flow conservation law:} for any $v \notin \{s, t\}$,
\begin{equation}\label{flow conservation law}
excess_f(v)=0,
\end{equation}
where
\begin{equation}
excess_{f}(v):=\sum_{a\in A: \; head(a)=v} f(a)-\sum_{a\in A: \; tail(a)=v} f(a).
\end{equation}
It is easy to see that $excess_{f}(s)=-excess_{f}(t)$; the quantity $-excess_{f}(s) = excess_{f}(t)$ is called the {\em value (or rate)} of $f$. We say $f$ is {\it feasible} if $|f(a)| \leq 1$ for all $a\in A$.
An $(s_1, s_2, \dots,s_k)$-$(t_1, t_2, \dots, t_k)$ {\em multi-flow} refers to a set of $k$ flows $\mathcal{F}=\{f_1, f_2, \dots, f_k\}$, where each $f_{i}$ is an $s_i$-$t_i$ flow. We say $\mathcal{F}$ has {\em rate} $(d_1, d_2, \dots, d_k)$, where $d_i:=excess_{f_i}(t_i)$. For any given $a \in A$, we define $|\mathcal{F}|(a)$ as
\begin{equation}\label{total value in an arc}
|\mathcal{F}|(a):=\sum_{1\leq i\leq k}|f_{i}(a)|.
\end{equation}
And we say $\mathcal{F}$ is {\em feasible} if $|\mathcal{F}|(a) \leq 1$ for all $a \in A$.
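For concreteness, the following Python sketch verifies whether a given multi-flow is feasible with rate $(1, 1, \dots, 1)$; arcs are encoded as ordered pairs $(u, v)$, each flow as a dictionary mapping every arc to its value, and all names are illustrative:
\begin{verbatim}
def excess(f, v, A):
    # Inflow minus outflow of flow f at vertex v.
    return (sum(f[a] for a in A if a[1] == v)
            - sum(f[a] for a in A if a[0] == v))

def is_routing_solution(flows, V, A, pairs, eps=1e-9):
    # flows[i]: dict arc -> value for the s_i-t_i flow f_i;
    # pairs: list of (s_i, t_i) source-sink pairs.
    for f, (s, t) in zip(flows, pairs):
        if any(abs(excess(f, v, A)) > eps
               for v in V if v not in (s, t)):
            return False  # flow conservation violated
        if abs(excess(f, t, A) - 1) > eps:
            return False  # rate is not 1
    return all(sum(abs(f[a]) for f in flows) <= 1 + eps
               for a in A)  # feasibility on every arc
\end{verbatim}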
\subsection{Routing Solution}
For each $P_{s_i, t_j}$, we define an $s_i$-$t_j$ flow $f_{i,j}$ as follows:
$$
f_{i,j}(a)=\left\{
\begin{array}{ll}
1, & \hbox{$a\in P_{s_i, t_j}$,}\\
0, & \hbox{otherwise.}
\end{array} \right.
$$
\begin{defn}[Linear Routing Solution]
An $(s_1, s_2, \dots, s_k)$-$(t_1,t_2,\dots, t_k)$ multi-flow $\mathcal{F} = \{f_1, f_2, \dots, f_k\}$ is said to be a {\em routing solution} for $\mathcal{N}$ if it is feasible with rate $(1, 1, \dots, 1)$. A routing solution is called {\em linear} (with respect to $\mathbf{P}$), if, for each feasible $l$,
\begin{equation} \label{coefficient-matrix}
f_{l}=\sum_{i,j=1}^k c^{(l)}_{i,j} f_{i,j},
\end{equation}
where all $c^{(l)}_{i,j} \in \mathbb{R}$, in which case the solution $\mathcal{F}$ can be equivalently represented by its {\em matrix form} $\mathcal C=\left((c^{(1)}_{i,j}), (c^{(2)}_{i,j}), \dots, (c^{(k)}_{i,j})\right)$; otherwise, it is called {\em non-linear}.
\end{defn}
The following theorem~\cite{CH18} is somewhat straightforward.
\begin{thm}\label{basic observation}
An $(s_1, s_2, \dots, s_k)$-$(t_1,t_2,\dots, t_k)$ multi-flow $\mathcal{F} = \{f_1, f_2, \dots, f_k\}$ satisfying (\ref{coefficient-matrix}) has rate $(1,1,\dots,1)$ if and only if all $c^{(l)}_{i,j}$ satisfy
\begin{equation} \label{commodity condition}
\sum_{j=1}^kc^{(l)}_{i,j}=0, \text{ for all}\; i\neq l, \quad \sum_{i=1}^kc^{(l)}_{i,j}=0, \text{ for all}\; j\neq l, \quad \sum_{i=1}^k\sum_{j=1}^kc_{i,j}^{(l)}= 1, \text{ for all}\;l.
\end{equation}
\end{thm}
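The conditions of Theorem~\ref{basic observation} are straightforward to verify mechanically; the following sketch checks them for a tuple $\mathcal C$ of coefficient matrices, using $0$-based indices:
\begin{verbatim}
import numpy as np

def satisfies_rate_conditions(C, tol=1e-9):
    # C: array of shape (k, k, k); C[l] holds the matrix
    # (c^(l)_{i,j}) of the l-th flow in the linear solution.
    C = np.asarray(C, dtype=float)
    k = C.shape[0]
    for l in range(k):
        rows = C[l].sum(axis=1)  # sum over j, one entry per i
        cols = C[l].sum(axis=0)  # sum over i, one entry per j
        if not np.allclose(np.delete(rows, l), 0, atol=tol):
            return False
        if not np.allclose(np.delete(cols, l), 0, atol=tol):
            return False
        if not np.isclose(C[l].sum(), 1, atol=tol):
            return False
    return True
\end{verbatim}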
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{fig2.1}~~~~~~~~~~
\includegraphics[width=4cm]{fig2.2}~~~~~~~~~~
\caption{A Linear routing solution.}\label{2-pair-linear-solution}
\end{figure}
\begin{exam}\label{exam-2-pair}
Consider the $2$-pair network depicted in Fig.~\ref{2-pair-linear-solution} and Fig.~\ref{2-pair-non-linear-solution}. It is easy to check that Fig.~\ref{2-pair-linear-solution} gives a linear routing solution $\mathcal F=(f_1, f_2)$ with the matrix form
$$
\left(\left(
\begin{array}{cc}
\frac{3}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{cc}
\frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{3}{4} \\
\end{array}
\right)
\right),
$$
i.e., $f_1=\frac{3}{4}f_{1,1}+\frac{1}{4}f_{1,2}-\frac{1}{4}f_{2,2}+\frac{1}{4}f_{2,1}$ and $f_2=\frac{3}{4}f_{2,2}+\frac{1}{4}f_{2,1}-\frac{1}{4}f_{1,1}+\frac{1}{4}f_{1,2}$. Note that
$$
|\mathcal F|(a)=|f_1(a)|+|f_2(a)|=\left\{
\begin{array}{ll}
\frac{1}{2}, & \hbox{$a\in\{[s_1,t_2],[s_2,t_1]\}$;} \\
1, & \hbox{otherwise.}
\end{array}
\right.
$$
On the other hand, it easy to check that Fig.~\ref{2-pair-non-linear-solution} gives a non-linear routing solution $\mathcal{F}=(f_1, f_2)$ with
$$
|\mathcal F|(a)=\left\{
\begin{array}{ll}
0, & \hbox{$a=[s_2,t_1]$;} \\
1, & \hbox{otherwise.}
\end{array}
\right.
$$
\end{exam}
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{fig2.3}~~~~~~~~~~
\includegraphics[width=4cm]{fig2.4}~~~~~~~~~~
\caption{A non-linear routing solution.}\label{2-pair-non-linear-solution}
\end{figure}
Using the above language, the Langberg-M\'{e}dard multiple unicast conjecture says that strongly reachable $k$-pair networks always have routing solutions. Here, we conjecture that it can be further strengthened as follows.
\begin{conj}
Each strongly reachable $k$-pair network has a {\bf linear} routing solution.
\end{conj}
\subsection{$\mathcal{S}_{\mathcal{N}}$ and $g_s(\mathcal{C})$}
Let $[k] = \{1, 2, \dots, k\}$ and define
$$
\mathcal{S}_{\mathcal{N}} = \{ \{(i, j) \in [k] \times [k]: {P}_{s_i, t_j} \mbox{ passes through } a\}: a \in A \};
$$
in other words, each element of $\mathcal{S}_{\mathcal{N}}$ is a set of index pairs corresponding to all $\mathbf{P}$-paths that pass through a given arc in $\mathcal{N}$. Note that, if $\mathcal N$ is a strongly reachable network, then for any feasible $j$, each arc is passed by at most one of the paths $P_{s_1, t_j}, P_{s_2, t_j}, \dots, P_{s_k, t_j}$, and hence $\mathcal S_{\mathcal N}\subseteq\mathcal S_k$, where
$$
\mathcal{S}_k := \{\{(i_1,j_1),\dots, (i_r,j_r)\}\subseteq [k]\times[k] \,:\, j_1 < j_2 < \dots < j_r,\ 1 \leq r \leq k\}.
$$
Now, for a tuple of $k\times k$ matrices $\mathcal C=((c^{(1)}_{i,j}), (c^{(2)}_{i,j}), \dots, (c^{(k)}_{i,j}))$ satisfying (\ref{commodity condition}), given $s \in \mathcal{S}_{\mathcal{N}}$ and $l \in [k]$, we define
$$
g^{(l)}_{s}(\mathcal C):=\underset{(i,j)\in s}{\sum}c^{(l)}_{i,j},
$$
and furthermore,
\begin{equation} \label{g-smalls}
g_{s}(\mathcal C):=\sum_{l=1}^k|g^{(l)}_{s}(\mathcal C)|.
\end{equation}
The following theorem, whose proof is straightforward and thus omitted, will be used as a key tool to establish our results.
\begin{thm}\label{thm-comput}
$\mathcal C$ is a linear routing solution of $\mathcal N$ if $g_{s}(\mathcal C)\leq 1$ for any $s\in \mathcal S_{\mathcal N}$.
\end{thm}
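Theorem~\ref{thm-comput} thus yields a simple sufficient test for a candidate linear routing solution; a sketch, again with $0$-based index pairs, is given below:
\begin{verbatim}
import numpy as np

def g_value(s, C):
    # g_s(C) = sum_l | sum_{(i,j) in s} c^(l)_{i,j} |.
    C = np.asarray(C, dtype=float)
    return float(sum(abs(sum(C[l, i, j] for (i, j) in s))
                     for l in range(C.shape[0])))

def is_linear_routing_solution(S_N, C, eps=1e-9):
    # S_N: the collection of index-pair sets of the network.
    return all(g_value(s, C) <= 1 + eps for s in S_N)
\end{verbatim}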
For $s=\{(i_1, j_1), (i_2, j_2), \dots, (i_{\alpha(s)}, j_{\alpha(s)})\}\in\mathcal S_{\mathcal N}$,
we define the following multi-set:
$$
Ind_s:=\{i_1, j_1, i_2, j_2, \dots, i_{\alpha(s)}, j_{\alpha(s)}\},
$$
where $\alpha(s)$ denotes the size of $s$. And for any $l=1, 2, \dots, k$, denote by $m_{Ind_s}(l)$ the multiplicity of $l$ in $Ind_s$ (if $l\notin Ind_s$, then $m_{Ind_s}(l)=0$). An element $(i,j)\in s$ is said to be {\em diagonal} if $i=j$, otherwise {\em non-diagonal}. We use $\gamma(s)$ to denote the number of diagonal elements in $s$. For a quick example, consider $s=\{(1,1),(2,2),(1,3),(3,4),(1,6)\}\subseteq[6]\times[6]$. Then, $Ind_s=\{1,1,2,2,1,3,3,4,1,6\}$, $m_{Ind_s}(1)=4$, $m_{Ind_s}(2)=m_{Ind_s}(3)=2$, $m_{Ind_s}(4)=m_{Ind_s}(6)=1$, $m_{Ind_s}(5)=0$, $\alpha(s)=5$ and $\gamma(s)=2$.
\section{Characterization of Stable Networks} \label{section-solutions}
In this section, unless specified otherwise, we assume that $\mathcal{N}$ is an {\em extra strongly reachable} $k$-pair network.
\begin{defn}[Residual Network \cite{Langberg06}]\label{def-residual network}
For $j=1, 2, \dots, k$, the $j$-th {\em residual network} $\mathcal N_j$ is formed from $\mathcal N$ by reversing the directions of all its $\mathbf{P}_{t_j}$-arcs (that may be simultaneously $\mathbf{P}_{t_{j'}}$-arcs for some $j' \neq j$).
\end{defn}
Note that in spite of the acyclicity of $\mathcal N$, there may exist directed cycles in $\mathcal N_j$, and such a directed cycle must contain at least one reversed $\mathbf P_{t_j}$-arc.
\begin{defn}[Regular Cycle]
A directed cycle $C$ of $\mathcal N_j$ is called {\em regular}, if $C$ has no isolated vertex of $\mathbf P_{t_j}$, otherwise it is called {\em singular}.
\end{defn}
\begin{defn}[Semi-Cycle \cite{Han09}]\label{def-semi-cycle}
A $\mathbf P_{t_j}$-{\it semi-cycle} of $\mathcal N$ is formed from a regular cycle of $\mathcal N_j$ by reversing the directions of all its $\mathbf P_{t_j}$-arcs.
\end{defn}
Obviously, there is a one-to-one correspondence from the set of all the $\mathbf P_{t_j}$-semi-cycles in $\mathcal N$ to the set of all the regular cycles of the $j$-th residual network $\mathcal N_j$.
\begin{figure}[htbp]
\centering
\includegraphics[width=6cm]{fig4.11}~~~~~~~~~~
\includegraphics[width=6cm]{fig4.12}~~~~~~~~~~
\caption{If we reverse the directions of all the $\mathbf P_{t_1}$-paths, then both $(a)$ and $(b)$ will give rise to directed cycles of $\mathcal N_1$. Note that the one from $(a)$ is regular, whereas the one from $(b)$ is singular since it contains an isolated vertex $A$ on the path $P_{s_1, t_1}$. And by definition, (a) is a $\mathbf{P}_{t_1}$-semi-cycle.}\label{fig-reg-cycle}
\end{figure}
\begin{defn}[Crossing]\label{def-crossing}
A $\mathbf P_{t_j}$-{\it crossing} of $\mathcal N$ is formed from a $\mathbf P_{t_j}$-semi-cycle of $\mathcal N$ by removing all the $\mathbf P_{t_j}$-arcs.
\end{defn}
For example, consider the network $\mathcal N$ depicted in $(b)$ of Fig.~\ref{Minimal Networks}. While the choices of $P_{s_1, t_3}$ and $P_{s_3, t_3}$ are both unique, there are two choices for $P_{s_2, t_3}$: $P^{(1)}_{s_2, t_3}=[s_2, D, F, I, K, t_3]$ and $P^{(2)}_{s_2, t_3}=[s_2, E, G, I, K, t_3]$, which give rise to two choices of $\mathbf{P}_{t_3}$: $\mathbf P^{(1)}_{t_3}=\{P_{s_1, t_3}, P^{(1)}_{s_2, t_3}, P_{s_3, t_3}\}$ and $\mathbf P^{(2)}_{t_3}=\{P_{s_1, t_3}, P^{(2)}_{s_2, t_3}, P_{s_3, t_3}\}$. By definition, $\{[s_2, D, F, I], [s_2, E, G, I]\}$ is a $\mathbf P^{(1)}_{t_3}$-semi-cycle and also a $\mathbf P^{(2)}_{t_3}$-semi-cycle of $\mathcal N$; $[s_2, E, G, I]$ is a $\mathbf P^{(1)}_{t_3}$-crossing and $[s_2, D, F, I]$ is a $\mathbf P^{(2)}_{t_3}$-crossing. If we reverse the direction of $\mathbf P^{(1)}_{t_3}$, then $[s_2, E, G, I, F, D, s_2]$ is a cycle in the corresponding residual network, and if we choose to reverse the direction of $\mathbf P^{(2)}_{t_3}$, then $[s_2, D, F, I, G, E, s_2]$ is a cycle in the corresponding residual network.
We are now ready to state the following theorem, which gives several equivalent characterizations of stable networks.
\begin{thm}\label{thm-main}
For any extra strongly reachable $k$-pair network $\mathcal N$, the following statements are all equivalent.
\begin{description}
\item[$1)$] $\mathcal N$ is stable.
\item[$2)$] $\mathcal N$ has no $\mathbf P_{t_j}$-semi-cycle, $j=1, 2, \dots, k$.
\item[$3)$] $\mathcal N$ has no $\mathbf P_{t_j}$-crossing, $j=1, 2, \dots, k$.
\item[$4)$] None of $\mathcal N_1, \mathcal N_2, \dots, \mathcal N_k$ has a regular directed cycle.
\end{description}
\end{thm}
\begin{proof}
We will only establish the equivalence between $1)$ and $2)$, which is the only non-trivial part of the proof.
$ 2) \to 1)$: Suppose $\mathcal N$ is unstable. Then, for some $j$, there exist two choices for $\mathbf{P}_{t_j}$: $\mathbf P^{(1)}_{t_j}=\{P^{(1)}_{s_1, t_j}, P^{(1)}_{s_2, t_j}, \dots, P^{(1)}_{s_k, t_j}\}$ and $\mathbf P^{(2)}_{t_j}=\{P^{(2)}_{s_1, t_j}, P^{(2)}_{s_2, t_j}, \dots, P^{(2)}_{s_k, t_j}\}$. Let $C_1$ be the subnetwork of $\mathcal N$ induced on $\mathbf P^{(1)}_{t_j}\cup \mathbf P^{(2)}_{t_j}$ after removing all the vertices whose in-degree and out-degree are both $1$. Then, for each arc $a$ of $C_1$, there are three cases: $(1)$ $a$ only belongs to a path of $\mathbf P^{(1)}_{t_j}$; $(2)$ $a$ only belongs to a path of $\mathbf P^{(2)}_{t_j}$; $(3)$ $a$ belongs to both a path of $\mathbf P^{(1)}_{t_j}$ and a path of $\mathbf P^{(2)}_{t_j}$. Let $C$ be the digraph induced on the arcs of Cases $(1)$ and $(2)$ after reversing the directions of the arcs of Case $(1)$. It is easy to see that for each vertex $v$ of $C$, the in-degree of $v$ equals the out-degree of $v$. Thus, $C$ is an Eulerian directed graph and hence composed of arc-disjoint directed cycles, each of which corresponds to a $\mathbf P^{(1)}_{t_j}$-semi-cycle by definition.
$ 1) \to 2)$: Suppose that there exists a $\mathbf P_{t_j}$-semi-cycle $C$, and let $C'$ be the corresponding $\mathbf P_{t_j}$-crossing. Then it is easy to see that $(\mathbf P_{t_j}\setminus C)\cup C'$ is an alternative choice of $\mathbf{P}_{t_j}$, which means that $\mathcal N$ is not stable.
\end{proof}
We would like to add that one can efficiently check whether a given $k$-pair network $\mathcal{N}$ is extra strongly reachable by applying the Ford-Fulkerson algorithm to a set of $k$ directed graphs $D_i$, $1\leq i\leq k$, constructed below:
\begin{itemize}
\item Add a vertex $s$ as the source node and add an arc $[s, s_j]$ for each $s_j\in S$, $1\leq j\leq k$.
\item Split each vertex $v\in V\setminus \{s, t_i\}$ into two vertices $v_{in}$ and $v_{out}$ and add an arc $[v_{in}, v_{out}]$. Accordingly, replace arcs $[s, u]$, $[u, v]$ and $[v, t_i]$ by $[s, u_{in}]$, $[u_{out}, v_{in}]$ and $[v_{out}, t_i]$, respectively.
\end{itemize}
It is easy to see that the maximal flow from $s$ to $t_i$ is the number of vertex-disjoint $\mathbf P_{t_i}$-paths, and moreover, if it equals $k$ for all $D_i$, then $\mathcal N$ is extra strongly reachable. Furthermore, it is widely known \cite{CLRS09} that the depth-first search (DFS) algorithm can be used to detect directed cycles in a directed graph, which can be slightly modified \footnote{To see this, whenever the DFS visits a vertex on a $\mathbf{P}_{t_j}$-path from a vertex outside of the $\mathbf{P}_{t_j}$-path, it goes along the $\mathbf{P}_{t_j}$-arc for the next step's visit.} to detect {\bf regular cycles} in a residual network $\mathcal N_j$. To sum up, the equivalence between $1)$ and $4)$ of Theorem~\ref{thm-main}, together with the Ford-Fulkerson algorithm and the DFS algorithm, can be used to efficiently check the stability of a $k$-pair network.
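To illustrate the first part of this procedure, the vertex-splitting construction of $D_i$ can be written down directly; the sketch below (with illustrative names, the max-flow routine itself omitted) returns the arc list of $D_i$, on which every arc carries unit capacity:
\begin{verbatim}
def build_split_graph(V, A, S, t_i):
    # V: vertices, A: arcs as (u, v) pairs, S: sources.
    arcs = [("s", (v, "in")) for v in S]  # super-source arcs
    arcs += [((v, "in"), (v, "out")) for v in V if v != t_i]
    for u, v in A:  # re-wire the original arcs
        arcs.append(((u, "out"),
                     t_i if v == t_i else (v, "in")))
    return arcs
\end{verbatim}
Running any unit-capacity max-flow routine from the super-source to $t_i$ on this arc list then counts the vertex-disjoint $\mathbf P_{t_i}$-paths.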
\section{Stable $3$-pair Networks}
In this section, unless specified otherwise, we assume that $\mathcal{N}$ is a stable $3$-pair network. For the sake of convenience, a $\mathbf P_{t_1}$-path, $\mathbf P_{t_2}$-path or $\mathbf P_{t_3}$-path may be referred to as a red path, green path or blue path, respectively. For each feasible $i$, we may use shorthand notations $r_i$, $g_i$ and $b_i$ for paths $P_{s_i, t_1}$, $P_{s_i, t_2}$ and $P_{s_i, t_3}$, respectively. Similarly, we refer to a $\mathbf P_{t_1}$-crossing, $\mathbf P_{t_2}$-crossing and $\mathbf P_{t_3}$-crossing as a $r$-crossing, $g$-crossing and $b$-crossing, respectively.
\begin{defn}[Longest Common Segment (l.c.s.)]
For any directed paths $p_1, p_2, \dots, p_r$ in $\mathcal{N}$, a longest common segment of $\{p_1, p_2, \dots, p_r\}$, henceforth abbreviated as a $\{p_1, p_2, \dots, p_r\}$-l.c.s., is a segment common to all $p_i$ and any segment properly containing it is not common to all $p_i$.
\end{defn}
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5cm]{fig2.5}~~~~~~~~~~
\caption{Longest common segments by paths $p_1$, $p_2$ and $p_3$}\label{merging}
\end{figure}
For example, in Fig.~\ref{merging}, there are three paths $p_1, p_2, p_3$ represented using distinct colors. It is easy to see that $[v_1,v_2]$ is a $\{p_1,p_2\}$-l.c.s.; $[v_2,v_3,v_4]$ is a $\{p_2,p_3\}$-l.c.s.; vertex $v_2$ is a $\{p_1,p_2,p_3\}$-l.c.s. and also a $\{p_1,p_3\}$-l.c.s. On the other hand, $[v_2,v_3]$ is not a $\{p_2,p_3\}$-l.c.s. since $[v_2, v_3, v_4]$, which properly contains $[v_2,v_3]$, is common to both $p_2$ and $p_3$.
\subsection{$\mathcal{N}_{t_i, t_j}$}
For $i \neq j$, we will use $\mathcal{N}_{t_i, t_j}$ to denote the subnetwork of $\mathcal{N}$ induced on all $\mathbf{P}_{t_i}$-paths and $\mathbf{P}_{t_j}$-paths. In this section, we will characterize the topology of $\mathcal{N}_{t_i, t_j}$, and without loss of generality, we will only consider $\mathcal{N}_{t_1, t_2}$. For any path $p\in\mathbf P_{t_1}\cup\mathbf P_{t_2}=\{r_1, r_2, r_3, g_1, g_2, g_3\}$, let $\ell(p)$ denote the number of $\{r_i, g_j\}$-l.c.s.'s ($1\leq i,j\leq 3$) on $p$ and we order all such l.c.s.'s by $p(1) < p(2) < \dots < p(\ell(p))$, where by $p(i)<p(i+1)$, we mean $head(p(i))<tail(p(i+1))$ according to the topological order of the vertices/arcs of a DAG; and we will use $p(i, i+1)$ to denote the path segment of $p$ from $head(p(i))$ to $tail(p(i+1))$. Note that $r_j(1)=g_j(1)$ since $r_j$ and $g_j$ share the same source $s_j$ for all feasible $j$. We first give a simple yet very useful lemma.
\begin{lem}\label{lem-basic-crossing}
For $p, q \in \{r_1, r_2, r_3, g_1, g_2, g_3\}$ and $l = 1, 2, \dots, \ell(p)-1$, if $p(l) \subseteq q$, then $p(l+1) \not \subseteq q$.
\end{lem}
\begin{proof}
Without loss of generality, we suppose $q\in\{r_1,r_2,r_3\}$. Clearly, if $p(l), p(l+1)\subseteq q$, then $p(l, l+1)$ forms a $r$-crossing, which contradicts the assumption that the network is stable.
\end{proof}
The following lemma is a key tool in this paper.
\begin{lem}\label{lem-crossing}
There exist $1\leq i\neq j\leq 3$ such that $\ell(r_i) = \ell(g_j) =1$.
\end{lem}
\begin{proof}
${\mathbf(1)}$ We first prove that there exists a green path $g_j$ such that $\ell(g_j) = 1$. To this end, note that if for all $1\leq i\leq 3$, $\ell(g_i)=1$, then the desired result is obviously true. Hence, we suppose, without loss of generality, that $\ell(g_1) > 1$. Clearly, by Lemma~\ref{lem-basic-crossing}, $g_1(2) \not \subseteq r_1$. Thus, we further assume in the following that $g_1(2) \subseteq r_2$ (See Fig.~\ref{proof-lemma-1}(a)).
Now, we consider $g_2$. If $\ell(g_2)=1$, then we are done. So we suppose in the following that $\ell(g_2) > 1$. Then, by Lemma~\ref{lem-basic-crossing}, we deduce that $g_2(2) \not \subseteq r_2$. We also have that $g_2(2) \not \subseteq r_1$ since otherwise $g_1(1, 2)$ and $g_2(1, 2)$ form a $r$-crossing. So, we have $g_2(2)\subseteq r_3$ (See Fig.~\ref{proof-lemma-1}(b)).
Now, consider $g_3$ and suppose, by way of contradiction, that $\ell(g_3) > 1$. Then, we have $(1)$ $g_3(2) \not \subseteq r_3$ (by Lemma \ref{lem-basic-crossing}); $(2)$ $g_3(2) \not \subseteq r_2$ since otherwise $g_2(1, 2)$ and $g_3(1, 2)$ form a $r$-crossing; and $(3)$ $g_3(2)\not \subseteq r_1$ since otherwise $g_1(1, 2)$, $g_2(1, 2)$ and $g_3(1, 2)$ form a $r$-crossing. Hence, we obtain a contradiction to the existence of $g_3(2)$ and thus deduce that $\ell(g_3) = 1$, completing the proof of $(1)$.
${\mathbf(2)}$ By considering the red paths in the parallel manner, we can find a red path, say, $r_i$, such that $\ell(r_i) = 1$.
${\mathbf(3)}$ We now prove $i\neq j$ by contradiction. Without loss of generality, we suppose $i=j=1$, i.e., $\ell(r_1) = \ell(g_1) = 1$. Note that if $\ell(g_2) =1$, then we are done. Hence, we suppose in the following that $\ell(g_2) > 1$. Clearly, $g_2(2) \not \subseteq r_2$ by Lemma~\ref{lem-basic-crossing} and $g_2(2)\not \subseteq r_1$ since $\ell(r_1) = 1$. Hence, $g_2(2) \subseteq r_3$.
Now, consider $g_3$. If $\ell(g_3) = 1$, then we are done since $\ell(r_1)=\ell(g_3)=1$. So, we suppose $\ell(g_3) > 1$. Clearly, $g_3(2) \not \subseteq r_3$ by Lemma~\ref{lem-basic-crossing}, and $g_3(2)\not \subseteq r_2$ since otherwise $g_2(1, 2)$ and $g_3(1, 2)$ form a $r$-crossing. Since $\ell(r_1) = 1$, we also have $g_3(2) \not \subseteq r_1$, which implies that $\ell(g_3) = 1$, completing the proof of the lemma.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{fig1.6}~~~~~~~~~~~~~~~~~~~~~~~
\includegraphics[width=4cm]{fig1.7}
\caption{Proof of Lemma \ref{lem-crossing}.}\label{proof-lemma-1}
\end{figure}
A careful examination of the above proof, in particular, Step $(3)$ thereof, reveals that it actually yields a stronger result:
\begin{cor}\label{lem-crossing-2}
If there exists a feasible $i$ such that $\ell(r_i) = 1$ (resp. $\ell(g_i) =1$), then there exists a feasible $j$ such that $j \neq i$ and $\ell(g_j) = 1$ (resp. $\ell(r_j) = 1$).
\end{cor}
\begin{defn}[Non-degenerated $\mathcal{N}_{t_1, t_2}$]
We say $\mathcal{N}_{t_1, t_2}$ is {\em non-degenerated} if there uniquely exist distinct $i,j$ such that $\ell(r_i) = \ell(g_j) = 1$, otherwise {\em degenerated}.
\end{defn}
The following theorem lists all possible topologies of a degenerated $\mathcal{N}_{t_1, t_2}$.
\begin{thm}\label{lem-degenerated-config}
A degenerated $\mathcal{N}_{t_1, t_2}$ is equivalent to $(a)$, $(b)$, or $(c)$ of Fig.~\ref{fig-deg-config} in the sense that the two are isomorphic if each l.c.s. is treated as a single vertex.
\end{thm}
\begin{proof}
We will have to deal with the following two cases:
\begin{description}
\item[$1)$] there exists $i$ such that $\ell(r_i) = \ell(g_i) = 1$. In this case, by Corollary~\ref{lem-crossing-2}, we have the following subcases:
\begin{description}
\item[$1.1)$] There exists $j \neq i$ such that $\ell(g_j) = 1$; $\ell(r_j) = 1$. In this case, it is easy to see that there exists $l$ distinct from both $i$ and $j$ such that $\ell(r_l) = \ell(g_l) =1$ as shown in $(a)$ of Fig.~\ref{fig-deg-config}.
\item[$1.2)$] There exist $j \neq i$ and $l \neq i$ such that $\ell(r_j) = 1$; $\ell(g_l) = 1$. In this case, if $\ell(g_j) = 1$, we have Case $1.1)$; otherwise, we have $g_j(2)=r_l(2)$ as shown in $(b)$ of Fig.~\ref{fig-deg-config}.
\end{description}
\item[$2)$] there exist distinct $i,j,l$ such that $\ell(r_i) = \ell(r_j) = \ell(g_l)=1$. In this case, we have either $r_l(2)=g_j(2)$; $r_l(3)=g_i(2)$ as shown in $(c)$ of Fig.~\ref{fig-deg-config}, or $r_l(2)=g_i(2)$; $r_l(3)=g_j(2)$, resulting in a network equivalent to $(c)$.
\end{description}
The proof is complete by combining all the discussions above.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5cm]{fig3.6}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.7}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.8}~~~~~~~~~~
\caption{Possible cases of a degenerated $\mathcal{N}_{t_1, t_2}$.}\label{fig-deg-config}
\end{figure}
\begin{thm}\label{thm-max-config}
A non-degenerated $\mathcal{N}_{t_1, t_2}$ is equivalent to one of the five networks as shown in Fig.~\ref{Non-dege-Config} in the sense that the two are isomorphic if each l.c.s. is treated as a single vertex.
\end{thm}
\begin{proof}
For a non-degenerated $\mathcal{N}_{t_1, t_2}$, we suppose that $\ell(r_i) = \ell(g_l) = 1$ and consider all the possible l.c.s. of $\{r_j, r_l\}$ and $\{g_i, g_j\}$. Recalling that $r_l(1)=g_l(1)$, for all feasible $l$, we start our argument by considering $g_i(2)$ and $g_j(2)$. By Lemma~\ref{lem-basic-crossing}, we have the following two cases:
\begin{description}
\item[$1)$] $g_i(2) \subseteq r_j$ and $g_j(2) \subseteq r_l$. In this case, by Lemma~\ref{lem-basic-crossing}, we infer that $r_j(2) \not \subseteq g_j$ and hence $r_j(2) \subseteq g_i$, which implies that $g_i(2)=r_j(2)$ (due to the acyclicity of $\mathcal{N}$). We then further consider the following two subcases:
\begin{description}
\item[$1.1)$] $\ell(g_i) = 2$.
\item[$1.2)$] $\ell(g_i) > 2$.
\end{description}
In Case $1.1)$, it is easy to see that $g_j(2)=r_l(2)$, which leads to the following two subcases:
\begin{description}
\item[$1.1.1)$] $\ell(g_j) = 2$. In this case, $\mathcal{N}_{t_1, t_2}$ is equivalent to (1.1.1) of Fig.~\ref{Non-dege-Config}.
\item[$1.1.2)$] $\ell(g_j) \geq 3$. In this case, $g_j(3) \subseteq r_j$. By Lemma~\ref{lem-basic-crossing} and the acyclicity of the network, we have $g_j(3)=r_j(3)$. We claim that $\ell(g_j) = 3$, since otherwise $g_j(4) \subseteq r_l$ and, again by Lemma~\ref{lem-basic-crossing} and the acyclicity of the network, $g_j(4)=r_l(3)$; hence $r_l(2), r_l(3)\subseteq g_j$, which contradicts Lemma~\ref{lem-basic-crossing}. Hence, in this case, $\mathcal{N}_{t_1, t_2}$ is equivalent to (1.1.2) of Fig.~\ref{Non-dege-Config}.
\end{description}
In Case $1.2)$, since $g_i(2) \subseteq r_j$, we have $g_i(3) \subseteq r_l$. Since $g_j(2), g_i(3) \subseteq r_l$, we have to deal with the following two subcases:
\begin{description}
\item[$1.2.1)$] $g_j(2)=r_l(2)$ and $g_i(3)=r_l(3)$. In this case, if $\ell(g_j) \geq 3$, then by Lemma~\ref{lem-basic-crossing}, $g_j(3) \subseteq r_j$, which however would imply $g_j(2, 3)$ and $g_i(2, 3)$ form a $r$-crossing. Hence, we have $\ell(g_j) = 2$. Now, if $\ell(g_i) \geq 4$, then by Lemma~\ref{lem-basic-crossing}, $g_i(4) \subseteq r_j$, which further implies $g_i(4)=r_j(3)$. Hence, $r_j(2), r_j(3) \subseteq g_i$, which contradicts Lemma~\ref{lem-basic-crossing}. Hence $\ell(g_j) =2 , \ell(g_i) =3$, and $\mathcal{N}_{t_1, t_2}$ is equivalent to (1.2.1) of Fig.~\ref{Non-dege-Config}.
\item[$1.2.2)$] $g_i(3)=r_l(2)$ and $g_j(2)=r_l(3)$. In this case, if $\ell(g_i) \geq 4$, then $g_i(4) \subseteq r_j$ and hence $g_i(3, 4)$ and $g_j(1, 2)$ form a $r$-crossing, which contradicts the stability of the network. Thus, $\ell(g_i) =3 $ and we have to consider the following two subcases:
\end{description}
\begin{description}
\item[$1.2.2.1)$] $\ell(g_j) =2$. In this case, we conclude that $\mathcal{N}_{t_1, t_2}$ is equivalent to (1.2.2.1) of Fig.~\ref{Non-dege-Config}.
\item[$1.2.2.2)$] $\ell(g_j) \geq 3$. In this case, by Lemma~\ref{lem-basic-crossing}, we have $g_j(3) \subseteq r_j$, which further implies $g_j(3)=r_j(3)$. Now, if $\ell(g_j) \geq 4$, then by Lemma~\ref{lem-basic-crossing}, $g_j(4) \subseteq r_l$, which further implies $g_j(4)=r_l(4)$ and hence $r_l(3),r_l(4) \subseteq g_j$, which contradicts Lemma~\ref{lem-basic-crossing}. Hence $\ell(g_j) =3$ and we conclude that $\mathcal{N}_{t_1, t_2}$ is equivalent to (1.2.2.2) of Fig.~\ref{Non-dege-Config}.
\end{description}
\item[$2)$] $g_i(2) \subseteq r_l$ and $g_j(2) \subseteq r_l$. In this case, without loss of generality, we assume $g_j(2)=r_l(2)$ and $g_i(2)=r_l(3)$ (since otherwise, we can relabel $s_i$, $s_j$ as $s_j$, $s_i$, respectively). By Lemma~\ref{lem-basic-crossing}, we have $g_i(3) \subseteq r_j$ and $g_j(3) \subseteq r_j$. Then, there are two cases:
\begin{description}
\item[$2.1)$] $g_j(3)=r_j(2)$;
\item[$2.2)$] $g_i(3)=r_j(2)$.
\end{description}
It is easy to see that $2.1)$ is impossible since otherwise $r_j(1), r_j(2) \subseteq g_j$, which contradicts Lemma~\ref{lem-basic-crossing}. Hence, we have $r_j(2) \subseteq g_i$ and $r_l(2) \subseteq g_j$. By switching the colors of the paths and relabeling sources $s_i$, $s_l$ as $s_l$, $s_i$, respectively, we will reach Case $1)$, which has been dealt with before.
\end{description}
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=2.5cm]{fig3.1}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.2}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.3}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.4}~~~~~~~~~~
\includegraphics[width=2.5cm]{fig3.5}~~~~~~~~~~
\caption{Possible cases of a non-degenerated $\mathcal{N}_{t_1, t_2}$.}\label{Non-dege-Config}
\end{figure}
The following corollary follows from an inspection of all the possible cases of a non-degenerated $\mathcal{N}_{t_1, t_2}$ as stated in Theorem~\ref{thm-max-config}.
\begin{cor}\label{lem-0-2}
Suppose that $\mathcal{N}_{t_1, t_2}$ is non-degenerated with $\ell(r_i) = \ell(g_l) =1$. Then, $(1)$ $g_i(2)\neq r_l(2)$; $(2)$ there exist a unique $\{g_i, r_j\}$-l.c.s. and a unique $\{g_j, r_l\}$-l.c.s., where $j$ is distinct from both $i$ and $l$; and $(3)$ one of the following $3$ statements holds:
\begin{description}
\item[$a)$] $g_i(2)=r_j(2)$ and $g_j(2)=r_l(2)$;
\item[$b)$] $g_i(2)=r_j(2)$ and $g_j(2)=r_l(3)$;
\item[$c)$] $g_i(3)=r_j(2)$ and $g_j(2)=r_l(2)$.
\end{description}
\end{cor}
We say $\mathcal{N}_{t_1, t_2}$ is of {\em type $1$} if $a)$ of Corollary~\ref{lem-0-2} holds, and of {\em type $2$} otherwise. It is easy to check that in Fig.~\ref{Non-dege-Config}, $(1.1.1)$, $(1.1.2)$ and $(1.2.1)$ are of type $1$, while $(1.2.2.1)$ and $(1.2.2.2)$ are of type $2$ since they satisfy $b)$. Note that in Fig.~\ref{Non-dege-Config}, if we switch the colors of the paths and the source labels $i$ and $l$, then $(1.1.1)$, $(1.1.2)$ and $(1.2.1)$ still satisfy $a)$, but $(1.2.2.1)$ and $(1.2.2.2)$ satisfy $c)$ of Corollary~\ref{lem-0-2} instead.
\subsection{A Forbidden Structure}
For any $\mathbf{P}$-path $p$ in $\mathcal{N}_{t_i, t_j}$, let $\ell^{i,j}(p)$ be the number of $\{P_{s_l, t_i}, P_{s_{l'}, t_j}\}$-l.c.s's ($1\leq l, l'\leq 3$) on $p$ and we order such l.c.s's as $p^{i,j}(1) <p^{i,j}(2) < \dots < p^{i,j}(\ell^{i,j}(p))$. Here we remark that the notation $\ell^{i, j}(p)$, $p^{i, j}(\cdot)$ subsume $\ell(p)$, $p(\cdot)$ as the latter two are simply $\ell^{1, 2}(p)$, $p^{1, 2}(\cdot)$, respectively. By Lemma~\ref{lem-crossing}, the following sets are non-empty:
$$
m_{i,j}^i:=\{l: \ell^{i,j}(P_{s_l, t_i})=1\}, \quad m_{i,j}^j:=\{l: \ell^{i,j}(P_{s_l, t_j})=1\}.
$$
Here, let us add that the two subscripts of $m_{i,j}^i$ are interchangeable; more precisely, $m_{i,j}^i=m_{j,i}^i$. In the case that $m_{i,j}^i$ contains only one element, e.g., when $\mathcal{N}_{t_i, t_j}$ is degenerated, we may write $m_{i,j}^i=l$ instead of $m_{i, j}^i=\{l\}$ for simplicity. For example, for the network depicted in Fig.~\ref{Minimal Networks}(a), each $\mathcal{N}_{t_i, t_j}$ is non-degenerated and $m_{1,2}^1=1$, $m_{1,2}^2=3$, $m_{1,3}^1=1$, $m_{1,3}^3=3$, $m_{2,3}^2=1$ and $m_{2,3}^3=3$.
\begin{thm}\label{thm-0-3}
There exists no stable network such that $(1)$ $m_{i,j}^i=l$, $m_{i,j}^j=i$; $m_{i,l}^i=l$, $m_{i,l}^l=j$; $m_{j,l}^j=i$, $m_{j,l}^l=j$; $(2)$ there exists a $\{P_{s_j, t_i}, P_{s_l, t_j}, P_{s_i, t_l}\}$-l.c.s, where $i, j, l$ are all distinct from one another.
\end{thm}
\begin{figure}[htbp]
\centering
\includegraphics[width=3cm]{fig4.1}~~~~~~~~~~
\includegraphics[width=3cm]{fig4.2}~~~~~~~~~~
\includegraphics[width=3cm]{fig4.3}~~~~~~~~~~
\includegraphics[width=3cm]{fig4.4}~~~~~~~~~~
\caption{Proof of Case $1)$ of Theorem \ref{thm-0-3}, where we do not show path $b_3$. Note that $r_2^{1,2}(2)=g_3^{1,2}(2)$, $r_2^{1,3}(2)=b_1^{1,3}(2)$, $g_3^{2,3}(2)=b_1^{2,3}(2)$ and there exists a unique $\{r_2, g_3,b_1\}$-l.c.s.}\label{fig-case-1}
\end{figure}
\begin{proof}
Suppose, by way of contradiction, that there exists a stable network $\mathcal N$ such that $(1)$ and $(2)$ hold. Without loss of generality, we assume $i=1$, $j=2$ and $l=3$ and therefore $m_{1,2}^1=m_{1,3}^1=3$, $m_{2,3}^2=m_{1,2}^2=1$, $m_{2,3}^3=m_{1,3}^3=2$, which implies $\ell^{1,2}(r_3)=\ell^{1,3}(r_3)=1$, $\ell^{1,2}(g_1)=\ell^{2,3}(g_1)=1$ and $\ell^{1,3}(b_2)=\ell^{2,3}(b_2)=1$. We consider the following two cases:
\begin{description}
\item[$1)$] all $\mathcal{N}_{t_1, t_2}$, $\mathcal{N}_{t_1, t_3}$ and $\mathcal{N}_{t_2, t_3}$ are of type $1$;
\item[$2)$] any of $\mathcal{N}_{t_1, t_2}$, $\mathcal{N}_{t_1, t_3}$ or $\mathcal{N}_{t_2, t_3}$ is of type $2$.
\end{description}
We first prove the theorem for Case $1)$. Consider $\mathcal{N}_{t_1, t_2}$. Since it is of type $1$, by Corollary \ref{lem-0-2}, we have that $r_1^{1,2}(2)=g_2^{1,2}(2)$, $r_2^{1,2}(2)=g_3^{1,2}(2)$. Note that although there are several types of $\{r_2, g_3,b_1\}$-l.c.s., as shown in $(a)-(d)$ of Fig.~\ref{fig-case-1}, our argument below does not depend on the specific type. In the following, we prove that $g_2^{1,2}(1, 2)$ and $b_1^{1,3}(1, 2)$ (shown as $[A, B]$ and $[C, D]$, respectively, in Fig.~\ref{fig-case-1}) form a $r$-crossing, which will contradict Theorem~\ref{thm-main} and yield the theorem for this case. Towards this goal, we only need to prove the following two statements:
\begin{description}
\item[$(a)$] $C<B$ and $A<D$;
\item[$(b)$] $g_2^{1,2}(1, 2)\cap b_1^{1,3}(1, 2)=\emptyset$.
\end{description}
For $(a)$, it is easy to see that either $B\leq C$ or $D\leq A$ will imply that $b_1^{2,3}(2) \subseteq g_2$, which contradicts the fact that $b_1^{2,3}(2)=g_3^{2,3}(2)$ since $\mathcal{N}_{t_2, t_3}$ is of type 1. Hence $(a)$ holds. For $(b)$, it is easy to see that $g_2^{1,2}(1, 2)\cap b_1^{1,3}(1, 2) \neq \emptyset$ also contradicts the fact that $b_1^{2,3}(2)=g_3^{2,3}(2)$. Hence, $(b)$ holds.
Now, we prove the theorem for Case $2)$. Without loss of generality, we suppose $\mathcal{N}_{t_1, t_2}$ is of type $2$. Then, according to Corollary~\ref{lem-0-2}, there are two possible cases. Specifically, in Fig.~\ref{fig-case-2-1} (resp. Fig.~\ref{fig-case-2-2}), $(a)$ satisfies: $r_1^{1,2}(2)=g_2^{1,2}(2)$ and $r_2^{1,2}(2)=g_3^{1,2}(3)$; and $(b)$ satisfies: $r_1^{1,2}(3)=g_2^{1,2}(2)$ and $r_2^{1,2}(2)=g_3^{1,2}(2)$.
We consider the following two cases:
\begin{description}
\item[$2.1)$] $b_1^{2,3}(2)\subseteq g_3$;
\item[$2.2)$] $b_1^{2,3}(2)\subseteq g_2$.
\end{description}
\begin{figure}[htbp]
\centering
\includegraphics[width=3cm]{fig4.7}~~~~~~~~~~~~~~
\includegraphics[width=3cm]{fig4.8}
\caption{Proof of Case $2.1)$ of Theorem \ref{thm-0-3}, where we do not show path $b_3$.} \label{fig-case-2-1}
\end{figure}
For Case $2.1)$ (see Fig.~\ref{fig-case-2-1}), the proof is similar to that of Case $1)$. In this case, one can show that $g_2^{1,2}(1, 2)$ and $b_1^{1,3}(1, 2)$ (shown as $[A, B]$ and $[C, D]$, respectively, in Fig.~\ref{fig-case-2-1}) form a $r$-crossing, which contradicts the stability of the network.
For Case $2.2)$, since $b_1^{2,3}(2)\neq g_2^{2,3}(2)$ and $b_1^{2,3}(2)\not \subseteq g_3$, we have, by Corollary \ref{lem-0-2}, $b_1^{2,3}(2)=g_2^{2,3}(3)$ and $b_3^{2,3}(2)=g_2^{2,3}(2)$, as shown in Fig.~\ref{fig-case-2-2}. Let $A:=head(r_2^{1,2}(1))$, $B:=tail(r_2^{1,2}(2))$, $C:=head(b_3^{2,3}(1))$, $D:=head(b_3^{2,3}(2))$. Note that each of $B\leq C$, $D\leq A$ and $r_2^{1,2}(1, 2)\cap b_3^{2,3}(1, 2)\neq\emptyset$ would imply that $r_2^{1,3}(2)=b_3^{1,3}(2)$, which is impossible according to $(1)$ of Corollary~\ref{lem-0-2}. Hence, we have
\begin{figure}[htbp]
\centering
\includegraphics[width=4.5cm]{fig4.9}~~~~~~~~~~~~~~
\includegraphics[width=4.5cm]{fig4.10}
\caption{Proof of Case $2.2)$ of Theorem \ref{thm-0-3}.}\label{fig-case-2-2}
\end{figure}
\begin{description}
\item[$(a)$] $C<B$ and $A<D$;
\item[$(b)$] $r_2^{1,2}(1, 2)\cap b_3^{2,3}(1, 2)=\emptyset$.
\end{description}
By definition, $r_2^{1,2}(1, 2)$ and $b_3^{2,3}(1, 2)$ form a $g$-crossing, a contradiction that leads to the theorem for this case.
The proof is then complete by combining all the discussions above.
\end{proof}
\section{Main Result}
In this section, we state and prove our main result. Throughout this section, we again assume that $\mathcal{N}$ is a stable $3$-pair network.
The following seemingly trivial lemma is a key tool for us to determine $\mathcal S_{\mathcal N}$ throughout our treatment.
\begin{lem}\label{lem-basic}
If there are no $\{P_{s_{i_1},t_{j_1}}, P_{s_{i_2},t_{j_2}}\}$-l.c.s. within $\mathcal N$, then $\{(i_1,j_1),(i_2,j_2) \} \nsubseteq s$ for any $s\in \mathcal S_{\mathcal N}$.
\end{lem}
The following lemma is useful.
\begin{lem}\label{lem-0-1}
If there exists $h$ such that $h\in m_{i,j}^{l_1}\cap m_{i,l}^{l_2}\cap m_{j,l}^{l_3}$ for some feasible $l_1, l_2, l_3$ and distinct $i, j, l$, then either $\{(h,i),(h,j)\}\notin\mathcal S_{\mathcal N}$ or $\{(h,i),(h,l)\}\notin\mathcal S_{\mathcal N}$.
\end{lem}
\begin{figure}[htbp]
\centering
\includegraphics[width=4.3cm]{fig1.12}~~~~~~~~~~
\includegraphics[width=4.3cm]{fig1.13}~~~~~~~~~~
\caption{Proof of Lemma \ref{lem-0-1}. In $(a)$, the unique $\{r_1,g_1, b_1\}$-l.c.s. is also the unique $\{r_1,g_1\}$-l.c.s., $\{g_1, b_1\}$-l.c.s. and $\{r_1, b_1\}$-l.c.s.. In $(b)$, the unique $\{r_1,g_1, b_1\}$-l.c.s. is also the unique $\{r_1,g_1\}$-l.c.s. and $\{r_1, b_1\}$-l.c.s.. }\label{Same source}
\end{figure}
\begin{proof}
Without loss of generality, we assume $h=1$. Then, there exist a unique $\{r_1,g_1\}$-l.c.s., a unique $\{r_1, b_1\}$-l.c.s. and a unique $\{g_1, b_1\}$-l.c.s. with a same tail $s_1$. If all of them have a same head, as shown in $(a)$ of Fig.~\ref{Same source}, then $\{(1,1),(1,2),(1,3)\}\in \mathcal S_{\mathcal N}$ and none of $\{(1,1),(1,2)\}$, $\{(1,1),(1,3)\}$, $\{(1,2),(1,3)\}$ belongs to $\mathcal S_{\mathcal N}$; otherwise two of them share a same head, then $\{(1,1),(1,2),(1,3)\}\in \mathcal S_{\mathcal N}$ and at most one of $\{(1,1),(1,2)\}$, $\{(1,1),(1,3)\}$, $\{(1,2),(1,3)\}$ belongs to $\mathcal S_{\mathcal N}$ (for example, $(b)$ shows the case $\{(1,2),(1,3)\}\in \mathcal S_{\mathcal N}$). Hence, the result holds for both cases, which completes the proof.
\end{proof}
We also need the following lemma.
\begin{lem}\label{lem-1-2}
Let
$$
\mathcal C=\left(\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{4} &\frac{1}{4} \\
\frac{1}{4}& \frac{-1}{4} & 0 \\
\frac{1}{4} & 0 & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & \frac{1}{4} & 0 \\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\
0 & \frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & 0 & \frac{1}{4} \\
0 & \frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\
\end{array}
\right)
\right).
$$
Then for any $s\in \mathcal S_3$, $g_s(\mathcal C)>1$ if and only if $\alpha(s)=3$ and $\gamma(s)=0$.
\end{lem}
\begin{proof}
The result can be obtained by considering the following cases:
\begin{description}
\item[$1)$] $\gamma(s)=0$. In this case, it is easy to see that
$$
g_s(\mathcal C)=\frac{1}{4}\sum_{i=1}^3 m_{Ind_s}(i)=\frac{1}{4}\cdot 2\alpha(s)=\left\{
\begin{array}{ll}
\frac{1}{2}, & \hbox{$\alpha(s)=1$;} \\
1, & \hbox{$\alpha(s)=2$;} \\
\frac{3}{2}, & \hbox{$\alpha(s)=3$.}
\end{array}
\right.
$$
\item[$2)$] $\gamma(s)=1$. In this case, it is easy to check that
$$
g_s(\mathcal C)=\left\{
\begin{array}{ll}
\frac{1}{2}+\frac{1}{4}+\frac{1}{4}=1, & \hbox{$\alpha(s)=1$;} \\
\frac{3}{4}+\frac{1}{4}=1, & \hbox{$s=\{(i,i),(i,j)\}$, where $i\neq j$;} \\
\frac{1}{2}, & \hbox{$s=\{(i,i),(k,j)\}$, where $i,j,k$ are distinct;} \\
1, & \hbox{$\alpha(s)=3$.}
\end{array}
\right.
$$
\item[$3)$] $\gamma(s)=2$. In this case, it is easy to check that $g_s(\mathcal C)=1$.
\item[$4)$] $\gamma(s)=3$. In this case, obviously, $g_s(\mathcal C)=0$.
\end{description}
\end{proof}
We are now ready for our main result.
\begin{thm}
Each stable $3$-pair network has a linear routing solution.
\end{thm}
\begin{proof}
For the stable $3$-pair network $\mathcal{N}$, we consider the following two cases:
\begin{description}
\item[$1)$] there exist distinct $i,j,l\in\{1,2,3\}$ such that $m_{i,j}^i\cap\{i,j\}\neq\emptyset$ and $m_{i,l}^i\cap\{i,l\}\neq\emptyset$;
\item[$2)$] for any distinct $i,j,l\in\{1,2,3\}$, either $m_{i,j}^i=l$ or $m_{i,l}^i=j$.
\end{description}
For Case $1)$, we have the following subcases:
\begin{description}
\item[$1.1)$] $i\in m_{i,j}^i\cap m_{i,l}^i$;
\item[$1.2)$] $j\in m_{i,j}^i$ and $l\in m_{i,l}^i$;
\item[$1.3)$] $i\in m_{i,j}^i$ and $l\in m_{i,l}^i$.
\end{description}
In the following, without loss of generality, we assume $i=1$, $j=2$ and $l=3$.
For Case $1.1)$, if $1\in m_{1,2}^1\cap m_{1,3}^1$, then $r_1$ is disjoint from $\mathcal N':=\{g_2, g_3, b_2, b_3\}$, which is a stable (hence extra strongly reachable) $2$-pair network. By~\cite{CH17}, $\mathcal N'$ always has a linear routing solution
$$
\left(\left(
\begin{array}{cc}
\frac{3}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{cc}
\frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{3}{4} \\
\end{array}
\right)
\right).
$$
Hence, $\mathcal{N}$ has the following linear routing solution:
$$
\left(\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \frac{3}{4} & \frac{1}{4} \\
0 & \frac{1}{4} & -\frac{1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & -\frac{1}{4} & \frac{1}{4} \\
0 & \frac{1}{4} & \frac{3}{4} \\
\end{array}
\right)
\right).
$$
For Case $1.2)$, consider all $s\in \mathcal S_{\mathcal N}\subseteq \mathcal S_3$ such that $\alpha(s)=3$. Let $s=\{(l_1,1),(l_2, 2), (l_3, 3)\}$. If $l_1=1$, then obviously $\gamma(s)\neq 0$; if $l_1=2$, then since $2\in m_{1,2}^1$, we have $l_2=2$ and hence $\gamma(s)\neq 0$; and if $l_1=3$, since $3\in m_{1,3}^1$, we have $l_3=3$ and hence $\gamma(s)\neq 0$. Thus, for any $s\in \mathcal S_{\mathcal N}$ such that $\alpha(s)=3$, we have $\gamma(s)\neq 0$.
By Lemma \ref{lem-1-2},
$$
\left(\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{4} &\frac{1}{4} \\
\frac{1}{4}& \frac{-1}{4} & 0 \\
\frac{1}{4} & 0 & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & \frac{1}{4} & 0 \\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\
0 & \frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & 0 & \frac{1}{4} \\
0 & \frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\
\end{array}
\right)
\right).
$$
is a linear solution of $\mathcal N$.
For Case $1.3)$, since $1\in m_{1,2}^1$ and $3\in m_{1,3}^1$, by Lemma \ref{lem-basic}, we have
\begin{equation*}
\begin{split}
\mathcal S_{\mathcal N}\subseteq \mathcal S:
&=\{\{(i,j)\}: 1\leq i,j\leq 3\} \\
&\cup \{\{(i,1),(j,2)\}: i=2,3;j=1,2,3\}\cup\{\{(1,1),(1,2)\}\}\\
&\cup \{\{(i,1),(l,3)\}: i=1,2;l=1,2,3\}\cup\{\{(3,1),(3,3)\}\}\\
&\cup \{\{(j,2),(l,3)\}: j,l=1,2,3\}\\
&\cup \{\{(1,1),(1,2),(l,3)\}:l=1,2,3\}\cup\{\{(2,1),(j,2),(l,3)\}:j,l=1,2,3\}\\
&\cup\{\{(3,1),(j,2),(3,3)\}:j=1,2,3\}.
\end{split}
\end{equation*}
Let
$$
\mathcal C=\left(\left(
\begin{array}{ccc}
\frac{3}{4} & 0 &\frac{1}{4} \\
0& 0 & 0 \\
\frac{1}{4} & 0 & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \frac{3}{4} & \frac{1}{4} \\
0 & \frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & 0 & \frac{1}{4} \\
0 & \frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\
\end{array}
\right)
\right).
$$
Through straightforward computations, one can verify that for any $s\in \mathcal S$, $g_s(\mathcal C)\leq 1$. Hence, by Theorem~\ref{thm-comput}, $\mathcal C$ is a linear solution of $\mathcal N$, which completes the proof of Case $1)$.
For Case $2)$, without loss of generality, we assume $m_{i,j}^i=l$. Note that by Lemma \ref{lem-crossing}, if $m_{i,j}^i=l$, then $m_{i,j}^j\cap\{i,j\}\neq \emptyset$, which further implies $m_{l,j}^j=i$ by the assumption of this case. Hence, by Lemma \ref{lem-crossing}, we have $m_{l,j}^l\cap\{l,j\}\neq\emptyset$, which implies $m_{l,i}^l=j$, again by the assumption of this case, and further implies $m_{l,i}^i\cap\{i,l\}\neq\emptyset$ by Lemma \ref{lem-crossing}. Finally, we have $l\in m_{i,j}^i$, $j\in m_{i,l}^l$ and $i\in m_{j,l}^j$. Consider the following subcases:
\begin{description}
\item[$2.1)$] $j\in m_{i,j}^j$, $i\in m_{i,l}^i$ and $l\in m_{j,l}^l$;
\item[$2.2)$] $i\in m_{i,j}^j$, $i\in m_{i,l}^i$ and $l\in m_{j,l}^l$;
\item[$2.2')$] $j\in m_{i,j}^j$, $i\in m_{i,l}^i$ and $j\in m_{j,l}^l$;
\item[$2.2'')$] $j\in m_{i,j}^j$, $l\in m_{i,l}^i$ and $l\in m_{j,l}^l$;
\item[$2.3)$] $i\in m_{i,j}^j$, $i\in m_{i,l}^i$ and $j\in m_{j,l}^l$;
\item[$2.3')$] $i\in m_{i,j}^j$, $l\in m_{i,l}^i$ and $l\in m_{j,l}^l$;
\item[$2.3'')$] $j\in m_{i,j}^j$, $l\in m_{i,l}^i$ and $j\in m_{j,l}^l$;
\item[$2.4)$] $i\in m_{i,j}^j$, $l\in m_{i,l}^i$ and $j\in m_{j,l}^l$.
\end{description}
It is easy to check that Case $2.2')$ (resp. $2.2'')$) can be obtained from Case $2.2)$ by the relabelling $i\mapsto j$, $j\mapsto l$, $l\mapsto i$ (resp. the relabelling $i\mapsto l$, $j\mapsto i$, $l\mapsto j$), and similarly Case $2.3'')$ (resp. $2.3')$) can be obtained from Case $2.3)$, as checked below. So, in the following, we only need to consider Cases $2.1), 2.2), 2.3), 2.4)$.
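For instance, applying the relabelling $i\mapsto j$, $j\mapsto l$, $l\mapsto i$ to the three conditions of Case $2.2)$, and using the interchangeability of the subscripts of $m$ noted earlier, one checks that
\begin{align*}
i\in m_{i,j}^j \;\longmapsto\; j\in m_{j,l}^l, \qquad
i\in m_{i,l}^i \;\longmapsto\; j\in m_{j,i}^j=m_{i,j}^j, \qquad
l\in m_{j,l}^l \;\longmapsto\; i\in m_{l,i}^i=m_{i,l}^i,
\end{align*}
which together are exactly the conditions of Case $2.2')$.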
For Case $2.1)$, without loss of generality, we assume $i=1$, $j=2$ and $l=3$ and thus $2\in m_{1,2}^2$, $1\in m_{1,3}^1$ and $3\in m_{2,3}^2$. Hence, paths $r_1$, $g_2$ and $b_3$ are pairwise disjoint and the network has a linear routing solution
$$
\left(\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)
\right).
$$
For Case $2.2)$, without loss of generality, we assume $i=1$, $j=2$ and $l=3$ and thus $1\in m_{1,2}^2\cap m_{1,3}^1\cap m_{2,3}^2$; $2\in m_{1,3}^3$ and $3\in m_{1,2}^1\cap m_{2,3}^3$. By Lemma \ref{lem-basic}, we have
\begin{equation*}
\begin{split}
\mathcal S_{\mathcal N}\subseteq \mathcal S: &=\{\{(i,j)\}: 1\leq i,j\leq 3\}\\
&\cup\{\{(i,1),(j,2)\}: i=1,2;j=2,3\}\cup\{\{(1,1),(1,2)\}, \{(3,1),(3,2)\}\}\\
&\cup\{\{(i,1),(l,3)\}: i=2,3;l=1,3\}\cup\{\{(1,1),(1,3)\},\{(2,1),(2,3)\}\}\\
&\cup\{\{(j,2),(l,3)\}: j=2,3;l=1,2\}\cup\{\{(1,2),(1,3)\},\{(3,2),(3,3)\}\}\\
&\cup\{\{(1,1),(j,2),(1,3)\}:j=1,2,3\}\cup\{\{(2,1),(2,2),(l,3)\}:l=1,2\}\\
&\cup\{\{(2,1),(3,2),(l,3)\}: l=1,2,3\}\cup\{\{(3,1),(3,2),(l,3)\}:l=1,3\}.
\end{split}
\end{equation*}
Then, by Lemma \ref{lem-0-1}, we have the following two subcases:
If $\{(1,1),(1,2)\}\notin\mathcal S_{\mathcal N}$, then $\mathcal S_{\mathcal N}\subseteq\mathcal S\setminus\{\{(1,1),(1,2)\}\}$ and by Theorem~\ref{thm-comput}, one can check that
$$
\left(\left(
\begin{array}{ccc}
\frac{8}{14} & \frac{7}{14} &\frac{-1}{14} \\
\frac{3}{14}& \frac{-5}{14} & \frac{2}{14} \\
\frac{3}{14} & \frac{-2}{14} & \frac{-1}{14} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-3}{14} & \frac{7}{14} & \frac{-4}{14} \\
\frac{3}{14} & \frac{7}{14} & \frac{4}{14} \\
0 & 0 & 0 \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-3}{14} & 0 & \frac{3}{14} \\
0 & \frac{-2}{14} & \frac{2}{14} \\
\frac{3}{14} & \frac{2}{14} & \frac{9}{14} \\
\end{array}
\right)
\right)
$$
is a linear solution.
If $\{(1,1),(1,3)\}\notin\mathcal S_{\mathcal N}$, then $\mathcal S_{\mathcal N}\subseteq\mathcal S\setminus\{\{(1,1),(1,3)\}\}\setminus\{\{(1,1),(j,2),(1,3)\}:j=2,3\}$ and by Theorem~\ref{thm-comput}, one can check that
$$
\left(\left(
\begin{array}{ccc}
\frac{6}{12} & \frac{3}{12} &\frac{3}{12} \\
\frac{3}{12}& \frac{-3}{12} & 0\\
\frac{3}{12} & 0 & \frac{-3}{12} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-3}{12} & \frac{4}{12} & \frac{-1}{12} \\
\frac{3}{12} & \frac{7}{12} & \frac{2}{12} \\
0 & \frac{1}{12} & \frac{-1}{12} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-3}{12} & \frac{1}{12} & \frac{2}{12} \\
0 & \frac{-2}{12} & \frac{2}{12} \\
\frac{3}{12} & \frac{1}{12} & \frac{8}{12} \\
\end{array}
\right)
\right)
$$
is a linear solution, which proves the theorem for Case $2.2)$.
For Case $2.3)$, without loss of generality, we assume $i=1$, $j=2$ and $l=3$. It can be readily verified that $1\in m_{1,2}^2\cap m_{1,3}^1\cap m_{2,3}^2$; $2\in m_{1,3}^3\cap m_{2,3}^3$ and $3\in m_{1,2}^1$. By Lemma \ref{lem-basic},
\begin{equation*}
\begin{split}
\mathcal S_{\mathcal N}\subseteq \mathcal S:&=\{\{(i,j)\}: 1\leq i,j\leq 3\}\\
&\cup\{\{(i,1),(j,2)\}: i=1,2;j=2,3\}\cup\{\{(1,1),(1,2)\}, \{(3,1),(3,2)\}\}\\
&\cup\{\{(i,1),(l,3)\}: i=2,3;l=1,3\}\cup\{\{(1,1),(1,3)\},\{(2,1),(2,3)\}\}\\
&\cup\{\{(j,2),(l,3)\}: j=2,3;l=1,3\}\cup\{\{(1,2),(1,3)\},\{(2,2),(2,3)\}\}\\
&\cup\{\{(1,1),(j,2),(1,3)\}:j=1,2,3\}\cup\{\{(2,1),(2,2),(l,3)\}:l=1,2,3\}\\
&\cup\{\{(2,1),(3,2),(l,3)\}: l=1,3\}\cup\{\{(3,1),(3,2),(l,3)\}:l=1,3\}.
\end{split}
\end{equation*}
By Lemma \ref{lem-0-1}, we have the following two subcases:
If $\{(1,1),(1,2)\}\notin\mathcal S_{\mathcal N}$, then $\mathcal S_{\mathcal N}\subseteq\mathcal S\setminus\{\{(1,1),(1,2)\}\}$ and by Theorem~\ref{thm-comput}, one can check that
$$
\left(\left(
\begin{array}{ccc}
\frac{4}{8} & \frac{3}{8} &\frac{1}{8} \\
\frac{2}{8}& \frac{-2}{8} & 0 \\
\frac{2}{8} & \frac{-1}{8} & \frac{-1}{8} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-2}{8} & \frac{3}{8} & \frac{-1}{8} \\
\frac{2}{8} & \frac{4}{8} & \frac{2}{8} \\
0 & \frac{1}{8} & \frac{-1}{8} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-2}{8} & 0 & \frac{2}{8} \\
0 & \frac{-2}{8} & \frac{2}{8} \\
\frac{2}{8} & \frac{2}{8} & \frac{4}{8} \\
\end{array}
\right)
\right)
$$
is a linear solution.
If $\{(1,1),(1,3)\}\notin\mathcal S_{\mathcal N}$, then $\mathcal S_{\mathcal N}\subseteq\mathcal S \setminus\{\{(1,1),(1,3)\}\}\setminus\{\{(1,1),(j,2),(1,3)\}:j=2,3\}$ and by Theorem~\ref{thm-comput}, one can check that
$$
\left(\left(
\begin{array}{ccc}
\frac{4}{6} & \frac{1}{6} &\frac{1}{6} \\
\frac{1}{6}& \frac{-1}{6} & 0\\
\frac{1}{6} & 0 & \frac{-1}{6} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-2}{6} & \frac{3}{6} & \frac{-1}{6} \\
\frac{2}{6} & \frac{2}{6} & \frac{2}{6} \\
0 & \frac{1}{6} & \frac{-1}{6} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
0 & 0 & 0 \\
\frac{-1}{6}& \frac{-1}{6} & \frac{2}{6} \\
\frac{1}{6} & \frac{1}{6} & \frac{4}{6} \\
\end{array}
\right)
\right)
$$
is a linear solution, which proves the theorem for Case $2.3)$.
For Case $2.4)$, if one of $\mathcal N_{t_1, t_2}$, $\mathcal N_{t_1, t_3}$ and $\mathcal N_{t_2, t_3}$ is degenerated, then $\mathcal N$ has a linear solution by the previous cases. So, we assume all of them are non-degenerated and, without loss of generality, $i=1$, $j=2$ and $l=3$. Hence, $m_{1,2}^1=3$, $m_{1,2}^2=1$; $m_{1,3}^1=3$, $m_{1,3}^3=2$; and $m_{2,3}^2=1$, $m_{2,3}^3=2$. In the following, consider $s\in\mathcal S_{\mathcal N}\subseteq \mathcal S_3$ such that $\alpha(s)=3$. Let $s=\{(l_1,1),(l_2,2),(l_3,3)\}$. If $l_1=3$, then since $m_{1,3}^1=3$, we have $l_3=3$; if $l_2=1$, then since $m_{1,2}^2=1$, we have $l_1=1$; if $l_3=2$, then since $m_{2,3}^3=2$, we have $l_2=2$. Hence, $\gamma(s)=0$ only if $s=\{(2,1),(3,2),(1,3)\}$, which, however, is impossible by Theorem~\ref{thm-0-3}.
Hence, by Lemma~\ref{lem-1-2},
$$
\left(\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{4} &\frac{1}{4} \\
\frac{1}{4}& \frac{-1}{4} & 0 \\
\frac{1}{4} & 0 & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & \frac{1}{4} & 0 \\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\
0 & \frac{1}{4} & \frac{-1}{4} \\
\end{array}
\right),
\left(
\begin{array}{ccc}
\frac{-1}{4} & 0 & \frac{1}{4} \\
0 & \frac{-1}{4} & \frac{1}{4} \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\
\end{array}
\right)
\right).
$$
is a linear routing solution of $\mathcal N$, which completes the proof.
\end{proof}
\section{Conclusions and Future Work}
We have settled in this work the Langberg-M\'{e}dard multiple unicast conjecture for stable $3$-pair networks. The conjecture in more general settings, e.g., unstable $3$-pair networks and even more general $k$-pair networks, is currently under investigation.
\section{Derivation of EM Update Rule for Truncated Gaussian Mixture} \label{app:EM-rule}
For this derivation, we use $\phi(\vec{x};\vec{\lambda},\vec{\Sigma})$ to denote the normal pdf with mean vector $\vec{\lambda}$ and covariance matrix $\vec{\Sigma}$. Let $c_1$ denote the component of the mixture corresponding to $+\vec{\lambda}$ and $c_2$ the component corresponding to $-\vec{\lambda}$.
First we have to find the posterior densities in the Expectation step. Let $\vec{\lambda}_t$ be our estimate of the parameter at time $t$.
\begin{align}
\begin{split}
Q_{\vec{\lambda}_t}(c_1)&=\mathbb{P}_{\vec{\lambda}_t,S}(Z=c_1|\vec{X}=\vec{x})\\
&=\frac{\mathbb{P}_{\vec{\lambda}_t,S}(\vec{X}=\vec{x}|Z=c_1)\mathbb{P}(Z=c_1)}{\mathbb{P}_{\vec{\lambda}_t,S}(\vec{X}=\vec{x})}\\
&=\frac{\phi(\vec{x};\vec{\lambda}_t,\vec{\Sigma})}{\phi(\vec{x};\vec{\lambda}_t,\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda}_t,\vec{\Sigma})}\\
Q_{\vec{\lambda}_t}(c_2)&=\frac{\phi(\vec{x};-\vec{\lambda}_t,\vec{\Sigma})}{\phi(\vec{x};\vec{\lambda}_t,\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda}_t,\vec{\Sigma})}
\end{split}
\end{align}
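Since the common Gaussian normalization factor cancels in the ratio above, the posterior takes a logistic form; a short computation (added here for clarity) gives
\begin{align}
Q_{\vec{\lambda}_t}(c_1)=\frac{1}{1+\exp\left(-2\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t\right)}=\frac{1+\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)}{2},
\end{align}
so that $2Q_{\vec{\lambda}_t}(c_1)-1=\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)$, an identity used in the gradient computation below.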
Now the maximization step involves the following:
\begin{align}
\vec{\lambda}_{t+1}&=\argmax_{\vec{\lambda}}\left[Q_{\vec{\lambda}_t}(c_1)\log\frac{\mathbb{P}_{\vec{\lambda},S}(\vec{x},c_1)}{Q_{\vec{\lambda}_t}(c_1)}+Q_{\vec{\lambda}_t}(c_2)\log\frac{\mathbb{P}_{\vec{\lambda},S}(\vec{x},c_2)}{Q_{\vec{\lambda}_t}(c_2)}\right]
\end{align}
Now substituting for $Q_{\vec{\lambda}_t}(c_1)$, $Q_{\vec{\lambda}_t}(c_2)$ and writing $\mathbb{P}_{\vec{\lambda},S}(\vec{x},c_1)=\frac{\phi(\vec{x};\vec{\lambda},\vec{\Sigma})S(\vec{x})}{\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}}$, similarly $\mathbb{P}_{\vec{\lambda},S}(\vec{x},c_2)=\frac{\phi(\vec{x};-\vec{\lambda},\vec{\Sigma})S(\vec{x})}{\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}}$, we get the following:
\begin{align}
\begin{split}
\vec{\lambda}_{t+1}&=\argmax_{\vec{\lambda}}\bigg[Q_{\vec{\lambda}_t}(c_1)\log\frac{\phi(\vec{x};\vec{\lambda},\vec{\Sigma})S(\vec{x})}{\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}}\\
&+Q_{\vec{\lambda}_t}(c_2)\log\frac{\phi(\vec{x};-\vec{\lambda},\vec{\Sigma})S(\vec{x})}{\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}}\bigg]\\
&=\argmax_{\vec{\lambda}}\bigg[Q_{\vec{\lambda}_t}(c_1)\big(\log\phi(\vec{x};\vec{\lambda},\vec{\Sigma})-\log\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}\big)\\
&+Q_{\vec{\lambda}_t}(c_2)\big(\log\phi(\vec{x};-\vec{\lambda},\vec{\Sigma})-\log\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}\big)\bigg]\\
&=\argmax_{\vec{\lambda}}\bigg[Q_{\vec{\lambda}_t}(c_1)\big(\log\phi(\vec{x};\vec{\lambda},\vec{\Sigma})-\log\phi(\vec{x};-\vec{\lambda},\vec{\Sigma})\big)\\
&+\log\phi(\vec{x};-\vec{\lambda},\vec{\Sigma})-\log\int_{\mathbb{R}^d}(\phi(\vec{x};\vec{\lambda},\vec{\Sigma})+\phi(\vec{x};-\vec{\lambda},\vec{\Sigma}))S(\vec{x})d\vec{x}\bigg]\\
\end{split}
\end{align}
Taking the gradient of the objective above, we get the following:
\begin{align}
\begin{split}
\nabla_{\vec{\lambda}}g(\vec{\lambda};\vec{x},\vec{\Sigma})&=\frac{d}{d\vec{\lambda}}\bigg[Q_{\vec{\lambda}_t}(c_1)\left(2\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}\right)-\frac{1}{2}\vec{x}^T\vec{\Sigma}^{-1}\vec{x}-\frac{1}{2}\vec{\lambda}^T\vec{\Sigma}^{-1}\vec{\lambda}-\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}\\
&-\log\int_{\mathbb{R}^d} 2f_{\vec{\lambda}}(\vec{x})S(\vec{x}) d\vec{x}\bigg]\\
&=\left(2Q_{\vec{\lambda}_t}(c_1)-1\right)\vec{x}^T\vec{\Sigma}^{-1}-\vec{\lambda}^{T}\vec{\Sigma}^{-1}-\frac{\int_{\mathbb{R}^d}\frac{d}{d\vec{\lambda}}f_{\vec{\lambda}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d}f_{\vec{\lambda}}(\vec{x})S(\vec{x})d\vec{x}}\\
&=\left(2Q_{\vec{\lambda}_t}(c_1)-1\right)\vec{x}^T\vec{\Sigma}^{-1}-\vec{\lambda}^{T}\vec{\Sigma}^{-1}-\int\frac{d}{d\vec{\lambda}}\log f_{\vec{\lambda}}(\vec{x}) f_{\vec{\lambda},S}(\vec{x}) d\vec{x}\\
&=\left(2Q_{\vec{\lambda}_t}(c_1)-1\right)\vec{x}^T\vec{\Sigma}^{-1}-\vec{\lambda}^{T}\vec{\Sigma}^{-1}-\mathbb{E}_{\lambda,S}\left[-\vec{\lambda}^T\vec{\Sigma}^{-1}+\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\left(2Q_{\vec{\lambda}_t}(c_1)-1\right)\vec{x}^T\vec{\Sigma}^{-1}-\mathbb{E}_{\lambda,S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}-\mathbb{E}_{\lambda,S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]
\end{split}
\end{align}
Thus, in the infinite-sample (population) case we have the following EM update rule:
\begin{align}
\vec{\lambda}_{t+1}=\left\{\vec{\lambda}:h(\vec{\lambda}_t,\vec{\lambda})=\vec{0}\right\},
\end{align}
that is, $\vec{\lambda}_{t+1}$ is chosen so that $h(\vec{\lambda}_t,\vec{\lambda}_{t+1})=\vec{0}$, where
\begin{align}
h(\vec{\lambda}_t,\vec{\lambda}):=\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}\right]-\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right].
\end{align}
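For intuition, the following is a minimal numerical sketch of this population update in one dimension; the particular instance (true mean $\mu=1$, unit variance, truncation interval $S=[-2,3]$, the initialization, and the bracketing interval handed to the root solver) is an illustrative choice of ours and is not taken from the analysis.
\begin{verbatim}
# A numerical sketch of the population EM update in one dimension.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu, sigma2 = 1.0, 1.0            # true mean and known variance
a, b = -2.0, 3.0                 # truncation interval S = [a, b]

def mixture_pdf(x, lam):
    # untruncated symmetric two-component mixture density f_lambda(x)
    z = 1.0 / np.sqrt(2.0 * np.pi * sigma2)
    return 0.5 * z * (np.exp(-(x - lam) ** 2 / (2.0 * sigma2))
                      + np.exp(-(x + lam) ** 2 / (2.0 * sigma2)))

def E(g, lam):
    # truncated expectation E_{lam,S}[g(x)] over S = [a, b]
    num, _ = quad(lambda x: g(x) * mixture_pdf(x, lam), a, b)
    den, _ = quad(lambda x: mixture_pdf(x, lam), a, b)
    return num / den

def em_update(lam_t):
    # lambda_{t+1} solves h(lambda_t, lambda) = 0, i.e.
    # E_{mu,S}[x tanh(x lam_t/sigma2)] = E_{lam,S}[x tanh(x lam/sigma2)]
    target = E(lambda x: x * np.tanh(x * lam_t / sigma2), mu)
    h = lambda lam: E(lambda x: x * np.tanh(x * lam / sigma2), lam) - target
    return brentq(h, 1e-8, 10.0)   # ad-hoc bracket for this instance

lam = 0.3                          # positive initialization
for _ in range(25):
    lam = em_update(lam)
print(lam)                         # approaches the true mean mu = 1.0
\end{verbatim}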
\newpage
\section{Computation of Derivatives in Lemma \ref{lem:derivatives}}\label{app:derivatives}
\begin{proof}[Proof of Lemma \ref{lem:derivatives}]
We first compute the derivative (3) as follows:
\begin{align}
\begin{split}
\nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]&=
\mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\nabla_{\vec{\lambda}}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\frac{1}{\cosh^2(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})}\right]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\left(1-\tanh^2(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right)\right]
\end{split}
\end{align}
Next, derivative (2) is given by:
\begin{align}
\begin{split}
\nabla_{\vec{\mu}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]&=\nabla_{\vec{\mu}}\dfrac{\int_{\mathbb{R}^d} \vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d} f_{\vec{\mu}}(\vec{x}) S(\vec{x})d\vec{x}}\\
&=\dfrac{\int_{\mathbb{R}^d}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\nabla_{\vec{\mu}}f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d} f_{\vec{\mu}}(\vec{x}) S(\vec{x})d\vec{x}}\\
-&\dfrac{\int_{\mathbb{R}^d} \nabla_{\vec{\mu}}f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d}f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\dfrac{\int_{\mathbb{R}^d}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{\Sigma}^{-1}\left(-\vec{\mu}f_{\vec{\mu}}(\vec{x})+\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})f_{\vec{\mu}}(\vec{x})\right)S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d} f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}\\
&-\vec{\Sigma}^{-1}\dfrac{\int_{\mathbb{R}^d} -\vec{\mu}f_{\vec{\mu}}(\vec{x})S(\vec{x}) + \vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d} f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[-\vec{x}\vec{\mu}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})+\vec{x}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]\\
&-\vec{\Sigma}^{-1}\Big[\mathbb{E}_{\vec{\mu},S}\left[-\vec{x}\vec{\mu}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&+\mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]\Big]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]\\
&-\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]^T
\end{split}
\end{align}
Finally, derivative (1) is computed by using the above two derivatives as follows:
\begin{align}
\begin{split}
\nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]&= \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\nabla_{\vec{\lambda}}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&+\dfrac{\int_{\mathbb{R}^d} \vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\nabla_{\vec{\lambda}}f_{\vec{\lambda}}(\vec{x})S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d} f_{\vec{\lambda}}(\vec{x})S(\vec{x})d\vec{x}}\\
&- \dfrac{\int_{\mathbb{R}^d} \nabla_{\vec{\lambda}}f_{\vec{\lambda}}(\vec{x}) S(\vec{x})d\vec{x}}{\int_{\mathbb{R}^d}f_{\vec{\lambda}}(\vec{x})S(\vec{x})d\vec{x}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\vec{x}^T\left(1-\tanh^2(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right)\right]\\
&+\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\\
&-\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^T\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\vec{x}^T\right]\\
&-\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^T
\end{split}
\end{align}
\end{proof}
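The closed forms above are easy to sanity-check numerically. The snippet below compares derivative $(1)$, specialized to one dimension with $\sigma=1$, against a central finite difference; the truncation interval and the point $\lambda$ are arbitrary choices of ours.
\begin{verbatim}
# Finite-difference check of derivative (1) in one dimension (sigma = 1).
import numpy as np
from scipy.integrate import quad

a, b, lam = -1.0, 2.5, 0.8       # illustrative truncation interval and point

def pdf(x, lam):
    z = 1.0 / np.sqrt(2.0 * np.pi)
    return 0.5 * z * (np.exp(-(x - lam) ** 2 / 2)
                      + np.exp(-(x + lam) ** 2 / 2))

def E(g, lam):
    num, _ = quad(lambda x: g(x) * pdf(x, lam), a, b)
    den, _ = quad(lambda x: pdf(x, lam), a, b)
    return num / den

F = lambda l: E(lambda x: x * np.tanh(x * l), l)   # E_{lam,S}[x tanh(x lam)]
eps = 1e-5
fd = (F(lam + eps) - F(lam - eps)) / (2.0 * eps)   # finite difference
closed = E(lambda x: x * x, lam) - E(lambda x: x * np.tanh(x * lam), lam) ** 2
print(fd, closed)                # the two values agree up to numerical error
\end{verbatim}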
\section{Multi-Dimensional Convergence}\label{sec:multi}
In this section we prove the qualitative part of Theorem \ref{thm:multi}. The techniques follow lines similar to the single-dimensional case; we state only the lemmas that deviate technically from those in Section \ref{sec:single}. The two lemmas below provide stability analysis for the fixed points $\vec{-\mu},\vec{0},\vec{\mu}$.
\begin{lemma}[Stability of $\vec{\mu}$ in multi-dimensional]\label{lem:stability2a} It holds that the spectral radius of $$\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^{-1} \Big\vert_{\vec{\lambda}=\vec{\mu}}\cdot\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{x}^T\right]\Big\vert_{\lambda=\vec{\mu}}$$ (i.e., the Jacobian of the update rule of the EM method computed at the true mean $\vec{\mu}$) is less than one.
\end{lemma}
\begin{proof}
We set $A:= \mathbb{E}_{\vec{\mu},S}[\vec{x}\vec{x}^T] - \mathbb{E}_{\vec{\mu},S}[\vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu})]\mathbb{E}_{\vec{\mu},S}[\vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu})]^T$ and $B:= \mathbb{E}_{\vec{\mu},S}[\vec{x}\vec{x}^T] -\mathbb{E}_{\vec{\mu},S}[\vec{x}\vec{x}^T \tanh^2 (\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu})]$. From the proof of Lemma \ref{lem:localdiff} it follows that both $A,B$ are positive definite. Observe that $A-B$ is also positive definite since
\[A - B = Cov(\vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu}), \vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu})),\] and the measure $S$ is positive so the vector $\vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\mu})$ does not live in a lower dimensional subspace. Moreover, we get that $\vec{\Sigma}^{-1/2}(A-B)\vec{\Sigma}^{-1/2} = \vec{\Sigma}^{-1/2}A\vec{\Sigma}^{-1/2}-\vec{\Sigma}^{-1/2}B\vec{\Sigma}^{-1/2}$ is also positive definite.
We set $\tilde{A} := \vec{\Sigma}^{-1/2}A\vec{\Sigma}^{-1/2}$ and $\tilde{B} := \vec{\Sigma}^{-1/2}B\vec{\Sigma}^{-1/2}$ ($\tilde{A}, \tilde{B}$ are also positive definite). Using Lemma \ref{lem:positive} (stated at the end of the section) we conclude that $\tilde{A}^{-1}(\tilde{A} - \tilde{B}) = \vec{I} - \tilde{A}^{-1}\tilde{B}$ has positive eigenvalues. Thus $C := \vec{I}-\vec{\Sigma}^{1/2}A^{-1}B\vec{\Sigma}^{-1/2}$ has positive eigenvalues. We conclude that $\vec{\Sigma}^{1/2}A^{-1}B\vec{\Sigma}^{-1/2}$ has eigenvalues less than one. Since $\vec{\Sigma}^{1/2}A^{-1}B\vec{\Sigma}^{-1/2}$ has the same eigenvalues as $A^{-1}B$, it follows that $A^{-1}B$ has eigenvalues less than one. Finally, from Lemma \ref{lem:positive} it holds that $A^{-1}B$ has positive eigenvalues. The proof follows since $A^{-1}B = \nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^{-1} \Big\vert_{\vec{\lambda}=\vec{\mu}}\cdot\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{x}^T\right]\Big\vert_{\lambda=\vec{\mu}}$.
\end{proof}
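As a quick sanity check of the lemma, in one dimension the spectral radius reduces to the scalar ratio $B/A$, which can be evaluated by quadrature; the instance below (truncation interval and $\mu$ of our choosing) indeed yields a value in $(0,1)$.
\begin{verbatim}
# One-dimensional check that the Jacobian B/A at the true mean lies in (0, 1).
import numpy as np
from scipy.integrate import quad

a, b, mu = -1.0, 2.5, 1.0        # illustrative truncation interval and mean

def pdf(x, lam):
    z = 1.0 / np.sqrt(2.0 * np.pi)
    return 0.5 * z * (np.exp(-(x - lam) ** 2 / 2)
                      + np.exp(-(x + lam) ** 2 / 2))

def E(g):
    num, _ = quad(lambda x: g(x) * pdf(x, mu), a, b)
    den, _ = quad(lambda x: pdf(x, mu), a, b)
    return num / den

A = E(lambda x: x * x) - E(lambda x: x * np.tanh(x * mu)) ** 2
B = E(lambda x: x * x) - E(lambda x: x * x * np.tanh(x * mu) ** 2)
print(B / A)                     # strictly between 0 and 1
\end{verbatim}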
The same proof works for the case of $\vec{-\mu}$. Below we provide the stability analysis for $\vec{0}$.
\begin{lemma}[Stability of $\vec{0}$ in multi-dimensional]\label{lem:stability2b} It holds that the spectral radius of $$\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^{-1} \Big\vert_{\vec{\lambda}=\vec{0}}\cdot\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{x}^T\right]\Big\vert_{\lambda=\vec{0}}$$ (i.e., the Jacobian of the update rule of the EM method computed at $\vec{0}$) is greater than one.
\end{lemma}
\begin{proof} We set $\vec{x} \leftarrow \vec{\Sigma}^{-1/2}\vec{x}, \vec{\mu} \leftarrow \vec{\Sigma}^{-1/2}\vec{\mu}$ and define $S'$ accordingly (transforming $S$). It suffices to prove that the matrix $\mathbb{E}_{\vec{0},S'}^{-1}[\vec{x}\vec{x}^T] \mathbb{E}_{\vec{\mu},S'}[\vec{x}\vec{x}^T]$ has an eigenvalue greater than one (using Lemma \ref{lem:derivatives}). We set $G(t) = \mathbb{E}_{t\vec{\mu},S'}[\vec{x}\vec{x}^T]$ and we get that $$\frac{dG}{dt} = \mathbb{E}_{t\vec{\mu},S'}[\vec{x}\vec{x}^T (\vec{x}^T \vec{\mu}) \tanh(\vec{x}^T t\vec{\mu})].$$
Using the fundamental theorem of calculus we get that
\begin{equation}
G(1)-G(0) = \int_0^1 \mathbb{E}_{t\vec{\mu},S'}[\vec{x}\vec{x}^T (\vec{x}^T \vec{\mu}) \tanh(\vec{x}^T t\vec{\mu})]dt.
\end{equation}
It holds that
\begin{align*}
\vec{\mu}^T G(1) \vec{\mu} = \vec{\mu}^T G(0) \vec{\mu} + \int_0^1 \vec{\mu}^T\mathbb{E}_{t\vec{\mu},S'}[\vec{x}\vec{x}^T (\vec{x}^T \vec{\mu}) \tanh(\vec{x}^T t\vec{\mu})]\vec{\mu} dt
\end{align*}
The proof below is inspired by the proof of the FKG inequality (because $(\vec{x}^T \vec{\mu})^2$ and $\vec{x}^T \vec{\mu}\tanh(\vec{x}^T t\vec{\mu})$ are increasing for $\vec{x}^T \vec{\mu} \geq 0$ and decreasing for $\vec{x}^T \vec{\mu} < 0$ with respect to $\vec{x}^T \vec{\mu}$, and since $t \geq 0$).
Let $\vec{x}_1,\vec{x}_2$ be two independent and identically distributed random variables that follow the distribution $f_{t\vec{\mu},S'}(\vec{x})$. Assume w.l.o.g.\ that $|\vec{x}_1^T \vec{\mu}|>|\vec{x}_2^T\vec{\mu}|$; then it holds that $(\vec{x}_1^T \vec{\mu})^2 > (\vec{x}_2^T \vec{\mu})^2$ and $\vec{x}_1^T \vec{\mu} \tanh(t\vec{x}_1^T \vec{\mu}) > \vec{x}_2^T \vec{\mu} \tanh(t\vec{x}_2^T \vec{\mu})$ (since $t \geq 0$).
Therefore we get that $\left[(\vec{x}_1^T \vec{\mu})^2 - (\vec{x}_2^T \vec{\mu})^2\right]\left[\vec{x}_1^T \vec{\mu} \tanh(t\vec{x}_1^T \vec{\mu})- \vec{x}_2^T \vec{\mu} \tanh(t\vec{x}_2^T \vec{\mu})\right]>0$ (except for a measure zero set where it might be equality). Thus
\[\mathbb{E}_{t\vec{\mu},S'}\left \{ \left[(\vec{x}_1^T \vec{\mu})^2 - (\vec{x}_2^T \vec{\mu})^2\right]\left[\vec{x}_1^T \vec{\mu} \tanh(t\vec{x}_1^T \vec{\mu})- \vec{x}_2^T \vec{\mu} \tanh(t\vec{x}_2^T \vec{\mu})\right]\right\}>0.\]
By using the fact that $\vec{x}_1, \vec{x}_2$ are independent and identically distributed, it holds that
\[\mathbb{E}_{t\vec{\mu},S'} \left[(\vec{x}_1^T \vec{\mu})^3 \tanh(t\vec{x}_1^T \vec{\mu})\right]>\mathbb{E}_{t\vec{\mu},S'} \left[(\vec{x}_1^T \vec{\mu})^2\right]\mathbb{E}_{t\vec{\mu},S'} \left[ (\vec{x}_1^T \vec{\mu})\tanh(t\vec{x}_1^T \vec{\mu})\right].\]
Hence, we conclude that
$\vec{\mu}^T (\mathbb{E}_{\vec{\mu},S'} [\vec{x}\vec{x}^T] - \mathbb{E}_{\vec{0},S'} [\vec{x}\vec{x}^T])\vec{\mu}>0$, i.e., the matrix $(\mathbb{E}_{\vec{\mu},S'} [\vec{x}\vec{x}^T] - \mathbb{E}_{\vec{0},S'} [\vec{x}\vec{x}^T])$ has a positive eigenvalue. Since $\mathbb{E}_{\vec{0},S'}^{-1}[\vec{x}\vec{x}^T]$ is positive definite, it holds that $$\mathbb{E}_{\vec{0},S'}^{-1}[\vec{x}\vec{x}^T](\mathbb{E}_{\vec{\mu},S'}[\vec{x}\vec{x}^T] - \mathbb{E}_{\vec{0},S'}[\vec{x}\vec{x}^T])$$
has a positive eigenvalue, i.e., $\mathbb{E}_{\vec{0},S'}^{-1}[\vec{x}\vec{x}^T]\mathbb{E}_{\vec{\mu},S'}[\vec{x}\vec{x}^T] - I$ has a positive eigenvalue.\\ Hence, $\mathbb{E}_{\vec{0},S'}^{-1}[\vec{x}\vec{x}^T]\mathbb{E}_{\vec{\mu},S'}[\vec{x}\vec{x}^T]$ has an eigenvalue greater than one, and the proof is complete.
\end{proof}
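In one dimension the lemma amounts to $\mathbb{E}_{\mu,S'}[x^2] > \mathbb{E}_{0,S'}[x^2]$, since at $\vec{\lambda}=\vec{0}$ the Jacobian reduces to the ratio of these two second moments; the snippet below checks this for an arbitrary instance of our choosing.
\begin{verbatim}
# One-dimensional check that the Jacobian at 0, namely
# E_mu[x^2] / E_0[x^2], exceeds one.
import numpy as np
from scipy.integrate import quad

a, b, mu = -1.0, 2.5, 1.0        # illustrative truncation interval and mean

def pdf(x, lam):
    z = 1.0 / np.sqrt(2.0 * np.pi)
    return 0.5 * z * (np.exp(-(x - lam) ** 2 / 2)
                      + np.exp(-(x + lam) ** 2 / 2))

def E_x2(lam):
    num, _ = quad(lambda x: x * x * pdf(x, lam), a, b)
    den, _ = quad(lambda x: pdf(x, lam), a, b)
    return num / den

print(E_x2(mu) / E_x2(0.0))      # greater than one
\end{verbatim}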
The following lemma shows that there are three fixed points in the multi-dimensional case when the function $S'(\vec{x}) = S(\vec{\Sigma}^{1/2}\vec{x})$ is rotation invariant.
\begin{lemma}[Rotation invariance implies three fixed points]\label{lem:symrotation} Let $S': \mathbb{R}^d \to \mathbb{R}$ be a rotation invariant function, where $S'(\vec{x}) = S(\vec{\Sigma}^{1/2}\vec{x})$. It holds that the update rule of EM has exactly three fixed points, i.e., $-\vec{\mu},\vec{0},\vec{\mu}$, for any $d>1$.
\end{lemma}
\begin{proof} Consider the transformation $\vec{x} \leftarrow \vec{\Sigma}^{-1/2}\vec{x}, \vec{\mu} \leftarrow \vec{\Sigma}^{-1/2}\vec{\mu}$ and $S' \leftarrow S$. Assume for the sake of contradiction that there exists another fixed point $\vec{\lambda}\neq \vec{0}$ (after the transformation, so that we may take the covariance matrix to be the identity). We may assume without loss of generality that $\vec{\mu}^T\vec{\lambda} \geq 0$, since if $\vec{\lambda}$ is a fixed point of the EM rule, so is $-\vec{\lambda}$.
Let $Q$ be an orthogonal matrix so that $Q \vec{\lambda} = \norm{\vec{\lambda}}_2 \vec{e}_1$ and $Q \vec{\mu} = \mu_1 \vec{e}_1 + \mu_2 \vec{e}_2$ where $\vec{e}_1, \vec{e}_2 ,...,\vec{e}_d$ is the classic orthogonal basis of $\mathbb{R}^d$ ($Q$ rotates the space), with $\mu_1 \geq 0$ (by assumption) and $\mu_2 \geq 0$ (by the choice of $Q$). We will show that the equation \[\mathbb{E}_{\vec{\lambda},S'}\left[\tanh(\vec{x}^T\vec{\lambda})\vec{x}\right]=\mathbb{E}_{\vec{\mu},S'}\left[\tanh(\vec{x}^T\vec{\lambda})\vec{x}\right]\] holds only for $\vec{\lambda} = \vec{\mu}$ (assuming $\vec{\lambda} \neq \vec{\mu}$ we shall reach a contradiction).
Under the transformation $\vec{y} = Q\vec{x}$ (and because $S'$ is rotation invariant, $|\det(Q)| = 1$, $Q^TQ = QQ^T = \vec{I}$) we get that the fixed point $\vec{\lambda}$ of EM satisfies
\begin{equation}\label{eq:transform}
\mathbb{E}_{Q\vec{\lambda},S'}\left[\tanh(\vec{y}^TQ\vec{\lambda})Q^T\vec{y}\right]=\mathbb{E}_{Q\vec{\mu},S'}\left[\tanh(\vec{y}^TQ\vec{\lambda})Q^T\vec{y}\right].
\end{equation}
We multiply by $Q$ both sides in Equation (\ref{eq:transform}) and we conclude that
\begin{equation}\label{eq:rule}
\mathbb{E}_{\norm{\vec{\lambda}}_2 \vec{e}_1,S'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)\vec{y}\right] = \mathbb{E}_{Q\vec{\mu},S'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)\vec{y}\right],
\end{equation}
We consider the following two cases:
\begin{itemize}
\item $\mu_2 = 0$. For the rest of this case, we denote by $\vec{y}_{-1}$ the vector $\vec{y}$ by removing coordinate $y_1$.
We use the notation $f_{\nu} = \dfrac{1}{2}\mathcal{N}(\vec{y};-\vec{\nu},\vec{I})+\frac{1}{2}\mathcal{N}(\vec{y};\vec{\nu},\vec{I})$. By rotation invariance of $S'$, it is true that $S'(y_1,\vec{y}_{-1}) = S'(-y_1, \vec{y}_{-1}) = S'(y_1,-\vec{y}_{-1})$, $S'(-\vec{y}) = S'(\vec{y})$ and because $\tanh(\norm{\vec{\lambda}}_2y_1)y_1$ is an even function we get
\begin{align*}
\mathbb{E}_{Q\vec{\mu},S'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_1\right] &= \frac{ \int_{\mathbb{R}^{d}} \tanh(\norm{\vec{\lambda}}_2y_1)y_1S'(\vec{y}) f_{Q\vec{\mu}}d\vec{y}}{\int_{\mathbb{R}^{d}} S'(\vec{y}) f_{Q\vec{\mu}}d\vec{y}}
\\&= \frac{\int_{\mathbb{R}^{d}} \tanh(\norm{\vec{\lambda}}_2y_1)y_1S'(\vec{y}) \mathcal{N}(\vec{y};Q\vec{\mu},\vec{I})d\vec{y}}{\int_{\mathbb{R}^{d}} S'(\vec{y}) \mathcal{N}(\vec{y};Q\vec{\mu},\vec{I})d\vec{y}}
\\&= \frac{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}\tanh(\norm{\vec{\lambda}}_2y_1)y_1\int_{\mathbb{R}^{d-1}} S'(\vec{y})\mathcal{N}(\vec{y}_{-1};(Q\vec{\mu})_{-1},\vec{I})d\vec{y}_{-1}dy_1}{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}\int_{\mathbb{R}^{d-1}} S'(\vec{y}) \mathcal{N}(\vec{y}_{-1};(Q\vec{\mu})_{-1},\vec{I})d\vec{y}_{-1}dy_1}
\\&= \frac{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}\tanh(\norm{\vec{\lambda}}_2y_1)y_1 r(y_1)dy_1}{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}r(y_1)dy_1}
\end{align*}
where $r(y_1) = \int_{\mathbb{R}^{d-1}} S'(\vec{y}) \mathcal{N}(\vec{y}_{-1};(Q\vec{\mu})_{-1},\vec{I})d\vec{y}_{-1}$ is an even, non-negative function (of positive measure). Since $\tanh(\norm{\vec{\lambda}}_2y_1)y_1 r(y_1)$ is an even function we conclude that
\begin{align*}
\mathbb{E}_{Q\vec{\mu},S'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_1\right] &= \frac{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}\tanh(\norm{\vec{\lambda}}_2y_1)y_1 r(y_1)dy_1}{\int_{\mathbb{R}}e^{-\frac{(y_1 - (Q\vec{\mu})_1)^2}{2}}r(y_1)dy_1}\\&= \frac{\int_{\mathbb{R}}f_{(Q\mu)_1}\tanh(\norm{\vec{\lambda}}_2y_1)y_1 r(y_1)dy_1}{\int_{\mathbb{R}}f_{(Q\mu)_1}r(y_1)dy_1}
\\&= \mathbb{E}_{(Q\mu)_1,r}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_1\right]
\end{align*}
Therefore we conclude that (since $(Q\mu)_1 = \mu_1$)
\[\mathbb{E}_{\norm{\vec{\lambda}}_2,r}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_1\right] = \mathbb{E}_{\mu_1,r}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_1\right],\]
namely we have reduced the problem to the single dimensional case. Hence from Lemma \ref{lem:threefixedpoints}, it must hold that $\mu_1 = \norm{\vec{\lambda}}_2$, i.e., $\vec{\lambda} = \vec{\mu}$ (contradiction).
\item $\mu_2>0$. We use the same machinery as before; by Equation (\ref{eq:rule}), the fact that $S'(y_1,y_2, \vec{y}_{-1,2}) = S'(y_1,y_2, -\vec{y}_{-1,2}) = S'(-y_1,y_2, \vec{y}_{-1,2}) = S'(y_1,-y_2, \vec{y}_{-1,2})$, and moreover the fact that the function $\tanh(\norm{\vec{\lambda}}_2y_1)y_2$ is odd with respect to $y_2$, we conclude \[\mathbb{E}_{(\mu_1,\mu_2),r'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_2\right]=0,\] where $r'(y_1,y_2)$ is a non-negative function (of positive measure) and bounded by 1, with the property that $r'(y_1,y_2) = r'(-y_1,y_2) = r'(y_1,-y_2) = r'(-y_1,-y_2)$ (reducing it to the two-dimensional case).
We will show that \[\mathbb{E}_{(\mu_1,\mu_2) ,r'}\left[\tanh(\norm{\vec{\lambda}}_2y_1)y_2\right]>0\] and reach a contradiction. Take $(z_1,z_2) \in \mathbb{R}^2$ so that $z_1 \cdot z_2 > 0$ and $r'(z_1,z_2) > 0$. It suffices to show that the measure $f_{(\mu_1,\mu_2),r'}$ satisfies $f_{(\mu_1,\mu_2),r'}(-z_1,-z_2) = f_{(\mu_1,\mu_2),r'}(z_1,z_2)>f_{(\mu_1,\mu_2),r'}(-z_1,z_2) = f_{(\mu_1,\mu_2),r'}(z_1,-z_2)$ (by the assumption of positive measure). This reduces to (by the property of $r'$)
\[e^{-\frac{(z_1-\mu_1)^2+(z_2-\mu_2)^2}{2}}+e^{-\frac{(z_1+\mu_1)^2+(z_2+\mu_2)^2}{2}} > e^{-\frac{(-z_1-\mu_1)^2+(z_2-\mu_2)^2}{2}}+e^{-\frac{(-z_1+\mu_1)^2+(z_2+\mu_2)^2}{2}},\]
which after cancelling from both sides the common term $e^{-\frac{z_1^2+z_2^2+\mu_1^2+\mu_2^2}{2}}$ is equivalent to
\[\cosh(z_1\mu_1+z_2\mu_2) > \cosh(z_1\mu_1-z_2\mu_2).\]
In case both $z_1,z_2$ are positive then $|z_1\mu_1+z_2\mu_2| > |z_1\mu_1-z_2\mu_2|$ (since $\mu_1,\mu_2>0$) and the inequality follows. In case both $z_1,z_2$ are negative then again $|-z_1\mu_1-z_2\mu_2| > |-z_1\mu_1+z_2\mu_2|$ (since $\mu_1,\mu_2>0$) and the inequality follows. The proof is complete.
\end{itemize}
\end{proof}
\begin{remark} Let $B_{l,r} = \{\vec{x}:l \leq \norm{\vec{x}}_{\vec{\Sigma}^{-1}} \leq r\}$, where $\norm{\vec{x}}_{\vec{\Sigma}^{-1}}$ denotes the Mahalanobis distance of $\vec{x}$ from $\vec{0}$, i.e., $\sqrt{\vec{x}^T \vec{\Sigma}^{-1}\vec{x}}$ ($\vec{\Sigma}$ is positive definite). We would like to note that the EM update rule has exactly three fixed points for any truncation set that is a union of $B_{l_i,r_i}$ for a sequence of intervals $(l_i,r_i)$ \footnote{Observe that rotation invariant sets $S$ include unions of $B_{l_i,r_i}$ where $\vec{\Sigma}$ is the identity matrix.}.
\end{remark}
\begin{lemma}\label{lem:positive} Let $A,B$ be two positive definite matrices. Then $AB$ has positive eigenvalues.
\end{lemma}
\begin{proof} $AB$ has the same eigenvalues as $A^{1/2}BA^{1/2}$ ($A^{1/2}$ is well defined since $A$ is positive definite). But $A^{1/2}BA^{1/2}$ is also positive definite, hence the claim follows.
\end{proof}
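A short numerical illustration of the lemma, with randomly generated positive definite matrices:
\begin{verbatim}
# The product of two positive definite matrices has positive (real)
# eigenvalues: A @ B is similar to the symmetric matrix A^{1/2} B A^{1/2}.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = M @ M.T + np.eye(4)  # positive definite
N = rng.standard_normal((4, 4)); B = N @ N.T + np.eye(4)  # positive definite
print(np.sort(np.linalg.eigvals(A @ B).real))             # all positive
\end{verbatim}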
\section{Introduction}
\input{intro}
\input{Background}
\input{EM-algo}
\input{Sym-truncation}
\input{Arb-truncation}
\input{more_fixed_points}
\input{rates}
\section{Conclusion}
In this paper, we studied the convergence properties of EM applied to a truncated mixture of two Gaussians. We showed that EM converges almost surely to the true mean in the case $d=1$ (with an exponential rate depending on $\alpha$), and moreover that the same result carries over to $d>1$ under the assumption that the update rule of EM has only three fixed points (if it has more, our results imply local convergence of EM when the initialization is close enough to the true mean). Some interesting questions that arise from this line of work are the following:
\begin{itemize}
\item Finite-sample case: Our setting assumes infinitely many samples. Can we prove a similar convergence result using only finitely many samples? The multi-dimensional case will be challenging because of the existence of more than three fixed points in general.
\item Beyond two components: Characterize the truncation sets $S$ for which EM converges almost surely to the true means for a truncated mixture of $k$ Gaussians, where $k \geq 3$.
\end{itemize}
\section*{Acknowledgements}
{IP would like to thank Arnab Bhattacharya and Costis Daskalakis for fruitful discussions.}
\bibliographystyle{plain}
\section{Background}
\subsection{Truncated Mixture Model}\label{sec:model}
Before describing the model, we establish the notations used in this paper. We use bold font to represent vectors, any generic element in $\mathbb{R}^d$ is represented by $\vec{x}$.
The density of a balanced mixture of two different Gaussians with parameters $\left(\vec{\mu}_1,\vec{\Sigma}_1\right)$ and $\left(\vec{\mu}_2,\vec{\Sigma}_2\right)$, respectively, is given by
$f(\vec{x}) := \dfrac{1}{2}\mathcal{N}(\vec{x};\vec{\mu}_1,\vec{\Sigma}_1)+\frac{1}{2}\mathcal{N}(\vec{x};\vec{\mu}_2,\vec{\Sigma}_2),$ where $\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma}) := \dfrac{\exp(-\frac{1}{2}(\vec{x}-\vec{\mu})^T\vec{\Sigma}^{-1}(\vec{x}-\vec{\mu}))}{(2\pi)^{\frac{d}{2}}\det(\vec{\Sigma})^{1/2}}$. For this work we consider the case when the true covariances are known and both equal to $\vec{\Sigma}$. The means are assumed to be symmetric around the origin, and we take the true parameters of the two components to be $\left(-\vec{\mu},\vec{\Sigma}\right)$ and $\left(\vec{\mu},\vec{\Sigma}\right)$.
Thus, we can write the density as follows:
\begin{align}\label{eq:density_simple}
f_{\vec{\mu}}(\vec{x}) := \dfrac{1}{2}\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+\frac{1}{2}\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma}),
\end{align}
Under this setting we consider a truncation set $S \subset \mathbb{R}^d$, which means that we only have access to the samples that fall in the set $S$, which is of positive measure under the true distribution, i.e., \[\int_{\mathbb{R}^d} (0.5\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+0.5\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma}))\mathbf{1}_{S}d\vec{x}= \alpha > 0,\] where $\mathbf{1}_{S}$ is the indicator function for $S$, i.e., if $\vec{x} \in S$ then $\mathbf{1}_{S}(\vec{x})=1$ and is zero otherwise.
Hence we can write the truncated mixture density as follows:
\begin{align}
f_{\vec{\mu},S}(\vec{x})=
\begin{cases}
\dfrac{0.5\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+0.5\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma})}{\int_{S}0.5\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+0.5\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma})\, d\vec{x}}, & \vec{x} \in S\\
0, & \vec{x} \notin S
\end{cases}
\end{align}
The above definition can be generalized to ``truncation'' \textit{functions} too. Let $S : \mathbb{R}^d \to \mathbb{R}$ be a non-negative, measurable function bounded by one, so that $0< \alpha = \int_{\mathbb{R}^d} S(\vec{x}) f_{\vec{\mu}}(\vec{x})d\vec{x}$ (we say a nonnegative function $S$ is of ``positive measure'' if $S(\vec{x})$ is \textit{not} almost everywhere zero). The truncated mixture is then defined as follows:
\[f_{\vec{\mu},S}(\vec{x})= \dfrac{(0.5\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+0.5\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma}))S(\vec{x})}{\int_{\mathbb{R}^d}(0.5\mathcal{N}(\vec{x};-\vec{\mu},\vec{\Sigma})+0.5\mathcal{N}(\vec{x};\vec{\mu},\vec{\Sigma}))S(\vec{x}) d\vec{x}}
\]
One can think of $S(\vec{x})$ as the probability of actually observing the sample $\vec{x}$.
\begin{remark}[Results proven for truncation functions]
Our main Theorems \ref{thm:single} and \ref{thm:multi}, stated in the introduction, hold in the general setting where we have non-negative truncation functions $S(\vec{x})$ of ``positive measure''. Our proofs are written in this general setting (not only for the case of indicator functions).
\end{remark}
We will use the following shorthand for the truncated mixture density with means $\pm\vec{\mu}$ and truncation set or function $S$:
$f_{\vec{\mu},S}(\vec{x})=\dfrac{f_{\vec{\mu}}(\vec{x}) \mathbf{1}_{S}}{\int_{\mathbb{R}^d} f_{\vec{\mu}}(\vec{x})\mathbf{1}_S d\vec{x}}$ or $f_{\vec{\mu},S}(\vec{x})=\dfrac{f_{\vec{\mu}}(\vec{x}) S(\vec{x})}{\int_{\mathbb{R}^d}f_{\vec{\mu}}(\vec{x})S(\vec{x})d\vec{x}}$. Also, we will denote the expected value with respect to the truncated mixture distribution with parameters $-\vec{\lambda}$ and $\vec{\lambda}$ by $\mathbb{E}_{\vec{\lambda},S}\left[\cdot\right]$.
We conclude the subsection with an important definition that will be needed for the multi-dimensional case.
\begin{definition}[Rotation invariant/Symmetric] We call a ``truncation'' function $S(\vec{x})$ rotation invariant if $S(Q\vec{x}) = S(\vec{x})$ for all orthogonal matrices $Q$. It is clear that every rotation invariant ``truncation'' function is also even (choose $Q = - \vec{I}$, where $\vec{I}$ denotes the identity matrix). A set $S$ is called rotation invariant if $\mathbf{1}_{S}$ is a rotation invariant function, and moreover it is called symmetric if $\mathbf{1}_{S}$ is an even function.
\end{definition}
Next, we derive the EM update rule to estimate the mean under the ``truncated'' setting.
\subsection{EM Algorithm}
The EM algorithm tries to maximize a lower bound of the likelihood at every time step. The population version of the update rule to estimate the mean of a truncated balanced Gaussian mixture with symmetric means $(-\vec{\mu},\vec{\mu})$ and covariance $\vec{\Sigma}$ with truncation set $S$ boils down to:
\begin{align}\label{eq:EM-rule}
h(\vec{\lambda}_t,\vec{\lambda}):=\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}\right]-\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]
\end{align}
such that
\begin{align}\label{eq:dyn}
\vec{\lambda}_{t+1}=\left\{\vec{\lambda}:h(\vec{\lambda}_t,\vec{\lambda})=\vec{0}\right\}.
\end{align}
For the derivation of the update rule please see the supplementary material, Section \ref{app:EM-rule}. We note that the above system, in contrast to the un-truncated setting, involves an \textit{implicit function} in the update rule and hence we cannot obtain a closed form solution for $\vec{\lambda}_{t+1}$.
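Since the update rule is only given implicitly, $\vec{\lambda}_{t+1}$ has to be computed numerically at every step. The following minimal sketch (an illustration in our own notation, not part of the analysis) performs one population-EM step in the single-dimensional case with $\sigma=1$ and a truncation set $S=[s_{\mathrm{lo}},s_{\mathrm{hi}}]$, using numerical integration and one-dimensional root-finding; the function names and the bracketing interval are our own choices. Uniqueness of the root follows from the monotonicity established in Lemma \ref{lem:welldefined} below.
\begin{verbatim}
# Illustrative sketch of one population-EM step for the 1-D truncated
# balanced mixture 0.5 N(-lam,1) + 0.5 N(lam,1) on S = [s_lo, s_hi].
# Names (trunc_moment, em_step) and the bracket (-10, 10) are assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def trunc_moment(lam, s_lo, s_hi, w):
    # E_{lam,S}[ x tanh(x w) ] under the truncated mixture
    dens = lambda x: 0.5*np.exp(-(x-lam)**2/2) + 0.5*np.exp(-(x+lam)**2/2)
    num = quad(lambda x: x*np.tanh(x*w)*dens(x), s_lo, s_hi)[0]
    return num / quad(dens, s_lo, s_hi)[0]

def em_step(lam_t, mu, s_lo, s_hi):
    # The right-hand side E_{mu,S}[x tanh(x lam_t)] is fixed during the step;
    # lam -> E_{lam,S}[x tanh(x lam)] is odd and strictly increasing, so the
    # implicit equation has a unique root, found here by bisection.
    target = trunc_moment(mu, s_lo, s_hi, lam_t)
    return brentq(lambda lam: trunc_moment(lam, s_lo, s_hi, lam) - target,
                  -10.0, 10.0)

lam, mu = 0.3, 1.0
for _ in range(50):
    lam = em_step(lam, mu, -0.5, 2.0)
print(lam)   # approaches mu, in accordance with the single-dimensional theorem
\end{verbatim}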
\begin{comment}
\begin{remark}[Even function $S(\vec{x})$, Symmetric set $S$]
When the truncation function $S$ is even or the truncation set $S$ is symmetric, we can show (Appendix \ref{app:symtrunc-set}) that the EM update rule accommodates a simpler form, given by:
\begin{align}
h(\vec{\lambda}_t,\vec{\lambda}):=\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}\right]-\vec{\lambda}^T\vec{\Sigma}^{-1}
\end{align}
and
\begin{align}
\vec{\lambda}_{t+1}=\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}\right].
\end{align}
\end{remark}
\end{comment}
\begin{remark}[Fixed Points]
We first characterize the fixed points of the dynamical system given in equation (\ref{eq:dyn}).
We can identify three fixed points, namely $\vec{\mu}$, $-\vec{\mu}$ and $\vec{0}$, since
\begin{align}
h(\vec{\mu},\vec{\mu})=\vec{0},\;\;h(-\vec{\mu},-\vec{\mu})=\vec{0} \;\;\textit{and}\;\; h(\vec{0},\vec{0})=\vec{0}
\end{align}
In general there may be more fixed points in the dynamics for an arbitrary truncation function $S(\vec{x})$ or set $S$ (see Section \ref{sec:more}). However, in the single-dimensional case we prove that there are only three fixed points (see Lemma \ref{lem:threefixedpoints}). In the multi-dimensional ($d>1$) case we can also show that if $S$ is rotation invariant, then there are only three fixed points as well (see Lemma \ref{lem:symrotation}).
\end{remark}
\section{Properties of the EM Update Rule}\label{sec:EMalgo}
In this section we analyze the dynamical system arising from the EM update rule. To this end, we first describe the derivative $\nabla_{\vec{\lambda}_t} \vec{\lambda}_{t+1}$ of the dynamical system, by invoking the \textit{Implicit Function Theorem}. Then we present some derivatives that are essential to characterize the dynamics and argue about the stability of the fixed points.
\subsection{Properties of the Dynamics}
We use the \textit{Implicit Function Theorem} to represent the derivative of $\vec{\lambda}_{t+1}$ with respect to $\vec{\lambda}_t$ to analyze the dynamical system around some point say $\vec{\gamma}$.
\begin{flalign}\label{eqn:multi-ratio}
\nabla_{\vec{\lambda}_t}\vec{\lambda}_{t+1} \Big\vert_{\vec{\gamma}}=\nabla_{\vec{\lambda}_{t+1}}\mathbb{E}_{\vec{\lambda}_{t+1},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_{t+1})\right]^{-1} \Big\vert_{\vec{\gamma}}
\cdot\nabla_{\vec{\lambda}_t}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\right]\Big\vert_{\vec{\gamma}}
\end{flalign}
The analogue of the above ratio in the single dimension setting is given by:
\begin{align}\label{eqn:single-ratio}
\frac{d \lambda_{t+1}}{d \lambda_t}\Big\vert_{\lambda_t=\gamma}=\frac{\frac{d}{d \lambda_t}\mathbb{E}_{\mu,S}\left[x\tanh\left(\frac{x\lambda_t}{\sigma^2}\right)\right]\Big\vert_{\lambda_t=\gamma}}{\frac{d}{d \lambda_{t+1}}\mathbb{E}_{\lambda_{t+1},S}\left[x\tanh\left(\frac{x\lambda_{t+1}}{\sigma^2}\right)\right]\Big\vert_{\lambda_{t+1}=\gamma}}
\end{align}
To this end, we state the following lemma which describes certain derivatives of the terms involved in the above ratio to argue about local stability of the fixed points.
\begin{lemma}[Some Useful Derivatives]\label{lem:derivatives}
The following equations hold:
\begin{enumerate}
\item
$\begin{aligned}[t]
\nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\vec{x}^T\right]-\\
&\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^T
\end{aligned}$
\item
$\begin{aligned}[t]
\nabla_{\vec{\mu}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]\\
\hspace{1in}-\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\mu})\right]^T
\end{aligned}$
\item $\begin{aligned}[t]
\nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\frac{1}{\cosh^2(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})}\right]\\
&=\vec{\Sigma}^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\left(1-\tanh^2(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right)\right]
\end{aligned}$
\end{enumerate}
\end{lemma}
\subsection{Two Important Lemmas}
We end the section about the update rule of EM by proving that it is well-defined (in the sense that for every $\vec{\lambda}_t$ there exists a \textit{unique} $\vec{\lambda}_{t+1}$) and moreover, we show that the update rule has a Jacobian that is invertible for all $\vec{\lambda} \in \mathbb{R}^d$.
The first lemma, which is needed to argue about global convergence (in case there are three fixed points) via the center-stable manifold theorem (following the proof that appears in \cite{LPPSJR17}), is the following:
\begin{lemma}[Local Diffeomorphism]\label{lem:localdiff} Let $J$ be the Jacobian of the update rule of the EM dynamics (of size $d \times d$). It holds that $J$ is invertible.
\end{lemma}
\begin{proof}
It suffices to prove that $ \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right], \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]$ have nonzero eigenvalues (and are thus invertible) for all $\vec{\lambda} \in \mathbb{R}^d$; the result then follows by Equation (\ref{eqn:multi-ratio}).
Observe that
\begin{align*}
M:= \mathbb{E}_{\vec{\lambda},S}[\vec{x}\vec{x}^T(1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda}))] &= Cov\left(\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}, \vec{x}\sqrt{1 -\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}\right) \\&+ \mathbb{E}_{\vec{\lambda},S}[\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}]\mathbb{E}_{\vec{\lambda},S}[\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}]^T
\end{align*}
(where $\vec{x}$ follows a truncated mixture with parameters $\vec{\lambda}, \vec{\Sigma}$ and truncation function $S$ of ``positive measure''), which is positive definite (not merely positive semidefinite) since the function $S$ is of ``positive measure'' and $-1<\tanh(y)< 1$ for all $y \in \mathbb{R}$ (otherwise the coordinates $x_1,...,x_d$ would live in a lower dimensional subspace). Moreover, from Lemma \ref{lem:derivatives} it is clear that
\begin{align*}
\vec{\Sigma} \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right] - M &= Cov\left(\vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda}), \vec{x}\tanh(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})\right),
\end{align*}
which is positive definite as well. Hence we conclude that $$\vec{\Sigma} \cdot \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]$$ is positive definite, thus
$\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]$ is invertible.
The proof for \\$\nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]$ is simpler since
\begin{align*}
\vec{\Sigma} \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\mu},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right] &= Cov\left(\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}, \vec{x}\sqrt{1 -\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}\right)
\\&+\mathbb{E}_{\vec{\mu},S}[\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}]\mathbb{E}_{\vec{\mu},S}[\vec{x}\sqrt{1-\tanh^2(\vec{x}^T \vec{\Sigma}^{-1}\vec{\lambda})}]^T,
\end{align*}
(where $\vec{x}$ follows a truncated mixture with parameters $\vec{\mu}, \vec{\Sigma}$ and truncation function $S$ of ``positive measure'').
\end{proof}
The second lemma is about the fact that the update rule of EM is well defined.
\begin{lemma}[Well defined]\label{lem:welldefined} Let $\vec{\lambda}_t \in \mathbb{R}^d$. There exists a unique $\vec{\lambda'}$ such that
\[
\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}\right]=\mathbb{E}_{\vec{\lambda'},S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda'})\right].
\]
\end{lemma}
\begin{proof}
Let $H(\vec{w}) = \vec{\Sigma}\mathbb{E}_{\vec{w},S}\left[\vec{x}^T\vec{\Sigma}^{-1}\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{w})\right]$. In Lemma \ref{lem:localdiff} we showed that $\nabla_{\vec{w}} H(\vec{w})$ is positive definite since $S$ is of positive measure.
Assume there exist $\vec{\lambda}, \vec{\tilde{\lambda}}$ so that $H(\vec{\lambda}) = H(\vec{\tilde{\lambda}})$. Let $\vec{y}_t = t \vec{\lambda} + (1-t) \vec{\tilde{\lambda}}$ for $t \in [0,1]$. Using standard techniques from calculus and the fact that $\nabla_{\vec{w}} H(\vec{w})$ is symmetric we get that
\begin{equation}
(\vec{\lambda} - \vec{\tilde{\lambda}})^T (H(\vec{\lambda}) - H(\vec{\tilde{\lambda}})) \geq \min_{t \in [0,1]}\lambda_{\min} (\nabla_{\vec{w}} H(\vec{w}) \big\vert_{\vec{w} = \vec{y}_t}) \norm{\vec{\lambda} - \vec{\tilde{\lambda}}}^2,
\end{equation}
where $\lambda_{\min}(A)$ denotes the minimum eigenvalue of the matrix $A$. It is clear that the left hand side is zero, and also the matrix $\nabla_{\vec{w}} H(\vec{w}) \big\vert_{\vec{w} = \vec{y}_t}$ has all its eigenvalues positive for every $t \in [0,1]$ (using the fact that $\nabla_{\vec{w}} H(\vec{w})$ is positive definite for all $\vec{w}$, from the proof of Lemma \ref{lem:localdiff} above). We conclude that $\vec{\lambda} = \vec{\tilde{\lambda}}$.
\end{proof}
\begin{remark} In this remark, we would like to argue why there is always a $\vec{\lambda}_{t+1}$ such that \[
\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_t)\vec{x}^T\vec{\Sigma}^{-1}\right]=\mathbb{E}_{\vec{\lambda}_{t+1},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda}_{t+1})\vec{x}^T\vec{\Sigma}^{-1}\right].
\]
The reason is that $\vec{\lambda}_{t+1}$ is chosen to maximize a particular quantity. If the gradient of that quantity had no roots, it would mean that $\norm{\vec{\lambda}_{t+1}}_2$ is infinite. But the quantity is a concave function (in the proof of Lemma \ref{lem:localdiff} we showed that $- \nabla_{\vec{\lambda}} \mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]\vec{\Sigma}^{-1}$, which is the Hessian of the quantity to be maximized, is negative definite), so the maximum is attained in the interior (i.e., $\vec{\lambda}_{t+1}$ cannot have infinite $\ell_2$ norm).
\end{remark}
\subsection{Our results and techniques}
Our results can be summarized in the following two theorems (one for single-dimensional and one for multi-dimensional case):
\begin{theorem}[Single-dimensional case]\label{thm:single} Let $S \subset \mathbb{R}$ be an arbitrary measurable set of positive Lebesgue measure, i.e., $\int_S \frac{1}{2}\left(\mathcal{N}(x;\mu, \sigma^2) + \mathcal{N}(x;-\mu, \sigma^2)\right) dx = \alpha>0$. It holds that under random initialization (under a measure on $\mathbb{R}$ that is absolutely continuous w.r.t.\ the Lebesgue measure), the EM algorithm converges with probability one to either $\mu$ or $-\mu$. Moreover, if the initialization $\lambda_0>0$ is in a neighborhood of $\mu$ then EM converges to $\mu$ with an exponential rate \[|\lambda_{t+1}- \mu| \leq \rho_t |\lambda_t - \mu|,\] with $\rho_t = 1 - \Omega(\alpha^4)\min(\alpha^2\min(\lambda_t,\mu),1)$, which is decreasing in $t$. Analogously, if $\lambda_0 < 0$, it converges to $-\mu$ with the same rate (substitute $\max(\lambda_t,-\mu)$ in the expression).
\end{theorem}
\begin{theorem}[Multi-dimensional case]\label{thm:multi} Let $S \subset \mathbb{R}^d$ with $d>1$ be an arbitrary measurable set of positive Lebesgue measure so that $\int_S \frac{1}{2}\left(\mathcal{N}(\vec{x};\vec{\mu}, \vec{\Sigma}) + \mathcal{N}(\vec{x};\vec{-\mu}, \vec{\Sigma})\right) d\vec{x} = \alpha>0$. It holds that under random initialization (according to a measure on $\mathbb{R}^d$ that is absolutely continuous w.r.t.\ the Lebesgue measure), the EM algorithm converges with probability one to either $\vec{\mu}$ or $\vec{-\mu}$, as long as the EM update rule has only $\vec{-\mu},\vec{0},\vec{\mu}$ as fixed points. Moreover, if the initialization $\vec{\lambda}_0$ is in a neighborhood of $\vec{\mu}$ or $-\vec{\mu}$, it converges with a rate $1 - \Omega(\alpha^6)$\footnote{If $ \alpha \mu \ll 1$ then the global convergence rate we provide in the single-dimensional case coincides with the local convergence rate of the multi-dimensional case.}.
\end{theorem}
\begin{remark} We would first like to note that we prove the two theorems above in a more general setting where we have truncation functions instead of truncation sets (see Section \ref{sec:model} for definitions). Furthermore, in the proof of Theorem \ref{thm:multi}, we show that $\vec{0}$ is a repelling fixed point and moreover $\vec{-\mu},\vec{\mu}$ are attracting, so if the initialization is close enough to $\vec{-\mu}$ or $\vec{\mu}$, then EM actually converges to the true mean. Finally, in Section \ref{sec:multi}, Lemma \ref{lem:symrotation}, we provide sufficient conditions on the truncation set $S$ (or truncation function) so that the EM update rule has exactly three fixed points. The sufficient condition is that $S$ is rotation invariant under an appropriate transformation.
\end{remark}
To put our results in the context of the recent works discussed above, we see this as the first step in rigorously analyzing the aforementioned settings with truncation, which introduces new complexities in the form of \textbf{induced correlations}; as a result our techniques deviate from \cite{DTZ17} and moreover from \cite{DGTZ18} (the latter paper provides mean and covariance estimation of a \textbf{single} high dimensional Gaussian, where the likelihood is convex, unlike the case of mixtures). Our results indicate that even for \textbf{two component} mixtures the \textbf{population} version could have \textbf{spurious} fixed points (unlike the untruncated case), even in the simplest case of truncation (rectangles in 2 dimensions), which makes giving global rates challenging. Moreover, we believe that this could also complement \textbf{experimental} results such as \cite{LS12}, where a heuristic for box-truncated multi-component mixtures is provided without theoretical guarantees of convergence.
\paragraph{Technical Overview}
To prove the qualitative part of our two main theorems, we perform stability analysis on the fixed points $\vec{-\mu},\vec{0},\vec{\mu}$ of the dynamical system that is induced by EM algorithm and moreover show that the update rule is a diffeomorphism. This is a general approach that has appeared in other papers that talk about first-order methods avoiding saddle points (\cite{MPP15}, \cite{LSJR16}, \cite{PP17}, \cite{LPPSJR17}, \cite{DP18} to name a few).
Nevertheless, computing the update rule of EM in closed form for a truncated mixture of two Gaussians is not always possible, because the set/function $S$ is not necessarily symmetric around $\vec{0}$ (even for functions). As a result, the techniques of \cite{DTZ17} (for the population version) do not carry over to our case. Instead, we can find an \textit{implicit} description of the update rule of EM.
Finally, by leveraging the \textit{Implicit Function Theorem}, we are able to compute explicitly the Jacobian of the update rule of EM and perform spectral analysis on it (the Jacobian is computed at the three fixed points $\vec{-\mu},\vec{0}, \vec{\mu}$). We show that the spectral radius of the Jacobian computed at $\vec{-\mu},\vec{\mu}$ is less than one (these fixed points are locally attracting) and moreover the spectral radius of the Jacobian computed at $\vec{0}$ is greater than one (repelling). Along with the fact that the Jacobian is invertible (hence the update rule of EM is a diffeomorphism\footnote{A function is called a diffeomorphism if it is differentiable and a bijection and its inverse is differentiable.}), we can use the center-stable manifold theorem to show that the region of attraction of the fixed point $\vec{0}$ is of measure zero. Due to the fact that EM always converges to stationary points (folklore), our result follows. We note that in the case $d=1$, the fixed points are exactly three ($-\mu,0,\mu$) and we prove this fact using the FKG inequality (see Theorem \ref{thm:FKG}). As far as the case $d>1$ is concerned, if $S$ is rotation invariant (under a proper transformation so that the covariance matrix becomes the identity), we can show that there are exactly three fixed points by reducing to the single dimensional case. Last but not least, for the rates of convergence (the quantitative part of our theorems), we prove a quantitative version of the FKG inequality (see Lemma \ref{lem:FKGrevise}) which might also be of independent interest.
\subsection{Existence of more Fixed Points}\label{sec:more}
The previous section established that there are exactly three fixed points in the case of a rotation invariant truncation set/function in the multi-dimensional setting. In this section, we describe an example in two dimensions where the EM update rule has more than three fixed points.
Consider the following setting where the true parameters are given by
$\vec{\mu}\approx \left[2.534,6.395\right]$, and the truncation set $S$ is a ``rectangle'', i.e., a product of intervals such that $x_1 \in \left[1,2\right]$ and $x_2 \in \left[-3,1.5\right]$.
We show that $\vec{\lambda}=\left[1,0\right]$ is a stationary point, i.e., it satisfies the equations (*) := $\mathbb{E}_{\vec{\lambda},S}\left[\tanh(\vec{x}^T\vec{\lambda})\vec{x}\right]-\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\lambda})\vec{x}\right]=\vec{0}$.
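This can be verified numerically. The following sketch (an illustration with $\vec{\Sigma}=\vec{I}$ and the parameters above; all function names are our own) evaluates both expectations in (*) by two-dimensional quadrature; up to the precision of the quoted $\vec{\mu}$, both coordinates of the difference vanish.
\begin{verbatim}
# Illustrative numerical check that lam = (1, 0) satisfies (*) for
# mu ~ (2.534, 6.395) and the rectangle S = [1,2] x [-3,1.5], Sigma = I.
import numpy as np
from scipy.integrate import dblquad

mu, lam = np.array([2.534, 6.395]), np.array([1.0, 0.0])
x1_lo, x1_hi, x2_lo, x2_hi = 1.0, 2.0, -3.0, 1.5

def mix(x1, x2, m):
    p = np.array([x1, x2])
    return 0.5*np.exp(-0.5*np.sum((p-m)**2)) + 0.5*np.exp(-0.5*np.sum((p+m)**2))

def moment(m, c):
    # c-th coordinate of E_{m,S}[ tanh(x.lam) x ]
    f = lambda x2, x1: np.tanh(x1*lam[0] + x2*lam[1])*(x1, x2)[c]*mix(x1, x2, m)
    num = dblquad(f, x1_lo, x1_hi, lambda _: x2_lo, lambda _: x2_hi)[0]
    den = dblquad(lambda x2, x1: mix(x1, x2, m),
                  x1_lo, x1_hi, lambda _: x2_lo, lambda _: x2_hi)[0]
    return num / den

for c in (0, 1):
    print(moment(lam, c) - moment(mu, c))   # both differences close to zero
\end{verbatim}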
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fixedpoint}
\caption{Surfaces of the equations (*) in the neighborhood of fixed point $A$. The point of view is such that the first equation in (*) is a line passing through point $A$.}
\label{fig:fp_surf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{newvectorfield}
\caption{Vector field of the EM update.}
\label{fig:fp_like}
\end{subfigure}
\caption{The figures provide evidence for the existence of more fixed points. The figure on the left shows the surfaces of the fixed point equation and the figure on the right shows the vector field of the EM update.}
\label{fig:fixed_pts}
\end{figure}
\begin{comment}
To see the fixed point we fix $\lambda=\left[1,0\right]$ and we change $\mu_1,\mu_2$ and thus we plot two surfaces and see where these surfaces intersect the constant ``zero" surface, i.e, we require a solution that simultaneously, satisfies the following equations:
\begin{align}
g^{1}(\mu_1,\mu_2)=0\\
g^{2}(\mu_1,\mu_2)=0
\end{align}
To see this from another point of view, we plot the infinite population log-likelihood surface with $\vec{\mu}=\left[2.534,6.395\right]$.
The log-likelihood can be obtained as follows:
If $L(\vec{\lambda};\vec{x},\vec{\Sigma})$ is the likelihood and we will be deriving the $\log L(\vec{\lambda};\vec{x},\vec{\Sigma})$ , its gradient and its Hessian.Let the true mean parameter be $\mu$.
First for a single sample falling in $S$, we get the following:
\begin{align}
\begin{split}
\mathbb{E}_{\mu,S}[\log L(\vec{\lambda};\vec{x},\vec{\Sigma})]&=\int_S \log f_{\lambda,S}(x)f_{\mu,S}(x) d\vec{x}\\
\end{split}
\end{align}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fixedpoint}
\caption{Surfaces of the fixed point equation. The fixed point is denoted by $A$.}
\label{fig:fp_surf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/fixed_pt_likelihood}
\caption{Likelihood Surface}
\label{fig:fp_like}
\end{subfigure}
\caption{The figures represent the evidence for more fixed points. The figure on the left are the surfaces of the fixed point equation and the figure on the right is the likelihood surface.}
\label{fig:fixed_pts}
\end{figure}
Figure \ref{fig:fp_surf} represents the zoomed in version of the intersection of the surfaces which are represented in equation \ref{eqn:fp_surf} and thus are planar approximations in the neighborhood around the true parameters $\vec{\mu}=\left[2.534,6.395\right]$, which is the solution to the fixed point equation. The point of view is such that equation $g^{1}(\mu_1,\mu_2)=0$ is a line passing through point $A$.
Alternatively, we can view figure \ref{fig:fp_like}, where the likelihood surface fixing is plotted by fixing the aforementioned solution to be the true parameters. The black point indicated is the point $\lambda=\left[1,0\right]$ which we can see from the likelihood surface, is a saddle point with the direction of maxima and minima indicated.
\end{comment}
\section{On the Rates of convergence}\label{sec:rates}
In this section we provide quantitative versions of our results of Sections \ref{sec:single} and \ref{sec:multi}.
\subsection{Single-dimensional}
Assume that at iteration $t$ the estimate $\lambda_t$ of the true mean is positive. It is easy to see that then $\lambda_{t+1}>0$ (the opposite holds if $\lambda_{t}$ is negative), so w.l.o.g.\ suppose that $\lambda_t >0$. Moreover, it holds that $\mathbb{E}_{\lambda,S}\left[x \tanh \left(\frac{\lambda x}{\sigma^2}\right)\right],$ $\mathbb{E}_{\mu,S}\left[x \tanh \left(\frac{\lambda x}{\sigma^2}\right)\right]$ are strictly increasing functions of $\lambda$ (by the argument in the proof of Lemma \ref{lem:localdiff}).
If $\lambda_t < \mu$ then
\[
\mathbb{E}_{\lambda_{t+1},S}\left[x \tanh \left(\frac{\lambda_{t+1} x}{\sigma^2}\right)\right] = \mathbb{E}_{\mu,S}\left[x \tanh \left(\frac{\lambda_t x}{\sigma^2}\right)\right] > \mathbb{E}_{\lambda_t,S}\left[x \tanh \left(\frac{\lambda_t x}{\sigma^2}\right)\right],
\]
hence $\lambda_{t+1}>\lambda_t$ and moreover since
\[
\mathbb{E}_{\mu,S}\left[x \tanh \left(\frac{\mu x}{\sigma^2}\right)\right]> \mathbb{E}_{\mu,S}\left[x \tanh \left(\frac{\lambda_t x}{\sigma^2}\right)\right] = \mathbb{E}_{\lambda_{t+1},S}\left[x \tanh \left(\frac{\lambda_{t+1} x}{\sigma^2}\right)\right],
\]
it is also true that $\lambda_{t+1}< \mu$. Using the same argument we also conclude that if $\lambda_t > \mu$ then $\lambda_{t} > \lambda_{t+1} >\mu$.
We set $G(\lambda, \mu) = \mathbb{E}_{\mu,S}\left[x \tanh \left(\frac{\lambda x}{\sigma^2}\right)\right]$ and we also assume that $\lambda_t < \mu$. By the mean value theorem, we conclude that
\begin{equation}\label{eq:RHS}
G(\lambda_{t},\mu) - G(\lambda_{t},\lambda_{t}) \geq \min_{\xi \in [\lambda_t, \mu] }\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi} (\mu - \lambda_t ).
\end{equation}
Moreover, using mean value theorem again it holds that
\begin{equation}\label{eq:LHS}
G(\lambda_{t+1},\lambda_{t+1}) - G(\lambda_{t},\lambda_{t}) \leq \max_{\xi \in [\lambda_t, \lambda_{t+1}] }\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi} (\lambda_{t+1} - \lambda_{t}).
\end{equation}
Using the fact that $G(\lambda_{t+1},\lambda_{t+1}) = G(\lambda_{t},\mu)$ and Equations (\ref{eq:RHS}), (\ref{eq:LHS}), it follows that
\begin{equation}\label{eq:derivrate}
\max_{\xi \in [\lambda_t, \lambda_{t+1}] }\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi} (\lambda_{t+1} - \lambda_{t}) \geq \min_{\xi \in [\lambda_t, \mu] }\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi} (\mu - \lambda_t ).
\end{equation}
By rearranging (\ref{eq:derivrate}) we conclude that $|\lambda_{t+1} - \mu| \leq \left(1 - \frac{\min_{\xi \in [\lambda_t, \mu] }\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi}}{\max_{\xi \in [\lambda_t, \lambda_{t+1}] }\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi}} \right)|\lambda_{t} - \mu|$.
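For intuition, the two derivative terms in this contraction factor can be evaluated numerically in a concrete instance. The sketch below (our own example with $\sigma=1$, $\mu=1$, $S=[-0.5,2]$; we scan $y$ over the whole interval $[\lambda_t,\mu]$, which can only enlarge the maximum in the denominator) estimates $\partial G/\partial y$ by central differences.
\begin{verbatim}
# Illustrative estimate of the one-step contraction factor
# 1 - min_y dG(lam_t, y)/dy / max_y dG(y, y)/dy.
import numpy as np
from scipy.integrate import quad

mu, lam_t, lo, hi, eps = 1.0, 0.4, -0.5, 2.0, 1e-5

def G(y, lam):
    # returns E_{y,S}[ x tanh(x lam) ]  (sigma = 1)
    dens = lambda x: 0.5*np.exp(-(x-y)**2/2) + 0.5*np.exp(-(x+y)**2/2)
    return quad(lambda x: x*np.tanh(x*lam)*dens(x), lo, hi)[0] / \
           quad(dens, lo, hi)[0]

ys = np.linspace(lam_t, mu, 25)
num = min((G(y+eps, lam_t) - G(y-eps, lam_t))/(2*eps) for y in ys)
den = max((G(y+eps, y+eps) - G(y-eps, y-eps))/(2*eps) for y in ys)
print(1 - num/den)   # should lie strictly between 0 and 1
\end{verbatim}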
In the rest of this section, we will give a lower bound for the numerator term $\min_{\xi \in [\lambda_t, \mu] }\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi}$ and an upper bound for the denominator term $\max_{\xi \in [\lambda_t, \lambda_{t+1}] }\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi}$. As far as the denominator is concerned, the following is true.
\begin{lemma}[Bounding the denominator]\label{lem:bounddenom} It holds that \[\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi} \leq O\left( \frac{1}{\alpha^2}\right).\]
\end{lemma}
\begin{proof}
\begin{equation}\label{eq:deryy}
\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi} = \frac{1}{\sigma^2} \left(\mathbb{E}_{\xi,S}[x^2] - \mathbb{E}_{\xi,S}^2\left[x\tanh \left(\frac{x \xi}{\sigma^2}\right)\right]\right).
\end{equation}
Observe now that for each even function $f(x)$ it holds that \[\mathbb{E}_{\xi,S}[f(x)] = \frac{\int_{\mathbb{R}}f(x) \left(e^{-\frac{(x-\xi)^2}{2\sigma^2}}+e^{-\frac{(x+\xi)^2}{2\sigma^2}}\right)S(x) dx}{\int_{\mathbb{R}}\left(e^{-\frac{(x-\xi)^2}{2\sigma^2}}+e^{-\frac{(x+\xi)^2}{2\sigma^2}}\right)S(x) dx} =
\frac{\int_{\mathbb{R}}f(x) e^{-\frac{(x-\xi)^2}{2\sigma^2}}\frac{S(x)+S(-x)}{2} dx}{\int_{\mathbb{R}}e^{-\frac{(x-\xi)^2}{2\sigma^2}}\frac{S(x)+S(-x)}{2} dx},\]
where the last term is just $\mathbb{E}_{\mathcal{N}(\xi,\sigma^2), \frac{S+S'}{2}}[f(x)]$ where $S'(x) = S(-x)$.
We conclude that (\ref{eq:deryy}) becomes
\begin{align*}
\frac{\partial G(y,y)}{\partial y}\Bigr|_{y=\xi} &= \frac{1}{\sigma^2}\left(\mathbb{E}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}[x^2] - \mathbb{E}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}^2\left[x\tanh \left(\frac{x \xi}{\sigma^2}\right)\right]\right)
\\&=\frac{1}{\sigma^2}\left(\mathbb{E}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}[x^2] - \mathbb{E}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}^2\left[x\right]\right)
\\&= \frac{1}{\sigma^2}\mathbb{V}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}[x].
\end{align*}
Using Proposition 1 (page 14) along with Lemma 7 (page 13, for $B$ small enough) from \cite{DGTZ18} for truncated Gaussians, it follows that
\begin{equation}
\mathbb{V}_{\mathcal{N}(\xi,\sigma^2),\frac{S+S'}{2}}[x] \leq \mathbb{V}_{\mathcal{N}(\xi,\sigma^2)}[x]\times O\left(\frac{1}{\alpha^2}\right) = \sigma^2 \times O\left(\frac{1}{\alpha^2}\right).
\end{equation}
The claim follows.
\end{proof}
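The symmetrization identity used in the proof above is also easy to check numerically; the following sketch (our own illustration with $\sigma=1$ and an arbitrary interval $S$) compares the two sides for an even test function.
\begin{verbatim}
# Illustrative check: for even f, E_{xi,S}[f(x)] under the truncated mixture
# equals E[f(x)] under the single Gaussian N(xi,1) reweighted by (S+S')/2.
import numpy as np
from scipy.integrate import quad

xi, lo, hi = 0.7, 0.2, 1.8                     # S = [0.2, 1.8]
S = lambda x: float(lo <= x <= hi)
Ssym = lambda x: 0.5*(S(x) + S(-x))
f = lambda x: x**2                             # any even test function
mix = lambda x: 0.5*np.exp(-(x-xi)**2/2) + 0.5*np.exp(-(x+xi)**2/2)
gau = lambda x: np.exp(-(x-xi)**2/2)
pts = [-hi, -lo, lo, hi]                       # discontinuities of the weights

lhs = quad(lambda x: f(x)*mix(x)*S(x), -6, 6, points=pts)[0] / \
      quad(lambda x: mix(x)*S(x), -6, 6, points=pts)[0]
rhs = quad(lambda x: f(x)*gau(x)*Ssym(x), -6, 6, points=pts)[0] / \
      quad(lambda x: gau(x)*Ssym(x), -6, 6, points=pts)[0]
print(lhs, rhs)                                # equal up to quadrature error
\end{verbatim}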
To bound the numerator, we first provide with the following quantified version of the FKG correlation inequality.
\begin{lemma}[Quantitative FKG]\label{lem:FKGrevise} Let $f,g : \mathbb{R} \to \mathbb{R}$ be two twice continuously differentiable, even functions that are increasing on $(0,+\infty)$ and decreasing on $(-\infty,0)$. Given a random variable $x$, assume that with probability at least $q$ it holds that $|x| \geq c>0$, and moreover that $|f'(z)| \geq f'(c)$ and $|g'(z)| \geq g'(c)$ for all $|z| \geq c$. It holds that
\begin{equation}
\mathbb{E}[f(x)g(x)] - \mathbb{E}[f(x)]\mathbb{E}[g(x)] \geq 2f'(c) g'(c)\cdot q^2\cdot \mathbb{V}\left[x\Big| \; |x|\geq c\right].
\end{equation}
\end{lemma}
\begin{proof}
Let $y$ be a random variable independent of $x$ and identically distributed to it. Since both $f,g$ are even and increasing in $|x|$, we conclude that $(f(x)-f(y))(g(x)-g(y)) \geq 0$ for all possible realizations.
It holds that
\begin{align*}
\mathbb{E}[(f(x)-f(y))(g(x)-g(y))] &\geq \mathbb{E}\left[(f(x)-f(y))(g(x)-g(y)) \Big|\; |x|,|y| \geq c \right] \cdot \Pr[|x| \geq c] \cdot \Pr[|y| \geq c]\\&\geq q^2 \mathbb{E}\left[(f(x)-f(y))(g(x)-g(y)) \Big|\; |x|,|y| \geq c \right] \\& = q^2 \mathbb{E}\left[|f(x)-f(y)||g(x)-g(y)| \Big| \; |x|,|y| \geq c \right]
\\&\geq q^2 f'(c) \cdot g'(c) \mathbb{E}\left[(x-y)^2\Big| \; |x|,|y| \geq c\right].
\end{align*}
The last term, since $x,y$ are independent and identically distributed, is equal to
\begin{align*}
\mathbb{E}\left[(x-y)^2\Big|\; |x|,|y| \geq c\right] &= 2 \mathbb{E}\left[x^2 \Big|\; |x| \geq c\right] - 2 \mathbb{E}^2\left[x \Big|\; |x|\geq c\right]\\& = 2 \mathbb{V}\left[x \Big|\; |x|\geq c\right].
\end{align*}
Finally, since $x,y$ are independent and identically distributed, $\mathbb{E}[(f(x)-f(y))(g(x)-g(y))] = 2\left(\mathbb{E}[f(x)g(x)]-\mathbb{E}[f(x)]\mathbb{E}[g(x)]\right)$; combining the above inequalities completes the proof.
\end{proof}
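As a quick sanity check of the lemma, one can estimate both sides of the inequality by Monte-Carlo simulation; the sketch below uses our own choice of $f=g$, distribution and threshold $c$ (which satisfies the hypothesis $|f'(z)|\geq f'(c)$ for $|z|\geq c$, since the derivative of $z\tanh z$ is increasing near the origin).
\begin{verbatim}
# Monte-Carlo sanity check of the quantitative FKG bound for
# f(x) = g(x) = x tanh(x) and x ~ N(1.5, 1), with c = 0.3.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.5, 1.0, 2_000_000)
fx = x*np.tanh(x)
c = 0.3
fprime = lambda z: np.tanh(z) + z/np.cosh(z)**2   # derivative of z tanh z
q = np.mean(np.abs(x) >= c)                       # P(|x| >= c)
lhs = np.mean(fx*fx) - np.mean(fx)**2             # Cov(f(x), g(x))
rhs = 2*fprime(c)**2*q**2*np.var(x[np.abs(x) >= c])
print(lhs, rhs)   # lhs should dominate rhs in this configuration
\end{verbatim}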
We are ready to prove a lower bound on the term $\frac{\partial G(\lambda,y)}{\partial y}\Bigr|_{y=\xi}$.
\begin{lemma}[Bounding the numerator]\label{lem:boundnum} It holds that \[\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi} \geq \Omega\left(\alpha^2 \tanh^2\left(\sqrt{2\pi}\lambda_t \alpha\right)\right).\]
\end{lemma}
\begin{proof} We will use Lemma \ref{lem:FKGrevise} for the functions $f(x) = x\tanh \left(\frac{\lambda_t x}{\sigma^2}\right)$ and $g(x) = x\tanh \left(\frac{\xi x}{\sigma^2}\right)$ with $\xi \in [\lambda_t,\mu]$, where $x$ follows $\mathcal{N}(\xi,\sigma^2, \frac{S+S'}{2})$. Moreover we set $q=1/2$; then the threshold $c$ from Lemma \ref{lem:FKGrevise} should satisfy $\int_{-c}^c e^{-\frac{(x-\xi)^2}{2\sigma^2}} dx \leq \frac{\sqrt{2\pi\sigma^2} \alpha}{2}$ for all $\xi \in [\lambda_t, \mu]$. Let $\rho$ be such that $\rho \tanh (\rho) = 1$; it is easy to see that the derivative of $h(x) = x\tanh x$ satisfies $|h'(x)| \geq h'(\infty) = 1$ whenever $|x| \geq \rho$, thus if $c \geq \frac{\sigma^2 \rho}{\lambda_t}$ then both $|f'(x)|, |g'(x)| \geq 1$. We assume that $c < \frac{\sigma^2 \rho}{\lambda_t}$.
First observe that $\mathbb{V}\left[x \Big| |x| \geq c\right]$ is the variance of a truncated Gaussian where the truncated measure is at most $\frac{\alpha}{2} + \alpha$ and at least $\alpha$, hence from Lemma 6 and Lemma 7 (with $B$ small enough) from \cite{DGTZ18} we conclude that $\mathbb{V}\left[x \Big| |x| \geq c\right] \geq \Omega(\alpha^2) \times \sigma^2$.
Finally, since $\int_{-c}^c e^{-\frac{(x-\xi)^2}{2\sigma^2}} dx \leq \int_{-c}^c e^{-\frac{x^2}{2\sigma^2}} dx$, we choose $c$ so that
\[\int_{-c}^c e^{-\frac{x^2}{2\sigma^2}} dx < \frac{2c}{\sigma} = \frac{\alpha \sqrt{2\pi\sigma^2}}{2}.\]
Therefore, using Lemma \ref{lem:FKGrevise} and the fact that $\tanh (x) \geq \frac{x}{\cosh^2 x}$ for positive $x$ and $\xi \geq \lambda_t$, we conclude that
\begin{align*}
\frac{\partial G(\lambda_t,y)}{\partial y}\Bigr|_{y=\xi} &\geq \frac{1}{4\sigma^2} \left(\tanh \left(\frac{\lambda_t c}{\sigma^2}\right) + \frac{\lambda_tc}{\sigma^2\cosh^2\left(\frac{\lambda_tc}{\sigma^2}\right)}\right)\left(\tanh \left(\frac{\xi c}{\sigma^2}\right) + \frac{\xi c}{\sigma^2\cosh^2\left(\frac{\xi c}{\sigma^2}\right)}\right) \mathbb{V}\left[x \Big| |x| \geq c\right]\\&
\geq \Omega\left(\alpha^2 \tanh^2\left(\sqrt{2\pi}\lambda_t \alpha\right)\right).
\end{align*}
\end{proof}
Combining Lemmas \ref{lem:boundnum}, \ref{lem:bounddenom} along with above discussion, the proof of Theorem \ref{thm:single} is complete.
\subsection{Multi-dimensional}
In this section we prove rates of convergence for the multi-dimensional case when $\vec{\lambda}_t$ is sufficiently close to $\vec{\mu}$ or $-\vec{\mu}$. To do this we will prove an upper bound on the spectral radius of the Jacobian $$\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^{-1} \Big\vert_{\vec{\lambda}=\vec{\mu}}\cdot\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{x}^T\right]\Big\vert_{\vec{\lambda}=\vec{\mu}},$$ i.e., a quantitative version of Lemma \ref{lem:stability2a}.
The following lemma holds and the second part of Theorem \ref{thm:multi} is a corollary.
\begin{lemma}[Rates for local convergence]\label{lem:ratemulti} It holds that the spectral radius of $$\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\lambda},S}\left[\vec{x}^T\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\right]^{-1} \Big\vert_{\vec{\lambda}=\vec{\mu}, -\vec{\mu}}\cdot\nabla_{\vec{\lambda}}\mathbb{E}_{\vec{\mu},S}\left[\tanh(\vec{x}^T\vec{\Sigma}^{-1}\vec{\lambda})\vec{x}^T\right]\Big\vert_{\lambda=\vec{\mu}, -\vec{\mu}}$$ (i.e., the Jacobian of the update rule of EM method computed at true mean $\vec{\mu}$) is at most $1 - \Omega(\alpha^6)$.
\end{lemma}
\begin{proof} First we may assume under appropriate transformation ($\vec{x} \leftarrow \vec{\Sigma}^{-1/2}\vec{x}, \vec{\mu} \leftarrow \vec{\Sigma}^{-1/2}\vec{\mu}$) that $\vec{\Sigma} = \vec{I}$. We want to bound the spectral radius of \[\left(\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T\right]-\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T \vec{\mu})\right]\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\tanh(\vec{x}^T \vec{\mu})\right]^T\right)^{-1}\mathbb{E}_{\vec{\mu},S}\left[\vec{x}\vec{x}^T(1-\tanh^2(\vec{x}^T\vec{\mu}))\right].\]
We may assume that $\vec{x}$ follows $\mathcal{N}(\vec{\mu},\vec{I}, \frac{S+S'}{2})$ ($S'(\vec{x}) = S(-\vec{x})$), hence we conclude that $\mathbb{E}[\vec{x}\tanh(\vec{x}^T\vec{\mu})] = \mathbb{E}[\vec{x}]$. Thus the Jacobian becomes
\begin{equation}\label{eq:mati}
\textrm{Cov}(\vec{x},\vec{x})^{-1} \left(\textrm{Cov}(\vec{x},\vec{x}) - \textrm{Cov}(\vec{x}\tanh(\vec{x}^T \vec{\mu}),\vec{x}\tanh(\vec{x}^T \vec{\mu}))\right).
\end{equation}
Using Proposition 1 (page 14) and Lemma 7 (page 13, for small enough $B$) from \cite{DGTZ18} we conclude that $\norm{\textrm{Cov}(\vec{x},\vec{x})}_2$ is at most $O\left(\frac{1}{\alpha^2}\right)$. We choose $c>0$ such that $\Pr[|\vec{x}^T \vec{\mu}| \geq c] \geq \frac{1}{2}$. By the law of total variance we get that
\begin{align*}
\textrm{Cov}(\vec{x}\tanh(\vec{x}^T\vec{\mu}),\vec{x}\tanh(\vec{x}^T\vec{\mu})) &\succeq \Pr[|\vec{x}^T \vec{\mu}| \geq c] \textrm{Cov}((\vec{x}\tanh(\vec{x}^T\vec{\mu}),\vec{x}\tanh(\vec{x}^T\vec{\mu})) \big| \;|\vec{x}^T \vec{\mu}| \geq c)\\& \succeq
\frac{\tanh^2 (c)}{2} \textrm{Cov}((\vec{x},\vec{x}) \big| \; |\vec{x}^T \vec{\mu}| \geq c) \\& \succeq \Omega(\alpha^4) \vec{I},
\end{align*}
where the last relation holds because of Proposition 1 (page 14) and Lemma 7 (page 13, for small enough $B$) from \cite{DGTZ18} and the fact that $\tanh(c)$ is $\Omega(\alpha)$. Hence the minimum eigenvalue of the matrix above is at least $\Omega(\alpha^4)$. Finally, the spectral norm of the matrix (\ref{eq:mati}) is at most one minus the minimum eigenvalue of $\textrm{Cov}(\vec{x}\tanh(\vec{x}^T\vec{\mu}),\vec{x}\tanh(\vec{x}^T\vec{\mu}))$ divided by the maximum eigenvalue of $\textrm{Cov}(\vec{x},\vec{x})$, hence it is at most $1-\Omega(\alpha^6)$.
\end{proof}
\section{Single Dimensional Convergence}\label{sec:single}
In this section we provide a proof for the qualitative part of Theorem \ref{thm:single}. We first mention an important theorem that will be used in the proofs of the qualitative parts of both Theorems \ref{thm:single} and \ref{thm:multi}.
\begin{theorem}[FKG inequality]\cite{FKG71}\label{thm:FKG}
Let $f,g : \mathbb{R} \to \mathbb{R}$ be two monotonically increasing functions and $\nu$ any probability measure on $\mathbb{R}$. It holds that
\begin{equation}
\int_{\mathbb{R}} f(x)g(x) d\nu \geq \int_{\mathbb{R}} f(x) d\nu \int_{\mathbb{R}} g(x) d\nu.
\end{equation}
Moreover, in case there is positive mass (of the product measure $\nu \otimes \nu$) on the case $(f(x_1)-f(x_2))(g(x_1)-g(x_2)) > 0$ (where $x_1,x_2$ are two independent samples from $\nu$) then the above inequality is strict.
\end{theorem}
We first perform stability analysis for the fixed points $-\mu,0,\mu$ which is captured in the next Lemma.
\begin{lemma}[Stability in single-dimensional]\label{lem:stability1} It holds that \[\left|\frac{d \lambda_{t+1}}{d \lambda_t} \Big\vert_{\lambda_t=0}\right| >1 \textrm{ and }\left|\frac{d \lambda_{t+1}}{d \lambda_t} \Big\vert_{\lambda_t = \mu, -\mu}\right|<1.\]
\end{lemma}
\begin{proof} Using Lemma \ref{lem:derivatives} and Equation (\ref{eqn:single-ratio}) it holds that
\begin{equation}\label{eq:singlederivative0}
\frac{d \lambda_{t+1}}{d \lambda_t} \Big\vert_{\lambda_t=0} = \frac{\mathbb{E}_{\mu,S}[x^2]}{\mathbb{E}_{0,S}[x^2]}.
\end{equation}
We consider the function $\mathbb{E}_{t\mu,S}[x^2]$ as a function of the variable $t$. We use the Mean Value theorem and get that there exists $\xi \in (0,1)$ such that
\begin{align}
\mathbb{E}_{\mu,S}[x^2] - \mathbb{E}_{0,S}[x^2] &= \frac{d \mathbb{E}_{t\mu,S}[x^2]}{dt}\big\vert _{t = \xi}
\\& = \frac{\mu}{\sigma^2} \left[ \mathbb{E}_{\xi\mu,S}\left[x^3\tanh\left(\frac{\xi\mu x}{\sigma^2}\right) \right] - \mathbb{E}_{\xi\mu,S}\left[x^2 \right]\mathbb{E}_{\xi\mu,S}\left[x\tanh\left(\frac{\xi\mu x}{\sigma^2}\right) \right] \right]
\end{align}
We shall show that \[\mathbb{E}_{\xi\mu,S}\left[x^3\tanh\left(\frac{\xi\mu x}{\sigma^2}\right) \right] > \mathbb{E}_{\xi\mu,S}\left[x^2 \right]\mathbb{E}_{\xi\mu,S}\left[x\tanh\left(\frac{\xi\mu x}{\sigma^2}\right) \right].\]
The proof below is inspired by the proof of the FKG inequality (because $x^2$ and $x\tanh(\frac{x\xi\mu}{\sigma^2})$ are increasing for $x \geq 0$ and decreasing for $x < 0$).
Let $x_1,x_2$ be two independent and identically distributed random variables that follow the distribution $f_{\xi\mu,S}(x)$. Assume w.l.o.g.\ that $|x_1|>|x_2|$; then it holds that $x_1^2 > x_2^2$ and $x_1 \tanh(\frac{x_1 \xi\mu}{\sigma^2}) > x_2 \tanh(\frac{x_2 \xi\mu}{\sigma^2})$ (since $\mu>0$).
Therefore we get that $(x_1^2 - x_2^2)(x_1 \tanh(\frac{x_1 \xi\mu}{\sigma^2})- x_2 \tanh(\frac{x_2 \xi\mu}{\sigma^2}))>0$ (except for a measure zero set where equality might hold).
We conclude that \[\mathbb{E}_{\xi\mu,S}\left[(x_1^2 - x_2^2)\left(x_1 \tanh\left(\frac{x_1 \xi\mu}{\sigma^2}\right)- x_2 \tanh\left(\frac{x_2 \xi\mu}{\sigma^2}\right)\right)\right]>0.\]
From independence and the fact that $x_1,x_2$ are identically distributed, we get that
\[\mathbb{E}_{\xi\mu,S}\left[x_1^2 x_2 \tanh\left(\frac{x_2 \xi\mu}{\sigma^2}\right)\right] = \mathbb{E}_{\xi\mu,S}\left[x_2^2 x_1 \tanh\left(\frac{x_1 \xi\mu}{\sigma^2}\right)\right] = \mathbb{E}_{\xi\mu,S}[x_1^2] \mathbb{E}_{\xi\mu,S}\left[x_1 \tanh\left(\frac{x_1 \xi\mu}{\sigma^2}\right)\right]\]
and also \[\mathbb{E}_{\xi\mu,S}\left[x_1^3 \tanh\left(\frac{x_1 \xi\mu}{\sigma^2}\right)\right] = \mathbb{E}_{\xi\mu,S}\left[x_2^3 \tanh\left(\frac{x_2 \xi\mu}{\sigma^2}\right)\right].\]
It follows that $\mathbb{E}_{\xi\mu,S}\left[x_1^3\tanh\left(\frac{\xi\mu x_1}{\sigma^2}\right) \right] > \mathbb{E}_{\xi\mu,S}\left[x_1^2 \right]\mathbb{E}_{\xi\mu,S}\left[x_1\tanh\left(\frac{\xi\mu x_1}{\sigma^2}\right) \right]$,
thus
$\mathbb{E}_{\mu,S}[x^2] > \mathbb{E}_{0,S}[x^2]$ (i.e., the ratio (\ref{eq:singlederivative0}) is greater than 1), namely $0$ is a repelling fixed point.
Moreover, using Lemma \ref{lem:derivatives} and Equation (\ref{eqn:single-ratio}) it holds that
\begin{equation}\label{eq:singlederivative1}
\frac{d \lambda_{t+1}}{d \lambda_t} \Big\vert_{\lambda_t=\mu} = \frac{\mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}(1 - \tanh^2(\frac{x\mu}{\sigma^2}))\right]}{\mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}\right] - \mathbb{E}_{\mu,S}^2\left[\frac{x}{\sigma}\tanh(\frac{x\mu}{\sigma^2})\right]}.
\end{equation}
Since $S$ (function or set) has positive measure we get that the variance of the random variable $\frac{x}{\sigma}\tanh(\frac{x\mu}{\sigma^2})$ is positive (otherwise the random variable would be constant with probability one, contradicting the fact that $S$ is of positive measure), thus
\begin{equation}
\mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}\tanh^2\left(\frac{x\mu}{\sigma^2}\right)\right] > \mathbb{E}^2_{\mu,S}\left[\frac{x}{\sigma}\tanh\left(\frac{x\mu}{\sigma^2}\right)\right]
\end{equation}
or equivalently
\begin{equation}\label{eq:help}
\mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}\right] - \mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}\tanh^2\left(\frac{x\mu}{\sigma^2}\right)\right] < \mathbb{E}_{\mu,S}\left[\frac{x^2}{\sigma^2}\right] - \mathbb{E}^2_{\mu,S}\left[\frac{x}{\sigma}\tanh\left(\frac{x\mu}{\sigma^2}\right)\right].
\end{equation}
By inequality (\ref{eq:help}) we conclude that the ratio (\ref{eq:singlederivative1}) is less than one, hence the fixed point $\mu$ is attracting.
The same proof as in the case for $\mu$ works for the fixed point $-\mu$.
\end{proof}
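The two ratios computed in the proof can also be evaluated numerically; the sketch below (with our own example values $\sigma=1$, $\mu=1$, $S=[-0.5,2]$) evaluates (\ref{eq:singlederivative0}) and (\ref{eq:singlederivative1}) by quadrature.
\begin{verbatim}
# Illustrative evaluation of the derivative ratios at the fixed points
# 0 (repelling) and mu (attracting) for sigma = 1, mu = 1, S = [-0.5, 2].
import numpy as np
from scipy.integrate import quad

mu, lo, hi = 1.0, -0.5, 2.0

def E(lam, fn):
    dens = lambda x: 0.5*np.exp(-(x-lam)**2/2) + 0.5*np.exp(-(x+lam)**2/2)
    return quad(lambda x: fn(x)*dens(x), lo, hi)[0] / quad(dens, lo, hi)[0]

ratio_at_0 = E(mu, lambda x: x**2) / E(0.0, lambda x: x**2)
num = E(mu, lambda x: x**2*(1 - np.tanh(x*mu)**2))
den = E(mu, lambda x: x**2) - E(mu, lambda x: x*np.tanh(x*mu))**2
print(ratio_at_0, num/den)   # expect ratio_at_0 > 1 and num/den < 1
\end{verbatim}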
Next, we provide a proof that for the case $d=1$ (single-dimensional), the update rule of EM has exactly three fixed points ($0,\mu,-\mu$).
\begin{lemma}[Only 3 fixed points for single-dimensional]\label{lem:threefixedpoints}
We consider the update rule of the EM method for the single dimensional case (\ref{eq:EM-rule}). The update rule has only $-\mu,0,\mu$ as fixed points.
\end{lemma}
\begin{proof}
Let $\mu>\lambda>0$ and assume $\lambda$ is a fixed point of the update rule of EM (\ref{eq:EM-rule}). Set $G(\mu,\lambda,S) = \mathbb{E}_{\mu,S}[\frac{x}{\sigma^2}\tanh(\frac{x\lambda}{\sigma^2})]$. It holds that $G(\mu,\lambda,S) = G(\lambda,\lambda,S)$ (by definition of $\lambda$).
It follows from the Mean Value theorem that there exists $\xi \in (\lambda, \mu)$ so that (using also Lemma \ref{lem:derivatives})
\begin{align*}
\frac{G(\mu,\lambda,S) - G(\lambda,\lambda,S)}{\mu - \lambda} = &\mathbb{E}_{\xi,S}\left[\frac{x^2}{\sigma^2}\tanh\left(\frac{x\xi}{\sigma^2}\right)\tanh\left(\frac{x\lambda}{\sigma^2}\right)\right]-\\
&\mathbb{E}_{\xi,S}\left[\frac{x}{\sigma}\tanh\left(\frac{x\xi}{\sigma^2}\right)\right]\cdot \mathbb{E}_{\xi,S}\left[\frac{x}{\sigma}\tanh\left(\frac{x\lambda}{\sigma^2}\right)\right].
\end{align*}
We get that $\frac{x}{\sigma}\tanh(\frac{x\lambda}{\sigma^2}), \frac{x}{\sigma}\tanh(\frac{x\xi}{\sigma^2})$ are increasing functions for $x\geq 0$ and decreasing for $x<0$, so, inspired by the proof of the FKG inequality (Theorem \ref{thm:FKG}), we shall show that $G(\mu,\lambda,S) - G(\lambda,\lambda,S)>0$ (and reach a contradiction).
Let $x_1,x_2$ be two independent and identically distributed random variables that follow the distribution $f_{\xi,S}(x)$. Assume w.l.o.g.\ that $|x_1|>|x_2|$; then it holds that $\frac{x_1}{\sigma} \tanh(\frac{x_1\lambda}{\sigma^2}) > \frac{x_2}{\sigma} \tanh(\frac{x_2\lambda}{\sigma^2})$ and $\frac{x_1}{\sigma} \tanh(\frac{x_1\xi}{\sigma^2}) > \frac{x_2}{\sigma} \tanh(\frac{x_2\xi}{\sigma^2})$.
Therefore we get that $$\left(\frac{x_1}{\sigma} \tanh\left(\frac{x_1\lambda}{\sigma^2}\right)- \frac{x_2}{\sigma} \tanh\left(\frac{x_2\lambda}{\sigma^2}\right)\right)\cdot \left(\frac{x_1}{\sigma} \tanh\left(\frac{x_1\xi}{\sigma^2}\right)- \frac{x_2}{\sigma} \tanh\left(\frac{x_2\xi}{\sigma^2}\right)\right)>0$$ (except for a measure zero set where equality might hold).
We conclude that $$\mathbb{E}_{\xi,S}\left[\left(\frac{x_1}{\sigma} \tanh\left(\frac{x_1\lambda}{\sigma^2}\right)- \frac{x_2}{\sigma} \tanh\left(\frac{x_2\lambda}{\sigma^2}\right)\right)\cdot \left(\frac{x_1}{\sigma} \tanh\left(\frac{x_1\xi}{\sigma^2}\right)- \frac{x_2}{\sigma} \tanh\left(\frac{x_2\xi}{\sigma^2}\right)\right)\right]>0.$$
From independence and the fact that $x_1,x_2$ are identically distributed, we get that
\[\mathbb{E}_{\xi,S}\left[\frac{x_1 x_2}{\sigma^2} \tanh\left(\frac{x_1 \lambda}{\sigma^2}\right) \tanh\left(\frac{x_2 \xi}{\sigma^2}\right)\right] = \mathbb{E}_{\xi,S}\left[\frac{x_1 x_2}{\sigma^2} \tanh\left(\frac{x_1 \xi}{\sigma^2}\right) \tanh\left(\frac{x_2 \lambda}{\sigma^2}\right)\right]\]
and also \[\mathbb{E}_{\xi,S}\left[\frac{x_1^2}{\sigma^2} \tanh\left(\frac{x_1 \xi}{\sigma^2}\right) \tanh\left(\frac{x_1 \lambda}{\sigma^2}\right)\right] = \mathbb{E}_{\xi,S}\left[\frac{x_2^2}{\sigma^2} \tanh\left(\frac{x_2 \xi}{\sigma^2}\right) \tanh\left(\frac{x_2 \lambda}{\sigma^2}\right)\right].\]
We conclude that $G(\mu,\lambda,S) - G(\lambda,\lambda,S)> 0$. However by assumption that $\lambda$ is a fixed point, it must hold that $G(\mu,\lambda,S) - G(\lambda,\lambda,S)=0$ (contradiction).
The same proof works when $\lambda > \mu >0$. In case $\lambda<0$, the proof is exactly the same as before, using $-\mu$ instead of $\mu$ (with the opposite direction of the inequality).
\end{proof}
Using the generic proof of Theorem 2 (page 6) from \cite{LPPSJR17}, the fact that EM converges to stationary points (which are fixed points of the update rule of EM), and combining it with Lemmas \ref{lem:stability1}, \ref{lem:threefixedpoints} and Lemma \ref{lem:localdiff} about the local diffeomorphism property of the update rule, the proof of the qualitative part of Theorem \ref{thm:single} follows.
\section*{Introduction}
There is a problem in the quantum theory of gravitons
created from vacuum in the expanding Universe with
nonzero scalar curvature $R$ (inflation, dust, etc.)
concerning the long wave graviton modes. In the linearized
theory of quantum gravitons in the curved space-time of
the isotropic homogeneous Universe, after separation of
variables in the wave equation one obtains an equation
for the function depending only on time.
After a conformal (Weyl) transformation to a stationary
metric this equation can be understood as an equation in
flat stationary metric with a time dependent mass. It
turns out that for long waves this effective mass squared
is negative. All this occurs due to the conformal
noninvariance of the graviton theory for nonzero $R$,
leading to tachyonic behaviour of the long wave modes.
The necessity of going from the nonstationary metric to
the stationary one is motivated by finite results for
particle creation \cite{Grib}. In the end one surely
must return to the original space-time, where the
obtained results are still finite.
In some papers (see \cite{Gri} and references
therein) it was proposed to consider these modes as
classical excitations of the field growing in
time, so that one must quantize only the modes
whose momentum squared is larger than the absolute
value of the negative effective mass squared.
However, one knows
from quantum field theory that tachyonic
behaviour disappears if one takes into account
the nonlinear terms neglected in the linearized theory.
This is typical in theories of spontaneous
symmetry breaking, due to a redefinition of the
vacuum which makes it noninvariant under some
transformation of the Lagrangian. In the
quantum theory based on the new vacuum one gets new
masses for the redefined quantum field, so that
there is no negative mass square.
In this paper the analogous program is carried out for
gravitons. It turns out that if one goes beyond
the linear theory of gravitons and takes into
account the next order of nonlinearity, one obtains
a redefinition of the vacuum solving the problem
for long wave gravitons. As a result one gets
gravitons with zero and positive effective mass.
Differently from the situation in the theory of weak interactions, where the Higgs potential with its nonlinear term is introduced by hand, in our case the nonlinear term appears naturally as the second order term in the Einstein equations.
In the end of the paper the expressions for the
particle density and the energy density of
gravitons created in the expanding Universe are
obtained for a metric with a special dependence
of the scale factor on time.
\section{Getting the graviton equation from Einstein
equation}
Einstein equations in the presence of matter have the
form
\begin{equation}\label{ghG}
R_{ik}-\frac12g_{ik}R=\kappa T_{ik}
\end{equation}
or
\begin{equation}\label{ghT}
R^i_k=\kappa(T^i_k-\frac12\delta^i_k T)
\end{equation}
Let us consider the case when matter is a
homogeneous isotropic fluid filling the
Universe. Then
\begin{equation}\label{tei}
T_{ik}=(\varepsilon+p)u_iu_k-g_{ik}p
\end{equation}
where $u_i$ is the four velocity, $p$ is the
pressure and $\varepsilon$ is the energy density
of the fluid.
The problem of creation of gravitons in the early
Universe was discussed in the literature with
gravitons considered as a quantized small term in
the metric tensor. Due to the absence of an exact
quantum gravity one usually deals with the
linearized theory, in analogy with the
quantization of other quantum fields. First let
us obtain equations for classical small
perturbations of the metric tensor and then
quantize them. Consider the graviton
perturbations (the gravitational waves) as a
small term added to the background metric, so that
\begin{equation}\label{fd}
g_{ik}=\stackrel{(\circ)}g_{ik}+h_{ik}
\end{equation}
For $h_{ik}=0$, $\stackrel{(\circ)}g_{ik}$ is
a solution of Einstein's equations of the form
\begin{equation}\label{Ef}
\stackrel{(\circ)}R{}^i_k=\kappa(\stackrel{(\circ)}T{}^i_k-
\frac12\delta^i_k\stackrel{(\circ)}T)
\end{equation}
Let us raise and lower indices by using
the background metric
$\stackrel{(\circ)}g_{ik}$:
$h^i_k=\stackrel{(\circ)}g{}^{in}h_{nk}$, and
expand equations (\ref{ghT}) in a series in
$h_k^i$:
\begin{equation}\label{raz}
\stackrel{(\circ)}R{}^i_k+\delta
R^i_k=\kappa(\stackrel{(\circ)}T{}^i_k-
\frac12\delta^i_k\stackrel{(\circ)}T+\delta
T^i_k-\frac12\delta^i_k\delta T)
\end{equation}
from which, due to (\ref{Ef}), the perturbations
$h_{ik}$ satisfy the equations
\begin{equation}\label{uh}
\delta R^i_k=\kappa(\delta
T^i_k-\frac12\delta^i_k\delta T)
\end{equation}
Using the notation $(1+h)^{-1}$ for the matrix
inverse to $(1+h)$ with small $h_k^i$ (small in
the sense that all eigenvalues of the matrix
$h$ are smaller than unity in absolute value) one obtains
\begin{equation}\label{uh00}
{(1+h)^{-1}}^i_k=\delta^i_k-h^i_k+h^i_nh^n_k-\dots
\end{equation}
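For illustration, the truncated series (\ref{uh00}) is easy to check numerically for a random small perturbation (a sketch in our own notation, not part of the derivation):
\begin{verbatim}
# Quick numerical illustration of (1+h)^{-1} = 1 - h + h h - ...
import numpy as np

rng = np.random.default_rng(1)
h = 0.05*rng.standard_normal((4, 4))    # eigenvalues well below unity
exact = np.linalg.inv(np.eye(4) + h)
series = np.eye(4) - h + h @ h          # the terms kept above
print(np.max(np.abs(exact - series)))   # error of order |h|^3
\end{verbatim}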
Write the Ricci tensor and the curvature tensor
as
$$R^i_k={(1+h)^{-1}}^i_{i'}(-h^{i'}_n\stackrel{(\circ)}R{}^n_k+
\frac12{(1+h)^{-1}}^l_{l'}(h^{l'i'}_{;k;l}+h^{l';i'}_{k;l}-
h^{i';l'}_{k;l}-h^{l';i'}_{l;k})$$
$$+\frac14{(1+h)^{-1}}^l_{l'}{(1+h)^{-1}}^n_{n'}(h^{l'}_{n;k}h^{n';i'}_l
-(2h^{l'}_{n;l}-h^{l'}_{l;n})$$
\begin{equation}\label{Ri}
\cdot(h^{n';i'}_k+h^{n'i'}_{;k}-h^{i';n'}_k
)-2h^{l'}_{k;n}(h^{n'i'}_{;l}-h^{i';n'}_l)))+\stackrel{(\circ)}R{}^i_k
\end{equation}
$$R_{ik}=\stackrel{(\circ)}R_{ik}+
\frac12{(1+h)^{-1}}^l_{l'}(h^{l'}_{i;k;l}+h^{l'}_{k;i;l}-
h^{;l'}_{ik;l}-h^{l'}_{l;k;i})$$
$$+\frac14{(1+h)^{-1}}^l_{l'}{(1+h)^{-1}}^n_{n'}(h^{l'}_{n;k}h^{n'}_{l;i}
-(2h^{l'}_{n;l}-h^{l'}_{l;n})$$
\begin{equation}\label{Rik}
\cdot(h^{n'}_{k;i}+h^{n'}_{i;k}-h^{;n'}_{ik}
)-2h^{l'}_{k;n}(h^{n'}_{i;l}-h^{;n'}_{il}))
\end{equation}
$$R=\stackrel{(\circ)}R+{(1+h)^{-1}}^i_{i'}
(-h^{i'}_n\stackrel{(\circ)}R{}^n_i+
{(1+h)^{-1}}^l_{l'}(h^{l';i'}_{i;l}-h^{i';l'}_{i;l})$$
$$+\frac14{(1+h)^{-1}}^l_{l'}{(1+h)^{-1}}^n_{n'}(3h^{l'}_{n;i}h^{n';i'}_l
-2h^{l'}_{i;n}h^{n'i'}_{;l}$$
\begin{equation}\label{R}
-h^l_{l';n}h^{i';n'}_k
+4h^{l'}_{n;l}(h^{i';n'}_i-h^{n';i'}_i )))
\end{equation}
Here ``;'' means the covariant derivative in the
background metric $\stackrel{(\circ)}g_{ik}$.
Considering in (\ref{Rik}) only the first degree in
$h_{ik}$, one obtains the linearized equations for
$h_{ik}$ in (\ref{ghG}) as
\begin{equation}\label{lho}
h^{;n}_{ik;n}+h_{;i;k}-h^{n}_{i;k;n}-h^{n}_{k;i;n}-
\stackrel{(\circ)}g_{ik}(h^{;n}_{;n}-h^{m;n}_{n;m})+
h_{ik}\stackrel{(\circ)}R=-2\kappa\delta\!\stackrel{(1)}T_{ik}
\end{equation}
Let us consider the background as a homogeneous
isotropic nonstationary space-time
\begin{equation}\label{met}
ds^2=a^2(\eta)(d\eta^2-\vec{dl}^2)
\end{equation}
where $\eta$ is the conformal time. Here the
Latin indices take the values $0,1,2,3$ and the
Greek ones $1,2,3$. Then write for the scalar
curvature and the Ricci tensor
$$\delta
R={(1+h)^{-1}}^k_{k'}({(1+h)^{-1}}^l_{l'}(h^{l'\tilde{;}k'}_{k\quad\tilde{;}l}-
h^{k'\tilde{;}l'}_{k\quad\tilde{;}l})-\frac1{a^2}h^{k'}_{k,0,0}-
\frac{3a'}{a^3}h^{k'}_{k,0}$$
$$+\frac{2\epsilon}{a^2}h^{k'}_k+\frac14{(1+h)^{-1}}^l_{l'}
{(1+h)^{-1}}^n_{n'}(4h^{l'}_{n\tilde{;}l}(h^{k'\tilde{;}n'}_k-h^{n'\tilde{;}k'}_k)
+3h^{l'}_{k,n}h^{n'\tilde{;}k'}_l$$
$$-2h^{l'}_{k\tilde{;}n}h^{n'k'}_{\tilde{;}l}
-h^{l'}_{l\tilde{;}n}h^{k'\tilde{;}n'}_k)+\frac1{4a^2}{(1+h)^{-1}}^l_{l'}
(3h^{l'}_{k,0}h^{k'}_{l,0}-h^{l'}_{l,0}h^{k'}_{k,0}))$$
$$\delta R^0_0=-\frac1{2a^2}{(1+h)^{-1}}^l_{l'}(h^{l'}_{l,0,0}+
\frac{a'}ah^{l'}_{l,0}-
\frac12{(1+h)^{-1}}^n_{n'}h^{l'}_{l,0}h^{n'}_{l,0})$$
$$\delta R^0_\alpha=\frac1{2a^2}{(1+h)^{-1}}^l_{l'}
(h^{l'}_{\alpha\tilde{;}l,0}-h^{l'}_{l\tilde{;}\alpha,0})
+\frac1{4a^2}{(1+h)^{-1}}^n_{n'}$$
$$\cdot(h^{l'}_{n\tilde{;}\alpha}h^{n'}_{l,0}-h^{n'}_{\alpha,0}(2h^{l'}_{n\tilde{;}l}
-h^{l'}_{l\tilde{;}n}))$$
$$
\delta R^\alpha_\beta={(1+h)^{-1}}^\alpha_{i'}
(\frac12{(1+h)^{-1}}^l_{l'}
((h^{l'i'}_{\tilde{;}\beta\tilde{;}l}+h^{l'\tilde{;}i'}_{\beta\quad\tilde{;}l}
-h^{i'\tilde{;}l'}_{\beta\quad\tilde{;}l}-h^{l'\tilde{;}i'}_{l\tilde{;}\beta})$$
$$+\frac14{(1+h)^{-1}}^n_{n'}
(h^{l'}_{n\tilde{;}\beta}h^{n'\tilde{;}i'}_l-(2h^{l'}_{n\tilde{;}l}-
h^{l'}_{l\tilde{;}n})\cdot(h^{n'\tilde{;}i'}_{\beta}+
h^{n'i'}_{\tilde{;}\beta}-h^{i'\tilde{;}n'}_{\beta})$$
$$-2h^{l'}_{\beta\tilde{;}n}
\cdot(h^{n'i'}_{\tilde{;}l}-h^{i'\tilde{;}n'}_l))
-\frac1{4a^2}
(h^{i'}_{\beta,0}h^{l'}_{l,0}-2h^{i'}_{l,0}h^{l'}_{\beta,0})))$$
\begin{equation}\label{rue}
-\frac1{2a^2}h^{i'}_{\beta,0,0}
-\frac{a'}{a^3}h^{i'}_{\beta,0}+\frac{2\epsilon}{a^2}h^{i'}_\beta)
-\frac{a'}{2a^3}\delta^\alpha_\beta{(1+h)^{-1}}^l_{l'}h^{l'}_{l,0}
\end{equation}
where $\epsilon=+1,-1,0$ for the closed, open and
flat Universe respectively. The sign ``$\tilde{;}$'' is used for
the covariant derivative in the spatial part of the
metric and ``$,0$'' (a comma) for the derivative in
conformal time $\eta$. The metric $g^{ik}$ is defined
up to arbitrary coordinate transformations, so one
can impose some auxiliary conditions.
Solutions (\ref{raz}) for $h_{ik}$ can be written
as
$$h^i_k=S^i_k+V^i_k+B^i_k$$ where
$S^i_k,V^i_k,B^i_k$ are irreducible scalar,
vector and tensor components of the tensor
satisfying the conditions \cite{Lif}:
$$\stackrel{(\circ)}g_{ik}S^{ik}=S\ne0,
\quad\stackrel{(\circ)}g_{ik}V^{ik}=0,
\quad\stackrel{(\circ)}g_{ik}B^{ik}=0$$
To consider only the gravitational waves, we exclude
the scalar and vector parts by imposing the gauge
conditions
\begin{equation}\label{sv}
h=0,\qquad{h^i_k}_{\tilde;i}=0
\end{equation}
In this case the linearized equations for
perturbations $h^i_k$ take the form
\begin{equation}\label{lr}
h^{\alpha}_{\beta,0,0}+
\frac{2a'}ah^{\alpha}_{\beta,0}+\frac{2\epsilon}{a^2}h^\alpha_\beta+
h^{\alpha\tilde;\gamma}_{\beta\tilde;\gamma}=0
\end{equation}
Let us pass from the variables $h^i_k$ to the new variables
$\mu^i_k=a(\eta)h^i_k$ and make the conformal
transformation
\begin{equation}\label{ure090}
\tilde g_{ik}=g_{ik}/a^2(\eta)
\end{equation}
Then one obtains the equation for the spin-2
field in flat Minkowski space in some external
effective field ($\epsilon=0$)
\begin{equation}\label{ure10}
\mu^\alpha_{\beta,0,0}+
\mu^{\alpha,\gamma}_{\beta,\gamma}-\frac{a''}{a}\mu^\alpha_\beta=0
\end{equation}
After separation of variables and the Fourier
representation of $\mu^\alpha_\beta(x)$,
\begin{equation}\label{ure11}
\mu^\alpha_\beta(x)=\int d^3k(g_{\vec
k}(\eta)e^{i\vec k\vec x}a^\alpha_\beta+g_{\vec
k}^*(\eta)e^{-i\vec k\vec x}a^{\alpha*}_\beta)
\end{equation}
one obtains for the time-dependent function $g_{\vec
k}(\eta)$ the equation
\begin{equation}\label{ure12}
g_{\vec k}''(\eta)+(k^2-\frac{a''}a)g_{\vec
k}(\eta)=0
\end{equation}
which formally has a negative square of the
effective mass, $m^2a^2=-a''/a$.
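As a minimal numerical illustration of this behaviour (a sketch, not part
of the derivation; the power-law scale factor $a(\eta)=\eta^p$ and the
values of $p$ and $k$ below are illustrative assumptions), one can
integrate (\ref{ure12}) directly:
\begin{verbatim}
# Minimal numerical sketch of eq. (ure12): g'' + (k^2 - a''/a) g = 0
# for a power-law scale factor a(eta) = eta^p, where a''/a = p(p-1)/eta^2.
# The values of p and k are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

p, k = 2.0, 0.1

def rhs(eta, y):
    g, gp = y
    return [gp, -(k**2 - p*(p - 1)/eta**2)*g]

sol = solve_ivp(rhs, [1.0, 50.0], [1.0, 0.0], dense_output=True, rtol=1e-8)
print(sol.sol(np.linspace(1.0, 50.0, 5))[0])
# g grows while k*eta << 1 (the formally "tachyonic" regime) and starts
# to oscillate once k*eta >> 1.
\end{verbatim}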
Differently from the situation in the theory of weak interactions, where the Higgs potential with its nonlinear term is put in by hand, in our case the nonlinear term appears naturally at second order in the Einstein equations.
Equation (\ref{lr}) was considered in the paper of
L. P. Grishchuk \cite{Gri2}. Calculations of gravitational excitations based on eq. (\ref{ure10}) were made in \cite{Parker}, where the absence of the infrared divergence in this method was shown. Calculations of the energy density and pressure of the created gravitons were made in the papers of A. Starobinsky \cite{Star} and V. Sahni \cite{Sahni}. For small $k$ the Fourier transform of the solution of (\ref{lr}) was obtained as:
\begin{equation}\label{ure120}
h^{\alpha}_{\beta}(\vec k)=a^{\alpha}_{\beta}(\vec
k)+b^{\alpha}_{\beta}(\vec
k)\int\frac{d\eta}{a^2(\eta)}
\end{equation}
Note however that solutions of the form (\ref{ure120}) for small $k$ cannot be interpreted in terms of usual particles; that is why L. P. Grishchuk \cite{Gri} considered them as ``frozen'' modes forming some condensed classical state.
Here we continue this research considering what
changes in the form of the condensed state (or the
new vacuum) are introduced by next orders in
Einstein's equations.
\section{Third order equations for gravitational
waves} Let us consider the right hand side of
Einstein's equations. For (\ref{tei})
\begin{equation}\label{lr2}
\delta T^i_k=(\delta\varepsilon+\delta
p)u^iu_k-\delta^i_k\delta p+
(\varepsilon+p)(\delta u^iu_k+u^i\delta
u_k+\delta u^i\delta u_k)
\end{equation}
One has for the four velocities $u_i$ and
$\stackrel{(\circ)}u_i$ the conditions
$g_{ik}u^iu^k=1$ and
$\stackrel{(\circ)}g_{ik}\stackrel{(\circ)}u{}^i
\stackrel{(\circ)}u{}^k=1$. So
\begin{equation}\label{sk1}
2\stackrel{(\circ)}u{}_k\delta
u^k+\stackrel{(\circ)}g_{ik}\delta u^i\delta
u^k=0
\end{equation}
Note that $\delta u_i$, due to the constraints
(\ref{sv}), can depend only on
$h^j_kh^k_{j\tilde;i}$ \dots so $\delta
u^\alpha\delta u^\beta$ depends on the squares of
these terms; but we shall neglect the fourth and
higher orders. In the synchronous reference system
$u^0=1/a,u^\alpha=0$, so from (\ref{sk1}) one has
$\delta u^0=0$ and
$$\delta R^0_0=\frac12\kappa(\delta\varepsilon+3\delta
p),\quad \delta
R^\alpha_\beta=\frac12\kappa\delta^\alpha_\beta(\delta
p-\delta\varepsilon)$$
\begin{equation}\label{uh2}
\delta R^\alpha_0=a\kappa(\varepsilon+p)\delta u^\alpha
\end{equation}
Consider the case of flat space, $\epsilon=0$.
Then $\displaystyle
h^\alpha_{\beta\tilde;\gamma}=
h^\alpha_{\beta,\gamma},\;h_\alpha^{\beta\tilde;\gamma}=
\frac1{a^2}h_\alpha^{\beta,\gamma}$, where the Greek
indices are raised and lowered by means of the
Minkowski metric. One can regard eqs.
(\ref{uh2}) as Euler--Lagrange equations for
the fields $h_{ik}$. Then, due to the constraints
(\ref{sv}), up to terms of the divergence form one
can obtain that not only $h^i_{k\tilde;i}=0$ but also
$h^k_lh^i_{n\tilde;k}=0,\dots$ so that
(\ref{rue}) can be transformed to
$$\delta R^\alpha_\beta={(1+h)^{-1}}^\alpha_{i'}
(\frac12{(1+h)^{-1}}^l_{l'} (
-h^{i'\tilde{;}l'}_{\beta\tilde{;}l}-
h^{l'\tilde{;}i'}_{l\tilde{;}\beta})$$
$$+\frac14{(1+h)^{-1}}^l_{l'} {(1+h)^{-1}}^n_{n'}
(h^{l'}_{n\tilde{;}\beta}h^{n'\tilde{;}i'}_l-h^{l'}_{l\tilde{;}n}
h^{i'\tilde{;}n'}_{\beta}+
2h^{l'}_{\beta\tilde{;}n}h^{i'\tilde{;}n'}_l))$$
$$-\frac{a'}{2a^3}\delta^\alpha_\beta{(1+h)^{-1}}^l_{l'}h^{l'}_{l,0}+
{(1+h)^{-1}}^{\alpha}_{i'}(-\frac1{2a^2}h^{i'}_{\beta,0,0}-
\frac{a'}ah^{i'}_{\beta,0}$$
\begin{equation}\label{ruesv}
-\frac1{4a^2}{(1+h)^{-1}}^l_{l'}
(h^{i'}_{\beta,0}h^{l'}_{l,0}-2h^{i'}_{l,0}h^{l'}_{\beta,0}))
\end{equation}
From (\ref{ruesv}) follows that one can take
instead of (\ref{uh}) the equations
\begin{equation}\label{url}
(1+h)^\gamma_\alpha\delta
R^\alpha_\beta=\frac12\kappa(1+h)^\gamma_\beta(\delta
p-\delta\varepsilon)
\end{equation}
Multiplying the last equation by $-2a^2$,
from (\ref{ruesv}), (\ref{url}) one obtains
$$h^\alpha_{\beta,0,0}+\frac{2a'}ah^\alpha_{\beta,0}
+\frac{a'}{a}(1+h)^\alpha_\beta{(1+h)^{-1}}^l_{l'}
h^{l'}_{l,0}$$
$$+\frac12{(1+h)^{-1}}^l_{l'}
(h^\alpha_{\beta,0}h^{l'}_{l,0}-
2h^\alpha_{l,0}h^{l'}_{\beta,0})
+{(1+h)^{-1}}^l_{l'} (h^{\alpha;l'}_{\beta;l}+
h^{l',\alpha}_{l,\beta})$$
$$-\frac12{(1+h)^{-1}}^l_{l'} {(1+h)^{-1}}^n_{n'}
(h^{l'}_{n,\beta}h^{n',\alpha}_l- h^{l'}_{l,n}
h^{\alpha,n'}_{\beta}+
2h^{l'}_{\beta,n}h^{\alpha,n'}_l)$$
\begin{equation}\label{uregl}
=a^2\kappa(1+h)^\alpha_\beta(\delta\varepsilon-\delta
p)
\end{equation}
Consider first three orders in $h^i_k$ in
equations (\ref{uregl}).
$$h^\alpha_{\beta,0,0}+\frac{2a'}ah^\alpha_{\beta,0}+
\frac12(\delta^l_{l'}-h^l_{l'}+ h^l_kh^k_{l'})
(h^\alpha_{\beta,0}h^{l'}_{l,0}-
2h^\alpha_{l,0}h^{l'}_{\beta,0}
+2h^{\alpha,l'}_{\beta,l}$$
$$+2h^{l',\alpha}_{l,\beta}-(\delta^n_{n'}-h^n_{n'})
(h^{l'}_{n,\beta}h^{n',\alpha}_l- h^{l'}_{l,n}
h^{\alpha,n'}_{\beta}+
2h^{l'}_{\beta,n}h^{\alpha,n'}_l))$$
\begin{equation}\label{uregl2}
=(\delta^\alpha_\beta+h^\alpha_\beta)(a^2\kappa(\delta\varepsilon-\delta
p)+\frac{a'}{a}(h^l_{l'}-h^l_kh_{l'}^k)h^{l'}_{l,0})
\end{equation}
or
$$h^\alpha_{\beta,0,0}+\frac{2a'}ah^\alpha_{\beta,0}+
h^{\alpha,l}_{\beta,l}-h^\alpha_{l,0}h^{l}_{\beta,0}
-h^\alpha_{l,n}h^{l,n}_\beta-\frac12h^{n,\alpha}_lh^l_{n,\beta}
-h^l_{l'}h^{l',\alpha}_{l,\beta}$$
$$-\delta^\alpha_\beta(a^2\kappa(\delta\varepsilon-\delta p)+\frac{a'}ah^n_{n'}h^{n'}_{n,0})$$
$$+\frac12h^l_{l'}(
(2h^{l'}_{n,\beta}h^{n,\alpha}_l-2h^{\alpha,n}_lh^{l'}_{\beta,n}
-h^{l'}_{l,n}h^{\alpha,n}_\beta+2h^{l'}_lh^{l,\alpha}_{l',\beta}
-h^\alpha_{\beta,0}h^{l'}_{l,0}$$
\begin{equation}\label{uregl3}
+2h^{l'}_{\beta,0}h^\alpha_{l,0}+\frac{2a'}{a}\delta^\alpha_\beta h_l^nh^{l'}_{n,0}-
\frac{2a'}{a}h^\alpha_\beta)h^{l'}_{l,0})=
h^\alpha_\beta a^2\kappa(\delta\varepsilon-\delta
p)=0
\end{equation}
So
$$\delta^\alpha_\beta a^2\kappa(\delta
p-\delta\varepsilon)$$
\begin{equation}\label{uregl4}
=h^\alpha_{l,0}h^{l}_{\beta,0}
+h^\alpha_{l,n}h^{l,n}_\beta+\frac12h^{n,\alpha}_lh^l_{n,\beta}
+h^l_{l'}h^{l',\alpha}_{l,\beta}+\delta^\alpha_\beta\frac{a'}a
h^n_{n'}h^{n'}_{n,0}
\end{equation}
Discarding the divergence of
$h^\alpha_{l,\gamma}h^{l,\gamma}_\beta+
\frac12h^{\gamma,\alpha}_lh^l_{\gamma,\beta}
+h^l_{l'}h^{l',\alpha}_{l,\beta}$ and taking into
account, for the fixed nonzero components of the
tensor $h^i_k$, the condition
$h^l_{l'}h^n_lh^{l'}_n=0$, after simple
transformations one obtains
$$h^\alpha_{\beta,0,0}+\frac{2a'}ah^\alpha_{\beta,0}+
h^{\alpha,\gamma}_{\beta,\gamma}+h^l_{l'}h^{l'}_{\beta,0}h^{\alpha}_{l,0}$$
\begin{equation}\label{uregl5}
+h^l_{l'}(
\frac12h^{l'}_{l,n}h^{\alpha,n}_\beta+h^{l',\alpha}_n
h^n_{l,\beta}-h^{l'}_{\beta,n}h^{\alpha,n}_l+
h^l_nh_{l',\beta}^{n,\alpha})=0
\end{equation}
Now let us pass from the variables $h^i_k$ to the variables
$\mu^i_k=a(\eta)h^i_k$ and make the conformal
transformation $$\tilde
g_{ik}=g_{ik}/a^2(\eta)$$
Then we obtain the equation in flat Minkowski
space with some effective external field
$$\mu^\alpha_{\beta,0,0}+
\mu^{\alpha,\gamma}_{\beta,\gamma}-
\frac{a''}{a}\mu^\alpha_\beta+
\frac1{a^2}(\mu^\alpha_{l,0}-\frac{a'}a\mu^\alpha_l)
(\mu^n_{\beta,0}-\frac{a'}a\mu^n_\beta)\mu^l_n$$
\begin{equation}\label{ure3}
+\frac1{a^2}\mu^l_{l'}(
\frac12\mu^{l'}_{l,n}\mu^{\alpha,n}_\beta+\mu^{l',\alpha}_n
\mu^n_{l,\beta}-\mu^{l'}_{\beta,n}\mu^{\alpha,n}_l+
\mu^l_n\mu_{l',\beta}^{n,\alpha})=0
\end{equation}
\section{Spontaneous breaking of symmetry\\ for gravitons}
Let us consider the vacuum solution of (\ref{ure3})
depending only on time. In quantum field theory
this means that the vacuum depends on time. Then
\begin{equation}\label{uret11}
\mu^\alpha_{\beta,0,0}=\frac{a''}{a}\mu^\alpha_\beta-
\frac{a'^2}{a^4}\mu^\alpha_l\mu^l_n\mu^n_\beta
\end{equation}
Taking into account constraints (\ref{sv}) in
variables $\mu^1_1=-\mu^2_2,
\quad\mu^1_2=\mu^2_1$ one gets the potential
corresponding to (\ref{uret11}) as
\begin{equation}\label{cal3}
V=-\frac{a''}{2a}\mu^\alpha_\beta\mu_\alpha^\beta+\frac{a'^2}{8a^4}
(\mu^\alpha_\beta\mu_\alpha^\beta)^2
\end{equation}
Expand the field $\mu^\alpha_\beta$ near the
minimum of the potential energy
\begin{equation}\label{cal4}
\frac{a''}a=\frac{a'^2}{2a^4}\mu^\alpha_\beta(0)\mu_\alpha^\beta(0),\quad
\mu^\alpha_\beta=\mu^\alpha_\beta(0)+\xi^\alpha_\beta
\end{equation}
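A quick symbolic check (a sketch; we treat $\mu^1_1=-\mu^2_2=\mu_0$ as a
constant amplitude, so that $\mu^\alpha_\beta\mu_\alpha^\beta=2\mu_0^2$)
confirms that minimizing the potential (\ref{cal3}) reproduces the equality
$\mu_0^2\,a'^2/a^4=a''/a$ used below:
\begin{verbatim}
# Symbolic sketch: minimize V(mu0) from (cal3) with mu^a_b mu_a^b = 2*mu0^2.
import sympy as sp

mu0, a, a1, a2 = sp.symbols('mu0 a a1 a2', positive=True)  # a1 = a', a2 = a''
V = -(a2/(2*a))*(2*mu0**2) + (a1**2/(8*a**4))*(2*mu0**2)**2
crit = sp.solve(sp.diff(V, mu0), mu0)
print(crit)   # the nontrivial root obeys mu0**2 = a2*a**3/a1**2,
              # i.e. mu0**2 * a1**2 / a**4 = a2/a
\end{verbatim}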
One must note that the condition (\ref{cal4}) on
$\mu^\alpha_\beta(0)$ is the condition of minimal
energy at some fixed moment $t_0$.
This is the basic idea: instead of dealing with time
dependent $m,\lambda$, we impose the initial
conditions at some $t_0$, which corresponds to the principle of minimal energy
at this moment. At this moment $m,\lambda$
are simply numbers.
Take the solution (\ref{cal4}) as
\begin{equation}\label{calm}
\mu^1_2(0)=\mu^2_1(0)=0,\quad
\mu^1_1(0)=-\mu^2_2(0)=\mu_0=\sqrt{\frac{a''a^3}{2a'^2}}
\end{equation}
Then the Lagrangian
$$L=\frac12\mu^{\alpha,n}_\beta\mu_{\alpha,n}^\beta+
\frac{a''}{2a}\mu^\alpha_\beta\mu_\alpha^\beta-\frac{a'^2}{8a^4}
(\mu^\alpha_\beta\mu_\alpha^\beta)^2$$ can be written as
$$L=\frac12(\mu^\alpha_\beta(0)+\xi^\alpha_\beta)^{,n}
(\mu_\alpha^\beta(0)+\xi_\alpha^\beta)_{,n}+
\frac{a''}{2a}(\mu^\alpha_\beta(0)+\xi^\alpha_\beta)
(\mu_\alpha^\beta(0)+\xi_\alpha^\beta)$$
$$-\frac{a'^2}{8a^4}(\mu^\alpha_\beta(0)\mu_\alpha^\beta(0)
+\xi^\alpha_\beta\xi_\alpha^\beta+
2\mu_\alpha^\beta(0)\xi_\alpha^\beta)^2$$
$$=\mu_{0,0}^2+\frac12\xi^{\alpha,n}_l\xi_{\alpha,n}^l+
\mu_{0,0}(\xi^1_{1,0}-\xi^2_{2,0})+
\frac{a''}{2a}(2\mu_0^2+2\mu_0(\xi^1_1-
\xi^2_2)$$
$$+\xi^\alpha_\beta\xi_\alpha^\beta)-\frac{a'^2}{8a^4}(4\mu_0^4+(\xi^\alpha_\beta\xi_\alpha^\beta)^2
+4\mu_0\xi^\alpha_\beta\xi_\alpha^\beta(\xi^1_1-\xi^2_2)+8\mu_0^3(\xi_1^1-\xi_2^2)$$
$$+4\mu_0^2\xi^\alpha_\beta\xi_\alpha^\beta+
4\mu_0^2((\xi_1^1)^2+(\xi_2^2)^2-2\xi_1^1\xi^2_2))$$
Consider the quadratic in $\xi^\alpha_\beta$
terms
\begin{equation}\label{lag1}
L=\frac12\xi^{\alpha,n}_l\xi_{\alpha,n}^l+
\frac{a''}{2a}\xi^\alpha_\beta\xi_\alpha^\beta-
\frac{a'^2}{2a^4}\mu_0^2(\xi_\beta^\alpha\xi^\beta_\alpha+
(\xi_1^1)^2+(\xi_2^2)^2-2\xi_1^1\xi^2_2)
\end{equation}
Taking into account the equality
$\displaystyle\mu_0^2\frac{a'^2}{a^4}=\frac{a''}a$ and the gauge
\hbox{$\xi^1_1+\xi^2_2=0$} one obtains the Lagrangian for gravitons
\begin{equation}\label{lag2}
L(\xi)=\frac12\xi^{\alpha,n}_\beta\xi_{\alpha,n}^\beta-
\frac{a''}a((\xi_1^1)^2+(\xi^2_2)^2)
\end{equation}
which we call the effective Lagrangian
after the spontaneous symmetry breaking.
Euler-Lagrange equations have the form
\begin{equation}\label{lag3}
\xi^{1,n}_{1,n}+\frac{2a''}a\xi^1_1=0,\quad
\xi^{2,n}_{2,n}+\frac{2a''}a\xi^2_2=0,\quad
\xi^{2,n}_{1,n}=0,\quad \xi^{1,n}_{2,n}=0
\end{equation}
One sees that now in (\ref{lag3}) the sign of
the mass squared is the correct one. The components
$\xi^1_2,\xi^2_1$ are massless while
$\xi^1_1,\xi^2_2$ have the nonnegative mass
squared $\displaystyle\frac{2a''}a$. The
solutions for the diagonal components
$\xi^1_1,\xi^2_2$ (or
$\xi^\alpha_\alpha,\alpha=1,2$), denoted below by
$\xi$, can be written as the Fourier integral
\begin{equation}\label{Fur1}
\xi(x)=\frac1{(2\pi)^3}\int d\vec k(c_{\vec
k}g_k^*(\eta)e^{i\vec k\vec x}+c^*_{\vec k}
g_k(\eta)e^{-i\vec k\vec x})
\end{equation}
And one has the equation
\begin{equation}\label{vr1}
g''_k+(k^2+\frac{2a''}a)g_k=0
\end{equation}
This equation is free from the problem of the negative square of the
effective mass if $\displaystyle\frac{2a''}a>0$ and one can
construct the quantum theory of gravitons based on the new vacuum
state (\ref{calm}). One can notice that the vacuum expectation value
of the field $\mu(\eta)$ is close to the scale factor. For the
scale factor $a(\eta)=\eta^p$ one obtains
$$\mu_0=a(\eta)\sqrt{\frac{p-1}{2p}}$$
One sees that for all $p\in[-1;0)$ (if
$p\in(0;1)$ then $m^2=-2a''/a>0$ and we keep the
vacuum $\mu_0=0$) the dynamical perturbation is
$h^\alpha_\beta\geqslant1$, which contradicts the
condition for the expansion of the curvature
tensor into a series in this perturbation. The
value $p=-1$ corresponds to inflation, so we
cannot treat the inflation model here, while other
situations can be considered. In any case this regime
must be considered separately and it is not
studied in this paper.
\section{The Lagrange formalism for gravitons}
One can see from (\ref{Fur1})--(\ref{vr1}) that gravitons in the
expanding isotropic Universe are described by the effective scalar
field $\vartheta(x)$ with the Lagrangian
\begin{equation}\label{lk1}
L=\sqrt{-g}(\vartheta(x)_{,n}\vartheta(x)^{,n}-\frac13R\vartheta(x)\vartheta(x))
\end{equation}
where $g=\det(g_{ik})$ and $R$ is the scalar
curvature. There is no factor $\frac12$ because one
deals with two polarizations. The field in the Euler--Lagrange
equation is related to the graviton modes by
$\xi^1_1=\xi^2_2=\xi=\vartheta\cdot a$. Look for
solutions of (\ref{vr1}) in the form
(\ref{Fur1}). The numbers $c_{\vec k},c^*_{\vec
k}$ are replaced by the operators $\widehat
c_{\vec k},\widehat c{\,}^+_{\vec k}$ with the
commutation relations
\begin{equation}\label{kom1}
[\widehat c_{\vec k},\widehat c{\,}^+_{\vec
k'}]=\delta(\vec k-\vec k'),\quad [\widehat
c_{\vec k},\widehat c_{\vec k'}]=[\widehat
c{\,}^+_{\vec k},\widehat c{\,}^+_{\vec k'}]=0
\end{equation}
The Fock vacuum state $\rm|in>$ is defined as
$$\widehat c_{\vec k}\rm|in>=0,\quad<in|in>=1$$
Then for $\displaystyle\widehat\vartheta(x)=\frac1a\widehat\xi(x)$
one has
\begin{equation}\label{Fur2}
\widehat\vartheta(x)=\frac1{(2\pi)^3a(\eta)}\int d\vec k(\widehat
c_{\vec k}g_k^*(\eta)e^{i\vec k\vec x}+\widehat c{\,}^+_{\vec
k}g_k(\eta)e^{-i\vec k\vec x})
\end{equation}
where $g_k(\eta)$ satisfy (\ref{vr1}) written as
\begin{equation}\label{vr2}
g''_k+\omega_k^2(\eta)g_k=0,\quad\omega_k^2(\eta)=\frac{2a''}a+k^2
\end{equation}
with initial conditions
\begin{equation}\label{vr3}
g_k(\eta_0)=\frac1{\sqrt{\omega_k(\eta_0)}},\quad
g'_k(\eta_0)=i\sqrt{\omega_k(\eta_0)}
\end{equation}
The condition for the Wronskian
\begin{equation}\label{vr4}
g_k(\eta_0){g'_k}^*(\eta_0)-g_k^*(\eta_0)g'_k(\eta_0)=-2i
\end{equation}
leads to existence of the full set of solutions
of (\ref{vr1}) in the sense of the indefinite
scalar product
\begin{equation}\label{vr5}
(\xi_1,\xi_2)=i\int d\vec
x(\xi_1^*\stackrel{\longleftrightarrow}{\partial_0}\xi_2)
\end{equation}
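For instance (a numerical sketch with the illustrative dust-like choice
$a(\eta)=\eta^2$, so that $2a''/a=4/\eta^2$), integrating (\ref{vr2}) with
the initial data (\ref{vr3}) one can verify the conservation of (\ref{vr4}):
\begin{verbatim}
# Numerical sketch: the Wronskian (vr4) is conserved along the flow (vr2).
# Illustrative assumption: a(eta) = eta^2, hence 2a''/a = 4/eta^2.
import numpy as np
from scipy.integrate import solve_ivp

k, eta0 = 1.0, 1.0
w2 = lambda eta: k**2 + 4.0/eta**2

def rhs(eta, y):          # y = (Re g, Im g, Re g', Im g')
    gr, gi, gpr, gpi = y
    return [gpr, gpi, -w2(eta)*gr, -w2(eta)*gi]

w0 = np.sqrt(w2(eta0))
sol = solve_ivp(rhs, [eta0, 25.0], [1/np.sqrt(w0), 0.0, 0.0, np.sqrt(w0)],
                rtol=1e-10, atol=1e-12)
g  = sol.y[0, -1] + 1j*sol.y[1, -1]
gp = sol.y[2, -1] + 1j*sol.y[3, -1]
print(g*np.conj(gp) - np.conj(g)*gp)   # stays equal to -2i
\end{verbatim}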
The Hamiltonian of the quantized field
$\widehat\xi(x)$ in the metric (\ref{met}) has
the form
\begin{equation}\label{vr6}
\widehat H(\eta)=\int d\vec
x(\widehat\xi'^+\widehat\xi'+\frac{2a''}a\widehat\xi^+\widehat\xi)
\end{equation}
Substituting the field $\widehat\vartheta(x)$ from (\ref{Fur2}) into
(\ref{vr6}) one obtains
$$\widehat H(\eta)=\frac1{\pi^2a^4(\eta)}\int_0^\infty k^2dk\omega_k(\eta)
(E_k(\eta)(\widehat c{\,}^+_k\widehat c_k+
\widehat c{\,}_k\widehat c_k^+)$$
\begin{equation}\label{vr7}
+F_k\widehat c{\,}^+_k\widehat c{\,}^+_k
+F_k^*\widehat c{\,}_k\widehat c{\,}_k)
\end{equation}
where the coefficients $E_k,F_k$ are expressed
through the solutions of (\ref{vr2})
\begin{equation}\label{vr71}
E_k(\eta)=\frac1{2\omega_k}(|g_k'|^2+\omega_k^2|g_k|^2)\;,\qquad
F_k(\eta)=\frac1{2\omega_k}(g_k'^2+\omega_k^2g_k^2)
\end{equation}
The corpuscular interpretation can be made in
terms of creation and annihi\-lation operators
$\widehat b{\,}_k,\widehat b{\,}^+_k$
diagonalizing the Hamiltonian. If
$$\widehat c{\,}_k=\alpha_k^*(\eta)\widehat b{\,}_k -\beta_k(\eta)\widehat
b{\,}^+_k$$
then the Hamiltonian is
\begin{equation}\label{dgm}
\widehat
H(\eta)=\frac1{\pi^2a^4(\eta)}\int_0^\infty
k^2dk\omega_k(\eta) (E_k(\eta)-1)(\widehat
b{\,}^+_k\widehat b_k+ \widehat b{\,}_k\widehat
b_k^+)
\end{equation}
The density of created particles and their energy
density \cite{Grib} can be found using the formulas
\begin{equation}\label{plch}
n(\eta)=\frac1{\pi^2a^3(\eta)}\int_0^\infty
k^2dk|\beta_k|^2
\end{equation}
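A hedged numerical sketch of this formula (assuming the standard Bogoliubov
relation $E_k=1+2|\beta_k|^2$, consistent with the $(E_k-1)$ factor in
(\ref{dgm}), and the same illustrative scale factor $a(\eta)=\eta^2$):
\begin{verbatim}
# Sketch: occupation number per mode, |beta_k|^2 = (E_k - 1)/2, with E_k
# from (vr71). Assumes a(eta) = eta^2, so 2a''/a = 4/eta^2 (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

k, eta0, etaf = 1.0, 1.0, 20.0
w2 = lambda eta: k**2 + 4.0/eta**2

def rhs(eta, y):
    gr, gi, gpr, gpi = y
    return [gpr, gpi, -w2(eta)*gr, -w2(eta)*gi]

w0 = np.sqrt(w2(eta0))
sol = solve_ivp(rhs, [eta0, etaf], [1/np.sqrt(w0), 0.0, 0.0, np.sqrt(w0)],
                rtol=1e-10, atol=1e-12)
g  = sol.y[0, -1] + 1j*sol.y[1, -1]
gp = sol.y[2, -1] + 1j*sol.y[3, -1]
wf = np.sqrt(w2(etaf))
E = (abs(gp)**2 + wf**2*abs(g)**2)/(2*wf)
print((E - 1)/2)                        # |beta_k|^2 >= 0
\end{verbatim}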
\section{Some models of graviton creation}
Let us consider some matter filling the Universe
with the equation of state $p=\gamma\varepsilon$
where $p$ is pressure and $\varepsilon$ the
energy density. One has for the homogeneous
quasi-Euclidean isotropic Universe the equation
\cite{Lan}
\begin{equation}\label{ed}
\frac{8\pi\kappa}{c^4}\varepsilon=\frac{3a'^2}{a^4},
\quad\hbox{then}\quad
a(\eta)=C\eta^{\frac2{1+3\gamma}}
\end{equation}
Let us take $a(\eta)=C\eta^p$ (with $p>1$) and substitute it into
(\ref{vr2}); then one obtains
\begin{equation}\label{mst1}
g''(\eta)+(2p(p-1)\frac1{\eta^2}+k^2)g(\eta)=0
\end{equation}
where
$$m^2=2p(p-1)=\frac{4(1-3\gamma)}{(1+3\gamma)^2},
\quad a(\eta)=\frac1\eta$$
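The relation between $m^2$ and the equation of state is a one-line
algebraic consequence of (\ref{ed}); a symbolic check:
\begin{verbatim}
# Check m^2 = 2p(p-1) = 4(1-3*gamma)/(1+3*gamma)^2 with p = 2/(1+3*gamma).
import sympy as sp
gamma = sp.Symbol('gamma')
p = 2/(1 + 3*gamma)
print(sp.simplify(2*p*(p - 1) - 4*(1 - 3*gamma)/(1 + 3*gamma)**2))   # 0
\end{verbatim}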
So the results obtained for the scalar field in
\cite{Grib} are valid for gravitons for any
scale factor of the given power-law
form. In \cite{Grib} it was shown that for the
density of created particles and the energy
density defined by (\ref{dgm} -- \ref{plch}) one
gets convergent integrals. Let us calculate them.
Introducing the notation $x=k\eta$ one gets
\begin{equation}\label{mst2}
\frac{d^2g}{dx^2}+(1+\frac{m^2}{x^2})g(x)=0
\end{equation}
Then the energy density of created particles due
to (\ref{mst2}) is calculated as
$$\varepsilon(\eta)=<0|\hat T_0^0|0>$$
\begin{equation}\label{plen1}
=\frac2{\pi^2(a(\eta)\eta)^4}\int_0^\infty x^3dx
\omega(x)(\frac1{2\omega(x)}(|\frac{dg(x)}{dx}|^2+\omega^2(x)|g(x)|^2)-1)
\end{equation}
The solutions of (\ref{mst2}) are Bessel
functions
$$g(x)=C_1\sqrt{\frac{\pi x}2}\,J_{\nu}(x)+
C_2\sqrt{\frac{\pi
x}2}\,Y_{\nu}(x)\;,\qquad \nu=\frac12\sqrt{1-4m^2}$$
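One may verify directly (a numerical sketch using high-precision
differentiation; the value of $m^2$ is illustrative) that
$g(x)=\sqrt{\pi x/2}\,J_\nu(x)$ with $\nu=\frac12\sqrt{1-4m^2}$
solves (\ref{mst2}):
\begin{verbatim}
# Check that g(x) = sqrt(pi*x/2) * J_nu(x), nu = (1/2)*sqrt(1 - 4 m^2),
# solves (mst2): g'' + (1 + m^2/x^2) g = 0. Illustrative value m^2 = 0.1.
import mpmath as mp

m2 = mp.mpf('0.1')
nu = mp.sqrt(1 - 4*m2)/2
g = lambda x: mp.sqrt(mp.pi*x/2)*mp.besselj(nu, x)
x = mp.mpf(3)
print(mp.diff(g, x, 2) + (1 + m2/x**2)*g(x))   # ~ 0 to working precision
\end{verbatim}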
Then
\begin{equation}\label{plen2}
\varepsilon(\eta)\thickapprox\frac2{\pi^2(a(\eta)\eta)^4}0.04m^3=
\frac{1.5\cdot10^{-3}R^{3/2}}{a(\eta)\eta^4},\qquad
0<m^2<4
\end{equation}
For the density of created particles in the unit
volume one gets
\begin{equation}\label{plchm2}
n(\eta)=\frac1{\pi^2(a(\eta)\eta)^3}\int_0^\infty
x^2dx
(\frac1{2\omega(x)}(|\frac{dg(x)}{dx}|^2+\omega^2(x)|g(x)|^2)-1)
\end{equation}
This integral is convergent \cite{Grib}. For
small $m$ ($0<m<0.5$) $n(\eta)\sim R$ and for
large $m$ ($m>0.5$) $n(\eta)\sim\sqrt R.$\par
Consider the dust Universe with $a(\eta)=C\eta^2$.
Then
\begin{equation}\label{et1}
\varepsilon_{gr.}=\frac{4\cdot10^{-3}}{t^4}
\end{equation}
For the background classical matter one has
\begin{equation}\label{et2}
\varepsilon_{matt.}=\frac{2\cdot10^{84}}{t^2}
\end{equation}
So at the Planck time ($t_{pl}=10^{-43}\,$sec) the
graviton energy density created from vacuum is
some ten percent of the matter density, while at
the inflation time ($t_{inf}=10^{-36}\,$sec) it is
only $10^{-14}$ of the matter density. These numbers are
consistent with our approximation for the metric
in perturbation theory. At the modern epoch one
gets from (\ref{et1}) that the energy flow from
the time of the end of inflation,
$t_{inf}=10^{-36}\,$sec, is
\begin{equation}\label{et3}
\varepsilon=0.5\cdot10^{-12}\quad(\frac{erg}{sec\cdot
cm^2})
\end{equation}
This can be compared with the flow from the Crab nebula \cite{Vein},
\begin{equation}\label{et4}
\varepsilon_{Crab}=10^{-8}\quad(\frac{erg}{sec\cdot cm^2})
\end{equation}
One sees that it is much smaller.
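These estimates follow from the coefficients of (\ref{et1}), (\ref{et2})
taken at face value (time in seconds, as in the text); a short numerical
cross-check:
\begin{verbatim}
# Ratio of graviton to matter energy density, eqs. (et1)/(et2):
ratio = lambda t: (4e-3/t**4)/(2e84/t**2)
print(ratio(1e-43))    # ~0.2   -> "some ten percent" at the Planck time
print(ratio(1e-36))    # ~2e-15 -> ~1e-14 at the end of inflation
print(0.5e-12/1e-8)    # graviton flow vs. Crab nebula, eqs. (et3)/(et4)
\end{verbatim}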
\section{Acknowledgements}
The authors are indebted to the participants of the A. A. Friedmann
seminar in St.~Petersburg for discussions of this paper.
\newcommand{\vs}[1]{\rule[- #1 mm]{0mm}{#1 mm}}
\newcommand{\hs}[1]{\hspace{#1mm}}
\newcommand{\mb}[1]{\hs{5}\mbox{#1}\hs{5}}
\newcommand{\wt}[1]{\widetilde{#1}}
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\sm}[2]{\frac{\mbox{\footnotesize #1}\vs{-2}}
{\vs{-2}\mbox{\footnotesize #2}}}
\newcommand{\p}[1]{(\ref{#1})}
\newcommand{\zw}[1]{{#1 \over Z_{12}}}
\newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}}
\newcommand{\PLB}[1]{Phys.\ Lett.\ {B \bf #1}}
\newcommand{\PLA}[1]{Phys.\ Lett.\ {A \bf #1}}
\newcommand{\NC}[1]{Nuovo Cimento {\bf #1}}
\newcommand{\CMP}[1]{Commun.\ Math.\ Phys.\ {\bf #1}}
\newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}}
\newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}}
\newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}}
\newcommand{\BLMS}[1]{Bull.\ London Math.\ Soc.\ {\bf #1}}
\newcommand{\IJMP}[1]{Int.\ J.\ Mod.\ Phys.\ {\bf #1}}
\newcommand{\JMP}[1]{Jour.\ Math.\ Phys.\ {\bf #1}}
\newcommand{\LMP}[1]{Lett.\ Math.\ Phys.\ {\bf #1}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newpage
\setcounter{page}{0}
\pagestyle{empty}
\vs{12}
\begin{center}
{\LARGE {\bf $N=4$ Sugawara construction on $\widehat{sl(2|1)}$,}}
\\ {\LARGE{\bf $\widehat{sl(3)}$ and mKdV-type superhierarchies
}}\\[0.8cm]
\vs{10} {\large E. Ivanov$^{a,1}$, S. Krivonos$^{a,2}$ and F.
Toppan$^{b,3}$} ~\\ \quad \\ {\em {~$~^{(a)}$ JINR-Bogoliubov
Laboratory of Theoretical Physics,}}\\ {\em 141980 Dubna, Moscow
Region, Russia}~\quad\\ {\em ~$~^{(b)}$ DCP-CBPF,}\\ {\em Rua
Xavier Sigaud 150, 22290-180, Urca, Rio de Janeiro, Brazil}
\end{center}
\vs{6}
\centerline{ {\bf Abstract}}
\vspace{0.3cm} \noindent The local Sugawara constructions of the
``small'' $N=4$ SCA in terms of supercurrents of $N=2$ extensions
of the affine $\widehat{sl(2|1)}$ and $\widehat{sl(3)}$ algebras are
investigated. The associated super mKdV type hierarchies
induced by $N=4$ SKdV ones are defined. In the $\widehat{sl(3)}$ case the
existence of
two non-equivalent Sugawara constructions is found. The ``long''
one involves
all the affine $\widehat{sl(3)}$ currents, while the ``short'' one deals
only with those from the subalgebra $\widehat{sl(2)\oplus u(1)}$. As a
consequence, the $\widehat{sl(3)}$-valued affine superfields carry two
non-equivalent mKdV type super hierarchies induced by the
correspondence between ``small'' $N=4$ SCA and $N=4$ SKdV
hierarchy. However, only the first hierarchy possesses genuine global
$N=4$ supersymmetry. We discuss peculiarities of the realization
of this $N=4$ supersymmetry on the affine supercurrents.
\vs{6} \vfill \rightline{CBPF-NF-046-99} \rightline{JINR E2-99-302}
\rightline{ solv-int/9912003} {\em E-Mail:\\ 1)
[email protected]\\ 2) [email protected]\\
3) [email protected]}
\newpage
\pagestyle{plain}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
In the last several years integrable hierarchies of non-linear
differential equations have been intensely explored, mainly in
connection with the discretized two-dimensional gravity theories
(matrix models) \cite{DGZ} and, more recently, with the
$4$-dimensional super Yang-Mills theories in the
Seiberg-Witten approach \cite{SW}.
A vast literature is by now available on the construction and
classification of the hierarchies. In the bosonic case the understanding
of integrable hierarchies in $1+1$ dimensions is to a large extent
complete. Indeed, a generalized Drinfeld-Sokolov scheme \cite{DS}
is presumably capable of accommodating all known bosonic
hierarchies.
On the other hand, due to the presence of even and odd fields, the
situation for supersymmetric extensions remains in many respects
unclear. Since a fully general supersymmetric Drinfeld-Sokolov
approach to the superhierarchies is still lacking, up to now they
were constructed using all sorts of the available tools. These
include, e.g., direct methods, Lax operators of both scalar and
matrix type, bosonic as well as fermionic, coset construction,
etc. \cite{MRK}-\cite{Top}.
In \cite{IKT} a general Lie-algebraic framework for the $N=4$
super KdV hierarchy \cite{{DI},{DG2},{IK},{DG1}} and, hopefully, for its
hypothetical higher conformal spin counterparts (like $N=4$
Boussinesq) has been proposed. It is based upon a generalized
Sugawara construction on the $N=2$ superextended affine
(super)algebras which possess a hidden (nonlinearly realized)
$N=4$ supersymmetry. This subclass seemingly consists of $N=2$
affine superextensions of both the bosonic algebras with the
quaternionic structure listed in \cite{SSTP} and proper
superalgebras having such a structure. In its simplest version
\cite{IKT}, the $N=4$ Sugawara construction relates affine
supercurrents taking values in the $sl(2)\oplus u(1)$ algebra to
the ``minimal'' (or ``small'') $N=4$ superconformal algebra
($N=4$ SCA) which provides the second Poisson structure for the
$N=4$ super KdV hierarchy. The Sugawara-type transformations are
Poisson maps, i.e. they preserve the Poisson-brackets structure of
the affine (super)fields. Therefore for any Sugawara
transformation which maps affine superfields, say, onto the
minimal $N=4$ SCA, the affine supercurrents themselves inherit an
integrable hierarchy which is constructed using the tower of the
$N=4$ SKdV hamiltonians in involution.
Such $N=4$ hierarchies realized on the affine supercurrents
can be interpreted as generalized mKdV-type superhierarchies.
The simplest example, the combined $N=4$ mKdV-NLS hierarchy
associated with the affine $N=2 \;\;\widehat{sl(2)\oplus u(1)}$
superalgebra, was explicitly constructed in \cite{IKT}.
In the case of higher-dimensional $N=4$ affine superalgebras this
sort of Sugawara construction is expected to yield additional
$N=4$ multiplets of currents which would form, together with those
of $N=4$ SCA (both ``minimal'' and ``large''), more general
nonlinear $N=4$ superalgebras of the $W$ algebra type.
Respectively, new SKdV (or super Boussinesq) type hierarchies with
these conformal superalgebras as the second Poisson structures can
exist, as well as their mKdV type counterparts associated with the
initial $N=4$ affine superalgebras. Besides, the linear $N=4$ SCAs
can be embedded into a given affine superalgebra in different
ways, giving rise to a few non-equivalent mKdV-type superhierarchies
associated with the same KdV-type superhierarchy.
In this paper we describe non-equivalent $N=4$ Sugawara
constructions for the eight-dimensional affine
(super)algebras $N=2 \;\;\widehat{sl(2|1)}$ and $N=2 \;\;\widehat{sl(3)}$.
These algebras are natural candidates for the higher-rank affine
superalgebras with hidden $N=4$ supersymmetry, next in complexity to
the simplest $\widehat{sl(2)\oplus u(1)}$ case treated in ref. \cite{IKT}.
The results can be summarized as follows.
In the $\widehat{sl(2|1)}$ case there are no
other {\em local} Sugawara constructions leading to the ``small''
$N=4$ SCA besides the one which proceeds from the
bosonic $\widehat{sl(2)\oplus u(1)}$ subalgebra supercurrents.
The $\widehat{sl(2|1)}$ affine
supercurrents carry a unique mKdV type hierarchy,
the evolution equations for the extra four superfields
being induced from their Poisson brackets with
the $N=4$ SKdV hamiltonians constructed from the
$sl(2)\oplus u(1)$-valued supercurrents.
The full hierarchy possesses by construction the
manifest $N=2$ supersymmetry and also reveals some extra exotic ``$N=2$
supersymmetry''. These two yield the standard
$N=4$ supersymmetry only on the $\widehat{sl(2)\oplus u(1)}$
subset of currents (``standard'' means closing on $z$
translations). Actually, such an extra $N=2$ supersymmetry
is present in {\it any} $N=2$ affine (super)algebra with a
$\widehat{sl(2)\oplus u(1)}$ subalgebra. As the result,
neither the $N=2$ $\widehat{sl(2|1)}$ superalgebra itself, nor
the above-mentioned mKdV hierarchy reveal the genuine $N=4$
supersymmetry.
The $\widehat{sl(3)}$ case is more interesting since it admits
such an extended supersymmetry. In this case, besides the
``trivial'' $N=4$ SCA based on the $\widehat{sl(2)\oplus u(1)}$
subalgebra, one can define an extra $N=4$ SCA containing the full
$N=2$ stress-tensor and so involving all affine $\widehat{sl(3)}$
supercurrents \footnote{In what follows we name the corresponding
Sugawara construction ``long'' $N=4$ Sugawara, as opposed to the
``short'' one based on the $\widehat{sl(2)\oplus u(1)}$
subalgebra.}. We have explicitly checked that no other
non-equivalent local $N=4$ Sugawaras exist in this case. The
supercurrents of the second $N=4$ SCA generate global $N=4$
supersymmetry closing in the standard way on $z$-translations.
The defining relations of the $N=2$ $\widehat{sl(3)}$ algebra are
covariant under this supersymmetry, so it is actually $N=4$
extension of $\widehat{sl(3)}$, similarly to the
$\widehat{sl(2)\oplus u(1)}$ example. In the original basis,
where the affine currents satisfy nonlinear constraints, the
hidden $N=2$ supersymmetry transformations are essentially
nonlinear and mix all the currents. After passing, by means of a
non-local field redefinition, to the basis where the constraints
become the linear chirality conditions, the supercurrents split
into some invariant subspace and a complement which transforms
through itself and the invariant subspace. In other words, they
form a not fully reducible representation of the $N=4$
supersymmetry. This phenomenon was not previously encountered in
$N=4$ supersymmetric integrable systems. We expect it to hold
also in higher rank $N=2$ affine superalgebras with the hidden
$N=4$ structure.
The ``long'' Sugawara gives rise to a new mKdV type hierarchy
associated with the $N=4$ SKdV one. Thus the $\widehat{sl(3)}$
affine supercurrents provide an example of a Poisson structure
leading to two non-equivalent mKdV-type hierarchies, both associated
with $N=4$ SKdV, but recovered from the ``short'' and,
respectively, ``long'' $N=4$ Sugawara constructions. Only the
second hierarchy possesses global $N=4$ supersymmetry.
As a by-product, we notice the existence of another sort of super mKdV
hierarchies associated with both affine superalgebras considered.
They are related to the so-called ``quasi'' $N=4$ SKdV hierarchy
\cite{{DGI},{DG2}}
which still possesses the ``small'' $N=4$ SCA as the second
Poisson structure but lacks global $N=4$ supersymmetry.
In the $\widehat{sl(3)}$ case there also exist two non-equivalent
``quasi'' super mKdV hierarchies generated through the ``short'' and
``long'' Sugawara constructions.
Like in \cite{IKT}, in the present paper we use the $N=2$ superfield
approach with the manifest linearly realized $N=2$ supersymmetry.
The results are presented in the language of
classical OPEs between $N=2$ supercurrents, which is equivalent
to the Poisson brackets formalism used in \cite{IKT}.
When evaluating these $N=2$ OPEs, we systematically exploit the
Mathematica package of ref. \cite{KT}.
\section{$N=2$ conventions and the minimal $N=4$ SCA}
Here we fix our notation and present the $N=2$ superfield
Poisson brackets structure of the ``minimal'' (``small'')
$N=4$ superconformal algebra (in the OPE language).
The $N=2$ superspace is parametrized by the coordinates
$Z\equiv \left\{ z, \theta , {\overline \theta}\right\}$,
with $\left\{ \theta , {\overline \theta} \right\}$
being Grassmann variables. The (anti)-chiral $N=2$
derivatives $D, {\overline D}$ are defined as
\begin{eqnarray}
D = \frac{\partial}{\partial \theta}
-\frac{1}{2}{\overline \theta} \partial_z \;,\;\;
{\overline D} = \frac{\partial}{\partial
{\overline \theta}} - \frac{1}{2}\theta\partial_z\; ,\;\;
D^2 = {\overline D}{}^2 = 0 \; , \;\;
\{ D, {\overline D}\} = -\partial_z \; \;. \label{Dcomm}
\end{eqnarray}
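The algebra \p{Dcomm} is easily verified at the component level. The
following sketch (an illustration only; it assumes left Grassmann
derivatives acting on a general superfield
$\Phi=f+\theta\psi+{\overline\theta}\chi+\theta{\overline\theta}F$)
checks $D^2={\ov D}{}^2=0$ and $\{D,{\ov D}\}=-\partial_z$:
\begin{verbatim}
# Component check of (Dcomm). A superfield is the tuple (f, psi, chi, F)
# in Phi = f + theta*psi + thetabar*chi + theta*thetabar*F; the action of
# D, Dbar below follows from expanding in theta, thetabar (left derivatives).
import sympy as sp

z = sp.Symbol('z')
f, psi, chi, F = [fn(z) for fn in sp.symbols('f psi chi F', cls=sp.Function)]
d = lambda u: sp.diff(u, z)

D    = lambda c: (c[1], sp.S(0), c[3] - d(c[0])/2,  d(c[1])/2)
Dbar = lambda c: (c[2], -c[3] - d(c[0])/2, sp.S(0), -d(c[2])/2)

Phi = (f, psi, chi, F)
anti = [sp.simplify(x + y) for x, y in zip(D(Dbar(Phi)), Dbar(D(Phi)))]
print(anti)                          # (-f', -psi', -chi', -F'), i.e. -dPhi/dz
print(D(D(Phi)), Dbar(Dbar(Phi)))    # both identically zero
\end{verbatim}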
In the $N=2$ superfield notation the minimal $N=4$ SCA
is represented by the spin $1$ general superfield $J(Z)$ and two
(anti)-chiral spin $1$ superfields $W$, ${\ov W}$
($DW = {\ov D}\, {\ov W} =0$), with the following
classical OPEs
\begin{eqnarray}
{\underline {J(1)J(2)}} &=& {2\over {Z_{12}}^2} - {{\theta_{12} \overline{\theta}_{12}}\over {Z_{12}}^2}
J - \zw{\overline{\theta}_{12}} {\ov D}J +\zw{ \theta_{12}} DJ -\zw{ \theta_{12} \overline{\theta}_{12}} J' \;, \nonumber\\
{\underline {J(1)W(2)}} &=& -{{\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2} W - \zw{2} W -\zw{\overline{\theta}_{12}} {\ov
D}W -\zw{\theta_{12}\overline{\theta}_{12}} W' \;, \nonumber\\
{\underline {J(1){\ov W}(2)}} &=& -{{\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2} {\ov W} + \zw{2} {\ov W}
+\zw{\theta_{12}} D{\ov W} -\zw{ \theta_{12}\overline{\theta}_{12}} {\ov W}' \;, \nonumber\\
{\underline {W(1){\ov W}(2)}} &=& {{\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^3 } - {1\over {Z_{12}}^2}
- {{\frac{1}{2} \theta_{12}\overline{\theta}_{12}} \over {Z_{12}}^2} J +\zw {\overline{\theta}_{12} } {\ov D} J +\zw{1}
J\; . \label{n4sca}
\end{eqnarray}
Here
$Z_{12} =
z_1 -z_2+\frac{1}{2}\left( \theta_1{\overline\theta}_2
-\theta_2{\overline\theta}_1\right)$, $\theta_{12}=\theta_1-\theta_2$, $\overline{\theta}_{12}
={\overline\theta}_1-{\overline\theta}_2$, and the superfields
in the r.h.s. are evaluated at the point $(2)\equiv (\,z_2, \theta_2,
{\overline\theta}_2\,) $.
\section{The superaffinization of the $sl(2|1)$ superalgebra}
In this and next Sections we follow the general $N=2$ superfield
setting for $N=2$ extensions of affine (super)algebras \cite{HS,AIS}.
The $N=2$ $\widehat{sl(2|1)}$ superalgebra is generated by four fermionic
and four bosonic superfields, respectively
($H, {\overline H}, F, {\overline F}$) and ($S, {\overline S}, R,
{\overline R}$).
\par
The superfields $H, {\overline H}$ are associated with the Cartan
generators
of $sl(2|1)$ and satisfy the (anti)chiral constraints
\begin{eqnarray}
{\overline D}\, {\overline H} = D H = 0 \; \label{chir23}
\end{eqnarray}
while the remaining superfields are associated with the root
generators of $sl(2|1)$. In particular $F, {\overline F}$ are
related to the bosonic ($\pm$)-simple roots and, together with
$H, {\ov H}$, close on the superaffine ${\widehat {sl(2)\oplus u(1)}}$
subalgebra. The extra superfields satisfy the non-linear chiral
constraints
\begin{eqnarray}
&& {\overline D}\,{\overline R} =0 \;, \quad
{\overline D}\, {\overline F} = {\overline H}\,{\overline F} \; , \quad
{\overline D}\, {\overline S} = -{\overline F}\,{\overline R}
+{\overline H}\,{\overline S}\; , \nonumber \\
&& DR = HR \;, \quad DF = - H F\; ,
\quad DS = F R \;.\label{cond23}
\end{eqnarray}
The full set of OPEs defining the classical
$N=2$ superaffine ${\widehat{sl(2|1)}}$ algebra is given by
\begin{eqnarray}
&&{\underline{H(1){\overline H}(2)}} = {{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}
- {1\over Z_{12}}\;, \;
{\underline{H(1)F(2)}} = \zw{\overline{\theta}_{12}} {F}\;, \;
{\underline{H(1){\overline F}(2)}}= - \zw{\overline{\theta}_{12}} {\overline F}\; , \;
\nonumber\\
&&{\underline {H(1)S(2)}} = \zw{\overline{\theta}_{12}} S\;,\;
{\underline {H(1) {\ov S}(2)}} = -\zw{\overline{\theta}_{12}} {\ov S}\;, \;
{\underline {{\ov H} (1)F(2)}} = \zw{\theta_{12}} F\nonumber\; , \;
{\underline {{\ov H}(1){\ov F}(2)}} = -\zw{\theta_{12}} {\ov F}\; , \nonumber \\
&& {\underline {{\ov H}(1) R(2)}} = -\zw{\theta_{12}} R\; , \;
{\underline {{\ov H}(1){\ov R}(2)}} = \zw {\theta_{12} }{\ov R}\;, \nonumber\\
&& {\underline {F(1){\ov F}(2)}} = {{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}
-\zw{1 -\overline{\theta}_{12} {\ov H} - \theta_{12} H - \theta_{12}\overline{\theta}_{12}
( F{\ov F} + H{\ov H} + {\ov D} H)} \;,
\nonumber\\
&&{\underline{F(1)S(2)}} = -\zw{\theta_{12}\overline{\theta}_{12}} FS\;,\;
{\underline {F(1){\ov S}(2)}}
= \zw{ \overline{\theta}_{12} {\ov R} +\theta_{12}\overline{\theta}_{12} (F {\ov S} +H{\ov R})}\;, \nonumber\\
&&{\underline {F(1){R}(2)}} = -\zw{\overline{\theta}_{12} S + \theta_{12}\overline{\theta}_{12} HS}\;,\;
{\underline {{\ov F} (1)S(2)}} = -\zw{\theta_{12} R + \theta_{12}\overline{\theta}_{12} {\ov H} R}\;,\nonumber\\
&&{\underline {{\ov F}(1) R(2)}} = \zw{\theta_{12}\overline{\theta}_{12}} R{\ov F}\;, \;
{\underline {{\ov F}(1) {\ov R}(2)}} = \zw{ \theta_{12} {\ov S} -\theta_{12}\overline{\theta}_{12}
({\ov F}\,{\ov R} -{\ov H}\,{\ov S})}\;,\nonumber\\
&&{\underline {S(1){\ov S}(2)}} = -{{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}
+\zw{1 -\overline{\theta}_{12} {\ov H} -\theta_{12}\overline{\theta}_{12} (F{\ov F}- R{\ov R})} \;,\;
{\underline{S(1)R(2)}} = -\zw{\theta_{12}\overline{\theta}_{12}} SR\;, \nonumber\\
&&{\underline {S(1){\ov R}(2)}} = \zw{\theta_{12} F +\theta_{12}\overline{\theta}_{12} {\ov D}F}\;, \;
{\underline {{\ov S}(1)R(2)}} = \zw{\overline{\theta}_{12} {\ov F}
+\theta_{12}\overline{\theta}_{12} ( R{\ov S} + H {\ov F} - D{\ov F})}\;,\nonumber\\
&&{\underline {R(1){\ov R}(2)}} = -{{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}
+ \zw{1 + \theta_{12} H +\theta_{12}\overline{\theta}_{12} {\ov D} H} \;.\label{sope23}
\end{eqnarray}
All other OPEs are vanishing. The superfields
in the r.h.s. are evaluated at the point (2).
There is only one local Sugawara realization of $N=4$ SCA associated
with this affine $sl(2|1)$ superalgebra. It is explicitly given
by the relations
\begin{equation} \label{sl21N4}
J = {\ov D} H + D {\ov H} + H{\ov H} + F {\ov F}\; ,\;
W = D{\ov F}\; , \;
{\ov W} = {\ov D} F \; .
\end{equation}
It involves only the superfields ($H, {\ov H}, F, {\ov F}$)
which generate just the ${\widehat{sl(2)\oplus u(1)}}$-superaffine
subalgebra. It can be checked that no Sugawara construction
involving all the $sl(2|1)$ superfields exists in this case. The
$N=4$ SKdV hamiltonians constructed from the superfields \p{sl21N4}
produce an mKdV type hierarchy of the evolution equations for the
$\widehat{sl(2|1)}$ supercurrents through the OPE relations \p{sope23}.
Note that the supercurrents \p{sl21N4} generate
global non-linear automorphisms of $N=2$ $\widehat{sl(2|1)}$
(preserving both
the OPEs \p{sope23} and the constraints \p{cond23}), such that their
algebra formally coincides with the $N=4$ supersymmetry algebra.
However, these
fermionic transformations close in a standard way on $z$-translations
only on the ${\widehat{sl(2)\oplus u(1)}}$ subset. On the rest of
affine supercurrents they yield complicated
composite objects in the closure. It is of course a consequence of
the fact that the true $z$-translations of all supercurrents are generated
by the full $N=2$ stress-tensor on the affine superalgebra, while
$N=4$ SCA \p{sl21N4} contains the stress-tensor on a subalgebra.
So this fermionic automorphism symmetry cannot be viewed
as promoting the manifest $N=2$ supersymmetry
to $N=4$ one \footnote{This
kind of odd automorphisms is inherent to any $N=2$ affine algebra or
superalgebra containing ${\widehat{sl(2)\oplus u(1)}}$ subalgebra.}.
Thus the $N=2$ superaffine $\widehat{sl(2|1)}$ algebra as a whole
possesses no hidden $N=4$ structure, as distinct from its
${\widehat{sl(2)\oplus u(1)}}$ subalgebra. This obviously implies
that the super mKdV hierarchy induced on the full set of
the $\widehat{sl(2|1)}$ supercurrents through the Sugawara construction
\p{sl21N4} is not $N=4$ supersymmetric as well.
\section{The superaffine ${\widehat{sl(3)}}$ algebra}
The superaffinization of the $sl(3)$ algebra is spanned by eight
fermionic $N=2$ superfields subjected to non-linear (anti)chiral
constraints. We denote these superfields $H, F, R, S$ (their antichiral
counterparts are ${\ov H}, {\ov F}, {\ov R}, {\ov S}$). The
${\widehat{sl(2)\oplus u(1)}}$ subalgebra is represented by $H, {\ov
H}, S, {\ov S}$. As before the Cartan subalgebra
is represented by the standard (anti)chiral $N=2$
superfields $H, {\overline H}$
\begin{eqnarray}
&& D H = {\ov D}\, {\ov H} = 0~. \label{chirH}
\end{eqnarray}
The remaining supercurrents are subject to
the non-linear constraints:
\begin{eqnarray}
&&DS = - HS\;, \quad D F = -{\overline\alpha} H F + SR \;, \quad
DR = {\alpha} H R \;, \nonumber \\
&&{\ov D}\, {\ov S} = {\ov H}\,{\ov S}\; ,\quad
{\ov D}\, {\ov F} = {\alpha} {\ov H}\,{\ov F} -{\ov S}\,{\ov R}\; , \quad
{\ov D}\,{\ov R} = - {\overline\alpha} {\ov H}\,{\ov R}\;, \label{cond3}
\end{eqnarray}
where
\begin{equation} \label{param}
{\alpha}= \frac{1+i\sqrt{3}}{2} \;, \quad {\overline\alpha}= \frac{1-i\sqrt{3}}{2} \;.
\end{equation}
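Note that $\alpha$ and ${\overline\alpha}$ are primitive sixth roots of
unity, so that $\alpha{\overline\alpha}=\alpha+{\overline\alpha}=1$ and
$\alpha^2=\alpha-1$; these identities are useful when checking the
constraints \p{cond3} against the OPEs below. A one-line verification:
\begin{verbatim}
# The constants (param) are primitive sixth roots of unity:
import sympy as sp
al  = (1 + sp.I*sp.sqrt(3))/2
alb = (1 - sp.I*sp.sqrt(3))/2
print(sp.simplify(al*alb), sp.simplify(al + alb))  # 1, 1
print(sp.expand(al**2 - al + 1))                   # 0, i.e. al = exp(i*pi/3)
\end{verbatim}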
The non-vanishing OPEs of the classical $N=2$ superaffine
${\widehat{sl(3)}}$ algebra read:
\begin{eqnarray}
&&{\underline{H(1) {\ov H}(2)}} = {{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}- \zw{1} \;,\;
{\underline {H(1) F(2)}} = \zw{{\alpha}\overline{\theta}_{12}} F \;,\;
{\underline {H(1) {\ov F}(2)}} = -\zw{{\alpha} \overline{\theta}_{12}} {\ov F}\;, \nonumber\\
&&{\underline {H(1) S(2) }} = \zw{\overline{\theta}_{12}} S\;, \;
{\underline {H(1) {\ov S}(2)}} = -\zw{\overline{\theta}_{12}} {\ov S}\; ,\;
{\underline {H(1) R(2)}}= -\zw{{\overline\alpha} \overline{\theta}_{12}} R\; ,\;
{\underline {H(1) {\ov R}(2)}} = \zw{{\overline\alpha} \overline{\theta}_{12}} {\ov R}\; ,\nonumber\\
&&{\underline {{\ov H}(1) F(2)}} = \zw{{\overline\alpha} \theta_{12}} F\; , \;
{\underline {{\ov H}(1) {\ov F}(2)}} = -\zw{{\overline\alpha} \theta_{12}} {\ov F}\; , \;
{\underline {{\ov H}(1) S(2)}} = \zw{ \theta_{12}} S\; , \;
{\underline {{\ov H}(1) {\ov S}(2)}} = - \zw{ \theta_{12}} {\ov S}\; , \nonumber\\
&&{\underline {{\ov H}(1) R(2)}} = -\zw{ {\alpha} \theta_{12}} R \; , \;
{\underline {{\ov H}(1) {\ov R} (2)}} = \zw{{\alpha} \theta_{12}} {\ov R}\; , \nonumber\\
&&{\underline { F(1){\ov F}(2)}} = {{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2}
-\zw{1 - {\alpha} \overline{\theta}_{12} {\ov H} -{\overline\alpha} \theta_{12} H -
\theta_{12} \overline{\theta}_{12}
( F {\ov F} + H {\ov H} + R {\ov R}+ S{\ov S} +{\overline\alpha} {\ov D} H)}\;,
\nonumber\\
&&{\underline {F(1) S(2) }} = \zw{{\alpha} \theta_{12} \overline{\theta}_{12}} FS \; , \;
{\underline {F(1){\ov S}(2)}}= \zw{ \theta_{12} R + \theta_{12} \overline{\theta}_{12} ({\ov D} R +
{\overline\alpha} F{\ov S} - {\ov H} R )}\; ,\nonumber\\
&&{\underline { F(1)R(2)}} = \zw{ {\overline\alpha} \theta_{12}\overline{\theta}_{12}} FR\; , \;
{\underline {F(1){\ov R}(2)}} = -\zw {\theta_{12} S +\theta_{12}\overline{\theta}_{12} ( {\ov D} S -{\alpha} F{\ov R}
+{\overline\alpha} {\ov H} S)}\; ,\nonumber\\
&&{\underline { {\ov F}(1) S(2)}} = -\zw{ \overline{\theta}_{12} {\ov R } - \theta_{12} \overline{\theta}_{12}
( H{\ov R} -{\alpha} {\ov F} S + D {\ov R} )} \; , \;
{\underline { {\ov F}(1) {\ov S}(2)}} = -\zw{{\overline\alpha} \theta_{12}\overline{\theta}_{12}}
{\ov F}\,{\ov S}\; , \nonumber\\
&&{\underline { {\ov F} (1) R(2)}} = \zw{ \overline{\theta}_{12} {\ov S} - \theta_{12} \overline{\theta}_{12}( D {\ov S}-
{\alpha} H {\ov S} +{\overline\alpha} {\ov F} R)} \; , \;
{\underline { {\ov F} (1) {\ov R} (2)}}
= -\zw{ {\alpha}\theta_{12} \overline{\theta}_{12}} {\ov F} \,{\ov R}\;, \nonumber\\
&& {\underline {S(1) {\ov S} (2)}} = {{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over{Z_{12}}^2} -
\zw{1- \overline{\theta}_{12} {\ov H} - \theta_{12} H - \theta_{12} \overline{\theta}_{12}
(S{\ov S} + H {\ov H} + {\ov D} H )}\;,\; \nonumber\\
&&{\underline { S(1) R(2)}} = -\zw{ \overline{\theta}_{12} F + \theta_{12} \overline{\theta}_{12} (H F - {\overline\alpha} S R )} \;, \;
{\underline {S(1){\ov R}(2)}} = -\zw{{\overline\alpha} \theta_{12}\overline{\theta}_{12}} S{\ov R} \; ,\nonumber\\
&&{\underline{{\ov S}(1) R(2)}} = \zw{{\alpha} \theta_{12}\overline{\theta}_{12}} {\ov S} R\; , \;
{\underline{{\ov S}(1){\ov R}(2)}} = \zw{ \theta_{12} {\ov F} + \theta_{12} \overline{\theta}_{12} ({\ov H}\,
{\ov F} -{\alpha} {\ov S}\, {\ov R})}\; ,\nonumber\\
&&{\underline { R(1){\ov R}(2)}} =
{{\frac{1}{2}\theta_{12}\overline{\theta}_{12}}\over {Z_{12}}^2} -\zw{1 + {\overline\alpha} \overline{\theta}_{12}
{\ov H} +{\alpha} \theta_{12} H - \theta_{12}\overline{\theta}_{12} ( H{\ov H} + R {\ov R}
-{\alpha} {\ov D} H)} \;. \label{sl3}
\end{eqnarray}
There exist two non-equivalent ways to embed the affine supercurrents
into the minimal $N=4$ SCA via a local Sugawara construction. One
realization, like in the $\widehat{sl(2|1)}$ case, corresponds to the
``short'' Sugawara construction based solely upon the
$\widehat{sl(2)\oplus u(1)}$
subalgebra. The second one,
which in what follows is referred to as the ``long'' Sugawara
construction, involves {\it all} the $sl(3)$-valued affine supercurrents.
This realization corresponds to a new globally $N=4$
supersymmetric hierarchy realized on the full set of superaffine
$\widehat{sl(3)}$ supercurrents. Thus the set of superfields generating the
superaffine ${\widehat{sl(3)}}$ algebra supplies the first known example of a
Poisson-brackets structure carrying two non-equivalent hierarchies of the
super mKdV type associated with $N=4$ SKdV hierarchy.
The two Sugawara realizations are respectively given by:
{\em i)} in the ``short'' case,
\begin{equation} \label{short}
J = D{\ov H} + {\ov D} H + H {\ov H} + S {\ov S}\; ,\quad
W = D {\ov S}\; , \quad
{\ov W} = {\ov D}S \;,
\end{equation}
{\em ii)} in the ``long'' case
\begin{equation} \label{long}
J= H{\ov H} + F {\ov F} + R {\ov R} + S {\ov S}
+ {\overline\alpha}{\ov D} H +{\alpha} D{\ov H}\; , \quad
W = D {\ov F}\; , \quad
{\ov W} = {\ov D}F \;.
\end{equation}
Their Poisson brackets (OPEs) are given by the relations (\ref{n4sca}).
\section{$N=4$ supersymmetry}
Like in the $\widehat{sl(2\vert 1)}$ case, the ``short'' Sugawara
$N=4$ supercurrents \p{short} do not produce the true global
$N=4$ supersymmetry for the entire set of the affine
supercurrents, yielding it only for the $\widehat{sl(2)\oplus
u(1)}$ subset. At the same time, the ``long'' Sugawara \p{long}
generates such a supersymmetry. In the $z, \theta, \bar{\theta} $
expansion of the supercurrents $J, W, {\ov W}$ the global
supersymmetry generators are present as the coefficients of the
monomials $\sim \theta / z$. {}From $J$ there come out the
generators of the manifest linearly realized $N=2$ supersymmetry,
while those of the hidden $N=2$ supersymmetry appear from $W, {\ov
W}$. The precise form of the hidden supersymmetry transformations
can then be easily read off from the OPEs \p{sl3}:
\begin{eqnarray}
\delta H & = & {\ov\epsilon} \left( HF -{\alpha} \, SR \right)
+ \epsilon {\alpha} \, D{\ov F} \; , \qquad
\delta {\ov H} = {\ov\epsilon}\, {\overline\alpha} \, {\ov D} F
-\epsilon \left( {\ov H}\,{\ov F} -{\overline\alpha} \,{\ov S}\,{\ov R}
\right) \;, \nonumber \\
\delta F &=& -\epsilon\left( {\alpha} D{\ov H}+F{\ov F}+H{\ov H}+
R{\ov R}+S{\ov S}\right)\,,
\delta{\ov F} = -{\ov \epsilon}\left(
{\overline\alpha}\,{\ov D}H+F{\ov F}+H{\ov H}+
R{\ov R}+S{\ov S}\right)\;, \nonumber \\
\delta S & = & -{\ov \epsilon} {\alpha}\, FS-\epsilon\left(
D{\ov R}-{\alpha}\,{\ov F}S+H{\ov R}\right)\,, \;
\delta {\ov S} = -{\ov \epsilon} \left(
{\ov D} R +{\overline\alpha} \, F{\ov S}-{\ov H}R \right)
+\epsilon {\overline\alpha}\, {\ov F}\,{\ov S}\;, \nonumber \\
\delta R & = & -{\ov \epsilon}\,{\overline\alpha}\,FR + \epsilon\left( D{\ov S}
+ {\overline\alpha}\,{\ov F}R -\alpha \, H{\ov S}\right)\,,\;
\delta {\ov R} = {\ov \epsilon}\left( {\ov D} S-
{\alpha} \, F{\ov R} +{\overline\alpha}\,{\ov H} S \right)
+ \epsilon\alpha \,{\ov F}\,{\ov R} \;. \label{hidN2}
\end{eqnarray}
Here $\epsilon, {\ov \epsilon}$ are the corresponding odd
transformation parameters. One can check that these
transformations have just the same standard closure in terms of
$\partial_z $ as the manifest $N=2$ supersymmetry
transformations, despite the presence of nonlinear terms. Also it
is straightforward to verify that the constraints \p{cond3} and
the OPEs \p{sl3} are covariant under these transformations.
Let us examine the issue of reducibility of the set of the $N=2$
$\widehat{sl(3)}$ currents with respect to the full $N=4$ supersymmetry.
In the $\widehat{sl(2)\oplus u(1)}$ case the involved currents form an
irreducible $N=4$ multiplet which is a nonlinear version of the
multiplet consisting of two chiral (and anti-chiral) $N=2$
superfields \cite{IK}. In the given case one can expect that eight $N=2$
$\widehat{sl(3)}$ currents form a reducible multiplet which can be divided
into a sum of two irreducible ones, each involving four superfields
(a pair of chiral and anti-chiral superfields together with
its conjugate). However, looking at the r.h.s. of \p{hidN2}, it is
difficult to imagine how this could be done in a purely algebraic
and local way. Nevertheless, there is a non-local redefinition of the
supercurrents which partly does this job. As the first step one
introduces a prepotential for the chiral superfields $H, {\ov H}$
\begin{eqnarray}
H = DV, \quad {\ov H} = -{\ov D}\,{\ov V}
\end{eqnarray}
and chooses a gauge for $V$ in which it is expressed
through $H, {\ov H}$ \cite{Egau}
\begin{eqnarray}
V &=& - \partial^{-1}({\ov D} H + {\overline\alpha} \, D{\ov H})\;,\quad
{\ov V} = \partial^{-1}(D {\ov H} + {\alpha} \, {\ov D}H) \;, \quad
V = - {\overline\alpha} {\ov V}\;, \label{relV} \\
\delta V &=& {\alpha} (\bar \epsilon F -\epsilon {\ov F}) \;, \qquad
\delta {\ov V} = {\overline\alpha} (\bar \epsilon F -\epsilon {\ov F})
\;. \label{tranV}
\end{eqnarray}
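The last relation in \p{relV} is an immediate consequence of
$\alpha{\overline\alpha}=1$; a minimal symbolic sketch (with ${\ov D}H$ and
$D{\ov H}$ treated as formal commuting stand-ins and the common factor
$\partial^{-1}$ stripped off):
\begin{verbatim}
# Consistency of (relV): V + alb*Vbar vanishes identically since al*alb = 1.
import sympy as sp
al  = (1 + sp.I*sp.sqrt(3))/2
alb = (1 - sp.I*sp.sqrt(3))/2
DbH, DHb = sp.symbols('DbH DHb')    # stand-ins for Dbar H and D Hbar
V    = -(DbH + alb*DHb)             # eq. (relV), overall d^{-1} dropped
Vbar =   DHb + al*DbH
print(sp.simplify(V + alb*Vbar))    # 0
\end{verbatim}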
Using this newly introduced quantity, one can pass to the
supercurrents which satisfy the standard chirality conditions
following from the original constraints \p{chirH}, \p{cond3}
and equivalent to them
\begin{eqnarray} S &=& \exp\{-V\}\tilde{S}\;, \quad {\ov S} =
\exp\{{\alpha} V\}{\ov {\tilde{S}}}
\;, \quad
R = \exp\{{\alpha} V\}\tilde{R}\;, \quad {\ov R} = \exp\{-V\}{\ov {\tilde{R}}}\;,
\nonumber \\
F &=& \exp\{-{\overline\alpha} V\}[\tilde{F} -
\partial^{-1}{\ov D}(\tilde{S}\tilde{R}) + \partial^{-1}D
({\ov {\tilde{S}}}\,{\ov {\tilde{R}}})]\;,\nonumber \\
{\ov F} &=& \exp\{-{\overline\alpha} V\}[{\ov {\tilde{F}}} -
\partial^{-1}{\ov D}(\tilde{S}\tilde{R}) + \partial^{-1}D
({\ov {\tilde{S}}}\,{\ov {\tilde{R}}})]\;,
\label{redef2}
\end{eqnarray}
\begin{eqnarray}
D\tilde{S} = D \tilde{R} = D \tilde{F} = 0\;, \qquad
{\ov D}{\ov {\tilde{S}}} = {\ov D}{\ov {\tilde{R}}} =
{\ov D}{\ov {\tilde{F}}} =
0\;. \label{chirSRF}
\end{eqnarray}
The $N=4$ transformation rules \p{hidN2} are radically simplified in
the new basis
\begin{eqnarray}
&& \delta \tilde{S} = -\epsilon D {\ov {\tilde{R}}} \;,\quad
\delta {\ov {\tilde{S}}} = -\bar\epsilon {\ov D} \tilde{R} \;,\quad
\delta \tilde{R} = \epsilon D {\ov {\tilde{S}}} \;,\quad
\delta {\ov {\tilde{R}}} = \bar\epsilon {\ov D} \tilde{S} \;, \nonumber \\
&& \delta \tilde{F} = \epsilon D {\ov D} (\exp\{{\overline\alpha} V\})\;, \qquad
\delta {\ov {\tilde{F}}} = -\bar\epsilon {\ov D}D
(\exp\{{\overline\alpha} V\})\;, \nonumber \\
&& \delta (\exp\{{\overline\alpha} V\}) = \bar\epsilon \tilde{F} -\epsilon {\ov
{\tilde{F}}} -(\bar\epsilon - \epsilon)
\partial^{-1}[{\ov D}(\tilde{S}\tilde{R}) - D
({\ov {\tilde{S}}}\,{\ov {\tilde{R}}})] \;.\label{tranVn} \end{eqnarray} We
see that the supercurrents $\tilde{S}\;, {\ov {\tilde{S}}}\;,
\tilde{R}\;, {\ov {\tilde{R}}}$ form an irreducible $N=4$
supermultiplet, just of the kind found in \cite{IK}. At the same
time, the superfields $V, \tilde{F}\;, {\ov {\tilde{F}}}$ do not
form a closed set: they transform through the former multiplet. We
did not succeed in finding the basis where these two sets of
transformations entirely decouple from each other. So in the
present case we are facing a new phenomenon, namely that the
$N=2 \;\;\widehat{sl(3)}$ supercurrents form a not fully reducible
representation of $N=4$ supersymmetry. The same can be anticipated
for higher rank affine supergroups with a hidden $N=4$ structure.
One observes that putting the supercurrents $\tilde{S}\;, {\ov
{\tilde{S}}}\;, \tilde{R}\;, {\ov {\tilde{R}}}$ (or their
counterparts in the original basis) equal to zero is the
truncation consistent with $N=4$ supersymmetry. After this
truncation the remaining supercurrents $H,F, {\ov H}, {\ov F}$ form
just the same irreducible multiplet as in the
$\widehat{sl(2)\oplus u(1)}$ case \cite{IKT}.
Note that the above peculiarity does not show up at the level of
the composite supermultiplets like \p{long}. Indeed, it is straightforward to
see that the supercurrents in \p{long} form the same irreducible
representation as in the $\widehat{sl(2)\oplus u(1)}$ case \cite{IKT}
\begin{eqnarray}
\delta J = -\epsilon {\ov D} W - \bar \epsilon D{\ov W}\;, \qquad
\delta W = \bar \epsilon D J\;, \qquad
\delta {\ov W} = \epsilon {\ov D} J\;. \label{JWtran}
\end{eqnarray}
Another irreducible multiplet is comprised by the following
composite supercurrents
\begin{eqnarray}
\hat{J} &=& H{\ov H} + F{\ov F} + S{\ov S} +R{\ov R}\;, \nonumber \\
\quad \hat{W} &=& DF = -{\overline\alpha} HF +SR \;, \; \hat{{\ov W}} =
{\ov D}\,{\ov F} = {\alpha} {\ov H}\,{\ov F} - {\ov S}\,{\ov R}~.
\label{top}
\end{eqnarray}
Under \p{hidN2} they transform as
\begin{eqnarray}
\delta \hat{J} = -\epsilon D \hat{{\ov W}} -
\bar \epsilon {\ov D} \hat{W}\;, \qquad
\delta \hat{W} = \epsilon D \hat{J}\;, \qquad
\delta \hat{{\ov W}} = \bar\epsilon {\ov D} \hat{J}\;.
\label{JWtran2}
\end{eqnarray}
The OPEs of these supercurrents can be checked to generate another
``small'' $N=4$ SCA with zero central charge, i.e. a topological
``small'' $N=4$ SCA. The same SCA was found in the
$\widehat{sl(2)\oplus u(1)}$ case \cite{IKT}. This SCA
and the first one together close on the ``large'' $N=4$ SCA in some particular
realization \cite{RASS,IKT}. Thus the
$N=2 \;\;\widehat{sl(3)}$ affine
superalgebra provides a Sugawara type construction for this
extended SCA as well. It would be of interest to inquire whether this
superalgebra conceals in its enveloping algebra any other SCA containing
$N=4$ SCA as a subalgebra, e.g., possible $N=4$ extensions of nonlinear
$W_n$ algebras.
\section{$N=4$ mKdV-type hierarchies}
Both non-equivalent $N=4$ Sugawara constructions, eqs.
(\ref{short}) and (\ref{long}), define Poisson maps. As a
consequence, the superaffine $sl(3)$-valued supercurrents inherit
all the integrable hierarchies associated with $N=4$ SCA.
The first known example of a hierarchy with $N=4$ SCA
as the Poisson structure is the $N=4$ SKdV
hierarchy (see \cite{DI}). The densities of the lowest hamiltonians from
an infinite sequence of the corresponding superfield hamiltonians
in involution, up to an overall normalization factor, read
\begin{eqnarray}
{\cal H}_1 &=& J\nonumber\\ {\cal H}_2 &=& -{1\over 2}( J^2 - 2 W{\ov
W})\nonumber \\
\relax {\cal H}_3 &=&
{1\over 2} (J [ D, {\overline D}] J + 2 W {\overline W}' +{2\over 3} J^3
- 4 J W{\ov W})~.
\end{eqnarray}
Here the $N=2$ superfields $J$, $W$, ${\ov W}$ satisfy the
Poisson brackets (\ref{n4sca}).
Let us concisely denote by $\Phi_a$, $a=1,2,...,8$,
the $\widehat{sl(3)}$-valued superfields $H,F,R,S$ together with the barred
ones. Their evolution equations which, by construction, are
compatible with the $N=4$ SKdV flows, for the $k$-th flow
($k=1,2,...$) are written as
\begin{eqnarray}
\relax
\frac{\partial}{\partial t_k}\Phi_a (X,t_k) &=& \{ \int dY
{\cal H}_k (Y, t_k) , \Phi_a (X,t_k)\}~.
\end{eqnarray}
The Poisson bracket here is given by the superaffine $\widehat{sl(3)}$
structure (\ref{sl3}), with $X, Y$ being two different ``points'' of
$N=2$ superspace.
The identification of the superfields $J$, $W$, $
{\ov W}$ in terms of the affine supercurrents
can be made either via eqs. (\ref{short}), i.e. the
``short'' Sugawara, or via eqs. (\ref{long}),
that is the ``long'' Sugawara. Thus the same $N=4$ SKdV
hierarchy proves to produce two non-equivalent
mKdV type hierarchies for the affine supercurrents, depending on
the choice of the underlying Sugawara construction.
The first hierarchy is $N=2$ supersymmetric, while the other one
gives a new example of globally $N=4$ supersymmetric hierarchy.
Let us briefly outline the characteristic features of
these two hierarchies.
It is easy to see that
for the superfields $H, {\ov H}, S, {\ov S}$ corresponding to the
superaffine algebra $\widehat{sl(2)\oplus u(1)}$ as
a subalgebra in $\widehat{sl(3)}$, the ``short'' hierarchy coincides
with the $N=4$ NLS-mKdV hierarchy of ref. \cite{IKT}.
For the remaining $\widehat{sl(3)}$ supercurrents one gets
the evolution equations in the ``background'' of the
basic superfields just mentioned.
New features are revealed while examining the ``long'', i.e. $N=4$
supersymmetric $\widehat{sl(3)}$ mKdV hierarchy. It can be easily
checked that for all non-trivial flows $(k \geq 2)$ the evolution
equations for any given superfield $\Phi_a$ necessarily contain in the
r.h.s. the whole set of eight $\widehat{sl(3)}$ supercurrents.
In this case the previous $N=4$ NLS-mKdV hierarchy can also be
recovered. However, it is obtained in a less trivial
way. Namely, it is produced only after cosetting out
the superfields $R, S$ and ${\ov R}, {\ov S}$, i.e. those associated
with the simple roots of $sl(3)$ (as usual, passing to
Dirac brackets is required in this case). As was mentioned in the
preceding Section, this truncation preserves the global $N=4$
supersymmetry.
Let us also remark that, besides the two mKdV
hierarchies carried by the superaffine $\widehat{sl(3)}$ algebra
and discussed so far, this Poisson bracket structure also carries
at least one extra pair of non-equivalent hierarchies of the mKdV type
possessing only global $N=2$ supersymmetry.
It was shown in \cite{DGI} (see also \cite{DG2}) that the enveloping
algebra of $N=4$ SCA
contains, apart from an infinite abelian subalgebra corresponding
to the genuine $N=4$ SKdV hierarchy, also an infinite abelian
subalgebra formed by the hamiltonians in involution associated
with a different hierarchy referred to as the ``quasi'' $N=4$ SKdV one.
This hierarchy admits only a global $N=2$ supersymmetry
and can be thought of as an integrable extension of the $a=-2$, $N=2$ SKdV
hierarchy. In \cite{DGI} a non-polynomial
Miura-type transformation was explicitly found which relates, in a
surprising way, the $N=4$ SCA
to the non-linear $N=2$ super-$W_3$ algebra. This transformation
maps the ``quasi'' $N=4$ SKdV hierarchy onto the $\alpha=-2$, $N=2$
Boussinesq hierarchy. Since these results can be
rephrased in terms of the Poisson brackets structure alone, and the
same is true both for our ``short'' (\ref{short}) and ``long''
(\ref{long}) Sugawara constructions, it immediately follows that the
super-affine $\widehat{sl(3)}$ superfields also carry two non-equivalent
``quasi'' $N=4$ SKdV structures and can be mapped in two non-equivalent
ways onto the $\alpha=-2$, $N=2$ Boussinesq hierarchy.
\section{Conclusions}
In this work we have investigated the local Sugawara
constructions leading to the $N=4$ SCA expressed in terms of the
superfields corresponding to the $N=2$ superaffinization of the
$sl(2|1)$ and the $sl(3)$ algebras. We have shown that the
$\widehat{sl(3)}$ case admits a non-trivial $N=4$ Sugawara
construction involving all eight affine supercurrents and
generating the hidden $N=4$ supersymmetry of $N=2
\;\;\widehat{sl(3)}$ algebra. This property has been used to
construct a new $N=4$ supersymmetric mKdV hierarchy associated
with $N=4$ SKdV. Another mKdV hierarchy is obtained using the
$N=4$ Sugawara construction on the subalgebra
$\widehat{sl(2)\oplus u(1)}$. Thus the $N=2$ $\widehat{sl(3)}$
algebra was shown to provide the first example of a Poisson
bracket structure carrying two non-equivalent integrable mKdV-type
hierarchies associated with the $N=4$ SKdV one. Also, the
existence of two non-trivial $N=2$ supersymmetric mKdV-type
hierarchies associated with the same superaffine Poisson structure
and ``squaring'' to the quasi $N=4$ SKdV hierarchy of ref.
\cite{DGI} was noticed.
An interesting problem is to generalize the two Sugawara constructions
to the full quantum case and to find an $N=4$ analog (if it exists)
of the well-known GKO coset construction \cite{GKO} widely used in
the case of bosonic affine algebras. It is also of importance to perform
a more detailed analysis of the enveloping algebra of $N=2$
$\widehat{sl(3)}$ with the aim to list all irreducible composite $N=4$
supermultiplets and to study possible $N=4$ extended $W$ type algebras
generated by these composite supercurrents. Finally, it still remains
to classify all possible $N=2$ affine superalgebras admitting the
hidden $N=4$ structure, i.e. $N=4$ affine superalgebras. As is clear from
the two available examples ($\widehat{sl(2)\oplus u(1)}$ and
$\widehat{sl(3)}$), a sufficient condition for the existence of such
a structure on the given affine superalgebra is the possibility to
define $N=4$ SCA on it via the corresponding ``long'' Sugawara
construction, with the full $N=2$ stress-tensor included.
\vskip1cm
\noindent{\Large{\bf Acknowledgments}} \\
\noindent F.T. wishes to express his gratitude to the
JINR-Bogoliubov Laboratory of Theoretical Physics, where this work
has been completed, for the kind hospitality. E.I. and S.K.
acknowledge support from the grants RFBR-99-02-18417,
INTAS-96-0308 and INTAS-96-538.
\vspace{0.3cm}
\section*{Appendix: the second flow of the ``long'' $\widehat{sl(3)}$
$N=4$ mKdV}
\vspace{0.1cm}
For completeness we present here the evolution equations for the
second flow of the ``long''
$\widehat{sl(3)}$ mKdV hierarchy (it is the first non-trivial flow).
We have \footnote{In order to save space and to avoid unnecessary
duplication we present the equations only for the
non-linear chiral sector.}
\begin{eqnarray}
{\dot H} &=& - 2\partial^2 H- 2\alpha ( 2HD\partial
\overline{H}+\partial HD\overline{H}-SD\partial\overline{S}
-\partial S D\overline{S}- R D\partial \overline{R}-\partial R
D\overline{R})-\nonumber\\ && -4{\overline \alpha} \partial H
\overline{D}H + 2\alpha(\overline{F}S\partial R +
\overline{F}\partial S R - H S\partial\overline{S} - D\overline{F}
S\overline{D}R+D\overline{F}\overline{D}SR) -
\nonumber\\&&-2{\overline\alpha} HR
\partial \overline{R}- 2(1+\alpha)(H\partial S
\overline{S}+\partial H S\overline{S}) -2(1-{\overline{\alpha}}) (H\partial R
\overline{R}+\partial H R\overline{R})+\nonumber\\&&
+2(2H\overline{D}FD\overline{F}+H\overline{D}RD\overline{R}+
H\overline{D}SD\overline{S}
-2\overline{D}HFD\overline{F}-
\overline{D}HRD\overline{R}-\overline{D}HSD\overline{S}
-2H\partial F \overline{F} -\nonumber\\&&- 2\partial
HF\overline{F})
+2\alpha (2H\overline{H}FD\overline{F}+2HD\overline{H}F\overline{F}+
S\overline{S} RD\overline{R}+SD\overline{S}R\overline{R})
-\nonumber\\&&-2{\overline{\alpha}}(H\overline{H}RD
\overline{R}+HD\overline{H}R\overline{R}+
\overline{H}D\overline{F}SR
-D\overline{H}\overline{F}SR)+\nonumber\\ &&
+2(2HF\overline{S}D\overline{R}-2HF
D\overline{S}\overline{R}-H\overline{F}S\overline{D}R
+H\overline{F}\overline{D}SR
+H\overline{H}SD\overline{S}+HD\overline{H}S\overline{S}+
\overline{D}H\overline{F}SR)-\nonumber\\ &&
-2\alpha H\overline{H}\overline{F}SR -
2HS\overline{S}R\overline{R}\nonumber\\
{\dot S} &=&
2{\overline\alpha}(D\overline{H}\partial S -\overline{D}H\partial S +
D\partial \overline{H}S -\partial H\overline{D}S) -
2\overline{D}\partial H S - 2\partial FD\overline{R}-\nonumber\\
&& -2\alpha(2H\partial
\overline{H}S+SR\partial\overline{R}+S\partial
R\overline{R}+\partial H \overline{H}S)-\nonumber\\ &&
-2{\overline\alpha}
(H\overline{D}FD\overline{R}-\overline{D}HFD\overline{R}+\partial
S S\overline{S}) -\nonumber\\ && -2(F\overline{F}\partial S
+FD\overline{F}\overline{D}S + H\overline{H}\partial S
+HD\overline{H}\overline{D}S -H\partial
F\overline{R}-S\overline{D}RD\overline{R}+S\overline{D}SD\overline{S}+
2\overline{D}SRD\overline{R}+\nonumber\\ && +\partial S
R\overline{R}) +2\alpha (FSD\overline{S}R
-FS\overline{S}D\overline{R}) -2{\overline\alpha}
(HF\overline{F}\overline{D}S
+H\overline{D}HF\overline{R}+D\overline{H}F\overline{F}S+\nonumber\\
&& +\overline{H}FD\overline{F}S)
+2(1+\alpha)(\overline{F}S\overline{D}SR +
H\overline{D}SR\overline{R})-2 (
2HS\overline{D}R\overline{R}+2HS\overline{D}S\overline{S}-2H
\overline{D}F\overline{F}S-
\nonumber\\ &&
-H\overline{D}H\overline{H}S+\overline{D}HF\overline{F}S)
+2\alpha H\overline{H}F\overline{F}S
-2{\overline\alpha} H\overline{H}SR\overline{R}+2HFS\overline{S}R\nonumber\\
{\dot R} &=& 2\alpha (\overline{D}\partial H R -
D\overline{H}\partial R) -2{\overline\alpha}(\overline{D}H\partial R
+\partial H \overline{D}R) -2(D\partial \overline{H}R -\partial
FD\overline{S}) +\nonumber\\ && +2\alpha (H\partial
F\overline{S}-\partial R R\overline{R}) +2{\overline\alpha}
(H\overline{D}FD\overline{S}-2H\partial\overline{H}R-S\partial\overline{S}R-
\overline{D}HFD\overline{S}-\nonumber\\ && -\partial
H\overline{H}R -\partial S\overline{S}R)-2(F\overline{F}\partial R
+FD\overline{F}\overline{D}R + H\overline{H}\partial R +H
D\overline{H}\overline{D}R +R\overline{D}RD\overline{R} +S
\overline{S}\partial R +\nonumber\\ &&+ 2
SD\overline{S}\overline{D}R-\overline{D}SD\overline{S}R ) +
2\alpha(2HR\overline{D}R
\overline{R}-2H\overline{D}F\overline{F}R-H\overline{D}H
\overline{H}R-2H\overline{D}S \overline{S}R +\nonumber\\
&&+\overline{D}HF\overline{F}R) + 2{\overline\alpha}
(F\overline{S}RD\overline{R}+FD\overline{S}R\overline{R}-HF
\overline{F}\overline{D}R)
+2(1+{\overline\alpha})(\overline{F}SR\overline{D}R) -\nonumber\\
&&-2(1+\alpha )HS\overline{S}\overline{D}R
-2(H\overline{D}HF\overline{S}-\overline{H}FD\overline{F}R
-D\overline{H}F\overline{F}R) +\nonumber\\ &&+2\alpha
(HF\overline{S}R\overline{R}-H\overline{H}S\overline{S}R)
+2{\overline\alpha} H\overline{H}F\overline{F}R \nonumber\\ {\dot F}
&=& 2\partial^2 F -4{\alpha}D\overline{H}\partial F
-4{\overline\alpha}( \overline{D}H\partial F +\overline{D}\partial H
F) +2(\overline{D}S\partial R -S\overline{D}\partial R
+\overline{D}\partial S R-\nonumber\\ &&
-\partial S\overline{D}R)-2{\overline\alpha} (4HF\overline{D}F\overline{F}
-2H\overline{D}H\overline{H}F+\overline{H}FRD\overline{R}-
D\overline{H}FR\overline{R})
-\nonumber\\
&&
-2\alpha\overline{D}HFS\overline{S}-2(1+{\overline\alpha})
(H\overline{D}FS\overline{S}+
HF\overline{D}S\overline{S})
+2i\sqrt{3}(HF\overline{D}R\overline{R}+H\overline{D}FR\overline{R})+
\nonumber\\
&& +2 (2F\overline{F}S\overline{D}R -2F\overline{F}\overline{D}SR
+H\overline{H}S\overline{D}R -H\overline{H}\overline{D}SR +\overline{H}
FSD\overline{S}+2SR\overline{D}R\overline{R}-2S\overline{D}S\overline{S}R
+\nonumber\\ && +2\overline{D}F\overline{F}SR +\overline{D}H FR
\overline{R}+\overline{D}H\overline{H}SR
-D\overline{H}FS\overline{S})+\nonumber\\ &&+2\alpha
(D\overline{H}S\overline{D}R-D\overline{H}\overline{D}SR) -
2{\overline\alpha}\partial \overline{H}SR
-2(FR\partial\overline{R}+FS\partial\overline{S}+2F\overline{D}FD
\overline{F}
+\nonumber\\ && +F\overline{D}RD\overline{R}+
F\overline{D}SD\overline{S}+2H\overline{H}\partial F + 2 HD
\overline{H}\overline{D}F +2H\partial\overline{H}F +\overline{D}FR
D\overline{R}+\overline{D}FSD\overline{S}+\nonumber\\&&+ 2\partial
F F\overline{F}+2\partial F R\overline{R}+2\partial F
S\overline{S}) +2(1+\alpha) H\overline{H}FR\overline{R}
+2(1+{\overline\alpha}) H \overline{H}FS\overline{S}~.\nonumber
\end{eqnarray}
The parameters $\alpha, \bar\alpha$ have been defined in eq. \p{param}.
\vskip1cm
\section{Introduction}
Quantum optimal control theory is the science of shaping control pulses to manipulate quantum systems in a useful way \cite{Rice_Book, Brumer_Book}. In many quantum control systems, the time evolution is optimized under unitary time evolution. Examples of this include the evolution of many electron systems under Hamiltonian dynamics \cite{Castro_PRL_109_153603} as well as time evolution under a non-linear Schr\"odinger equation \cite{Sklarz_PRA_66_053619} or also quantum gates for quantum computing in solid state systems \cite{Egger_SUST_27_014001, Cerfontaine_arXiv, Schutjens_PRA_88_052330, Vesterinen_arXiv}. Additionally, optimization towards a target unitary time evolution can also be done in the presence of non-unitary dynamics \cite{Floether_NJP_14_073023, Herbruggen_JPB_44_154013}. However, some desired quantum processes are inherently incoherent, such as cooling \cite{Reich_NJP_15_125028}. A central application of incoherent processes is measurement within the field of circuit QED. Unlike many other detection processes in quantum physics, object and detector are made out of the same technology and act on similar time scales making careful design possible and necessary. Similar statements can be made about readout of quantum states in semiconductor quantum dots \cite{Elzerman_Nature_430_431, Gilad_PRL_97_116806, Elzerman_PRB_67_161308}. The read out mechanism depends on the type of superconducting qubit \cite{Clark_Nature_453_1031, Devoret_Science_339_1169} being used. For instance transmon qubits are typically read out through a resonator \cite{Koch_PRA_76_042319, Jeffrey_PRL_112_190504} whilst phase qubit readout is based on tunneling out of a metastable well \cite{Neeley_NatPhys_4_523, Chen_APL_101_182601}. Additionally, this tunneling mechanism can also be used to create a microwave photon counter named the Josephson photomultiplier (JPM) \cite{Chen_PRL_107_217401}. It is usually desirable to have a high measurement contrast and in some cases high speed. The latter is particularly crucial for quantum computing which can involve many measurements \cite{Fowler_PRA_86_032324}.
In this paper we expand the gradient ascent pulse engineering (GRAPE) optimal control algorithm to the optimization of non-unitary quantum channels using Choi matrices. The algorithm is presented in section \ref{Sec:algo}. We illustrate it in section \ref{Sec:PQ} with the optimization of a readout pulse for the phase qubit. Conclusions are drawn in section \ref{sec:conclusion}.
\section{Optimal Control Algorithm \label{Sec:algo}}
An open quantum system with Markovian dynamics follows the time evolution given by a Lindblad master equation \cite{Nielsen_and_Chuang_book}. The time-evolved density matrix can be found by vectorizing the master equation using the identity ${\rm col}(ABC)=(C^T\otimes A){\rm col}(B)$. Here ${\rm col}(X)=\vec{X}$ denotes column stacking of the matrix $X$. The result is a first-order differential equation $\dot{\vec{\rho}}=\mathcal{S}(t)\,\vec{\rho}$ for the vectorized density matrix $\vec{\rho}$ \cite{Herbruggen_JPB_44_154013}. This equation is similar to the Schr\"odinger equation and can be solved by exponentiating the generator $\mathcal{S}(t)$. The time evolution, of duration $T$, of a general initial density matrix is thus given by $\vec{\rho}(T)=\mathcal{T}(T)\vec{\rho}(0)$, where the time propagator $\mathcal{T}$ is the time-ordered exponential of the integral of the generator. For a column-stacked vectorized master equation the generator is
\begin{align} \label{Eqn:PII_Chap5_Gen}
\mathcal{S}(t)=&~i\LR{\hat H^T\otimes\mathds{1}-\mathds{1}\otimes\hat H}\\ \notag &+\sum\limits_l\gamma_l\LR{\hat L_l^*\otimes\hat L_l-\frac{1}{2}\hat L_l^T\hat L_l^*\otimes\mathds{1}-\frac{1}{2}\mathds{1}\otimes\hat L_l^\dagger\hat L_l^{\phantom{\dagger}}}
\end{align}
where $\hat H$ is the Hamiltonian and $\hat L_l$ is the Lindblad operator associated to the incoherent process with rate $\gamma_l$. Note that having the rates be positive for all times ensures that the resulting dynamics is completely positive and trace preserving \cite{Breuer_JPBAMOP_45_154001}. Within this generator are hidden the control fields $\boldsymbol u(t)$. They can be located in the Hamiltonian $\hat H$ which, as in the GRAPE algorithm \cite{Khaneja_JMR_172_296305}, is separated into drift $\hat H_\text{d}$ and controls $\hat H_k$. However they can also control some of the rates such that the set of rates can be split into controllable rates and drift rates $\{\gamma_l\}=\{\gamma_{l,\text{d}},\gamma_{l,\text{c}}(\boldsymbol u(t))\}$. This suggests a drift-control decomposition for the generator
\begin{align} \notag
\mathcal{S}(t)=\mathcal{S}_\text{d}+\sum\limits_k f_k(\boldsymbol u(t))\mathcal{S}_k\,.
\end{align}
The drift term $\mathcal{S}_\text{d}$ is the part of Eq. (\ref{Eqn:PII_Chap5_Gen}) containing the drift Hamiltonian $\hat H_\text{d}$ and the Lindblad operators corresponding to the drift rates $\gamma_{l,\text{d}}$. The control part is the remainder of Eq. (\ref{Eqn:PII_Chap5_Gen}). It contains terms dependent on $\boldsymbol u(t)$. The functions $f_k$ account for possible non-linear behaviors with respect to $\boldsymbol u(t)$. However, these functions $f_k$ are known and assumed to be differentiable, allowing us to use the chain rule when computing gradients with respect to the controls. When dealing with actual experiments, fine tuning of the control pulses can be done with adaptive hybrid optimal control (Ad-HOC) if these functions are not properly characterized \cite{Egger_PRL_112_240503}.
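As an illustration, the column-stacked generator (\ref{Eqn:PII_Chap5_Gen}) can be assembled in a few lines of Python/NumPy. The sketch below is a direct transcription of that equation and is meant as a minimal illustration rather than production code:

\begin{verbatim}
import numpy as np

def lindblad_generator(H, lindblad_ops, rates):
    # Column-stacking generator S with d vec(rho)/dt = S vec(rho):
    #   i (H^T (x) 1 - 1 (x) H)
    #   + sum_l g_l [ L*_l (x) L_l - (L^T_l L*_l (x) 1)/2
    #                 - (1 (x) L^dag_l L_l)/2 ]
    d = H.shape[0]
    I = np.eye(d)
    S = 1j * (np.kron(H.T, I) - np.kron(I, H))
    for L, g in zip(lindblad_ops, rates):
        S += g * (np.kron(L.conj(), L)
                  - 0.5 * np.kron(L.T @ L.conj(), I)
                  - 0.5 * np.kron(I, L.conj().T @ L))
    return S
\end{verbatim}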
Similarly to the GRAPE algorithm, the controls are discretized in time into $N$ piecewise constant control pixels of duration $\Delta T$ and the time propagator $\mathcal{T}(T)$ is approximated by
\begin{align} \notag
\mathcal{T}(T)=\prod\limits_{j=N-1}^0e^{\mathcal{S}(j\Delta T)\Delta T}\,.
\end{align}
Note that in the product early times go to the right to satisfy time ordering, so the product index counts down. $\mathcal{S}(j\Delta T)$ is the generator evaluated at the control values $\boldsymbol u(j\Delta T)$. This time evolution corresponds to a quantum channel which we wish to optimize. To do so a fidelity measure based on Choi matrices \cite{Choi_LinAlg_10_285290, Stromer_Springer_2013} is constructed. The Choi matrix $C$ is related to the time propagator $\mathcal{T}$ by reorganizing the elements according to
\begin{align} \label{Eqn:PII_Choi2T}
C_{d\alpha+\beta,d\alpha'+\beta'}=\mathcal{T}_{d\beta'+\beta,d\alpha'+\alpha}\,,
\end{align}
where $d$ is the dimension of the Hilbert space and $\alpha,\alpha',\beta,\beta'\in\{1,...,d\}$. This can be shown by noticing that the vectorized matrix $\ket{i}\!\!\bra{j}$ is the unit vector $\hat e_{dj+i}$ with 1 on entry $dj+i$ and zero elsewhere. Therefore with $[\mathcal{E}(\ket{i}\!\!\bra{j})]_{\beta,\beta'}=\mathcal{T}_{d\beta'+\beta,dj+i}$ and $C=\sum_{ij}\ket{i}\!\!\bra{j}\otimes\mathcal{E}(\ket{i}\!\!\bra{j})$ which defines the Choi matrix, the above identity ensues. A natural way to measure how close the realized quantum channel is to a target channel, described by a Choi matrix $C_\text{t}$, is through the channel fidelity \cite{raginsky_PLA01_290_1118}
\begin{align} \notag
\Phi_\text{ch}=\frac{1}{d^2}\LR{{\rm Tr}\left\{\sqrt{\sqrt{C_\text{t}}C[\boldsymbol u]\sqrt{C_\text{t}}}\right\}}^2\,.
\end{align}
This fidelity was constructed from the fidelity between two states $\rho$ and $\sigma$ given by $\mathcal{F}={\rm Tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}$ \cite{Nielsen_and_Chuang_book} by using the Choi-Jamiolkowski isomorphism which, loosely speaking, relates quantum channels to states in a higher dimension. The channel fidelity $\Phi_\text{ch}$ reduces, in the case when both processes are unitary, to the gate overlap fidelity $\Phi_\text{\tiny QPT}=|{\rm Tr}\{\hat U_\text{t}^\dagger\hat U[\boldsymbol u]\}|^2/d^2$ where $\hat U_\text{t}$ is the target unitary matrix. However it is not suitable for a pulse optimization algorithm due to the square root which prevents an analytical expression for the gradient. Instead we define a fidelity starting from the square of the Frobenius norm
\begin{align} \notag
\|C_\text{t}-C[\boldsymbol u]\|^2={\rm Tr}\left\{C_\text{t}^2\right\} +&~ {\rm Tr}\left\{C[\boldsymbol u]^2\right\} \\ \notag
&-2\,{\rm Re\,Tr}\left\{C_\text{t}^\dagger C[\boldsymbol u]\right\}\,.
\end{align}
The equality follows from the definition of the Frobenius norm. As the realized channel approaches the target one, the error $\|C_\text{t}-C[\boldsymbol u]\|^2$ is reduced. This prompts the following definition for the fidelity
\begin{align} \label{Eqn:PII_FidChannel}
\Phi_\text{ch}'=\frac{{\rm Re\,Tr}\left\{C_\text{t}^\dagger C[\boldsymbol u]\right\}}{{\rm Re\,Tr}\left\{C_\text{t}^\dagger C_\text{t}\right\}}\,.
\end{align}
The factor in the denominator has been included to upper bound the fidelity by one. Its presence is called for by the fact that, contrary to density matrices, Choi matrices do not have unit trace. Note that this expression is not sensitive to global phases contrary to its counterpart for unitary matrices \cite{Khaneja_JMR_172_296305}. The gradient with respect to the control pixels is
\begin{align} \label{Eqn:PII_GradFidChannel1}
\nabla_{kj}\Phi_\text{ch}'=\frac{{\rm Re\,Tr}\left\{C_\text{t}^\dagger\, \frac{\partial C[\boldsymbol u]}{\partial u_{kj}}\right\}}{{\rm Re\,Tr}\left\{C_\text{t}^\dagger C_\text{t}\right\}}\,.
\end{align}
The gradient of the Choi matrix is found by computing the gradient of the time propagator and rearranging the terms according to Eq. (\ref{Eqn:PII_Choi2T}). The procedure to compute the gradient of $\mathcal{T}$ follows the same idea as for the unitary case. However, since in a generic open system $\mathcal{S}$ is not necessarily normal \cite{Machnes_PRA_84_022305}, the procedure of computing the gradient of a single pixel using eigenvalues does not work. Instead the identity
\begin{align} \label{Eqn:PII_GradFidChannel2}
\left.\frac{{\rm d}}{{\rm d} x}e^{A+xB}\right\vert_{x=0}=e^A\int_0^1 e^{-A\tau}Be^{A\tau}{\rm d}\tau
\end{align}
is used. The latter can be evaluated exactly using augmented matrix exponentials \cite{Floether_NJP_14_073023}
\begin{align} \label{Eqn:PII_GradFidChannel3}
\exp\begin{pmatrix} A & B \\ 0 & A \end{pmatrix}=\begin{pmatrix} e^A & \int_0^1e^{A(1-\tau)}Be^{A\tau}{\rm d}\tau \\ 0 & e^A \end{pmatrix}\,.
\end{align}
Thus for computing $\partial \mathcal{T}/\partial u_{kj}$ one sets $A=\mathcal{S}(j\Delta T)\Delta T$ and $B=\mathcal{S}_k\Delta T$. Given that the augmented matrix can be defective, its exponential is computed with Ward's Pad\'e approximation \cite{Ward_JNA_14_600,Moler_SIAM_45_3}. Finally all elements are in place to successfully optimize the pulse of a non-unitary process towards a target non-unitary channel using the GRAPE and BFGS algorithms \cite{Fouquieres_JMP_212_412, Nocedal_Springer_2006}. The fidelity is given by Eq. (\ref{Eqn:PII_FidChannel}) whilst its gradient is found from Eqs. (\ref{Eqn:PII_GradFidChannel1}) through (\ref{Eqn:PII_GradFidChannel3}).
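To make the above recipe concrete, a minimal Python/NumPy sketch of the whole pipeline is given below. It assumes the \texttt{lindblad\_generator} routine of the previous listing, linear controls (i.e. $f_k(\boldsymbol u)=u_k$, so that $\partial\mathcal{S}/\partial u_{kj}=\mathcal{S}_k$; a non-linear $f_k$ only adds a scalar chain-rule factor), and relies on SciPy's Pad\'e-based \texttt{expm}. It reorders the propagator into a Choi matrix according to (\ref{Eqn:PII_Choi2T}) and evaluates the fidelity (\ref{Eqn:PII_FidChannel}) and its gradient (\ref{Eqn:PII_GradFidChannel1}) via the augmented-exponential identity (\ref{Eqn:PII_GradFidChannel3}); it is an illustrative skeleton rather than the code used for the results below.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def choi_from_propagator(T, d):
    # Reorder T into the Choi matrix:
    # C[d*a + b, d*ap + bp] = T[d*bp + b, d*ap + a]
    C = np.empty_like(T)
    for a in range(d):
        for b in range(d):
            for ap in range(d):
                for bp in range(d):
                    C[d*a + b, d*ap + bp] = T[d*bp + b, d*ap + a]
    return C

def channel_fidelity(C, C_t):
    # Re Tr{C_t^dag C} / Re Tr{C_t^dag C_t}
    norm = np.real(np.trace(C_t.conj().T @ C_t))
    return np.real(np.trace(C_t.conj().T @ C)) / norm

def pixel_propagator_and_derivative(S_j, S_k, dT):
    # Augmented-exponential trick: the top-right block of
    # expm([[A, B], [0, A]]) is the directional derivative of
    # e^A along B, with A = S_j*dT and B = S_k*dT.
    n = S_j.shape[0]
    M = np.block([[S_j * dT, S_k * dT],
                  [np.zeros((n, n)), S_j * dT]])
    E = expm(M)
    return E[:n, :n], E[:n, n:]

def fidelity_and_gradient(S_pixels, S_k, dT, C_t, d):
    # S_pixels[j] = S_d + sum_k u[k, j]*S_k (linear controls);
    # the time-ordered product has early times rightmost.
    U = [expm(S * dT) for S in S_pixels]
    T = np.eye(d * d, dtype=complex)
    for U_j in U:
        T = U_j @ T
    fid = channel_fidelity(choi_from_propagator(T, d), C_t)
    norm = np.real(np.trace(C_t.conj().T @ C_t))
    grad = np.empty(len(U))
    for j in range(len(U)):
        _, dU = pixel_propagator_and_derivative(S_pixels[j], S_k, dT)
        dT_j = dU
        for U_m in U[j + 1:]:            # later pixels act on the left
            dT_j = U_m @ dT_j
        for U_m in reversed(U[:j]):      # earlier pixels on the right
            dT_j = dT_j @ U_m
        dC = choi_from_propagator(dT_j, d)
        grad[j] = np.real(np.trace(C_t.conj().T @ dC)) / norm
    return fid, grad
\end{verbatim}

The resulting fidelity and gradient can be handed directly to a quasi-Newton routine such as BFGS, with one gradient entry per control and pixel.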
\section{Optimization of a Phase Qubit Measurement Pulse \label{Sec:PQ}}
The flux-biased phase qubit is a superconducting circuit made of a large-area Josephson junction (JJ) shunted by an inductor. Threading an external flux through this loop makes the energy levels tunable and also allows for easy readout \cite{Neeley_NatPhys_4_523, Chen_APL_101_182601}. This type of qubit can be biased in a regime where the potential is made of a shallow and a deep well. The qubit logical $\ket{0}$ and $\ket{1}$ basis is formed in the shallow well. When the qubit is read out, a flux pulse makes the shallow well shallower; the $\ket{1}$ state tunnels into the deeper well whilst tunneling of $\ket{0}$ is exponentially smaller. A tunneling event creates a flux change that can be picked up by a nearby SQUID \cite{Cooper_PRL_93_180401}. JPMs allow single photon detection in the microwave regime and are also based on a phase-qubit-like device \cite{Chen_PRL_107_217401, Govia_PRA_86_032311}. Here we will show how to optimize a measurement pulse for a phase qubit using the methods described in the previous section.
\subsection{Phase Qubit Model}
The phase qubit \cite{Cooper_PRL_93_180401, Simmonds_PRL_93_077003}, flux biased by $\varphi_\text{b}$ but without current bias, is described by the Hamiltonian
\begin{align} \label{Eqn:PII_PhaseQubitH}
\hat H=E_c\hat N^2+E_J\LR{\frac{1}{2\beta}(\hat \varphi-\varphi_\text{b})^2-\cos\hat\varphi}\,.
\end{align}
The charging energy is $E_c=2e^2/C$ and the Josephson coupling energy is $E_J=I_0\hbar/2e$. The qubit is coupled to the external bias flux $\Phi_0\varphi_\text{b}$ by the constant $\beta=2eLI_0/\hbar$. The critical current of the JJ is $I_0$ and its associated capacitance is $C$ whilst the shunt inductance is $L$.
\subsubsection{The Three Level Model}
When biased a little below $\varphi_\text{b}=2\pi$ the potential has a shallow and a deep well. The qubit states $\ket{0}$ and $\ket{1}$ are formed out of the two lowest states of the shallow well. By raising the bias closer to $2\pi$, the shallow well becomes shallower allowing the $\ket{1}$ and $\ket{0}$ states to tunnel into the deeper well, see Fig. \ref{Fig:PII_PhaseQubitPot}. Furthermore, at these bias values, the deep well is much deeper than the shallow well. Thus, the potential can be approximated by a cubic function where the deep well is treated as a continuum. This prompts a three state description of the qubit formed by the basis $\{\ket{0},\,\ket{1},\,\ket{\text{m}}\}$. $\ket{\text{m}}$ is a combination of all the states that $\ket{0}$ and $\ket{1}$ can incoherently tunnel into.
\begin{figure} \centering
\includegraphics[width=0.85\columnwidth]{Fig1_PhaseQubit}
\caption{Sketch of the phase qubit's potential focusing on the shallow well. The wavy line indicates the $0\leftrightarrow1$ transition frequency which is a coherent process and enters in the Hamiltonian. Controllable incoherent processes are indicated by solid straight lines whereas the uncontrollable $T_1$ relaxation process is constant. \label{Fig:PII_PhaseQubitPot}}
\end{figure}
The bias flux changes the shape of the potential, thus for different $\varphi_\text{b}$ the logical $\ket{0}$ and $\ket{1}$ states have different wave functions. For an arbitrary bias flux the three level model Hamiltonian is expressed with respect to a reference bias $\varphi_\text{ref}$
\begin{align} \notag
\hat H=P\hat H_\text{ref}P^{-1}~~~\text{with}~~~P=\begin{pmatrix} \eta & \sqrt{1-\eta^2} & 0 \\ \sqrt{1-\eta^2} & -\eta & 0 \\ 0 & 0 & 1 \end{pmatrix}\,.
\end{align}
$\hat H_\text{ref}=\hbar\omega_\text{ref}\ket{1}\!\!\bra{1}$ is the Hamiltonian at the reference bias where $\omega_\text{ref}$ is the corresponding $0\leftrightarrow1$ transition frequency. Since we neglect higher excitation states in the shallow well, $P$ has one parameter $\eta$. Furthermore, $P$'s form results from unitarity and it induces Landau-Zener type physics between $\ket{0}$ and $\ket{1}$ \cite{Landau_PZS_2_46, Zener_PRC_137_696}. Indeed, if the pulse is non-adiabatic, i.e. it contains rapid changes in flux bias, state transitions can occur. They result from the non-orthogonality between the wave functions of the new excited state and old ground state; there exists a matrix element connecting the two states. On the other hand, if the change is slow, i.e. adiabatic, then the state cannot jump between eigenstates and remains in its initial state. The matrix element connecting ground and excited state is negligibly small at all points in time. This effect is modeled by choosing $\eta$ to be the overlap between the wave function $\psi_0$ of $\ket{0}$ at the reference bias and itself at a different bias
\begin{align}\notag
\eta(\varphi_\text{b})=\int\psi_0^*(\varphi,\varphi_\text{b})\,\psi_0(\varphi,\varphi_\text{ref})\,{\rm d} \varphi\,.
\end{align}
The wave functions are found with a discrete variable representation (DVR) \cite{Colbert_JCP_96_19821991}. This consists of diagonalizing the phase qubit Hamiltonian (\ref{Eqn:PII_PhaseQubitH}) in a discretized eigenbasis of $\hat\varphi$ for different flux biases. The resulting eigenvalues are the energy levels and the associated eigenvectors are the wave functions as functions of the phase $\varphi$. This yields $\eta$, which is then fitted to a third order polynomial, see Fig. \ref{Fig:PII_3lvl_eta}. The fit to a polynomial preserves the analytical aspect of the gradient computation.
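For reference, such a DVR diagonalization fits in a few lines of Python/NumPy. The sketch below uses the Colbert--Miller grid rule of \cite{Colbert_JCP_96_19821991} for the kinetic term $E_c\hat N^2$ with $\hat N=-i\,\partial/\partial\varphi$; the grid span and size are illustrative choices of ours and $\hbar=1$:

\begin{verbatim}
import numpy as np

def phase_qubit_dvr(E_c, E_J, beta, phi_b,
                    n=2000, phi_lo=0.0, phi_hi=4.0*np.pi):
    # Uniform grid in phase; Colbert-Miller rule for
    # E_c N^2 = -E_c d^2/dphi^2.
    phi = np.linspace(phi_lo, phi_hi, n)
    dphi = phi[1] - phi[0]
    idx = np.arange(n)
    dij = idx[:, None] - idx[None, :]
    d2 = dij.astype(float)**2
    d2[dij == 0] = 1.0                   # dummy; diagonal set below
    K = 2.0 / d2
    K[dij == 0] = np.pi**2 / 3.0
    K *= E_c * (-1.0)**dij / dphi**2
    V = E_J * ((phi - phi_b)**2 / (2.0 * beta) - np.cos(phi))
    evals, evecs = np.linalg.eigh(K + np.diag(V))
    return phi, evals, evecs             # levels and wave functions
\end{verbatim}

The overlap $\eta(\varphi_\text{b})$ then follows from a discrete inner product of the ground-state eigenvector at $\varphi_\text{b}$ with the one obtained at $\varphi_\text{ref}$.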
\begin{figure} \centering
\includegraphics[width=0.95\columnwidth]{Fig2_eta}
\caption{ $\ket{0}\leftrightarrow\ket{1}$ state mixing parameter $\eta$ as function of the bias phase. The solid line indicates numerical DVR data whilst the dashed line is a third order polynomial fit to preserve analyticity when performing the gradient search. \label{Fig:PII_3lvl_eta}}
\end{figure}
The Lindblad operators of the incoherent processes that we include in our model are
\begin{align}
\hat L_{0\to\text{m}}&=\sqrt{\gamma_0}\,\ket{\text{m}}\!\!\bra{0}\,, \notag \\
\hat L_{1\to\text{m}}&=\sqrt{\gamma_1}\,\ket{\text{m}}\!\!\bra{1}\,, \notag \\
\hat L_{1\to0}&=\sqrt{\gamma_{1\to0}}\,\ket{0}\!\!\bra{1}\,. \notag
\end{align}
Note that we do not include pure dephasing between $\ket{0}$ and $\ket{1}$ since the coherences between these states do not matter when it comes to the measurement process. Whilst the relaxation rate $\gamma_{1\to0}=T_1^{-1}$ is constant, the tunneling rates to the continuum $\gamma_0$ and $\gamma_1$ depend on the bias flux. They are found by approximating the potential well by a third order polynomial \cite{Neeley_NatPhys_4_523} and using the WKB approximation \cite{WeissBook, Weiss_PRD_27_2916}
\begin{align}
\gamma_0(\alpha)\simeq&~6\omega\sqrt{\frac{\alpha}{\pi}}e^{-\frac{6}{5}\alpha}\,, \notag \\
\gamma_1(\alpha)\simeq&~432\omega \sqrt{\frac{\alpha^3}{\pi}}e^{-\frac{6}{5}\alpha}\,. \notag
\end{align}
$\omega$ is the $0\leftrightarrow1$ transition frequency in the harmonic approximation. This frequency is obtained from the second order term of the third order approximation employed by Martinis \emph{et al.} \cite{Martinis_PRL_55_1543} by building on work done by Caldeira and Leggett \cite{Caldeira_AP_149_374}. We improve this approximation for $\omega$ by using the DVR of the potential and then finding the eigenenergies for $\ket{0}$ and $\ket{1}$ in the shallow well at different bias values. The DVR data for $\omega$ is fitted to the five parameter function $a(b+c\varphi_\text{b})^d+e$ so that analytical gradients can be computed. This methodology, shown in Fig. \ref{Fig:DVR_vs_Harmonic}, allows for a good fit to the DVR data and shows that the harmonic approximation deviates a little from it. This is expected since by diagonalizing the Hamiltonian in a phase basis, DVR takes all orders of the potential into account.
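In code these rates are one-liners; the sketch below is a direct transcription of the two formulas above, with $\omega$ taken from the fit just described and the dimensionless parameter $\alpha$ discussed below:

\begin{verbatim}
import numpy as np

def tunneling_rates(alpha, omega):
    # WKB escape rates of the two shallow-well states
    # (cubic-well approximation); 6/5 = 1.2 in the exponent.
    g0 = 6.0 * omega * np.sqrt(alpha / np.pi) * np.exp(-1.2 * alpha)
    g1 = 432.0 * omega * np.sqrt(alpha**3 / np.pi) * np.exp(-1.2 * alpha)
    return g0, g1
\end{verbatim}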
\begin{figure} \centering
\includegraphics[width=0.95\columnwidth]{Fig3_Freq_vs_Flux}
\caption{Frequency of the $\ket{0}\leftrightarrow\ket{1}$ transition for the Harmonic approximation and for the DVR data which takes all orders into account. The dashed line is the fit to the five parameter function $a(b+c\varphi_\text{b})^d+e$ showing excellent agreement for the range of bias flux of concern. Note that beyond $\varphi_\text{b}=0.945\cdot2\pi$ DVR no longer finds two states in the shallow well. This is in excellent agreement with the three level model validity condition shown in Fig. \ref{Fig:PII_3lvl_model}. \label{Fig:DVR_vs_Harmonic} }
\end{figure}
The dimensionless parameter $\alpha$ also depends on the bias flux. In the cubic potential model it is given by
\begin{align} \label{Eqn:PII_PQ_alpha}
\alpha(\varphi_\text{b})=6\frac{V_\text{max}-V_\text{min}}{\sqrt{2E_JE_c}(\beta^{-1}+\cos\varphi_\text{min})}\,.
\end{align}
The potential extrema $V_\text{max/min}$ are defined in Fig. \ref{Fig:PII_PhaseQubitPot}. The phase value corresponding to the minimum is $\varphi_\text{min}$. Although not explicitly indicated, these quantities all depend on the bias flux. The derivation of this formula is based on the expression of $\alpha$ found from the WKB approximation and the parameters entering the third order approximation of the qubit's potential. Some additional details are given in appendix \ref{Sec:Appendix_PQ}. $\alpha$ is found numerically by solving for the different terms in Eq. (\ref{Eqn:PII_PQ_alpha}) for different values of $\varphi_\text{b}$. The result is shown in Fig. \ref{Fig:PII_3lvl_model}; the numerical data is then fitted to a second order polynomial to preserve analyticity when computing gradients for the pulse optimization. In summary, the drift generator of the time propagator is
\begin{align} \notag
\mathcal{S}_\text{d}=\gamma_{1\to0}\LR{\ket{0}\!\!\bra{1}\!\otimes\!\ket{0}\!\!\bra{1}-\frac{1}{2}\LR{\ket{1}\!\!\bra{1}\!\otimes\!\mathds{1}+\mathds{1}\!\otimes\!\ket{1}\!\!\bra{1}}}.
\end{align}
The control generator is
\begin{align} \notag
&\mathcal{S_\text{c}}=i\LR{(P\hat H_\text{ref}P^{-1})^T\otimes\mathds{1}-\mathds{1}\otimes P\hat H_\text{ref}P^{-1} }+\\
&\sum\limits_{j=0}^1\gamma_j(\varphi_\text{b})\!\LR{\ket{\text{m}}\!\!\bra{j}\!\otimes\!\ket{\text{m}}\!\!\bra{j}-\frac{1}{2}\LR{\ket{j}\!\!\bra{j}\!\otimes\!\mathds{1}+\mathds{1}\!\otimes\!\ket{j}\!\!\bra{j}}}. \notag
\end{align}
In the first term, the dependence on the bias flux is located in the $\eta$ parameter in the unitary matrix $P$. The non-linearity of this expression in the control $\varphi_\text{b}$ can easily be taken into account in the optimization using the chain rule.
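Putting the flux-dependent pieces together, the control part of the dynamics can be sketched as below. The listing reuses \texttt{lindblad\_generator} and \texttt{tunneling\_rates} from the previous listings, takes the fit coefficients as inputs, and sets $\hbar=1$; it is an illustrative skeleton rather than the code used for the results:

\begin{verbatim}
import numpy as np

def three_level_generator(phi_b, omega_ref,
                          eta_poly, alpha_poly, omega_of_phi):
    # Basis ordering (|0>, |1>, |m>). eta: third-order fit (Fig. 2),
    # alpha: second-order fit (Fig. 4), omega_of_phi: the
    # five-parameter fit of Fig. 3, passed in as a callable.
    eta = np.polyval(eta_poly, phi_b)
    s = np.sqrt(1.0 - eta**2)
    P = np.array([[eta, s, 0.0],
                  [s, -eta, 0.0],
                  [0.0, 0.0, 1.0]])
    H = P @ np.diag([0.0, omega_ref, 0.0]) @ P.T  # P real orthogonal
    alpha = np.polyval(alpha_poly, phi_b)
    g0, g1 = tunneling_rates(alpha, omega_of_phi(phi_b))
    L0 = np.zeros((3, 3)); L0[2, 0] = 1.0         # |m><0|
    L1 = np.zeros((3, 3)); L1[2, 1] = 1.0         # |m><1|
    return lindblad_generator(H, [L0, L1], [g0, g1])
\end{verbatim}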
\begin{figure} \centering
\includegraphics[width=0.95\columnwidth]{Fig4_alpha_color}
\caption{Parameter controlling the tunneling rates in the three level model. The solid line shows the value of $\alpha$ as computed by Eq. (\ref{Eqn:PII_PQ_alpha}) whilst the dashed line shows a second order fit used to preserve analyticity during the gradient search. When $\alpha$ goes below the horizontal line, the three level model is no longer valid. \label{Fig:PII_3lvl_model}}
\end{figure}
\subsubsection{Optimal Control Problem}
The control problem is to optimize a measurement pulse $\varphi_\text{b}(t)$ of duration $T_\text{meas}$ that maximizes the contrast $\xi=P_\text{bright}(1-P_\text{dark})$ \cite{Chen_PRL_107_217401}. $P_\text{bright}$ is the probability that the initial state $\ket{1}$ tunneled to $\ket{m}$ whilst $P_\text{dark}$ is the probability that the state $\ket{0}$ tunneled to $\ket{m}$. This target can be shaped into a Choi matrix given by
\begin{align}\notag
C_\text{t}=\ket{1}\!\!\bra{1}\otimes\ket{\text{m}}\!\!\bra{\text{m}}+\!\!\sum\limits_{i,j\in\{0,\text{m}\}}\!\!\ket{i}\!\!\bra{j}\otimes\ket{i}\!\!\bra{j}\,.
\end{align}
Since the tunneling is incoherent the coherences between $\ket{1}$ and $\ket{\text{m}}$ are not preserved as the ideal quantum channel maps $\ket{1}\!\!\bra{1}$ to $\ket{\text{m}}\!\!\bra{\text{m}}$. This gives the first part of $C_\text{t}$. The second part states that the elements $\ket{0}\!\!\bra{0}$, $\ket{\text{m}}\!\!\bra{0}$, $\ket{0}\!\!\bra{\text{m}}$ and $\ket{\text{m}}\!\!\bra{\text{m}}$ should be left untouched.
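For concreteness, this target Choi matrix reads in code (with the basis ordered as $(\ket{0},\ket{1},\ket{\text{m}})$):

\begin{verbatim}
import numpy as np

def target_choi(d=3, m=2):
    # |1><1| (x) |m><m| + sum_{i,j in {0,m}} |i><j| (x) |i><j|
    e = np.eye(d)
    C = np.kron(np.outer(e[1], e[1]), np.outer(e[m], e[m]))
    for i in (0, m):
        for j in (0, m):
            C += np.kron(np.outer(e[i], e[j]), np.outer(e[i], e[j]))
    return C
\end{verbatim}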
Before and after the measurement pulse, the qubit is at a reference bias $\varphi_\text{ref}$ chosen such that tunneling out of $\ket{1}$ is suppressed. Indeed it is expected that coherent operations are done between $\ket{0}$ and $\ket{1}$ before the measurement pulse. Therefore the states should not tunnel out of the shallow well. However the shape of the wave-functions $\psi_i(\varphi,\varphi_\text{b})=\braket{\varphi|i}$ for $i=0,1$ changes with bias flux. Thus changing $\varphi_\text{b}$ can induce $\ket{0}\leftrightarrow\ket{1}$ transitions if it is non-adiabatic, similar to the Landau-Zener scenario. This, as well as tunneling from $\ket{0}$ to $\ket{\text{m}}$, creates dark counts. To avoid such effects an adiabatic pulse, with the appropriate area to minimize $\ket{0}\to\ket{\text{m}}$, should be used since slow changes in the potential will keep the system in $\ket{0}$ if it started in $\ket{0}$. However, $\ket{1}\to\ket{0}$ relaxation, graphically illustrated in Fig. \ref{Fig:PII_PhaseQubitPot}, causes missed counts. This degradation in contrast can be mitigated by using a fast pulse. This interplay between Landau-Zener like behavior and energy relaxation prompts the use of optimal control theory to shape the measurement pulse. The optimal pulse should reduce dark and missed counts. The former are reduced by the optimal shape whilst the latter are mitigated by forcing $\ket{1}$ to tunnel before relaxation happens.
\subsubsection{Baseline}
State measurement with phase qubits was originally limited by the large number of two-level fluctuators polluting the qubit \cite{Cooper_PRL_93_180401}. This has been overcome and phase qubit measurement visibilities around 90\% have been reported \cite{Steffen_PRL_97_050502, Chen_APL_101_182601}. Single photon measurement contrasts with JPMs approach 80\% \cite{Chen_PRL_107_217401}. Within the framework of the simplified model presented here the following section shows that these numbers could be increased. The limitations of a three level model could be overcome using Ad-HOC \cite{Egger_PRL_112_240503}, a closed-loop fine-tuning approach for pulses. The optimized pulses presented in the following section should thus be understood as a starting point for a closed-loop algorithm.
\subsection{Optimization Results}
The parameters used in the optimization correspond to typical phase qubit values \cite{Neeley_NatPhys_4_523}. These are shown in Tab. \ref{Tab:PII_PQVal}. Sharp jumps in the bias flux can introduce unwanted $\ket{0}\leftrightarrow\ket{1}$ jumps and cause St\"uckelberg oscillations, i.e. oscillations typical for finite-amplitude parameter sweeps \cite{Stueckelberg_HPA_5_369}. To prevent this, the pulses are convolved with a Gaussian. This also results in pulses that are feasible with modern electronics. The optimization of several pulses of variable duration is shown in Fig. \ref{Fig:PII_Ch6_T1500Pulses}. The initial guess for the gradient search is a square pulse convolved with a Gaussian. The first and last two ns are held constant and only change through the convolution due to variations in the optimization pixels. The height of the initial pulse is too low to allow full tunneling out of $\ket{1}$ yet sufficiently high for small changes in the amplitude to produce appreciable changes in fidelity. This is best seen in Fig. \ref{Fig:PII_Ch6_T1500nsPopul} where the time evolution resulting from a 10 ns pulse is shown. The initial pulse fails to let the $\ket{1}$ state tunnel out. The initial channel fidelity and contrast are respectively $\Phi_\text{ch,i}'=87.0\%$ and $\xi_\text{i}=37.8\%$. After optimization these numbers are $\Phi_\text{ch,f}'=98.8\%$ and $\xi_\text{f}=97.9\%$ and show that the optimization has successfully increased the contrast, as desired.
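As an aside, an initial guess of this type is easy to generate; in the sketch below the time step, Gaussian width, and bias levels are placeholders rather than the values used in our simulations (times in ns):

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def initial_guess(T_meas, dt=0.01, phi_hold=0.92 * 2*np.pi,
                  phi_peak=0.94 * 2*np.pi, sigma_ns=0.3):
    # Square pulse with the first and last 2 ns held at the
    # reference bias, then smoothed by a Gaussian convolution.
    t = np.arange(0.0, T_meas, dt)
    pulse = np.full_like(t, phi_hold)
    pulse[(t >= 2.0) & (t <= T_meas - 2.0)] = phi_peak
    return t, gaussian_filter1d(pulse, sigma_ns / dt)
\end{verbatim}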
\begin{table}
\begin{center}
\caption{Values used in the phase qubit model.\label{Tab:PII_PQVal}}
\begin{tabular}{l l r l} \hline\hline
Name & Symbol & Value & unit \\ \hline
Critical Current & $I_0$ & 2 & $\mu A$ \\
Junction Capacitance & $C$ & 1 & $pF$ \\
Flux coupling & $\beta$ & 4.375 & - \\
Energy Relaxation & $T_1$ & 500 & ns \\\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure} \centering
\includegraphics[width=0.95\columnwidth]{Fig5_T1_500ns_summary}
\caption{Optimal pulses for different gate durations with their corresponding fidelities. The dashed lines show the initial guess. As can be seen the fidelity of the optimal pulses is higher for the fast pulses. This is due to the fact that faster pulses allow $\ket{1}$ to tunnel into $\ket{\text{m}}$ before $T_1$ relaxes it to $\ket{0}$. \label{Fig:PII_Ch6_T1500Pulses}}
\end{figure}
The optimization adds a bump on the initial rise of the pulse to kick out the $\ket{1}$ state. This bump has to be added at the beginning of the pulse before $T_1$ relaxes $\ket{1}$ to $\ket{0}$ which should be kept in the shallow well. The optimization carefully chooses the area under the pulse. Indeed, allowing the bias flux to be held too high for too long diminishes the contrast since $\ket{0}$ starts to tunnel into $\ket{\text{m}}$. The rate at peak is pushed close to the maximum allowed by the three level model. This limitation is further discussed below. The slow hold value at the end of the longer pulses in Fig. \ref{Fig:PII_Ch6_T1500Pulses} is an artifact of the simulation resulting from the initial guess. For shorter pulses no hold value subsists. In the longer pulses it remains since it does not affect fidelity. Indeed, after the kick forcing $\ket{1}$ to tunnel, there can be no fidelity deterioration due to $T_1$. Furthermore this artificial hold level is too low to allow any significant tunneling of $\ket{0}$ into $\ket{\text{m}}$.
\begin{figure} \centering
\includegraphics[width=0.95\columnwidth]{Fig6_Population}
\caption{Time evolution of the populations for the 10 ns pulse of Fig. \ref{Fig:PII_Ch6_T1500Pulses} starting from the $\ket{1}\!\!\bra{1}$ state. As can be seen the initial pulse (corresponding to the thin lines) has a non-optimal pulse that fails to transfer population to $\ket{\text{m}}$ which would result in missed counts. Its channel fidelity and contrast are respectively $\Phi_\text{ch,i}'=87.0\%$ and $\xi_\text{i}=37.8\%$. The optimal pulse (thick lines) corrects for this as well as preventing population transfers between $\ket{0}$ and $\ket{1}$. It has $\Phi_\text{ch,f}'=98.8\%$ and $\xi_\text{f}=97.9\%$. \label{Fig:PII_Ch6_T1500nsPopul} }
\end{figure}
Faster pulses than those in Fig. \ref{Fig:PII_Ch6_T1500Pulses} were optimized. A 1.4 ns pulse is shown in Fig. \ref{Fig:PII_Ch6_TFastPulses}; the initial fidelity and contrast were $\Phi_\text{ch,i}'=83.8\%$ and $\xi_\text{i}=19.8\%$, whilst the optimized pulse has $\Phi_\text{ch,f}'=99.2\%$ and $\xi_\text{f}=98.2\%$. However, faster pulses cannot be made in this model since it relies upon having at least two states in the metastable well. This imposes a restriction on the maximum bias flux. Approximating the potential with a third order polynomial and asking for at least two levels in the well leads to the approximate condition $\alpha>9$ (details are in appendix \ref{Sec:Appendix_PQ}). This threshold value is shown by the horizontal line in Fig. \ref{Fig:PII_3lvl_model} and corresponds to a flux bias of $0.9454\cdot2\pi$. Also note that this value matches very well the maximum bias for which DVR can still find at least two states in the shallow well, see Fig. \ref{Fig:DVR_vs_Harmonic}. In the pulse optimization, the flux bias is constrained to be below this value. Thus, upon examining the optimal pulse in Fig. \ref{Fig:PII_Ch6_TFastPulses} it can be seen that the pulse has reached this limit. Therefore the tunneling rate out of $\ket{1}$ has reached its maximum within the validity of the three level model. It may thus be possible to improve the contrast even further by biasing so that the excited state falls into the continuum. However, a theoretical description of this regime falls well beyond the scope of this paper.
\begin{figure} \centering
\includegraphics[width=0.98\columnwidth]{Fig7_Fast_Meas_Pulse}
\caption{Optimization of a fast readout pulse. \textcolor{blue}{(a)} initial pulse sequence with fidelity $\Phi_\text{ch,i}'=83.8\%$ and contrast $\xi_\text{i}=19.8\%$. \textcolor{blue}{(b)} Optimized pulse shape. \textcolor{blue}{(c)} Initial time evolution of populations. Again, the unoptimized pulse fails to let $\ket{1}$ tunnel into $\ket{m}$. \textcolor{blue}{(d)} Time evolution of populations after pulse optimization resulting in a high contrast of $\xi_\text{f}=98.2\%$ and final fidelity $\Phi_\text{ch,f}'=99.2\%$. \label{Fig:PII_Ch6_TFastPulses}}
\end{figure}
\section{Outlook and Conclusions \label{sec:conclusion}}
Optimal control in the presence of non-unitary dynamics towards a target unitary time evolution has already been implemented. In this work we have taken this a step further and presented a methodology to optimize a non-unitary time evolution towards a non-unitary target channel using a gradient search on a fidelity measure based on the Choi matrix. The algorithm was illustrated within the framework of optimizing a measurement pulse for a phase qubit where the measurement process relies on incoherent tunneling processes. The simple model shows a rich interplay between Landau-Zener type physics and the incoherent dynamics. The three level model discussed here is a good starting point for creating a measurement pulse. Going beyond this model could be done in the experiments by using the methodology developed in \cite{Egger_PRL_112_240503}. Measurement is important for superconducting qubits. Optimizing pulses for different systems, such as dispersive readout through a resonator, will require additional developments in optimal control theory and could be the topic of future research.
\section{Acknowledgments}
We thank John M. Martinis for useful discussions and Luke C. G. Govia for his careful reading of the manuscript. This work was supported by the Army Research Office under contract W911NF-14-1-0080 and the European Union through ScaleQIT. This research was also funded by the Office of the Director of National Intelligence
(ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office. All statements of fact, opinion, or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the US government.
\section*{Supplemental figures}
\begin{figure*}[h!]
\includegraphics[width=0.9\linewidth, trim = 0 2cm 0 2cm ]{cspwc.pdf}
\caption{Total cross sections~\cite{Nowak:1978au,Watson:1963zz,Ciborowski:1982et,Humphrey:1962zz} fitted by the model described in the main part of the manuscript. Error bands represent the $1\sigma$ uncertainty determined in a re-sampling procedure. The dashed black line shows the contribution of the s-wave part of the amplitude only.}
\label{pic:TOT-CS}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=0.9\linewidth, trim = 0 2cm 0 2cm ]{plotstogether.pdf}
\caption{Differential cross sections~\cite{Mast:1975pv} fitted by the model described in the main part of the manuscript. Error bands represent $1\sigma$ uncertainty determined in a re-sampling procedure.}
\label{pic:DIF-CS}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.88\linewidth]{FULLphotopic.pdf}
\caption{
Fit ($\chi^2_{\rm pp}=1.07$) to the $\pi\Sigma$ invariant mass distribution ($M_{\rm inv}$) from $\gamma p\to K^+ (\pi\Sigma)$ reaction~\cite{Moriya:2013eb}. The model for the reaction is taken from Ref.~\cite{Mai:2014xna}, where only generic couplings $\gamma p\to K^+ \mathcal{S}$ are fitted to the data at each measured total energy $W_{\rm tot}$. The meson-baryon scattering amplitude in the final state is taken from the fit to the scattering data. Only the best fits are shown.
}
\label{pic:CLAS}
\end{figure*}
\begin{figure*}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{HEMINGWAYrev.pdf}
\caption{Fit of the generic couplings $K^-p\to\Sigma(1660)\pi^-$ and $\Sigma(1660)\to(\pi^-\Sigma^+)\pi^+$ to the invariant mass distribution~\cite{Hemingway:1984pz} in arbitrary units. The final state interaction is taken from the best fit to the scattering data as described in the main part of the manuscript. Only the best fit is shown.
}\label{pic:HEMINGWAY}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.99\linewidth]{UpdatedLasso.pdf}
\caption{
Dependence of the components of the $\chi^2_{\rm dof}$ from each data set on the distance of the $W^*$ pole from the real axis, parametrized with the penalty $\lambda$. The sum of all three components equals $\chi^2_{\rm dof}$ which increases with the imaginary position of the pole at $W=W^*$.
The contribution to the $\chi^2$ from the penalty is subtracted in all cases.
}\label{fig:LASSO}
\end{subfigure}
\end{figure*}
\end{document}
\section{Introduction}
\begin{figure*}
\includegraphics{smearing-explanation.pdf}
\caption{The transition to the double fringe packet for a binary. \rev{Gray} curves show the fringes of the individual components \rev{with vertical dotted lines indicating the position of their centres}. The \rev{black} curve is the detected fringe packet (sum). In comic strip order: (i) An unresolved binary superimposes two fringe systems and achieves maximum fringe contrast; (ii) Contrast loss arises when the binary is resolved because individual fringe packets do not overlap exactly; (iii) The packet is elongated and loses its original shape when the binary is sufficiently separated; this is the transition to (iv), where two separate fringe packets are clearly seen. Cases (i), (ii) are standard in interferometry. Case (iv) is easily analysed, but helps to understand why smearing occurs: each fringe packet does not seem affected, but the power in its fringes is diluted by the incoherent flux from the other one; the resulting visibility will be a constant, strictly smaller than one, independent \rev{of} binary separation. This paper focuses on case (iii), which has not been thoroughly studied in the optical. Some of the notations of the paper are also reported: $\beta^{ij}$, the decentering of the fringe packet of an on-axis source due to instrumental effects; $\xobj[ij]{o}$ the OPD position shift of the fringe packet for an off-axis source; $\phiobj[ij]{o}$ the same as the latter expressed in terms of a phase shift.}
\label{fig:smearing-explanation}
\end{figure*}
Long-baseline interferometry is an observational technique used from the optical \citep{MIC20} to the radio domain \citep{RYL46,PAW46} that allows to overcome the resolution limit of single-dish telescopes, as ultimately set by diffraction. To achieve such a goal an ideal interferometer measures the complex degree of coherence and relates this so-called complex visibility to the object intensity distribution through a Fourier transform \citep{VCI34,ZER38}. Practically speaking, interference fringes are formed and their contrast and shift will be used to retrieve partial or total information on the complex visibilities.
There are numerous sources of error and biases that have to be evaluated and as much as possible corrected in order to provide a proper estimation of the interferometric observables. Among them, \emph{bandwidth smearing} occurs in finite bandwidth for \rev{objects spanning an extended field of view}. The interferogram corresponding to a point-like source has a coherence length of the order of $R\lambda$ where $R$ is the spectral resolution and $\lambda$ the central wavelength. For two points of an extended sourc\rev{e} separated by an angular distance $\theta$ along the projected baseline length $B$ corresponding to two telescopes of the array, individual fringe packets are shifted with \rev{respect to} each other by an optical path difference $\theta B$. When the OPD shift $\theta B$ becomes of the order of, or greater \rev{than}, the fringe packet width, i.e. when $\theta \approx R\lambda/B$, the fringe packets of these points do not overlap correctly and bandwidth smearing of the interferogram occurs (see bottom left panel of Fig.~\ref{fig:smearing-explanation}). In other words, one can consider that the coherence length of an interferogram $R\lambda$ corresponds to an angular extension on the sky $\theta \approx R\lambda/B$: it is called the interferometric field of view. Objects composed of multiple \rev{incoherent} sources, either discrete or continuous, are affected by the smearing when their extent becomes of the order of the interferometric field of view.
Figure \ref{fig:smearing-explanation} shows an illustration of that effect applied to the case of a binary system. Each of the sources \rev{contributes} with a fringe packet; the observed interferogram is their sum. The distance between the interferograms is proportional to the angular separation. We can distinguish four separation regimes: 1) the unresolved case; 2) the resolved case where the separation is a small fraction of the interferogram envelope; 3) the smeared regime where \rev{the separation} is no longer a small fraction and interferometric estimators are altered; 4) the ``double packet'' regime where two fringe packets are well separated.
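These regimes are straightforward to reproduce numerically. The short Python sketch below sums two fringe packets whose envelope is taken to be a Gaussian of width comparable to the coherence length $R\lambda$; the envelope shape and the numerical values are illustrative assumptions, not a model of a particular instrument:

\begin{verbatim}
import numpy as np

def binary_interferogram(delta, opd_shift, R=5.0,
                         lam0=1.65e-6, flux=(1.0, 0.5)):
    # Two fringe packets separated by opd_shift = B * theta;
    # a Gaussian envelope of width ~ R*lam0 stands in for the
    # true bandpass-dependent envelope.
    coh = R * lam0
    def packet(x):
        return (np.exp(-0.5 * (x / coh)**2)
                * np.cos(2.0 * np.pi * x / lam0))
    return (flux[0] * (1.0 + packet(delta))
            + flux[1] * (1.0 + packet(delta - opd_shift)))

delta = np.linspace(-30e-6, 30e-6, 8001)     # OPD scan [m]
for shift in (0.0, 2e-6, 8e-6, 25e-6):       # regimes 1) to 4)
    fringes = binary_interferogram(delta, shift)
\end{verbatim}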
While this effect \rev{has been} known for decades \citep{THO73}, it cannot be remedied by calibration as other biases. This was analysed in a review by \citet{BRI89} in the radio-astronomy context, in which the observer had no other choice but to define, somewhat heuristically, the best compromise between observing performance and limiting the bandwidth smearing. However, modern radio techniques, using \textit{a posteriori} software recombination, can overcome the problem in many situations by using several phase centres, around which smearing does not occur. In the optical and the infrared, software recombination is not technically feasible and bandwidth smearing must be dealt with. \citet{ZHA07} \rev{recommend to limit} the field of view $\theta$ to $1/5$ of \rev{the} theoretical value \rev{of the interferometric field of view} i.e. $\theta\approx R\lambda/(5B)$ to remain in the resolved regime. For an interferometer working in the near-IR with 100\,m baselines, this corresponds to 5--10 milliarcseconds of separation when a \rev{wide-band filter ($\lambda/\Delta\lambda \sim \rev{5}$)} is used without spectral resolution. The main leverage to increase the interferometric field of view is adapting the spectral resolution or the baseline length. However, it comes very often at a prohibitive sensitivity cost (spectral resolution) or a loss of spatial \rev{resolution} (baseline length).
In this paper, we present the first analytic calculation of the bandwidth smearing effect on the two main optical interferometric observables, namely the squared visibility and closure phase. We restricted the calculation to temporally encoded interferograms, including the so-called Fourier mode \rev{(a full scan of the fringe packet)} and the \rev{temporal} ABCD \rev{(a 4-point scan of a central fringe)}, which are among the most popular optical schemes. Fourier mode has been or is being used at COAST \citep{COAST}, IOTA with IONIC \citep{IONIC} and FLUOR \citep{IOTAFLUOR}, CHARA with FLUOR \citep{CHARAFLUOR} and CLIMB \citep{CLIMB}, VLTI with VINCI \citep{VINCI}, PIONIER \citep{PIONIER2,PIONIER}, and MIDI \citep{MIDI}. \rev{Temporal} ABCD is the choice at PTI \citep{PTI} and the Keck Interferometer \citep{KI}. It should be stressed that a similar line of reasoning can be used with very little adaptation to the 8-point time-encoded interferograms of NPOI \citep{NPOI}, and, with more efforts, to spatially encoded interferograms such as in VLTI/AMBER \citep{AMBER} and static ABCD systems such as VLTI/PRIMA \citep{PRIMA}.
The derived formula\newrev{e} can \rev{be} applied to correct \emph{any} squared visibility and closure phase analytic formula describing the object angular intensity distribution. We apply this corrective technique to the study of binary stellar systems. Indeed optical long baseline interferometry is a unique tool to study the astrometry of close binary systems with milli-arcsecond accuracy to provide direct means to measure accurate masses. Moreover very recently several attempts at searching for substellar companions \citep{ABS11,ZHA11} are pushing the technique down to dynamical ranges where no adverse effects can be neglected. Since \rev{most studies forgo} bandwidth smearing correction \rev{without assessing the biases that may arise from such approximation}, we felt a proper treatment had become mandatory and would be useful in the future. For practical purposes we used the PIONIER instrument characteristics to provide an application of that work. PIONIER is currently being used at the Very Large Telescope Interferometer \citep[VLTI,][]{VLTI} to combine four beams in the H band ($1.5$ to $1.8\mu\mathrm{m}$).
Sect.~\ref{sec:hypnot} gives the analytic expression of the observables in the absence of atmospheric turbulence for an instrument working in fringe-scanning mode. Section~\ref{sec:bin} is an application of these formulas to a binary star, which allows us to analyse the bias that smearing produces on the interferometeric observables and the model-fitted parameters of the binary. We also show there how simulated fringes of PIONIER are much better fitted with the smeared model than with the standard expression. Finally, section~\ref{sec:atm} studies the impact of atmospheric turbulence on the observables, indicating that a moderate spectral resolution is enough to alleviate most of its effects.
\section{Modelling the bandwidth smearing: turbulence-free case}
\label{sec:hypnot}
\label{sec:ana}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{PIONIER-transmission.pdf}
\caption{Spectral transmission and fringe packet envelope for PIONIER, as measured on an internal calibration with the 3-channel spectral setting of the H band. The left column display\newrev{s} the spectral transmission and instrumental phase for a telescope triplet, as contained in $\textsflux[ij]{\text{lamp}}(\wavenum - \wavenumzero)$. The right column shows the envelope and phase of the fringe packet, given by the Fourier transform of the latter. The slope of the instrumental phase translates into a fringe packet decentering, known as group delay. \rev{Phases are expressed in radians.}}
\label{fig:PT}
\end{figure*}
\begin{table}
\caption{Principal notations of this paper.}
\label{tab:notations}
\begin{tabular}{ll}
\hline\hline
\multicolumn{2}{l}{Indexing}\\
$o$, $p$, $q$ & Source number (index)\\
$i$, $j$, $k$ & Station number (superscript)\\
\hline
\multicolumn{2}{l}{Space and spatial frequency variables}\\
$\wavenum$, $\wavenumzero$ & Wavenumber, central wavenumber\\
$\xi = \wavenum - \wavenumzero$ & Reduced wavenumber\\
$\opdvar$, $\textopd[ij]$ & Optical path difference\\
\textxobj[ij]{o} & Fringe packet position in a perfect instrument\\
$\textphiobj[ij]{o} = 2\pi\wavenumzero\textxobj[ij]{o}$
& Fringe packet phase in perfect instrument\\
$\textphishift[ij]{}$ & Instrumental group delay (see Fig.~\ref{fig:smearing-explanation})\\
& $\rightarrow$ Packet position is \smash{$\xobj[ij]{o} + \phishift[ij]{}/2\pi\wavenumzero$}\\
\hline
\multicolumn{2}{l}{Functions of wavenumber $\wavenum$ or reduced wavenumber $\xi$}\\
$\sfluxstar{o}(\wavenum)$ & Spectrum of a point source\\
$\texttrans[i]{}(\xi)$ & Transmission through an arm\\
$\loss[ij]{}(\xi)$ & Instrumental contrast\\
$\insphi[ij]{}(\xi)$ & Instrumental differential phase\\
$\textsflux[ij]{o}(\xi)$ & The equivalent of $\newrev{N_iN_j}$\\
\hline
\multicolumn{2}{l}{Functions of OPD $\opdvar$ or phase $\alpha = 2\pi\wavenumzero\opdvar$}\\
$\textphasor[ij]{}(\opdvar)$ & Complex coherent flux\\
$\textsmearing{}(\alpha)$ & Complex smearing\\
$\textenvband{}(\alpha)$ & Smearing amplitude\\
$\phiband{}(\alpha)$ & Smearing phase\\
\hline
\multicolumn{2}{l}{Fluxes}\\
$\textflux{o}$ & Flux of a point source\\
$\textflux{}$ & Total flux\\
$\textflux[ij]{op}$ & Flux product equivalent\\
\hline
\multicolumn{2}{l}{Other}\\
$\dopd[ij]{}$ & OPD scanning speed\\
\hline
\end{tabular}
\end{table}
In order to introduce the basic concepts of the data processing for fringe-scanning mode instruments, we remind \rev{the reader} here how observables are derived \rev{in monochromatic light}.
Ignoring the atmosphere and instrumental signatures, the interferogram of a binary on baseline $ij$ can be written as
\begin{equation}
N^{ij}(\opdvar) = \rev{N}_1 \big[1 + \cos 2\pi\wavenum\opdvar\big]
+ \rev{N}_2\big[ 1 + \cos(2\pi\wavenum\opdvar + \phiobj[ij]{}) \big]\newrev{,}
\end{equation}
where $\rev{N}_1$ and $\rev{N}_2$ are the fluxes of each component, $\opdvar$ is the OPD \rev{between the arms of the interferometer}, and $\textphiobj[ij]{} = (2\pi\wavenum\textbase[ij]\cdot\textpos{})$ is proportional to the binary separation $\textpos{}$, the projected baseline $\textbase[ij]$, and wavenumber $\wavenum$.
It is convenient to use the coherent flux, a complex quantity representing the interferogram, from which the continuum $\rev{N}_1 + \rev{N}_2$ is removed and the negative frequencies are filtered out. In practice, one can take the Fourier transform of the interferogram, remove all frequencies but a small interval centred on the frequency of the fringes, and take the inverse Fourier transform. \rev{The coherent flux can be written as}
\begin{equation}
\phasor[ij]{}(\opdvar) = \exp{2\pi\j\wavenum\opdvar}
\big[ \rev{N}_1 + \rev{N}_2\, \exp{\j\phiobj[ij]{}} \big].
\label{eq:Mmono}
\end{equation}
The \rev{square visibility amplitude} is obtained by dividing the power contained in the coherent flux by that in the continuum:
\begin{equation}
\begin{split}
\vsqPS[ij]{} &= \frac{<|\phasor[ij]{}(\opdvar)|^2>_\opdvar}{(\rev{N}_1+\rev{N}_2)^2}\\
&= 1 - \frac{4 \rev{N}_1 \rev{N}_2}{(\rev{N}_1+\rev{N}_2)^2} \sin^2 \frac{\phiobj[ij]{}}2,
\end{split}
\label{eq:Vmono}
\end{equation}
\rev{where $<x>_\opdvar$ means the average of variable $x$ over the OPD.} In practice, the power may be computed using the Fourier \rev{transform} of the coherent flux, which is strictly equivalent (Parseval's identity).
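As a concrete check of this processing chain, the minimal Python sketch below (the sampling and numerical values are illustrative only, not instrument values) builds such a monochromatic binary interferogram, extracts the coherent flux by Fourier filtering, and recovers Eq.~(\ref{eq:Vmono}) to numerical precision:
\begin{verbatim}
import numpy as np

sigma = 1 / 1.65e-6                  # wavenumber [1/m] (H band, assumed)
N1, N2, phi = 1.0, 0.6, 1.2          # fluxes and binary phase [rad]

n = 4096
delta = np.arange(n) * 64 / (n * sigma)    # OPD grid: 64 fringe periods
interf = (N1 * (1 + np.cos(2 * np.pi * sigma * delta))
          + N2 * (1 + np.cos(2 * np.pi * sigma * delta + phi)))

spec = np.fft.fft(interf)
freq = np.fft.fftfreq(n, d=delta[1] - delta[0])
spec[freq <= 0] = 0                  # drop continuum and negative frequencies
coher = 2 * np.fft.ifft(spec)        # complex coherent flux M(delta)

v2 = np.mean(np.abs(coher)**2) / (N1 + N2)**2
v2_th = 1 - 4 * N1 * N2 / (N1 + N2)**2 * np.sin(phi / 2)**2
print(v2, v2_th)                     # both ~0.701
\end{verbatim}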
When a triplet of telescopes $ijk$ is used, the closure phase is used to obtain partial information on the phase because it is independent \rev{of} atmospheric turbulence. It is the argument of the bispectrum given by:
\begin{equation}
\begin{split}
\bisp[ijk]{} &=\ <\phasor[ij]{}(\opd[ij](t))
\phasor[jk]{}(\opd[jk](t))
\phasor[ki]{}(\opd[ki](t))>_\opdvar\\
&= (\rev{N}_1-\rev{N}_2)^2
+ 4\rev{N}_1 \rev{N}_2
\cos\frac{\phiobj[ij]{}}2
\cos\frac{\phiobj[jk]{}}2
\cos\frac{\phiobj[ki]{}}2
\\
&\quad -4\j \, \rev{N}_1 \rev{N}_2(\rev{N}_1-\rev{N}_2)
\sin\frac{\phiobj[ij]{}}2
\sin\frac{\phiobj[jk]{}}2
\sin\frac{\phiobj[ki]{}}2 ,
\end{split}
\label{eq:Bmono}
\end{equation}
where $\textopd[ij]$, $\textopd[jk]$, and $\textopd[ki]$ are
the time-modulated OPDs on the three baselines, meeting the closure
relation $\textopd[ij] + \textopd[jk] + \textopd[ki] = 0$. (Eq.~\ref{eq:Bmono} gives \rev{a compact, generic expression for the bispectrum in the same way \citet{LEB12} did for the specific case of high-contrast binaries.})
The goal of this section is to describe the coherent flux, squared visibility, and closure phase of time encoded interferograms processed by means of Fourier analysis, when observing a source of arbitrary geometry in finite bandwidth. In other words, we seek to generalise Eqs.~(\ref{eq:Mmono}, \ref{eq:Vmono},~\& \ref{eq:Bmono}) and provide ready-to-use formulas to fit object models to smeared data. For the sake of simplicity we use a discrete formalism valid for a collection of point-like sources. The results presented here are easily generalised to systems of resolved, compact sources (Appendix~\ref{ap:syscomp}) and to any system with our summations over a finite number of point-like sources replaced by integrations on the plane of the sky.
The most \rev{frequently used} notations and symbols in this section are given in Table~\ref{tab:notations}.
\subsection{Interferogram}
\label{sec:interferogram}
We consider an interferometer with stations $i$, $j$, etc. separated by a baseline $\base[ij]$ operating in a spectral channel centred on wavenumber $\wavenumzero$. In the following developments we shall use $\wavenum$, the wavenumber, and $\xi = \wavenum - \wavenumzero$, the ``reduced'' wavenumber. Without loss of generality, we assume that we observe an object made of several point sources $o$, $p$, etc. with positions $\textpos{o}$, $\textpos{p}$, etc. \rev{in} the plane of the sky and spectra $\textsfluxstar{o}(\wavenum)$, $\textsfluxstar{p}(\wavenum)$, etc.
The interferometer measures the complex coherent flux of the electromagnetic field by forming dispersed fringes on a detector. In our case, fringes are obtained by a temporal modulation of the optical path difference (OPD) $\opdvar$ around an ideal position $\xobj[ij]{o}$. This position is related to the angular position of the source in the sky $\pos{o}$ through the relation $\xobj[ij]{o} = \base[ij]\ensuremath{\!\cdot\!}\pos{o}$. Each of the point sources contributes to a quasi-monochromatic interferogram per instrument spectral channel. Once the incoherent photometric contribution has been removed from the two telescopes and the negative frequencies have been filtered out in Fourier space, the complex coherent flux of one source reads:
\begin{equation}
\phasor[ij]{o}(\xi,\opdvar) =
2\sflux[ij]{o} (\xi)
\exp{
2\jpi(\wavenumzero+\xi)(\xobj[ij]{o} + \opdvar)
}
\label{eq:phasormono}
\end{equation}
where $\sflux[ij]{o} (\xi)$ is the \rev{``instrumental'' coherent flux density} \rev{primarily} due to the wavelength-dependent instrumental effects\rev{, but also to some extent to the spectrum of the source.} We can define this coherent flux density as:
\begin{equation}
\sflux[ij]{o}(\xi) = \loss[ij]{}(\xi)\sqrt{\trans[i]{}(\xi)\trans[j]{}(\xi)}
\,\exp{\j \insphi[ij]{}(\xi)}
\,\sfluxstar{o}(\wavenumzero + \xi)
\label{eq:cohernorm}
\end{equation}
where:
\begin{itemize}
\item $\loss[ij]{}(\xi)$ is the instrumental visibility, or instrumental contrast loss, \newrev{and} has different origins such as differential polarisation or wavefront aberrations;
\item $\insphi[ij]{}(\xi)$ is the instrumental differential phase, \newrev{and} arises from a difference of optical path lengths between the arms of the interferometer that depends on the wavelength. For example, this can be the case when light travels through glass (e.g. waveguides, dichroics) that does not have the same refractive index dependence as a function of wavelength;
\item $\trans[i]{}(\xi)$ is the spectral transmission through an arm, including detector efficiency.
\end{itemize}
We assume that these instrumental signatures do not depend on the \newrev{OPD position in the interferogram}, which is a good approximation in fringe-scanning mode\newrev{, since the OPD modulation is obtained through a few micrometres of air or vacuum, with negligible dispersion. In other words, we assume that the instrumental differential phase is a static term that is not impacted by the movement of the differential delay lines.} However, this is usually not true for spatially dispersed fringes \citep[see][for a generic expression for the fringes]{TAT06}, so that our approach needs adaptation to instruments like AMBER \citep{AMBER}.
It is now possible to describe the coherent flux for an arbitrary number of sources and across a wider spectral bandpass:
\begin{equation}
\phasor[ij]{}(\opdvar) =
\intinf \sum_o \phasor[ij]{o}(\xi, \opdvar) \idiff\xi.
\label{eq:phasorgen}
\end{equation}
For practical purposes we use the Fourier transform
\begin{equation}
\IFT{f}(\opdvar) = \intinf f(\xi) \exp{2\jpi\xi\opdvar} \idiff\xi,
\end{equation}
substitute Eq.~(\ref{eq:phasormono}) into Eq.~(\ref{eq:phasorgen}), and
obtain
\begin{align}
\phasor[ij]{} (\opdvar) =
\sum_o
2\IFT{
\sflux[ij]{o}
}(\xobj[ij]{o} + \opdvar)
\,\exp{2\jpi\wavenumzero\opdvar + \j\phiobj[ij]{o}},
\label{eq:def:phasor}
\end{align}
where $\textphiobj[ij]{o} = 2\pi\wavenumzero\textxobj[ij]{o}$. In the following, we will use the coherent flux expression (Eq.~\ref{eq:def:phasor}) to compute the most \rev{commonly used} interferometric observables, i.e. the square visibility and the closure phase. In practice, $\textsflux[ij]{o}$ is not known a priori. However, it can be inferred from fringes obtained on an internal lamp. The coherent flux of the lamp fringes yields $\textsflux[ij]{\text{lamp}}$ (see Eq.~\ref{eq:def:phasor}). \rev{If both the spectrum of the source $\textsflux[\star]{o}$ and that of the lamp $\textsflux{\text{lamp}}$ are known, $\textsflux[ij]{o} = \textsflux[ij]{\text{lamp}} \textstrans[ij]{\text{int}} \, (\textsflux[\star]{o}/\textsflux{\text{lamp}})$ (see Eq.~\ref{eq:cohernorm}), where $\textstrans[ij]{\text{int}}$ is the transmission of the interferometer before the calibration lamp. The amplitude of the VLTI transmission is a smooth function of wavelength that can be considered constant. Its phase results from dispersive elements in the optical path. The optical elements of the VLTI before PIONIER are all in reflection, and the most dispersive ones (the M9 dichroics) have been designed to display the least differential dispersion, so that the dispersion is dominated by the air in the non-evacuated delay line. In the rest of this paper, we have considered near-zenithal observations for which the interferometric delay is small, so that the air dispersion can be ignored, as Appendix~\ref{ap:gd} shows. While the presence of dispersion in non-zenithal observations has a significant impact on the amount of smearing, it neither changes its order of magnitude nor the general conclusions of this paper. When the atmospheric dispersion must be tackled, it can be done either explicitly (Appendix~\ref{ap:gd} explains how) or implicitly by letting the parameters of Sect.~\ref{sec:isr} free in model fits, as \citet{ZHA07} do for the spectral resolution.}
As an illustration, we show in the left panels of Fig.~\ref{fig:PT} the spectral coherence transmission \textsflux[ij]{\text{lamp}} (amplitude and phase) that we measured on the internal source of PIONIER using three spectral channels across the H band on three baselines. The right panels correspond to the coherent flux of the fringes \textphasor[ij]{\text{lamp}} (amplitude and phase).
\subsection{Instrumental spectral response}
\label{sec:isr}
In this paper, after providing generic formulas using Fourier formalism, we will also give closed form expressions for direct use. To do so, we need an analytic description of the instrumental transmission ($\texttrans[i]{}$) and differential phase ($\textinsphi[ij]{}$). PIONIER's \rev{instrumental coherent flux density} is obtained on a calibration lamp (Fig.~\ref{fig:PT}, left panels)\newrev{. It} displays a near-quadratic behaviour of the differential phase and a spectral transmission intermediate between top-hat and Gaussian functions.
We therefore describe the instrumental differential phase as
\begin{equation}
\insphi[ij]{} (\xi) = \insphi[ij]{}(0) + \phishift[ij]{} (\xi/\wavenumzero) + \disp[ij]{} (\xi/\wavenumzero)^2.
\label{eq:hyp:insphi}
\end{equation}
The linear term $\textphishift[ij]{}$ in the instrumental differential phase $\textinsphi[ij]{}(\xi)$ translates into a fringe packet shift of $\textphishift[ij]{}/2\pi\wavenumzero$ with respect to the nominal zero OPD (see Fig.~\ref{fig:smearing-explanation}, bottom right panel). It is called the group delay. In a single-spectral-channel interferometer it is possible to zero it by means of fringe tracking. When several spectral channels are observed at the same time, it is no longer possible to do so in all channels simultaneously. \rev{For instance, if a central \rev{spectral} channel is centred at zero OPD, adjacent channels may be shifted with respect to it if there is a differential behaviour of the dispersive elements (such as waveguides, dichroics, or air, whose refractive index depends on wavelength) in the beam paths before the recombiner. In the bottom panels of Fig.~\ref{fig:PT} (baseline 1-3), the central \rev{spectral} channel is approximately centred at zero OPD (the solid line on the right panel \newrev{shows the envelope of the fringe packet, i.e. the amplitude of the coherent flux}) with a slope of the phase averaging to $\approx 0$ (same line in the left panel). The adjacent channels feature some shift (dashed lines on the right panels) and a non-zero phase slope (same lines on the left). Appendix~\ref{ap:gd} gives a further description of the group delay and its correction through fringe-tracking.}
The quadratic term in the instrumental differential phase $\disp[ij]{}$ has a less visible impact on the fringe packet.
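In practice, the three coefficients of Eq.~(\ref{eq:hyp:insphi}) can be estimated from a measured instrumental phase with a simple polynomial fit; the following Python sketch uses mock values (all numbers and names are hypothetical, not PIONIER calibration data):
\begin{verbatim}
import numpy as np

sigma0 = 1 / 1.65e-6                       # central wavenumber [1/m]
xi = np.linspace(-0.04, 0.04, 9) * sigma0  # sampled reduced wavenumbers
phi_meas = 0.3 + 6.0 * (xi / sigma0) + 40.0 * (xi / sigma0)**2  # mock phase

# Quadratic fit in the reduced variable xi/sigma0:
disp, gd, phi0 = np.polyfit(xi / sigma0, phi_meas, 2)
# phi0 -> phi_ins(0), gd -> group delay term, disp -> dispersion term
\end{verbatim}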
We will give results for both Gaussian and top-hat transmissions of FWHM $\dwavenum{}$:
\begin{align}
\transG[i]{}(\xi) &= \wideexp{-\frac{4 \log 2}{\dwavenum{}^2} \xi^2},
\label{eq:hyp:bandpass}\\
\transH[i]{}(\xi) &=
\begin{cases}
1 \quad &\text{if $|\xi| \le \dwavenum{}/2$},\\
0 \quad &\text{otherwise}.
\end{cases}
\end{align}
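For numerical work, these two bandpasses translate directly into code; the short Python transcription below (the function names are ours) may serve as a building block for the numerical sketches given later in this section:
\begin{verbatim}
import numpy as np

def trans_gauss(xi, dsig):      # Gaussian bandpass of FWHM dsig
    return np.exp(-4 * np.log(2) * (xi / dsig)**2)

def trans_tophat(xi, dsig):     # top-hat bandpass of FWHM dsig
    return np.where(np.abs(xi) <= dsig / 2, 1.0, 0.0)
\end{verbatim}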
\subsection{Square visibility amplitude}
The square visibility amplitude is obtained from the coherent flux using:
\begin{equation}
\vsqPS[ij]{}
= \frac1{4\normtot[ij]{}}
\intinf \phasor[ij]{}(\opdvar)
\!\cdot\! \conj{\phasor[ij]{}(\opdvar)} \idiff\opdvar,
\label{eq:def:vsqPS}
\end{equation}
where \textnormtot[ij]{} is a normalisation factor that relates to the total flux of the target ($\propto \textfluxtot{}^2$) and \rev{$\conj{x}$ stands for the complex conjugate of $x$}. In the previous equation, we substitute Eq.~(\ref{eq:def:phasor}) and expand the product into a double sum to find\rev{:}
\begin{equation}
\vsqPS[ij]{}
= \frac1{\normtot[ij]{}} \sum_{o,p}
\exp{\j(\phiobj[ij]{o} - \phiobj[ij]{p})}
\intinf
\IFT{\sflux[ij]{o}}(\xobj[ij]{o} + \opdvar)
\IFT{\sflux[ij]{p}}(-\xobj[ij]{p} - \opdvar)
\idiff\opdvar.
\end{equation}
Using the change of variables $\opdvar \rightarrow u = \opdvar + \textxobj[ij]{o}$, a correlation of Fourier transforms is recognised and simplified into the Fourier transform of a product. Thus,
\begin{equation}
\vsqPS[ij]{} = \frac1{\normtot[ij]{}} \sum_{o, p}
\IFT{\sflux[ij]{o}\sflux[ji]{p}}(\xobj[ij]{o} - \xobj[ij]{p})
\exp{\j(\phiobj[ij]{o} - \phiobj[ij]{p})}.
\end{equation}
The bandwidth smearing is contained in $\IFT{\sflux[ij]{o}\sflux[ji]{p}}$. It
can be made clearer by introducing the complex smearing
\begin{equation}
\smearing[ij]{op}(\alpha) = \frac{
\IFT{\sflux[ij]{o}\sflux[ji]{p}}(\alpha / 2\pi\wavenumzero)
}{ \IFT{\sflux[ij]{o}\sflux[ji]{p}}(0)},
\label{eq:gen:V2smearing}
\end{equation}
\rev{where $\alpha$ is an angular variable that is linked to the OPD by the relation $\alpha = 2\pi\wavenumzero\delta$.} It \rev{is} convenient to use the amplitude and phase \rev{of the smearing}: $\textenvband[ij]{op} = |\textsmearing[ij]{op}|$ is the contrast loss due to smearing and $\textphiband[ij]{op} = \arg \textsmearing[ij]{op}$ is a phase shift induced by it. We also define the flux product equivalent---the equivalent to $\flux{o}\flux{p}$ in the monochromatic case---as
\begin{equation}
\norm[ij]{op} = \intinf \sflux[ij]{o}(\xi)\sflux[ji]{p}(\xi)\idiff\xi.
\label{eq:def:norm}
\end{equation}
With these definitions, we can rearrange the square visibility amplitude:
\begin{equation}
\begin{split}
\vsqPS[ij]{} =
\sum_o & \frac{\norm[ij]{oo}}{\normtot[ij]{}}
+ \sum_{o < p}
\Bigg[\frac{2\norm[ij]{op}}{\normtot[ij]{}}
\envband[ij]{op}(\phiobj[ij]{o}-\phiobj[ij]{p})\\
&\times \cos \left(\phiobj[ij]{o}-\phiobj[ij]{p}
+ \phiband[ij]{op}(\phiobj[ij]{o}-\phiobj[ij]{p})
\right) \Bigg].
\end{split}
\label{eq:gen:vsqPS}
\end{equation}
These results are independent of the instrumental phase $\insphi[ij]{}$. If $\textenvband[ij]{op} = 1$ and $\textphiband[ij]{op} = 0$ (no smearing) this formula is equivalent to the monochromatic case (Eq.~\ref{eq:Vmono} in the case of a binary). In practice, model-fitting of square visibility amplitudes by multiple stellar systems uses Eqs.~(\ref{eq:gen:V2smearing}, \ref{eq:def:norm},~\& \ref{eq:gen:vsqPS}). Knowledge of $\textsflux[ij]{o}$, needed in Eqs.~(\ref{eq:gen:V2smearing} \& \ref{eq:def:norm}), can be inferred from fringes obtained on a calibration lamp (or a calibrator) if the spectra of both lamp and source $o$ are known, as we discussed in Sect.~\ref{sec:interferogram}.
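As an illustration of these definitions, a minimal numerical sketch of Eqs.~(\ref{eq:gen:V2smearing}~\& \ref{eq:def:norm}) in Python (the Gaussian mock of the product $\sflux[ij]{o}\sflux[ji]{p}$ and all values are ours, standing in for a tabulated calibration) evaluates the flux product equivalent and the complex smearing:
\begin{verbatim}
import numpy as np

sigma0 = 1 / 1.65e-6                      # central wavenumber [1/m]
xi = np.linspace(-3e5, 3e5, 513)          # reduced wavenumber grid [1/m]
dxi = xi[1] - xi[0]
prod = np.exp(-4 * np.log(2) * (xi / 1.1e5)**2)   # mock sigma_o * sigma_p

norm_op = np.sum(prod) * dxi              # flux product equivalent
def smearing(alpha):                      # complex smearing at phase alpha
    x = alpha / (2 * np.pi * sigma0)      # convert phase to OPD
    return np.sum(prod * np.exp(2j * np.pi * xi * x)) * dxi / norm_op

print(abs(smearing(10.0)))                # contrast loss at 10 rad (~0.74)
\end{verbatim}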
\def\ensuremath{V_\text{ins}}{\ensuremath{V_\text{ins}}}
When the different sources share the same spectrum, i.e. $\sfluxstar{o}(\xi) \propto \sfluxstar{p}(\xi)$, we may express the visibility as a function of the individual fluxes \textflux{o} and the total flux \textfluxtot{}. In Eq.~\ref{eq:gen:vsqPS}, we then use the flux products in lieu of the flux product equivalents, i.e. $\textflux[ij]{op} = \ensuremath{V_\text{ins}}\textflux{o}\textflux{p}$ and $\textflux[ij]{} = \textflux{}^2$, where
\begin{equation}
\ensuremath{V_\text{ins}}^2 = \intinf \loss[ij]{}(\xi)^2\trans[i]{}(\xi)\trans[j]{}(\xi)
\sfluxstar{}(\xi)^2 \idiff\xi\, \Big/ \intinf \sfluxstar{}(\xi)^2 \idiff\xi
\end{equation}
is the ``instrumental'' square visibility amplitude. Note that $\ensuremath{V_\text{ins}}$ also depends on the spectral profile. It only disappears in the calibration if the calibrator has the same spectral profile as the source.
In the cases of the Gaussian and top-hat transmissions with FWHM $\dwavenum{}$ around central wavenumber $\wavenumzero$ and a constant contrast loss $\loss[ij]{}$ in the spectral channel, the smearing is purely real ($\phiband[]{} = 0$) and
\begin{subequations}
\label{eq:easy:vsqPS}
\begin{align}
\envbandH{}(\alpha)
&= \sinc\left(\frac{\alpha}{2\resol{}}\right),
\label{eq:C:vsqPS}
\\
\envbandG{}(\alpha)
&= \wideexp{\left(
-\frac{\alpha^2}{32\resol{}^2\log 2}
\right)},
\label{eq:gauss:vsqPS}
\end{align}
\end{subequations}
where $\resol{} = \wavenumzero / \dwavenum{}$ is the spectral resolution. We show in Appendix~\ref{ap:smallsmearing} that, for small enough baselines, an exponential formula can be used by properly choosing the value of $\resol{}$. On real data, $\resol{}$ will need to be set to a value that differs from the spectral resolution in order to account for the departure from a Gaussian profile and the wavelength dependence of the contrast. In practice, a model fit of smeared data may leave it as a free parameter. If high precision is needed, the asymmetry of the spectral band and the slope of $\loss[ij]{}$ yield a non-zero smearing phase $\textphiband[]{}$. Cubic developments for the smearing terms $\textenvband[]{}$ and $\textphiband[]{}$ are given in Appendix~\ref{ap:smallsmearing}.
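Combining Eqs.~(\ref{eq:gen:vsqPS}~\& \ref{eq:gauss:vsqPS}) for a binary whose components share the same spectrum gives a compact model that a fit can evaluate directly; the sketch below (Python; the function name is ours, and $R = 18$ is only indicative of PIONIER's 3-channel mode) shows it together with its monochromatic limit:
\begin{verbatim}
import numpy as np

def v2_binary_gauss(phi, f, R):
    """Smeared squared visibility of a binary with flux ratio f, phase
    phi = 2*pi*sigma0*(B.s) [rad] and spectral resolution R."""
    gamma = np.exp(-phi**2 / (32 * R**2 * np.log(2)))  # smearing amplitude
    return (1 + f**2 + 2 * f * gamma * np.cos(phi)) / (1 + f)**2

phi = np.linspace(0.0, 300.0, 4)
print(v2_binary_gauss(phi, 0.6, 18))      # smeared (R ~ PIONIER 3 channels)
print(v2_binary_gauss(phi, 0.6, 1e9))     # effectively monochromatic
\end{verbatim}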
\subsection{Closure phase}
\label{sec:ana:clo}
A triple correlation, its Fourier transform (the bispectrum), or an equivalent method is generally used to determine the closure phase \citep{LOH83,ROD86}. The determination of the closure phase in direct space uses the phase of the bispectrum, given by:
\begin{equation}
\bispDS[ijk]{} = \intinf
\phasor[ij]{}(\opd[ij](t))
\phasor[jk]{}(\opd[jk](t))
\phasor[ki]{}(\opd[\rev{ki}](t))
\idiff t,
\label{eq:bispDS:1}
\end{equation}
where $t$ is time in the case of linear OPD variations. By substitution of Eq.~(\ref{eq:phasorgen}) into Eq.~(\ref{eq:bispDS:1}) and writing $\textopd[ij](t) = \textdopd[ij] t$, we obtain
\begin{equation}
\bispDS[ijk]{} =
\sum_{o,p,q}
\intinf
\phasor[ij]{o}(\dopd[ij] t)
\phasor[jk]{p}(\dopd[jk] t)
\phasor[ki]{q}(\dopd[ki] t)
\idiff t.
\label{eq:def:bispDS}
\end{equation}
It follows from Eqs.~\newrev{(\ref{eq:def:phasor} \& \ref{eq:def:bispDS})} and the closure relation $\textdopd[ij] + \textdopd[jk] + \textdopd[ki] = 0$ that
\begin{equation}
\begin{split}
\bispDS[ijk]{} &\propto
\sum_{o, p, q}
\Bigg[
\exp{\j(\phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q})}\\
&\times
\intinf
\IFTsflux[ij]{o}(\xobj[ij]{o} + \dopd[ij] t)
\IFTsflux[jk]{p}(\xobj[jk]{p} + \dopd[jk] t)
\IFTsflux[ki]{q}(\xobj[ki]{q} + \dopd[ki] t)
\idiff t
\Bigg].
\end{split}
\label{eq:int:bispDS}
\end{equation}
Using the change of variables $t \rightarrow u = t + \xobj[ij]{o}/\textdopd[ij]$, a triple cross-correlation of Fourier transforms can be recognised and expressed as the two-dimensional Fourier transform
\begin{equation}
\IFTtd{\ f \ }(\opdvar_1, \opdvar_2)
= \iintinf f(\xi_1, \xi_2)
\exp{2\j\pi(\xi_1\opdvar_1 + \xi_2\opdvar_2)}
\idiff\xi_1\idiff\xi_2
\end{equation}
of the triple product
\begin{equation}
\striple[ijk]{opq}(\xi_1, \xi_2) =
\sflux[ij]{o}(\xi_1) \sflux[jk]{p}(\xi_2) \sflux[ki]{q}
\Big(
- \frac{\dopd[ij]}{\dopd[ki]} \xi_1
- \frac{\dopd[jk]}{\dopd[ki]} \xi_2
\Big).
\label{eq:def:striple}
\end{equation}
The bispectrum therefore reads
\begin{equation}
\begin{split}
\bispDS[ijk]{} \propto
\sum_{o, p, q} \Bigg[
\IFTstriple[ijk]{opq} \Big(
\phiobj[ij]{o} - \frac{\dopd[ij]}{\dopd[ki]} \phiobj[ki]{q},&
\frac{\dopd[jk]}{\dopd[ki]} \phiobj[ki]{q}
- \phiobj[jk]{p}
\Big)\\
&\times \exp{\j\left(\phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q}\right)}
\Bigg].
\end{split}
\end{equation}
The bandwidth smearing is contained in $\IFTstriple[ijk]{opq}$. In order to make it clearer we need to introduce several terms. The triple flux product equivalent (corresponding to $\flux{o}\flux{p}\flux{q}$ in the monochromatic case) is given by
\begin{equation}
\triple[ijk]{opq} = \left| \IFTstriple[ijk]{opq}(0, 0) \right|,
\label{eq:gen:triple}
\end{equation}
the ``instrumental'' closure phase by
\begin{equation}
\insphi[ijk]{opq} = \arg \IFTstriple[ijk]{opq}(0, 0),
\label{eq:gen:insphi}
\end{equation}
and the smearing by
\begin{equation}
\smearing[ijk]{opq}(\phivar_1, \phivar_2) =
\IFTstriple[ijk]{opq}( \phivar_1 / 2\pi\wavenumzero,
-\phivar_2 / 2\pi\wavenumzero)
\,/\,\IFTstriple[ijk]{opq}(0, 0).
\label{eq:gen:smearing}
\end{equation}
The ``instrumental'' closure phase is a flux-weighted mean over the spectral channel and thus also depends on the spectrum of the source. The triple flux product equivalent can be simplified to the triple flux product ($\texttriple[ijk]{opq} \propto \textflux{o}\textflux{p}\textflux{q}$) when the sources have the same spectrum, i.e. $\textsfluxstar{o}(\xi) \propto \textsfluxstar{p}(\xi)$. Note that the instrumental closure phase cancels out in the calibration only if the sources $o$, $p$, $q$ and the calibrator all share the same spectrum.
With these notations, the bispectrum reads
\begin{equation}
\begin{split}
\bispDS[ijk]{} \propto \sum_{o, p, q}
\Bigg[
\smearing[ijk]{opq}
\Big(
\phiobj[ij]{o} - &\frac{\dopd[ij]}{\dopd[ki]} \phiobj[ki]{q},
\frac{\dopd[jk]}{\dopd[ki]} \phiobj[ki]{q}
- \phiobj[jk]{p}
\Big)\\
&\times\triple[ijk]{opq} \exp{\j\left( \phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q}
+ \insphi[ijk]{opq}
\right)}
\Bigg].
\end{split}
\label{eq:gen:bispDS}
\end{equation}
If $\textsmearing[ijk]{opq} = 1$ (no smearing) and $\insphi[ijk]{opq} = 0$ (no bandwidth-related differential phase), the formula is equivalent to the monochromatic case (Eq.~\ref{eq:Bmono} for a binary). In practice, Eqs.~(\ref{eq:def:striple}, \ref{eq:gen:triple}, \ref{eq:gen:insphi}, \ref{eq:gen:smearing}, \& \ref{eq:gen:bispDS}) allow us to model fit multiple stellar systems to smeared interferometric data. The knowledge of $\textsflux[ij]{o}$ needed in Eq.~(\ref{eq:def:striple}) can be inferred from calibration fringes obtained on an internal lamp (or a calibrator), as discussed in Sect.~\ref{sec:interferogram}.
\rev{This modelling} can be further simplified using an analytic description of the bandpass. In that case, Eqs.~(\ref{eq:gen:bispDS}~\& \ref{eq:bispDS}) can be used for the model fit of closure phases. In our cases of top-hat and Gaussian transmissions of FWHM \dwavenum{}, with a linear instrumental phase, we reorder the baselines so that $\textdopd[ki]$ has the largest absolute value, and we can assume it negative without loss of generality. Then, the smearing can be simplified to
\begin{subequations}
\label{eq:bispDS}
\begin{align}
\smearingH[ijk]{}(\phivar_1, \phivar_2) &\propto
\sinc\left(
\frac{\phivar_1 + \phishift[ijk]{}}{2\resol{}}
\right)
\sinc\left(
\frac{\phivar_2 + \phishift[ijk]{}}{2\resol{}}
\right)
\label{eq:gate:bispDS}
\\
\smearingG[ijk]{}(\phivar_1, \phivar_2) &\propto
\exp{
- \frac{
(\phishift[ijk]{} + \phivar_1)^2
+ (\phishift[ijk]{} + \phivar_2)^2 +
\left(
\phishift[ijk]{}
- \frac{\dopd[jk]\phivar_1
+ \dopd[ij]\phivar_2}
{\dopd[ki]}
\right)^2
}{
16\resol{}^2\log 2
\left(1
+ \big(\frac{\dopd[ij]}{\dopd[ki]}\big)^2
+ \big(\frac{\dopd[jk]}{\dopd[ki]}\big)^2\right)
}
}.
\label{eq:gauss:bispDS}
\end{align}
\end{subequations}
In the equations above, the ``group delay closure'' is expressed as
\begin{equation}
\phishift[ijk]{} =
\frac{\dopd[ki]\phishift[ij]{} - \dopd[ij] \phishift[ki]{}}
{\dopd[ki]}
.
\label{eq:bisp:gd}
\end{equation}
The group delay closure is the consequence of the incorrect centering of the three fringe packets on the three baselines of the telescope triplet. Because of this de-centering, the centres of these packets are not scanned at the same time. In order to yield a usable closure phase, there should still be an overlap in the time intervals when the high-contrast parts of the packets are scanned. This means that the individual group delays \textphishift[ij]{}, \textphishift[jk]{}, and \textphishift[ki]{}, and thus the group delay closure, should be of the order of a few times the spectral resolution or less ($\textphishift[ijk]{} \lesssim 2\pi\resol{}$). Since this overlap in time depends on the relative scanning speeds along the baselines, the group delay closure depends on $\dopd[ij]$, $\dopd[jk]$, and $\dopd[ki]$.
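For implementation purposes, Eqs.~(\ref{eq:gauss:bispDS}~\& \ref{eq:bisp:gd}) transcribe directly into code; in the Python sketch below (function and variable names are ours), \texttt{eta} holds the three OPD speeds $(\textdopd[ij], \textdopd[jk], \textdopd[ki])$:
\begin{verbatim}
import numpy as np

def gd_closure(eta, gd_ij, gd_ki):
    """Group delay closure; eta = (eta_ij, eta_jk, eta_ki)."""
    return (eta[2] * gd_ij - eta[0] * gd_ki) / eta[2]

def smearing_gauss(a1, a2, eta, gd3, R):
    """Gaussian closure-phase smearing (up to normalisation)."""
    eij, ejk, eki = eta
    num = ((gd3 + a1)**2 + (gd3 + a2)**2
           + (gd3 - (ejk * a1 + eij * a2) / eki)**2)
    den = 16 * R**2 * np.log(2) * (1 + (eij / eki)**2 + (ejk / eki)**2)
    return np.exp(-num / den)

# With OPD speeds (1, -2, 1) and group delays (5, 0, -5)*2*pi, the group
# delay closure is 10*2*pi, as in the test case used later in the paper:
print(gd_closure((1, -2, 1), 5 * 2 * np.pi, -5 * 2 * np.pi) / (2 * np.pi))
\end{verbatim}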
In our analytic approach to the spectral transmission, the instrumental closure phase reduces to a constant term, independent of \newrev{the} source\newrev{s}:
\begin{equation}
\insphi[ijk]{} = \insphi[ij]{}(0) + \insphi[jk]{}(0) + \insphi[ki]{}(0).
\end{equation}
Appendix~\ref{ap:disp} explains how to use the Gaussian formula if the quadratic chromatic dispersion term $\textdisp[ij]{}$ is non-zero.
\section{Consequence on companion search}
\label{sec:bin}
\subsection{Bias on the interferometric observables}
\label{sec:bias:observables}
\begin{table}
\caption{Test case used in \rev{Figs.~\ref{fig:ideal}~\& \ref{fig:phi:jitt}}. For the square visibility amplitude, the first baseline is used. The spectral resolution is, by definition, the major source of smearing. In addition, the visibility is slightly impacted by the spectral dispersion $\disp[ij]{}$. The closure phase is strongly impacted by the group delay closure $\phishift[123]{}$ (indirectly by group delays and OPD modulation speeds) and moderately by the dispersion $\disp[ij]{}$.}
\label{tab:testcase}
\begin{tabular}{ll}
\hline\hline
Binary flux ratio & 0.6\\
Effective bandpass & Gaussian\\
Spectral resolution & lines: 7, 18, 42; contours: 3--100\\
Projected telescope positions & $(0, B, 0.4B)$\\
\textit{Corresponding baselines} & $(B, -0.6B, -0.4B)$\\
OPD modulation along baselines & $\dopd[ij] = (\dopd[12], -2\dopd[12], \dopd[12])$\\
OPD bounds & $(\pm 25\lambda, \mp 50\lambda, \pm 25\lambda)$\\
Group delays & $\phishift[ij]{} = (5, 0, -5)\times2\pi$\\
\textit{Corresponding group delay closure}
& $\phishift[123]{} = 10\times 2\pi$\\
Spectral dispersion & $\disp[ij]{} = 0$\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[p]
\subfigure[Square visibility amplitude]{\includegraphics[width=\linewidth]{ideal-visibility.pdf}\label{fig:ideal:Vsq}}
\subfigure[Closure phase]{\includegraphics[width=\linewidth]{ideal-closure.pdf}\label{fig:ideal:phi3}}
\caption{Square visibility amplitude (top) and closure phase (bottom) of a binary with flux ratio 0.6 (test case of Table~\ref{tab:testcase}) observed with an interferometer with Gaussian bandpass under ideal atmospheric conditions and baselines $B$, $-0.6B$, $-0.4B$. In both figures, \emph{top panel:} interferometric observable as a function of binary separation (milliarcseconds at one micron wavelength for a 100\,m baseline) for an infinite resolution and three spectral resolutions approximately matching those of PIONIER. \emph{bottom panel:} deviation of the smeared observable with respect to the infinite spectral resolution case, shown as contours in the separation-spectral resolution plane. In the lowest panel, the behaviour change around spectral resolution $\resol{} = 8$ is explained by the transition from the single spectral channel mode (group-delay free in ideal fringe tracking conditions, \rev{since a single fringe packet can be centred around zero OPD, see Appendix~\ref{ap:gd}}) to the multiple channel observation (where \rev{the fringe packets of the different spectral channels are shifted with respect to each other and therefore cannot be simultaneously positioned at zero OPD by the fringe-tracker, see Appendix~\ref{ap:gd}}).}
\label{fig:ideal}
\end{figure*}
The first impact of the smearing is a tapering of the peak-to-peak amplitude of the oscillation of the visibility with baseline, hour angle, or spectral
channel, due to the smearing amplitude $\envband{}$. The second \newrev{impact} only concerns the closure phase in multi-channel observations\rev{. I}t originates from the imperfect alignment of the fringe packets on baseline triplets,
as measured by $\phishift[ijk]{}$. In order to make these influences clearer,
we give in Fig.~\ref{fig:ideal} the interferometric observables of a binary with a high flux ratio of 0.6, whose characteristics are given in Table~\ref{tab:testcase}.
\paragraph{Square visibility amplitude.}
Figure~\ref{fig:ideal:Vsq}, top panel, shows the theoretical smearing of the visibility amplitude of a binary as a function of reduced separation $\theta B/\lambda$ (in $\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\text{m}\smash{^{-1}}$) for three different spectral resolutions ($\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. The lower panel of the figure displays the error on the square visibility arising from not taking smearing into account, as a function of separation and spectral resolution. The result is easily generalised to binaries of different flux ratios, as the relative error on the visibility $\Delta|V^2| / |V^2|$ remains unchanged.
\paragraph{Closure phase.}
Figure~\ref{fig:ideal:phi3}, top panel, shows the theoretical closure phase of a binary for three different spectral resolutions ($\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. It can be seen at small separations (5--10\,$\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\text{m}\smash{^{-1}}$) that the intermediate spectral resolution ($\approx 18$) shows more smearing than expected for these separations, in particular more than the broad-band $\approx 7$ observing mode. The reason lies in \rev{the dispersive elements in the light beams of the interferometer and instrument that decentre fringe packets more in some spectral channels than in others, thus making it impossible to centre all fringe packets at the same time (see the imperfect centering of some spectral channels of PIONIER in Fig.~\ref{fig:PT} and a description of the group-delay tracking in Appendix~\ref{ap:gd})}. This effect is not seen in the broad band, where \rev{the single fringe packet of each baseline can be centred with a fringe tracker, thus eliminating the group delay}. This low-separation smearing approximately scales linearly with separation, as $f\textphishift[ijk]{}\theta/\resol{}\smash{^2}$, where $f$ is the flux ratio of the binary, $\theta$ the separation, and $\textphishift[ijk]{}$ the group-delay closure. (This can be obtained analytically by linearising Eq.~\ref{eq:gauss:bispDS} and normalising by the bispectrum of a point-source calibrator.) At larger separations ($\gtrsim 10\,\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\mathrm{m}\smash{^{-1}}$ in Fig.~\ref{fig:ideal:phi3}), the closure phase is impacted by a combination of the tapering of the oscillation of the visibility (a purely spectral resolution effect, as seen in the visibility in Fig.~\ref{fig:ideal:Vsq}) and the instrumental phase; the impact is relatively complex, and we can only recommend using Eq.~(\ref{eq:gauss:bispDS}) to model it. As an illustration, Fig.~\ref{fig:closim} of Appendix~\ref{ap:closim} compares the closure phase of the three spectral channels of PIONIER for a given configuration of the interferometer, and it is quite clear that the behaviour radically changes with channel and telescope triplet.
The lower panel displays the error on the closure phase arising from not taking smearing into account, as a function of separation and spectral resolution. The figure shows a sharp discontinuity at resolution $\resol{} = 8$, where the transition occurs from a single spectral channel (where the single fringe packet of each baseline is positioned at zero OPD by an ideal fringe-tracker) to spectrally dispersed fringes (with the fringe packets \rev{of each baseline} that do not align well \rev{because they are shifted with respect to each other by the instrumental phase}). Even for moderately resolved sources, percent precision requires a good enough spectral resolution ($\resol{} \gtrsim 40$ or more), adequate modelling of \rev{bandwidth} smearing, or good fringe-tracking on a single spectral channel at moderate spectral resolutions ($\resol{} \gtrsim 10$).
\subsection{Retrieving binary parameters}
\begin{figure}
\includegraphics[width=\linewidth]{binobs-uv.pdf}
\caption{$(u, v)$ coverage of a typical 100\,m baseline 4T observation (K0-A1-G1-I1) at the VLTI for an object close to the meridian, with 3 observations over a few hours.}
\label{fig:uv}
\end{figure}
We assess here the bias on the binary parameters that smearing produces. In order to model the data as \rev{realistically} as possible, we build synthetic binary fringes corresponding to a \rev{typical scenario}: a near-zenith object observed in a sequence of three sets of fringes separated by one hour using a large telescope quadruplet at VLTI (see Fig.~\ref{fig:uv} for the $u$, $v$ coverage). They are built from calibration fringes obtained by PIONIER on an internal calibration lamp, which can be considered as a point-source observation for our purpose. Then, we feed \rev{these} synthetic data to the PIONIER data reduction software and get visibility amplitudes and closure phases. They are calibrated using simulated fringes of a point-source calibrator. They are then fit with a binary model to derive the parameters of the binary. In a first step, the model is that of an unsmeared binary (Eqs.~\ref{eq:Vmono}~\& \ref{eq:Bmono}); then we use the smeared model of Sect.~\ref{sec:ana} with Gaussian bandpass (Eqs.~\ref{eq:gauss:vsqPS}~\& \ref{eq:gauss:bispDS}). \rev{Additional transmission effects of the VLTI from the telescope up to the internal calibration lamp, positioned after the delay lines, have been ignored: the near-zenith observations we consider here are dominated by PIONIER's instrumental effects (as we discuss in Sect.~\ref{sec:interferogram}). For non-zenithal observations, where the interferometric delay in the delay lines is several tens of metres, the air dispersion in the delay lines becomes a factor of the same order as PIONIER's instrumental phase and can be modelled using Appendix~\ref{ap:gd}.}
In our analysis, the separations in right ascension and declination are varied from $-30$ to $30$\,mas, or approximately 10 times the angular resolution of the interferometer, and the magnitude differences from 0.1 to 3.3 (flux ratios from 0.05 to 0.95). For each triplet of parameters, the difference between the fitted values and the input gives us the bias on the binary position and magnitude difference. The reduced chi square was determined assuming a 2\% accuracy on visibilities and 20\,mrad on closure phases, typical of single-mode instrument performances on bright objects (like PIONIER). Figure~\ref{fig:binobs-bias} shows the \rev{absolute values of the errors} and reduced chi square at each separation and position angle at the given magnitude difference of 0.55 (flux ratio of 0.6). In Fig.~\ref{fig:binobs-bias-2}, we \rev{consider possible biases and give} the median value of the \rev{error with} its confidence intervals for a given binary separation, considering all the position angles and flux ratios at that separation.
\begin{figure*}
\includegraphics{binobs-bias.pdf}
\caption{Quality of least-squares model fitting of binary parameters to smeared interferometric observables. These observables are derived from PIONIER synthetic fringes in the 3-channel spectral resolution ($\resol{} \approx 20$) using the data reduction pipeline. The contour plots give the \newrev{absolute value of the error in the model fits} for each position of the secondary, assuming a binary flux ratio of 0.6. \textit{Left:} the binary model assumes monochromatic light and absence of smearing. \textit{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \textit{Top:} \rev{absolute value of the} error on the binary separation. \textit{Middle:} \rev{absolute value of the} error on the magnitude difference. \textit{Bottom:} reduced chi squares assuming 2\% error on square\newrev{d} visibilities and 20~mrad on closure phases.}
\label{fig:binobs-bias}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{bias-smearing-allratios.pdf}
\caption{\newrev{The solid lines give the median value of the errors on the fitted binary parameters} as a function of binary separation. \newrev{If non-zero and systematically of one sign, the median indicates a bias. The grayed areas are the} confidence intervals for the errors (dark gray 1-$\sigma$, light gray 2-$\sigma$). At a given separation, all binary orientations and flux ratios were considered. \textit{Left:} the binary model assumes monochromatic light and absence of smearing. \textit{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \textit{Top:} \newrev{error} on the binary separation. \textit{Middle:} \newrev{error} on the magnitude difference. \textit{Bottom:} reduced chi square.}
\label{fig:binobs-bias-2}
\end{figure*}
\paragraph{Smearing-free binary model.} A binary model with the classical expression for the visibility amplitude and closure phase (Eqs.~\ref{eq:Vmono}~\& \ref{eq:Bmono}) is fitted to synthetic PIONIER data with the three-channel spectral resolution.
The left panel of Fig.~\ref{fig:binobs-bias} displays from top to bottom the \newrev{absolute value of the error} on the secondary's position, the \newrev{absolute value of the error} on the magnitude difference, and the reduced chi square for errors of 2\% and 20\,mrad on individual measurements of square visibility amplitudes and closure phases respectively. We checked that the results for other flux ratios are similar. The \newrev{errors (with median value and confidence intervals)} for the parameters are given in Fig.~\ref{fig:binobs-bias-2} (left panel) as a function of separation when the flux ratio is allowed to vary between detectable limits (0.05 to 0.95). \newrev{The median value of the error indicates a bias, if it is non zero and consistently of one sign.}
The main impact of the smearing is a degradation of the goodness of fit at all separations, followed by errors on the flux ratio and separation at moderate separations, and a clear bias of both observables at larger separations. In our models, the secondary is dimmer than the input of the simulation \newrev{more often than not} and the separation tends to be smaller \newrev{more often than not}. \newrev{(For instance, the confidence intervals on the errors of Fig.~\ref{fig:binobs-bias-2} show that the error on the separation is approximately 5 times more likely to be negative than positive at a separation of 30\,mas.)} The \newrev{apparent dimming of the secondary} is easily explained by the tapering of the fringe contrast that occurs due to smearing. The \newrev{bias on separation} is independent of smearing as we will see later on.
Even at moderate separations (5--10\,mas) the reduced chi square is around 3. However, the errors on the flux ratio and positions become significant (50\,$\mu$as and 20\,mmag) only at higher separations ($\gtrsim 15$\,mas), as Fig.~\ref{fig:binobs-bias} shows. At first sight, this seems to contradict the trend of Sect.~\ref{sec:bias:observables}. In that section, we have found a significant smearing of the closure phase at small separations, as a result of the imperfect centering of fringe packets in an observation with multiple spectral channels. We easily reconcile these findings by noting that, as an average over the spectral band, the group delay is zero, i.e. both ends of the band have group delays of the same magnitude but opposite signs; thus their respective impacts on the observables approximately cancel out in the fit. The deviation of the individual spectral channels from the average over the band still explains the larger chi square. (Fig.~\ref{fig:closim} in Appendix~\ref{ap:closim} shows how the closure phases are impacted differently for the three spectral channels of PIONIER in low resolution mode.)
\paragraph{Smeared binary model.} We performed similar fits to synthetic smeared fringes of a binary \rev{by} using the Gaussian formulas for the smearing (see Sect.~\ref{sec:ana}). The \newrev{absolute values of the errors} on the position and flux ratio are given for a binary with a flux ratio of 0.6 in the right panel of Fig.~\ref{fig:binobs-bias}. The \rev{errors} on the position and magnitude difference, and the quality of the fit, are given in the right panel of Fig.~\ref{fig:binobs-bias-2} for a wide range of flux ratios. \newrev{In Fig.~\ref{fig:binobs-bias-2}, the median value of the error indicates a bias if it is non-zero and consistently of one sign.}
Taking the smearing into account eliminates most of the errors and bias on the flux ratio. It also largely improves the quality of the fit, with a reduced chi square of 3 found at significant separations ($\gtrsim 15$\,mas) in \rev{most cases}. The errors on the separation are improved at all separations, but \rev{the bias remains at larger separations}. We \rev{have found that the bias is related} to the uncertainty on the effective wavelength of the interferometer, which varies by $\approx 0.1$\% across baselines on PIONIER; this phenomenon is independent \rev{of} our adequate modelling of the smearing. \rev{It is difficult to calibrate in the first place, because a deviation of the pie\newrev{z}o scan speed from its nominal value has exactly the same observable consequence. (We note that including a proper spectral calibration in the instrument would solve this problem.)} At 30\,mas of separation, \rev{a 0.1\% bias translates into 30$\,\mu$as}, which is what we indeed find: \rev{the solid lines in the top panels of Fig.~\ref{fig:binobs-bias-2} show this bias both in the monochromatic model and the smeared one.} At specific binary parameters, seen as islands of high \rev{error} values in Fig.~\ref{fig:binobs-bias}, \rev{the discrepancy} originates from the difference between the smeared visibility and the Gaussian model: this happens close to smearing-induced phase jumps (see Fig.~\ref{fig:closim} of Appendix~\ref{ap:closim} for a comparison between Gaussian smearing and simulated values). High-contrast binaries \rev{do not feature these phase jumps} and are not impacted. For precision work \rev{on high to moderate flux ratio binaries, we strongly recommend discarding closure phases} close to predicted jumps.
\section{Modelling the atmosphere}
\label{sec:atm}
\label{sec:atm:temp}
The estimators of the interferometric observables have been chosen to be mostly immune to atmospheric biases in the typical interferometric regime of a moderately resolved source, \rev{i.e. when bandwidth smearing can be ignored}. In this section, we investigate possible biases when \rev{bandwidth smearing becomes significant}, as \citet{ZHA07} did for IOTA's closure phase estimator.
For temporal scanning, it is possible to write the differential piston---the variable differential phase induced by the atmosphere---as a function of OPD since time and OPD are linked \citep[see for instance][]{jitter}. The jittered coherent flux can be expressed as a function of the ideal coherent flux
\begin{equation}
\phasorjitt[ij]{}(\opdvar) =
\phasor[ij]{}(\opdvar + \piston[ij](\opdvar))
\wideexp{\left[
-\frac16 \left(
\pi\wavenumzero
\pderiv{\piston[ij]}{\opdvar}(\opdvar)
\right)^2\right],
}
\label{eq:coherjitt}
\end{equation}
\rev{where $\textpiston[ij]$ is the atmospheric differential piston on baseline $ij$.} The exponential term is the contrast loss due to the piston variation during the integration of one OPD step of a temporal scan. It relies on the assumption that the spectral envelope of the fringes does not have features as sharp as the fringe frequency and that the integration during one OPD step is fast enough (of the order of \rev{a} millisecond in practice) to allow for a linearisation of the piston.
\subsection{Orders of magnitude}
\label{sec:atm:om}
An analytic approach to the atmospheric turbulence can be taken, using the
assumption that scanning is fast enough for the piston to vary linearly during
a sub-second scan, i.e. $\textpist[ij]{} = \textpist[ij]{0} +
\textpist[ij]{1} \textopd[ij]$, where $\textpist[ij]{0}$
is the group-delay tracking error and $\textpist[ij]{1}$ the rate of piston variation during the scan. $\textpist[]{0}$ and $\textpist[]{1}$ are random variables when statistics over a large number of scans are derived. Using this approach, the coherent flux is:
\begin{align}
\begin{split}
\phasorjitt[ij]{} (\opd[ij]) &=
\sum_o
2 \IFTsflux{o}(\xobj[ij]{o} + (1 + \pist[ij]{1})\opd[ij] + \pist[ij]{0})
\\&\qquad \times \exp{
\j\phiobj[ij]{o}
+ 2\jpi\wavenumzero[(1+\pist[ij]{1})\opd[ij] + \pist[ij]{0}]
- \frac16 (\pi\wavenumzero\pist[ij]{1})^2
}.
\end{split}
\label{eq:atm:phasor}
\end{align}
This approach can be used to determine the orders of magnitude of the atmospheric effects.
\paragraph{Visibility.}
The piston variation term $1 + \textpist[ij]{1}$ multiplies the OPD variable in Eq.~(\ref{eq:atm:phasor}), so we recognise it as a scaling factor. $\textpist[ij]{0}$ is a mere shift of the central OPD and has no impact---the square visibility does not depend on centering. Therefore, we can link the jittered visibility to the ideal case:
\begin{equation}
\vsqPS[ij]{\text{jit}} = \frac{1}{1+\pist[ij]{1}} \vsqPS[ij]{\text{ideal}}
\wideexp{-\frac13 (\pi\wavenumzero\pist[ij]{1})^2}.
\end{equation}
The impact of atmospheric jitter is independent \rev{of} the geometry of the source and, thus, of the smearing. For all separations it can be calibrated out if the science target and calibrators are observed under similar atmospheric conditions.
\paragraph{Closure phase.} The group-delay tracking term $\textpist[ij]{0}$ can be seen as a fringe shift that adds to the predicted fringe position $\textphishift[ij]{} \rightarrow \textphishift[ij]{} + 2\pi\wavenumzero\textpist[ij]{0}$ and the linear variation of the piston can be seen as a scanning velocity change $\textdopd[ij] \rightarrow \textdopd[ij](1 + \textpist[ij]{1})$. With these substitutions, the formulas of Sect.~\ref{sec:ana:clo} can be used directly to determine the jittered closure phase. As we have seen, the predominant impact of the bandwidth smearing on the closure phase is the fringe decentering $\textphishift[ij]{}$, so we expect the group-delay tracking errors to be the main source of bias.
\subsection{Numerical modelling}
\begin{figure}
\includegraphics[width=\linewidth]{pdf/jittered-interferogram.pdf}
\caption{One of the simulated temporal scans. The deformation of the envelope is correlated with the piston slope, and the accordion-like features with variations of its slope. \textit{Top:} piston; \textit{Bottom:} simulated fringes.}
\label{fig:interf:jitt}
\end{figure}
In the high-frequency regime, the pistons at the different stations can be considered as uncorrelated when the baselines are larger than the outer scale of turbulence $\mathcal{L}_0$ \citep{KEL07}. With a median value $\mathcal{L}_0 = 22$\,m at Paranal \citep{MAR00}, the baselines of the medium and large telescope quadruplets used with PIONIER normally fulfil the criterion. At other sites, for smaller baselines, or under relatively uncommon atmospheric conditions at Paranal, the pistons can be correlated. This correlation decreases the amount of atmospheric jitter for a given coherence time and seeing, which in turn tends to decrease the bias on the interferometric observables. Therefore, we model the random piston $\piston[i](t)$ using its spectral density
\begin{equation}
\DFT{\piston[i]}(\nu) = A\nu^{-B} \exp{\j\Phi^i(\nu)},
\end{equation}
where $A$ and $B$ are constants and $\Phi^i(\nu)$ is chosen randomly for each sampled temporal frequency $\nu$. For Kolmogorov turbulence, the fast scan ($\ll 1$\,s) regime has $B = 17/6$ \citep{CON95}, but there is \rev{experimental evidence \citep{DIF03}} that the slope is not as steep at VLTI, \rev{with simulations by \citet{ABS06} explaining it in terms of the piston induced at high frequency by the imperfect adaptive optics correction \citep[``bimorph piston'', see][]{VER01} and wavefront errors produced by the injection into single-mode waveguides \citep[``coupled piston'', see][]{RUI01}. \citet{LIN99} have also measured a deviation from the Kolmogorov behaviour at PTI.} We used $B = 2$, which experimentally reproduces well the accordion features of temporal scans obtained under below-average atmospheric conditions (see Fig.~\ref{fig:interf:jitt}). We normalise $A$ to match the group-delay tracking rms in the differential piston $\piston[ij] = \piston[j] - \piston[i]$.
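A possible numerical implementation of this piston model is sketched below (Python; the normalisation to a target rms and the seed handling are our own choices):
\begin{verbatim}
import numpy as np

def piston_series(n, dt, rms, B=2.0, seed=0):
    """Random piston with |FT| ~ nu**-B and random spectral phases."""
    rng = np.random.default_rng(seed)
    nu = np.fft.rfftfreq(n, dt)
    spec = np.zeros(nu.size, dtype=complex)
    spec[1:] = nu[1:]**(-B) * np.exp(2j * np.pi * rng.random(nu.size - 1))
    p = np.fft.irfft(spec, n)
    return p * rms / p.std()              # normalise A to the target rms

p = piston_series(n=2048, dt=1e-3, rms=1e-6)   # 2 s scan, 1 micron rms
\end{verbatim}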
By substituting this piston model in Eq.~(\ref{eq:atm:phasor}), we perform a numerical integration of Eqs.~(\ref{eq:def:vsqPS}~\& \ref{eq:def:bispDS}) and obtain the jittered visibility amplitude and closure phase.
\begin{figure*}
\includegraphics{jittered-closure.pdf}
\caption{Bias on the closure phase resulting from atmospheric piston in temporal scans, assuming that the static smearing is correctly modelled. The x-axis shows the reduced binary separation in milliarcseconds-hectometres of baselines per micron of wavelength (below) or the OPD between binary components (above). \textit{Top:} bias and statistical errors for three spectral resolutions corresponding to PIONIER at the VLTI. \textit{Bottom panel:} bias in the spatial resolution-spectral resolution plane. The bias decreases quickly with spectral resolution.}
\label{fig:phi:jitt}
\end{figure*}
\subsection{Bias on the observables}
As we have seen in Sect.~\ref{sec:atm:om}, the atmosphere introduces little bias on the square visibility amplitude, which we confirmed numerically. However, the bias can be substantial on the closure phase. Figure~\ref{fig:phi:jitt} displays in its top panel the bias on the closure phase of our test-case binary as a function of separation, for the three spectral resolutions $\resol{} = 7$, 18, 42 corresponding to PIONIER's modes. \rev{For each separation, baseline, and spectral resolution considered in the simulation}, 100 random scans with a \rev{remaining scatter of the fringe tracking of $6\lambda$ (typical value under average conditions) have been generated. The closure phase on the telescope triplet is the average closure phase of the scans. To better identify the biases}, the closure phase of a \rev{jitter-free observation} has been subtracted from the results. In the lower panel of the figure, the bias on the phase is given in the separation-spectral resolution plane. As one can see, the impact of the atmosphere is very important at low resolution but quickly vanishes for $\resol{} \gtrsim 20$. For three spectral channels across a typical IR band, the error on the phase is at most a few degrees.
\section{Discussion \& Conclusion}
\subsection{Impact of the instrument and visibility estimator}
As already discussed by \citet{PER05}, the square visibility amplitude is impacted differently for different estimators that would otherwise be equivalent in the absence of smearing. Not only is the amount of smearing different, but the behaviour can be changed. Because it is a popular recombination method and it illustrates this argument, we have given the formulas for the smeared complex visibility of a time-modulated ABCD recombiner in Appendix~\ref{ap:ABCD}. In Sect.~\ref{sec:ana}, we have seen that the square visibility amplitude is not impacted by the fringe centering in full scans processed by Fourier analysis: in Eq.~(\ref{eq:gen:V2smearing}), the smearing depends only on the source distances $\textphiobj[ij]{o} - \textphiobj[ij]{p}$, and is independent \rev{of} the absolute source position and of the group delay $\textphishift[ij]{}$. Conversely, the ABCD visibility estimator shows an explicit dependence on $\textphiobj[ij]{o}$ and $\textphishift[ij]{}$ (see for instance Eq.~\ref{eq:ABCD:gauss:smearing}), and this propagates to the square visibility estimator.
Also, we have clearly shown that instrumental features such as the OPD modulation scheme \rev{(ABCD or Fourier mode, stroke speeds on the different baselines)} or the chromatic dispersion have a strong impact on the closure phase. In particular, the smearing behaviour of the closure phase of PIONIER (Fig.~\ref{fig:closim}) shows different trends on different triplets or different spectral channels: on the one hand, different telescope triplets are impacted differently because of the different OPD modulations; on the other hand, different spectral channels of the same triplet behave in different manners, as a consequence of different chromatic signatures. While the square visibility amplitude does not show a strong dependence on the instrumental signature for full scans processed by Fourier analysis (Sect.~\ref{sec:ana}), this is not necessarily the case for other schemes. For instance, a time-modulated ABCD method displays an impact on both the visibility and the phase (see Eq.~\ref{eq:ABCD:gauss:smearing} in Appendix~\ref{ap:ABCD}).
\rev{We therefore stress} that each data reduction pipeline and each instrument require their own modelling of the smearing. In this paper, we have provided a generic formalism which can be used as is for VLTI/PIONIER and probably with little adaptation to other instruments that temporally scan most of the fringe packet.
\subsection{When only part of the fringe packet is sensed}
Our developments make the implicit assumption that most of the flux of \newrev{the} fringe packet is measured, i.e. that the OPD range is significantly larger than the FWHM of the fringe envelope. Actually, our developments still hold if the centres of the fringe packets originating from the different parts of the source are scanned but the extremities of the fringe packet are cropped, provided that the cropping is not too aggressive. \rev{In the case of PIONIER, the partial cropping on some baselines does not prevent a good agreement between simulated fringes and our analytic development, as Fig.~\ref{fig:closim} shows.}
However, this is clearly not the case in the ABCD method when a fringe-tracker locks the recombiner on the ``central'' fringe \citep[e.g.][]{SHA80}. While the smearing can be derived theoretically for this method (see Appendix~\ref{ap:ABCD}), \rev{its magnitude will depend on the location of the fringe (i.e. the OPD) onto which the fringe tracker locks. In the aforementioned Appendix it is shown that the visibility depends on the position of a source, which in turn depends on the value of the group delay \textphishift[ij]{} (see Eq.~\ref{eq:ABCD:beta}). For relatively compact objects, the fringe tracker locks onto the brightest fringe or a local zero of the group delay, and possible biases are calibrated out when observing an (almost) point-like calibrator under similar conditions. When a source is smeared, the fringe tracker does not necessarily behave in the same manner on source and calibrator, since there is no longer an obvious location of a central fringe (e.g. in the extreme case of a double fringe packet, it may lock on either packet). Therefore,} it is quite likely that instruments sensing the central fringe of sources more resolved than a few beam widths \rev{(i.e. a few times the resolution power of the interferometer) will deliver altered measurements}, unless \rev{(a)} a high spectral resolution \rev{is used ($\resol{} \gg \textphishift[ij]{}$ in Eq.~\ref{eq:ABCD:beta})} or \rev{(b) the fringe tracking scheme can be modelled with enough detail to know on which part of a given smeared fringe packet it locks}. In particular, instruments that target high-accuracy astrometry with the ABCD method, like GRAVITY \citep{GRAVITY} and PRIMA \citep{PRIMA}, will require that both the tracking reference and the science target are not very resolved.
\subsection{Image reconstruction}
Our approach clearly targets parametric analysis, by providing formulas to model fit interferometric data with systems of compact sources. Image reconstruction, however, usually relies on the Fourier relation between visibility and image, a relation which is broken at finite bandwidth. Image reconstruction is thus made more difficult, as \citet{BRI89} already noted in radio interferometry.
\subsection{Dealing with bandwidth smearing in practice}
The angle of attack of radio astronomers to limit bandwidth smearing
(see e.g.\ \citet{BRI89}) is to restrict its effects either by
increasing the spectral resolution to optimise the interferometric
field of view or by centering the phase tracking delay optimally to
reduce the radial spread. Optical interferometry users do not
necessarily have such flexibility. One of the important differences
between the wavelength regimes is that, in the optical, because the
arrays have \rev{many fewer} telescopes, most users do not actually
reconstruct images but rather model the interferometric
observables directly. This \rev{has} been done to an extreme level of precision,
where visibilities are measured to a fraction of a percent
\citep[e.g.][]{Absil:2008} and closure phases to a fraction of a degree
\citep[see e.g.][]{Zhao:2011}. The particularly large impact of the
smearing, even for moderately resolved sources, undermines the idea
that the parameters of a large number of objects might be derived
effortlessly using the traditional techniques.
It therefore appears reasonable to adopt a two-step strategy to deal with
bandwidth smearing: first, by \emph{limiting the static instrumental smearing
by design}, and second, by \emph{operating the instrument under
conditions that allow a proper modelling of the induced biases}.
\emph{Limiting the instrumental smearing.} We have seen that the ``group delay
closure'' is the major contributor to the static smearing of the closure
phase \rev{for instruments that operate in Fourier mode}; it depends on the
group delays and on the OPD modulation scheme. The scanning speed scheme can be
chosen so as to minimise the average group delay closures. For the
\rev{temporal ABCD, visibility amplitudes and closure phases are directly
impacted by the group delay, and this mitigation can no longer be used. Since the
group delay is mostly produced by a static chromatic dispersion in the instrument (waveguides, optical elements), an} integrated approach
to differential dispersion and birefringence compensation can be attempted, as
discussed by \citet{LAZ12}. Solutions exist that can provide guided or free-space
optics instruments with dispersion compensation \citep{Vergnole:2005}.
\rev{Correcting the air dispersion in the delay lines in real time may prove
more difficult to implement than a static correction of the dispersion in the optical elements, so that evacuated delay lines are probably part of the solution for longer baselines ($\gg 100$\,m) \newrev{and at shorter wavelengths, where the air dispersion is larger}.}
\emph{Modelling the biases.} We have shown that bandwidth smearing can be
modelled provided that a moderate spectral resolution is used (the first
obvious step) \rev{and} the \rev{estimators of the observables are properly
calculated}. In very low spectral resolution or full-band ($\resol{} \sim
5$) observations, atmospheric effects must also be decently constrained. For the
latter, initial studies \citep[e.g.][]{LIN99,DIF03} have shown the correlation
between atmospheric turbulence and the low-frequency statistics of the piston, but these
are not necessarily well adapted to sub-second exposures
\citep[e.g.][]{ABS06}. Further dedicated characterisation of the piston statistics
\rev{vs. monitored atmospheric properties} would be needed. In summary, the
ultimate tool to obtain a smeared source's \rev{properties} will simulate the
instrumental visibility numerically, taking into account the instrumental
signatures (in particular a dedicated spectral calibration) and the atmosphere.
\subsection{Concluding remarks}
\beginrevision
Optical interferometry is increasingly used for precise measurements of high flux ratios and/or separations. Applications of these precision techniques range from the detection of hot dust components around debris-disc host stars, or the direct detection of hot Jupiters, to the accurate astrometry of binary systems in search of precise mass determinations.
We have focused our work on a rarely studied effect that can significantly alter these astrophysical measurements, the so-called bandwidth smearing. This bias-inducing phenomenon arises from the wavelength dependence of the characteristics of the instrument, the atmosphere, and the source. We have modelled its impact by analysing its influence on the instrumental fringe contrast and determined how it alters the visibility amplitudes and closure phases. The magnitude of this effect depends, for a given instrument, on the spectral resolution, on the extent of the observed field of view, and in some cases on the atmospheric piston.
We have demonstrated analytically how to calibrate for this degradation in the context of popular temporal fringe scanning instruments, and applied this analysis to the specific case of binary systems by computing the errors or biases induced on the separation vector and the flux ratio.
We have further discussed ``real-life'' constraints such as the influence of the atmospheric piston, the use of different fringe encoding schemes, or imperfect fringe tracking. We believe that the current analysis can be used with little effort to correct for potential bandwidth smearing biases in almost any astrophysical case.
\endrevision
\section*{Acknowledgements}
\rev{We would like to thank an anonymous referee and Chris Haniff who helped us to improve this paper. This research has made use of NASA's Astrophysics Data System and the free software packages Maxima, Yorick, and Python. It has been supported by Comit\'e Mixto ESO-Chile and Basal-CATA (PFB-06/2007).}
\section{Introduction}
The emergence of infrastructure-free wireless communications networks (e.g., unmanned aerial vehicle (UAV) communications \cite{UAV}) brings new threats to public security, as they may be misused by criminals or terrorists to commit crimes or launch terror attacks \cite{paradigm}. To detect and stop such misuse, there is a growing need for authorized parties to surveil them via legitimate eavesdropping \cite{Rlefading,jammingfading,spoofrelay,HTran} and intervene in them via legitimate jamming and spoofing \cite{spoofbpsk,Xspoof}. This introduces a paradigm shift from the conventional secrecy communications \cite{YZou} (defending against eavesdropping \cite{Gopala2008,ZhouMahamHjorungnes2011,KapetanovicZhengRusek2015}, jamming, and spoofing \cite{Xiong2015}) to the new wireless surveillance and intervention legitimately exploiting these attacks \cite{paradigm}.
In the literature, there have been several prior works \cite{Rlefading,jammingfading,spoofrelay,HTran} investigating the wireless surveillance of a point-to-point suspicious communication link from Alice (suspicious transmitter) to Bob (suspicious receiver) via a legitimate monitor. Conventionally, the monitor employs {\it passive eavesdropping} to intercept the communicated message. This method, however, makes it difficult to overhear effectively when the monitor is located far away from Alice. To overcome this issue, the authors in \cite{Rlefading,jammingfading} proposed {\it proactive eavesdropping via noise jamming} by enabling the monitor to operate in a {\it full-duplex} mode. In this method, the monitor sends artificial noise (AN) to interfere with the Bob receiver, reducing its received signal-to-interference-plus-noise ratio (SINR) and the communication rate, thus facilitating eavesdropping at the same time. As full-duplex radios are employed, this method requires the monitor to efficiently cancel the self-interference (SI) from its jamming to eavesdropping antennas. Furthermore, the authors in \cite{spoofrelay} proposed {\it proactive eavesdropping via hybrid jamming}, where the full-duplex monitor forwards its overheard message from Alice combined with an AN. At the Bob receiver, the message forwarded by the monitor is destructively combined with the original message from Alice to reduce Bob's received signal strength, while the AN increases its received interference power. As a result, hybrid jamming can more effectively reduce Bob's received SINR (and the communication rate) than noise jamming, and therefore can help achieve better eavesdropping performance. Nevertheless, hybrid jamming is more difficult to implement, since the monitor not only needs to perform the SI cancellation (SIC), but also requires instantaneous message forwarding to ensure the two messages' destructive combining at the Bob receiver, which is technically challenging due to hardware constraints and channel acquisition.
\begin{figure*}
\centering
\subfigure[Mode (I): passive eavesdropping over both hops.]{
\label{fig:subfig:a}
\includegraphics[width=2in]{mode1.eps}}
\hspace{0.1in}
\subfigure[Mode (II): proactive eavesdropping via noise jamming over the first hop.]{
\label{fig:subfig:b}
\includegraphics[width=2in]{mode2.eps}}
\hspace{0.1in}
\subfigure[Mode (III): proactive eavesdropping via hybrid jamming over the second hop.]{
\label{fig:subfig:c}
\includegraphics[width=2in]{mode3.eps}}
\caption{A wireless surveillance scenario, where a monitor aims to legitimately eavesdrop a two-hop suspicious communication link from Alice to Bob through a relay.}
\label{fig:subfig}\vspace{-2em}
\end{figure*}
In practice, like most infrastructure-free wireless communications, the suspicious communication is likely to operate in a multi-hop manner to extend the communication range. This motivates us to investigate new surveillance approaches by exploiting such a multi-hop nature to improve the eavesdropping performance. For example, the monitor can perform passive eavesdropping over multiple hops to intercept more than one copy of the suspicious message and thus overhear more clearly. Furthermore, noting that the end-to-end communication rate of a multi-hop communication is highly dependent on the SINR of each individual hop, the monitor can reap the benefits of proactive eavesdropping in a half-duplex way, by jamming over one hop to reduce the end-to-end communication rate and thereby intercept more easily over another hop. Such half-duplex proactive eavesdropping efficiently eliminates the demanding requirements of SIC and instantaneous message forwarding found in prior works with full-duplex monitors.
For the purpose of initial investigation, we consider the wireless surveillance of a simplified two-hop suspicious communication link via a {\it half-duplex} legitimate monitor, where the monitor aims to eavesdrop the suspicious message communicated from Alice to Bob through a relay. By exploring the suspicious link's two-hop nature, the monitor can adaptively choose among the following three eavesdropping modes to improve the eavesdropping performance: (I) passive eavesdropping to intercept both hops to decode the message collectively, (II) proactive eavesdropping via noise jamming over the first hop, and (III) proactive eavesdropping via hybrid jamming over the second hop. Note that due to the message causality issue, in mode (II), only noise jamming is feasible at the first hop as the suspicious message is not overheard yet; while in mode (III), more advanced hybrid jamming is implementable at the second hop by exploiting the overheard signal at the first hop. Under this setup, we maximize the eavesdropping rate at the monitor by jointly optimizing the eavesdropping mode selection as well as the transmit power for noise and hybrid jamming. Numerical results show that the eavesdropping mode selection significantly improves the eavesdropping rate as compared to each individual eavesdropping mode under both fixed and time-varying channels.\vspace{-0.5em}
\section{System Model}
In this paper, we consider the wireless surveillance problem as shown in Fig.~\ref{fig:subfig}, where a legitimate monitor aims to eavesdrop a two-hop suspicious communication link from Alice to Bob. The communication link goes through a half-duplex and decode-and-forward (DF) relay for extending the communication range between Alice and Bob. We consider a block-based flat fading channel model, where wireless channels remain unchanged over a time block of our interest. Let $h_{\rm AR}$, $h_{\rm RB}$, $h_{\rm AM}$, $h_{\rm MR}$, $h_{\rm RM}$, and $h_{\rm MB}$ denote the channel coefficients from Alice to the relay, from the relay to Bob, from Alice to the monitor, from the monitor to the relay, from the relay to the monitor, and from the monitor to Bob, respectively. It is assumed that the suspicious users (Alice, the relay, and Bob) only know the channel state information (CSI) of their suspicious links (i.e., $h_{\rm AR}$ and $h_{\rm RB}$), and they use fixed transmit powers but can adaptively adjust the end-to-end communication rate based on the SINRs of both hops. In practice, the monitor operates in a half-duplex manner to overhear the suspicious communication. It is assumed that the monitor knows the global CSI of $h_{\rm AR}$, $h_{\rm RB}$, $h_{\rm AM}$, $h_{\rm MR}$, $h_{\rm RM}$, and $h_{\rm MB}$. This assumption is made to characterize the fundamental performance limits of the legitimate eavesdropping in this case, and our design is extendable to a practical learning-based monitor without perfect CSI initially, as in \cite{jammingfading}. By exploring the two-hop nature of the suspicious communication, the half-duplex monitor can operate in the following three eavesdropping modes, respectively.\vspace{-0.5em}
\subsection{Mode (I): Passive Eavesdropping over Both Hops}
In mode (I), as shown in Fig.~\ref{fig:subfig:a}, the monitor combines the overheard suspicious information from both hops for collective eavesdropping. Consider first the suspicious communication. In the first hop, let $s$ and $P_{\rm A}$ denote Alice's transmit suspicious message and the fixed transmit power, respectively. Here, $s$ is a circularly symmetric complex Gaussian (CSCG) random variable with zero mean and unit variance, i.e., $s \sim \mathcal{CN}(0,1)$. The received signal at the relay is $y{\rm _R}=\sqrt{P_{\rm A}}h_{\rm AR}s+n_1$, where $n_1 \sim \mathcal{CN}(0,\sigma^2)$ denotes the additive white Gaussian noise (AWGN) at the relay receiver. After the relay decodes the suspicious message $s$, in the second hop it uses the same codebook to send $s$ to Bob by using the fixed transmit power $P_{\rm R}$. The received signal at Bob is $y_{\rm B}=\sqrt{P_{\rm R}}h_{\rm RB}s+n_2$, where $n_2 \sim \mathcal{CN}(0,{\sigma}^{2})$ denotes the AWGN at the Bob receiver. In the two hops, the received signal-to-noise ratios (SNRs) at the relay and Bob are denoted as ${\gamma}_{\rm R}=\frac{|h_{\rm AR}|^2P_{\rm A}}{\sigma^{2}}$ and ${\gamma}_{\rm B}=\frac{|h_{\rm RB}|^{2}P_{\rm R}}{{\sigma}^{2}}$, and the corresponding achievable rates (in bps/Hz) are respectively
\begin{align}
r_{\rm R}=\frac{1}{2}\log_{2}\left(1+\frac{|h_{\rm AR}|^2P_{\rm A}}{\sigma^{2}}\right),
\label{equa:jnl:3}
\end{align}
\begin{align}
r_{\rm B}=\frac{1}{2}\log_{2}\left(1+\frac{|h_{\rm RB}|^{2}P_{\rm R}}{{\sigma}^{2}}\right),
\label{equa:jnl:4}
\end{align}
where $\frac{1}{2}$ indicates that each hop occupies only a half of the normalized time-frequency slot. The end-to-end suspicious communication rate is given as $\min \left(r_{\rm R},r_{\rm B}\right)$.
In the two hops, the monitor passively overhears the suspicious message from Alice and the relay, respectively, where the received signals are $y_{\rm M1}=\sqrt{P_{\rm A}}h_{\rm AM}s+n_3$ and $y_{\rm M2}=\sqrt{P_{\rm R}}h_{\rm RM}s+n_4$ with $n_3\sim \mathcal{CN}(0,{\sigma}^{2})$ and $n_4\sim \mathcal{CN}(0,{\sigma}^{2})$. By employing the maximum ratio combining (MRC) to decode $s$, the SNR and the achievable rate at the monitor in mode (I) are respectively ${\gamma}_{\rm M}^{(\rm I)} =\frac{P_{\rm A}|h_{\rm AM}|^2+P_{\rm R}|h_{\rm RM}|^2}{{\sigma}^{2}}$ and
\begin{align}
r_{\rm M}^{(\rm I)}=\frac{1}{2}\log_{2}\left(1+\frac{P_{\rm A}|h_{\rm AM}|^2+P_{\rm R}|h_{\rm RM}|^2}{{\sigma}^{2}}\right),
\label{equa:pass:3}
\end{align}
where the superscripts of ${\gamma}_{\rm M}^{(\rm I)}$ and $r_{\rm M}^{(\rm I)}$ represent mode (I).
Now, we formally define the eavesdropping rate at the monitor. Similarly to \cite{Rlefading,jammingfading,spoofrelay}, when the achievable rate $r^{(\rm I)}_{\rm M}$ at the monitor is no smaller than the end-to-end suspicious communication rate $\min \left(r_{\rm R},r_{\rm B}\right)$, the monitor can successfully decode the suspicious message without any error. In this case, the eavesdropping rate is defined as $R_{\rm eav}^{(\rm I)} = \min \left(r_{\rm R},r_{\rm B}\right)$. Otherwise, if $r_{\rm M}^{(\rm I)}$ is smaller than $\min \left(r_{\rm R},r_{\rm B}\right)$, the monitor cannot decode the suspicious message without errors, and the eavesdropping rate is $R_{\rm eav}^{(\rm I)}=0$. Thus, the eavesdropping rate in the passive eavesdropping mode is defined as:
\begin{align}
R_{\rm eav}^{(\rm I)} = \left\{\begin{array}{ll}
\min \left(r_{\rm R},r_{\rm B}\right), & {\rm{if}}\ r_{\rm M}^{({\rm I})} \geq \min \left(r_{\rm R},r_{\rm B}\right),\\
0, & {\rm otherwise.}
\end{array} \right.
\label{equa:pass:4}
\end{align}
Note that $R_{\rm eav}^{(\rm I)}$ is constant under fixed transmit power $P_{\rm A}$ at Alice and $P_{\rm R}$ at the relay.\vspace{-0.5em}
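For concreteness, the rates entering this definition are straightforward to evaluate numerically. The Python sketch below is a plain illustration of Equations~\eqref{equa:jnl:3}--\eqref{equa:pass:4} (it is reused by the later snippets); channel coefficients, powers, and the noise variance are treated as plain numbers.
\begin{verbatim}
import numpy as np

def rate(snr):
    # the factor 1/2 accounts for the two-hop (half time-slot) protocol
    return 0.5 * np.log2(1.0 + snr)

def eaves_rate_mode1(h_AR, h_RB, h_AM, h_RM, P_A, P_R, sigma2):
    """Mode (I): passive eavesdropping with MRC over both hops."""
    r_R = rate(abs(h_AR)**2 * P_A / sigma2)
    r_B = rate(abs(h_RB)**2 * P_R / sigma2)
    r_M = rate((P_A * abs(h_AM)**2 + P_R * abs(h_RM)**2) / sigma2)
    r_sus = min(r_R, r_B)          # end-to-end suspicious rate
    return r_sus if r_M >= r_sus else 0.0
\end{verbatim}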
\subsection{Mode (II): Proactive Eavesdropping via Noise Jamming over the First Hop}
In mode (II), as shown in Fig.~\ref{fig:subfig:b}, the monitor jams the relay receiver via AN in the first hop to reduce the suspicious communication rate for facilitating the eavesdropping from the relay transmitter in the second hop. In the first hop, let $x_1 \sim \mathcal{CN}(0,1)$ and $Q_1$ denote the jamming signal (AN) and its power at the monitor, respectively, where the subscripts of $x_1$ and $Q_1$ indicate the first hop. In this jamming case, the received signal at the relay in the first hop is denoted as $\tilde y_{\rm R}=\sqrt{P_{\rm A}}h_{\rm AR}s+\sqrt{Q_1}h_{\rm MR}x_1+n_{1}$. Accordingly, the SINR at the relay reduces to ${\tilde\gamma}_{\rm R}(Q_1)=\frac{|h_{\rm AR}|^2P_{\rm A}}{|h_{\rm MR}|^2Q_1+\sigma^{2}}$ and the achievable rate is
\begin{align}
\tilde r_{\rm R}(Q_1)=\frac{1}{2}\log_{2}\left(1+\frac{|h_{\rm AR}|^2P_{\rm A}}{|h_{\rm MR}|^2Q_{1}+\sigma^{2}}\right).
\label{equa:noise:1}
\end{align}
In the second hop, similarly as in mode (I), the achievable rate at Bob is equal to $r_{\rm B}$ in (\ref{equa:jnl:4}). Accordingly, the end-to-end suspicious communication rate is $\min \left(\tilde r_{\rm R}(Q_1), r_{\rm B}\right)$.
At the half-duplex monitor, as it can only eavesdrop the suspicious message from the relay transmitter in the second hop, the achievable rate at the monitor is given as
\begin{align}
\tilde r_{\rm M}^{(\rm II)}=\frac{1}{2}\log_{2}\left(1+\frac{|h_{\rm RM}|^{2}P_{\rm R}}{{\sigma}^{2}}\right).
\label{equa:noise:2}
\end{align}
Similarly to (\ref{equa:pass:4}), given the jamming power $Q_1$, the eavesdropping rate at the monitor is defined as
\begin{align*}
\tilde R_{\rm eav}^{(\rm II)}{(Q_1)} = \left\{\begin{array}{ll}
\min \left(\tilde r_{\rm R}(Q_1), r_{\rm B}\right), & {\rm{if}}\ \tilde r_{\rm M}^{(\rm II)}\geq \min \left(\tilde r_{\rm R}(Q_1), r_{\rm B}\right), \\
0, & {\rm otherwise.}
\end{array} \right.
\end{align*}
In practice, the monitor should adjust the jamming power $Q_1$ to maximize the eavesdropping rate $\tilde R_{\rm eav}^{(\rm II)}{(Q_1)}$. Let $Q_{\rm max}$ denote the maximum jamming power at the monitor. The maximum eavesdropping rate in this mode is given as
\begin{align}
R_{\rm eav}^{(\rm II)} \triangleq
\mathop{{\max}}\limits_{0\le Q_1 \le Q_{\rm max}} ~~\tilde R_{\rm eav}^{(\rm II)}(Q_1).
\label{equa:noise:3}
\end{align}
Note that jamming at the maximum transmit power with $Q_1 = Q_{\rm max}$ is generally not optimal for problem (\ref{equa:noise:3}), since this may reduce the suspicious communication rate $\min \left(\tilde r_{\rm R}(Q_1), r_{\rm B}\right)$ too much and lead to over-reduced eavesdropping rate.
\vspace{-0.5em}
\subsection{Mode (III): Proactive Eavesdropping via Hybrid Jamming over the Second Hop}
In mode (III), as shown in Fig.~\ref{fig:subfig:c}, the monitor uses hybrid jamming in the second hop to reduce the end-to-end communication rate and thereby facilitate eavesdropping in the first hop.{\footnote{Note that in the second hop here, we can also use noise jamming, which, however, corresponds to a special case of the hybrid jamming and thus is not considered separately.}} As for the suspicious communication, the achievable rate at the relay in the first hop is equal to $r_{\rm R}$ in (\ref{equa:jnl:3}) in mode (I). In the second hop, following the amplify-and-forward principle, the monitor designs the hybrid jamming signal as $\hat\alpha y_{\rm M1}+x_2$, where $\hat\alpha$ denotes the amplifying coefficient for the received signal $y_{\rm M1}=\sqrt{P_{\rm A}}h_{\rm AM}s+n_3$ in the first hop, and $x_2 \sim \mathcal{CN}(0,Q_2)$ denotes the AN in the second hop, where the subscripts of $x_2$ and $Q_2$ indicate the second hop. The received signal at Bob is denoted as
\begin{align*}
&\hat y_{\rm B}=\sqrt{P_{\rm R}}h_{\rm RB}s+h_{\rm MB}({\hat\alpha y_{\rm M1}+x_2})+n_2 \\
&=(\sqrt{P_{\rm R}}h_{\rm RB}+\hat\alpha \sqrt{P_{\rm A}}h_{\rm MB}h_{\rm AM})s+h_{\rm MB}x_2+ \hat\alpha h_{\rm MB}n_3+n_2.
\end{align*}
In order to jam the Bob receiver most efficiently, the monitor designs $\hat\alpha$ as $\hat{\alpha} = - \frac{h_{\rm RB} h_{\rm AM}^\dagger h_{\rm MB}^\dagger}{|h_{\rm RB} h_{\rm AM}^\dagger h_{\rm MB}^\dagger|} {{\alpha}}$, where the superscript $\dagger$ denotes the complex conjugate operation, and ${\alpha} \ge 0$ denotes the magnitude of $\hat{\alpha}$. This design makes the forwarded message $\hat\alpha \sqrt{P_{\rm A}}h_{\rm MB}h_{\rm AM}s$ from the monitor be destructively combined with $\sqrt{P_{\rm R}}h_{\rm RB}s$ from the relay at the Bob receiver, thus maximally reducing its SINR and achievable rate, which are respectively given as
\begin{align}
{\hat{\gamma}}_{\rm B}(\alpha, Q_2)&=\frac{|\sqrt{P_{\rm R}}h_{\rm RB}-\alpha\sqrt{P_{\rm A}}h_{\rm AM}h_{\rm MB}|^2}{|h_{\rm MB}|^2Q_2+{\alpha}^2|h_{\rm MB}|^2{\sigma}^2+{\sigma}^{2}},\nonumber\\
\hat r_{\rm B}(\alpha, Q_2)&=\frac{1}{2}\log_{2}\left(1+\frac{|\sqrt{P_{\rm R}}h_{\rm RB}-\alpha\sqrt{P_{\rm A}}h_{\rm AM}h_{\rm MB}|^2}{|h_{\rm MB}|^2Q_2+{\alpha}^2|h_{\rm MB}|^2{\sigma}^2+{\sigma}^{2}}\right).
\label{equa:combined:2}
\end{align}
By combining (\ref{equa:jnl:3}) and (\ref{equa:combined:2}), the end-to-end suspicious communication rate is $\min \left(r_{\rm R}, \hat r_{\rm B}(\alpha, Q_2)\right)$.
The half-duplex monitor can only overhear the suspicious message in the first hop. In this case, the received SNR at the monitor is $\hat\gamma_{\rm M}^{(\rm III)}=\frac{|h_{\rm AM}|^{2}P_{\rm A}}{{\sigma}^{2}}$, and the achievable rate is $\hat r_{\rm M}^{(\rm III)}=\frac{1}{2}\log_{2}\left(1+\frac{|h_{\rm AM}|^{2}P_{\rm A}}{{\sigma}^{2}}\right)$. Similarly to (\ref{equa:pass:4}) and under given $\alpha$ and $Q_2$, the eavesdropping rate is expressed as
\begin{align*}
&\hat R_{\rm eav}^{(\rm III)}{(\alpha,Q_2)} \nonumber\\
= & \left\{\begin{array}{ll}
\min \left(r_{\rm R},\hat r_{\rm B}(\alpha, Q_2)\right), &{\rm{if}}~ \hat r_{\rm M}^{(\rm III)}\geq \min \left(r_{\rm R},\hat r_{\rm B}(\alpha, Q_2)\right) \\
0, &{\rm otherwise.}
\end{array} \right.
\end{align*}
The monitor should jointly adjust $\alpha$ and $Q_2$ to maximize the eavesdropping rate $\hat R_{\rm eav}^{(\rm III)}{(\alpha,Q_2)}$. Note that the jamming power at the monitor is $\mathbb{E}(|\alpha y_{\rm M1}+x_2|^2) = {\alpha}^2P_{\rm A}|h_{\rm AM}|^2+{\alpha}^2\sigma^2+Q_2$, which cannot exceed the maximum value $Q_{\rm max}$. Here, $\mathbb{E}(\cdot)$ denotes the statistical expectation. Under this power constraint, the maximum eavesdropping rate in this mode is given as
\begin{align}
R_{\rm eav}^{(\rm III)} \triangleq &\mathop{{\max}}\limits_{\alpha\ge 0, Q_2\ge 0}~ \hat R_{\rm eav}^{(\rm III)}(\alpha, Q_2)\label{pro:15} \\
{\mathrm{s.t.}} ~& {\alpha}^2P_{\rm A}|h_{\rm AM}|^2+{\alpha}^2\sigma^2+Q_2 \le Q_{\rm max}.
\label{equa:combined:4}
\end{align}
Problem (\ref{pro:15}) will be solved later by determining $\alpha$ and $Q_2$ to balance message forwarding, which reduces the received signal strength in the SINR numerator, against the AN, which increases the interference power in the SINR denominator.
\vspace{-0.5em}
\section{Joint Eavesdropping-Mode Selection and Jamming Power Allocation}
\vspace{-0em}
In this section, we first obtain the maximum eavesdropping rate at the monitor under each individual eavesdropping mode, and then select the best one among them. As $R_{\rm eav}^{(\rm I)}$ for mode~(I) is a constant term, we only need to find $R_{\rm eav}^{(\rm II)}$ and $R_{\rm eav}^{(\rm III)}$ for modes (II) and (III) by solving problems (\ref{equa:noise:3}) and (\ref{pro:15}), respectively.\vspace{-0em}
\subsection{Optimal Noise Jamming Power for Mode (II)}\vspace{-0em}
First, we solve problem (\ref{equa:noise:3}) to obtain $R_{\rm eav}^{(\rm II)}$ in mode (II). In the case when the achievable rate $r_{\rm R}$ at the relay is larger than $\tilde r_{\rm M}^{(\rm II)}$ at the monitor, we define the jamming power
\begin{align}
\tilde Q_1=\max\left(\frac{(|h_{\rm AR}|^2P_{\rm A}-|h_{\rm RM}|^2P_{\rm R}){\sigma}^2}{|h_{\rm MR}|^2|h_{\rm RM}|^2P_{\rm R}},0\right)
\label{equa:opnoise:1}
\end{align}
such that $\tilde r_{\rm R}(\tilde Q_1)$ at the relay is reduced to be equal to $\tilde r_{\rm M}^{(\rm II)}$ in (\ref{equa:noise:2}). We can then easily obtain the optimal solution to problem (\ref{equa:noise:3}) in the following proposition.
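As a quick check before stating it, note that $\tilde Q_1$ in \eqref{equa:opnoise:1} follows from equating the relay's SINR in \eqref{equa:noise:1} with the monitor's SNR in \eqref{equa:noise:2}:
\begin{align*}
\frac{|h_{\rm AR}|^2P_{\rm A}}{|h_{\rm MR}|^2\tilde Q_1+\sigma^{2}} = \frac{|h_{\rm RM}|^{2}P_{\rm R}}{{\sigma}^{2}}
\quad\Longrightarrow\quad
\tilde Q_1=\frac{(|h_{\rm AR}|^2P_{\rm A}-|h_{\rm RM}|^2P_{\rm R})\,{\sigma}^2}{|h_{\rm MR}|^2|h_{\rm RM}|^2P_{\rm R}},
\end{align*}
and the outer $\max(\cdot,0)$ in \eqref{equa:opnoise:1} merely rules out a negative jamming power when $r_{\rm R}\leq \tilde r_{\rm M}^{(\rm II)}$ already holds without jamming.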
\begin{proposition}\label{proposition:3.1}
The optimal noise jamming power to problem (\ref{equa:noise:3}) is given as
\begin{align*}
Q_1^\star = \left\{\begin{array}{ll}
\tilde Q_1, &{\rm if}~ \tilde r_{\rm M}^{(\rm II)}< \min \left(r_{\rm R}, r_{\rm B}\right)~{\rm and} ~ Q_{\rm max}\ge \tilde Q_1,\\
0, &{\rm otherwise,}
\end{array}\right.
\end{align*}
where $\tilde Q_1$ is given in (\ref{equa:opnoise:1}). The corresponding maximum eavesdropping rate is
\begin{align*}
&R_{\rm eav}^{(\rm II)} = \nonumber\\
&\left\{\begin{array}{ll}
\min \left(r_{\rm R},r_{\rm B}\right), &{\rm{if}}\ \tilde r_{\rm M}^{(\rm II)}\geq \min \left(r_{\rm R}, r_{\rm B}\right), \\ \tilde r_{\rm M}^{(\rm II)}, &{\rm{if}}\ \tilde r_{\rm M}^{(\rm II)}< \min \left(r_{\rm R}, r_{\rm B}\right)~{\rm and}~ Q_{\rm max} \geq \tilde Q_1,
\\ 0, &\rm otherwise.
\end{array} \right.
\end{align*}
\end{proposition}
This proposition can be intuitively explained by considering two cases. First, when $\tilde r_{\rm M}^{(\rm II)}\geq \min \left(r_{\rm R}, r_{\rm B}\right)$, the monitor can successfully eavesdrop the suspicious message in the second hop even without any jamming, and thus we have $Q_1^\star = 0$ and $R_{\rm eav}^{(\rm II)}= \min \left(r_{\rm R},r_{\rm B}\right)$. Next, when $\tilde r_{\rm M}^{(\rm II)}< \min \left(r_{\rm R}, r_{\rm B}\right)$, a minimum jamming power $\tilde Q_1$ is required for successful eavesdropping. In this case, if $\tilde Q_1 \le Q_{\rm max}$, we have $Q_1^\star =\tilde Q_1$ and $R_{\rm eav}^{(\rm II)} = \tilde r_{\rm M}^{(\rm II)}$; otherwise, we have $Q_1^\star =0$ and the eavesdropping is unsuccessful with $R_{\rm eav}^{(\rm II)} = 0$.
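Proposition~\ref{proposition:3.1} translates directly into a few lines of code; the sketch below reuses \texttt{rate()} from the mode~(I) snippet and returns the optimal jamming power together with the resulting eavesdropping rate.
\begin{verbatim}
def eaves_rate_mode2(h_AR, h_RB, h_MR, h_RM, P_A, P_R, sigma2, Q_max):
    """Mode (II): optimal noise-jamming power (Proposition 1)."""
    r_R = rate(abs(h_AR)**2 * P_A / sigma2)
    r_B = rate(abs(h_RB)**2 * P_R / sigma2)
    r_M = rate(abs(h_RM)**2 * P_R / sigma2)
    if r_M >= min(r_R, r_B):       # eavesdrops without any jamming
        return 0.0, min(r_R, r_B)
    Q1 = max((abs(h_AR)**2 * P_A - abs(h_RM)**2 * P_R) * sigma2
             / (abs(h_MR)**2 * abs(h_RM)**2 * P_R), 0.0)
    if Q1 <= Q_max:                # minimum power equalising the rates
        return Q1, r_M
    return 0.0, 0.0                # jamming budget is insufficient
\end{verbatim}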
\subsection{Optimal Hybrid Jamming Design for Mode (III)}
Next, we solve problem (\ref{pro:15}) to obtain $R_{\rm eav}^{(\rm III)}$ in mode (III). To facilitate the derivation, we first obtain the minimum achievable rate (\ref{equa:combined:2}) at Bob under the hybrid jamming, by jointly optimizing $\alpha$ and $Q_2$, i.e.,
\begin{align}
\hat{r}_{\rm B}^{\rm min} = \min\limits_{\alpha, Q_2\ge 0} &\hat{r}_{\rm B}(\alpha, Q_2)\label{equa:opcombined:1:min} \\
{\mathrm{s.t.}}~ &(\ref{equa:combined:4}). \nonumber
\end{align}
Then we have the following lemma.
\begin{lemma}\label{lemma:1}
The optimal solution to problem (\ref{equa:opcombined:1:min}) is given as\vspace{-1em}
\begin{small}\begin{align}
\overline{\alpha}=\min\bigg(\sqrt{\frac{Q_{\rm max}}{P_{\rm A}|h_{\rm AM}|^2+\sigma^2}},
\frac{\sqrt{P_{\rm R}}|h_{\rm RB}|}{\sqrt{P_{\rm A}}|h_{\rm AM}||h_{\rm MB}|}\bigg),\label{eqn:underline:alpha:hat}
\end{align}\end{small}
\begin{align}
\overline{Q}_2&=Q_{\rm max}-(P_{\rm A}|h_{\rm AM}|^2+\sigma^2)\overline{{\alpha}}^2. \label{eqn:Q_min}
\end{align}
The minimum achievable rate at Bob is $\hat{r}_{\rm B}^{\rm min} = \hat{r}_{\rm B}(\overline{\alpha}, \overline{Q}_2)$.
\end{lemma}
\begin{proof}
See the Appendix.
\end{proof}
Based on Lemma \ref{lemma:1}, it follows that if $\hat{r}_{\rm B}^{\rm min} \le \hat r_{\rm M}^{(\rm III)}$, then the monitor is able to jam the Bob receiver for successful eavesdropping. In particular, let $\underline{\alpha}$ and $\underline{Q}_2$ denote the amplifying coefficient and the AN power such that the achievable rate $\hat{r}_{\rm B}(\underline{\alpha}, \underline{ Q}_2)$ at Bob is reduced to be equal to $\hat r_{\rm M}^{(\rm III)}$, i.e., $\hat{r}_{\rm B}(\underline{\alpha}, \underline{ Q}_2) = \hat r_{\rm M}^{(\rm III)}$. Here, $\underline{\alpha}$ and $\underline{ Q}_2$ are generally non-unique, and can be obtained numerically. We then have the following proposition.
\begin{proposition}\label{proposition:3.2}
The optimal solution $\alpha^\star$ and $Q_2^\star$ to problem (\ref{pro:15}) and the achieved maximum eavesdropping rate $R^{(\rm III)}_{\rm eav}$ are given as follows:
\begin{itemize}
\item[1)] When $\hat r_{\rm M}^{(\rm III)}\geq \min \left(r_{\rm R}, r_{\rm B}\right)$, the monitor can eavesdrop the suspicious message in the first hop without jamming; in this case, the monitor should eavesdrop passively with $\alpha^\star = 0$, $Q_2^\star = 0$, and $R^{(\rm III)}_{\rm eav} = \min \left(r_{\rm R}, r_{\rm B}\right)$.
\item[2)] When $\hat r_{\rm M}^{(\rm III)} < \min \left(r_{\rm R}, r_{\rm B}\right)$ and $\hat{r}_{\rm B}^{\rm min} \le \hat r_{\rm M}^{(\rm III)}$, the monitor can eavesdrop successfully with hybrid jamming. In this case, the monitor should choose $\alpha^\star = \underline{\alpha}$ and $Q_2^\star = \underline{ Q}_2$, and we have $R^{(\rm III)}_{\rm eav} = \hat r_{\rm M}^{(\rm III)}$;
\item[3)] Otherwise, the monitor cannot eavesdrop even with jamming at the maximum power; in this case, we have $\alpha^\star = 0$, $Q_2^\star = 0$, and $R^{(\rm III)}_{\rm eav} = 0$.
\end{itemize}
\end{proposition}
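Proposition~\ref{proposition:3.2} can be implemented with a one-dimensional bisection: the minimum of $\hat{r}_{\rm B}$ achievable under a budget $t\,Q_{\rm max}$ (Lemma~\ref{lemma:1}) is non-increasing in $t$, so a pair $(\underline{\alpha},\underline{Q}_2)$ achieving $\hat{r}_{\rm B}=\hat r_{\rm M}^{(\rm III)}$ can be bracketed over $t\in[0,1]$. The sketch below works with channel magnitudes only, since the phase of $\hat\alpha$ is already aligned for destructive combining.
\begin{verbatim}
def r_B_hat(a, Q2, h_RB, h_AM, h_MB, P_A, P_R, sigma2):
    # Bob's rate under hybrid jamming, phases already aligned
    num = (np.sqrt(P_R)*abs(h_RB) - a*np.sqrt(P_A)*abs(h_AM)*abs(h_MB))**2
    den = abs(h_MB)**2 * Q2 + a**2 * abs(h_MB)**2 * sigma2 + sigma2
    return rate(num / den)

def eaves_rate_mode3(h_AR, h_RB, h_AM, h_MB, P_A, P_R, sigma2, Q_max):
    """Mode (III): hybrid jamming (Proposition 2); returns (a, Q2, R)."""
    r_R = rate(abs(h_AR)**2 * P_A / sigma2)
    r_B = rate(abs(h_RB)**2 * P_R / sigma2)
    r_M = rate(abs(h_AM)**2 * P_A / sigma2)
    if r_M >= min(r_R, r_B):             # case 1: passive suffices
        return 0.0, 0.0, min(r_R, r_B)

    def best(budget):                    # Lemma 1 under this budget
        a = min(np.sqrt(budget / (P_A*abs(h_AM)**2 + sigma2)),
                np.sqrt(P_R/P_A) * abs(h_RB) / (abs(h_AM)*abs(h_MB)))
        Q2 = budget - (P_A*abs(h_AM)**2 + sigma2) * a**2
        return a, Q2, r_B_hat(a, Q2, h_RB, h_AM, h_MB, P_A, P_R, sigma2)

    if best(Q_max)[2] > r_M:             # case 3: budget insufficient
        return 0.0, 0.0, 0.0
    lo, hi = 0.0, 1.0                    # case 2: bisect on the budget
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if best(mid*Q_max)[2] > r_M else (lo, mid)
    a, Q2, _ = best(hi * Q_max)
    return a, Q2, r_M
\end{verbatim}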
\subsection{Eavesdropping Mode Selection}\label{sec:III:C}
After deriving the achievable eavesdropping rates $R_{\rm eav}^{(i)}$'s for the three modes, we next employ the eavesdropping mode selection to choose the best mode with the highest eavesdropping rate, i.e., the selected mode is
\begin{align}
i^\star = \arg \max_{i\in\{{\rm I},{\rm II},{\rm III}\}} R_{\rm eav}^{(i)}.
\end{align}
To provide more engineering insight, we now give an intuitive discussion of the selected mode by considering two general cases under fixed channels with only path loss considered. First, consider that the monitor is near Alice or the relay, such that the achievable rate $r_{\rm M}^{(\rm I)}$ with MRC at the monitor is no smaller than the end-to-end suspicious communication rate $\min(r_{\rm R},r_{\rm B})$, i.e., $r_{\rm M}^{(\rm I)} \ge \min(r_{\rm R},r_{\rm B})$. In this case, passive eavesdropping succeeds, and thus is preferred.
Next, when the monitor is far away from both Alice and the relay such that $r_{\rm M}^{(\rm I)}< \min \left(r_{\rm R},r_{\rm B}\right)$, passive eavesdropping is infeasible, and proactive eavesdropping is necessary. In this case, if the monitor is closer to Alice than to the relay, mode (III) performs better than mode (II) by overhearing from the nearer node Alice more clearly, and vice versa. When the monitor is too far away from both Alice and the relay but relatively close to Bob, mode (III) is the only feasible eavesdropping mode, as it can still jam Bob effectively (see Fig.~\ref{fig:2} in Section IV). A compact implementation of the overall selection rule is sketched below.
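Concretely, the selection rule amounts to one comparison on top of the three snippets above:
\begin{verbatim}
def best_mode(h, P_A, P_R, sigma2, Q_max):
    """Pick the eavesdropping mode with the highest rate (Sect. III-C).
    h is a dict of channel coefficients keyed by 'AR', 'RB', etc."""
    R1 = eaves_rate_mode1(h['AR'], h['RB'], h['AM'], h['RM'],
                          P_A, P_R, sigma2)
    _, R2 = eaves_rate_mode2(h['AR'], h['RB'], h['MR'], h['RM'],
                             P_A, P_R, sigma2, Q_max)
    *_, R3 = eaves_rate_mode3(h['AR'], h['RB'], h['AM'], h['MB'],
                              P_A, P_R, sigma2, Q_max)
    rates = {'I': R1, 'II': R2, 'III': R3}
    best = max(rates, key=rates.get)
    return best, rates[best]
\end{verbatim}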
\section{Numerical Results}
In this section, we provide numerical results to validate the performance of our proposed design. In the simulation, suppose that Alice, the relay, and Bob are located on a line with x-y coordinates (0,~0), (500~meters,~0), and (1000~meters,~0), respectively. We set the transmit powers at Alice and the relay as $P_{\rm A}=P_{\rm R}=40$~dBm, the maximum jamming power at the monitor as $Q_{\rm max} = 50$~dBm, and the noise power as ${\sigma}^{2}=-80$~dBm, respectively.
Fig. \ref{fig:2} shows the selection region for different eavesdropping modes under AWGN channels. Here, we set the channel power gains based on the path-loss model $\kappa\left(\frac{d}{d_0}\right)^{-\zeta}$, where $\kappa=-60$~dB corresponds to the path-loss at the reference distance $d_0=10$~meters, and $\zeta=3$. It is observed that the selected eavesdropping modes under different scenarios are consistent with our discussion in Section~\ref{sec:III:C}.
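As an illustration of how such a map can be regenerated from the snippets above, the setup reduces to a path-loss helper plus unit conversions; the monitor position below is an arbitrary example.
\begin{verbatim}
def gain(p, q, kappa_dB=-60.0, d0=10.0, zeta=3.0):
    # path-loss power gain kappa * (d/d0)^(-zeta) between positions p, q
    d = np.hypot(p[0] - q[0], p[1] - q[1])
    return 10.0**(kappa_dB / 10.0) * (d / d0)**(-zeta)

dbm = lambda x: 10.0**((x - 30.0) / 10.0)   # dBm -> watts
A, R, B, M = (0, 0), (500, 0), (1000, 0), (250, 300)  # example monitor
h = {'AR': np.sqrt(gain(A, R)), 'RB': np.sqrt(gain(R, B)),
     'AM': np.sqrt(gain(A, M)), 'RM': np.sqrt(gain(R, M)),
     'MR': np.sqrt(gain(M, R)), 'MB': np.sqrt(gain(M, B))}
print(best_mode(h, dbm(40), dbm(40), dbm(-80), dbm(50)))
\end{verbatim}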
\begin{figure}
\centering
\epsfxsize=1\linewidth
\includegraphics[width=6.8cm]{picture01.eps}
\caption{Optimal eavesdropping modes of the monitor at different locations in AWGN channel, where only the locations with positive eavesdropping rates are shown.} \label{fig:2}
\vspace{-1em}
\end{figure}
\begin{figure}
\centering
\epsfxsize=1\linewidth
\includegraphics[width=6.8cm]{picture02.eps}
\caption{The average eavesdropping rate versus the monitor's horizontal location in fading channels.} \label{fig:3}
\vspace{-2em}
\end{figure}
Fig.~\ref{fig:3} shows the average eavesdropping rate versus the x-axis coordinate of the monitor's location in the Rayleigh fading channel case, where the results are averaged over $10^4$ random realizations and the monitor's y-axis coordinate is fixed at 500 meters. Thanks to the averaging over random channel realizations, our proposed design with optimal eavesdropping mode selection achieves a significantly improved average eavesdropping rate compared to each individual eavesdropping mode.\vspace{-0.5em}
\section{Conclusion}
This paper studied the wireless surveillance of a two-hop suspicious communication link via a half-duplex legitimate monitor. By exploring the suspicious link's two-hop nature, the monitor can either combine two copies of the suspicious message from both hops to improve the passive eavesdropping performance, or implement noise jamming or hybrid jamming for efficient proactive eavesdropping. We proposed joint eavesdropping mode selection and jamming power allocation to maximize the eavesdropping rate at the monitor. We hope that this new design can provide insights into wireless surveillance designs that take advantage of multi-hop suspicious communication systems.\vspace{-0.5em}
\section{Acknowledgements}
This work is supported in part by Science and Technology Innovation 2030 - ``New Generation Artificial Intelligence'' Major Project (No. 2018AAA0100904) and National Natural Science Foundation of China (62176135).
\section{Algorithm Details}
In this section, we describe some details of the algorithms mentioned in Section~\ref{algo_meta}.
\label{algo_describ}
\subsection{Pessimistic Value Iteration}
\label{algo_describ_pvi}
In linear MDPs, we can construct $\hat\BB_\gamma\hat{V}$ and $\Gamma$ based on $\cD$ as follows, where $\hat\BB_\gamma\hat{V}$ is the empirical estimate of $\BB_\gamma\hat{V}$. For a given dataset $\cD=\{(s_\tau,a_\tau,r_\tau)\}_{\tau=1}^{N}$, we define the empirical mean squared Bellman error (MSBE) as
\begin{equation*}
M(w) = \sum_{\tau=1}^N \bigl(r_\tau + \gamma \widehat{V}(s_{\tau+1}) - \phi (s_\tau,a_\tau)^\top w\bigr)^2 + \lambda \norm{w}_2^2.
\end{equation*}
Here $\lambda>0$ is the regularization parameter. The minimizer $\hat{w}$ of $M(w)$ has the closed form
\#\label{eq:w18}
&\hat{w} = \Lambda ^{-1} \Bigl( \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \bigl(r_\tau + \gamma\hat{V}(s_{\tau+1})\bigr) \Bigr) , \notag\\
&\text{where~~} \Lambda = \lambda I+\sum_{\tau=1}^N \phi(s_\tau,a_\tau) \phi(s_\tau,a_\tau) ^\top.
\#
Then we simply let
\#
\label{eq:empirical_bellman}
\hat\BB_\gamma\hat{V}=\langle\phi,\hat w \rangle .
\#
Meanwhile, we construct $\Gamma$ based on $\cD$ as
\#\label{eq:linear_uncertainty_quantifier}
\Gamma(s, a) = \beta\cdot \big( \phi(s, a)^\top \Lambda ^{-1} \phi(s, a) \big)^{1/2}.
\#
Here $\beta>0$ is the scaling parameter.
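For concreteness, the construction above can be condensed into a short numerical sketch. The code below is a minimal illustration for a finite action set; it assumes the dataset features and the next-state features for every action are precomputed, and it truncates the value estimate to $[0,V_{\text{max}}]$, which we adopt here as a standard stabilisation.
\begin{verbatim}
import numpy as np

def pevi(phi, r, phi_next, gamma, beta, vmax, lam=1.0, iters=500):
    """Pessimistic value iteration in a linear MDP (sketch).
    phi: (N, d) features of dataset state-actions; r: (N,) rewards;
    phi_next: (N, A, d) features of (s_{tau+1}, a) for each action."""
    N, d = phi.shape
    Lam = lam * np.eye(d) + phi.T @ phi        # Lambda as defined above
    Linv = np.linalg.inv(Lam)
    # penalty Gamma(s,a) = beta * sqrt(phi^T Lambda^{-1} phi)
    pen = beta * np.sqrt(np.einsum('nad,de,nae->na',
                                   phi_next, Linv, phi_next))
    w = np.zeros(d)
    for _ in range(iters):
        Q = phi_next @ w - pen                 # pessimistic Q at s'
        V = np.clip(Q.max(axis=1), 0.0, vmax)  # V_hat(s_{tau+1})
        w = Linv @ (phi.T @ (r + gamma * V))   # closed-form w_hat
    return w, Linv
\end{verbatim}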
\subsection{Model-based Pessimistic Policy Optimization}
\label{algo_describ_mpo}
To give a proper performance bound, we consider the following model set
\begin{equation}
\label{MLE_model_ellipsoid}
\cM_{\cD}=\left\{P(\cdot|s,a) \in \cM \biggiven \EE_{\cD}\left[\mathrm{D}_\mathrm{TV} (\widehat{P}(\cdot | s,a), P(\cdot | s,a))^2\right] \leq \xi\right\},
\end{equation}
where $\widehat{P}=\argmax_{P}\EE_{\cD}[\ln P(s'\mid s,a)]$ and $\cM$ is the set of linear models. In practice, we can parameterize the model $P_\theta(\cdot|s,a)$ and train it by maximizing the log-likelihood
\begin{equation}
\label{MLE_loss_ellipsoid}
\cL(\theta,\cD)=\frac{1}{N}\sum_{\tau=1}^N \ln P_\theta (s_{\tau+1}|s_\tau,a_\tau).
\end{equation}
When the transitions are assumed to be Gaussian with fixed covariance, maximizing the MLE objective is equivalent to minimizing the following prediction loss
\begin{equation}
\label{prediction_loss_ellipsoid}
\cL(\theta,\cD)=\frac{1}{N}\sum_{\tau=1}^N \| f_\theta(s_\tau,a_\tau) - s_{\tau+1}\|_2.
\end{equation}
As for the minimax optimization in \eqref{alg:2_1}, we can use techniques like bi-level optimization~\citep{hong2020two} to obtain an approximate solution.
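For the linear-Gaussian special case, the model fit itself is a least-squares problem. The following sketch (our illustration, not part of the algorithm's specification) fits $f_\theta(s,a)=\Theta^\top\phi(s,a)$ by ridge regression and reports the empirical prediction loss of \eqref{prediction_loss_ellipsoid}:
\begin{verbatim}
import numpy as np

def fit_linear_model(phi, s_next, lam=1e-6):
    """Fit s' ~ N(Theta^T phi(s,a), I) by (ridge) least squares.
    phi: (N, d) features; s_next: (N, dim_s) next states."""
    d = phi.shape[1]
    Theta = np.linalg.solve(phi.T @ phi + lam * np.eye(d),
                            phi.T @ s_next)
    resid = np.linalg.norm(phi @ Theta - s_next, axis=1)
    return Theta, resid.mean()  # model and empirical prediction loss
\end{verbatim}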
\section{Addtional Lemmas and Missing Proofs}
\subsection{Proof of Lemma~\ref{lemma:2}}
\label{proof_lemma_1}
\begin{proof}
For a sufficiently large $\lambda$, it is easy to see that the operator $(\mathcal{T}\hat V)(s) \coloneqq \max_a (\hat \BB_\gamma \hat{V}-\Gamma)(s,a)$ is a contraction. Without loss of generality, we assume $\lambda =1$. Then Algorithm~\ref{alg:1} converges and we have
\begin{align*}
&\hat{V}(\cdot) ~~= \max_a {\hat{Q}(\cdot,a)}, \\
&\hat{Q}(\cdot,\cdot) = \hat \BB_\gamma \hat{V}-\Gamma(\cdot,\cdot).
\end{align*}
From the definition of $\delta(\cdot,\cdot)$, we have
\begin{align}
\delta(s,a)=(\BB_\gamma \hat{V})(s,a) - \hat{Q}(s,a) = (\BB_\gamma \hat{V})(s,a) - (\hat \BB_\gamma \hat{V})(s,a)+\Gamma(s,a).
\end{align}
Under the condition of Lemma~\ref{lemma:xi_quantifier}, it holds that
\begin{align}
0\leq\delta(s,a)\leq 2\Gamma(s,a) \quad \text{for all}~(s,a). \label{gamma_inequality}
\end{align}
From Lemma~\ref{lemma:subopt_decompose}, we have
\begin{align}
\text{SubOpt}\big(\widehat{\pi},s;\gamma \big) =& - \EE_{\hat{\pi}}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right] + \EE_{\pi^*}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right]\nonumber\\
&+ \EE_{\pi^*}\left[\sum_{t=0}^\infty{\gamma^t \innerprod{
\hat{Q}(s_t,\cdot),\pi^*(\cdot|s_t)-\hat{\pi}(\cdot|s_t)
}}\Biggiven s_0=s\right]\nonumber\\
\leq & - \EE_{\hat{\pi}}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right] + \EE_{\pi^*}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right]\nonumber\\
\leq& 2 \EE_{\pi^*}\Bigl[\sum_{t=0}^\infty \gamma^t \Gamma(s_t,a_t) \Biggiven s_0=s\Bigr]\nonumber\\
= & 2 \beta \EE_{\pi^*}\Bigl[\sum_{t=0}^\infty \gamma^t \bigl(\phi(s_t,a_t)^\top \Lambda^{-1}\phi(s_t,a_t)\bigr)^{1/2} \Biggiven s_0=s\Bigr].
\end{align}
Here the first inequality follows from the fact that $\hat{\pi}(\cdot|s) = \argmax_\pi \innerprod{\hat{Q}(s,\cdot),\pi(\cdot|s)}$ and the second inequality follows from Equation~\eqref{gamma_inequality}.
Then the following event
\begin{align}
\label{eq:def_ce}
\cE = \bigg\{\text{SubOpt}\big(\widehat{\pi},s;\gamma \big) \leq 2 \beta \EE_{\pi^*}\Bigl[\sum_{t=0}^\infty \gamma^t \bigl(\phi(s_t,a_t)^\top \Lambda^{-1}\phi(s_t,a_t)\bigr)^{1/2} \Biggiven s_0=s\Bigr]\text{ for all }s\in \cS\bigg\}
\end{align}
holds with probability $1-\xi/2$. From the assumption in Equation~\eqref{eq:event_opt_explore}, the following event
\begin{align*}
\cE^\dagger = \bigg\{c^\dagger \cdot \frac{1}{N}\sum_{\tau=1}^N{\phi(s_\tau,a_\tau)\phi(s_\tau,a_\tau)^\top}\succeq \EE_{\pi^*}\bigl[\phi(s_t,a_t)\phi(s_t,a_t)^\top\biggiven s_0=s\bigr] ~ \text{for all }s\in \cS\bigg\}
\end{align*}
also holds with probability $1-\xi/2$. Then from the union bound, the event $\cE\cap\cE^\dagger$ holds with probability $1-\xi$. We condition on this event here after.
By the Cauchy-Schwarz inequality, we have
\begin{align}
\label{eq:bound_eigen}
&\EE_{\pi^*}\Bigl[ \sum_{t=0}^\infty \gamma^t \bigl(\phi(s_t,a_t)^\top \Lambda^{-1}\phi(s_t,a_t)\bigr)^{1/2} \Biggiven s_0=s\Bigr]\notag \\
&\qquad = \frac{1}{1-\gamma}\EE_{d^{\pi^*}}\Bigl[ \sqrt{\Tr\big(\phi(s,a)^\top \Lambda^{-1}\phi(s,a)\big)} \Biggiven s_0=s\Bigr]\notag \\
&\qquad = \frac{1}{1-\gamma}\EE_{d^{\pi^*}}\Bigl[ \sqrt{\Tr\big(\phi(s,a)\phi(s,a)^\top \Lambda^{-1}\big)} \Biggiven s_0=s\Bigr]\notag \\
&\qquad \leq \frac{1}{1-\gamma}\sqrt{\Tr\Big(\EE_{d^{\pi^*}}\big[\phi(s,a)\phi(s,a)^\top \biggiven s_0=s\big]\Lambda^{-1}\Big)} \notag \\
&\qquad = \frac{1}{1-\gamma}\sqrt{\Tr\Big(\Sigma_{\pi^*,s}^\top \Lambda^{-1}\Big)},
\end{align}
for all $s\in \cS$.
On the event $\cE \cap \cE^\dagger$, we have
\begin{align*}
\text{SubOpt}\big(\widehat{\pi},s;\gamma \big)&\leq 2 \beta \EE_{\pi^*}\Bigl[ \sum_{t=0}^\infty \gamma^t\bigl(\phi(s_t,a_t)^\top \Lambda^{-1}\phi(s_t,a_t)\bigr)^{1/2} \Biggiven s_0=s\Bigr]\\
&\leq\frac{ 2 \beta }{1-\gamma} \sqrt{\Tr\Big(\Sigma_{\pi^*,s}\cdot \big(I + \frac{1}{c^\dagger} \cdot N \cdot \Sigma_{\pi^*,s} \big)^{-1}\Big)}\\
& =\frac{ 2 \beta }{1-\gamma} \sqrt{\sum_{j=1}^d \frac{\lambda_{j}(s)}{1+\frac{1}{c^\dagger}\cdot N \cdot \lambda_{j}(s)}}.
\end{align*}
Here $\{\lambda_{j}(s)\}_{j=1}^d$ are the eigenvalues of $\Sigma_{\pi^*,s}$ for all $s\in \cS$, the first inequality follows from the definition of $\cE$ in Equation~\eqref{eq:def_ce}, and the second inequality follows from Equation~\eqref{eq:bound_eigen} and the definition of $\cE^\dagger$ in Equation~\eqref{eq:event_opt_explore}.
Meanwhile, by Definition \ref{assump:linear_mdp}, we have $\|\phi(s,a)\|\leq 1$ for all $(s,a)\in \cS \times \cA$. By Jensen's inequality, we have
\begin{equation}
\|\Sigma_{\pi^*,s}\|_{\mathop{\text{op}}} \leq \EE_{\pi^*}\big[ \|\phi(s,a)\phi(s,a)^\top \|_{\mathop{\text{op}}}\biggiven s_0=s \big] \leq 1
\end{equation}
for all $s\in \cS$. As $\Sigma_{\pi^*,s}$ is positive semidefinite, we have $\lambda_{j}(s) \in [0,1]$ for all $s\in \cS$ and all $j\in [d]$. Hence, on $\cE \cap \cE^\dagger$, we have
\begin{align*}
\text{SubOpt}\big(\widehat{\pi}, s;\gamma \big)&\leq \frac{ 2 \beta }{1-\gamma} \sqrt{\sum_{j=1}^d \frac{\lambda_{j}(s)}{1+\frac{1}{c^\dagger}\cdot N \cdot \lambda_{j}(s)}} \\
&\leq \frac{ 2 \beta }{1-\gamma} \sqrt{\sum_{j=1}^d \frac{1}{1+\frac{1}{c^\dagger}\cdot N}} \leq \frac{2c r_{\text{max}}}{(1-\gamma)^2} \sqrt{c^\dagger d^3\zeta/N}
\end{align*}
for all $s\in \cS$, where the second inequality follows from the fact that $\lambda_{j}(s) \in [0,1]$ for all $s\in \cS$ and all $j\in [d]$, while the third inequality follows from the choice of the scaling parameter $\beta > 0$. Then we have the conclusion of Lemma~\ref{lemma:2}.
\end{proof}
\begin{lemma}[Suboptimality Decomposition]
\label{lemma:subopt_decompose}
We have
\begin{align}
\text{SubOpt}(\hat{\pi},s;\gamma) =& - \EE_{\hat{\pi}}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right] + \EE_{\pi^*}\left[\sum_{t=0}^\infty{\gamma^t \delta(s_t,a_t)}\Biggiven s_0=s\right]\nonumber\\
& + \EE_{\pi^*}\left[\sum_{t=0}^\infty{\gamma^t \innerprod{
\hat{Q}(s_t,\cdot),\pi^*(\cdot|s_t)-\hat{\pi}(\cdot|s_t)
}}\Biggiven s_0=s\right],
\end{align}
where $\langle f,g\rangle=\int_{a\in\cA}{f(a)g(a) \ud a} .$
\end{lemma}
\begin{proof}
We have
\begin{align*}
\text{SubOpt}(\hat{\pi},s;\gamma) &= V^{{\pi}^*}(s) - V^{\hat{\pi}}(s) = V^{{\pi}^*}(s) - \hat{V}(s)+ \hat{V}(s) - V^{\hat{\pi}}(s).
\end{align*}
The first term satisfies
\begin{align*}
\hat{V}(s) - V^{\hat{\pi}}(s) & = \EE_{a\sim\hat{\pi}}\left[\hat{Q}(s,a)\right]- \EE_{a\sim\hat{\pi},s'\sim\cP(\cdot|s,a)}\left[r(s,a)+\gamma V^{\hat{\pi}}(s')\right] \\
& = \EE_{a\sim\hat{\pi},s'\sim\cP(\cdot|s,a)}\left[\hat{Q}(s,a)-r(s,a)-\gamma \hat{V}(s')\right] + \gamma\EE_{a\sim\hat{\pi},s'\sim\cP(\cdot|s,a)}\left[ \hat{V}(s') - V^{\hat{\pi}}(s')\right] \\
& = -\EE_{\hat{\pi}}\left[\delta(s,a)\right]+ \gamma\EE_{a\sim\hat{\pi},s'\sim\cP(\cdot|s,a)}\left[\hat{V}(s')-V^{\hat{\pi}}(s')\right] \\
& = -\EE_{\hat{\pi}}\left[\delta(s,a)\right]+ \cdots \\
& = -\EE_{\hat{\pi}}\left[\sum_{t=0}^{\infty}\gamma^t\delta(s_t,a_t)\given s_0=s\right], \\
\end{align*}
where the third equality uses $\hat{Q}(s,a)-r(s,a)-\gamma \EE_{s'}[\hat{V}(s')] = \hat{Q}(s,a)-(\BB_\gamma\hat{V})(s,a) = -\delta(s,a)$, and the ellipsis denotes the recursive expansion along the trajectory of $\hat{\pi}$. Meanwhile, the second term satisfies
\begin{align*}
V^{{\pi}^*}(s) - \hat{V}(s) & = \EE_{a\sim{\pi}^*,s'\sim\cP(\cdot|s,a)}\left[r(s,a)+\gamma V^{{\pi}^*}(s')\right]- \EE_{a\sim\hat{\pi}}\left[\hat{Q}(s,a)\right] \\
& = \EE_{a\sim{\pi}^*,s'\sim\cP(\cdot|s,a)}\left[r(s,a)+\gamma V^{{\pi}^*}(s')-\hat{Q}(s,a)\right]+ \EE_{a\sim{\pi}^*}\left[\hat{Q}(s,a)\right] - \EE_{a\sim\hat{\pi}}\left[\hat{Q}(s,a)\right] \\
& = \EE_{a\sim{\pi}^*,s'\sim\cP(\cdot|s,a)}\left[r(s,a)+\gamma \hat{V}(s')-\hat{Q}(s,a)\right] + \gamma \EE_{a\sim{\pi}^*,s'\sim\cP(\cdot|s,a)}\left[V^{{\pi}^*}(s') - \hat{V}(s')\right] \\
& \qquad + \innerprod{\hat{Q}(s,\cdot), \pi^*(\cdot\given s)-\hat{\pi}(\cdot\given s)}_\cA \\
& = \EE_{a\sim{\pi}^*}\left[\delta(s,a)\right] + \innerprod{\hat{Q}(s,\cdot), \pi^*(\cdot\given s)-\hat{\pi}(\cdot\given s)}_\cA + \cdots \\
& = \EE_{\pi^*}\left[\sum_{t=0}^{\infty}\gamma^t\delta(s_t,a_t)\given s_0=s\right] + \EE_{\pi^*}\left[\sum_{t=0}^{\infty}\gamma^t \innerprod{\hat{Q}(s_t,\cdot), \pi^*(\cdot\given s_t)-\hat{\pi}(\cdot\given s_t)}_\cA \given s_0=s\right]. \\
\end{align*}
Combining the two equations above, we have the desired result.
\end{proof}
\begin{lemma}[$\xi$-Quantifiers]
\label{lemma:xi_quantifier}
Let
\begin{equation}
\lambda =1, \quad \beta= c\cdot d V_{\text{max}}\sqrt{\zeta}, \quad \zeta = \log{(2dN/(1-\gamma)\xi)}.
\end{equation}
Then $\Gamma(s,a)=\beta \cdot \big( \phi(s, a)^\top \Lambda ^{-1} \phi(s, a) \big)^{1/2}$ specified in Equation~\eqref{eq:linear_uncertainty_quantifier} is a $\xi$-quantifier. That is, with probability at least $1-\xi$,
\begin{equation}
|(\BB \widehat{V})(s,a) -(\widehat{\BB} \widehat{V})(s,a)| \leq \Gamma(s,a) = \beta \sqrt{\phi(s,a)^\top\Lambda^{-1}\phi(s,a)}, \quad \forall (s,a) \in \cS\times\cA.
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{align}
\BB \widehat{V} -\widehat{\BB} \widehat{V} & = \phi(s,a)^\top (w-\widehat{w})\notag\\
& = \phi(s,a)^\top w - \phi(s,a)^\top\Lambda^{-1}\Bigl(\sum_{\tau=1}^{N}{\phi_\tau\bigl(r_\tau+\gamma \widehat{V}(s_{\tau+1})\bigr)}\Bigr)\notag\\
& = \underbrace{\phi(s,a)^\top w - \phi(s,a)^\top\Lambda^{-1}\Bigl(\sum_{\tau=1}^{N}{\phi_\tau\phi_\tau^\top w}\Bigr)}_{\displaystyle \text{(i)}} +\underbrace{\phi(s,a)^\top\Lambda^{-1}\Bigl(\sum_{\tau=1}^{N}{\phi_\tau\phi_\tau^\top w}-\sum_{\tau=1}^{N}\phi_\tau\bigl(r_\tau+\gamma \widehat{V}(s_{\tau+1})\bigr)\Bigr)}_{\displaystyle \text{(ii)}}, \label{eq:term1_diff}
\end{align}
where $\phi_\tau$ is shorthand for $\phi(s_\tau,a_\tau)$.
Then we bound $\text{(i)}$ and $\text{(ii)}$, respectively.
For $\text{(i)}$, we have
\begin{align}
\label{eq:zzz888}
\text{(i)} &= \phi(s,a)^\top w - \phi(s,a)^\top\Lambda^{-1}(\Lambda-\lambda I)w \notag\\
&= \lambda \phi(s,a)^\top\Lambda^{-1}w\notag\\
&\leq \lambda \norm{\phi(s,a)}_{\Lambda^{-1}} \norm{w}_{\Lambda^{-1}}\notag\\
&\leq V_{\text{max}} \sqrt{d\lambda} \sqrt{\phi(s,a)^\top\Lambda^{-1}\phi(s,a)},
\end{align}
where the first inequality follows from the Cauchy-Schwarz inequality, and the second inequality follows from the fact that $\norm{\Lambda^{-1}}_{\text{op}}\leq \lambda^{-1}$ and Lemma~\ref{lemma:bounded_weight_value}.
For notation simplicity, let $\epsilon_\tau = r_\tau+\gamma \widehat{V}(s_{\tau+1}) - \phi_\tau^\top w$. Then we have
\begin{align}
\label{eq:define_term3}
|\text{(ii)}| & = \Bigl|\phi(s,a)^\top\Lambda^{-1}\sum_{\tau=1}^N{\phi_\tau\epsilon_\tau}\Bigr|\notag\\
& \leq \norm{\sum_{\tau=1}^N{\phi_\tau\epsilon_\tau}}_{\Lambda^{-1}}\cdot\norm{\phi(s,a)}_{\Lambda^{-1}}\notag\\
& = \underbrace{\norm{\sum_{\tau=1}^N{\phi_\tau\epsilon_\tau}}_{\Lambda^{-1}}}_{\text{(iii)}} \cdot \sqrt{\phi(s,a)^\top \Lambda^{-1}\phi(s,a)}.
\end{align}
The term $\text{(iii)}$ depends on the randomness of the data collection process of $\cD$. To bound this term, we resort to uniform concentration inequalities to upper bound
\begin{equation}
\sup_{V \in \cV(R,B,\lambda)} \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(V) \Big\|_{\Lambda^{-1}},\notag
\end{equation}
where
\begin{equation}
\cV(R,B,\lambda) = \{V(s;w,\beta,\Sigma):\mathcal{S}\rightarrow [0,V_{\text{max}}]~\text{with}~\norm{w}\leq R,~\beta\in[0,B],~\Sigma\succeq \lambda\cdot I\},
\end{equation}
where $V(s; w,\beta,\Sigma) = \max_a\{\phi(s,a)^\top w-\beta \cdot\sqrt{\phi(s,a)^\top\Sigma^{-1}\phi(s,a)}\}$. For all $\epsilon>0$, let $\cN(\epsilon;R,B,\lambda)$ be a minimal $\epsilon$-cover of $\cV(R,B,\lambda)$ with respect to the supremum norm. That is, for any function $V\in\cV(R,B,\lambda)$, there exists a function $V^\dagger\in \cN(\epsilon;R,B,\lambda)$, such that
\begin{equation}
\sup_{s\in\cS}{|V(s)-V^\dagger(s)|\leq \epsilon}.
\end{equation}
Let $R_0=V_{\text{max}}\sqrt{Nd/\lambda}$ and $B_0=2\beta$; it is easy to show that at each iteration, $\widehat{V}\in \cV(R_0,B_0,\lambda)$. From the definition of $\BB$, we have
\begin{equation}
|\BB\widehat{V}-\BB V^\dagger| = \gamma\left|\int{ (\widehat{V}(s')-V^\dagger(s'))\innerprod{\phi(s,a),\mu(s')}\ud s'} \right| \leq \gamma \epsilon.
\end{equation}
Then we have
\begin{equation}
|(r+\gamma V -\BB V)- (r+\gamma V^\dagger -\BB V^\dagger)| \leq 2\gamma \epsilon.
\end{equation}
Let $\epsilon_\tau^\dagger = r(s_\tau,a_\tau)+\gamma V^\dagger(s_{\tau+1})-(\BB V^\dagger)(s_\tau,a_\tau)$. Then we have
\begin{align*}
\text{(iii)}^2 = \norm{\sum_{\tau=1}^N\phi_\tau\epsilon_\tau}^2_{\Lambda^{-1}} &\leq 2 \norm{\sum_{\tau=1}^N\phi_\tau\epsilon^\dagger_\tau}^2_{\Lambda^{-1}} +2 \norm{\sum_{\tau=1}^N\phi_\tau(\epsilon^\dagger_\tau-\epsilon_\tau)}^2_{\Lambda^{-1}} \\
& \leq 2 \norm{\sum_{\tau=1}^N\phi_\tau\epsilon^\dagger_\tau}^2_{\Lambda^{-1}} + 8\gamma^2 \epsilon^2 N \sum_{\tau=1}^{N}\phi_\tau^\top \Lambda^{-1} \phi_\tau\\
& \leq 2 \norm{\sum_{\tau=1}^N\phi_\tau\epsilon^\dagger_\tau}^2_{\Lambda^{-1}} + 8\gamma^2 \epsilon^2 N^2 / \lambda.
\end{align*}
It remains to bound $\norm{\sum_{\tau=1}^N\phi_\tau\epsilon^\dagger_\tau}^2_{\Lambda^{-1}}$. From the assumption on the data collection process, it is easy to show that $\EE_\cD{[\epsilon_\tau \given \cF_{\tau-1}]}=0$, where $\cF_{\tau-1} = \sigma\bigl(\{(s_i,a_i)\}_{i=1}^{\tau}\cup \{(r_i,s_{i+1})\}_{i=1}^{\tau-1}\bigr)$ is the $\sigma$-algebra generated by the variables from the first $\tau$ steps. Moreover, since $|\epsilon_\tau| \leq 2V_{\text{max}}$, the variable $\epsilon_\tau$ is $2V_{\text{max}}$-sub-Gaussian conditioning on $\cF_{\tau-1}$. Then we invoke Lemma \ref{lem:concen_self_normalized} with $M_0=\lambda \cdot I$ and $M_k = \lambda \cdot I + \sum_{\tau =1}^k \phi(s_\tau,a_\tau) \phi(s_\tau,a_\tau)^\top$. For a fixed function $V\colon \cS\to [0,V_{\text{max}}]$, we have
\begin{equation}
\PP_{\cD} \bigg( \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(V) \Big\|_{\Lambda^{-1}}^2 > 8 V^2_{\text{max}} \cdot \log \Big( \frac{\det(\Lambda)^{1/2}}{\delta \cdot \det( \lambda \cdot I) ^{1/2} } \Big) \bigg ) \leq \delta
\end{equation}
for all $\delta \in (0,1)$. Note that $\|\phi(s,a)\|\leq 1$ for all $(s,a )\in \cS\times \cA$ by Definition \ref{assump:linear_mdp}.
We have
\begin{equation*}
\|\Lambda\|_{\mathop{\text{op}}} = \Big\|\lambda \cdot I + \sum_{\tau=1}^N \phi(s_\tau,a_\tau)\phi(s_\tau,a_\tau)^\top \Big\| _{\mathop{\text{op}}} \leq \lambda + \sum_{\tau = 1} ^N \| \phi(s_\tau,a_\tau)\phi(s_\tau,a_\tau)^\top \|_{\mathop{\text{op}}} \leq \lambda + N,
\end{equation*}
where $\|\cdot\|_{\mathop{\text{op}}}$ denotes the matrix operator norm.
Hence, it holds that
$\det(\Lambda)\leq (\lambda+N)^d$ and $\det(\lambda \cdot I) = \lambda ^d $, which implies
\begin{align*}
& \PP_{\cD} \bigg( \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(V) \Big\|_{\Lambda^{-1}}^2 > 4V^2_{\text{max}}\cdot \bigl ( 2 \cdot \log(1/ \delta ) + d\cdot \log(1+N/\lambda)\big)
\biggr) \notag \\
&\qquad \leq \PP_{\cD} \bigg( \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(V) \Big\|_{\Lambda^{-1}}^2 > 8 V^2_{\text{max}} \cdot \log \Big( \frac{\det(\Lambda)^{1/2}}{\delta \cdot \det( \lambda \cdot I) ^{1/2} } \Big) \bigg ) \leq \delta. \notag
\end{align*}
Applying this concentration inequality together with the union bound over the $\varepsilon$-cover, we have
\begin{equation}
\PP_{\cD} \bigg( \sup_{V \in \cN (\varepsilon)} \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(V) \Big\|_{\Lambda^{-1}}^2 > 4V^2_{\text{max}} \cdot \bigl ( 2 \cdot \log(1/ \delta ) + d \cdot \log(1+N/\lambda)\big) \biggr ) \leq \delta \cdot | \cN(\varepsilon ) | .
\end{equation}
Recall that
\begin{equation}
\hat V \in \cV (R_0, B_0, \lambda),\qquad \text{where}~~ R_0 = V_{\text{max}}\sqrt{ Nd/\lambda},~ B_0 = 2\beta,~ \lambda = 1 ,~ \beta = c \cdot d V_{\text{max}} \sqrt{\zeta}.
\end{equation}
Here $c>0$ is an absolute constant, $\xi\in (0,1)$ is the confidence parameter, and $\zeta = \log (2dN/((1-\gamma)\xi))$ is as specified in Lemma~\ref{lemma:xi_quantifier}. Applying Lemma \ref{lem:covering_num} with $\varepsilon = d V_{\text{max}} / N$,
we have
\begin{align}\label{eq:apply_cov_num}
\log | \cN(\varepsilon) | & \leq d \cdot \log ( 1 + 4 d^{-1/2}N^{3/2} ) + d^2 \cdot \log ( 1 + 32 c^2\cdot d^{1/2}N^2\zeta )\notag \\
& \leq d \cdot \log ( 1 + 4 d^{1/2}N^2 ) + d^2 \cdot \log ( 1 + 32 c^2 \cdot d^{1/2}N^2\zeta ).
\end{align}
By setting $\delta=\xi/| \cN(\varepsilon ) |$, we have that with probability at least $1-\xi$,
\begin{align}
\label{eq:bound_term3_8}
& \Big\| \sum_{\tau=1}^{N} \phi(s_\tau,a_\tau) \cdot \epsilon_\tau(\hat{V}) \Big\|_{\Lambda^{-1}} ^2\notag
\\&\qquad \leq 8 V_{\text{max}}^2 \cdot
\bigl ( 2 \cdot \log (V_{\text{max}}/ \xi ) + 4d^2 \cdot \log ( 64c^2\cdot d^{1/2 } N^2 \zeta ) + d\cdot \log(1+N) + 4 d^2 \bigr ) \notag
\\&\qquad \leq 8V_{\text{max}}^2 d^2 \zeta (4+\log{(64c^2)}).
\end{align}
Here the last inequality follows from simple algebraic inequalities. We set $c\geq 1$ to be sufficiently large, which ensures that $8\cdot\bigl(4+\log(64c^2)\bigr)\leq c^2/4$ on the right-hand side of Equation~\eqref{eq:bound_term3_8}. By Equations~\eqref{eq:define_term3} and~\eqref{eq:bound_term3_8}, it holds that
\begin{equation}
\label{eq:rrr888}
| \text{(ii)}| \leq c/2 \cdot d V_{\text{max}} \sqrt{ \zeta } \cdot \sqrt{ \phi(s,a) ^\top \Lambda ^{-1} \phi(s,a) } = \beta /2 \cdot \sqrt{ \phi(s,a) ^\top \Lambda ^{-1} \phi(s,a) }.
\end{equation}
By Equations \eqref{eq:linear_uncertainty_quantifier}, \eqref{eq:term1_diff}, \eqref{eq:zzz888}, and \eqref{eq:rrr888}, for all $(s,a) \in \cS\times \cA$, it holds that
\begin{equation}
\bigl | (\BB \hat V ) (s,a) - (\hat\BB \hat V ) (s,a) \bigr | \leq ( V_{\text{max}} \sqrt{d} + \beta /2 ) \cdot \sqrt{ \phi(s,a) ^\top \Lambda ^{-1} \phi(s,a) } \leq \Gamma (s,a)
\end{equation}
with probability at least $1 - \xi$. Therefore, we conclude the proof of Lemma~\ref{lemma:xi_quantifier}.
\end{proof}
\begin{lemma}[Bounded weight of value function]
\label{lemma:bounded_weight_value}
Let $V_{\text{max}} = r_{\text{max}}/(1-\gamma)$. For any function $V: \cS \rightarrow [0, V_{\text{max}}]$, we have
\begin{align*}
\norm{w} \leq V_{\text{max}}\sqrt{d} \qquad \text{and} \qquad \norm{\widehat{w}} \leq V_{\text{max}} \sqrt{\frac{Nd}{\lambda}}.
\end{align*}
\end{lemma}
\begin{proof}
Since
\begin{align*}
w^\top \phi(s,a) = (\BB V)(s,a) = \theta^\top \phi(s,a) + \gamma \int{V(s')\phi(s,a)^\top M \psi(s')\ud s'},
\end{align*}
we have $w = \theta + \gamma M \int{V(s')\psi(s')\ud s'}$, where $\theta$ is the reward parameter satisfying $r(s,a)=\phi(s,a)^\top\theta$ with $\|\theta\|\leq r_{\text{max}}\sqrt{d}$. Hence,
\begin{align*}
\norm{w} &\leq \norm{\theta} + \gamma \Big\| M \int{V(s')\psi(s')\ud s'} \Big\| \\
&\leq r_{\text{max}} \sqrt{d}+\gamma V_{\text{max}} \sqrt{d}\\
&= V_{\text{max}} \sqrt{d},
\end{align*}
where the second inequality uses the normalization of the linear MDP (Definition \ref{assump:linear_mdp}), and the last equality uses $V_{\text{max}} = r_{\text{max}}/(1-\gamma)$.
For $\widehat{w}$, we have
\begin{align*}
\norm{\widehat{w}} &= \norm{\Lambda^{-1}\sum_{\tau=1}^N{\phi_\tau (r_\tau+\gamma V(s_{\tau+1}))}} \\
& \leq \sum_{\tau=1}^N{\norm{\Lambda^{-1}{\phi_\tau (r_\tau+\gamma V(s_{\tau+1}))}}} \\
& \leq V_{\text{max}}\sum_{\tau=1}^N{\norm{\Lambda^{-1}{\phi_\tau}}} \\
& \leq V_{\text{max}}\sum_{\tau=1}^N{\sqrt{\phi_\tau^\top \Lambda^{-1/2}\Lambda^{-1}\Lambda^{-1/2}\phi_\tau}} \\
& \leq \frac{V_{\text{max}}}{\sqrt{\lambda}}\sum_{\tau=1}^N{\sqrt{\phi_\tau^\top \Lambda^{-1}\phi_\tau}} \\
& \leq V_{\text{max}}\sqrt{\frac{N}{\lambda}}\sqrt{\mathrm{Tr}(\Lambda^{-1}\sum_{\tau=1}^N{\phi_\tau\phi_\tau^\top })} \\
& \leq V_{\text{max}}\sqrt{\frac{Nd}{\lambda}}.
\end{align*}
\end{proof}
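The ridge-regression bound on $\norm{\widehat{w}}$ above is easy to sanity-check numerically. The following Python sketch is purely illustrative (all variable names are ours, and it only assumes unit-norm features and targets in $[0,V_{\text{max}}]$); it verifies $\norm{\widehat{w}} \leq V_{\text{max}}\sqrt{Nd/\lambda}$ on random data.
\begin{verbatim}
import numpy as np

# Sanity check of the ridge-weight bound (illustrative, not part of the proof).
rng = np.random.default_rng(0)
d, N, lam, V_max = 8, 500, 1.0, 10.0

Phi = rng.normal(size=(N, d))
Phi /= np.maximum(1.0, np.linalg.norm(Phi, axis=1, keepdims=True))  # ||phi|| <= 1
y = rng.uniform(0.0, V_max, size=N)  # targets r + gamma * V(s') in [0, V_max]

Lambda = lam * np.eye(d) + Phi.T @ Phi
w_hat = np.linalg.solve(Lambda, Phi.T @ y)

bound = V_max * np.sqrt(N * d / lam)
assert np.linalg.norm(w_hat) <= bound
print(np.linalg.norm(w_hat), "<=", bound)
\end{verbatim}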
\subsection{Proof of Lemma~\ref{lemma:3}}
\label{proof_lemma_3}
\begin{proof}
We consider the following iteration:
\begin{align}
\label{model_based_iteration}
&V_{\text{min}}~~~\leftarrow \min_{s'}V(s'), \nonumber\\
&Q(s,a)\leftarrow r(s,a) + \gamma(1-\varepsilon)\EE_{s'\sim P_0}V(s') + \gamma \varepsilon V_{\text{min}}, \nonumber\\
&V(s)~~~\leftarrow \max_a Q(s,a).
\end{align}
It is easy to see that if the iteration in~\eqref{model_based_iteration} converges, its limit is the value function for the policy specified in Equation~\eqref{model_based_policy_opt}. In fact, the iteration above has a unique stationary solution, which follows from the fact that it is a $\gamma$-contraction in $V$. It then suffices to show that the solution to the value iteration with discount factor $(1-\varepsilon)\gamma$ coincides with this stationary solution up to a constant. Let $Q(s,a)$ and $V(s)$ be the solution to the value iteration with discount factor $(1-\varepsilon)\gamma$. Then we have
\begin{align}
Q(s,a) &= r(s,a) + (1-\varepsilon)\gamma \EE_{s'}{V(s')}. \nonumber
\end{align}
Let $\Delta={\gamma\varepsilon\min_s[\max_aQ(s,a)]}/{(1-\gamma)}$ and $\tilde{Q}(\cdot,\cdot)=Q(\cdot,\cdot)+\Delta, \tilde{V}(\cdot) = V(\cdot)+\Delta$,
then we have $$\min_s[\max_a\tilde{Q}(s,a)] = \frac{(1-\gamma+\gamma\varepsilon)\Delta}{\gamma\varepsilon}.$$
This leads to
\begin{align*}
&\tilde{Q}(s,a) \\
=& r(s,a) + \gamma (1-\varepsilon) \EE_{s'} V(s')+\Delta\\
=& r(s,a) + \gamma (1-\varepsilon) \EE_{s'} {\tilde{V}(s')} + (1-\gamma+\gamma\varepsilon)\Delta\\
=& r(s,a) + \gamma (1-\varepsilon) \EE_{s'} \tilde{V}(s') + \gamma \varepsilon \min_s[\max_a \tilde{Q}(s,a)].\\
\end{align*}
This means that $\tilde{Q}$ is the unique stationary solution to the iteration in~\eqref{model_based_iteration}. Hence the policy in Equation~\eqref{model_based_policy_opt} has the same value function as the policy in Equation~\eqref{small_gamma_policy_opt} up to a constant, which finishes the proof of Lemma~\ref{lemma:3}.
\end{proof}
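The equivalence proved above can also be checked numerically. Below is a hedged Python sketch (our own illustrative code, not the paper's implementation): on a random tabular MDP it runs the pessimistic iteration in~\eqref{model_based_iteration} and plain value iteration with discount factor $(1-\varepsilon)\gamma$, then confirms that the two $Q$-functions differ by a constant and induce the same greedy policy.
\begin{verbatim}
import numpy as np

# Numerical check of Lemma 3 on a random tabular MDP (illustrative only).
rng = np.random.default_rng(1)
S, A, gamma, eps = 20, 4, 0.95, 0.1

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)  # transitions
r = rng.random((S, A))                                        # rewards in [0, 1]

def pessimistic_iter(n_iter=2000):
    # Q <- r + gamma*(1-eps)*E_P[V] + gamma*eps*min_s V(s)
    Q = np.zeros((S, A))
    for _ in range(n_iter):
        V = Q.max(axis=1)
        Q = r + gamma * (1 - eps) * (P @ V) + gamma * eps * V.min()
    return Q

def lower_gamma_iter(n_iter=2000):
    # Plain value iteration with the lower discount factor (1-eps)*gamma.
    Q = np.zeros((S, A))
    for _ in range(n_iter):
        Q = r + (1 - eps) * gamma * (P @ Q.max(axis=1))
    return Q

Q1, Q2 = pessimistic_iter(), lower_gamma_iter()
diff = Q1 - Q2
assert np.allclose(diff, diff.mean(), atol=1e-6)        # constant offset
assert (Q1.argmax(axis=1) == Q2.argmax(axis=1)).all()   # same greedy policy
print("constant offset Delta =", diff.mean())
\end{verbatim}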
\subsection{Proof of Theorem~\ref{theorem:2}}
\label{proof_theorem_2}
\begin{proof}
We first specialize the algorithm to offline value iteration with a lower discount factor and an estimated model, as depicted in Algorithm~\ref{alg:3}.
\begin{algorithm}[H]
\caption{Generalized Value Iteration}\label{alg:3}
\begin{algorithmic}[1]
\STATE {\bf Require}: Dataset $\cD$, discount factor $\gamma$, discount factor coefficient $\varepsilon$
\STATE Estimate the model by MLE: $\widehat{P}=\argmax_{P\in \cM}\EE_{\cD}[\ln P(s'\mid s,a)]$.
\STATE Obtain the estimated Bellman operator $\hat \BB_\gamma$ from the learned model $\widehat{P}$.
\WHILE{not converge}
\STATE Set $\hat{Q}(\cdot,\cdot) \leftarrow \min_{e\in \cE(\varepsilon)} \left[(\hat\BB_{(1-e)\gamma} \hat{V})(\cdot,\cdot)\right]$, where $\cE(\varepsilon) = \{e(s,a) | \EE_{\cD}{\left[e(s,a)\right]} \leq \varepsilon\}$.
\STATE Set $\hat{\pi} (\cdot \given \cdot) \leftarrow \argmax_{\pi}\EE_{\pi}{\left[\hat{Q}(\cdot, \cdot)\right]}$.
\STATE Set $\hat{V}(\cdot) \leftarrow \EE_{\hat{\pi}}{\left[\hat{Q}(\cdot, \cdot)\right]}$.
\ENDWHILE
\STATE \textbf{Return} $\hat\pi$%
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:3} implements the idea of a lower discount factor $(1-\varepsilon)\gamma$, up to technical differences.
It is easy to show that Algorithm~\ref{alg:3} yields the same policy as the following optimization problem $$\argmax_{\pi\in\Pi}\argmin_{M\in\cM_\varepsilon} V_{M,\gamma}(\pi),$$
where $$\cM_\varepsilon=\set{M\in\cM \Biggiven \exists~\cP(\cdot|s,a), e\in \cE(\varepsilon),\cP_M(\cdot|s,a)= (1-e)\cP(\cdot|s,a;\widehat{M})+e \cP(\cdot|s,a), \forall (s,a) \in \cD}.$$
Here $\widehat{M}$ is the model obtained from MLE estimator and $\cM$ is the set of all linear models. The proof is similar to Lemma~\ref{lemma:3} and we omit it for simplicity.
Then we prove the theorem with the following steps.
\paragraph{1. Bounding $\cM_\varepsilon$.}~\\
Let $\cM_{\text{TV}}(M_0,\varepsilon)=\set{M\given \EE_{\cD}\left[\mathrm{D}_\mathrm{TV}{(\cP(\cdot|s,a;{M_0}),\cP(\cdot|s,a;M))}^2\right]\leq \varepsilon}$. To establish the equivalence between $\cM_\varepsilon$ and $\cM_{\text{TV}}(M_0,\varepsilon)$, we need the following assumption.
\begin{assumption}[Regularity]
\label{assumption_regularity}
We assume that the underlying linear MDP satisfies
\$\tilde{p}=\min\{p_{\text{min}},1-p_{\text{max}}\}>0,\label{model_based_regularity}\$
where $p_{\text{min}}=\inf_{\cP(s'|s,a)>0}{\cP(s'|s,a)},p_{\text{max}}=\sup_{\cP(s'|s,a)<1}{\cP(s'|s,a)}$.
\end{assumption}
Note that this assumption is always true for tabular MDPs. It only rules out the case when there exists a sequence $\{s'_n\}_{n=1}^{\infty}$ such that $\lim_{n\rightarrow \infty} \cP(s'_n|s,a) = 0$ or $\lim_{n\rightarrow \infty} \cP(s'_n|s,a) = 1$.
Then we have
\begin{equation}
\label{eq:m_epsilon_bound}
\cM_{\text{TV}}(\widehat{M},\tilde{p}^2 \varepsilon^2 /4 ) \subseteq \cM_\varepsilon \subseteq \cM_{\text{TV}}(\widehat{M},\varepsilon^2).
\end{equation}
Recall that $\cM_\varepsilon=\set{M\given \cP(\cdot|s,a;M)= (1-\varepsilon)\cP(\cdot|s,a;{\widehat{M}})+\varepsilon \cP(\cdot) }$. On the one hand, it is easy to see that the largest deviation in $\cM_\varepsilon$ from $\widehat{M}$ happens when $\cP(\cdot|s,a;{\widehat{M}})$ is close to 0 or close to 1. On the other hand, we have $\mathrm{D}_\mathrm{TV}{((1-\varepsilon)\cP(\cdot|s,a;M)+\varepsilon \cP(\cdot),\cP(\cdot|s,a;{M}))}\leq \varepsilon$. Then we have the result in Equation~\eqref{eq:m_epsilon_bound}. The following steps largely follow~\citet{uehara2021pessimistic}.
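As a quick numerical illustration of the upper inclusion (a sanity check only, not part of the proof), the sketch below verifies on random categorical distributions that mixing any distribution into $\cP(\cdot|s,a;\widehat{M})$ with weight $\varepsilon$ moves it by at most $\varepsilon$ in total variation; all names are illustrative.
\begin{verbatim}
import numpy as np

# Check: TV((1-eps)*P_hat + eps*Q, P_hat) = eps * TV(Q, P_hat) <= eps.
rng = np.random.default_rng(2)
n, eps = 50, 0.2
for _ in range(1000):
    P_hat = rng.random(n); P_hat /= P_hat.sum()
    Q = rng.random(n); Q /= Q.sum()
    mix = (1 - eps) * P_hat + eps * Q
    tv = 0.5 * np.abs(mix - P_hat).sum()
    assert tv <= eps + 1e-12
\end{verbatim}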
\paragraph{2. Upper-bounding $ \EE_{(s,a)\sim\rho}[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;\widehat{M}))^2 ]$. }~ \\
Let
\begin{align*}
\cM =\braces{\cP(\cdot\given s,a;M) \mid M \in \RR^{d\times r}, \norm{M}_2\leq \sqrt{d},\int \phi(s,a)^\top M\psi(s')\ud(s')=1,~ \forall(s,a)},
\end{align*}
and $\cH = \braces{ \sqrt{\frac{\cP+\cP^{\star}}{2}} \mid \cP\in \cM } . $
By invoking Theorem~\ref{thm:mle}, we first show
\begin{align*}
\EE_{(s,a)\sim\rho}[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;\widehat{M}))^2 ]\leq c\{(d/N)\ln^2(Nd)+\ln(c/\delta)/N \}.
\end{align*}
To do that, we calculate the entropy integral with bracketing.
First, we have
\begin{align}\label{eq:bracketing}
\cN_{[]}(\epsilon,\cH,d) \leq \cN_{[]}(\epsilon,\cM,d'),
\end{align}
where
\begin{align}
d'(a,b) &=\EE_{(s,a)\sim\rho}\bracks{\int (a(s,a,s')-b(s,a,s'))^2 \ud(s')}^{1/2},\\
d(a,b) &=\EE_{(s,a)\sim\rho}\bracks{\int (\sqrt{a(s,a,s')}-\sqrt{b(s,a,s')})^2 \ud(s')}^{1/2}.
\end{align}
Here, we use two observations. The first observation is
\begin{align*}
d^2\prns{\sqrt{\frac{\cP(M')+\cP^{\star}}{2}}, \sqrt{\frac{\cP(M'')+\cP^{\star}}{2}}}
\leq c_1d'^2( \cP(M'),\cP(M''))
\end{align*}
due to the mean-value theorem
\begin{align*}
\sqrt{a}-\sqrt{b}\leq \max(1/\sqrt{a},1/\sqrt{b})(a-b)
\end{align*}
and Assumption~\ref{assumption_regularity}, which gives $\cP^{\star}(s'\mid s,a)\geq c_0>0$. The second observation is that when $\cP'<g<\cP''$, we also have $\sqrt{(\cP'+\cP^{\star})/2}<\sqrt{(g+\cP^{\star})/2}<\sqrt{(\cP''+\cP^{\star})/2}$. Then, Equation~\eqref{eq:bracketing} follows.
Next, by letting $M^{(1)},\cdots,M^{(K)}$ be an $\epsilon$-cover of the $d$-dimensional ball with a radius $\sqrt{d}$, i.e, $B_d(\sqrt{d})$, we have the brackets $\{[\cP(M^{(i)})-\epsilon,\cP(M^{(i)})+\epsilon]\}_{i=1}^{K}$ which cover $\cM$. This is because for any $\cP(M)\in \cM$, we can take $M^{(i)}$ s.t. $\|M-M^{(i)}\|_2\leq \epsilon/\sqrt{d} $, then,
\begin{align*}
\cP(M^{(i)})-\epsilon<\cP(M)< \cP(M^{(i)})+\epsilon,\quad \forall(s,a,s')
\end{align*}
noting
\begin{align}\label{eq:braket}
|\cP(M)(s,a,s') -\cP(M^{(i)})(s,a,s')|\leq \sqrt{d}\|M-M^{(i)}\|_2\leq \epsilon, \quad \forall(s,a,s')
\end{align}
Here Equation~\eqref{eq:braket} follows from the fact that $\|\phi(s,a)\|_2 \leq 1$ and $\|\psi(s')\|_2 \leq 1$, so that $|\phi(s,a)^\top(M-M^{(i)})\psi(s')|\leq \sqrt{d}\|M-M^{(i)}\|_2$. Therefore, we have
\begin{align*}
\cN_{[]}(\epsilon,\cM,\|\cdot\|_2)\leq \cN(\epsilon/\sqrt{d}, B_d(c\sqrt{d}),\|\cdot\|_2),
\end{align*}
where $\cN(\epsilon/\sqrt{d}, B_d(c\sqrt{d}),\|\cdot\|_2)$ is a covering number of $ B_d(c\sqrt{d})$ w.r.t $\|\cdot\|_2$. This is upper-bounded by $(c\sqrt{d}/\epsilon)^d$ \citep[Lemma 5.7]{wainwright2019high}. Thus, we can calculate the upper bound of the entropy integral $J_{B}(\delta,\cM,\|\cdot\|_2)$:
\begin{align*}
\int^{\delta}_0 d^{1/2}\ln^{1/2}(cd/u)\mathrm{d}u & \leq \int^{\delta}_0 d^{1/2}\ln(1/u)\mathrm{d}u + \delta d^{1/2}\ln(c_1\sqrt{d}) \\
&=c d^{1/2}(\delta+\delta\ln(1/\delta))+ \delta d^{1/2}\ln(c d)\\
&\leq c d^{1/2}\delta\ln(cd/\delta).
\end{align*}
By taking $G(x)=d^{1/2}x\ln(c \sqrt{d}/x) $ in Theorem~\ref{thm:mle}, $\delta_n=(d/n)^{1/2}\ln(nd)$ satisfies the critical inequality $$\sqrt{n}\delta^2_n\geq d^{1/2}\delta_n \ln(c d/\delta_n).$$ Finally, with probability $1-\delta$,
\begin{align}\label{eq:final}
\EE_{(s,a)\sim\rho}[\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a;M^{\star}),P(\cdot \mid s,a;\widehat{M}))^2 ]\leq \xi',\quad \xi'\coloneqq c\{(d/N)\ln^2(Nd)+\ln(c/\delta)/N \}.
\end{align}
Hereafter, we condition on this event.
\paragraph{3. Upper bounding $\EE_{\cD}[\|P(\cdot\mid s,a;M^\star)-P(\cdot\mid s,a;\hat M) \|^2_1 ]$. }~\\
We take an $\epsilon$-cover of the ball $B_d(R)$ in terms of $\|\cdot\|_2$, i.e., $\bar M=\{M^{(1)},\cdots,M^{(K)}\}$, where $K=(cR/\epsilon)^d$. Then, for any $M \in B_d(R)$, there exists $M^{(i)}$ s.t. $\forall (s,a)\in \cS\times \cA$,
\begin{align}
& \left|\|{P}(\cdot\mid s,a;M)-{ P(\cdot\mid s,a;M^\star)}\|^2_1- \|{ P(\cdot\mid s,a;M^{(i)})}-{P(\cdot\mid s,a;M^\star)}\|^2_1\right| \nonumber \\
&\leq 4|\|{P(\cdot\mid s,a;M)}-{ P(\cdot\mid s,a;M^\star)}\|_1- \|{ P}(\cdot\mid s,a;M^{(i)})-{P}(\cdot\mid s,a;M^{\star})\|_1| \tag{$a^2-b^2=(a-b)(a+b)$} \\
&\leq 4\|P(\cdot\mid s,a;M)-P(\cdot\mid s,a;M^{(i)})\|_1 \tag{$|\|a\|-\|b\||\leq \|a-b\|$}\\
&\leq 4\|M-M^{(i)} \|_2 \tag{From \eqref{eq:braket} } \\
&\leq 4 \epsilon. \label{eq:mle_general}
\end{align}
Here, note for $\hat M$, we have $M^{(i)}$ s.t. $\|\hat M - M^{(i)} \|_2\leq \epsilon$. Then, we have
\begin{align*}
&\mathbb{E}_{\cD}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;\hat M) \|^2_1\\
&\lesssim \mathbb{E}_{\cD}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1+\epsilon \tag{From \eqref{eq:mle_general}}\\
&\lesssim (\mathbb{E}_{\cD}- \mathbb{E}_{(s,a)\sim \rho})\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1+\epsilon+ \mathbb{E}_{(s,a)\sim \rho}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1\\
&\lesssim \sqrt{ \frac{ \mathrm{var}_{(s,a)\sim \rho}[\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1 ]\ln(K/\delta) }{n} }+\frac{\ln(K/\delta)}{n}\\
&+\epsilon+\mathbb{E}_{(s,a)\sim \rho}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1 \tag{Bernstein's inequality}\\
&\lesssim \sqrt{ \frac{ \EE_{(s,a)\sim \rho}[\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1 ]\ln(K/\delta) }{n} }+\frac{\ln(K/\delta)}{n}\\
&+\epsilon+\mathbb{E}_{(s,a)\sim \rho}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_1 \tag{$\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;M^{(i)}) \|^2_{1}\leq 4$}.
\end{align*}
Then,
\begin{align*}
& \mathbb{E}_{\cD}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;\hat M) \|^2_1\\
&\lesssim \sqrt{ \frac{ \{\EE_{(s,a)\sim \rho}[\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;\hat M) \|^2_1 ]+\epsilon\}\ln(K/\delta) }{n} }+\frac{\ln(K/\delta)}{n}\\
& +\epsilon+\mathbb{E}_{(s,a)\sim \rho}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;\hat M) \|^2_1 \\
&\lesssim \sqrt{ \frac{ \{\xi'+\epsilon\}\ln(K/\delta) }{n} }+\frac{\ln(K/\delta)}{n}+\epsilon+\xi' \tag{From \eqref{eq:final}}.
\end{align*}
In the end, by taking $\epsilon=1/n$, we have with probability $1-\delta$,
\begin{align*}
\mathbb{E}_{\cD}\|P(\cdot \mid s,a; M^\star)-P(\cdot \mid s,a;\hat M) \|^2_1 \leq \xi,\quad \xi= c\{(d/n)\ln^2(nR)+\ln(c/\delta)/n \}.
\end{align*}
This implies with probability $1-\delta$, $P^{\star}\in \cM_{\text{TV}}{(\hat M,\xi)}$. Note that $\cM_{\text{TV}}(\widehat{M},\tilde{p}^2\varepsilon^2/4) \subseteq \cM_\varepsilon $; this also implies that with probability $1-\delta$,
$P(\cdot \mid s,a;M^{\star})\in \cM_{\varepsilon}$, where $\varepsilon=2\sqrt{\xi}/\tilde{p}$.
\paragraph{4. Show $ \EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2\right]\lesssim \xi,~\forall~\cP(M)\in \cM_\varepsilon $. }
~ \\
We show that for any $\cP\in \cM_\varepsilon$, the distance between $\cP$ and $\cP^{\star}$ is controlled in terms of the TV distance. Formally, we need
\begin{align*}
\EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2\right]\lesssim \xi,\quad \forall P(M)\in \cM_\varepsilon.
\end{align*}
Since $\cM_\varepsilon \subseteq \cM_{\text{TV}}(\widehat{M},\varepsilon^2)$, it suffices to show that
\begin{align*}
\EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2\right]\lesssim \xi,\quad \forall P(M)\in \cM_{\text{TV}}(\widehat{M},\varepsilon^2).
\end{align*}
For any $P \in \cM_{\text{TV}}(\widehat{M},\varepsilon^2)$, we have
\begin{align*}
& \EE_{\cD} [\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] \nonumber \\
&\leq 2\EE_{\cD} [\mathrm{D}_\mathrm{TV}(\widehat{P}(\cdot \mid s,a),P(\cdot \mid s,a))^2] + 2\EE_{\cD} [\mathrm{D}_\mathrm{TV}(\widehat{P}(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] \leq 16 \xi / \tilde{p}^2.
\end{align*}
Thus, we have
\begin{align}
&\EE_{s,a\sim \rho} [\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] \nonumber \\
&= \EE_{s,a\sim \rho} [\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] -\EE_{\cD} [\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] +\EE_{\cD}[\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a),P^\star(\cdot \mid s,a))^2] \nonumber \\
&\leq A(M)+ c\xi, \label{eq:key_version}
\end{align}
where $A(M)\coloneqq |(\EE_{\cD}-\EE_{(s,a)\sim \rho})\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2\right]|.$
We again consider an $\epsilon/\sqrt{d}$-cover of the ball $B_d(\sqrt{d})$ in terms of $\|\cdot\|_2$, i.e., $\cM'=\{M^{(1)},\cdots,M^{(K)}\}$, where $K=(c_1d/\epsilon)^d$ (with $\epsilon=1/N$). Then $\cM'$ is also an $\epsilon/\sqrt{d}$-cover for $\cM_{\text{TV}}(\widehat{M},\varepsilon^2)$. That is, for any $M$ with ${\cP(M)}\in \cM_{\text{TV}}(\widehat{M},\varepsilon^2)$, we can take $M'\in \cM'$ s.t. $\|M-M'\|_2\leq \epsilon/\sqrt{d}$.
Then, we have
\begin{align}\label{eq:m'}
\EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2\right]\leq A(M)+c\xi,\quad \forall M \in \cM'.
\end{align}
This is because for any $M^{(i)}\in \cM'$, we can take $P(M)\in \cM_{\text{TV}}(\widehat{M},\varepsilon^2)$ with $\|M-M^{(i)}\|_2\leq \epsilon/\sqrt{d}$ such that
\begin{align*}
&\EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M^{(i)}))^2\right]\\
&\leq \EE_{(s,a)\sim \rho}[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M^{(i)}))^2-\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2] \\
&+ \EE_{(s,a)\sim \rho}[ \mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2 ]\\
&\leq 4\epsilon + \EE_{(s,a)\sim \rho}[ \mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2 ] \\
&\lesssim A(M)+\xi.
\end{align*}
From Bernstein's inequality, we have that with probability $1-\delta$,
\begin{align}\label{eq:bernstein}
A(M)&=| (\mathbb{E}_{\cD}- \mathbb{E}_{(s,a)\sim \rho})[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M),\cP(\cdot \mid s,a;M^{\star}))^2 ] |\nonumber \\
& \lesssim \sqrt{ \frac{ \mathrm{var}_{(s,a)\sim \rho}[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M),\cP(\cdot \mid s,a;M^{\star}))^2 ]\ln(K/\delta) }{N} }+\frac{\ln(K/\delta)}{N},\quad \forall M \in \cM'.
\end{align}
Based on the construction of $\cM'$ and Equation~\eqref{eq:m'}, we have
\begin{align}
\label{eq:bernstein2}
\mathrm{var}_{(s,a)\sim \rho}[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M^{\star}),\cP(\cdot \mid s,a;M))^2]\lesssim A(M)+\xi,\quad \forall M \in \cM'.
\end{align}
Plugging Equation~\eqref{eq:bernstein2} into Equation~\eqref{eq:bernstein}, we have that $A(M)$ satisfies
\begin{align*}
A^2(M)-A(M)B_1-B_2\leq 0,\quad B_1=\frac{\ln(K/\delta)}{N}, B_2=\xi\frac{\ln(K/\delta)}{N}+\prns{\frac{\ln(K/\delta)}{N}}^2.
\end{align*}
Then, we have
\begin{align}\label{eq:bound_a}
A(M) \leq \frac{\ln(K/\delta)}{N}+\xi^{1/2}\sqrt{\frac{\ln(K/\delta)}{N}}\lesssim \xi,\quad \forall M \in \cM'.
\end{align}
We combine all steps. Recall that for any ${P(M)}\in \cM_\varepsilon$, we can take $M'\in \cM'$ s.t. $\|M-M'\|_2\leq \epsilon/\sqrt{d}$. Then, for any $P(M)\in \cM_\varepsilon$, we have
\begin{align*}
A(M) &=|(\EE_{\cD}-\EE_{(s,a)\sim \rho})\left[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M),\cP(\cdot \mid s,a;M^{\star}))^2\right]|\\
&\leq |(\EE_{\cD}-\EE_{(s,a)\sim \rho})[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M),\cP(\cdot \mid s,a;M^{\star}))^2-\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M'),\cP(\cdot \mid s,a;M^{\star}))^2]|\\
&+|(\EE_{\cD}-\EE_{(s,a)\sim \rho})[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M'),\cP(\cdot \mid s,a;M^{\star}))^2]|\\
&\lesssim 8\epsilon+|(\EE_{\cD}-\EE_{(s,a)\sim \rho})[\mathrm{D}_\mathrm{TV}(\cP(\cdot \mid s,a;M'),\cP(\cdot \mid s,a;M^{\star}))^2]| \\
&\lesssim \xi \tag{From \eqref{eq:bound_a} and $M'\in \cM'$}.
\end{align*}
Then, we have with probability $1-\delta$,
\begin{align}\label{eq:concent_a}
A(M)\lesssim \xi,\quad \forall P(M) \in \cM_\varepsilon.
\end{align}
Finally, for any $P(M)\in \cM_\varepsilon$, with probability $1-\delta$, we have
\begin{align*}
\EE_{(s,a)\sim \rho}[\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a;M^{\star}),P(\cdot \mid s,a;M))^2] &\leq A(M)+c\xi \tag{From \eqref{eq:m'}} \\
& \lesssim \xi. \tag{From \eqref{eq:concent_a}}
\end{align*}
\paragraph{5. Bounding the performance of $\pi^{*}$.}~\\
We first prove
\begin{align}
\label{eq:inter_goal}
V^{\pi^{*}}_{P^*}- V^{\pi^{*}}_{P} &\lesssim (1-\gamma)^{-2}\sqrt{c^\ddagger d \xi} \cdot r_{\text{max}},
\end{align}
for all $P\in \cM_\varepsilon$. Recall from the fourth step that for $P(M)\in \cM_\varepsilon$, we have
\begin{align*}
\EE_{(s,a)\sim \rho}\left[\mathrm{D}_\mathrm{TV}(P(\cdot \mid s,a;M^{\star}),P(\cdot \mid s,a;M))^2\right]\lesssim \xi.
\end{align*}
From the second statement of Lemma~\ref{lem:mixture},
\begin{align*}
\forall V:\cS \to [0,V_{\text{max}}],\quad (M-M^{*})^{\top} \Sigma_{\rho,V }(M-M^{*})\lesssim V_{\text{max}}^2\xi,\quad \Sigma_{\rho,V }=\E_{(s,a)\sim \rho}[\psi_{V}(s,a)\psi^{\top}_{V}(s,a)].
\end{align*}
Here, we have
\begin{align*}
V^{\pi^{*}}_{P^*}- V^{\pi^{*}}_{P} & \leq (1-\gamma)^{-1}\left|\EE_{(s,a)\sim d^{\pi^{*}}}\bracks{\int \{P(s' \mid s,a) - P^\star(s' \mid s,a)\}V^{\pi^{*}}_{P}(s')\ud(s') }\right| \tag{Simulation lemma}\\
&\leq (1-\gamma)^{-1}\left|\EE_{(s,a)\sim d^{\pi^{*}}}\bracks{(M-M^{*})\psi_{V^{\pi^{*}}_{P}}(s,a) }\right| \\
&\leq (1-\gamma)^{-1}\underbrace{\|M-M^{*}\|_{\lambda I+\Sigma_{\rho,V^{\pi^{*}}_{P} }}}_{(a)}\underbrace{\EE_{(s,a)\sim d^{\pi^{*}}} \bracks{\|\psi_{V^{\pi^{*}}_{P}}(s,a)\|_{(\Sigma_{\rho,V^{\pi^{*}}_{P} }+\lambda I)^{-1}} }}_{(b)}. \tag{Cauchy--Schwarz inequality}
\end{align*}
The first term (a) is upper-bounded by $\sqrt{V_{\text{max}}^2 \xi+\lambda d }$ noting $0\leq V^{\pi^*}_P\leq V_{\text{max}}$. The term (b) is upper-bounded by
\begin{align*}
\EE_{(s,a)\sim d^{\pi^{*}}} \bracks{\|\psi_{V^{\pi^{*}}_{P}}(s,a)\|_{{(\Sigma_{\rho,V^{\pi^{*}}_{P} }+\lambda I)^{-1}} } } &\leq \EE_{(s,a)\sim d^{\pi^{*}}} \bracks{\|\psi_{V^{\pi^{*}}_{P}}(s,a)\|^2_{{(\Sigma_{\rho,V^{\pi^{*}}_{P} }+\lambda I)^{-1}} } }^{1/2} \tag{Jensen's inequality} \\
&= \sqrt{\Tr( \Sigma_{d^{\pi^{*}},V^{\pi^{*}}_{P} }(\lambda I+\Sigma_{\rho,V^{\pi^{*}}_{P} })^{-1} ) }\\
&\leq \sqrt{c_V\Tr( \Sigma_{\rho,V^{\pi^{*}}_{P} }(\lambda I+\Sigma_{\rho,V^{\pi^{*}}_{P} })^{-1} ) }\\
&\leq \sqrt{c_V \mathrm{rank}(\Sigma_{\rho,V^{\pi^{*}}_{P} })}\\
&\leq \sqrt{c_Vd } = \sqrt{ c^\ddagger d},
\end{align*}
where $c_V=\sup_{V\in \{\cS \to [0,V_{\text{max}}]\}}\sup_{x}\frac{ x^{\top}\Sigma_V(d^{\pi^*}) x}{x^{\top}\Sigma_V(\rho) x}$, $c^\ddagger=\sup_{x}\frac{x^{\top } \Sigma(d^{\pi^{*}}) x}{x^{\top} \Sigma(\rho) x }$, and $$\Sigma_V(\mu)=\E_{(s,a)\sim \mu}[\psi_{V}(s,a)\psi^{\top}_{V}(s,a)],~\Sigma(\mu)=\E_{(s,a)\sim \mu}[\phi(s,a)\phi^{\top}(s,a)].$$ The second inequality follows from the definition of $c_V$, and the last inequality follows from the third statement in Lemma~\ref{lem:mixture}. By taking $\lambda$ s.t. $\lambda d \lesssim V_{\text{max}}^2 \xi$, we have the desired Equation~\eqref{eq:inter_goal}.
Finally, combining everything, with probability $1-2\delta$, for any $\pi^{*}\in \Pi$, we have
\begin{align*}
V^{\pi^{*}}_{P^{\star}}-V^{\hat \pi}_{P^{\star}}&\leq V^{\pi^{*}}_{P^{\star}}-\min_{P\in \cM_\varepsilon}V^{\pi^{*}}_{P}+ \min_{P\in \cM_\varepsilon}V^{\pi^{*}}_{P}- V^{\hat \pi}_{P^{\star}}\\
&\leq V^{\pi^{*}}_{P^{\star}}-\min_{P\in \cM_\varepsilon}V^{\pi^{*}}_{P}+ \min_{P\in \cM_\varepsilon}V^{\hat \pi}_{P}- V^{\hat \pi}_{P^{\star}} \tag{definition of $\hat \pi$}\\
&\leq V^{\pi^{*}}_{P^{\star}}-\min_{P\in \cM_\varepsilon}V^{\pi^{*}}_{P} \tag{third step, $P^{\star}\in \cM_\varepsilon$}\\
&\lesssim (1-\gamma)^{-2}\sqrt{ c^\ddagger d \xi} \cdot r_{\text{max}}. \tag{From \eqref{eq:inter_goal}}
\end{align*}
Finally, recalling the relationship between $\varepsilon$ and $\xi$ from the third step, we have $\varepsilon=2\sqrt{\xi}/\tilde{p},~\xi=c\{(d/N)\ln^2(Nd)+\ln(c/\delta)/N \}$, which leads to
\begin{equation*}
V^{\pi^{*}}_{P^{\star}}-V^{\hat \pi}_{P^{\star}}\lesssim \frac{c_3}{(1-\gamma)^{2}}\sqrt{ c^\ddagger d^2 \zeta/N} \cdot r_{\text{max}},~\zeta=\log^2{(c_2Nd/\delta)}.
\end{equation*}
\end{proof}
\subsection{Technical Lemmas}
\begin{lemma}[$\varepsilon$-Covering Number \citep{jin2020provably}]
For all $\varepsilon > 0$,
we have \label{lem:covering_num}
$$
\log | \cN ( \varepsilon; R, B, \lambda) | \leq d \cdot \log (1+ 4 R / \varepsilon ) + d^2 \cdot \log\bigl(1+ 8 d^{1/2} B^2 / ( \varepsilon^2\lambda) \bigr). $$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:covering_num}]
See Lemma D.6 in \cite{jin2020provably} for a detailed proof.
\end{proof}
\begin{lemma}[Concentration of Self-Normalized Processes \citep{abbasi2011improved}]
Let $\{\cF_t \}^\infty_{t=0}$ be a filtration and $\{\epsilon_t\}^\infty_{t=1}$ be an $\RR$-valued stochastic process such that $\epsilon_t$ is $\cF_{t} $-measurable for all $t\geq 1$.
Moreover, suppose that conditioning on $\cF_{t-1}$,
$\epsilon_t $ is a zero-mean and $\sigma$-sub-Gaussian random variable for all $t\geq 1$, that is,
\$
\EE[\epsilon_t\given \cF_{t-1}]=0,\qquad \EE\bigl[ \exp(\lambda \epsilon_t) \biggiven \cF_{t-1}\bigr]\leq \exp(\lambda^2\sigma^2/2) , \qquad \forall \lambda \in \RR.
\$
Meanwhile, let $\{\phi_t\}_{t=1}^\infty$ be an $\RR^d$-valued stochastic process such that $\phi_t $ is $\cF_{t -1}$-measurable for all $ t\geq 1$.
Also, let $M_0 \in \RR^{d\times d}$ be a deterministic positive-definite matrix and
\$
M_t = M_0 + \sum_{s=1}^t \phi_s\phi_s^\top
\$ for all $t\geq 1$. For all $\delta>0$, it holds that
\begin{equation*}
\Big\| \sum_{s=1}^t \phi_s \epsilon_s \Big\|_{ M_t ^{-1}}^2 \leq 2\sigma^2\cdot \log \Bigl( \frac{\det(M_t)^{1/2}\cdot \det(M_0)^{- 1/2}}{\delta} \Bigr)
\end{equation*}
for all $t\ge1$ with probability at least $1-\delta$.
\label{lem:concen_self_normalized}
\end{lemma}
\begin{proof}
See Theorem 1 of \cite{abbasi2011improved} for a detailed proof.
\end{proof}
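Lemma~\ref{lem:concen_self_normalized} can also be illustrated empirically. The following Monte Carlo sketch is purely illustrative (it assumes Gaussian noise and $M_0=I$, neither of which is required by the lemma); it checks that the self-normalized bound holds uniformly over $t$ with empirical failure rate below $\delta$.
\begin{verbatim}
import numpy as np

# Monte Carlo illustration of the self-normalized concentration bound.
rng = np.random.default_rng(3)
d, T, sigma, delta, trials = 5, 200, 1.0, 0.05, 500
fail = 0
for _ in range(trials):
    M = np.eye(d)            # M_0 = I, so det(M_0)^{-1/2} = 1
    Ssum = np.zeros(d)       # running sum of phi_t * eps_t
    ok = True
    for _t in range(T):
        phi = rng.normal(size=d)
        phi /= max(1.0, np.linalg.norm(phi))
        Ssum += phi * rng.normal(scale=sigma)   # eps_t ~ N(0, sigma^2)
        M += np.outer(phi, phi)
        lhs = Ssum @ np.linalg.solve(M, Ssum)
        _, logdet = np.linalg.slogdet(M)
        rhs = 2 * sigma**2 * (0.5 * logdet + np.log(1.0 / delta))
        ok = ok and (lhs <= rhs)
    fail += (not ok)
print("empirical failure rate:", fail / trials, "<= delta =", delta)
\end{verbatim}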
Given a function class $\cF$, let $\cN_{[]}(\delta,\cF,d)$ be the bracketing number of $\cF$ w.r.t the metric $d(a,b)$ given by
\begin{align*}
d(a,b )=\E_{(s,a)\sim \rho}\bracks{\int (a(s'\mid s,a)-b(s'\mid s,a))^2\ud(s')}^{1/2}.
\end{align*}
Then, the entropy integral of $\cF$ is given by
\begin{align}\label{eq:entropy}
J_{B}(\delta,\cF,d)=\max\prns{ \int^{\delta}_{\delta^2/2} (\log\cN_{[]}(u,\cF,d))^{1/2}\mathrm{d}u,\delta}.
\end{align}
We also define the localized class of $\cH$:
\begin{align*}
\cH(\delta)=\{h\in \cH : \EE_{(s,a)\sim \rho}[h^2(P(\cdot \mid s,a)\| P^{\star}(\cdot \mid s,a) ) ]\leq \delta^2 \},
\end{align*}
where $h(P(\cdot \mid s,a)\| P^{\star}(\cdot \mid s,a) )$ denotes Hellinger distance defined by
\begin{align*}
\prns{ 0.5\int\{ \sqrt{P(s'\mid s,a)}-\sqrt{P^{\star}(s'\mid s,a)}\}^2\ud(s') }^{1/2}.
\end{align*}
\begin{theorem}[MLE guarantee with general function approximation, \citet{uehara2021pessimistic}]\label{thm:mle}
We take a function $G(\epsilon):[0,1]\to \RR$ s.t. $G(\epsilon)\geq J_{B}[\epsilon,\cH(\epsilon),d]$ and $G(\epsilon)/\epsilon^2$ is non-increasing in $\epsilon$. Let $\xi_n$ be a solution to $\sqrt{n}\epsilon^2 \geq cG(\epsilon)$ w.r.t. $\epsilon$.
Then, with probability $1-\delta$, we have
\begin{align*}
\EE_{(s,a)\sim \rho}[\|\hat P_{\mathrm{MLE}}(\cdot \mid s,a)-P(\cdot \mid s,a)\|^2_1]\leq c_1\braces{\xi_n+\sqrt{\log(c_2/\delta)/n}}^2.
\end{align*}
\end{theorem}
\begin{proof}
The proof follows directly by adapting Theorem 7.4 in~\citet{geer2000empirical} to conditional distributions. Please refer to~\citet{geer2000empirical} for more details.
\end{proof}
\begin{lemma}[Property of linear MDPs]\label{lem:mixture}
Let $P(M)= P(s'\given s,a;M)=\phi(s,a)^\top M\psi(s')$. Suppose $P(M):\cS\times \cA \to \Delta(\cS)$. For any function $V:\cS \to [0,V_{\text{max}}]$, letting $\psi_{V}(s,a)=\int \text{vec}(\phi(s,a)\psi(s')^\top )V(s')\ud(s')$, we suppose $\|\phi(s,a)\|_2\leq 1$ and $\|\psi(s')\|_2\leq 1$. The following statements hold:
\begin{enumerate}
\item For any $(s,a,s')$, we have $|P(M)(s,a,s')-P(M')(s,a,s')|\leq \|M-M'\|_2$.
\item For any $(s,a)$, we have $\mathrm{TV}(P(M)(s,a,\cdot),P(M')(s,a,\cdot))\leq \|M-M'\|_2$. Besides, for any $V:\cS \to [0,V_{\text{max}}]$, we have
\begin{align*}
|(M-M')\psi_{V}(s,a)|\leq V_{\text{max}}\mathrm{TV}(P(M)(s,a,\cdot),P(M')(s,a,\cdot)).
\end{align*}
\item
\begin{align*}
\sup_{V\in \{\cS \to [0,V_{\text{max}}]\}}\sup_{x}\frac{ x^{\top}\E_{(s,a)\sim d^{\pi^{*}}}[\psi_{V}(s,a)\psi^{\top}_{V}(s,a) ] x}{x^{\top}\E_{(s,a)\sim \rho}[\psi_{V}(s,a)\psi^{\top}_{V}(s,a) ] x}= \sup_{x}\frac{x^{\top } \E_{d^{\pi^{*}}}[\phi(s,a)\phi(s,a)^{\top}] x}{x^{\top} \E_{\rho}[\phi(s,a)\phi(s,a)^{\top}]x }.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
See Lemma 12 in~\citet{uehara2021pessimistic} for a detailed proof.
\end{proof}
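A quick spot check of the first statement of Lemma~\ref{lem:mixture} is given below (illustrative only; it samples random unit-ball features and arbitrary matrices and verifies the pointwise bound via the spectral norm).
\begin{verbatim}
import numpy as np

# Spot check of statement 1: |phi^T (M - M') psi| <= ||M - M'||_2.
rng = np.random.default_rng(5)
d, r_dim = 6, 7
for _ in range(1000):
    phi = rng.normal(size=d); phi /= max(1.0, np.linalg.norm(phi))
    psi = rng.normal(size=r_dim); psi /= max(1.0, np.linalg.norm(psi))
    M, Mp = rng.normal(size=(d, r_dim)), rng.normal(size=(d, r_dim))
    lhs = abs(phi @ (M - Mp) @ psi)
    rhs = np.linalg.norm(M - Mp, 2)   # spectral norm
    assert lhs <= rhs + 1e-9
\end{verbatim}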
\clearpage
\section{Complete experimental results on noised D4RL tasks}\label{complete results of TD3+BC}
\begin{figure}[h]
\centering
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-0.pdf}}
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-0-q.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-5.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-5-q.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-10.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-10-q.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-15.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-15-q.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-20.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-20-q.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-25.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/walker2d-medium-v2-50-25-q.pdf}}
\caption{Experimental results on the walker2d task with datasets consisting of 50 medium trajectories and $x$ noised trajectories.
The evaluation metrics are the episode return and the log $Q$-value.}
\label{fig:my_label}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-0.pdf}}
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-0-q.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-5.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-5-q.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-10.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-10-q.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-15.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-15-q.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-20.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-20-q.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-25.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/hopper-medium-v2-50-25-q.pdf}}
\caption{Experimental results on the hopper task with datasets consisting of 50 medium trajectories and $x$ noised trajectories.
The evaluation metrics are the episode return and the log $Q$-value.}
\label{fig:my_label_2}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-0.pdf}}
\subfigure[med(50) noise(0)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-0-q.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-5.pdf}}
\subfigure[med(50) noise(5)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-5-q.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-10.pdf}}
\subfigure[med(50) noise(10)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-10-q.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-15.pdf}}
\subfigure[med(50) noise(15)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-15-q.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-20.pdf}}
\subfigure[med(50) noise(20)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-20-q.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-25.pdf}}
\subfigure[med(50) noise(25)]{
\includegraphics[scale=0.23]{50+x/halfcheetah-medium-v2-50-25-q.pdf}}
\caption{Experimental results on the halfcheetah task with datasets consisting of 50 medium trajectories and $x$ noised trajectories.
The evaluation metrics are the episode return and the log $Q$-value.
}
\label{fig:my_label_3}
\end{figure}
\section{Conclusion}
This paper examines two distinct effects of the discount factor in offline RL, i.e., the regularization effect and the pessimistic effect. On the one hand, the discount factor acts as a regularizer on top of existing offline techniques, such as a negative uncertainty-based bonus, to trade off optimality against sample efficiency. On the other hand, we show that a lower guidance discount factor is equivalent to model-based pessimism, where we optimize the policy's performance in the worst possible models. We quantify both effects by analyzing the performance bounds under a lower guidance discount factor in linear MDPs. Moreover, we verify these theoretical observations in tabular MDPs and D4RL tasks. Empirical results show that a lower discount factor can significantly improve performance in two scenarios. The first is when the data size is limited or the data coverage is poor, where we can apply a lower discount factor on top of additional pessimism. The second is when the data is sufficient, where we can directly use a lower discount factor as a proper pessimistic mechanism.
Our work suggests that the discount factor plays a vital role in offline RL and can promote current offline RL methods in complex and diverse scenarios. This leaves several interesting directions for future work: (i) Can a lower guidance discount factor be better integrated with current offline algorithms, e.g., with fine-grained guidance $\gamma$ at each transition? (ii) Is there a better theoretical explanation for the success of a lower discount factor in offline RL? (iii) How can we develop more efficient offline algorithms for limited data size and insufficient coverage scenarios?
\section{Empirical Results}
The goal of the experiments in this section is to investigate the following questions:
(1) Can an adjusted guidance discount factor improve the performance of current offline RL algorithms?
(2) Can a smaller guidance discount factor improve the performance of online RL algorithms on offline tasks?
(3) How is the optimal discount factor related to the data size?
(4) What is the benefit of changing the discount factor compared to tuning the algorithm's original parameters?
\subsection{Experimental results on D4RL tasks}
\subsubsection{Experimental results of offline algorithms}
\begin{figure*}[t]
\centering
\subfigure{
\includegraphics[scale=0.125]{50+x-summary/walker2d.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/walker2d-medium-v2-50-0.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/walker2d-medium-v2-50-5.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/walker2d-medium-v2-50-10.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/walker2d-medium-v2-50-15.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/walker2d-medium-v2-50-25.pdf}}
\subfigure{
\includegraphics[scale=0.125]{50+x-summary/hopper.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/hopper-medium-v2-50-0.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/hopper-medium-v2-50-5.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/hopper-medium-v2-50-10.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/hopper-medium-v2-50-15.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/hopper-medium-v2-50-25.pdf}}
\subfigure{
\includegraphics[scale=0.125]{50+x-summary/halfcheetah.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/halfcheetah-medium-v2-50-0.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/halfcheetah-medium-v2-50-5.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/halfcheetah-medium-v2-50-10.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/halfcheetah-medium-v2-50-15.pdf}}
\hspace{-3.5mm}
\subfigure{
\includegraphics[scale=0.125]{50+x-2/halfcheetah-medium-v2-50-25.pdf}}
\caption{Experimental results on noised D4RL tasks with various noised trajectories.}
\label{fig1: performance}
\end{figure*}
\begin{table*}[t]
\centering
\begin{tabular}{l|c|c|c|c|c|c|c|c}
& {\color{blue}BCQ} & {\color{red}BCQ~($\gamma$)} & {\color{blue}CQL} & {\color{red}CQL~($\gamma$)} & {\color{blue}AWAC} & {\color{red}AWAC~($\gamma$)} & {\color{blue}COMBO} & {\color{red}COMBO~($\gamma$)}\\
\hline
walker2d (0) & 59.6$\pm$2.7 & 51.5$\pm$3.6 & 65.3$\pm$1.5 & 60.9$\pm$1.4 & 44.8$\pm$2.8 & \textbf{67.5$\pm$1.6} & 26.1$\pm$3.2 & \textbf{65.5$\pm$1.7} \\
walker2d (10) & 53.7$\pm$2.5 & 51.8$\pm$1.3 & 54.4$\pm$2.7 & \textbf{65.5$\pm$3.2} & 56.6$\pm$1.2 & \textbf{64.8$\pm$4.1} & 47.9$\pm$2.3 & \textbf{63.1$\pm$1.6} \\
walker2d (50) & 20.3$\pm$3.3 & \textbf{52.4$\pm$3.9} & 62.2$\pm$0.5 & \textbf{68.5$\pm$0.8} & 54.4$\pm$3.2 & \textbf{68.5$\pm$1.9} & 27.2$\pm$1.6 & \textbf{69.6$\pm$1.9} \\
walker2d (100) & 18.6$\pm$1.9 & \textbf{52.1$\pm$2.2} & 63.1$\pm$1.2 & 57.7$\pm$1.8 & 60.9$\pm$2.6 & \textbf{67.0$\pm$3.2} & 13.3$\pm$1.1 & \textbf{70.7$\pm$2.3} \\
\hline
hopper (0) & 52.8$\pm$2.1 & 40.3$\pm$2.5 & 46.3$\pm$0.7 & \textbf{52.5$\pm$1.0} & 49.4$\pm$1.8 & 50.0$\pm$0.7 & 1.5$\pm$0.1 & \textbf{53.5$\pm$3.2}\\
hopper (10) & 47.9$\pm$2.1 & 41.0$\pm$2.7 & 49.4$\pm$0.4 & \textbf{53.1$\pm$1.6} & 46.3$\pm$0.3 & \textbf{50.4$\pm$0.2} & 1.2$\pm$0.1 & \textbf{56.5$\pm$2.5}\\
hopper (50) & 12.7$\pm$3.5 & \textbf{44.1$\pm$1.9} & 41.7$\pm$1.1 & \textbf{47.9$\pm$1.3} & 47.9$\pm$1.6 & \textbf{51.9$\pm$1.5} & 1.0$\pm$0.1 & \textbf{48.6$\pm$4.2}\\
hopper (100) & 1.0$\pm$0.1 & \textbf{41.6$\pm$0.6} & 44.8$\pm$1.9 & \textbf{47.0$\pm$2.5} & 47.3$\pm$0.4 & \textbf{50.0$\pm$2.7} & 1.3$\pm$0.1 & \textbf{52.3$\pm$1.7}\\
\hline
halfcheetah (0) & 40.2$\pm$1.3 & \textbf{42.1$\pm$1.1} & 0.8$\pm$0.1 & 0.8$\pm$0.0 & 36.1$\pm$0.6 & \textbf{39.2$\pm$0.3} & 32.6$\pm$1.6 & 27.6$\pm$1.5 \\
halfcheetah (10) & 39.5$\pm$0.3 & \textbf{40.2$\pm$3.3} & 0.8$\pm$0.0 & 0.9$\pm$0.1 & 35.7$\pm$1.4 & \textbf{38.8$\pm$0.6} & 32.3$\pm$2.8 & 29.7$\pm$2.7 \\
halfcheetah (50) & 36.5$\pm$0.9 & \textbf{37.8$\pm$0.8} & 0.6$\pm$0.0 & 0.6$\pm$0.0 & 34.6$\pm$1.8 & \textbf{39.6$\pm$0.2} & 31.1$\pm$4.7 & 28.0$\pm$1.6 \\
halfcheetah (100) & 35.4$\pm$1.1 & \textbf{36.4$\pm$1.7} & 0.7$\pm$0.1 & 0.7$\pm$0.1 & 35.9$\pm$2.0 & \textbf{39.4$\pm$1.8} & 30.0$\pm$1.9 & 29.3$\pm$0.6 \\
\hline
\end{tabular}
\caption{Experimental results on noised D4RL tasks with various offline RL methods.}
\label{tab1: performance}
\end{table*}
This section investigates whether a reduced discount factor benefits generalization in a continuous-control setting with function approximation.
Specifically, we evaluate various offline RL algorithms~(TD3+BC, BCQ, CQL, AWAC and COMBO) with a smaller guidance discount factor on the limited D4RL benchmark of OpenAI gym MuJoCo tasks.
The dataset in each task contains 50 medium trajectories and various noised trajectories to approximate the practical scenarios.
The noised trajectories are fragments of the random datasets in D4RL.
To ensure a fair and identical experimental evaluation across algorithms, we re-run these offline algorithms using the author-provided implementation or the recognized code.
We report the final performance results in Table~\ref{tab1: performance} and display the learning curves of TD3+BC in Figure~\ref{fig1: performance}.
The performance of current offline RL methods with the original discount factor~($\gamma=0.99$) degrades with more noisy data due to the reduced coverage of the dataset.
In contrast, offline RL methods with a small guidance discount factor~($\gamma=0.95$) achieve stable and robust performance in most scenarios.
Besides, we find that small data regimes have a noticeable impact on CQL~(e.g., we cannot improve the performance on the halfcheetah task by changing $\gamma$).
Also, training an accurate model with limited data is challenging.
As a data augmentation version of CQL, COMBO generates noisy state-action pairs~(see the gap between COMBO and COMBO($\gamma$) in the walker2d and hopper tasks).
Complete experimental results of TD3+BC on noised D4RL tasks are shown in Appendix~\ref{complete results of TD3+BC}.
\subsubsection{Experimental results of online algorithm}
We evaluate the standard off-policy algorithm TD3 on standard D4RL datasets, which is shown in Figure~\ref{fig2: online algorithm}.
The experimental results show TD3 achieves a noticeable performance improvement on offline tasks with a smaller discount factor, demonstrating the role of the discount factor as a pessimistic mechanism.
\begin{figure}[h]
\centering
\subfigure{
\includegraphics[scale=0.23]{TD3_result/walker2d-medium-v2.pdf}}
\subfigure{
\includegraphics[scale=0.23]{TD3_result/halfcheetah-medium-v2.pdf}}
\caption{Experimental results of online algorithm TD3 on D4RL datasets.}
\label{fig2: online algorithm}
\end{figure}
\subsection{Ablation Study}
\subsubsection{Robustness of the small discount factor}
We evaluate TD3+BC on datasets containing 50 medium and 25 noised trajectories with various $\gamma$.
In Figure~\ref{fig3: gamma}, TD3+BC achieves high performance when decreasing $\gamma$ from 0.99 to around 0.95.
Different offline RL algorithms show minor differences in selecting $\gamma$ on different tasks~(e.g., in Table~\ref{tab1: performance}, BCQ ($\gamma=0.9$) in hopper, CQL ($\gamma=0.98$) in walker2d, and COMBO ($\gamma=0.9$) in hopper would be better).
However, the suitable range of $\gamma$ is consistent~($\gamma \in [0.9, 0.95]$), and we do not need to fine-tune $\gamma$ for each task.
\subsubsection{Relationship between $\gamma$ and data size}
We evaluate TD3+BC on datasets containing $x$ medium and 100 noised trajectories.
Then, we select the optimal discount factor $\gamma$ based on the performance under different data sizes.
Experimental results in Figure~\ref{fig4: data size} show that the optimal discount factor $\gamma$ increases with the number of trajectories, which is consistent with the theoretical analysis.
\subsubsection{Comparison between $\gamma$ and $\alpha$ in TD3+BC}
The relationship between $\gamma$ and the hyperparameter $\alpha$ of TD3+BC on the halfcheetah task is visualized in Figure~\ref{fig5: heatmap}.
\section{Introduction}
Online reinforcement learning has achieved great success in various domains, including video games \citep{mnih2015human}, board games~\citep{silver2016mastering}, and continuous control~\citep{lillicrap2015continuous}. However, it requires extensive interaction with the environment to learn through trial and error. In many real-world problems, like personalized recommendation systems~\citep{swaminathan2015batch} and autonomous driving~\citep{shalev2016safe,singh2020cog}, exploration can be expensive and unsafe, which calls for learning without online interaction. Offline RL~\citep{levine2020offline,fujimoto2019off} enables learning from previously collected datasets and thus shows great potential for real-world applications.
\begin{table}
\vspace{2em}
\centering
\begin{tabularx}{0.5\textwidth}{|c|X|X|}
\hline
Dataset size/quality & w/ other pessimisms & w/o other pessimisms \\
\hline
Large, good coverage & & pessimism effect \checkmark \\
\hline
Small or bad coverage & regularization effect \checkmark & \\
\hline
\end{tabularx}
\caption{The applicability of a lower guidance discount factor.}
\vspace{-1.5em}
\label{summarize}
\end{table}
One of the major challenges of offline RL comes from the distributional shift between the data collection policy and the learned policy~\citep{levine2020offline}, and the direct application of online RL algorithms is known to lead to poor performance~\citep{fujimoto2019off}. One paradigm for algorithm design in offline RL incorporates proper conservatism to the learning algorithm. There are several conservative methods in existing empirical literature, including policy regularization~\citep{fujimoto2019off,peng2019advantage,kumar2020conservative}, ensemble-based uncertainty~\citep{wu2021uncertainty,an2021uncertainty} and model-based penalty~\citep{yu2021combo,yu2020mopo,kidambi2020morel}.
However, as an essential quantity in RL, the discount factor already provides a natural form of conservatism. The effect of $\gamma$ has been extensively discussed in online RL, and specifically in TD-learning. For instance, \citet{jiang2015dependence} shows that the discount factor can be regarded as a complexity control parameter for the class of policies. \citet{amit2020discount} shows that a lower discount factor is equivalent to L2-regularization of Q-values in TD-learning, improving generalization. However, an analysis of the role of $\gamma$ in the context of offline RL is missing, which naturally leads to the following question:
\begin{center}
{\it What are the roles of the discount factor in the context of offline RL, and how does it contribute to proper conservative offline RL methods?}
\end{center}
In this paper, we examine the two roles of the discount factor in offline RL that affect the performance in two distinct ways.
First, we show that $\gamma$ acts as a regularizer on top of existing conservative methods and achieves a trade-off between optimality and sample efficiency.
Second, we show that a lower discount factor resembles model-based pessimism. A lower discount factor is equivalent to maximizing the value function in the worst possible models among a confidence set. We summarize the applicability of the two effects in Table~\ref{summarize}.
To give a rigorous characterization, we analyze the two effects above in the context of linear MDPs and derive two different performance bounds. These quantitative results also characterize how the impact of a lower guidance discount factor relies on other factors like the size of the dataset and the coverage coefficient in terms of optimal policies.
We empirically verify the two effects on both tabular MDPs and the standard D4RL benchmark~\citep{fu2020d4rl}.
Results indicate that the discount factor can significantly affect the performance of offline RL algorithms, both in small data regimes on top of existing pessimism and in large data regimes without other pessimism. We believe that our findings highlight the importance of setting a properly lowered discount factor in offline RL practice.
\subsection{Related Works}
\paragraph{Role of Discount Factor.}
The discount factor is extensively analyzed in online RL.~\citet{petrik2008biasing} first shows that the approximation error bounds may be tightened with a lower discount factor in online RL. \citet{jiang2015dependence} gives a more rigorous analysis by analyzing the size of potentially optimal policy class; ~\citet{amit2020discount} points out the equivalence between the L2 regularization and a smaller discount factor in TD-learning.
~\citet{zhang2020deeper} analyzes the discount factor in actor-critic algorithms from a bias-variance-representation trade-off perspective.
On the practical side, \citet{chen2018improving} observes a similar regularization effect in partially observable settings; \citet{fedus2019hyperbolic} uses geometric discount factors to learn multiple value functions in an ensemble manner.
Some works~\citep{xu2018meta, sherstan2020gamma, romoff2019separating} suggest learning a sequence of value functions with lower discount factors for online RL.
\paragraph{Conservatism in Offline RL.}
There is an extensive body of work on conservative offline RL, which can be roughly divided into policy constraint-based and uncertainty estimation-based methods.
Policy constraint methods attempt to enforce the trained policy to be close to the behavior policy via introducing a generative model~\citep{fujimoto2019off,fujimoto2021td3}, KL-divergence~\citep{peng2019advantage, nair2020accelerating,siegel2020keep,wu2019behavior} or value function regularization~\citep{kumar2020conservative, agarwal2020optimistic, kostrikov2021offline}.
Some more recent policy constraint methods~\citep{yang2021believe,ma2021offline,ghasemipour2021emaq,kostrikov2021offline, brandfonbrener2021offline} also suggested that only trusting the state-action pairs given in the dataset can solve complex tasks well.
As for the uncertainty estimation of model-free methods, researchers attempt to take into account the confidence of the Q-value prediction using dropout or ensemble techniques~\citep{wu2021uncertainty, an2021uncertainty}. Differently, model-based offline methods incorporate the uncertainty in the model space~\citep{yu2020mopo, yu2021combo, kidambi2020morel}.
In addition, there are also some theoretical results for pessimism in offline RL.
\citet{jin2021pessimism} proves that using a negative bonus from the online exploration is sufficient for offline RL.
\citet{rashidinejad2021bridging} proves that a UCB-like penalty is provably sufficient in tabular settings.
\citet{uehara2021pessimistic} studies the properties of model-based pessimisms under partial coverage.
\section{Empirical Results}
\label{experiments}
In this section, we examine the role of the discount factor through various experiments and aim to investigate the following questions:
1. How effective is the regularization effect of $\gamma$ and how other factors affect its performance?
2. How effective is the pessimistic effect of $\gamma$ and how does it contribute to the performance?
3. Is a lower guidance discount factor effective and essential in practical offline settings?
We answer the questions above by experiments on both tabular MDPs and D4RL tasks. Each experimental result is averaged over three random seeds and reported with the standard deviation.
\subsection{Regularization Effect}
\subsubsection{Tabular experiments}\label{regular toy example}
We first adopt the BCQ-style offline method in a tabular MDP environment to investigate the effectiveness of the lower discount factor.
We consider a random MDP, where the state space consists of a 30 $\times$ 30 grid, and each state has 10 actions.
The reward and the transition probabilities are generated randomly.
Given the tabular MDP, we compute the true optimal value $Q^*_{\gamma_e}$, where $\gamma_e=0.95$.
Then, we calculate the behavior policy according to $\mu(\cdot\mid s) = \text{softmax}( Q^*_{\gamma_e}(s, \cdot))\cdot \text{mask}(s,\cdot)$, where $\text{mask}(s,\cdot)$ is randomly selected in state-action space to approximate the unseen pairs in offline tasks.
Further, we calculate $\hat{\mu}(\cdot\mid s) = \text{softmax}( Q^*_{\gamma_e}(s, \cdot))\cdot \overline{\text{mask}}(s,\cdot)$ to approximate the generative model in BCQ, where $\|\overline{\text{mask}}\|_1 - \|\text{mask}\|_1 > 0$.
The $\text{Noise Ratio} = \frac{\|\overline{\text{mask}}\|_1 - \|\text{mask}\|_1}{\|\text{mask}\|_1} \times 100\%$ represents the inaccuracy of the generative model.
We calculate the BCQ-style value function $\hat{Q}_{\gamma}$ by constraining the maximization operator to the support of the generative model:
\begin{equation}
\hat{Q}(s,a) = r(s,a) + \gamma \EE_{s'\sim P(\cdot\mid s,a)}\Big[\max_{a'\ \text{s.t.}\ \hat{\mu}(a'\mid s')>0}\hat{Q}(s', a')\Big].
\end{equation}
In the experiments, the proportion of masked state-action pairs is 0.5, and the noise ratio coefficients are \{4\%, 6\%, 8\%, 12\%\} respectively.
We compute the estimation error $\|(\hat{Q}_{\gamma} - Q^*_{\gamma_e})\|_{\infty}$ at seen state-action pairs for different noise ratio coefficients and different discount factors. The results are summarized in Figure~\ref{fig6: bcq}.
Each plot shows the average estimation error across 100 MDP instances.
The experimental results demonstrate that the lower guidance discount factor significantly reduces the estimation error in the tabular offline task.
Moreover, as the noise ratio increases, the optimal discount factor $\gamma^*$ marked by the star shapes becomes smaller. This indicates that discount regularization is more significant when function approximation error is large, which may result from insufficient data or poor data coverage.
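For reproducibility of the qualitative trend, the following is a minimal Python sketch of this tabular study (our own illustrative re-implementation rather than the exact experimental code, and with a smaller grid than $30\times 30$ for speed): it builds a random MDP, masks part of the state-action space, runs the BCQ-style constrained value iteration under a guidance $\gamma$, and reports the estimation error against $Q^*_{\gamma_e}$ on the seen pairs.
\begin{verbatim}
import numpy as np

# Hedged sketch of the tabular BCQ-style study (illustrative names/sizes).
rng = np.random.default_rng(4)
S, A, gamma_e = 100, 10, 0.95

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))

def q_iter(gamma, allowed, n_iter=3000):
    # Q(s,a) <- r(s,a) + gamma * E_{s'}[max_{a': mu_hat(a'|s')>0} Q(s',a')]
    Q = np.zeros((S, A))
    for _ in range(n_iter):
        V = np.where(allowed, Q, -np.inf).max(axis=1)
        Q = r + gamma * (P @ V)
    return Q

Q_star = q_iter(gamma_e, np.ones((S, A), dtype=bool))      # true optimum
seen = rng.random((S, A)) > 0.5                            # ~50% pairs masked
noise = (~seen) & (rng.random((S, A)) < 0.04)              # ~4% noise ratio
allowed = seen | noise                                     # generative support
allowed[np.arange(S), rng.integers(0, A, size=S)] = True   # one action per state

for gamma in (0.99, 0.95, 0.90):
    err = np.abs(q_iter(gamma, allowed) - Q_star)[seen].max()
    print(f"gamma={gamma}: sup error on seen pairs = {err:.3f}")
\end{verbatim}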
\begin{table*}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Halfcheetah & random-v2 & medium-v2 & medium-expert-v2 & medium-replay-v2 & expert-v2 \\
\hline
SAC-N ($\gamma$=0.95) & \textbf{30.0$\pm$1.6} & \textbf{65.1$\pm$0.9} & \textbf{51.4$\pm$2.2} & \textbf{28.1$\pm$1.2} & \textbf{82.7$\pm$0.8} \\
\hline
SAC-N ($\gamma$=0.99) & 26.6$\pm$1.5 & 48.7$\pm$1.3 & 26.7$\pm$1.1 & 0.6$\pm$0.1 & 80.2$\pm$0.6 \\
\hline
\hline
Hopper & random-v2 & medium-v2 & medium-expert-v2 & medium-replay-v2 & expert-v2 \\
\hline
SAC-N ($\gamma$=0.95) & 8.4$\pm$1.7 & \textbf{22.4$\pm$2.1} & \textbf{23.1$\pm$1.9} & 15.5$\pm$3.2 & \textbf{14.5$\pm$2.6} \\
\hline
SAC-N ($\gamma$=0.99) & 14.5$\pm$3.5 & 7.1$\pm$2.0 & 15.4$\pm$1.4 & 100.9$\pm$0.3 & 2.3$\pm$0.3 \\
\hline
\end{tabular}
\caption{Experimental results on Halfcheetah and Hopper tasks in D4RL, where the Q-ensemble size N is 2 in Halfcheetah tasks and N is 50 in Hopper tasks.}
\label{tab2: performance}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Adroit & pen-expert-v0 & door-expert-v0 & hammer-expert-v0 & relocate-expert-v0 \\
\hline
SAC-N (lower $\gamma$) & \textbf{97.1$\pm$3.2} & \textbf{106.4$\pm$1.9} & \textbf{100.6$\pm$2.3} & 0.5$\pm$0.1 \\
\hline
SAC-N ($\gamma$=0.99) & 3.6$\pm$1.1 & 2.2$\pm$0.2 & 65.5$\pm$4.2 & 0.4$\pm$0.1 \\
\hline
\end{tabular}
\caption{Experimental results on Adroit tasks in D4RL, where the Q-ensemble size N is 50. We select $\gamma=0.95$ in pen-expert-v0, hammer-expert-v0 and relocate-expert-v0 tasks. We select lower $\gamma=0.9$ in door-expert-v0 tasks.}
\label{tab4: performance}
\end{table*}
\subsubsection{Experimental results on D4RL tasks}
This section investigates the regularization effect in complex tasks.
Specifically, we evaluate various offline RL algorithms~(TD3+BC~\citep{fujimoto2021td3}, BCQ~\citep{fujimoto2019off} and COMBO~\citep{yu2021combo}) with a lower guidance discount factor on the limited and noised D4RL benchmark of MuJoCo tasks. To test performance in low data regimes and poor coverage scenarios, the training dataset in our experiments contains 50 medium trajectories and additional noisy trajectories ranging from 0 to 100. The noised trajectories are fragments of the random datasets in D4RL.
We use the author-provided implementation or the recognized code to ensure a fair and identical experimental evaluation across algorithms.
The experimental results are shown from two aspects: coverage ratio and data size.
\textbf{Coverage Ratio.}
We report the final performance of TD3+BC with different amounts of noisy data in Table~\ref{tab1: performance} and the detailed training curves in Appendix~\ref{complete results of TD3+BC}.
The training datasets contain 50 medium and $x$ noised trajectories, with $x$ ranging from 0 to 100. The noised trajectories are fragments from the random dataset.
Results show that the performance of current offline RL methods with the original discount factor~($\gamma=0.99$) degrades with more noisy data due to the poor coverage ratio of the dataset.
In contrast, offline RL methods with a lower guidance discount factor~($\gamma=0.95$) achieve stable and robust performance in most scenarios.
Further, the data generated by model-based offline RL is usually noisy owing to the challenge of limited data~(see the performance gap between COMBO and COMBO~($\gamma$) on the walker2d and hopper tasks).
Most scenarios in Table~\ref{tab1: performance} adopt $\gamma=0.95$ as the lower discount factor, except for BCQ~($\gamma=0.9$) and COMBO~($\gamma=0.9$) on the hopper tasks.
\textbf{Data Size.}
We evaluate TD3+BC on datasets containing $x$ medium and 100 noised trajectories, with $x$ ranging from 0 to 1000. We select the optimal discount factor $\gamma^* \in [0.89, 0.99]$.
Experimental results in Figure~\ref{fig4: data size} show that the optimal discount factor $\gamma^*$ increases with the number of trajectories.
That is, the effect of the discount factor is more significant when the size of the dataset is small, which is consistent with the analysis in Theorem~\ref{theorem:1}.
The experimental results also suggest we prefer a higher discount factor in large datasets~(e.g., the standard datasets in D4RL tasks).
\textbf{Sparse Reward.}
We evaluate EVL~\cite{ma2021offline} with various $\gamma$ on the standard Antmaze tasks in D4RL, where stitching (approximate dynamic programming) is necessary.
Further, Antmaze is a sparse-reward task, which tightens the bound in Lemma~\ref{lemma:2} by up to a $1/(1-\gamma)$ factor and thus makes the optimal $\gamma$ larger.
However, a lower $\gamma$ still works better than the usual $0.99$, even in the large tasks~(please refer to the experimental results in Table~\ref{antmaze_IQL}).
These results show that the regularization effect and fine-tuning $\gamma$ work generally, even in sparse-reward tasks.
(In the medium and large tasks, we set the terminal reward $r_T$ to 50 and 100, respectively, and the expectile ratio to 0.96.)
\begin{table}[H]
\centering
\begin{tabularx}{0.5\textwidth}{|c|c|c|X|}
\hline
play-v0 & umaze & medium~($r_T$=50) & large~($r_T$=100) \\
\hline
$\gamma$=0.98 & 86.7$\pm$2.5 & 73.6$\pm$2.3 & \textbf{45.7$\pm$2.2} \\
\hline
$\gamma$=0.99 & 87.4$\pm$2.6 & 71.0$\pm$1.8 & 37.8$\pm$3.0 \\
\hline
\end{tabularx}
\caption{Results on Antmaze-play-v0 tasks with EVL.}
\label{antmaze_IQL}
\end{table}
\subsection{Pessimistic Effect}
We conduct experiments on both tabular MDPs and D4RL tasks with a lower discount factor and no other offline regularization to investigate the pessimistic effect.
\textbf{Tabular MDPs.} We adopt the same setting and evaluation metric as the toy example in Section~\ref{regular toy example}.
In the experiments, we control the coverage via the proportion of masked state-action pairs, which ranges from 50\% to 90\%.
The experimental results in Figure~\ref{fig6: bcq} show that a lower discount factor promotes offline RL algorithms to have a better estimation, and the effect is more significant when the data coverage is poor.
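The masking procedure can be sketched as follows (illustrative only; the exact tabular MDP and evaluation metric follow the toy example in Section~\ref{regular toy example}):
\begin{verbatim}
import numpy as np

def coverage_mask(n_states, n_actions, mask_ratio, rng):
    """Boolean table marking which (s, a) pairs remain in the dataset."""
    covered = np.ones((n_states, n_actions), dtype=bool)
    n_mask = int(mask_ratio * n_states * n_actions)
    flat = rng.choice(n_states * n_actions, n_mask, replace=False)
    covered[np.unravel_index(flat, covered.shape)] = False
    return covered

rng = np.random.default_rng(0)
for ratio in (0.5, 0.6, 0.7, 0.8, 0.9):  # masked proportion, as above
    covered = coverage_mask(10, 4, ratio, rng)
\end{verbatim}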
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[scale=0.35]{heatmap.pdf}}
\caption{Relationship between $\gamma$ and $\alpha$ of TD3+BC on the halfcheetah task.
The value is the normalized return metric.
The optimal $\gamma^*$ is marked with orange color.}
\label{fig5: heatmap}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[h]
\centering
\subfigure{
\includegraphics[scale=0.26]{various_gamma/Walker2d-2.pdf}}
\hspace{2.5mm}
\subfigure{
\includegraphics[scale=0.26]{various_gamma/Halfcheetah-2.pdf}}
\hspace{2.5mm}
\subfigure{
\includegraphics[scale=0.26]{various_gamma/Hopper-2.pdf}}
\caption{Performance of TD3+BC on noisy D4RL tasks containing 50 medium and 25 noisy random trajectories.
We adopt the normalized return metric proposed by D4RL benchmark~\citep{fu2020d4rl}.
Scores roughly range from 0 to 100.
}
\label{fig3: gamma}
\end{figure*}
\begin{figure*}[h]
\centering
\subfigure{
\includegraphics[scale=0.26]{various_datasize/Walker2d.pdf}}
\hspace{2.5mm}
\subfigure{
\includegraphics[scale=0.26]{various_datasize/Halfcheetah.pdf}}
\hspace{2.5mm}
\subfigure{
\includegraphics[scale=0.26]{various_datasize/Hopper.pdf}}
\caption{The relationship between the optimal discount factor $\gamma^*$ and the dataset size on noisy D4RL tasks.
}
\label{fig4: data size}
\end{figure*}
\textbf{D4RL Task.}
We evaluate the standard off-policy algorithm SAC with ensemble networks on the standard D4RL datasets; the results are shown in Table~\ref{tab2: performance} and Table~\ref{tab4: performance}.
Adroit tasks require dynamic programming to infer the complete action sequence for finishing the task, yet a lower $\gamma$ combined with Q-ensembles works surprisingly well.
Moreover, the experimental results on the MuJoCo tasks show that SAC achieves noticeable performance with a lower guidance discount factor, demonstrating the role of the discount factor as a pessimistic mechanism.
While the pessimism from a lower $\gamma$ does not match state-of-the-art offline algorithms, it serves as a strong baseline and significantly surpasses online algorithms that lack any such pessimism. The mechanism is coarse in the sense that a single parameter affects all state-action pairs, which explains the remaining gap to specialized offline methods.
We leave finding methods for more fine-grained discount factor control as interesting future work.
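In implementation terms, the pessimism studied here amounts to a one-line change in the critic target; the sketch below uses a standard SAC-style target with hypothetical variable names:
\begin{verbatim}
def td_target(reward, next_q_min, next_log_prob, done,
              gamma=0.95, alpha=0.2):
    """SAC-N critic target; the only offline 'regularizer' is gamma.

    next_q_min is the minimum over the N ensemble critics at the next
    state-action pair; lowering gamma from 0.99 shrinks the bootstrapped
    term and hence the propagation of overestimated values.
    """
    soft_value = next_q_min - alpha * next_log_prob
    return reward + gamma * (1.0 - done) * soft_value
\end{verbatim}
Everything else in the training loop is unchanged, which is what makes the discount factor attractive as a baseline.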
\subsection{Effectiveness of Discount Regularization}
\subsubsection{Sensitivity of the discount factor}
This section tests the performance sensitivity with respect to $\gamma$ and whether we need to fine-tune $\gamma$ for each task.
To this end, we evaluate TD3+BC on datasets containing 50 medium and 25 noisy trajectories with various $\gamma$.
In Figure~\ref{fig3: gamma}, TD3+BC achieves high performance when $\gamma$ is decreased from 0.99 to around 0.95.
Different offline RL algorithms show minor differences in the choice of $\gamma$ across tasks~(e.g., the particular cases in Table~\ref{tab1: performance}), but the suitable range of $\gamma$ is almost always [0.95, 0.99].
\subsubsection{Discussion between $\gamma$ and $\alpha$ in TD3+BC}
Many offline RL algorithms achieve a trade-off between conservatism and generalization through some other parameter~(e.g., the regularization weight $\alpha$ in TD3+BC). This naturally leads to the following question: is fine-tuning $\gamma$ more effective than simple regularizations like behavior cloning?
To answer this question, we evaluate TD3+BC on the halfcheetah task containing fragments of 100 medium and random trajectories.
Experimental results in Figure~\ref{fig5: heatmap} show that by properly combining $\alpha$ and $\gamma$, we can solve tasks more effectively. This indicates that $\gamma$ offers more flexibility than the original regularization. Moreover, note that the behavior cloning performance in this task is poor~(the normalized score is 0.0). Therefore, the role of the lower discount factor is not equivalent to behavior cloning.
Further, the experimental results also suggest that other conservative regularizations affect the optimal guidance discount factor~(shown in orange in Figure~\ref{fig5: heatmap}). In general, the stronger the other regularization, the higher the optimal discount factor, matching our intuition that there is a trade-off between the two regularizations.
\input{conclusion}
\input{acknowledgement}
\nocite{*}
\section{Preliminaries}
We consider infinite-horizon discounted Markov Decision Processes (MDPs), defined by the tuple $(\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma),$ where $\mathcal{S}$ is a state space, $\mathcal{A}$ is an action space, $\gamma \in [0,1)$ is the discount factor and $\mathcal{P}: \mathcal{S}\times\mathcal{A}\rightarrow \Delta(\mathcal{S}), r:\mathcal{S}\times\mathcal{A}\rightarrow [0, r_{\text{max}}]$ are the transition function and reward function, respectively. We also assume a fixed distribution $\mu_0 \in \Delta(\mathcal{S})$ as the initial state distribution.\par
To make our analysis more general, we consider the \textit{linear MDP}~\citep{yang2019sample,jin2020provably} as follows, where the transition kernel and expected reward function are linear with respect to a feature map. Note that tabular MDPs are linear MDPs with the canonical one-hot representation.
\begin{definition}[Linear MDP]
We say an infinite-horizon discounted MDP $(\cS,\cA,\cP,r, \gamma)$ is a linear MDP with known feature map $\phi:\cS\times \cA\to \RR^d, \psi:\cS\to \RR^l$ if there exist an unknown matrix parameter $M \in \RR^{d\times l}$ and an unknown vector parameter $\theta \in \RR^d$ such that
\#
&\cP(s'\given s,a) = \phi(s,a)^\top M \psi(s'),\notag\\
&\EE\bigl[r(s, a)\bigr] = \phi(s,a)^\top\theta
\#
for all $(s,a,s')\in \cS\times \cA\times \cS$. And we assume $\|\phi(s,a)\|_\infty\leq 1,\|\psi(s')\|_\infty\leq 1$ for all $(s,a,s')\in \cS\times \cA \times\cS$ and $\max\{\| M \|_2 ,\|\theta\|_2\}\leq \sqrt{d}$.
\label{assump:linear_mdp}
\end{definition}
A policy $\pi : \cS\rightarrow \Delta{(\cA)}$ specifies a decision-making strategy in which the agent chooses its actions based on the current state, i.e., $a_t \sim \pi(\cdot \given s_t)$. The value function $V^\pi_{\gamma,M}: \cS \rightarrow \RR$ is defined as the $\gamma$-discounted sum of future rewards starting at state $s$ for policy $\pi$ in model $M$, i.e.
\begin{equation}
V^\pi_{\gamma,M}(s) = \EE_{\pi}\Big[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\Biggiven s_0=s \Big],
\label{eq:def_value_fct}
\end{equation}
where the expectation is with respect to the trajectory $\tau$ induced by policy $\pi$. The action-value function $Q^\pi:\cS\times \cA\to \RR$ is similarly defined as
\begin{equation}
Q^\pi_{\gamma,M}(s,a) = \EE_\pi\Big[\sum_{t=0}^{\infty} \gamma^tr(s_t, a_t)\Biggiven s_0=s, a_0=a \Big].
\label{eq:def_q_fct}
\end{equation}
We overload the notation and define $V_{\gamma,M}(\pi)$ as the expected $\gamma$-discounted value of policy $\pi$ under the initial distribution $\mu_0$, i.e. $V_{\gamma,M}(\pi) = \EE_{s_0\sim\mu_0}[V^\pi_{\gamma,M}(s_0)]$, and similarly we have $Q_{\gamma,M}(\pi) = \EE_{\begin{subarray}{c} s_0\sim\mu_0\\ a_0\sim\pi\end{subarray}}[Q^\pi_{\gamma,M}(s_0,a_0)]$. When it does not lead to confusion, we omit the index for $\gamma$ and $M$ for simplicity.
We define the Bellman operator as
\begin{align}
(\BB_\gamma f)(s,a) &= \EE_{s'\sim \cP(\cdot\given s,a)}\bigl[r(s, a) + \gamma f(s')\bigr],
\label{eq:def_bellman_op}
\end{align}
for any $f:\mathcal{S}\rightarrow \mathbb{R}$ and $\gamma\in[0,1)$.
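For a tabular MDP with known $\cP$ and $r$, the operator in Equation~\eqref{eq:def_bellman_op} reduces to a simple matrix expression (an illustrative sketch):
\begin{verbatim}
import numpy as np

def bellman_op(f, P, r, gamma):
    """(B_gamma f)(s,a) = r(s,a) + gamma * E_{s'~P(.|s,a)}[f(s')].

    P has shape (S, A, S), r has shape (S, A), f has shape (S,);
    P @ f contracts the last axis and returns an (S, A) table."""
    return r + gamma * P @ f

S, A, gamma = 3, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))  # a valid transition kernel
r = rng.uniform(size=(S, A))
V = np.zeros(S)
for _ in range(1000):  # value iteration contracts to V* at rate gamma
    V = bellman_op(V, P, r, gamma).max(axis=1)
\end{verbatim}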
The optimal Q-function~$Q^*$ and the optimal value function~$V^*$ are related by the Bellman optimality equation
\begin{align}
&V^*_{\gamma,M}(s) = \max_{a\in \cA}Q^*_{\gamma,M}(s,a),\notag\\
&Q^*_{\gamma,M}(s,a) = (\BB_\gamma V^*_{\gamma,M}) (s,a),
\label{eq:dp_optimal_values}
\end{align}
while the optimal policy is defined as
\begin{align*}
\pi^*_{\gamma,M} (\cdot \given s)&=\argmax_{\pi}\EE_{a\sim \pi}{Q^*_{\gamma,M}(s, a)}.
\end{align*}
We define the suboptimality as the performance difference of the optimal policy $\pi^*_\gamma$ and the policy $\pi$ given the initial distribution $\mu_0$ evaluated with discount factor $\gamma$. That is
\begin{equation}
\text{SubOpt}(\pi;\gamma) = V_{\gamma,M}(\pi^*_\gamma) - V_{\gamma,M}(\pi).
\label{eq:def_regret}
\end{equation}
We also define the suboptimality for each state, that is
\begin{equation}
\text{SubOpt}(\pi,s;\gamma) = V^{\pi^*_\gamma}_{\gamma,M}(s) - V^\pi_{\gamma,M}(s). \notag
\label{eq:def_regret_2}
\end{equation}
\subsection{Pessimistic Offline Algorithms}
\label{algo_meta}
In this section, we sketch two offline algorithms to characterize the effect of a lower guidance discount factor. The first one is \textit{pessimistic value iteration} \citep[PEVI;][]{jin2021pessimism}, as shown in Algorithm~\ref{alg:1}, which uses uncertainty as a negative bonus for value learning.
The second one is \textit{model-based pessimistic policy optimization} \citep[MBPPO;][]{uehara2021pessimistic}, as shown in Algorithm~\ref{alg:2}, which optimizes the worst possible performance of a policy over a set of models.
PEVI applies a negative bonus $\Gamma(\cdot,\cdot)$ to the standard $Q$-value estimate $\hat{Q}(\cdot,\cdot) = (\hat\BB_\gamma \hat{V})(\cdot)$ to reduce potential bias due to finite data, where $\hat\BB_\gamma$ is the empirical estimate of $\BB_\gamma$ from the dataset $\cD$. We use the notion of a $\xi$-uncertainty quantifier, defined as follows, to formalize the idea of pessimism.
\begin{definition}[$\xi$-Uncertainty Quantifier]
We say $\Gamma :\cS\times\cA\to \RR$ is a $\xi$-uncertainty quantifier for $\hat\BB_\gamma$ and $\widehat{V}$ if with probability $1-\xi$,
\begin{equation}
\big|(\hat\BB_\gamma\hat{V})(s,a) - (\BB_\gamma\hat{V})(s,a)\big|\leq \Gamma(s,a),
\label{eq:def_event_eval_err_general}
\end{equation}
for all~$(s,a)\in \cS\times \cA$.
\label{def:uncertainty_quantifier}
\end{definition}
\begin{algorithm}[H]
\caption{Pessimistic Value Iteration}\label{alg:1}
\begin{algorithmic}[1]
\STATE {\bf Require}: Dataset $\cD=\{(s_\tau,a_\tau,r_\tau)\}_{\tau=1}^{T}$.
\STATE Initialization: Set $\hat{V}(\cdot) \leftarrow 0$ and construct $\Gamma(\cdot, \cdot)$.
\WHILE{not converged}
\STATE Construct $(\hat\BB_\gamma \hat{V})(\cdot,\cdot)$
\STATE Set $\hat{Q}(\cdot,\cdot) \leftarrow (\hat\BB_\gamma \hat{V})(\cdot,\cdot)- \Gamma(\cdot,\cdot)$.
\STATE Set $\hat{\pi} (\cdot \given \cdot) \leftarrow \argmax_{\pi}\EE_{\pi}{\left[\hat{Q}(\cdot, \cdot)\right]}$.
\STATE Set $\hat{V}(\cdot) \leftarrow \EE_{\hat{\pi}}{\left[\hat{Q}(\cdot, \cdot)\right]}$. \label{alg:general_Vhat}
\ENDWHILE
\STATE \textbf{Return} $\hat\pi$%
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{Model-Based Pessimistic Policy Optimization}\label{alg:2}
\begin{algorithmic}[1]
\STATE {\bf Require}: Dataset $\cD$, discount factor $\gamma$, policy class $\Pi$, Model set $\cM$
\STATE Optimize policy with respect to the worst possible model:
\vspace{-5pt}
\begin{align}\textstyle
\label{alg:2_1}
&\hat\pi = \argmax_{\pi\in\Pi} \min_{M\in\cM} V^{\pi}_{\gamma,M}.
\vspace{-15pt}
\end{align}
\STATE \textbf{Return} $\hat\pi$%
\end{algorithmic}
\end{algorithm}
Intuitively, $\Gamma(s,a)$ represents the uncertainty in estimating the value function. The negative bonus ensures that, with high probability, we do not overestimate the value function from finite samples, which allows us to give a performance lower bound with respect to the number of samples in the dataset.
MBPPO, on the contrary, considers the uncertainty in the model space to reduce potential sampling bias. By optimizing the performance in the worst possible model, as shown in Algorithm~\ref{alg:2}, we obtain a model-based offline algorithm, which gives better suboptimality bounds compared to the model-free counterpart when we use a proper set of models.
Both Algorithm~\ref{alg:1} and Algorithm~\ref{alg:2} are meta descriptions rather than detailed implementations. We provide more details of the algorithms in Appendix~\ref{algo_describ}.
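For concreteness, a minimal sketch of Algorithm~\ref{alg:1} in a finite state-action space with linear features is given below; the elliptical bonus follows the form in \citet{jin2021pessimism}~(cf. Equation~\eqref{eq:linear_uncertainty_quantifier}), and all constants are placeholders:
\begin{verbatim}
import numpy as np

def pevi(phi, data, gamma, beta, lam=1.0, iters=500):
    """phi: (S, A, d) feature table; data: list of (s, a, r, s_next)."""
    S, A, d = phi.shape
    Lam = lam * np.eye(d)
    for s, a, r, s_next in data:
        Lam += np.outer(phi[s, a], phi[s, a])
    Lam_inv = np.linalg.inv(Lam)
    # elliptical bonus Gamma(s,a) = beta * sqrt(phi^T Lam^{-1} phi)
    Gamma = beta * np.sqrt(np.einsum("sad,de,sae->sa", phi, Lam_inv, phi))
    X = np.array([phi[s, a] for s, a, _, _ in data])
    V = np.zeros(S)
    for _ in range(iters):
        y = np.array([r + gamma * V[sn] for _, _, r, sn in data])
        w = Lam_inv @ X.T @ y                    # ridge estimate of B_gamma V
        Q = np.clip(phi @ w - Gamma, 0.0, None)  # pessimistic truncation
        V = Q.max(axis=1)                        # greedy policy improvement
    return Q.argmax(axis=1)                      # deterministic pi-hat
\end{verbatim}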
\section{Theoretical Analysis}
This section characterizes the effects of a lower guidance discount factor in offline RL. We first show that a similar regularization effect exists in offline algorithms as in online algorithms. We quantify this effect by providing a performance bound to analyze how other factors like the data size and the coverage coefficient affect this regularization effect. We then show an equivalence between a lower discount factor and the model-based pessimism. This equivalence leads to a performance guarantee with a lower guidance discount factor without other conservative regularizations. These two effects indicate that the discount factor plays a vital role in offline reinforcement learning.
\subsection{Regularization Effect}
As~\citet{jiang2015dependence} suggests, the discount factor acts as a regularization coefficient to bound the complexity of the potential optimal policy class. However, it is unclear what affects the effectiveness of $\gamma$ regularization in the offline setting, especially when the data coverage is poor and the algorithms have additional pessimism. Empirically, we found that the quality and size of the dataset are the main factors that affect the regularization from $\gamma$. To shed light on this observation, we first derive a performance bound in the linear MDP setting for model-free pessimistic algorithms like Algorithm~\ref{alg:1}. The analysis is analogous to~\citet{jin2021pessimism}, but we focus on the discounted setting.
\begin{lemma}
\label{lemma:2}
Suppose there exists an absolute constant
\begin{align}
\label{eq:event_opt_explore}
c^\dagger = &\sup_{x\in \RR^d} \frac{x^{\top}\Sigma_{\pi^*,s}x}{x^{\top}\Sigma_{\cD} x} < \infty,
\end{align}
for all $s\in\cS$ with probability $1-\xi/2$, where
\$&\Sigma_{\cD}~~~=\frac{1}{N}\sum_{\tau=1}^N{\left[\phi(s_\tau,a_\tau)\phi(s_\tau,a_\tau)^\top\right]},\notag\\
&\Sigma_{\pi^*,s}=\EE_{{\pi^*}}\bigl[\phi(s_t,a_t)\phi(s_t,a_t)^\top\biggiven s_0=s\bigr].
\$
In Algorithm \ref{alg:1}, we follow Equation~\eqref{eq:empirical_bellman} and~\eqref{eq:linear_uncertainty_quantifier}, and set
\begin{equation*}
\lambda =1,~ \beta= c \cdot d r_{\text{\rm max}}\sqrt{\zeta}/(1-\gamma), ~ \zeta = \log{(4dN/(1-\gamma)\xi)},
\end{equation*}
where $c>0$ is an absolute constant and $\xi \in (0,1)$ is the confidence parameter. Then with probability $1-\xi$, the policy $\widehat{\pi}$ generated by Algorithm~\ref{alg:1} satisfies
\begin{align*}\label{eq:event_opt_explore_d}
\text{\rm SubOpt}\big(\widehat{\pi},s;\gamma \big)
\leq \frac{2c r_{\text{\rm max}}}{(1-\gamma)^2} \sqrt{c^\dagger d^3\zeta /N}, \ \forall s\in \cS.
\end{align*}
\end{lemma}
\begin{proof}
See Appendix~\ref{proof_lemma_1} for a detailed proof.
\end{proof}
Equation~\eqref{eq:event_opt_explore} defines a finite coverage coefficient, namely $c^\dagger$, which represents the maximum ratio between the density of the empirical state-action distribution and the density induced by the optimal policy. Intuitively, it represents the quality of the dataset. For example, the \verb|expert| dataset has a low coverage ratio while the \verb|random| dataset may have a high ratio. The probability $1-\xi/2$ for a finite coverage coefficient is measured with respect to the data-collection process. That is, we are only making assumptions about the data-collection process rather than the specific dataset.
The dependence on $\gamma$ in Lemma~\ref{lemma:2} suggests that the performance bound $\text{SubOpt}\big(\widehat{\pi},s;\gamma \big)$ decreases as the discount factor gets lower. However, a lower discount factor also biases the optimal policy, as characterized by the following lemma.
\begin{lemma}[\citet{jiang2015dependence}]
\label{lemma:1}
For any MDP $M$ with rewards in $[0,r_{\text{\rm max}}]$, $\forall \pi: \mathcal{S}\times\mathcal{A}\rightarrow \mathbb{R}$ and $\gamma\leq \gamma_{e}$,
\begin{equation}
V_{M,\gamma}(\pi) \leq V_{M,\gamma_{e}}(\pi) \leq V_{M,\gamma}(\pi) + \frac{\gamma_{e}-\gamma}{(1-\gamma)(1-\gamma_{e})}r_{\text{\rm max}},
\end{equation}
where $\gamma_{e}$ is the evaluation discount factor.
\end{lemma}
The bound above is tight in trivial cases but usually very loose in practice.
Together, we have the following bound with a guidance discount factor $\gamma$ different from the true evaluation discount factor $\gamma_e$.
\begin{theorem}
\label{theorem:1}
Based on the same assumption and definition as in Lemma~\ref{lemma:2}, we set $\zeta=\log{(4dN/(1-\gamma)\xi)}$, $\beta=c \cdot d r_{\text{\rm max}}\sqrt{\zeta}/(1-\gamma)$ and $\lambda =1$.
Then with probability $1-\xi$, the suboptimality bound of the policy $\widehat{\pi}$ generated by Algorithm~\ref{alg:1} satisfies
\begin{align*}
\text{\rm SubOpt}\big(\widehat{\pi};\gamma_e \big)
\leq &\frac{2c}{(1-\gamma)^2} \sqrt{c^\dagger d^3\zeta/N}\cdot r_{\text{\rm max}} \nonumber\\
&+ \frac{\gamma_{e}-\gamma}{(1-\gamma)(1-\gamma_{e})}\cdot r_{\text{\rm max}}.
\end{align*}
\end{theorem}
\begin{proof}
The result follows immediately from Lemma~\ref{lemma:2} and Lemma~\ref{lemma:1}.
\end{proof}
Theorem~\ref{theorem:1} gives an upper bound for pessimistic offline algorithms with two terms. Both terms monotonically depend on the guidance discount factor $\gamma$ but in an opposite way, which suggests {\em there exists an optimal trade-off $\gamma^*\in [0, \gamma_{e}]$}.
It also suggests that this optimal trade-off $\gamma^*$ is dependent on other factors like the coverage ratio of the dataset and the size of the dataset. A small or poorly covered dataset (i.e., a large coverage coefficient) makes the first term's coefficient larger, requiring a lower discount factor to achieve optimal performance. We empirically observe this effect on both toy examples as well as large D4RL tasks, as shown in Section~\ref{experiments}.
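The trade-off can be made concrete numerically (the constant below is a placeholder standing in for the statistical term's coefficient):
\begin{verbatim}
import numpy as np

gamma_e, r_max = 0.99, 1.0
A = 0.02  # stands in for 2c * sqrt(c_dagger * d^3 * zeta / N)
gammas = np.linspace(0.90, 0.989, 90)
stat = A / (1.0 - gammas) ** 2 * r_max                 # first term
bias = ((gamma_e - gammas)
        / ((1.0 - gammas) * (1.0 - gamma_e)) * r_max)  # second term
gamma_star = gammas[np.argmin(stat + bias)]  # interior optimum < gamma_e
\end{verbatim}
A smaller dataset or poorer coverage corresponds to a larger constant in the first term and pushes $\gamma^*$ lower, as observed empirically.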
\subsection{Pessimistic Effect}
This section analyses the pessimistic effect of a lower discount factor. We show that, perhaps surprisingly, learning with a lower discount factor is equivalent to one type of model-based pessimism, as depicted in Algorithm~\ref{alg:2}. This is characterized by the following lemma.
\begin{lemma}
\label{lemma:3}
The optimal value function with a lower discount factor is equivalent to the pessimistic value function over a set of models.
Formally, let
\begin{equation}
\label{model_based_policy_opt}
\pi^*_{\mathcal{M}_\varepsilon} \in \argmax_{\pi} \min_{M\in\mathcal{M}_\varepsilon} V_{M,\gamma}(\pi),
\end{equation}
where
\begin{equation}
\label{small_gamma_policy_opt}
\mathcal{M}_\varepsilon = \set{M| \cP_M(\cdot|s,a)= (1-\varepsilon)\cP_{M_0}(\cdot|s,a)+\varepsilon \cP(\cdot) }, \notag
\end{equation}
and $\cP(\cdot)$ is an arbitrary distribution over $\cS$, then we have
\begin{equation}
V^*_{M_0,(1-\varepsilon)\gamma}= V_{M_0,\gamma}(\pi^*_{\cM_\varepsilon})+\Delta,
\end{equation}
where $\Delta$ is an absolute constant.
\end{lemma}
\begin{proof}
See Appendix~\ref{proof_lemma_3} for a detailed proof.
\end{proof}
The equality in Lemma~\ref{lemma:3} shows that learning with a lower discount factor itself acts as a kind of model-based pessimism, which allows us to derive a bound without any other additional regularizations. We consider the model-based pessimism, where the model parameters are learned through maximum likelihood estimation (MLE). With the techniques in~\citep{geer2000empirical} that allow us to estimate the concentration rate of the MLE estimator, we have the following theorem. The proof of the following theorem is analogous to the analysis in~\citep{uehara2021pessimistic}.
\begin{theorem}
\label{theorem:2}
(\textit{informal}) Suppose there exists an absolute constant
\begin{align}
\label{eq:event_opt_explore_2}
c^\ddagger=\sup_{x\in\mathbb{R}^d} \frac{x^{\top} \Sigma_{\pi^*} x}{x^{\top} \Sigma_\rho x } < \infty,
\end{align}
$\Sigma_{\rho}=\EE_{\rho}[\phi(s,a)\phi(s,a)^{\top}],~\Sigma_{\pi^{*}}=\EE_{d^{\pi^{*}} }[\phi(s,a)\phi(s,a)^{\top}].$
And suppose the underlying MDP follows the regularity condition in Assumption~\ref{assumption_regularity}. Set
$$\gamma = (1-\varepsilon)\gamma_e,~\varepsilon=c_1 \sqrt{d\zeta/N},\zeta = \log^2{(c_2Nd/\xi)}.$$
Then, with probability $1-\xi$, learning with a guidance discount factor $\gamma$ yields a policy $\hat{\pi}$ such that
\begin{align}
\text{\rm SubOpt}\big(\widehat{\pi};\gamma_e \big) \leq
\frac{c_3}{(1-\gamma_e)^{2}}\sqrt{c^\ddagger d^2 \zeta / N} \cdot r_{\text{max}},
\end{align}
where $c_1,c_2,c_3$ are constants.
\end{theorem}
\begin{proof}
See Appendix~\ref{proof_theorem_2} for a detailed description and proof.
\end{proof}
Similar to $c^{\dagger}$ in Equation~\eqref{eq:event_opt_explore}, $c^{\ddagger}$ can be interpreted as another coverage coefficient; the difference between $c^{\dagger}$ and $c^{\ddagger}$ is technical. Theorem~\ref{theorem:2} shows that, with a properly chosen $\gamma$, we have a performance guarantee without any other offline techniques. We name this effect of the discount factor the pessimistic effect.
Note that to make this bound meaningful, we need the dataset size $N$ to be large enough so that $1-\varepsilon >0$. This means that the theorem applies when the data is sufficient, contrary to the condition for the regularization effect. Compared to the bound in Theorem~\ref{theorem:1}, the bound in Theorem~\ref{theorem:2} is smaller by a factor of $\sqrt{d}$ because a lower $\gamma$ resembles model-based pessimism rather than a model-free one.
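Concretely, the guidance discount factor prescribed by Theorem~\ref{theorem:2} can be computed as follows (the constants $c_1,c_2$ are unknown; the values below are placeholders for illustration):
\begin{verbatim}
import numpy as np

def guidance_gamma(N, d, gamma_e=0.99, xi=0.05, c1=0.1, c2=1.0):
    zeta = np.log(c2 * N * d / xi) ** 2
    eps = c1 * np.sqrt(d * zeta / N)
    assert eps < 1.0, "dataset too small for the bound to be meaningful"
    return (1.0 - eps) * gamma_e
\end{verbatim}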
We empirically verify this effect on the tabular MDPs and the D4RL dataset, where simple discount factor regularization is enough to derive a reasonable performance.
This pessimistic effect suggests that $\gamma$ affects the performance in offline RL differently compared to online settings.
\section{Introduction}
Deployment of semiconductor quantum dots (QDs) in the active region of optical devices offers unique electronic and optical properties which can be exploited to design several optoelectronic technologies ranging from lasers\cite{Salhi_1} to semiconductor optical amplifiers (SOAs)\cite{Akiyama_1} or single photon sources\cite{Dousse_1}, where they have successfully overcome critical challenges such as extremely low threshold, high speed response, or entangled photon emission, respectively. However, in these applications, a critical design parameter is the polarization response of QDs, typically characterized in terms of either degree of polarization [DOP = (TE-TM)/(TE+TM)]\cite{Usman_1, Usman_2} or TM/TE ratio\cite{Fortunato_1, Usman_6}, where the TE-mode is measured along a direction in the plane of the QD, and the TM-mode is measured along the growth [001] direction for GaAs(001) QDs. Engineering of QD nanostructures to achieve isotropic polarization (DOP $\sim$ 0) is critical for the implementation of several optoelectronic devices, for example semiconductor optical amplifiers (SOAs).
InAs QDs grown by the Stranski-Krastanov (SK) self-assembly growth process typically exhibit a very poor polarization response (DOP close to 1.0) due to the large compressive biaxial strain surrounding the flat shapes of the QDs. The strain-induced splitting between the heavy hole (HH) and the light hole (LH) valence bands leads to a dominant HH character in the few topmost valence band states, thus significantly suppressing the TM-mode. Therefore, previous studies of single InAs QDs have reported very high values of the DOP, typically larger than 0.8.\cite{Usman_1, Fortunato_1, Saito_1, Inoue_1}
The polarization response of InAs QDs is influenced by several parameters such as crystal/atomic symmetry, QD shape, composition profile, etc. The atomistic asymmetry of the underlying zincblende crystal implies that the [110] and [$\overline{1}$10] directions are inequivalent. This lowers the overall symmetry of a perfectly circular dome-shaped QD from C$_{\infty v}$ to C$_{2v}$. As a result, the TE-mode in the plane of the QD does not remain symmetric, and significant in-plane anisotropy may be observed even for an ideal circular-base InAs QD.\cite{Usman_5} Therefore, a single value of the DOP is not sufficient to characterize the polarization response of QD systems. This, in past studies\cite{Usman_1, Usman_5, Usman_6}, has led us to define a direction-dependent value of the DOP,
\begin{equation}
\label{eq:dop}
\begin{array}{cc}
DOP_{[\overrightarrow{n}]} = \frac{(TE_{[\overrightarrow{n}]}-TM_{[001]})}{(TE_{[\overrightarrow{n}]}+TM_{[001]})} \end{array}
\end{equation}
\\
where the direction, [$\overrightarrow{n}$] = [110] or [$\overline{1}$10], associated with the DOP$_{[\overrightarrow{n}]}$ is the same as the direction of the TE$_{[\overrightarrow{n}]}$-mode in the plane of the QD.
The value of the DOP$_{[\overrightarrow{n}]}$ also strongly depends on the shape of the QDs, which is significantly affected by the growth dynamics of the self-assembly process during the growth of the capping layers and the post-growth annealing processes\cite{Biasoil_1}. As a result, the shape of a SK self-assembled QD is far from being perfectly circular or square, as typically assumed in the past theoretical studies of the polarization properties. Several experimental investigations have suggested that the actual shape of the QDs significantly deviates from the ideal circular-base (for dome or lens) or square-base (for pyramid), and usually tends to elongate along the [110]\cite{Stevenson_1, Plumhof_1, Pryor_1, Fricke_1} or along the [$\overline{1}$10]\cite{Krapek_1, Hospodkova_1, Songmuang_1, Favero_1} directions.
Furthermore, recent advancements in growth techniques have made it possible to control the shape of QDs, leading to the fabrication of strongly elongated QD-like nanostructures.\cite{Dusanowski_1} These offer an enhanced exciton oscillator strength and allow the realization of single exciton-single photon coupling to build the fundamental blocks for solid-state quantum information.\cite{Favero_1} Such elongations of the QDs along the [$\overrightarrow{n}$] = [110] or [$\overline{1}$10] direction can significantly alter the value of the DOP$_{[\overrightarrow{n}]}$ and may be exploited to achieve a tailored polarization response for a desired operation.
\textbf{\textit{Brief overview of the past theoretical studies:}} Despite significant experimental evidence for the elongation of the QD shapes and its prospective potential to tune the polarization properties, the impact of the base elongations on the value of the DOP$_{[\overrightarrow{n}]}$ is only barely known. The previous theoretical investigations of the QD elongations are focused on the study and design of the fine structure splitting (FSS = energetic difference between the two bright excitons, e$_{[\overline{1}10]}$ - e$_{[110]}$)~\cite{Schliwa_1, Young_1, Krapek_1, Seguin_1, Plumhof_1, Singh_1, Singh_2} or the spin polarization~\cite{Pryor_1}, with very little emphasis given to the study of the polarization properties (comparison of the magnitudes of the TE and TM modes).\cite{Schliwa_1, Sheng_1, Mlinar_1}
Sheng \textit{et al.}\cite{Sheng_1} applied an effective bond-orbital model and discussed the impact of the base elongations on the in-plane polarization anisotropy of the InGaAs QDs defined as:
\begin{equation}
\label{eq:pol}
\begin{array}{cc}
Pol_{||} = \frac{(TE_{[\overline{1}10]}-TE_{[110]})}{(TE_{[\overline{1}10]}+TE_{[110]})} \end{array}
\end{equation}
\\
and they concluded that the electron-electron interactions and the alloy intermixing contribute very little to determining the polarization properties of the QDs.
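For reference, the two polarization measures in Equations~\eqref{eq:dop} and~\eqref{eq:pol} reduce to simple ratios of the computed mode intensities (an illustrative helper; the intensities themselves come from the atomistic calculations):
\begin{verbatim}
def dop(te_n, tm_001):
    """Direction-dependent degree of polarization DOP_[n]."""
    return (te_n - tm_001) / (te_n + tm_001)

def pol_inplane(te_m110, te_110):
    """In-plane polarization anisotropy Pol_||."""
    return (te_m110 - te_110) / (te_m110 + te_110)

# An isotropic response requires DOP ~ 0 along both in-plane
# directions, i.e. TE_[110] ~ TE_[-110] ~ TM_[001].
\end{verbatim}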
Mlinar \textit{et al.}\cite{Mlinar_1}, using an atomistic pseudo-potential model, focused on the InGaAs QDs and studied the impact of the (In,Ga)As alloy randomness on the polarization properties. They concluded that the alloy composition fluctuations can significantly change the in-plane polarization anisotropy and therefore the experimentally measured polarization anisotropy may not be considered as a reliable measure of the QD shape asymmetry.
Schliwa \textit{et al.}~\cite{Schliwa_1}, based on their \textbf{k}$\centerdot$\textbf{p} calculations, studied the impact of piezoelectricity on the QD optical properties. Although they provided a detailed investigation of the electronic and optical properties of the dome and pyramidal shaped QDs as a function of their vertical aspect ratio (height/base), only one case of the pyramidal shaped QDs (series D in their paper) was investigated for the study of the base elongations (lateral aspect ratio). Furthermore, in their study of the lateral aspect ratios of the pyramidal QDs, they kept the overall volume of the QDs unchanged by altering both the heights and the base diameters of the QDs. Since the QD energies are strongly influenced by a change in the height parameter\cite{Usman_3}, the reported results did not isolate the impact of the QD base elongations.
Nevertheless, a comprehensive quantitative analysis of the impact of the QD base elongations on the polarization-dependent room temperature ground state optical emissions still remains unavailable. This paper, therefore, aims to bridge this gap by providing a detailed study of the DOP$_{[\overrightarrow{n}]}$ and Pol$_{||}$ for the [110]- and [$\overline{1}$10]-elongated QDs. For the calculation of the polarization-dependent optical modes (TE and TM), we take into account the highest five valence band states, instead of just the single topmost valence band state, in accordance with the recent studies~\cite{Usman_2, Usman_5} where it has been shown that the calculation of the room temperature ground state optical spectra must involve multiple closely spaced valence band states to accurately model the in-plane polarizability and to avoid discrepancy between theory and experiments. Our calculations show that despite tuning of the DOP$_{[\overrightarrow{n}]}$ over a wide range, the elliptical shapes of the single QDs do not lead to an isotropic polarization. Therefore, we extend our study to multi-layer QD stacks.
\textbf{\textit{Isotropic polarization from multi-layer stacks:}} Past theoretical\cite{Usman_1, Saito_1} and experimental\cite{Inoue_1, Alonso_1, Ikeuchi_1} studies have shown that the polarization response of the QDs can be drastically improved by growing large vertical stacks of QDs (VSQDs), consisting of many closely spaced QD layers, to exploit inter-dot strain and electronic couplings. A recent experimental study~\cite{Inoue_1} has shown that a vertical stack of nine QDs (9-VSQDs) exhibits DOP$_{[110]}$ = -0.6; however, the measured PL spectra showed a large anisotropy in the in-plane polarization modes: TE$_{[\overline{1}10]} \gg$ TE$_{[110]}$. Our atomistic calculations~\cite{Usman_1}, based on an assumption of an ideal circular-base dome shape for the QD layers, reported TE$_{[\overline{1}10]} \gg$ TE$_{[110]}$ in agreement with the experimental PL spectra. We attributed this large in-plane polarization anisotropy to a small increase in the TM$_{[001]}$-mode (coming from an enhanced HH/LH intermixing) and a large decrease in the TE$_{[110]}$-mode due to the [$\overline{1}$10]-oriented hole wave function confinements.
A good qualitative agreement of our calculations with the experimental results, obtained while assuming an ideal circular base for the QDs, leads to a fundamental question: how much does the QD shape asymmetry contribute to the polarization anisotropy of the 9-VSQDs? This work, based on multi-million-atom calculations, provides the answer that the interfacial hole wave function confinements make the major contribution to the experimentally measured in-plane polarization anisotropy, which is counter-intuitive to the common notion that the shape asymmetry is mainly responsible for such anisotropies.\cite{Humlicek_1, Alonso_1}
Furthermore, since a [110]-elongation increases DOP$_{[110]}$ and reduces DOP$_{[\overline{1}10]}$, we investigate the possibility of exploiting it to balance the built-in in-plane anisotropy so as to achieve DOP$_{[110]} \sim$ 0 and DOP$_{[\overline{1}10]} \sim$ 0 simultaneously. This would lead to the design of QD-based SOAs independent of the in-plane direction. Our study reveals an interesting property of the 9-VSQDs: its polarization response is very sensitive to the orientation of its elongation, and both TE-modes cannot be reduced below the TM-mode. Therefore, either DOP$_{[\overline{1}10]}$ or DOP$_{[110]}$ can be tuned for an isotropic polarization, but not both of them simultaneously.
Finally, the quantitative study of the elongation-dependent DOP$_{[\overrightarrow{n}]}$ also helps us to determine the geometry of the 9-VSQDs. Our theoretical calculations accurately predict the [$\overline{1}$10]-elongation of the 9-VSQDs in the experiment, which is also consistent with the TEM images\cite{Kita_1}.
The remainder of the paper is organized in the following sections: section II defines the QD geometry parameters and describes the three types of elliptical shapes that we study in this paper. Section III documents our methodologies. Our results for the two different AR single QDs are presented in sections IV-A and IV-B. Section IV-C is about the vertical stack of nine QD layers (9-VSQDs). The sections IV-A, IV-B, and IV-C are written as self-contained sections so that a reader interested in only one type of QD needs to read only the corresponding section. Finally, we provide an overall summary and the main conclusions of our results in section V.
\section{Simulated Quantum Dot Systems}
\subsection{Geometry Parameters}
Figs.\ref{fig:Fig1}(a), (b), and (c) show three quantum dot geometries simulated in this study. The InAs quantum dots are embedded inside large GaAs buffers comprised of $\approx$ 15 million atoms (60$\times$60$\times$60 nm$^3$) for (a) and (b), and $\approx$ 25 million atoms (60$\times$60$\times$106 nm$^3$) for (c). The quantum dots are placed on top of 0.5 nm thick InAs wetting layers.
We study three dome-shaped quantum dot systems: (i) a low aspect ratio (AR) InAs QD with 20 nm diameter (d) and 4.5 nm height (h) (AR = h/d = 0.225); (ii) a high AR InAs QD with d = 20 nm and h = 8.0 nm (AR = h/d = 0.40); (iii) a vertical stack comprised of nine QD layers (9-VSQDs) separated by 5 nm thick GaAs spacer layers, where each layer consists of an InAs QD with d = 20 nm and h = 4 nm. The geometrical parameters of the 9-VSQDs are taken from the recent experimental\cite{Inoue_1, Ikeuchi_1} and theoretical~\cite{Usman_1} studies, where this design has shown great technological relevance for achieving an isotropic polarization response.
In the remainder of this paper, we label the single QD with AR = 0.225 as a "flat" QD and the single QD with AR = 0.40 as a "tall" QD. In a previous study~\cite{Usman_5}, it has been shown that flat and tall QDs with an ideal circular base exhibit drastically different electronic and polarization properties. The hole wave functions tend to reside in the HH pockets for the tall QDs, when the AR $\gtrsim$ 0.25. This introduces a large anisotropy in the in-plane polarization (Pol$_{||}$). Therefore, this paper analyses the impact of the base elongations for both types of QDs. Furthermore, since the 9-VSQDs consist of strongly coupled QDs, the stack can essentially be considered an extension of the single tall QD with a very large AR $\cong$ 45/20 = 2.25. Our calculations presented in section (IV) confirm that the two single QDs (flat and tall) exhibit drastically different polarization properties as a function of their base elongations, with the 9-VSQDs overall exhibiting many characteristics similar to those of the tall QD.
Note that a typical SK growth of a large vertical QD stack generally results in an increase in the size of the upper-layer QDs\cite{Xie_1}; however, no such increase in the QD dimensions was reported in the experimental study\cite{Inoue_1}. Therefore, we keep the size of the QDs uniform for the 9-VSQDs.
\begin{figure}
\includegraphics[scale=0.37]{Figure1.png}
\caption{Schematics of the quantum dots used for theoretical modeling. (a) A low aspect ratio (flat) dome-shaped InAs quantum dot with the base diameter and height of 20 nm and 4.5 nm, respectively. (b) A high aspect ratio (tall) dome-shaped InAs quantum dot with the base diameter and height of 20 nm and 8 nm, respectively. (c) A vertical stack consisting of nine InAs QDs (9-VSQDs), each with the base diameter and height of 20 nm and 4 nm, respectively. The geometry parameters are directly taken from the experimental study\cite{Inoue_1}. (d) Top view illustrating the base elongations for the Type-I elongation. The elliptical shape is formed by decreasing the diameter either along the [110] or along the [-110] direction. (e) Top view illustrating the base elongations for the Type-II and Type-IIv elongations. The elliptical shape is formed by simultaneously decreasing (increasing) the base diameter along the [110] direction and increasing (decreasing) the base diameter along the [-110] direction. For the Type-IIv elongation, we select values of d$_{[110]}$ and d$_{[\overline{1}10]}$ so as to keep the overall volume of the QD fixed. (f) Plots of the products of the QD diameters along the [110] and the [$\overline{1}$10] directions, d$_{[110]} \times$d$_{[\overline{1}10]}$, as a function of the elongation factor ($\eta$). For a fixed QD height, this product is directly proportional to the volume of the QD.}
\label{fig:Fig1}
\end{figure}
\vspace{1mm}
\subsection{Elongation along High Symmetry Axis}
In order to study the impact of the base elongations of the three QD systems described above along the high symmetry crystallographic directions ([110] and [$\overline{1}$10]) on their electronic and polarization properties, we consider three types of elliptical shapes, defined by an elongation factor $\eta$ = d$_{[110]}$/d$_{[\overline{1}10]}$, which is the ratio of the QD lateral diameters along the [110] and [$\overline{1}$10] directions: \\ \\ Type-I: As schematically shown in Fig.~\ref{fig:Fig1}(d), the diameter d$_{[\overline{1}10]}$ (d$_{[110]}$) is reduced by $\bigtriangleup$d for the [110] ([$\overline{1}$10]) elongation, while keeping the other diameter d$_{[110]}$ (d$_{[\overline{1}10]}$) fixed at 20 nm. This reduces the overall volume of the QD. \\ \\ Type-II: For this type of elongation, as schematically shown in Fig.~\ref{fig:Fig1}(e), we simultaneously change both the d$_{[110]}$ and d$_{[\overline{1}10]}$ diameters by equal amounts $\bigtriangleup$d (one diameter is increased by $\bigtriangleup$d and the other diameter is decreased by $\bigtriangleup$d). Once again, the volume of the QD decreases, however at a much slower rate compared to the Type-I case. \\ \\ Type-IIv: As a special case of the Type-II elongation, we again change the QD diameters along the [110] and [$\overline{1}$10] directions similar to the Type-II case, but keep the overall QD volume unchanged. Most of the previous theoretical studies\cite{Schliwa_1, Singh_1, Singh_2, Pryor_1, Sheng_1, Mlinar_1} have only analysed this type of elongation, so it allows us to make a direct comparison with the existing results. For a circular-base dome-shaped QD with d = 20 nm and h = 4.5 nm, the volume of the QD is (1/3)$\pi$d$^2$h $\propto$ d$^2$ $\propto$ 20$^2$. To keep the volume unchanged, a reasonable choice for the diameters d$_{[110]}$ and d$_{[\overline{1}10]}$ is 25 nm and 16 nm, which results in d$_{[110]} \times$d$_{[\overline{1}10]}$ = 400. Therefore, for the Type-IIv elongation, we choose between 25 nm and 16 nm for the diameters d$_{[110]}$ and d$_{[\overline{1}10]}$.
As the volume of an ellipsoidal QD is proportional to the product of the diameters along its major and minor axes (d$_{[110]}$ and d$_{[\overline{1}10]}$), we plot this product as a function of the elongation factor $\eta$ for the three types of elongations in Fig.~\ref{fig:Fig1}(f), quantitatively showing the decrease in the QD volume in each case. As will become clear in the later sections, the large decrease in the QD volume ($\bigtriangleup$V) for the Type-I elongation significantly impacts the electronic and optical properties, even dominating the impact of the diameter changes. For the Type-II elongation, the relatively much smaller decrease in the volume competes with the changes in the diameter, and the net impact comes from $\bigtriangleup$d for small values of $\eta$ and from $\bigtriangleup$V for large values of $\eta$.
It should be noted that in all three types of elongations, we keep the height of the QDs fixed. It has been shown\cite{Usman_3} that the electronic and optical properties are very sensitive to the height of the QDs, so by keeping it unchanged, we eliminate its contribution and focus solely on the impact of the base elongations. We also specify that the previous theoretical investigations\cite{Schliwa_1, Sheng_1} of the QD elongations have used the term "lateral aspect ratio" for the ratio of the QD base diameters, which is equivalent to the elongation factor $\eta$ defined in this study. In order to avoid confusion, we use aspect ratio for the ratio of the QD height and base diameter, and elongation factor for the ratio of the base diameters. Finally, by definition, the elongation factor $\eta$ is 1.0, $>$1.0, and $<$1.0 for the circular, [110]-elongated, and $[\overline{1}10]$-elongated QDs, respectively.
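The diameters for the three elongation types can be generated as follows (a sketch with d$_0$ = 20 nm; the Type-IIv branch solves d$_{[110]} \times$d$_{[\overline{1}10]}$ = $d_0^2$ to conserve the volume):
\begin{verbatim}
def diameters(eta, d0=20.0, kind="I"):
    """Return (d_[110], d_[-110]) for elongation factor eta."""
    if kind == "I":      # shrink one diameter, keep the other at d0
        return (d0, d0 / eta) if eta >= 1.0 else (d0 * eta, d0)
    if kind == "II":     # +delta on one axis, -delta on the other
        return (2 * d0 * eta / (eta + 1), 2 * d0 / (eta + 1))
    if kind == "IIv":    # conserve volume: d110 * dm110 = d0**2
        return (d0 * eta ** 0.5, d0 / eta ** 0.5)
    raise ValueError(kind)

# e.g. diameters(25 / 16, kind="IIv") -> (25.0, 16.0), as in the text
\end{verbatim}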
\section{Methodologies}
The atomistic simulations are performed using the NEMO 3-D simulator\cite{Klimeck_1, Klimeck_2, Klimeck_3}, which is based on strain energy minimization using the valence force field (VFF) model\cite{Keating_1, Olga_1} and electronic structure calculations obtained by solving a twenty-band \textit{sp$^3$d$^5$s$^*$} tight binding Hamiltonian\cite{Boykin_1}. Both linear and quadratic piezoelectric potentials are calculated by using the published recipe\cite{Usman_4, Schliwa_1} and included in the Hamiltonian. The polarization dependent optical transitions are computed from Fermi's golden rule via the absolute values of the optical matrix elements, summed over the spin degenerate states\cite{Usman_1, Usman_5}. The polarization dependent ground state optical transition intensity modes (TE$_{[110]}$, TE$_{[\overline{1}10]}$, and TM$_{[001]}$) are computed as a cumulative sum of the optical transitions between the lowest conduction band state (e1) and the highest five valence band states (h1, h2, h3, h4, and h5)\cite{Usman_1}.
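Schematically, each polarization-resolved mode is accumulated over the five transitions (the matrix elements below are placeholders; the actual values come from the tight-binding wave functions):
\begin{verbatim}
import numpy as np

def mode_intensity(matrix_elements):
    """Cumulative |<e1|H'|hi>|^2 over h1..h5 for one polarization,
    spin-degenerate states already summed (placeholder inputs)."""
    return float(np.sum(np.abs(np.asarray(matrix_elements)) ** 2))

TE_110 = mode_intensity([0.9, 0.2, 0.1, 0.05, 0.02])  # hypothetical
TM_001 = mode_intensity([0.1, 0.05, 0.02, 0.01, 0.01])
\end{verbatim}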
We want to emphasize here that all of the simulations are performed over very large GaAs buffers surrounding the QDs to properly accommodate the impact of long-range strain and piezoelectric potentials in the electronic structure and the optical transition calculations. For the strain relaxation, we use mixed boundary conditions: bottom fixed, periodic in the lateral directions, and the top free to relax. For the electronic structure calculations, we use closed boundary conditions. The dangling bonds at the surface atoms are passivated according to the published model~\cite{Lee_1}.
\section{Results and Discussions}
In the next two subsections, A and B, we present our results for the flat and tall QDs, respectively, and quantitatively analyse the impact of the elliptical shapes on their electronic and polarization properties. The subsequent subsection C presents results for the vertical stack of nine quantum dot layers (9-VSQDs).
\textit{\textbf{Factors that shift the electron and hole energies:}} Before we start our analysis of the three QD systems under investigation, we specify the factors that shift the electron and hole energies as the base diameters of the QDs are increased or decreased. We identify four major factors as follows: \\ \\(i) Change in QD Volume ($\bigtriangleup$V): Fig.~\ref{fig:Fig1}(f) provides a quantitative estimate of the changes in the QD volume as a function of the elongation factor $\eta$ for the three types of elongations. Since the volume only decreases for the Type-I and Type-II elongations, it results in an increase of the electron energies and a decrease of the hole energies. \\(ii) Change in QD Diameter ($\bigtriangleup$d): The QD base elongations are based on the increase/decrease of the QD diameters along the [110] and [$\overline{1}$10] directions. While this decrease/increase in the diameters has very little impact on the ground state electron energy e1 (due to its s-type symmetrical wave function), it affects the electron and hole p-state energies due to their orientation along these directions. An increase (decrease) in a diameter produces a corresponding decrease (increase) in the electron energies and an increase (decrease) in the hole energies of the states oriented along that direction. \\(iii) Strain: The strain directly modifies the band edges and thus impacts the electron and hole confinement energies. The electron energies are shifted by changes in the hydrostatic strain ($\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$) only, whereas the hole energies are affected by changes in both the hydrostatic and the biaxial strain ($\epsilon_{xx}+\epsilon_{yy}-2\epsilon_{zz}$). Simple analytical relations based on the deformation potential theory can be applied to estimate these changes\cite{Usman_3}. \\(iv) Piezoelectric Potential: InAs/GaAs systems are strongly piezoelectric: the orientation and the magnitude of the piezoelectric potentials significantly impact the orientation and the splitting of the electron and hole excited states. It should be noted that although the piezoelectric potentials do not directly shift the electron and hole p-state energies, they determine the $\bigtriangleup$d-induced changes by controlling the orientation of the p-states.
Overall for the Type-I elongation, $\bigtriangleup$V is large whereas $\bigtriangleup$d is small since we keep one diameter unchanged and only reduce the other diameter by $\bigtriangleup$d, so the impact of $\bigtriangleup$V dominates. For the Type-II elongation, the QD volume only slightly decreases, so the impact of $\bigtriangleup$V is small. However, we increase one diameter by $\bigtriangleup$d and decrease the other diameter by $\bigtriangleup$d, so the overall impact of $\bigtriangleup$d is much stronger. Finally, for the Type-IIv elongation, $\bigtriangleup$V=0, and $\bigtriangleup$d is roughly same as for the Type-II elongation.
\subsection{Flat Quantum Dot (AR=0.225)}
In this subsection, we study the impact of the elongations on the electronic and polarization properties of a flat QD, as shown by the schematic in Fig.~\ref{fig:Fig1}(a). The QD has a base diameter of 20 nm and a height of 4.5 nm (AR = 4.5/20 = 0.225). Such low AR QDs are more commonly obtained from the strain-driven SK self-assembly growth process and their electronic properties have been widely studied in the literature.
\begin{figure*}
\includegraphics[scale=0.28]{Figure2.png}
\caption{(a, b, c) The lowest three conduction band energy levels (e1, e2, and e3) are plotted as a function of the elongation factor ($\eta$) for the (a) Type-I, (b) Type-IIv, and (c) Type-II elongations. (d, e, f) The highest five valence band energy levels (h1, h2, h3, h4, and h5) are plotted as a function of the QD elongation factor ($\eta$) for the (d) Type-I, (e) Type-IIv, and (f) Type-II elongations. The corresponding increase/decrease in the optical gap energy (E$_{g}$) is also specified in each case by using the vertical arrows.}
\label{fig:Fig2}
\end{figure*}
\vspace{1mm}
\begin{figure*}
\includegraphics[scale=0.16]{Figure3.png}
\caption{The top view of the wave function plots for the lowest three conduction band (e1, e2, and e3) and the highest five valence band (h1, h2, h3, h4, and h5) states are shown for the circular-base QD and for the selected elongations of the QD. The intensity of the colors in the plots represents the magnitude of the wave functions, with the dark red color indicating the largest magnitude and the light blue color indicating the smallest magnitude. The boundaries of the QDs are also shown to guide the eye.}
\label{fig:Fig3}
\end{figure*}
\vspace{1mm}
\begin{SCfigure*}
\includegraphics[scale=0.3]{Figure4.png}
\caption{The plots of the total piezoelectric potentials (linear+quadratic) are shown for (a) the circular-base QD, (b, d, f) the [110]-elongated QDs, and (c, e, g) the [$\overline{1}$10]-elongated QDs. In each case, the type and magnitude of the elongation is specified. The solid red lines are plotted along the [$\overline{1}$10] direction through the center of the QD, 0.5 nm above its base. The dotted (broken) black lines are plotted along the [110] direction through the center of the QD, 0.5 nm above its base. The boundaries of the QD region are also marked in each case by specifying the lengths of the QD along the [110] and [$\overline{1}$10] directions, d$_{[110]}$ and d$_{[\overline{1}10]}$.}
\label{fig:Fig4}
\end{SCfigure*}
\vspace{1mm}
\subsubsection{Electronic properties of the flat QD}
Fig.~\ref{fig:Fig2} plots the lowest three conduction band energy levels (e1, e2, e3) and the highest five valence band energy levels (h1, h2, h3, h4, h5) for the three types of elongations: (a, d) Type-I, (b, e) Type-IIv, and (c, f) Type-II. The figures are plotted using the same scale to facilitate mutual comparison. In order to understand the shifts in the energies, we also plot wave functions and piezoelectric potentials for a few selected cases. Fig.~\ref{fig:Fig3} shows the top view of the wave function plots for the circular-base QD and for a few selected [110] and [$\overline{1}$10] elongations. The QD boundaries and dimensions are also marked in each case.
Fig.~\ref{fig:Fig4} plots the total (linear+quadratic) piezoelectric potentials along the [110] (dotted lines) and [$\overline{1}$10] (solid lines) directions through the center of the QDs, $\approx$0.5 nm above their base. The quadrupole nature of the potentials is clearly evident, which has been well established in the literature\cite{Usman_4, Schliwa_1, Islam_1} and is shown to strongly influence the orientation of the electron and hole p-states. In this case of the flat QD with AR=0.225, we find that the quadratic component of the piezoelectric potential does not fully cancel the linear component inside the QD region, in contrast to the previous \textbf{k}$\centerdot$\textbf{p} study~\cite{Schliwa_1}, where the quadratic and the linear components were found to fully cancel each other for AR $<$ 0.5. Since the piezoelectric potentials are a strong function of the QD shape and composition, we find that their results cannot be generalized to all types of QDs. Theoretical studies by Usman \textit{et al}.\cite{Usman_4} and Islam \textit{et al}.\cite{Islam_1} have also shown non-zero values of the net piezoelectric potentials for similar QDs.
Our calculations show that the piezoelectric potential plots have two peaks at the QD interfaces, one just outside the QD region and one just inside the QD region. The electron and hole wave functions are found to be more influenced by the piezoelectric potential peaks inside the QD, which causes the lower electron p-state (e2) to align along the [$\overline{1}$10] direction and the ground hole state (h1) to slightly elongate along the [110] direction for the circular-base QD (see Fig.~\ref{fig:Fig3} for $\eta$ = 1.0). The alignment of the lower electron p-state (e2) along the [$\overline{1}$10] direction is in agreement with the experimental reports for the similar QDs.\cite{Boucaud_1, Maltezopoulos_1}
\textbf{\textit{Lowest three conduction band energies:}} From our atomistic relaxations, we find that the hydrostatic component ($\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$) of the strain remains unchanged for both the [110] and the [$\overline{1}$10] elongations. Since the electron energies are only affected by the hydrostatic strain, the strain does not contribute to the shifts of the conduction band energies. Fig.~\ref{fig:Fig4} also shows that the piezoelectric potentials do not exhibit any significant change for the elongated QDs. The peaks outside the QD regions are only changed by 1 to 2 meV and the peaks inside the QD region are decreased by 2 to 4 meV. These relatively small changes in the potentials will not result in any noticeable effects on the e2 and e3 energies. Therefore, we conclude that the electron energies are mainly affected by the changes in the diameters ($\bigtriangleup$d) and the volume ($\bigtriangleup$V) of the flat QD, while the strain and the piezoelectric fields have only minor contributions as a function of $\eta$.
The lowest electron energy level (e1) has an s-type symmetrical wave function and is mainly affected by the decrease in the QD volume ($\bigtriangleup$V). The QD diameter change ($\bigtriangleup$d) does not have any noticeable impact on its energy. In Fig.~\ref{fig:Fig2}(a), as the QD volume decreases for the [110] and the [$\overline{1}$10] elongations, a nearly symmetric increase in the e1 energy is calculated. When the volume of the QD is kept fixed, as in Fig.~\ref{fig:Fig2}(b), the e1 energy is almost unchanged, confirming that $\bigtriangleup$d has only a minor impact on e1. Finally, for the Type-II elongation (Fig.~\ref{fig:Fig2}(c)), $\bigtriangleup$V is very small and a correspondingly small increase in the e1 energy is observed. Fig.~\ref{fig:Fig3} shows that the wave function of the e1 state also retains its s-type symmetry, with only a slight elongation along the direction in which the QD is elongated.
The excited conduction band states (e2 and e3) have p-type symmetry and are thus strongly affected by all three types of elongations. The $\bigtriangleup$V is once again a dominant factor, which pushes these energy levels towards higher values. This is evident from Fig.~\ref{fig:Fig2}(a) where both e2 and e3 increase in energy irrespective of the elongation direction. However, if $\bigtriangleup$V=0, as in Fig.~\ref{fig:Fig2}(b), the increase in the QD diameter reduces the energy of the state aligned along its direction and vice versa.
The Type-II elongation is an interesting case where the two factors, $\bigtriangleup$V and $\bigtriangleup$d, compete as the QD diameters along the [110] and [$\overline{1}$10] directions are changed by equal amounts. Since the lower p-state (e2) is always oriented along the major axis of the elliptical shape, any increase in the corresponding diameter tends to reduce its energy while the decrease in the QD volume pushes it towards higher energies. For small values of the elongation factor ($\eta$ = 0.67, 0.82, 1.22, and 1.5), e2 decreases due to the dominance of the $\bigtriangleup$d-induced shift. However, for larger values of the elongation factor ($\eta$ = 0.54 and 1.86), the $\bigtriangleup$V-induced upward shift overcomes the $\bigtriangleup$d-induced downward shift and hence the energy of e2 increases.
The energy of the higher p-state, e3, always increases as a function of $\eta$ because it is oriented along the shorter diameter direction, so an increase in its energy is supported by both $\bigtriangleup$V and $\bigtriangleup$d.
\textbf{\textit{Electron p-state splitting:}} The energy difference between the p-states ($\bigtriangleup$e$_p$ = e3-e2) is an important parameter of interest as it provides a measure of the confinement anisotropy between the [110] and [$\overline{1}$10] directions\cite{Schliwa_1} and is sometimes used to characterize the fine structure splitting (FSS)\cite{Singh_2}. We find that $\bigtriangleup$e$_p$ is always larger for the [$\overline{1}$10] elongations than for the [110] elongations at the same value of $\eta$. This is because for the circular-base QD ($\eta$ = 1.0), the cumulative effect of the underlying zincblende crystal asymmetry, strain, and piezoelectricity results in $\bigtriangleup$e$_{p} \approx$ 2.4 meV and favours the [$\overline{1}$10] direction for the e2 state (see Fig.~\ref{fig:Fig3} for $\eta$ = 1.0). Therefore, any [$\overline{1}$10] elongation merely enhances this asymmetry, whereas a [110] elongation first needs to overcome this inherent $\approx$2.4 meV splitting in order to flip the orientation of the e2 state, and hence results in overall lower values of $\bigtriangleup$e$_p$, as sketched below.
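A toy bookkeeping of this argument (an illustration only, not a fitted model) treats the elongation-induced confinement splitting as a signed term $s$ added to the intrinsic $\approx$2.4 meV offset that favours [$\overline{1}$10]:
\begin{verbatim}
# Toy picture: de_p ~ |intrinsic + s|, with s > 0 for [-110]
# elongations and s < 0 for [110] elongations (sign convention
# assumed for this sketch).
def de_p(s_meV, intrinsic_meV=2.4):
    return abs(intrinsic_meV + s_meV)

print(de_p(+3.0))  # [-110] elongation reinforces:      5.4 meV
print(de_p(-3.0))  # [110] elongation must cancel first: 0.6 meV
\end{verbatim}
For equal $|s|$, the [110] case always ends up with the smaller splitting, consistent with our calculated trend.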
\textbf{\textit{Separation between the lowest two electron energies:}} Another parameter of interest for laser design is the difference between the lowest two conduction band energy levels ($\bigtriangleup$e$_{21}$ = e2-e1), which should be large to avoid undesirable occupancy of the excited states\cite{Usman_3}. The largest reductions in $\bigtriangleup$e$_{21}$ are calculated to be $\approx$1 meV, $\approx$8 meV, and $\approx$10 meV for the Type-I, Type-IIv, and Type-II elongations, respectively. These small variations in $\bigtriangleup$e$_{21}$ suggest that the elongation of a flat QD does not deteriorate this parameter for the implementation of laser operation.
\textbf{\textit{Highest five valence band energies:}} Figs.~\ref{fig:Fig2}(d), (e), and (f) plot the highest five valence band energy levels (h1, h2, h3, h4, h5) for the three types of the elongations. The shifts in the valence band energy levels are more complicated to understand due to their intermixed HH/LH characters and much stronger confinements within the QD region. The changes in their energies exhibit drastic differences for the three types of the elongations.
Due to the heavier effective mass and stronger confinement inside the QD region, the orientations of the hole wave functions are mainly determined by the piezoelectric potential peaks inside the QD region. In the case of the circular-base QD, as shown in Fig.~\ref{fig:Fig3}, h1 and h2 are slightly elongated towards the [110] direction due to the negative peaks of the piezoelectric potential along this direction inside the QD region.
For the [110] elongations, the piezoelectric potential inside the QD region is slightly reduced, and the decrease in the d$_{[\overline{1}10]}$ diameter enhances the impact of the larger negative peaks of the potential outside the QD along the [$\overline{1}$10] direction. These two factors force the hole wave functions to align along the [$\overline{1}$10] direction, as shown in Fig.~\ref{fig:Fig3}. When the QD is elliptical with its major-axis along the [$\overline{1}$10] direction, the internal negative piezoelectric potential slightly increases and dominates, aligning the hole wave functions along the [110] direction. Overall, we find that the hole wave functions always tend to align along the minor-axis of the elliptical flat QD.
\begin{figure*}
\includegraphics[scale=0.4]{Figure5.png}
\caption{The polarization dependent optical transition modes TE$_{[110]}$, TE$_{[\overline{1}10]}$, and TM$_{[001]}$ are plotted as a function of the (a) Type-I, (b) Type-IIv, and (c) Type-II QD elongations. The figures are plotted using the same scales to facilitate easy mutual comparison. (d) The plots of the in-plane polarization anisotropy (Pol$_{||}$) as defined by Eq.~\ref{eq:pol} are shown for the three types of elongations, exhibiting an inverse quadratic dependence on $\eta$, consistent with Sheng \textit{et al.}\cite{Sheng_1}. Four cases are marked by green ovals, indicating that two different types of elongations with roughly similar values of $\eta$ exhibit similar values of Pol$_{||}$.}
\label{fig:Fig5}
\end{figure*}
\vspace{1mm}
The biaxial strain for the Type-I elongation slightly relaxes and, along with the large $\bigtriangleup$V, dominates the upward shift induced by $\bigtriangleup$d, pushing the hole energies towards lower values. This is clearly evident for the [110] elongations. However, for the [$\overline{1}$10] elongations, since the hole wave functions exhibit stronger alignment along the [110] direction, the increase in energies coming from $\bigtriangleup$d is enhanced, and therefore some of the hole energies shift slightly upward in Fig.~\ref{fig:Fig2}(d).
For the Type-IIv elongation in Fig.~\ref{fig:Fig2}(e), $\bigtriangleup$V = 0 and thus the impact of $\bigtriangleup$d, which is much larger than for the Type-I elongation, dominates. We also find that the biaxial strain relaxation is much weaker in this case, and thus its downward shift is also very small. As a result, the hole energies shift towards larger values.
Finally, $\bigtriangleup$V is very small for the Type-II elongation and $\bigtriangleup$d is very large, so overall the impact of $\bigtriangleup$d dominates and shifts all the hole energies towards higher values. The biaxial strain relaxation again contributes only a very small downward shift.
In summary, for the flat QD under elongations, the shifts in the electron and hole energies are mainly governed by $\bigtriangleup$V and $\bigtriangleup$d, whereas the direct impact of the strain remains negligibly small.
\textbf{\textit{Optical gap energy, E$_{g}$:}} The optical gap energy (E$_g$ = e1-h1) increases for both the [110] and the [$\overline{1}$10] Type-I elongations, thus blue shifting the ground state optical wavelength, mainly due to a large decrease in the QD volume. However, if the QD volume is fixed or only slightly decreased, as for the Type-IIv and Type-II elongations respectively, E$_{g}$ decreases as a function of the elongation and hence results in a red shift of the ground state optical transition wavelength.
\subsubsection{Polarization properties of the flat QD}
Figs.~\ref{fig:Fig5}(a), (b), and (c) compare the polarization dependent TE and TM modes for the three QD elongations under study. The elliptical shape of the QD, irrespective of the orientation of its major-axis, tends to increase the TE-mode along its major-axis and decrease the TE-mode along its minor-axis. This can be understood as follows: the few topmost valence band states have dominant heavy hole (HH) character due to the strain induced large splitting between the HH and LH bands. These heavy hole states are mainly comprised of $|X\rangle$ and $|Y\rangle$ symmetry wave functions, where $X$ and $Y$ are selected along the high symmetry [110] and [$\overline{1}$10] directions, respectively. The lowest electron state (e1) is, to a good approximation, a symmetric $|S\rangle$-type wave function. The elongation of the QD along, for example, the $X$-direction will have negligible impact on the $|S\rangle$-type wave function, but it will increase (decrease) the $|X \rangle$ ($|Y\rangle$) component of the valence band states. Therefore, the TE$_{X} \propto |\langle X|S\rangle|^2$ component of the electron-hole transition will increase and the TE$_{Y} \propto |\langle Y|S\rangle|^2$ component will decrease, as evident from Figs.~\ref{fig:Fig5}(a), (b), and (c).
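The argument can be condensed into a toy three-component picture (a qualitative illustration, not our tight-binding calculation), in which each optical mode is taken to be proportional to the weight of the matching Bloch component of the hole state:
\begin{verbatim}
import numpy as np

# Toy model: |h> = cX|X> + cY|Y> + cZ|Z>, with X || [110],
# Y || [-110], Z || [001]; e1 is taken as a pure |S> state.
def modes(cX, cY, cZ):
    w = np.array([cX, cY, cZ], dtype=float)**2
    return w / w.sum()   # ~ TE_[110], TE_[-110], TM_[001]

print(modes(1.0, 1.0, 0.15))  # circular base: equal TE modes
print(modes(1.2, 0.8, 0.15))  # X-elongated: TE_[110] up, TE_[-110] down
\end{verbatim}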
\begin{table*}
\caption{\label{tab:table1} The ratio of the LH components in the highest five valence band states (h1, h2, h3, h4, h5) of the elliptical-shaped flat QD with respect to the LH components of the corresponding valence band states of the circular-base ($\eta$ = 1.0) QD for a few selected values of the elongation factor, $\eta$. For each case, the corresponding ratio of the electron-hole wave function spatial overlap along the $\vec{z}$ = [001] direction given by e1hi = $| \langle \psi_{e1} | \vec{z} | \psi_{hi} \rangle |$ is also provided.}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|l|l|l|l|l|l|}
\multicolumn{7}{c}{} \\%[3pt]
\cline{1-7}
\multicolumn{1}{|c|}{\textbf{Elongation type}} &
\multicolumn{1}{c|}{\textbf{$\pmb{\eta}$}} &
\multicolumn{1}{c|}{\textbf{h1 (e1h1)}} &
\multicolumn{1}{c|}{\textbf{h2 (e1h2)}} &
\multicolumn{1}{c|}{\textbf{h3 (e1h3)}} &
\multicolumn{1}{c|}{\textbf{h4 (e1h4)}} &
\multicolumn{1}{c|}{\textbf{h5 (e1h5)}} \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-I}}} & 1.25 & 1.38 (4.12) & 1.49 (12.88) & 1.15 (1.49) & 0.97 (1.07) & 0.90 (0.57) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.80 & 1.62 (5.05) & 1.19 (13.59) & 1.27 (0.99) & 1.11 (11.53) & 1.04 (2.49) \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-IIv}}} & 1.565 & 1.31 (7.59) & 1.29 (2.76) & 1.33 (0.22) & 0.96 (1.31) & 1.14 (0.34) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.64 & 1.56 (7.71) & 1.05 (17.76) & 1.09 (1.48) & 0.97 (3.96) & 0.90 (1.96) \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-II}}} & 1.86 & 1.68 (9.59) & 1.39 (4.94) & 1.45 (0.77) & 1.00 (0.93) & 1.20 (1.96) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.54 & 1.81 (9.71) & 1.12 (23.76) & 1.15 (2.00) & 1.05 (2.38) & 0.94 (0.81) \\
\cline{1-7}
\end{tabular}
\end{table*}
The analysis of the calculated TM$_{[001]}$ component reveals that it also increases for the elliptical QDs. The dominant contribution to the ground state optical intensity for the flat-shaped QDs comes from the highest valence band state (h1), with the lower valence band states (h2, h3, h4, and h5) only adding weak transition strengths\cite{Usman_5}. The strength of the TM$_{[001]}$ component is directly related to the LH mixing in the valence band states, and its magnitude is proportional to the electron-hole wave function spatial overlap along the growth ([001]) direction, given by e1hi = $| \langle \psi_{e1} | \overrightarrow{z} | \psi_{hi} \rangle |$. Here $\psi_{e1}$ is the ground electron state, $\psi_{hi}$ is the $i$th valence band state where $i \in \{$ 1, 2, 3, 4, 5$\}$, and $\overrightarrow{z}$ is along the [001]-direction.
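As an illustration of how such an overlap can be evaluated, the short sketch below computes $| \langle \psi_{e1} | \overrightarrow{z} | \psi_{hi} \rangle |$ on a uniform real-space grid using toy Gaussian envelopes (the actual wave functions in this work come from the atomistic calculation):
\begin{verbatim}
import numpy as np

def z_overlap(psi_e, psi_h, Z, dV):
    # |<psi_e| z |psi_h>| on a uniform grid
    return abs(np.sum(np.conj(psi_e) * Z * psi_h) * dV)

x = y = np.linspace(-10.0, 10.0, 41)   # nm
z = np.linspace(-3.0, 3.0, 25)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
dV = (x[1] - x[0]) * (y[1] - y[0]) * (z[1] - z[0])

# toy Gaussian envelopes; the hole is offset along [001]
psi_e = np.exp(-(X**2 + Y**2) / 16.0 - Z**2 / 2.25)
psi_h = np.exp(-(X**2 + Y**2) / 9.0 - (Z - 0.5)**2 / 1.0)
psi_e /= np.sqrt(np.sum(psi_e**2) * dV)
psi_h /= np.sqrt(np.sum(psi_h**2) * dV)
print(z_overlap(psi_e, psi_h, Z, dV))
\end{verbatim}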
Table~\ref{tab:table1} provides the ratios of the LH component in the top five valence band states for a few selected elliptical shapes with respect to the circular-base shape. For each case, the corresponding ratio of the spatial overlap (e1hi) is also provided in parentheses. Our calculations show an increasing LH mixing in the valence band states for the elliptical QDs with respect to the circular-base QD, in particular for the h1 state, which gives the dominant e1-h1 transition. The spatial overlap between the electron and hole wave functions also increases, and therefore a net increase in the TM$_{[001]}$ mode is calculated as a function of $\eta$. This characteristic of the pure InAs QDs is in contrast to the In$_{0.5}$Ga$_{0.5}$As ordered and disordered QDs reported by Singh \textit{et al.}\cite{Singh_1}, who have shown that the LH character of the h1 valence band state remains unchanged as a function of the QD elongation. Based on these results, we predict that the elliptical shape has a stronger impact on the polarization response of the pure InAs QDs than of the alloyed InGaAs QDs.
\begin{figure*}
\includegraphics[scale=0.3]{Figure6.png}
\caption{The plots of the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ are shown as a function of the (a) Type-I, (b) Type-IIv, and (c) Type-II QD elongations. The values of the DOP decrease irrespective of the direction of the QD elongation.}
\label{fig:Fig6}
\end{figure*}
\vspace{1mm}
\textbf{\textit{In-plane polarization anisotropy:}} Fig.~\ref{fig:Fig5}(d) plots the in-plane polarization anisotropy (Pol$_{||}$) as defined by Eq.~\ref{eq:pol} for the three types of elongations. We find that our results for Pol$_{||}$ are overall in agreement with the findings of Sheng \textit{et al.}\cite{Sheng_1}: when the height of the QD is kept fixed, similar elongations ($\eta$) result in nearly similar values of the in-plane anisotropy. We highlight four such cases in Fig.~\ref{fig:Fig5}(d) using green dotted circles, where nearly the same values of Pol$_{||}$ are calculated for the following sets of values of $\eta$: (1) 0.64 in Type-IIv and 0.67 in Type-II, (2) 0.82 in Type-II and 0.8 in Type-I, (3) 1.22 in Type-II and 1.25 in Type-I, and (4) 1.565 in Type-IIv and 1.50 in Type-II. Therefore, we conclude that the in-plane polarization anisotropy (Pol$_{||}$) depends only on the value of $\eta$, irrespective of the type of the elongation, provided that the height of the QD is kept fixed. We also find that our calculated values of Pol$_{||}$ for all three types of elongations roughly follow an inverse quadratic dependence on the elongation factor ($\eta$), in agreement with the quadratic dependence on the lateral aspect ratio ($\beta$) reported by Sheng \textit{et al.}\cite{Sheng_1}, as by definition $\eta$ = $\beta^{-1}$.
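The inverse quadratic trend can be checked with a two-line fit; the numbers below are synthetic placeholders that merely illustrate the procedure (our computed Pol$_{||}$ values are those plotted in Fig.~\ref{fig:Fig5}(d)):
\begin{verbatim}
import numpy as np

# Synthetic (eta, Pol) pairs for illustration only; if Pol is
# quadratic in beta = 1/eta (Sheng et al.), it is linear in 1/eta**2.
eta = np.array([0.54, 0.67, 0.82, 1.00, 1.22, 1.50, 1.86])
pol = 0.4 / eta**2 - 0.4          # placeholder inverse-quadratic data
slope, intercept = np.polyfit(1.0 / eta**2, pol, 1)
print(f"Pol ~ {slope:.2f}/eta^2 + {intercept:.2f}")
\end{verbatim}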
\textbf{\textit{Tuning of DOP$_{[\overrightarrow{n}]}$:}} Figs.~\ref{fig:Fig6}(a), (b), and (c) investigate the changes in the DOP$_{[\overrightarrow{n}]}$ for the three types of elliptical shapes. Significant changes in the values of the DOP$_{[\overrightarrow{n}]}$ are observed as a function of $\eta$. More interestingly, it is found that both DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ decrease irrespective of the type and direction of the elongation. This implies that the elliptical shape, in general, improves the polarization response of the flat QD compared to the circular base. The largest decrease ($\approx$23\%) in the value of the DOP$_{[110]}$ (from 97$\%$ to 74.5$\%$) is calculated for the [$\overline{1}$10] elongation ($\eta$ = 0.54).
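To put these percentages in perspective, if the DOP takes the conventional normalized-difference form DOP$_{[\overrightarrow{n}]}$ = (TE$_{[\overrightarrow{n}]}$ $-$ TM$_{[001]}$)/(TE$_{[\overrightarrow{n}]}$ $+$ TM$_{[001]}$) (an assumption of this aside; the precise definition used in this work appears earlier in the text), the quoted drop corresponds to the TE/TM intensity ratio falling by nearly an order of magnitude:
\begin{verbatim}
# TE/TM ratio implied by a normalized-difference DOP (assumed form)
def te_tm_ratio(dop):
    return (1.0 + dop) / (1.0 - dop)

for dop in (0.97, 0.745):
    print(f"DOP = {dop:.3f} -> TE/TM ~ {te_tm_ratio(dop):.1f}")
# 0.970 -> ~66;  0.745 -> ~7
\end{verbatim}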
\subsection{Tall Quantum Dot (AR=0.40)}
In this subsection, we study the impact of elliptical shapes on the electronic and polarization properties of a tall InAs QD, having the same base diameter (20 nm) as the flat QD of the previous subsection, but with a height of 8 nm (AR = h/d = 8/20 = 0.40). Such high-AR QDs are obtained using special growth conditions (very slow growth rate and high temperatures)\cite{Bimberg_1}, or are typically found in the optically active upper layers of weakly coupled bilayer QD stacks\cite{Usman_2}, where the presence of strain from the lower QD layers results in a larger size of the upper layer QDs. To our knowledge, no detailed theoretical investigation of the polarization properties of such tall elliptical dome-shaped QDs is available in the literature, as the previous theoretical studies have only focused on flat QD shapes with low ARs: AR = 2/20 = 0.1~\cite{Favero_1}, AR $\approx$ 0.17~\cite{Pryor_1}, AR = 3.5/20 = 0.175~\cite{Singh_2}, AR = 2/25 = 0.08~\cite{Mlinar_1}, and AR = 4.5/28.8 = 0.16~\cite{Sheng_1}.
Schliwa \textit{et al}.~\cite{Schliwa_1} applied \textbf{k}$\centerdot$\textbf{p} theory to study the electronic and optical properties of QDs by varying their AR from 0.17 to 0.5. However, in order to vary the AR, they chose to keep the QD volume constant by simultaneously changing both the base diameter and the height of the QD; their results therefore do not isolate the impact of varying only the lateral aspect ratio. Their study of base elongations, which focused on pyramidal-shaped QDs, shows that the linear and quadratic piezoelectric potentials fully cancel each other for all values of the AR $<$ 0.5, whereas our atomistic simulations presented in the previous subsection have already shown a non-zero net piezoelectric potential in the interior of a dome-shaped QD with AR = 0.225. We therefore conclude that their result cannot be generalized to all types of QDs. In order to extend our study of the previous subsection, here we investigate a tall QD by increasing only the height of the QD from 4.5 nm to 8 nm, keeping its base diameter the same (20 nm) as for the flat QD. This allows us to make a direct comparison between the results for the flat and the tall QDs. Once again, for all types of elongation, the height of the QD is kept constant at 8 nm.
\subsubsection{Electronic properties of the tall QD}
Fig.~\ref{fig:Fig7} plots the lowest three conduction band energies (e1, e2, e3) and the highest five valence band energies (h1, h2, h3, h4, h5) as a function of the elongation factor ($\eta$) for all of the three types of elongations: (a, d) Type-I, (b, e) Type-IIv, and (c, f) Type-II. In order to develop an understanding of these energy shifts, we also plot the wave functions and the piezoelectric potentials for a few selected values of $\eta$ in Figs.~\ref{fig:Fig8} and ~\ref{fig:Fig9}, respectively.
The piezoelectric potential plots in Fig.~\ref{fig:Fig9} show familiar quadrupole symmetry, but exhibit three noticeable differences when compared to the plots for the flat QD (Fig.~\ref{fig:Fig4}):
\begin{description}
\item [(i)] The interior of the QD is nearly field free, with only one peak at the QD interfaces. This is in agreement with Schliwa \textit{et al.}\cite{Schliwa_1} indicating a cancellation between the linear and the quadratic components inside the QD region.
\item [(ii)] The magnitude of the fields is much larger (nearly a factor of two).
\item [(iii)] A flip in the sign of the fields. The potential peaks are positive along the [$\overline{1}$10] direction and negative along the [110] direction.
\end{description}
It is interesting to note here that although the piezoelectric potential profiles are drastically different for the flat and the tall QDs, the electron and hole states for the circular-base case ($\eta$ = 1.0) align in the same directions for both: e2 along the [$\overline{1}$10] direction and all hole states along the [110] direction, as shown in Fig.~\ref{fig:Fig8}. Thus, an experimental measurement on a circular-base dome-shaped QD would not indicate any sign difference for e$_{[110]}$-e$_{[\overline{1}10]}$ as a function of the QD AR, even though the underlying physical details are quite different and require atomistic modeling with realistic simulation domains and physical parameters.
We also find that the hole wave functions for the tall QD are localized close to the QD interfaces due to the presence of the heavy hole (HH) pockets in the valence band edges. In our previous study\cite{Usman_5}, we have shown that such interfacial localization of the hole wave functions for a pure InAs QD starts when the AR is increased above 0.25. Similar results were presented by Narvaez \textit{et al.}\cite{Narvaez_1} for InAs QDs using pseudo-potential calculations, where they varied the QD AR from 0.198 (5/25.2) to 0.298 (7.5/25.2) and showed that the hole states tend to confine in the HH pockets at the QD interfaces for the tall QDs.
\begin{figure*}
\includegraphics[scale=0.28]{Figure7.png}
\caption{(a, b, c) The lowest three conduction band energy levels (e1, e2, and e3) are plotted as a function of the elongation factor ($\eta$) for the (a) Type-I, (b) Type-IIv, and (c) Type-II elongations. (d, e, f) The highest five valence band energy levels (h1, h2, h3, h4, and h5) are plotted as a function of the QD elongation factor ($\eta$) for the (d) Type-I, (e) Type-IIv, and (f) Type-II elongations. The corresponding increase/decrease in the optical gap energy (E$_{g}$) is also specified in each case by using the vertical arrows.}
\label{fig:Fig7}
\end{figure*}
\vspace{1mm}
\begin{figure*}
\includegraphics[scale=0.15]{Figure8.png}
\caption{The top view of the wave function plots for the lowest three conduction band (e1, e2, and e3) and the highest five valence band (h1, h2, h3, h4, and h5) states is shown for the circular-base QD and for selected elongations of the QD. The intensity of the colors in the plots represents the magnitude of the wave functions, with the dark red color indicating the largest magnitude and the light blue color indicating the smallest magnitude. The boundaries of the QDs are also shown to guide the eye.}
\label{fig:Fig8}
\end{figure*}
\vspace{1mm}
\begin{SCfigure*}
\includegraphics[scale=0.3]{Figure9.png}
\caption{The plots of the total (linear+quadratic) piezoelectric potentials are shown for (a) the circular-base QD, (b, d, f) the [110]-elongated QDs, and (c, e, g) the [$\overline{1}$10]-elongated QDs. In each case, the type and the magnitude of the elongation are specified. The solid red lines are plotted along the [$\overline{1}$10] direction through the center of the QD, 0.5 nm above its base. The dotted (broken) black lines are plotted along the [110] direction through the center of the QD, 0.5 nm above its base. The boundary of the QD region is also marked in each case by specifying the lengths of the QD along the [110] and [$\overline{1}$10] directions, d$_{[110]}$ and d$_{[\overline{1}10]}$.}
\label{fig:Fig9}
\end{SCfigure*}
\vspace{1mm}
\textit{\textbf{Lowest three conduction band states:}} As for the flat QD of the previous subsection, the hydrostatic strain component does not change as a function of $\eta$; since the conduction band energies are only affected by the hydrostatic strain, the strain contribution to their shifts is negligible. The piezoelectric fields also change only slightly when the QD is elongated (Fig.~\ref{fig:Fig9}), so the contribution of the piezoelectric fields to the shifts of the electron energies is minor, with the dominating contributions coming from $\bigtriangleup$V and $\bigtriangleup$d.
The lowest conduction band state e1 has s-type symmetry and the impact of $\bigtriangleup$d on its energy is negligible. The Type-I elongation considerably reduces the QD volume (see Fig.~\ref{fig:Fig1}(f)), and therefore e1 energy increases. When the QD volume is unchanged as in the case of the Type-IIv elongation or is only slightly decreased as in the case of the Type-II elongation, e1 shows a very small change. The wave function plots for the e1 state indicate s-type symmetry with only slight elongation, mainly for the [$\overline{1}$10] oriented Type-IIv and Type-II elongations.
The excited electron states, e2 and e3, are separated by $\approx$8.8 meV for the circular-base QD ($\eta$=1.0), which is around four times larger than the $\approx$2.4 meV splitting for the flat QD. This larger splitting is mainly due to the larger (nearly twice) magnitude of the piezoelectric potentials as evident from the comparison of Figs.~\ref{fig:Fig4} and ~\ref{fig:Fig9}.
For the Type-I elongation (Fig.~\ref{fig:Fig7}(a)), the large reduction in the QD volume shifts both e2 and e3 towards higher energies. The energy difference $\bigtriangleup$e$_p$ = e3-e2 increases for the [$\overline{1}$10] elongation and decreases for the [110] elongation, in agreement with what Schliwa \textit{et al.}~\cite{Schliwa_1} calculated for the square-base pyramidal-shaped QDs, where they reported electron p-state degeneracy for the [110] base elongation.
When the overall QD volume is fixed, as in the [$\overline{1}$10] Type-IIv elongation, the $\bigtriangleup$d-induced shift lowers the energy of the [$\overline{1}$10]-oriented e2 and increases the energy of the [110]-oriented e3, as shown in Fig.~\ref{fig:Fig7}(b). Since e2 and e3 go through a flip of their orientations for the [110] Type-IIv elongation, only a very small change in their energies is observed (from 8.8 meV to 9.3 meV).
Finally, the changes in the energies of e2 and e3 for the Type-II elongations (Fig.~\ref{fig:Fig7}(c)) follow trends similar to those previously calculated for the flat QD (Fig.~\ref{fig:Fig2}(c)). The two competing factors ($\bigtriangleup$V and $\bigtriangleup$d) contribute to the energy shifts, with $\bigtriangleup$d being dominant for the small elongations ($\eta$ = 0.67, 0.82, 1.22, and 1.5) and $\bigtriangleup$V taking over for the large elongations ($\eta$ = 0.54 and 1.86).
\textbf{\textit{Separation between the lowest two electron energies:}} The difference between the lowest two conduction band energy levels ($\bigtriangleup$e$_{21}$ = e2-e1), which is important for the robustness of laser designs, increases for the Type-I elongations, with the largest increase of $\approx$8 meV calculated for the [110] elongation. For the Type-II and Type-IIv elongations, only minor reductions in $\bigtriangleup$e$_{21}$ are calculated. Therefore, we extend our conclusion of the previous subsection: the elongations of both the flat and the tall QDs do not deteriorate $\bigtriangleup$e$_{21}$ for the implementation of laser operation.
\textit{\textbf{Highest five valence band energies:}} While the conduction band energy level shifts for the tall QD closely resemble the corresponding shifts for the flat QD, the hole energy level shifts show significant contrasts. The main reason for this different behavior is the dissimilarity of the net piezoelectric potential profiles for the two types of QDs, as evident from the comparison of Figs.~\ref{fig:Fig4} and ~\ref{fig:Fig9}, which leads to very different confinements of the hole wave functions (see Figs.~\ref{fig:Fig3} and ~\ref{fig:Fig8}). The large negative peaks of the potential at the QD interfaces along the [110] direction and the nearly field free interior of the tall QD result in the hole wave functions being oriented along the [110] direction irrespective of the type and the direction of the elongation, as shown in Fig.~\ref{fig:Fig8}. Furthermore, the HH pockets at the QD interfaces confine the hole wave functions at those interfaces.
\begin{figure*}
\includegraphics[scale=0.3]{Figure10.png}
\caption{The polarization dependent optical transition modes TE$_{[110]}$, TE$_{[\overline{1}10]}$, and TM$_{[001]}$ are drawn as a function of the (a) Type-I, (b) Type-IIv, and (c) Type-II QD elongations. The figures are plotted using the same scales to facilitate easy mutual comparison. (d) The plots of the in-plane polarization anisotropy (Pol$_{||}$) as defined by Eq.~\ref{eq:pol} are shown for the three types of elongations, exhibiting an inverse quadratic dependence on $\eta$, consistent with Sheng \textit{et al.}\cite{Sheng_1}. Four cases are also marked by green ovals, indicating that two different types of elongations with roughly similar values of $\eta$ exhibit similar values of Pol$_{||}$.}
\label{fig:Fig10}
\end{figure*}
\vspace{1mm}
For the Type-I elongation (Fig.~\ref{fig:Fig7}(d)), both the large decrease in the QD volume ($\bigtriangleup$V) and a small relaxation of the biaxial strain push the hole energy levels towards lower energies. When the QD is elliptical along the [110] direction, the hole wave functions, which are also oriented along this direction, experience a downward shift in their energies; thus all three factors add up to produce a stronger reduction in the hole energy levels. In the case of the [$\overline{1}$10] elongation, d$_{[110]}$ is reduced, which pushes the hole energy levels upward. Although the cumulative downward shift from $\bigtriangleup$V and the biaxial strain relaxation remains dominant overall, for some values of $\eta$ the $\bigtriangleup$d-induced upward shift is evident.
As $\bigtriangleup$V=0 for the Type-IIv elongation and the biaxial strain relaxation is much weaker than in the Type-I case, only $\bigtriangleup$d-induced shifts are observed in Fig.~\ref{fig:Fig7}(e): the increase in d$_{[110]}$ decreases the hole energies (for the [110] elongation) and the decrease in d$_{[110]}$ increases the hole energies (for the [$\overline{1}$10] elongation).
Finally, for the Type-II elongation in Fig.~\ref{fig:Fig7}(f), $\bigtriangleup$V is quite small and the biaxial strain relaxation again causes only a small downward shift in the hole energies. The dominant shift comes from $\bigtriangleup$d. Therefore, the hole energies move towards lower values for the [110] elongation and shift towards higher values for the [$\overline{1}$10] elongation.
\textit{\textbf{Optical gap energy, E$_{g}$:}} The changes in the optical gap energy, E$_{g}$ = e1 - h1, are also marked with the help of vertical red arrows for the three types of elongations in Fig.~\ref{fig:Fig7}. For the Type-I elongation, the increase in the e1 energy and the decrease in the h1 energy imply a blue shift of E$_{g}$ irrespective of the orientation of the elongation, the same as calculated earlier for the flat QD. However, whereas for the flat QD E$_{g}$ red shifts for both the Type-II and the Type-IIv elongations independent of their orientations, for the tall QD the changes in E$_{g}$ depend on the orientation of the elongation: E$_{g}$ red shifts for the [$\overline{1}$10] elongations, whereas it blue shifts for the [110] oriented elongations.
\subsubsection{Polarization properties of the tall QD}
The polarization properties of the tall QD are significantly different from those of the flat QD of the previous subsection because of two major differences in the hole wave functions, present even for the circular-base case, as evident from the comparison of Figs.~\ref{fig:Fig3} and ~\ref{fig:Fig8}: all of the hole wave functions for the tall QD are oriented along the [110] direction and are confined inside HH pockets at the QD interfaces, in contrast to the flat QD, where the hole wave functions are either nearly symmetric at the QD center or [$\overline{1}$10] oriented. Thus, for the circular-base tall QD, we calculate TE$_{[110]} <$ TE$_{[\overline{1}10]}$, and overall the TE and TM modes have smaller magnitudes than for the flat QD due to the relatively smaller spatial overlaps between the electron and the hole wave functions.
\begin{table*}
\caption{\label{tab:table2} The ratio of the LH components in the highest five valence band states (h1, h2, h3, h4, h5) of the elliptical-shaped tall QD with respect to the LH components of the corresponding valence band states of the circular-base ($\eta$ = 1.0) QD for a few selected values of the elongation factor, $\eta$. For each case, the corresponding ratio of the electron-hole wave function spatial overlap along the $\vec{z}$ = [001] direction given by e1hi = $| \langle \psi_{e1} | \vec{z} | \psi_{hi} \rangle |$ is also provided.}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|l|l|l|l|l|l|}
\multicolumn{7}{c}{} \\%[3pt]
\cline{1-7}
\multicolumn{1}{|c|}{\textbf{Elongation type}} &
\multicolumn{1}{c|}{\textbf{$\pmb{\eta}$}} &
\multicolumn{1}{c|}{\textbf{h1 (e1h1)}} &
\multicolumn{1}{c|}{\textbf{h2 (e1h2)}} &
\multicolumn{1}{c|}{\textbf{h3 (e1h3)}} &
\multicolumn{1}{c|}{\textbf{h4 (e1h4)}} &
\multicolumn{1}{c|}{\textbf{h5 (e1h5)}} \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-I}}} & 1.25 & 1.17 (1.42) & 1.18 (1.28) & 1.21 (1.40) & 1.21 (1.53) & 1.12 (1.25) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.80 & 1.07 (1.84) & 1.05 (1.89) & 1.04 (1.32) & 1.06 (1.50) & 1.00 (1.64) \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-IIv}}} & 1.565 & 1.17 (1.45) & 1.15 (1.63) & 1.18 (1.73) & 1.15 (3.00) & 1.10 (3.89) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.64 & 0.93 (2.92) & 0.93 (3.34) & 0.85 (1.85) & 0.86 (2.14) & 0.86 (2.66) \\
\cline{1-7}
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Type-II}}} & 1.86 & 1.27 (3.29) & 1.26 (4.95) & 1.00 (5.80) & 1.08 (3.86) & 1.15 (4.39) \\
\cline{2-7}
\multicolumn{1}{|c|}{} & 0.54 & 1.00 (5.88) & 1.00 (6.17) & 0.86 (3.07) & 0.89 (3.19) & 0.90 (4.06) \\
\cline{1-7}
\end{tabular}
\end{table*}
\begin{figure*}
\includegraphics[scale=0.32]{Figure11.png}
\caption{The plots of the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ are shown as a function of the (a) Type-I, (b) Type-IIv, and (c) Type-II elongations. A large decrease in the value of the DOP$_{[\vec{n}]}$ along the minor-axis of the elliptical QDs is calculated.}
\label{fig:Fig11}
\end{figure*}
\vspace{1mm}
We also note that for the tall QD, the lower lying hole wave functions have a larger contribution to the ground state optical transition intensity than for the flat QDs, where mainly the e1-h1 transition is dominant\cite{Usman_5}. The dependence on the elongation factor is also quite different, with all the polarization modes TE$_{[110]}$, TE$_{[\overline{1}10]}$, and TM$_{[001]}$ increasing irrespective of the type and the orientation of the elongation. The large increase in the TE$_{[\overrightarrow{n}]}$ mode for the elongation along [$\overrightarrow{n}$] is the same as calculated for the flat QD and is explained in Sec.~IV-A-2. However, the small increase in the other TE mode is attributed to an increased electron-hole spatial overlap due to the elliptical shape of the QD.
Table~\ref{tab:table2} provides the ratio of the LH components in the top five valence band states for the elliptical QDs with respect to the circular-base QD. In each case, we also provide the corresponding ratio of the spatial overlap between the electron and hole wave functions (e1hi) along the [001]-direction, as defined earlier for the flat QD. The increase in the TM$_{[001]}$ mode is directly related to an increase in the LH mixing of the valence band states; however, for a given LH mixing, the magnitude of the TM$_{[001]}$ mode is proportional to the spatial overlap between the electron and hole wave functions along the [001] direction.
As the shape of the QD is elongated, the electron wave function e1 also gets elongated along the major-axis of the ellipse, as evident from Fig.~\ref{fig:Fig8}. This increased spread of the e1 wave function results in a relative increase in its spatial overlap with the hole wave functions for the elliptical QDs with respect to the circular-base QD, as is also noticeable from the values of e1hi provided in Table~\ref{tab:table2}. For $\eta >$ 1.0 ([110] elongation), the LH mixing increases and, along with an increase in e1hi, is responsible for the enhanced TM$_{[001]}$ mode. The [$\overline{1}$10] elongation ($\eta <$ 1.0) slightly reduces the LH character of the valence band states; however, as the hole wave functions are oriented along the minor-axis, a large increase in the electron-hole wave function spatial overlap overcomes the small ($<$ 15\%) decrease in the LH component and therefore causes a net increase in the TM$_{[001]}$ mode strength. It should also be noted that for the same amount of elongation, $\eta >$ 1.0 results in a larger TM$_{[001]}$ mode than $\eta <$ 1.0. This is because for $\eta >$ 1.0 both the LH mixing and the spatial overlap increase, whereas for $\eta <$ 1.0 the LH mixing decreases and only the spatial overlap increases.
\textbf{\textit{In-plane polarization anisotropy:}} As shown in Fig.~\ref{fig:Fig10}(d), the dependence of Pol$_{||}$ on the elongation factor ($\eta$) for the tall QD is quite similar to the case of the flat QD. The calculated values of Pol$_{||}$ for all three types of elongations again follow an inverse quadratic dependence. Furthermore, for the fixed height of the QD, similar values of $\eta$ exhibit nearly the same in-plane anisotropy. We highlight four such cases in Fig.~\ref{fig:Fig10}(d) using green dotted circles, where nearly the same values of Pol$_{||}$ are calculated for the following sets of values of $\eta$: (1) 0.64 in Type-IIv and 0.67 in Type-II, (2) 0.82 in Type-II and 0.8 in Type-I, (3) 1.22 in Type-II and 1.25 in Type-I, and (4) 1.565 in Type-IIv and 1.50 in Type-II. Therefore, we extend our conclusion of the previous subsection that the in-plane polarization anisotropy (Pol$_{||}$) depends only on the value of $\eta$, irrespective of the type and orientation of the elongation for both flat and tall QDs, provided the height of the QDs is kept fixed.
\textbf{\textit{Tuning of DOP$_{[\overrightarrow{n}]}$:}} To conclude this discussion of the polarization response of the tall QD, we compare the values of the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ for the three types of elongations in Figs.~\ref{fig:Fig11}(a), (b), and (c). Since TE$_{[110]} >$ TE$_{[\overline{1}10]}$ for the circular base ($\eta$ = 1.0), a much larger difference ($\approx$13$\%$) is present between the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ at $\eta$ = 1.0, compared to only a $\approx$0.2$\%$ difference for the flat QD. This difference is in agreement with the previous comparison\cite{Usman_5} between similar flat and tall QDs and is attributed to the stronger orientation and confinement of the hole wave functions for the tall QDs.
The elliptical shape of the tall QD once again reduces the value of the DOP$_{[\overrightarrow{n}]}$ along its minor-axis. This is mainly because the TM$_{[001]}$-mode increases for the elongated QDs, whereas the corresponding TE-mode along the minor-axis does not increase much. In contrast to the case of the flat QD, however, the elliptical shape does not reduce the value of the DOP$_{[\overrightarrow{n}]}$ along the major-axis, which in fact first slightly increases and then remains nearly constant. This is because for the flat QD the TE-mode is much larger than the TM-mode, and hence the values of the DOP$_{[\overrightarrow{n}]}$ are not very sensitive to changes in the TM-mode. For the tall QD, the TE-modes are comparatively smaller in magnitude due to the smaller electron-hole spatial overlaps, so the values of the DOP$_{[\overrightarrow{n}]}$ become more sensitive to the changes in the TE- and TM-modes. For small values of $\eta$, a larger increase in the TE-mode produces a slight increase in the values of the DOP$_{[\overrightarrow{n}]}$. For large values of $\eta$, the increase in the TM-mode also becomes important and hence the values of the DOP$_{[\overrightarrow{n}]}$ do not show any further increase.
To summarize this subsection, we find that overall a much larger tuning of the polarization response is possible by elongating the tall QDs. The largest reduction ($\approx$51$\%$) in the value of the DOP$_{[\overrightarrow{n}]}$ is calculated for the Type-II elongation at $\eta$ = 0.54. Since a red shift of the optical wavelength and an isotropic polarization response are both desired for the design of optical devices operating at telecommunication wavelengths (1.3-1.5 $\mu$m), our model calculations suggest that the Type-II [$\overline{1}$10] elongations are the most suitable, as they fulfil both requirements (see Figs.~\ref{fig:Fig7}(c), (f) and Fig.~\ref{fig:Fig11}(c)). It should also be noted that the confinement of the hole wave functions at the interfaces of the circular-base tall QDs reduces the oscillator strengths by an order of magnitude when compared to the circular-base flat QDs. However, with the elliptical shapes, the oscillator strengths increase and even become comparable to those of the flat QDs, in particular for the large Type-II elongations.
\subsection{Vertical Stack of Nine QDs (9-VQDS)}
Vertical stacks of QDs (VSQDs) have shown great potential for the tuning of polarization properties. Recent experiments~\cite{Inoue_1, Alonso_1, Humlicek_1, Fortunato_1} and theoretical investigations~\cite{Usman_1, Saito_1} have demonstrated that an isotropic polarization response can be realized by geometrical engineering of the VSQDs. In this subsection, we study a vertical stack of nine closely spaced QD layers (9-VSQDs), as shown in the schematic diagram of Fig.~\ref{fig:Fig1}(c). The 9-VSQDs has been a topic of recent studies~\cite{Usman_1, Inoue_1, Saito_1} due to its significant technological relevance for achieving isotropic polarization in the implementation of QD based SOAs. The optimized geometrical parameters of the 9-VSQDs are chosen directly from the experiment~\cite{Inoue_1}, so that our results remain relevant to the experimental community.
We have recently shown\cite{Usman_1} by experimental PL measurements and theoretical calculations that the 9-VSQDs can exhibit TM$_{[001]} >$ TE$_{[110]}$, leading to DOP$_{[110]} < 0$. However, a significant anisotropy in the in-plane TE-mode was measured, resulting in TE$_{[\overline{1}10]} >$ TM$_{[001]}$ and DOP$_{[\overline{1}10]} >$ 0. Similar anisotropies in the DOP$_{[\overrightarrow{n}]}$ were independently measured by Alonso-Alvarez \textit{et al.}~\cite{Alonso_1} and Humlicek \textit{et al.}~\cite{Humlicek_1}, which they were unable to explain. Our multi-million atom simulations~\cite{Usman_1}, assuming a circular base for the QDs, qualitatively explained that this anisotropy (DOP$_{[110]} \neq$ DOP$_{[\overline{1}10]}$) is due to a strong confinement of the hole wave functions at the interfaces of the QDs (similar to the case of the tall QD of the previous subsection), which tend to align along the [$\overline{1}10]$-direction and thus significantly reduce the TE$_{[110]}$-mode. The TE$_{[\overline{1}10]}$-mode, on the other hand, does not experience any such decrease. The small increase in the TM$_{[001]}$ mode due to the relaxation of the biaxial strain, in particular around the center of the 9-VSQDs\cite{Usman_1, Saito_1}, is also not sufficient to overcome the TE$_{[\overline{1}10]}$ mode, and thus the DOP$_{[\overline{1}10]}$ remains considerably larger than zero.
\begin{figure*}
\includegraphics[scale=0.3]{Figure12.png}
\caption{The plots of (a) the lowest three conduction band energies (e1, e2, e3) and (b) the highest five valence band energies (h1, h2, h3, h4, h5) for the 9-VSQDs as a function of the Type-II elongation factor, $\eta$. All of the energies increase irrespective of the orientation of the elongation. (c, d) The plots of the electron and hole energies as in (a) and (b), but without including the effect of strain. In the absence of strain, only $\bigtriangleup$d and $\bigtriangleup$V contribute in the energy shifts. (e) The plots of the biaxial strain component ($\epsilon_{xx}+\epsilon_{yy}-2\epsilon_{zz}$) along the [110] direction through the center of the 9-VSQDs with circular-base ($\eta$=1) and the [110] elongated base ($\eta$=1.75). (f) The plots of the biaxial strain component ($\epsilon_{xx}+\epsilon_{yy}-2\epsilon_{zz}$) along the [$\overline{1}$10] direction through the center of the 9-VSQDs with circular-base ($\eta$=1) and the [$\overline{1}$10] elongated base ($\eta$=0.57).}
\label{fig:Fig12}
\end{figure*}
\vspace{1mm}
The good agreement of our theoretical results with the experimental PL measurements, even for an ideal circular-base 9-VSQDs shape, raises a fundamental question: how much do the realistic shapes, which are normally elongated, contribute? Here we systematically elongate the QD layers inside the 9-VSQDs base and analyse the impact on the electronic and polarization properties. In contrast to the single QDs of the previous two subsections, where the QD base elongations result in a significant tuning of the DOP$_{[\overrightarrow{n}]}$, the magnitude of the DOP$_{[\overrightarrow{n}]}$ for the 9-VSQDs is relatively insensitive to the value of $\eta$. However, the sign of the DOP$_{[\overrightarrow{n}]}$ is a strong function of the orientation of the elongation, and even a very small elongation (0.5-1.0 nm) is sufficient to control the sign of the DOP$_{[\overrightarrow{n}]}$. Furthermore, we explore the possibility of achieving DOP$_{[\overrightarrow{n}]} <$ 0 for both $[\overrightarrow{n}]$ = [110] and [$\overline{1}$10] by elongating the 9-VSQDs along the [110] direction. Our calculations predict that such a scenario is not possible due to the very high sensitivity of the DOP$_{[\overrightarrow{n}]}$ to the value of $\eta$; the value of the DOP$_{[\overrightarrow{n}]}$ may be reduced below zero for only one of the two spatial directions.
\begin{figure*}
\includegraphics[scale=0.25]{Figure13.png}
\caption{Top views of the wave function plots for the lowest three conduction band (e1, e2, and e3) and the highest five valence band (h1, h2, h3, h4, and h5) states are shown for the circular-base 9-VSQDs and two selected Type-II elongations of the 9-VSQDs. The intensity of the colors in the plots represents the magnitude of the wave functions, with the dark color indicating the smallest magnitude and the light green color indicating the largest magnitude. The boundaries of the QDs are also shown to guide the eye.}
\label{fig:Fig13}
\end{figure*}
\vspace{1mm}
\subsubsection{Electronic properties of the 9-VSQDs}
\textbf{\textit{Lowest three conduction band energies:}} Fig.~\ref{fig:Fig12}(a) plots the lowest three conduction band energies (e1, e2, e3) as a function of the elongation factor ($\eta$) for the Type-II elongation. The corresponding wave functions are shown in Fig.~\ref{fig:Fig13}, indicating that all of the lowest three conduction band states have s-type symmetry. Due to the strong coupling between the closely spaced QD layers, these electron states are hybridized over multiple QD layers\cite{Usman_1}. Only a very small increase (less than 10 meV) in the energies of e1, e2, and e3 is calculated as the 9-VSQDs is elongated. Due to the s-type symmetry, the effect of $\bigtriangleup$d is negligible. We also find that, in contrast to the single QDs, the energies of these molecular electron states are insensitive to $\bigtriangleup$V, as confirmed by Fig.~\ref{fig:Fig12}(c), where no shift in the e1, e2, and e3 energies is calculated when the impact of strain is excluded. This is because the electron wave functions are spread over multiple QD layers and also occupy the GaAs spacers in between them, so the small decrease in the QD volume due to the Type-II elongation has only a negligible impact on their energies. Our calculations find that the hydrostatic component of the strain ($\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$) slightly increases for the elongated 9-VSQDs and results in a small upward shift of the conduction band energies. Also, the shifts in the energies are nearly equal for e1, e2, and e3, and therefore the separations between them remain nearly unchanged.
\begin{figure*}
\includegraphics[scale=0.45]{Figure14.png}
\caption{(a) The plots of the polarization dependent TE and TM modes as a function of the Type-II elongation factor $\eta$ for the 9-VSQDs. (b) Plot of the Pol$_{||}$ as a function of the Type-II elongation factor for the 9-VSQDs. The plots indicate a high degree of in-plane anisotropy for the 9-VSQDs, which is very sensitive to the elongation factor. (c) Plots of the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ as a function of the Type-II elongation factor for the 9-VSQDs. For the circular-base 9-VSQDs, $\eta$=1.0, the DOP$_{[110]}$ is negative and the DOP$_{[\overline{1}10]}$ is positive. As we [110]-elongate the 9-VSQDs by 1 nm (marked by the blue oval), the DOP$_{[\overline{1}10]}$ becomes close to zero; however, the DOP$_{[110]}$ drastically increases to +0.60, suggesting that it is not possible to simultaneously engineer both DOPs below or close to zero for this 9-VSQDs using [110] elongation engineering.}
\label{fig:Fig14}
\end{figure*}
\vspace{1mm}
\textbf{\textit{Highest five valence band energies:}} Fig.~\ref{fig:Fig12}(b) plots the energies of the highest five valence band states (h1, h2, h3, h4, and h5) as a function of the Type-II elongation. All of the hole energies increase for both the [110] and the [$\overline{1}10$] elongations. Fig.~\ref{fig:Fig13} shows the top view of the hole wave functions for $\eta$=0.57, 1.0, and 1.75. For the circular-base 9-VSQDs, all the hole wave functions are aligned along the [$\overline{1}$10] direction due to the presence of the HH pockets, similar to the case of the single QDs with large aspect ratios\cite{Usman_5, Narvaez_1} and the bilayer QDs.\cite{Usman_2}
For the elliptical 9-VSQDs, the hole wave functions align along the major-axis. This is due to the larger biaxial strain along this direction, which pushes the HH pockets towards higher energies, and hence the hole wave functions residing in these pockets also move towards higher energies. Figs.~\ref{fig:Fig12}(e) and (f) plot the biaxial strain component ($\epsilon_{xx}+\epsilon_{yy}-2\epsilon_{zz}$) through the center of the 9-VSQDs along the [110] and the [$\overline{1}$10] directions, respectively. The large increase in the biaxial strain is clearly evident for the elliptical 9-VSQDs when compared to the circular-base case. Since the hole energies are pushed up by an increase in the biaxial strain component, this shift dominates the small downward shift due to the small increase in the hydrostatic component ($\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$). As a result, all of the hole energies shift towards higher values, as evident from Fig.~\ref{fig:Fig12}(b). The contributions from $\bigtriangleup$d and $\bigtriangleup$V are also very small, as confirmed by Fig.~\ref{fig:Fig12}(d), where the strain effect is excluded and the hole energies move towards lower values due to the combined shift induced by $\bigtriangleup$d and $\bigtriangleup$V.
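For orientation, in a continuum Bir--Pikus picture (a standard estimate, not the atomistic Hamiltonian employed in this work) the HH--LH band-edge splitting scales with this biaxial component,
\begin{equation*}
\delta E_{\mathrm{HH}} - \delta E_{\mathrm{LH}} \;\propto\; b\,(\epsilon_{xx}+\epsilon_{yy}-2\epsilon_{zz}),
\end{equation*}
where $b$ is the valence band shear deformation potential, so a larger biaxial component along the major-axis directly translates into larger shifts of the HH pocket energies.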
In Fig.~\ref{fig:Fig12}(b), the relatively smaller increase in the hole energies for the [110] elongation compared to the [$\overline{1}$10] elongations is due to the fact that all of the hole wave functions are initially oriented along the [$\overline{1}$10] direction for the circular base ($\eta$ = 1.0), and they go through a 90$^\circ$ rotation to align along the [110] direction for the [110] elongation. It should also be noted that the hole energy separations reduce as a function of the elongation factor ($\eta$) for the 9-VSQDs, suggesting even larger contributions from the lower lying valence band states to the ground state optical intensity measured at room temperature for the elliptical 9-VSQDs.
\begin{figure*}
\includegraphics[scale=0.33]{Figure15.png}
\caption{Normalized Polar plots for the 9-VSQDs system with (a) circular-base, (b) Type-II [110]-elongation ($\eta$=1.095), and (c) Type-I [110]-elongation ($\eta$=1.047). In each case, the polarization direction
of the incident light (n) is kept along the [110]-direction (red squares) for the TE$_{[\overline{1}10]}$-mode and along the [$\overline{1}$10]-direction (blue circles) for the TE$_{[110]}$-mode. The polar plots based on the cumulative sum of the optical transition strengths between the lowest conduction band state and the highest five valence band states are drawn with respect to the angle $\theta$ between the [001]-direction and either the [110] or the [$\overline{1}$10] directions.}
\label{fig:Fig15}
\end{figure*}
\vspace{1mm}
\textbf{\textit{Optical gap energy, E$_{g}$:}} From Figs.~\ref{fig:Fig12}(a) and (b), both the electron and hole energies increase as a function of the elongation; however, the hole energies have a larger slope and therefore the optical gap energy, E$_{g}$ = e$_{1}$ - h$_{1}$, decreases as a function of the elongation factor ($\eta$). We calculate a maximum red shift of $\approx$30 meV in E$_g$ for $\eta$ = 0.57.
\subsubsection{Polarization properties of the 9-VSQDs}
Fig.~\ref{fig:Fig14}(a) plots the polarization dependent TE$_{[110]}$, TE$_{[\overline{1}10]}$, and TM$_{[001]}$-modes as a function of the Type-II elongation. For the circular-base ($\eta$=1.0) case, TE$_{[110]} <$ TM$_{[001]}$ and TE$_{[\overline{1}10]} >$ TE$_{[110]}$.
When the base of the 9-VSQDs is elongated, the increase in the biaxial strain component (see Figs.~\ref{fig:Fig12}(e) and (f)) increases the splitting between the LH and the HH bands, which should decrease the TM$_{[001]}$-mode. However, the small increase in the TM$_{[001]}$-mode plotted in Fig.~\ref{fig:Fig14}(a) is due to the hole wave functions shifting towards the middle of the 9-VSQDs for the elliptical shapes, where the HH/LH intermixing is larger than at its edges\cite{Usman_1}. For example, by comparing the $\eta$=1.0 and $\eta$=0.57 cases, we find the following shifts in the spatial positions of the topmost five hole wave functions within the 9-VSQDs: h1 moves from QD layer 2 to QD layer 3, h2 moves from QD layer 3 to QD layer 5, h3 moves from QD layer 2 to QD layer 4, h4 stays in QD layer 8, and h5 moves from QD layer 3 to QD layer 5; here the QD layers in the 9-VSQDs are numbered from 1 to 9 starting from the bottom towards the top, as indicated in the schematic diagram of Fig.~\ref{fig:Fig1}(c). Similar trends in the spatial confinements of the hole wave functions are observed for the other values of $\eta$ and are responsible for the small increase in the TM$_{[001]}$-mode as a function of $\eta$.
The magnitude of the TE$_{[\overrightarrow{n}]}$ modes is calculated to be very sensitive to the elongation of the 9-VSQDs. Even for very small [110] elongations ($\leq$ 1 nm), the TE$_{[110]}$ mode quickly increases above the TM$_{[001]}$-mode. For example, for $\eta$=1.095, the TE$_{[\overline{1}10]}$ mode has reduced to become close to the TM$_{[001]}$ mode, but the TE$_{[110]}$ mode has already increased by a factor of $\approx$13.
It should also be noted that while the elliptical shape increases the TE$_{[\overrightarrow{n}]}$ mode along its major-axis, it has very little impact on the TE-mode along its minor-axis. When the 9-VSQDs is elongated along the [$\overline{1}$10] direction, the TE$_{[\overline{1}10]}$ mode increases, but the TE$_{[110]}$ mode remains nearly unchanged. Similarly, for the [110] elongation, the TE$_{[\overline{1}10]}$-mode quickly decreases and then remains nearly unchanged for $\eta >$ 1.2.
\textbf{\textit{In-plane polarization anisotropy:}} Fig.~\ref{fig:Fig14}(b) plots the in-plane polarization anisotropy (Pol$_{||}$) defined by Eq.~\ref{eq:pol} as a function of the Type-II elongation factor ($\eta$). The large magnitudes of Pol$_{||}$ for both the [110] and the [$\overline{1}10$] elongations indicate a high degree of in-plane polarization anisotropy for the 9-VSQDs. Even for the perfectly circular base ($\eta$=1.0), Pol$_{||}$ is $\approx$0.82, indicating that TE$_{[\overline{1}10]} \gg$ TE$_{[110]}$. Any elongation along the [$\overline{1}10$]-direction further increases this anisotropy. The [110]-elongation sharply increases the TE$_{[110]}$-mode and changes the sign of Pol$_{||}$ because of the 90$^\circ$ rotation of the hole wave functions (see Fig.~\ref{fig:Fig13}). Even an elongation as small as 1 nm along the [110]-direction can change the value of Pol$_{||}$ from +0.82 to -0.51. Therefore, we conclude that the 9-VSQDs exhibits highly anisotropic in-plane polarization. Similar in-plane polarization anisotropies were measured by Alonso-Alvarez \textit{et al.}\cite{Alonso_1} and Humlicek \textit{et al.}\cite{Humlicek_1} for the vertical QD stacks.
It should also be noted that whereas the in-plane polarization (Pol$_{||}$) for the single QDs, irrespective of their AR, exhibits an inverse quadratic relation with respect to $\eta$ (see Figs.~\ref{fig:Fig5}(d) and ~\ref{fig:Fig10}(d)), it demonstrates nearly a step function like dependence on $\eta$ for the strongly coupled 9-VSQDs.
\textbf{\textit{Tuning of DOP$_{[\overrightarrow{n}]}$:}} Fig.~\ref{fig:Fig14}(c) plots the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ as a function of the Type-II elongation. For the ideal circular-base case ($\eta$ = 1.0), the DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ have values of -0.45 and +0.6, respectively. This is in qualitative agreement with the experimental PL measurements\citep{Inoue_1} and raises the question of how much impact a realistic elliptical shape would have. Our calculations show that the orientation of the base elongation determines the sign of the DOP$_{[\overrightarrow{n}]}$, whereas the magnitude of the elongation (the value of $\eta$) has very little impact on the magnitude of the DOP$_{[\overrightarrow{n}]}$. This is clearly evident from Fig.~\ref{fig:Fig14}(c) for $\eta \leq$ 1.0 and for $\eta \geq$ 1.2, where only a very small change in the magnitude of the DOP$_{[\overrightarrow{n}]}$ is observed as the value of $\eta$ is changed.
The strong dependence of the sign of the DOP$_{[\overrightarrow{n}]}$ on the orientation of the elongation is highlighted by the oval in Fig.~\ref{fig:Fig14}(c), where even for a 1 nm [110]-elongation, the DOP$_{[110]}$ drastically changes its sign, going from -0.45 to +0.6. This large change in the value of the DOP$_{[\overrightarrow{n}]}$ for 1.0 $< \eta <$ 1.2 is remarkable, as it indicates that only a very small shape asymmetry is capable of overcoming the effect of atomistic symmetry lowering. This also implies that the elliptical shape of the 9-VSQDs cannot be exploited to simultaneously engineer both DOP$_{[110]}$ and DOP$_{[\overline{1}10]}$ below zero.
We want to highlight that although a tuning of the DOP$_{[\overrightarrow{n}]}$ over a wide range of values is possible by elongating the single QDs, the DOP$_{[\overrightarrow{n}]}$ of the 9-VSQDs remains relatively insensitive to the magnitude of $\eta$. Therefore, we expect that the elongation of the 9-VSQDs would not offer much improvement in its polarization response.
\textit{\textbf{Polar plots:}} The strong impact of the [110]-elongations on the polarization properties of the 9-VSQDs is further confirmed in Fig.~\ref{fig:Fig15} by comparing the normalized polar plots for (a) circular-base ($\eta$=1.0), (b) Type-II 1 nm [110]-elongation ($\eta$=1.095), and (c) Type-I 1 nm [110]-elongation ($\eta$=1.047). In each case, two polar plots are drawn: (i) as a function of the angle $\theta$ between the [001]-direction and the [110]-direction for the TE$_{[110]}$ (blue circles); (ii) as a function of the angle $\theta$ between the [001]-direction and the [$\overline{1}10$]-direction for the TE$_{[\overline{1}10]}$ (red squares).
As the 9-VSQDs is elongated, a 90$^\circ$ rotation of the polar plots is calculated for the TE$_{[110]}$-modes in both the Type-I and the Type-II cases. This clearly suggests that both [110] elongations will result in TE$_{[110]}$-mode $>$ TM$_{[001]}$-mode. For the TE$_{[\overline{1}10]}$-mode, the polar plot rotates anti-clockwise by $\approx$30$^\circ$ and $\approx$45$^\circ$ for the Type-II and the Type-I elongations, respectively. This reduces the difference between the magnitudes of the TE$_{[\overline{1}10]}$- and TM$_{[001]}$-modes, thus reducing DOP$_{[\overline{1}10]}$ from +0.6 to +0.102 in (b) and to -0.07 in (c).
\textit{\textbf{Geometry of the experimentally grown 9-VSQDs:}} The above discussion about the strong dependence of the polarization properties on the [110]-elongations allows us to theoretically probe the geometrical shape of the 9-VSQDs as grown by Inoue \textit{et al.}\cite{Inoue_1}. It was reported that the 9-VSQDs are not isotropic and the TEM images suggested very little anisotropy in the lateral extent\cite{Ikeuchi_1}, possibly a [$\overline{1}10$]-elongation~\cite{Kita_1}. Our multi-million-atom calculations show that the polarization response is very sensitive to the elongation factor ($\eta$) and even a 0.5-1.0 nm [110]-elongation increases DOP$_{[110]}$ above zero. Therefore, according to our model results, the experimentally measured DOP$_{[110]}$ = -0.6 implies that the 9-VSQDs can only have a [$\overline{1}$10]-elongated base, which confirms the findings from the TEM images~\cite{Kita_1}. It should also be noted that, since the 9-VSQDs studied here consists of pure InAs QD layers, our finding does not contradict the conclusions of Mlinar \textit{et al.}\cite{Mlinar_1}, who report that for the alloyed InGaAs QDs, the random alloy configurations may significantly impact the polarization properties and make the correlation between the measured polarization response and the QD geometry unreliable.
\section{Summary and Conclusions}
We have performed multi-million-atom simulations to understand the impact of the elliptical shapes on the electronic and polarization properties of the single and the multi-layer vertical stacks of InAs QDs. The comparison between a flat QD and a tall QD, having aspect ratios of 0.225 and 0.40 respectively, reveals drastically different electronic and polarization properties as a function of their base elongation. The key outcomes of the comparison are:
\begin{description}
\item [(i)] The quadratic component of the piezoelectric potential completely cancels the linear component inside the tall QD region, whereas only partial cancellation occurs for the flat QD.
\item [(ii)] Although the strain and the piezoelectric potentials are drastically different for the flat and tall QDs, the lower electron p-state (e2) is oriented along the [$\overline{1}$10] direction for both systems.
\item [(iii)] The hole wave functions are confined inside the flat QD mainly at its center, whereas they are confined at the QD interfaces inside the HH pockets for the tall QD. This leads to a reduction of the oscillator strengths for the circular-base tall QD by approximately an order of magnitude due to smaller electron-hole wave function spatial overlaps. For the elliptical-shaped tall QDs, the oscillator strengths increase and even become comparable to those of the flat QD for some values of the elongation factor ($\eta$).
\item [(iv)] The Type-I elongation, irrespective of its orientation, blue shifts the optical gap energy (E$_{g}$) for both the flat and the tall QDs. The Type-II and Type-IIv elongations red shift E$_{g}$ for the flat QD irrespective of the elongation direction, whereas for the tall QD the shift in E$_{g}$ strongly depends on the orientation of the elongation: red shift of E$_{g}$ for the [$\overline{1}$10] elongation and blue shift for the [110] elongation.
\item [(v)] The elliptical shape of the flat QD always improves its polarization properties by reducing the value of the DOP$_{[\overrightarrow{n}]}$, whereas it only reduces the DOP$_{[\overrightarrow{n}]}$ along the minor-axis for the tall QD.
\item [(vi)] The elliptical shape of the tall QD allows a tuning of the DOP$_{[\overrightarrow{n}]}$ over a much wider range, when compared to the flat QD. This property can be further exploited in large stacks of strongly coupled QD layers, where essentially very high values of the ARs can be achieved.
\end{description}
Although the study of the single QD layers provides significant physical insight into the impact of the shape asymmetry, these systems do not exhibit an isotropic polarization response (DOP$_{[\overrightarrow{n}]} \sim$ 0) for the QD base elongations of up to 6 nm studied in this paper. Therefore, we extend our study of the elliptical shapes to the experimentally reported vertical stack of nine QDs (9-VSQDs), which has demonstrated DOP$_{[110]}$ $<$ 0. The key features of our analysis of the elliptical 9-VSQDs are:
\begin{description}
\item [(i)] In contrast to the single QD layers where the elliptical shape only very slightly reduces the magnitude of the biaxial strain, a significant increase in the magnitude of the biaxial strain is calculated for the 9-VSQDs. Therefore, the shifts in the hole energies are dominated by the changes in the biaxial strain, rather than $\bigtriangleup$V and $\bigtriangleup$d.
\item [(ii)] The hole wave functions are confined inside the HH pockets at the interfaces of the 9-VSQDs, similar to the case of the tall QD. This introduces a large in-plane polarization anisotropy.
\item [(iii)] While the elongations of the single QDs result in the tuning of DOP$_{[\overrightarrow{n}]}$ over a wide range, the magnitude of DOP$_{[\overrightarrow{n}]}$ is largely insensitive to the magnitude of $\eta$ for the 9-VSQDs. Therefore, we conclude that the elliptical shapes of the 9-VSQDs do not provide any noticeable improvement in the polarization response. This is clearly evident from Fig.~\ref{fig:Fig14}(c) for $\eta \leq$ 1.0 and for $\eta \geq$ 1.2.
\item [(iv)] Our calculations show that the sign of DOP$_{[\overrightarrow{n}]}$ is very sensitive to the orientation of the elongation. Even a very small variation of $\eta$ from 1.0 is capable of controlling the sign of the DOP$_{[\overrightarrow{n}]}$. We find that the in-plane polarization (Pol$_{||}$) roughly follows a step-function-like abrupt dependence on $\eta$, in contrast to the inverse quadratic dependence found for the single QDs. Such a large in-plane anisotropy of the polarization allows us to accurately predict the shape elongation of the 9-VSQDs studied in this paper, in agreement with the TEM findings.
\end{description}
In summary, we have presented a detailed analysis of the dependence of the polarization properties on the elongation factor $\eta$, which can serve as guidance for engineering the geometry parameters to tune DOP$_{[\overrightarrow{n}]}$ of semiconductor QDs, a critical design parameter for several challenging applications.
\textbf{\textit{Acknowledgements:}} The author gratefully acknowledges Stefan Schulz (Tyndall National Institute) for critically reading the manuscript and providing valuable suggestions. Computational resources are acknowledged from National Science Foundation (NSF) funded Network for Computational Nanotechnology (NCN) through \url{http://nanohub.org}. NEMO 3D simulator was developed in parts at NASA JPL/Caltech and Purdue University by a number of researchers supervised by Prof. Gerhard Klimeck (Purdue University), whose work has been cited in the corresponding references\cite{Klimeck_1, Klimeck_2, Klimeck_3}. NEMO 3D based open source tools are available at: \url{https://nanohub.org/groups/nemo_3d_distribution}.
\bibliographystyle{apsrev4-1}
\section{\label{giris}Introduction}
Static correlation,\cite{handy01,becke13}
arising from the tendency of electrons to distribute themselves over the various centers,
is pronounced in materials containing localized {\it d} or {\it f} electrons
such as some transition-metal or rare-earth compounds.
The local density approximation\cite{kohn65} (LDA) or the generalized gradient approximation\cite{PBE} (GGA)
commonly employed in Kohn-Sham density functional theory\cite{kohn65} (DFT)
inherently assume a {\it localized} exchange-correlation hole,
implying that static correlation is treated in an unrestrained manner in these approximations.\cite{becke05}
Thus local or semilocal exchange-correlation energy $E_{\rm xc}$ functionals are often
either combined with a Hubbard parameter $U$ as in the LDA+$U$ method\cite{anisimov97}
or mixed with a fraction $\alpha$ of exactly computed \cite{becke93} (Fock) exchange energy $E_{\rm x}^{\rm exact}$,
yielding a hybrid functional
\begin{equation}
E_{\rm xc}^{\rm hybrid}=E_{\rm x}^{\rm exact}+(1-\alpha)(E_{\rm x}^{\rm GGA}-E_{\rm x}^{\rm exact})+E_{\rm c}^{\rm GGA},
\end{equation}
where the second term models the static correlation energy.\cite{csonka10} Note that this expression is algebraically equivalent to the familiar mixture $\alpha E_{\rm x}^{\rm exact}+(1-\alpha)E_{\rm x}^{\rm GGA}+E_{\rm c}^{\rm GGA}$.
For $\alpha > 0$, the static correlation energy is reduced
in favor of the suppression of electron fluctuations,
leading to a better description for the localized electron states
(as evidenced by the improved prediction of
the binding energy of localized $d$ states,\cite{oba08,wrobel09,burbano11,pozun11,li12,zhang12}
band gaps,\cite{heyd05,marsman08,oba08,wrobel09,burbano11,wu11,lucero12,li12,zhang12,pozun11,kanan12}
and magnetic moments\cite{rodl09,liao11,chen12,heinemann13,pozun11,kanan12}).
In the LDA+$U$ approach, where a $d$ ion is treated as an open system with a fluctuating number of electrons,\cite{anisimov97}
a term including $U$ is added to the total energy,
which penalizes more fluctuating configurations\cite{perdew07,perdew09}
and therefore leads to
a better description of the localized states
(as evidenced by the improved prediction of
the binding energy of localized $d$ states,\cite{rohrbach03,mikaye06,petersen06,zhang12,zhang13}
band gaps,\cite{anisimov97,dudarev98,rohrbach03,rohrbach04,mikaye06,gupta07,devey09,rodl09,liao11,arroyo11,zhang12,zhang13,kanan12}
and magnetic moments\cite{anisimov97,rohrbach03,rohrbach04,rollmann04,wang06,petersen06,gupta07,devey09, liao11,zhang13,kanan12}).
Thus the hybrid functional scheme and the LDA+$U$ approach could be regarded
as alternative means\cite{tran06,jollet09} for fixing inaccuracies of the semilocal density approximations,
which result from insufficient localization of $d$ electrons.
Indeed, it has recently been proposed\cite{hong12,andriotis13} to derive the value of $U$ from hybrid functional calculations.
In contrast, we think it is appropriate to adopt
a perspective where the hybrid-functional and DFT+$U$ methods are treated as {\it complementary}
(inasmuch as they both reduce the static correlation energy),
which led us to combine hybrid functionals with the Hubbard $U$.
From a different point of view, this means that
one of the two methods (DFT+$U$) is utilized
to reduce the {\it residual} self-interaction error\cite{perdew81}
of the other one (hybrid-functional),
which is pragmatically justified.
Furthermore,
Iv\'ady {\it et al.} (Ref.~\onlinecite{ivady14}) have recently shown that
a hybrid exchange-correlation potential
could be cast into a mathematical form
that is reminiscent of the on-site Hubbard potential
for a subsystem of {\it localized} orbitals,
providing theoretical justification for our methodology:
An additional on-site (DFT+$U$) potential is added
to the hybrid exchange-correlation potential,
which is applied {\it only} to strictly localized states.
This improves the physical description
because localized $d$-band states and delocalized crystal states are differentiated in the hybrid-functional+$U$ approach,
whereas they are treated on the same footing by the hybrid functional itself.
It is also interesting in this regard to point out that
the DFT+$U$ and hybrid-functional methods could both be regarded as approximations to the $GW$ method,\cite{hedin65}
as articulated in Refs.~\onlinecite{anisimov97} and \onlinecite{henderson11}, respectively.
The incentive of using these two methods together
is then to increase the level of approximation,
provided that they are complementary.
\begin{figure}
\begin{center}
\resizebox{0.50\textwidth}{!}{%
\includegraphics{Figur1.pdf}
}
\end{center}
\vspace{-0.4cm}
\caption{
(Color online)
Calculated versus measured values of the band gap $E_g$ (a) and
the $d$ band position $\varepsilon_d$ relative to the valence band maximum (b)
for zinc and cadmium monochalcogenides.
The experimental values of $E_g$ and $\varepsilon_d$
are taken from Refs.~\onlinecite{bandgap1,bandgap2,bandgap3} and
Refs.~\onlinecite{dstate1,dstate2}, respectively.
The values obtained from the present GGA (HSE) calculations are connected by blue dashed (red dot-dashed) lines to guide the eye.
The solid black lines pass through the experimental values.
}
\label{Egcalvsexp}
\end{figure}
It is usually necessary to perform a calibration\cite{lutfalla11,krcha13}
for the value of $U$ that is {\it optimal} with respect to the material properties under consideration.
Besides, $U$ is not only element-specific\cite{lutfalla11} but also material-specific.\cite{karazhanov06,barcaro10}
Thus it is appealing to employ a hybrid functional
with an exchange mixing coefficient $\alpha$
that is in practice fixed to a single {\it universal} value,
e.g., $\alpha=1/4$ in both global\cite{perdew96} and range-separated Heyd-Scuseria-Ernzerhof\cite{heyd03} (HSE) hybrid functionals.
It should, however, be noted that
setting the optimal value for $\alpha$ as $1/4$ in Ref.~\onlinecite{perdew96} was accomplished {\it empirically}
(via error analysis of the atomization energies),
which would not necessarily be optimal for {\it other} material properties.\cite{vydrov06,park11,lim12,himmetoglu13}
We found,
in line with earlier reports,\cite{paier06,fuchs07,henderson11}
that the hybrid (HSE) functional calculations with $\alpha=1/4$ improve
the prediction of both the $d$ band position $\varepsilon_d$ relative to the valence band maximum and the band gap $E_g$
but these improvements are not sufficient to make the predictions agree with the experimental data.
This is demonstrated in Fig.~\ref{Egcalvsexp} for zinc and cadmium monochalcogenides,
where the calculated and measured values of $E_g$ (left panel) and $\varepsilon_d$ (right panel) are plotted with respect to each other.
Figure~\ref{Egcalvsexp}(a) shows
that
(i) the improvement for the band gap is impressive
for systems with a somewhat small band gap,
and
(ii) the band gap is {\it still} significantly underestimated for wide band gap semiconductors such as ZnO.
As explored in the Appendix,
both the GGA band gap error $\Delta E_g^{\rm GGA}$ and the HSE correction $E_g^{\rm HSE}-E_g^{\rm GGA}$
are inversely proportional to the high-frequency dielectric constant $\epsilon_\infty$
so that $ \Delta E_g^{\rm GGA} \simeq A /\epsilon_\infty $ and
$E_g^{\rm HSE}-E_g^{\rm GGA} \simeq A^\prime /\epsilon_\infty $,
where the constants $A$ and $A^\prime $ satisfy $A^\prime < A$.
Owing to the latter, the HSE improvement falls short for materials with relatively small dielectric constant
(with the exception of CdO for which the HSE calculation yields the right direct and indirect band gaps, cf. Ref.~\onlinecite{burbano11}).
Figure~\ref{Egcalvsexp}(b) shows that
the HSE-calculated $\varepsilon_d$ is {\it still} too high
although there is a significant correction of about $1.3\pm0.4$~eV.
It should be noted that
the prediction of $\varepsilon_d$
could further be improved by adding a Hubbard $U$ term to the hybrid functional,
which would enable one to adjust the $d$ band position.
It is also interesting to note that
the measured values of $\varepsilon_d$ could indeed be reproduced
by using adjusted $U$ values, cf. Fig. 3 of Ref.~\onlinecite{karazhanov06},
in the case of zinc monochalcogenides.
These observations also motivate us to treat
the hybrid functional scheme and the DFT+$U$ method
as {\it complementary} rather than {\it alternative} approaches.
Accordingly,
we propose here to combine the screened hybrid functional of Heyd, Scuseria, and Ernzerhof with the Hubbard $U$.
The main advantage of the latter is that
strictly localized and delocalized states are {\it screened differently}
since {\it only} the former are subject to an additional on-site (DFT+$U$) potential.\cite{ivady14,ivady13}
In contrast, localized and delocalized states are
treated on the same footing by the original HSE functional
as long as the same set of parameters, viz. the exchange mixing coefficient $\alpha$ and the screening parameter $\omega$, are used
for {\it all} states.
Additionally,
we regard $U$ as a semiempirical parameter,
in line with the perspective\cite{albers09} that
the Hubbard term added to the density functionals
is essentially a phenomenological many-body correction.
Our findings show that
the HSE+$U$ calculations
performed by using an {\it adjusted} $U$ value
reproduce the measured band gap and, at the same time,
result in an improved physical description
not only of the electronic structure
but also of the crystal structure and energetics
for the semiconductors with localized $d$ electrons.
This is obviously very convenient
for practical purposes such as setting the range of the electron chemical potential accurately
in the point defect calculations, e.g., Ref.~\onlinecite{oba11}.
It is also very convenient
because it enables one to employ the measured $E_g$, instead of $\varepsilon_d$, in setting the $U$ value.
Note that there is usually some scatter in the measured data for $\varepsilon_d$,
which partly reflects the fact that the width of the $d$ bands is nonzero no matter how localized the states are.
The underestimation of the band gap in the HSE calculations, cf. Fig.~\ref{Egcalvsexp}(a),
could partially be attributed to lacking the correlation part of the discontinuity of the exchange-correlation potential.\cite{seidl96}
Similarly,
the discontinuity of the exchange-correlation potential
is not fully restored in the LDA/GGA+$U$ calculations
even though the $U$ term added to the density functionals yields a discontinuous contribution.\cite{anisimov93}
It should also be commented that
setting the right value of $U$ {\it empirically} is not straightforward
because one needs
to take account of hybridization and screening of $d$ electrons {\it a priori}.
Furthermore,
the measured value of $E_g$ cannot be reproduced no matter how large a value of $U$ is used
in the LDA+$U$ calculations performed for zinc monochalcogenides,
cf. Fig. 3 of Ref.~\onlinecite{karazhanov06}.
Our study provides a
resolution to this difficulty with the aid of hybrid functional,
and proves that an adequate $U$ value could be determined
by simply matching the experimental band gap.
The rest of the paper is organized as follows:
The next section is devoted to the method of calculation,
which also summarizes the computational details.
This is followed by a discussion of the calculation results
before concluding remarks given in the last section.
\section{\label{yontem}Method}
All calculated properties reported here were obtained
via semilocal or hybrid DFT calculations
using the Perdew-Burke-Ernzerhof\cite{PBE} (PBE) or Heyd-Scuseria-Ernzerhof\cite{heyd03} (HSE) functionals, respectively.
In the hybrid functional calculations,
we employed the HSE06\cite{HSE06} functional by setting the screening parameter\cite{HSE06,wrobel09} $\omega=0.207$ \AA$^{-1}$
(and exchange mixing coefficient $\alpha=0.25$ as implied in Section~\ref{giris}).
In the HSE+$U$ calculations we used the simplified (rotationally invariant) approach\cite{dudarev98}
where the difference between the on-site Coulomb $\bar{U}$ and exchange $\bar{J}$ parameters
is employed as the {\it effective} Hubbard parameter $U=\bar{U}-\bar{J}$.
We performed a variety of calculations for zinc and cadmium monochalcogenides
by employing the projector augmented-wave (PAW) method,\cite{PAW}
as implemented in VASP code.\cite{VASP,VASPpaw}
The 2s and 2p,
3s and 3p,
4s and 4p,
5s and 5p,
3d and 4s, and
4d and 5s states are treated
as valence states for
oxygen,
sulfur,
selenium,
tellurium,
zinc, and
cadmium, respectively.
Plane wave basis sets were used to represent the electronic states,
which were determined
by imposing a kinetic energy cutoff of 520 eV for the systems that include oxygen atoms and
400 eV for the rest of the systems.
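For concreteness, a minimal INCAR fragment for such an HSE06+$U$ calculation in VASP is sketched below. All tags are standard VASP input, but the values are illustrative placeholders (loosely modeled on $w$-ZnO with the Hubbard term on the Zn $d$ shell, and assuming Zn is listed before O in the structure file); they are not the verbatim production settings of this work:
\begin{verbatim}
LHFCALC  = .TRUE.   ! switch on the hybrid (HF/DFT) functional
HFSCREEN = 0.207    ! HSE screening parameter omega (1/Angstrom)
AEXX     = 0.25     ! exact-exchange mixing coefficient alpha
ALGO     = Damped   ! electronic minimization suited to hybrids
LDAU     = .TRUE.   ! add the on-site Hubbard term
LDAUTYPE = 2        ! Dudarev (rotationally invariant) scheme
LDAUL    = 2 -1     ! U acts on Zn d states only (none for O)
LDAUU    = 6.0 0.0  ! effective U = U - J in eV
LDAUJ    = 0.0 0.0
LMAXMIX  = 4        ! pass the d channel through the density mixer
ENCUT    = 520      ! plane-wave cutoff in eV (O-containing system)
EDIFF    = 1E-6     ! electronic convergence criterion (eV)
\end{verbatim}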
We first carried out optimization of the crystal structures
where concurrent relaxations of the cell volume and shape as well as the ionic positions
were performed until the total energy was converged within 1 meV
and the maximum value of residual forces on atoms was reduced to be smaller than 0.01 eV/\AA.
In these optimizations,
we used the primitive unit cells of the crystals,
whose Brillouin zones were sampled by
$8\times 8\times 6$ (for the crystals with wurtzite structure) or
$8\times 8\times 8$ or $9\times 9\times 9$ (for the crystals with rocksalt and zincblende structures)
{\bf k}-point meshes generated according to Monkhorst-Pack scheme,\cite{MP76}
enabling us to achieve convergence of the energy within 1 meV/atom.
Using the optimized crystal structures,
we then performed band-structure and density-of-states calculations
in order to obtain the band gap $E_g$ and the $d$ band position $\varepsilon_d$, respectively.
Besides we performed geometry optimizations
for the O$_2$ and S$_8$ molecules and the bulk solids of Se, Te, Zn, and Cd,
and employed the respective equilibrium total energies
in the computation of the formation energy $\Delta H_f$.
As indicated in Section~\ref{giris},
we set the value of $U$ by reproducing the experimental value of the band gap in the HSE+$U$ calculations,
which is justified in Section~\ref{sonuclar}.
Thus, we carried out the HSE+$U$ calculations for a range of $U$ values,
and studied the calculated band gap as a function of $U$.
Since our results showed that
the variation of the band gap with $U$ is virtually linear,
we performed a linear fit to obtain the value of $U$
that corresponds to the measured band gap.
The value of $U$ obtained via this procedure,
which is {\it optimal} in reproducing the experimental value of the band gap,
is denoted by $U^\ast$.
The HSE+$U$ calculation that yields the experimental, i.e., {\it targeted}, value of the band gap
is named here as the HSE+$U^\ast$ calculation.
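The interpolation step itself is elementary. The following Python sketch illustrates how $U^\ast$ is extracted from a series of HSE+$U$ band gaps; the $(U, E_g)$ pairs used here are hypothetical placeholders, not the computed values of this work:
\begin{verbatim}
import numpy as np

# Hypothetical HSE+U band gaps for a series of U values (eV):
U_vals  = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
Eg_vals = np.array([2.45, 2.76, 3.07, 3.38, 3.69])

# Linear model Eg(U) = slope * U + intercept
# (the computed variation of Eg with U is nearly linear)
slope, intercept = np.polyfit(U_vals, Eg_vals, 1)

Eg_exp = 3.37                    # target: experimental gap of w-ZnO
U_star = (Eg_exp - intercept) / slope
print(f"U* = {U_star:.1f} eV")   # ~5.9 eV with these numbers
\end{verbatim}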
\begin{figure}[b]
\begin{center}
\resizebox{0.50\textwidth}{!}{%
\includegraphics{Figur2.pdf}
}
\end{center}
\vspace{-0.4cm}
\caption{(Color online)
The band gap error $\Delta E_g$ versus
the difference $\Delta \varepsilon_{pd}=\varepsilon_{p}^{\rm Ch}-\varepsilon_{d}^{\rm Me}$
for zinc and cadmium monochalcogenides.
The PBE- and HSE-calculated values are marked by the empty and filled symbols, respectively, in the top-most panel (a).
In the lower panels (b)-(i), the results of the combined HSE+$U^\ast$ ($\oplus$) calculations
are presented together with those of the PBE (empty symbols) and HSE (filled symbols) calculations.
}
\label{dEgvsdepd}
\end{figure}
It should be mentioned that the HSE band energy differences
depend on the value of the screening parameter $\omega$,
which is not necessarily universal.
It was, however, demonstrated\cite{brothers08,janesko09} that $\omega=0.207$ \AA$^{-1}$
as used in HSE06
is an average optimal value
for which the band energy differences
approximate rather accurately {\it quasiparticle excitation energies},
for a variety of semiconductors.
Therefore, the HSE band energy differences are often {\it directly} compared
to the experimental band gaps\cite{henderson11}
(e.g., in order to demonstrate\cite{heyd05} the success of the HSE calculations
in reproducing the experimental band gaps).
In addition to this,
as long as the HSE+$U$ approach could be regarded as an approximation to the $GW$ method,
it would be preferential to use the quasiparticle energy differences (the $GW$-calculated band gaps)
in our procedure for setting the value of $U^\ast$.
However, the $GW$-calculated band gaps are usually in good agreement with the experimental band gaps
(e.g., Ref.~\onlinecite{shishkin07}).
It should, on the other hand, be also noted that the $GW$@HSE calculations {\it overestimate} the band gap
of a number of semiconductors including CdS and ZnS (Ref.~\onlinecite{fuchs07}).
Hence, we preferred to utilize the experimental band gaps instead of the $GW$-calculated energy differences,
which is also convenient from a practical point of view
since it enables one to avoid performing quasiparticle calculations
that might easily become computationally prohibitive, especially for large-scale (e.g., defect) calculations.
\section{\label{sonuclar}Results and Discussion}
We first quantify the relationship between the band gap error $\Delta E_g$ in the GGA and HSE calculations
and the position of $d$ level
in the case of zinc and cadmium chalcogenides
since the latter is, in effect, adjusted by varying the value of $U$.
Figure~\ref{dEgvsdepd}(a) shows a plot of $\Delta E_g$
versus
the difference $\Delta \varepsilon_{pd}=\varepsilon_{p}^{\rm Ch}-\varepsilon_{d}^{\rm Me}$,
where $\varepsilon_{p}^{\rm Ch}$ and $\varepsilon_{d}^{\rm Me}$ denote the $p$- and $d$-state
energies of the chalcogen and metal atoms, respectively.
In zinc and cadmium chalcogenides,
the $d$ band is located below and next to the topmost valence band.\cite{supplemental}
Thus, the valence-band maximum turns out to be {\it above} its actual position
if the metal $d$ states are positioned {\it too} high (as in both the GGA and HSE calculations),
which contributes to the underestimation of the band gap.
The difference $\Delta \varepsilon_{pd}$ is therefore used here
to quantify the relationship between the band gap error and the position of $d$ level.
In Fig.~\ref{dEgvsdepd}(a), a linear trend is noticeable for each set of data, cf. the solid lines,
with the exception of data points for CdO.
It is seen that the band gap error is proportional
(with a negative slope) to $\Delta \varepsilon_{pd}$.
We obtain, via fitting,
\begin{eqnarray}
\Delta E_g&=& -0.48~\Delta \varepsilon_{pd}+3.33 ~~~({\rm PBE})\nonumber \\
&=& -0.20~\Delta \varepsilon_{pd}+1.52 ~~~({\rm HSE})
\end{eqnarray}
for Zn chalcogenides, and
\begin{eqnarray}
\Delta E_g&=& -0.50~\Delta \varepsilon_{pd}+3.84 ~~~({\rm PBE})\nonumber \\
&=& -0.26~\Delta \varepsilon_{pd}+1.97 ~~~({\rm HSE})
\end{eqnarray}
for Cd chalcogenides (excluding CdO),
where $\Delta E_g$ and $\Delta \varepsilon_{pd}$ are both in eV.
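Taken at face value, these trend lines also indicate how large $\Delta \varepsilon_{pd}$ must become for the band gap error to vanish; a short sketch evaluating the zero crossings of the Zn trend lines above:
\begin{verbatim}
# Zero crossings of the fitted Zn trend lines, dEg = a*deps + b:
fits = {"PBE": (-0.48, 3.33), "HSE": (-0.20, 1.52)}
for name, (a, b) in fits.items():
    print(name, "dEg = 0 at deps_pd =", round(-b / a, 1), "eV")
# -> PBE: 6.9 eV, HSE: 7.6 eV; the HSE+U* calculations indeed
#    push deps_pd toward such values (see the panels below)
\end{verbatim}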
It is clear, comparing the data points represented by empty (PBE) and filled (HSE) symbols connected by dashed lines,
that the band gap error is reduced
when the difference between the chalcogen $p$- and metal $d$-state energies is increased.
This applies to all II-VI semiconductors studied here, including CdO.
As shown in Figs.~\ref{dEgvsdepd}(b)-\ref{dEgvsdepd}(i),
$\Delta \varepsilon_{pd}$ is significantly increased in the HSE+$U^\ast$ calculations,
making $\Delta E_g$ vanish.
This is reassuring that the optimal Hubbard parameter $U^\ast$ could be determined
by matching the experimental band gap.
\begin{figure}
\begin{center}
\resizebox{0.50\textwidth}{!}{%
\includegraphics{Figur3.pdf}
}
\end{center}
\vspace{-0.4cm}
\caption{(Color online)
The band gaps $E_g$ obtained in our HSE+$U$ calculations
as a function of the effective Hubbard parameter $U$
for zinc and cadmium monochalcogenides.
The symbols represent the calculated $E_g$ values,
and the solid lines connecting the symbols represent linear fits to the calculated points.
The vertical dot-dashed lines
mark the values for the optimal Hubbard parameter $U^\ast$ in eV,
which correspond to the experimental $E_g$ values (marked by the horizontal dot-dashed lines).
}
\label{optU}
\end{figure}
\begin{table}
\caption{\label{tablo}
The optimal Hubbard parameter $U^\ast$,
the experimental band gap $E_g$, and
the HSE band gap error $\Delta E_g^{\rm HSE}$ (all in eV)
for zinc and cadmium monochalcogenides.
}
\begin{ruledtabular}
\begin{tabular}{llccc}
Semiconductor & Crystal structure & $U$$^\ast$ & $E_g$ & $\Delta E_g^{\rm HSE}$ \\ \hline
CdO & rocksalt & 0.0 & 0.84 & 0.00 \\
CdTe & zincblende & 0.8 & 1.48 & 0.03 \\
$c$-CdSe & zincblende & 3.0 & 1.68 & 0.19 \\
$w$-CdSe & wurtzite & 3.4 & 1.75 & 0.21 \\
ZnTe & zincblende & 5.0 & 2.35 & 0.28 \\
$c$-CdS & zincblende & 4.2 & 2.40 & 0.27 \\
$w$-CdS & wurtzite & 4.5 & 2.50 & 0.31 \\
ZnSe & zincblende & 5.0 & 2.71 & 0.38 \\
$c$-ZnO & zincblende & 6.1 & 3.27 & 0.95 \\
$w$-ZnO & wurtzite & 6.0 & 3.37 & 0.92 \\
$\beta$-ZnS & zincblende & 5.0 & 3.72 & 0.45 \\
$\alpha$-ZnS & wurtzite & 6.0 & 3.91 & 0.58
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\begin{center}
\resizebox{0.50\textwidth}{!}{%
\includegraphics{Figur4.pdf}
}
\end{center}
\vspace{-0.4cm}
\caption{
(Color online)
The HSE band gap error $\Delta E_g^{\rm HSE}$ versus the ratio $U^\ast/2\epsilon_\infty$.
}
\label{UvsEg}
\end{figure}
\begin{figure*}
\begin{center}
\resizebox{1.00\textwidth}{!}{%
\includegraphics{Figur5.pdf}
}
\end{center}
\vspace{-0.4cm}
\caption{(Color online)
A comparison of errors in the predictions via HSE+$U^\ast$ (red bars),
HSE (green bars) and
PBE (blue bars) calculations for
the unit cell volume $V$ (a),
the ratio $c/a$ of lattice parameters $a$ and $c$ (b),
the internal lattice parameter $u$ (c),
the $d$ band position $\varepsilon_d$ (d), and
the absolute value of formation energy $\left | \Delta H_f \right |$ (e).
}
\label{karsilastir}
\end{figure*}
We now determine the $U^\ast$ values that correspond to vanishing $\Delta E_g$
for the II-VI semiconductors under consideration.
Thus, the results of HSE+$U$ calculations for a range of $U$ values
are given in Fig.~\ref{optU} where the calculated band gap is plotted as a function of $U$.
Note that the variation of the band gap with the effective Hubbard parameter is virtually linear
(with a different slope for each system).
For each compound, a linear fit is thus performed,
which yields the solid lines in Fig.~\ref{optU}.
The $U^\ast$ values are marked by vertical dot-dashed lines,
which correspond to the measured band gap (marked by horizontal dot-dashed lines).
Table~\ref{tablo} gives
the optimal Hubbard parameters and corresponding band gaps
for zinc and cadmium monochalcogenides.
It should be remarked
that one obtains $U^\ast = 0$ for CdO
since the measured value of the band gap of CdO is reproduced already in the HSE calculation, as mentioned in Section~\ref{giris}.
Next we compare the values of $\Delta E_g$ and $\Delta \varepsilon_{pd}$
obtained in the HSE+$U^\ast$ calculations to those obtained in the PBE and HSE calculations.
Figures~\ref{dEgvsdepd}(b)-\ref{dEgvsdepd}(i)
show a plot of the band gap error $\Delta E_g$ versus the difference $\Delta \varepsilon_{pd}$
for the II-VI semiconductors under consideration.
As already noted, the HSE calculations yield
an increased value for $\Delta \varepsilon_{pd}$
in association with a reduced band gap error,
in comparison to the PBE calculations.
The difference $\Delta \varepsilon_{pd}$ is further increased in the HSE+$U$ calculations,
reducing the band gap error further.
Setting $U=U^\ast$ in this trend makes $\Delta E_g$ vanish, with an adequate increase of $\Delta \varepsilon_{pd}$.
It is seen in Table~\ref{tablo}
that the larger $E_g$, the greater $U^\ast$ (with a few exceptions).
This implies that employing a large (small) $U^\ast$ would be necessary
for a wide (narrow) band gap semiconductor for which
the HSE band gap error $\Delta E_g^{\rm HSE}$ is rather large (small), cf. Figure~\ref{Egcalvsexp}(a).
Thus, having a large band gap error in the HSE calculation
necessitates using a large $U^\ast$ for correction.
Furthermore,
there appears to be a roughly monotonic relationship
between $U^\ast$ and $\Delta E_g^{\rm HSE}$, cf. Table~\ref{tablo}.
Our analysis presented in Fig.~\ref{UvsEg}
shows that
this relationship could be quantified
by taking into account the screening effects through the high-frequency dielectric constant $\epsilon_\infty$.
A plot of $\Delta E_g^{\rm HSE}$ versus $U^\ast/2\epsilon_\infty$ is given in
Fig.~\ref{UvsEg}
where all data points satisfy
\begin{equation}
\Delta E_g^{\rm HSE}= \frac{U^\ast}{2\epsilon_\infty} \pm 0.14 ~{\rm eV}.
\label{UvsdEg}
\end{equation}
Here both $U^\ast$ and $\Delta E_g^{\rm HSE}$ are in eV.
Note that the shift in the occupied (unoccupied) $d$ state energies
due to the $U^\ast$ term would be
$-U^\ast/2$ ($U^\ast/2$) if the hybridization and screening effects are ignored.\cite{anisimov93}
Thus, the correction to the band gap would be proportional to $U^\ast/2$,
ignoring the dielectric screening,
for the II-VI semiconductors studied here
since their lower conduction bands have virtually no contribution from the metal $d$ states.\cite{supplemental}
On the other hand, the band gap correction needs to be scaled by $\epsilon_\infty$
in order to reflect the dielectric screening of the Coulomb potential in a solid.\cite{harrison85}
Thus, the $U^\ast$ term added to the hybrid (HSE) functional
results in a correction of $U^\ast/2\epsilon_\infty$ to the band gap.
This explanation justifies our means of setting the value of $U^\ast$
by matching the experimental band gap.
It also implies that an approximate value for the optimal Hubbard parameter
could {\it a priori} be obtained by inverting Eq.~(\ref{UvsdEg}), i.e.,
$U^\ast \approx 2\epsilon_\infty(E_g - E_g^{\rm HSE})$, provided that
the experimental and HSE-calculated band gaps $E_g$ and $E_g^{\rm HSE}$
as well as the high-frequency dielectric constant $\epsilon_\infty$
are available.
Note that the hybrid-functional calculations could be utilized
to obtain $\epsilon_\infty$
when the experimental data is not available, cf. Table~I of Ref.~\onlinecite{paier08}.
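As an illustration of this inversion, the sketch below uses the literature value $\epsilon_\infty \approx 3.7$ for $w$-ZnO (an external input, not computed in this work); the resulting estimate agrees with the fitted $U^\ast = 6.0$~eV within the scatter of Eq.~(\ref{UvsdEg}), which translates to roughly $2\epsilon_\infty \times 0.14 \approx 1$~eV on $U^\ast$:
\begin{verbatim}
# A priori estimate of U* by inverting the relation above:
#   U* ~ 2 * eps_inf * (Eg_exp - Eg_HSE)
eps_inf = 3.7                  # literature eps_inf of w-ZnO
dEg_HSE = 0.92                 # HSE band gap error of w-ZnO (eV)
u_star  = 2.0 * eps_inf * dEg_HSE
print(round(u_star, 1), "eV")  # -> 6.8 eV vs. the fitted 6.0 eV
\end{verbatim}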
It is interesting to point out that one could assign a single $U^\ast$ value of $\sim 5$~eV for ZnTe, ZnSe and $\beta$-ZnS
while $U^\ast \sim 6$~eV for $c$-ZnO, $w$-ZnO and $\alpha$-ZnS, cf. Table~\ref{tablo}.
Thus, a mean value of $U^\ast_{\rm Zn} \approx 5.5$~eV appears to be adequate
for {\it all} Zn compounds studied here.
It is clearly pleasing to obtain a single (universal) $U^\ast$ value for Zn,
which is almost independent of the composition or crystal structure of the relevant zinc compounds,
for its practical importance
since it would allow one to set $U^\ast_{\rm Zn} \approx 5.5$~eV
in the studies on {\it alloyed} systems made of Zn, O, S, Se, Te atoms.
In order to assess the improvement of the HSE+$U$ approach
in relation to the general physical description of the foregoing semiconductors,
we computed the mean error in
(i) the optimized crystal structures,
(ii) the $d$ band positions, and
(iii) the formation energies
of the metal chalcogenides under consideration.
Accordingly, a comparison of errors in the predictions of the HSE+$U^\ast$, HSE and PBE calculations is presented
in Fig.~\ref{karsilastir} where the comparison is performed
for the unit cell volume $V$ [in Fig.~\ref{karsilastir}(a)],
for the ratio $c/a$ of (wurtzite) lattice parameters $a$ and $c$ [in Fig.~\ref{karsilastir}(b)],
for the internal parameter $u$ of wurtzite structure [in Fig.~\ref{karsilastir}(c)],
for the $d$ band position $\varepsilon_d$ [in Fig.~\ref{karsilastir}(d)], and
for the formation energy $\Delta H_f$ [in Fig.~\ref{karsilastir}(e)].
Our analysis reveals the following:
First, we see in Figs.~\ref{karsilastir}(a)-(c) that
the optimization of the crystal structure
via the HSE or HSE+$U^\ast$ calculations results in a description of similarly improved accuracy,
in comparison to the PBE calculations.
Thus, the HSE+$U^\ast$ calculations seem to preserve the accuracy of the HSE calculations
in the crystal structure optimizations.
Secondly, Fig.~\ref{karsilastir}(d) shows that there is a significant correction
to the $d$ band position thanks to adding $U^\ast$ term to the HSE functional:
The mean error in the $\varepsilon_d$ prediction becomes $\sim$~0.6 eV
in the HSE+$U^\ast$ calculations,
compared to $\sim$~2.3 (3.6) eV in the HSE (PBE) calculations.
It should also be noted
that the variation of the difference $\Delta \varepsilon_d^\ast = \varepsilon_d^{{\rm HSE}+U^\ast}- \varepsilon_d^{\rm HSE}$
with $U^\ast$ is almost linear,\cite{ikinci}
which is consistent with $\Delta \varepsilon_d^\ast \approx -0.35~U^\ast $,
where both $\Delta \varepsilon_d^\ast$ and $U^\ast $ are in eV.
Thus, using a larger $U^\ast$ yields a larger correction to $\varepsilon_d$,
shifting the $d$ band to a lower position that is closer to its experimental location.
Recall that employing a larger $U^\ast$ is necessary for the systems
with a larger HSE band gap error (cf. Table~\ref{tablo}).
Hence, the improvement in predicting the $d$ band position
via HSE+$U^\ast$ calculations
is warranted
{\it since} the value of $U^\ast$ is determined by matching the experimental band gap.
Finally, as for the improvement of the HSE+$U$ approach in the prediction of formation energies,
Fig.~\ref{karsilastir}(e) shows that
the mean absolute error in $\left | \Delta H_f \right |$
is on the order of $\sim$~0.1, 0.2, and 0.4~eV per formula unit
in the HSE+$U^\ast$, HSE, and PBE calculations, respectively.
Thus, the HSE+$U^\ast$ calculations
result in a more accurate description of crystal energetics of zinc and cadmium monochalcogenides,
compared to the HSE and PBE calculations.
Note that the mean error in $\left | \Delta H_f \right |$
turns out to be {\it positive} in the HSE+$U^\ast$ calculations,
whereas it is {\it negative} in the HSE calculations.
This indicates that the error in the formation energies
could be further reduced,
whenever necessary,
by re-adjusting the value of $U^\ast$.
\section{\label{netice}Conclusion}
In this work,
we treated the hybrid functional scheme and the DFT+$U$ method as complementary
rather than alternative approaches
in studying a set of II-VI semiconductors with localized $d$ states.
This led us to introduce the HSE+$U$ approach
where the range-separated HSE hybrid functional is combined with the Hubbard $U$.
Furthermore, we regarded $U$ as a semiempirical parameter.
This enabled us to determine an optimal value $U^\ast$ of the
Hubbard parameter, for which the HSE+$U$ calculation
yields a {\it targeted} (e.g., experimental) value of the band gap.
We find that the correction to the band gap
due to the additional $U^\ast$ term is roughly given by
$U^\ast/2\epsilon_\infty$,
which is in line with theoretical reasoning.
The results of a variety of HSE+$U^\ast$ calculations
performed for zinc and cadmium monochalcogenides, viz., a subset of the semiconductors with localized $d$ states,
indicate that
an improved description of the electronic structure as well as crystal structure and energetics
is obtained
in these calculations,
compared to the hybrid functional calculations employing the HSE functional without an additional Hubbard term.
The present study thus shows that
adding the $U^\ast$ term to the HSE functional leads to more accurate prediction of both the electronic and crystal structures
of II-VI semiconductors with localized states.
\begin{acknowledgments}
The numerical calculations reported here were carried out at the High Performance and Grid Computing Center (TRUBA Resources) of TUBITAK ULAKBIM.
\end{acknowledgments}
\section{Introduction}
The electric dipole polarizability $\alpha$ is one of the most fundamental atomic quantities, important for many atomic and molecular properties and applications \cite{bonin1997,Bishop1999,maroulis2004,maroulis2006,Mitroy2010,champagne2010,Safronova2015}. Whereas we have some knowledge of static dipole polarizabilities of all the known elements in the Periodic Table up to high nuclear charge, their accurate determination still remains a considerable challenge for both experiment and theory \cite{Hohm-2000,schwerdtfeger2006,Thierfelder2009,Mitroy2010}. This is especially the case when open-shell atoms are considered where, besides the scalar, the tensor component in the correct coupling scheme needs to be taken into account \cite{Fleig-2005,Thierfelder2008,Safronova2015}. On the other hand, considerable progress has been made over the past two decades in the accurate determination of closed-shell static dipole polarizabilities of the neutral Group 2, 12 and 18 elements of the Periodic Table, both from theory and experiment \cite{Safronova2015,schwerdtfeger2017table}.
Palladium is a rare element that is used in many applications such as catalytic converters and fuel cell technologies. The valence ground state electron configuration of atomic palladium is closed-shell 4$d^{10}$5$s^0$, differing from all the other Group 10 members Ni (3$d^8$4$s^2$), Pt (5$d^9$6$s^1$), and Ds (6d$^8$7$s^2$), which are open-shell \cite{Dzuba2016a,Demissie2018a}. In fact, Pd is the only known atom in its ground electronic state not to have at least one electron in an outer-shell n$s$ or n$p$ orbital \cite{Kramida}. Given this unique feature, it is particularly important to have reliable values of its fundamental properties such as ionization energy, electron affinity, and static electric dipole polarizability. While accurate experimental values for both its ionization energy \cite{Kramida} and electron affinity \cite{Scheer1998} are available, we are not aware of any experimental determination of its dipole polarizability. Recently published theoretical and empirical estimates differ widely from about 13--62~a.u., see Table \ref{tab:pd-polarizability-overview}. If the polarizability is actually less than about 30~a.u., it would make palladium, along with the superheavy elements copernicium ($Z=112$, $\alpha$=27.64 a.u. \cite{Pershina2008}) or nihonium ($Z=113$, $\alpha$=29.9 a.u. \cite{Pershina2008b}), a contender for having the smallest polarizability of any metal atom in the periodic table. Moreover, an accurate value of the dipole polarizability helps to benchmark other methods such as density functional theory \cite{Bast2008b}.
For an accurate quantum theoretical treatment of the electronic response to an applied external electric field, both relativistic and electron correlation effects have to be taken into account \cite{Schwerdtfeger1994,Lim1999,Thierfelder2008,Thierfelder2009}. For palladium, the main contribution from the sum-over-states formula of the dipole polarizability will come from 4d$\rightarrow$5p excitations, and we may therefore expect relativistic effects to be rather small but not negligible for an accurate electronic structure treatment. The present calculations were undertaken in an attempt to establish an accurate value for the dipole polarizability of the closed-shell palladium atom using relativistic coupled cluster theory. We also provide a value of the fourth-order term with respect to the applied electric field, the (second) hyperpolarizability $\gamma$, although higher order derivatives with respect to the electric field are known to be more problematic from a computational point of view \cite{Kassimi1994}.
\begin{table}
\centering
\caption{Reported literature values for the static electric dipole polarizability $\alpha$ of Pd (all values in atomic units). Abbreviations used: NR: Nonrelativistic, R: relativistic, DKH2: relativistic effects from second-order Douglas-Kroll-Hess, Dirac: Dirac-Coulomb Hamiltonian, SR-ECP: scalar-relativistic effective-core potential, HF: Hartree-Fock, MP2: second-order M{\o}ller-Plesset, CCSD: coupled cluster with single and double excitations, RPA: random phase approximation, TD-DFT: time-dependent density functional theory, LDA: local density approximation, CAMB3LYP: Coulomb attenuated B3LYP functional, PGG: Petersilka-Gossmann-Gross kernel \cite{Petersilka-1996}, IP: ionization potential.}
\label{tab:pd-polarizability-overview}
\begin{tabular}{lll}
\hline
$\alpha$ (a.u.) & Comments & Refs. \\
\hline
{\it ab-initio} \\
23.1 & NR-HF & \cite{Fraga1973} \\
75.6 & NR-HF ($^3F$ state, $d^8s^2$) & \cite{Thorhallsson1968} \\
21.15 & R-Dirac-HF & \cite{Bast2008b} \\
21.17 & R-RPA & \cite{Johnson1983} \\
24.581 & R-DKH2-MP2 & \cite{Granatier2011a} \\
26.612 & SR-ECP-CCSD & \cite{Bast2008b} \\
{\it DFT} \\
32 & R-Dirac-LDA & \cite{Miller2002,Doolen-1987} \\
30.15 & R-Dirac-LDA & \cite{Bast2008b} \\
26.60 & R-Dirac-CAMB3LYP & \cite{Bast2008b} \\
20.94--31.61 & R-Dirac, various DFT approx. & \cite{Bast2008b} \\
61.7 & TD-DFT(PGG) & \cite{Gould2016b} \\
20.0 & TD-DFT(PGG) & \cite{Gould2016c} \\
{\it empirical} \\
32$\pm$6 & Empirical, IP correlation & \cite{Fricke1986} \\
58.8 & Empirical, Slater rules & \cite{Ghosh2006a} \\
12.84 & Empirical, IP + radius correlations & \cite{Hohm2012} \\
47$\pm$24 & NR-HF, scaled & \cite{Miller1978} \\
\hline
\hline
\end{tabular}
\end{table}
\section{Computational Method}
The total energy in an homogeneous electric field for a closed-shell atom $E(F)$ can be written as (electric field set arbitrarily in $z$-direction, $F=F_z$),
\begin{equation}
\label{eq:e-of-f-for-atoms}
E(F) = E(0) + \frac{1}{2} \frac{\partial^2 E(F)}{\partial^2 F}\bigg\rvert_{0}F^2 +
\frac{1}{24} \frac{\partial^4 E(F)}{\partial^4 F}\bigg\rvert_{0}F^4 \cdots \, \,.
\end{equation}
with the static electric dipole polarizability $\alpha$ and (second) hyperpolarizability $\gamma$ defined as
\begin{equation}
\label{eq:polarizability-in-finite-field}
\alpha = -\frac{\partial^2 E(F)}{\partial^2 F}\bigg\rvert_{F=0} \quad {\rm and} \quad \gamma = -\frac{\partial^4 E(F)}{\partial^4 F}\bigg\rvert_{F=0} \, \,.
\end{equation}
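In practice, $\alpha$ and $\gamma$ are extracted by fitting computed total energies $E(F)$ on a grid of field strengths. A minimal Python sketch of this extraction is given below, with synthetic energies standing in for the coupled-cluster values described in the next paragraph; two field strengths $F$ and $2F$ suffice to separate the quadratic and quartic coefficients of the expansion above:
\begin{verbatim}
# Finite-field extraction, E(F) = E0 - (a/2) F^2 - (g/24) F^4,
# using synthetic data (a_in, g_in, E0 are illustrative numbers).
a_in, g_in, E0 = 26.1, 4.0e4, -4937.0        # atomic units

def energy(F):
    return E0 - 0.5 * a_in * F**2 - g_in / 24.0 * F**4

F  = 0.001                    # a.u.; the grid spans 0 ... 0.002
d1 = E0 - energy(F)           # = (a/2) F^2 + (g/24) F^4
d2 = E0 - energy(2.0 * F)

alpha = (16.0 * d1 - d2) / (6.0 * F**2)   # quartic term cancels
gamma = 2.0 * (d2 - 4.0 * d1) / F**4      # quadratic term cancels
print(alpha, gamma)           # recovers a_in and g_in
\end{verbatim}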
We computed the electronic energies of Pd in external electric fields with field strengths in the range $F=[0.0,0.002]$ a.u. and step size $\Delta F=0.00025$ a.u. at increasing levels of theory to get insight into how much the inclusion of relativistic and correlation effects, and specifically higher electronic excitations, influences the static atomic dipole polarizability. We used relativistic coupled-cluster theory which included excitations from singles, doubles and perturbative triples (CCSD(T)) as implemented in DIRAC-15 \cite{Dirac15,Visscher2001}. All 46 electrons and virtual orbitals up to 30~a.u. were considered in the correlation treatment. Calculations were performed with doubly-augmented, uncontracted, all-electron, triple- and quadruple-$\zeta$ (TZ and QZ) quality basis sets dyall.ae3z [30$s$22$p$15$d$8$f$5$g$] and dyall.ae4z [35$s$27$p$19$d$12$f$8$g$5$h$], respectively \cite{Dyall2007,Dyall2012}. The energies were then extrapolated to the complete basis set (CBS) limit using two-point extrapolation schemes utilizing exponential extrapolation for the Hartree-Fock energies \cite{Zhong2008} and inverse cubic extrapolation for the correlation energies \cite{Helgaker1997a}, respectively. We used values of 5.79 and 3.05 for $\alpha_{34}$ and $\beta_{34}$, respectively, as proposed by Neese \textit{et al.} \cite{Neese2011}.
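The two-point CBS extrapolation can be sketched just as compactly. The functional forms below, exponential in $\sqrt{X}$ for the Hartree-Fock energy and an inverse power $X^{-\beta}$ for the correlation energy, are the standard two-point schemes consistent with the $\alpha_{34}$ and $\beta_{34}$ parameters quoted above; the TZ/QZ input energies are placeholders, not the computed values:
\begin{verbatim}
import math

def cbs_hf(e3, e4, a=5.79, x=3, y=4):
    # E(X) = E_cbs + A * exp(-a * sqrt(X))
    f3 = math.exp(-a * math.sqrt(x))
    f4 = math.exp(-a * math.sqrt(y))
    A  = (e3 - e4) / (f3 - f4)
    return e3 - A * f3

def cbs_corr(e3, e4, b=3.05, x=3, y=4):
    # E(X) = E_cbs + A * X**(-b)
    return (x**b * e3 - y**b * e4) / (x**b - y**b)

# Placeholder TZ/QZ Hartree-Fock and correlation energies (hartree):
e_cbs = cbs_hf(-4937.210, -4937.215) + cbs_corr(-1.100, -1.150)
print(e_cbs)
\end{verbatim}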
The CCSD(T)/CBS calculations were performed non-relativistically, with inclusion of scalar-relativistic effects (X2C-Spinfree) \cite{Saue2011}, and at the two-component level including spin-orbit coupling; these are denoted CCSD(T)\textsubscript{NR}, CCSD(T)\textsubscript{SR} and CCSD(T)\textsubscript{SO}, respectively. The two-component calculations were carried out using the exact two-component X2C-mmf+Gaunt Hamiltonian of DIRAC-15, obtained from a transformation to a two-spinor basis after solving the four-component Dirac-Hartree-Fock equations.\cite{Sikkema2009}
Higher order excitations of the valence electrons were calculated as correction terms to the atomic energies at the scalar-relativistic level of theory, using the second-order Douglas-Kroll-Hess (DKH2) Hamiltonian \cite{Douglas1974,Hess1986,Jansen1989,Reiher2004,Reiher2004a,Peng2012}, and subsequently added to the CBS limit CCSD(T)\textsubscript{X2C} energies. Here we utilized Molpro 2015.1 \cite{Werner,Werner2012,Hampel1992,Deegan1994} in conjunction with the multi-reference coupled cluster code MRCC \cite{Kallay,Rolik2013,Kallay2005a,Kallay2008,Bomble2005}. An augmented, correlation-consistent, core-valence, Douglas-Kroll-Hess basis set aug-cc-pwCVTZ-DK \cite{Peterson2007} was used for these calculations. While we could correlate all electrons in the coupled-cluster calculations with full triples (CCSDT), we had to restrict the active occupied space to the 4$d$ electrons at the full quintuples level of theory (CCSDTQP).
To obtain the individual correction terms, we subtracted the lower-level result from the higher-level one. The full triple correction $\Delta$[T -- (T)]\textsubscript{SR} was obtained by subtracting the perturbative triple CCSD(T)-DKH2\textsubscript{(AE)}/aug-cc-pwCVTZ-DK contribution from the full triple energy CCSDT-DKH2\textsubscript{(AE)}/aug-cc-pwCVTZ-DK. For the full quadruple corrections $\Delta$[Q -- T]\textsubscript{SR} we took the energy difference between the CCSDTQ-DKH2\textsubscript{(4d)}/aug-cc-pwCVTZ-DK and CCSDT-DKH2\textsubscript{(4d)}/aug-cc-pwCVTZ-DK calculations with only the $4d$ electrons correlated. The same procedure was applied for the quintuples correction. However, the CCSDTQ(P) calculations with the TZ basis set were already at the limit of our computational resources, and for the full quintuples we had to restrict the basis set to double-$\zeta$ (DZ) quality.
\section{Results and Discussion}
The results of our calculations are shown in Table \ref{tab:pd-polarizability}. Scalar-relativistic effects lead to a non-negligible increase in $\alpha$ of 1.482 a.u. at the HF level and 1.836~a.u. at the CCSD(T) level of theory, which originate from the relativistic $4d$ orbital expansion. Spin-orbit coupling increases the dipole polarizability only by 0.032 and 0.026~a.u. at the HF and CCSD(T) level of theory, respectively. Electron correlation contributes 5.056~a.u. to $\alpha$ at the relativistic level. Out of this, 2.067 a.u. come from perturbative triples, 0.101~a.u. from the variational contribution to the triple correction (correcting the perturbative treatment of triples in CCSD(T)), while the quadruple and quintuple corrections are responsible for raising the polarizability of Pd by 0.079~a.u.
Our final value of 26.135~a.u. for the atomic polarizability of Pd is in good agreement with, for example, the recently reported non-relativistic, effective-core-potential CCSD value of 26.612~a.u.,\cite{Bast2008b} the DK-MP2 relativistic value of 24.581~a.u.,\cite{Granatier2011a} and various DFT-calculated values (e.g., the value of 26.60~a.u. from a CAMB3LYP DFT calculation \cite{Bast2008b}; see Table \ref{tab:pd-polarizability-overview}).
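The composition of this final value from the individual contributions listed in Table~\ref{tab:pd-polarizability} can be verified directly:
\begin{verbatim}
# Composite scheme for the final polarizability (values in a.u.):
alpha = 25.955    # CCSD(T), X2C+Gaunt, CBS limit
alpha += 0.101    # full triples:  CCSDT - CCSD(T)
alpha += 0.161    # quadruples:    CCSDTQ - CCSDT   (4d only)
alpha += -0.082   # quintuples:    CCSDTQP - CCSDTQ (4d only)
print(alpha)      # -> 26.135, the final value quoted above
\end{verbatim}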
\begin{table}[htb!]
\centering
\caption{Nonrelativistic (NR), scalar relativistic (SR) and X2C/Gaunt relativistic (R) atomic polarizabilities $\alpha$ and hyperpolarizabilities $\gamma$ (in atomic units) of Pd computed with the finite field method at different levels of theory.\footnote{Terminology is as follows: CCSD(T)\textsubscript{NR} = CCSD(T)\textsubscript{(AE)}/CBS; CCSD(T)\textsubscript{SR} = CCSD(T)-X2C-Spinfree\textsubscript{(AE)}/CBS, CCSD(T)\textsubscript{X2C} = CCSD(T)-X2C-Gaunt\textsubscript{(AE)}/CBS; $\Delta$[T -- (T)]\textsubscript{SR} = CCSDT-DKH2\textsubscript{(AE)}/TZ -- CCSD(T)-DKH2\textsubscript{(AE)}/TZ; $\Delta$[Q -- T]\textsubscript{SR} = CCSDTQ-DKH2\textsubscript{(4d)}/TZ -- CCSDT-DKH2\textsubscript{(4d)}/TZ; $\Delta$[P -- Q]\textsubscript{SR} = CCSDTQ(P)-DKH2\textsubscript{(4d)}/TZ -- CCSDTQ-DKH2\textsubscript{(4d)}/TZ + CCSDTQP-DKH2\textsubscript{(4d)}/DZ -- CCSDTQ(P)-DKH2\textsubscript{(4d)}/DZ.}}
\label{tab:pd-polarizability}
\begin{tabular}{lccc|ccc}
\hline
& \multicolumn{3}{c|}{$\alpha$} & \multicolumn{3}{c} {$\gamma$} \\
& NR & SR & R & NR & SR & R\\
\hline
HF TZ & 19.612 & 21.109 & 21.142 & 19252 & 13381 & 14203 \\
HF QZ & 19.575 & 21.060 & 21.092 & 19311 & 12527 & 14591 \\
HF CBS & 19.565 & 21.047 & 21.079 & 20149 & 12328 & 14669 \\
CCSD TZ & 22.898 & 24.560 & 24.587 & 27364 & 23855 & 22108 \\
CCSD QZ & 22.511 & 24.141 & 24.165 & 27339 & 21418 & 21002 \\
CCSD CBS & 22.252 & 23.864 & 23.888 & 26992 & 20135 & 19217 \\
CCSD(T) TZ & 24.718 & 26.606 & 26.635 & 33071 & 31640 & 27525 \\
CCSD(T) QZ & 24.344 & 26.198 & 26.225 & 34091 & 30245 & 27941 \\
CCSD(T) CBS & 24.093 & 25.929 & 25.955 & 34324 & 29734 & 28137 \\
\hline
$\Delta$[T -- (T)]\textsubscript{SR} & & & +0.101 & & & +3013 \\
$\Delta$[Q -- T]\textsubscript{SR} & & & +0.161 & & & +5088 \\
$\Delta$[P -- Q]\textsubscript{SR} & & & -0.082 & & & +3499 \\
\hline
Final value & & & \textbf{26.135} & & & \textbf{39737} \\
\hline
\hline
\end{tabular}
\end{table}
The atomic properties of all known Group 10 elements are summarized in Table \ref{tab:atomic-data-group-10}. We estimated the uncertainty in our calculations from the size of the quadruples plus quintuples corrections
(0.079 a.u.) and the error estimated from the finite field and CBS limit extrapolation (-0.018 a.u.) using different numbers of finite-field values. Thus, we estimate the total uncertainty to be 0.10 a.u. We note that the Gaunt term of the Breit operator included in our calculations decreases the polarizability by only 0.05 a.u. at the CCSD(T) level of theory using the augmented quadruple-zeta basis set. Furthermore, we estimated QED contributions using Pyykk\"o and Zhao's local approximation for the self-energy contribution \cite{Pyykko-Zhao-2003}. As one expects from polarizing a fully occupied 4d shell by an external electric field \cite{ThiSch10}, the change in the polarizability is very small (-0.007 a.u. at the Hartree-Fock level using Dyall's augmented QZ basis set) and well within our given uncertainty.
It is apparent from Table \ref{tab:atomic-data-group-10} that the trend in polarizability values for the four Group 10 elements alternates in magnitude from top (Ni) to bottom (Ds) in group 10 of the periodic table. This trend is not consistent with the steady increase in the values of the ionization energy for these four elements, casting doubt on the practice of using ionization energies to predict polarizabilities. For Pd, the small polarizability value clearly results from the different electron occupation compared to the other elements. On the other hand, the small value of the Ds ($6d^87s^2$) polarizability originates from the strong relativistic $7s$ shell contraction.
\begin{table}[htb!]
\centering
\caption{Atomic data (ionization potential, IP, electron affinity, EA, and dipole polarizability, $\alpha$) for the Group 10 elements Ni, Pd, Pt and Ds.}
\label{tab:atomic-data-group-10}
\begin{tabular}{l|rrr}
\hline
\hspace{0.8cm} & \hspace{0.5cm} IP/eV\footnote{Ref. \cite{Kramida}} & \hspace{0.8cm} EA/eV\footnote{Ref. \cite{Scheer1998,Bilodeau1999}.} & \hspace{1.0cm} $\alpha$/a.u. \\
\hline
Ni & 7.639877 & 1.15716 & 49$\pm$3\footnote{Average of recent theoretical values for scalar $\alpha$~ \cite{Chandler1987,Pou-Amerigo,Klos2005}. The uncertainty shows the range of reported values.} \\
Pd & 8.33686 & 0.56214 & 26.14(10)\footnote{This work.} \\
Pt & 8.95883 & 2.12510 & 48$\pm$4\footnote{Average of empirical and theoretical values for scalar $\alpha$~
\cite{Miller2002,Hohm2012,Gould2016b}. The uncertainty shows the range of reported values.} \\
Ds & 11.2(1) & -- & 32(3)\footnote{From Dirac-Hartree-Fock +RPA calculations with use of fractional occupation
numbers \cite{Dzuba2016a}} \\
\hline
\hline
\end{tabular}
\end{table}
We also provide information for the second hyperpolarizability of Pd, see Table \ref{tab:pd-polarizability}. Here we see extremely large relativistic effects due to the indirect coupling between the $4d$ and the relativistically contracted $5s$ orbitals (note that from the sum-over-states formula for the hyperpolarizabilities \cite{Saue2018book} we couple states of angular momentum $l$ with states ranging from $l-2$ to $l+2$ with $l\ge 0$). Electron correlation effects are therefore extremely large, which is well known for hyperpolarizabilities in general \cite{Kassimi1994}. For Pd, the triple, quadruple and quintuple contributions are so large that an accurate prediction of the hyperpolarizability cannot be made at this stage. Indeed, for such properties sum-over-states Monte Carlo configuration interaction (CI), equivalent to full CI, has recently been used to obtain close-to-exact values \cite{Coe-2014}. However, for transition elements this would be a formidable task. Future experimental measurements would therefore be welcome.
\section{Summary}
Our calculated value for the palladium atomic polarizability, 26.14(10)~a.u., is the most accurate obtained so far, and is exceptionally small compared to all other $d$-block and $f$-block atoms. This is a result of its unique 4$d^{10}$5$s^0$ valence electron configuration with a rather compact closed $4d$ shell. The hyperpolarizability is extremely sensitive to both relativistic and electron correlation effects and requires further detailed investigation at even higher levels of correlated theory.
\begin{acknowledgements}
We acknowledge financial support by the Alexander-von-Humboldt Foundation (Bonn, Germany) and the
Marsden Fund of the Royal Society of New Zealand (17-MAU-021). JN is grateful to Bowdoin College for sabbatical leave support. We thank Prof. Trond Saue (Toulouse) for useful discussions.
\end{acknowledgements}
\section{Introduction}
Ammonia (NH$_3$) was the first polyatomic molecule discovered in space
\citep{Cheung1968}, and has subsequently been one of the most extensively
observed molecules in the interstellar medium \citep{Ho1983}, with a set of
inversion transitions near 23\,GHz that are readily detectable with many
radio telescopes. Emission lines resulting from these transitions are widely
detected in the dense parts of both dark and star-forming molecular clouds
\citep[see e.g.][]{Harju1993,Jijina1999,Wienen2012}. Because ammonia is a
symmetric top molecule, an analysis of its excitation allows the effects of
temperature and density to be determined separately; as a result, the
metastable lines of ammonia are often used to derive the temperature and
density of dense clumps in molecular clouds
\citep{Walmsley1983,Danby1988,Kirsanova2014}.
Ammonia has also been detected in the circumstellar envelopes of evolved
stars, but has been observed less extensively in such environments than
in the interstellar medium. Several absorption features in the $\nu_2$
vibration$-$rotational bands around 10\,$\mu$m were detected in some
asymptotic giant branch (AGB) stars, as well as in a few massive supergiants
\citep[][]{Betz1979,McLaren1980,Betz:1987lr}. At about the same time, the
1.3 cm wavelength inversion transitions of ammonia were detected from C-rich
AGB and post-AGB stars \citep[][]{Betz:1987lr,
Nguyen-Q-Rieu1984,Nguyen-Q-Rieu:1986th}.
In IRC$+$10216 (CW\,Leo), ammonia was observed for the first time by its
infrared absorption lines in the $\nu_2$ band around 10\,$\mu$m
\citep{Betz1979}, and its detection in radio inversion transitions was
announced shortly thereafter \citep{Bell:1980hc}. In the following years,
new observations of IRC$+$10216 were performed in both infrared absorption
\citep{Keady1993,Monnier2000b} and radio inversion lines
\citep{Kwok1981,Bell1982,Nguyen-Q-Rieu1984,Gong2015}.
IRC$+$10216 was detected in the Two-micron Sky Survey as the brightest
infrared sky object at 5\,$\mu$m outside the solar system
\citep{Becklin1969}, and soon its heavily obscured central star was
identified as being C-rich based on the presence of strong CN absorption
bands \citep{Herbig1970,Miller1970}. Presently, IRC$+$10216 is the best
studied C-rich AGB star. About half of the molecules detected in space are
seen in the envelope of this source, due to its proximity (distance, $d$, of
130~pc, see \citealt{Menten2012}) and a relatively large mass loss rate
$\sim 1-4 \times10^{-5}$ M$_{\sun}$ yr$^{-1}$
\citep[e.g.][]{Groenewegen:1998rw,Truong-Bach:1991qa}.
The development of heterodyne technology for observations in the
far-infrared (FIR) enabled the first detection of the 1$_0$(s)-0$_0$(a)
rotational transition of ortho-NH$_3$ by the {\it Odin} satellite toward
IRC$+$10216 \citep{Hasegawa2006}, which was followed by the detection of the
same transition in some O-rich AGB stars and red-supergiants
\citep{Menten:2010pd} by the Heterodyne Instrument for the Far Infrared
\citep[{HIFI},][]{de-Graauw:2010qe} on board the
{\it Herschel}
Space Observatory
\citep[{\it Herschel},][]{Pilbratt:2010oq}. Furthermore, by fully
exploiting the capabilities of the HIFI instrument, we were able to observe
all nine rotational transitions up to the $J=3$ levels of ortho- and
para-NH$_3$ (this paper) in the envelope of IRC$+$10216.
There is a clear discrepancy between the estimated amount of para-NH$_3$
relative to molecular hydrogen in IRC$+$10216 derived from radio inversion
lines \citep[$3\times10^{-8}$;][]{Kwok1981} and that of ortho-NH$_3$ derived
from its lowest rotational line \citep[about $10^{-6}$;][]{Hasegawa2006}.
Nevertheless, both values lie within the range of ammonia abundances
observed towards other C-rich and O-rich stars, and cannot be explained by
standard chemical models \citep[see discussion in][]{Menten:2010pd}.
However, the chemical model presented recently by \cite{Decin:2010kl} seems
able to reproduce the high abundance of ammonia observed in IRC$+$10216 (see
their Fig.\,3). Their model, constructed with the aim of explaining the
presence of water vapour in the envelope of this C-rich star, assumes a
clumpy envelope structure, where a fraction of the interstellar UV photons
is able to penetrate deep into the envelope and dissociate mostly $^{13}$CO,
providing oxygen for O-rich chemistry in the inner warm parts of a C-rich
circumstellar envelope. On the other hand, \cite{Neufeld:2013fk} have
suggested, on the basis of H$_2$O isotopic ratios, that a recent model
invoking shock chemistry \citep{Cherchneff2012} may provide a more
successful explanation for the presence of water in envelopes of C-rich
stars. However, this model does not provide information on ammonia
formation.
In this paper we present new {\it Herschel}/HIFI observations of nine
rotational transitions of NH$_3$ (three ortho and six para lines), eight of
which have been detected for the first time, and an analysis of their
implications with the use of detailed modelling.
The observations and data reduction are presented in Section 2. Section 3
is devoted to the description of the molecular structure of ammonia, while
Section 4 presents details of the modelling procedure and the best fits found
for all rotational transitions. In Section 5 we discuss the results thereby
obtained and their consequences. Finally, a summary of this study is
presented in Section 6.
\section{Observations}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig1.eps}
\caption{
Diagram of energy levels of ortho- (|$K$|\,=\,0, 3, etc.) and para-NH$_{3}$ (|$K$|\,=\,1, 2, 4, etc.).
The inversion splitting between levels of different symmetry is
exaggerated for clarity. The observed rotational transitions
with frequencies in GHz are indicated with arrows.
}
\label{fignh3}
\end{figure}
\begin{table*}
\caption{Summary of NH$_3$ observations in IRC$+$10216.}
\label{TableObs}
\begin{tabular}{lrcrrccccrl}
\hline\hline
Transition\tablefootmark{a} & Frequency & Band &
E$_u$\tablefootmark{b} & {\it Herschel} & Obs. date &
Phase\tablefootmark{c} &
HPBW &
$\eta_{mb}$ &
Int. flux & Observing \\
& (GHz) & & (K) & OBSID & & & & &(K\,km\,s$^{-1}$) & mode \\
\hline
{\bf 1$_0$(s)-0$_0$(a)} & {\bf 572.498} & 1b & 29 & 1342195794 & 2010-05-04 & 0.22 & 37.5\arcsec & 0.62 & 22.7$\pm$1.2 & single point \\
& & & & 1342196413 & 2010-05-11 & 0.22 & & & & single point \\
2$_1$(s)-1$_1$(a) & 1168.452 & 5a & 58 & 1342196514 & 2010-05-13 & 0.23 & 18.2\arcsec & 0.59 & 19.5$\pm$2.0 & spectral scan \\
{\bf 2$_0$(a)-1$_0$(s)} & {\bf 1214.853} & 5a & 86 & 1342196514 & 2010-05-13 & 0.23 & 17.5\arcsec & & 39.2$\pm$3.9 & spectral scan\\
2$_1$(a)-1$_1$(s) & 1215.246 & 5a & 60 & 1342196514 & 2010-05-13 & 0.23 & 17.5\arcsec & & 21.8$\pm$2.2 & spectral scan \\
{\bf 3$_0$(s)-2$_0$(a)} & {\bf 1763.524} & 7a & 170 & 1342233281 & 2011-11-28 & 0.13 & 12.0\arcsec & 0.58 & 48.5$\pm$7.2 & single point \\
3$_1$(s)-2$_1$(a) & 1763.601 & 7a & 143 & 1342233281 & 2011-11-28 & 0.13 & 12.0\arcsec & & 23.7$\pm$3.6 & single point \\
3$_2$(s)-2$_2$(a) & 1763.823 & 7a & 127 & 1342233281 & 2011-11-28 & 0.13 & 12.0\arcsec & & 23.2$\pm$2.3 & single point \\
3$_1$(a)-2$_1$(s) & 1808.935 & 7b & 144 & 1342196574 & 2010-05-15 & 0.23 & 11.7\arcsec & 0.58 & 18.0$\pm$9.0 & spectral scan \\
3$_2$(a)-2$_2$(s) & 1810.380 & 7b & 128 & 1342196574 & 2010-05-15 & 0.23 & 11.7\arcsec & & 18.1$\pm$9.0 & spectral scan \\
\hline
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{ortho transitions and their frequencies are indicated in bold
face.}
\tablefoottext{b}{energies of levels for para-NH$_3$ should be increased by
22\,K if put on the common energy scale with ortho-NH$_3$.}
\tablefoottext{c}{
counted from the reference maximum phase of the light curve, $\phi$=0 on Julian date JD=2454554,
and period of 630 days \citep{Menten2012}.}
}
\end{table*}
Observations of IRC$+$10216 were carried out with the {\it Herschel}/HIFI
instrument as part of the HIFISTARS Guaranteed Time Key Program (Proposal
Id: KPGT\_vbujarra\_1; PI: V. Bujarrabal); in addition, a spectral line
survey has been carried out
for
this object (Proposal Id:
GT1{\_}jcernich\_4; PI: J. Cernicharo). The observed rotational lines of
ortho- and para-NH$_3$ are listed in Table~\ref{TableObs} and indicated in
Fig.\,\ref{fignh3}, which shows the diagram of the lowest rotational levels
of ammonia. For each observed transition, Table~\ref{TableObs} gives the
line identification, its frequency in GHz, the corresponding HIFI band, the
energy of the upper level in K, the {\it Herschel} OBServation ID, the date
of the observation, the optical phase ($\phi$) of the observation (counted
from the reference maximum phase of the light curve, $\phi$=0, on Julian
date JD=2454554 with an assumed period of 630 days \citep{Menten2012}), the
half power beam width (HPBW) of the {\it Herschel} telescope at the observed
frequency, the main beam efficiency $\eta_{mb}$, the integrated flux in
K\,km\,s$^{-1}$ with its estimated uncertainty, and the observing mode:
single tuning or spectral scan. In this paper, we analyse wide band
spectrometer (WBS) data only.
The HIFI observations of IRC$+$10216 were reduced with the latest version of
HIPE (13.0), using data generated with the Standard Product Generator (SPG) version 13.0.0. The data were processed with the HIPE pipeline to Level 2,
which was set to provide intensities expressed as the main beam temperature.
Main beam efficiencies were taken from the recent measurements
by Mueller et al.
2014\footnote{\path{http://herschel.esac.esa.int/twiki/pub/Public/HifiCalibrationWeb/HifiBeamReleaseNote_Sep2014.pdf}}.
They differ by up to 20\,\% from the older determinations
by \cite{Roelfsema2012}.
Frequencies are always given in the frame of the local standard of rest (LSR).
The statistical uncertainties in the integrated line fluxes, as derived
formally from the r.m.s.\ noise in the spectra (see below) are relatively
small, while it is known that systematic uncertainties in the HIFI flux
calibration are much larger \citep{Roelfsema2012}. Therefore, somewhat
arbitrarily, we have assumed a 5\% uncertainty in the integrated line flux
for lines observed in Band 1b, a 10\% uncertainty for lines in bands 5a and
7a, and a 50\% uncertainty for lines in band 7b. However, since two of the
lines observed in band 7a are blended (see below), we increased the assumed
uncertainty in their integrated flux from 10 to 15\%. The method used to
deconvolve line blends and to derive individual line fluxes is described in
Sect.\,2.2. The observed line profiles are shown by the solid lines in
Figure~\ref{figb}.
\subsection{Single point observations}
The HIFISTARS observations were all performed in the dual beam switch (DBS)
mode. In this mode, the HIFI internal steering mirror chops between the
source position and two positions located 3\arcmin\ on either side of the
science target. There are two entries in Table 1 for the ground-state transition
1$_0$(s)-0$_0$(a) of ortho-NH$_3$, since these two observations were made
with slightly shifted local oscillator frequencies. This procedure was adopted to
confirm the assignment of any observed spectral feature to either the upper
or lower sideband of the HIFI receivers \citep{Neufeld2010}. The resultant
spectra were co-added since no contamination was found to originate from the other
sideband. Spectra obtained for the horizontal (H) and vertical (V)
polarizations were found to be very similar (differences smaller than 5\,\%)
and were co-added too. After resampling to a channel width of 1 km s$^{-1}$,
the final baseline r.m.s. noise in the coadded spectra was 6 mK in
band 1b and 40 mK in band 7a.
\subsection{Spectral scan observations}
We made use of data from the full spectral scan of IRC$+$10216
(GT1{\_}jcernich\_4 by Cernicharo et al.,
in preparation)
to obtain
measurements of five additional transitions of ammonia observed in two
bands, 5a and 7b (see Table~\ref{TableObs}). The spectral scan observing
mode allows for the unique reconstruction of the single-sideband spectrum in
a wide spectral range at the price of significantly lower signal-to-noise
ratio. Spectral scan observations were also made in dual beam switch mode
with fast chop. Both the spectral resolution and signal-to-noise ratio are
lower for these observations than for the single point ones. Lines observed
in the scan mode were deconvolved to obtain a single-sideband spectrum. To
reduce the noise in the profiles, the final spectra were resampled to 1 km
s$^{-1}$ in band 5a and 2 km s$^{-1}$ in band 7b. The final r.m.s noise in
the resampled spectra was 125 mK and 200 mK in bands 5a and 7b,
respectively.
The para-NH${_3}$ transition 3$_1$(s)-2$_1$(a) at 1763.601 GHz (middle
panel) overlaps the ortho-NH${_3}$ transition 3$_0$(s)-2$_0$(a) at
1763.524 GHz (upper left panel). Therefore, the shape of the former line was
approximated in its blueshifted part using, as a template, a properly
rescaled profile of the para-NH$_{3}$ 3$_2$(s)-2$_2$(a) emission line (middle
right panel), which was observed with the same beam size and has a
similar excitation. The line profile of the ortho-NH${_3}$ transition
3$_0$(s)-2$_0$(a) was then obtained by subtracting the derived
para-NH$_{3}$ 3$_1$(s)-2$_1$(a) line emission. The resulting profiles are
shown by dot-dashed lines in Fig.\,2.
\subsection{Other observations}
To verify our best fit models, we used radio and mid-infrared data for
ammonia available from the literature. In this paper, we exploited radio
data from \cite{Gong2015}, who recently observed five metastable inversion
transitions ($J,K$) = (1,1), (2,2), (3,3), (4,4), (6,6) of ortho- and para-
ammonia in a 1.3\,cm line survey toward IRC$+$10216. The dates of those
observations correspond to phases, $\phi$, from about $\sim$0.62 to
$\sim$0.97 \citep{Gong2015}. The half power beam width amounts to
40\arcsec\ at the frequency of the inversion lines (about 24 GHz) for
observations with the Effelsberg 100\,m radio telescope \citep{Gong2015}.
Flux densities, $S_{\nu}$ in Jy, were converted to main beam temperature
$T_{\rm mb}$, in K, using the conversion factors suggested by the authors:
$T_{\rm mb}/S_{\nu}=1.33$ K/Jy for the (6,6) line at 25 GHz and $T_{\rm
mb}/S_{\nu}=1.5$ K/Jy for the remaining lines.
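These factors are consistent with the standard Gaussian-beam relation $T_{\rm mb}/S_{\nu}=\lambda^{2}/(2k\,\Omega_{\rm mb})$, with $\Omega_{\rm mb}=1.133\,\theta_{\rm HPBW}^{2}$. A short Python check follows; the effective HPBW of $\approx$38\arcsec\ adopted below is our assumption, chosen close to the quoted 40\arcsec:
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23    # J/K
c   = 2.99792458e8    # m/s

def K_per_Jy(nu_GHz, hpbw_arcsec):
    """T_mb per unit flux density for a Gaussian main beam."""
    lam = c / (nu_GHz * 1e9)
    theta = hpbw_arcsec * np.pi / (180.0 * 3600.0)   # rad
    omega_mb = 1.133 * theta**2                      # sr
    return lam**2 / (2.0 * k_B * omega_mb) * 1e-26   # K/Jy

print(K_per_Jy(23.69, 38.0))   # ~1.5  K/Jy, near the (1,1) line
print(K_per_Jy(25.06, 38.0))   # ~1.35 K/Jy, near the (6,6) line
\end{verbatim}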
By way of comparison with the mid-infrared absorptions of ammonia, we used
profiles of the $\nu_2$ transitions published by \cite{Keady1993} that were
observed at a spectral resolution of 0.009\,cm$^{-1}$ (corresponding to 2.9
km~s$^{-1}$) with the Fourier Transform Spectrometer at the Coud{\'e} focus
of the 3.8\,m Kitt Peak Mayall telescope. Observations of absorption from
the metastable rotational levels within the ground vibrational level of
ortho- and para- ammonia, $aR(0,0)$, $aQ(2,2)$, $aQ(3,3)$, $aQ(4,4)$,
$aQ(6,6)$ were also used for comparison with the results of our modelling.
Formally, from the aperture of the telescope, we can estimate the HPBW to be $0\farcs67$, using the expression
$1.22\times\lambda/D$, where $\lambda$ is the wavelength and $D$ is the diameter of
the telescope aperture. However, taking into account all instrumental
effects, we estimate that the HPBW, in this case, could be as large as
1\arcsec.
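For completeness, the diffraction estimate quoted above is reproduced by the following trivial sketch:
\begin{verbatim}
import numpy as np

def diffraction_hpbw_arcsec(wavelength_m, diameter_m):
    """Rayleigh criterion 1.22*lambda/D, converted to arcsec."""
    return 1.22 * wavelength_m / diameter_m * 180.0/np.pi * 3600.0

# nu_2 band near 10 um, Mayall telescope aperture D = 3.8 m
print(diffraction_hpbw_arcsec(10.0e-6, 3.8))   # ~0.66-0.67 arcsec
\end{verbatim}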
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig2.eps}
\caption{
HIFI observations of rotational transitions of ortho-NH$_{3}$ (left column)
and para-NH$_{3}$ (middle and right columns) are shown by solid lines.
Approximated line profiles (see Sect.\,2.2) of two blended lines
3$_0$(s)-2$_0$(a) and $3_1$(s)-$2_1$(a) are shown by dot-dashed lines.
Emission profiles are overplotted with theoretical profiles (red dashed
lines)
from our best fit models computed separately for each ammonia spin
isomer. The effect of line overlapping in case of the two blends is shown with
blue dashed lines. The theoretical profiles for the three lines that were
observed at phase $\phi=$\,0.13 (see Table\,1) are rescaled up to mimic
computations at phase $\phi=$\,0.23 (see Sect\,4.5 for details). Our
approach in searching for the best fit is described in Sect.\,4.5, and the best
fitting parameters are compiled in Table\,2.
}
\label{figb}
\end{figure*}
\section{Ammonia model}
Different relative orientations of the spins of the three hydrogen nuclei
give rise to two distinct species of NH$_3$: ortho and para. The general
criterion governing which levels belong to ortho- and which to para-ammonia
is formulated in terms of representations of the molecular symmetry group
\citep[see][]{BunkerJensen}. According to this formalism, ortho states
belong to the $A_2'$ and $A_2''$ representations of the molecular symmetry
group D$_{3h}$, and para states to the $E'$ and $E''$ representations. For the
electronic ground state of ammonia, this translates into the rule that
ortho-NH$_3$ has levels with $K=3n$, where $n=0,1,\ldots$, and para-NH$_3$ has
all other levels. The two species do not interact either radiatively or
collisionally. The ortho- to para-NH$_3$ ratio is determined at the moment
of the formation of the molecule. Hence, we consider ortho- and
para-ammonia as separate molecular species. Transitions of the two species
whose frequencies overlap, such as the ortho-NH$_3$ 3$_0$(s)-2$_0$(a)
transition and the para-NH$_3$ 3$_1$(s)-2$_1$(a) transition, open the
possibility of an interaction through the emission and absorption of line
photons. This is considered further in the discussion below.
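The $K$-based assignment rule stated above can be encoded in a few lines (an illustrative helper, not part of our modelling code):
\begin{verbatim}
def nh3_spin_isomer(K):
    """Ground-state NH3: ortho levels have |K| = 3n (n = 0, 1, ...),
    para levels have all remaining K; the rule is exchanged for the
    asymmetric vibrational modes (see below)."""
    return "ortho" if abs(K) % 3 == 0 else "para"

assert nh3_spin_isomer(0) == "ortho"   # e.g. 1_0(s)-0_0(a)
assert nh3_spin_isomer(3) == "ortho"   # e.g. the (3,3) inversion line
assert nh3_spin_isomer(2) == "para"    # e.g. the (2,2) inversion line
\end{verbatim}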
Ammonia may oscillate in six vibrational modes, of four distinct
frequencies: the symmetric stretch $\nu_1$, the symmetric bend $\nu_2$, the
doubly degenerate asymmetric stretch $\nu_3^{l_3}$, and the doubly
degenerate asymmetric bend $\nu_4^{l_4}$.
Transitions from the ground vibrational state to each of these excited
states are observed as vibrational bands at 3, 10, 2.9, and 6 $\mu$m,
respectively. The band intensities are characterized by the vibrational
transition moments 0.027, 0.24, 0.018, and 0.083, respectively
\citep{Yurchenko2011}. As long as we consider only the lowest excited
vibrational modes, the rule governing the assignment of levels of given $K$
to the ortho and para species, discussed above for the ground state, is
preserved for the rotational levels of the symmetric modes and exchanged for
the asymmetric ones.
The vibrational ground state of ammonia is split into two states of opposite
parities, a consequence of the low energy barrier to inversion of the
molecule. Symmetry considerations exclude half of the rotational levels for
the $K=0$ ladder. Only transitions with $\Delta K=0$ are electric
dipole-allowed. As a result, there is a characteristic doubling of
rotational transitions allowed between levels of opposite parities for
transitions with $K\neq 0$. Forbidden transitions ($\Delta K\neq 0$) are
significantly weaker. Transitions between split sub-levels are allowed as
well and give rise to a large number of lines around 23-25\,GHz, the so
called inversion lines. Their hyperfine structure (hfs) splitting may be
neglected for the rotational and vibrational transitions, but it does
influence the profiles of the inversion lines, although for IRC$+$10216 the
maximal spread of the hfs components is smaller than the expansion velocity
of the circumstellar envelope (CSE). Moreover, the intensities of the hfs
satellite features relative to that of the main feature (at the central
frequency) only become appreciable if an inversion transition (with the
possible exception of the (1,1) transition) attains significant optical
depth.
The list of transitions and their strengths was extracted from the recent
BYTe computations \citep{Yurchenko2011}. In the initial analysis, we
extended the sets of molecular data for ortho- and para-NH$_3$ compiled in
the LAMDA database \citep{Schoier2005} by adding energy levels for the excited
vibrational states. Two data sets were explored, the first one with only the
$\nu_2$=1 levels,
and the second one with $\nu_1$=1, $\nu_2$=1 and 2, $\nu_3$=1, and $\nu_4$=1 levels.
The rotational quantum numbers of the vibrational levels were limited to those
originally used in the ground state, i.e. $J,K\le$7,7 in ortho- and
$J,K\le$5,6 in para-ammonia. Both dipole ($\Delta K = 0$) and forbidden
($\Delta K\neq$0) transitions between the levels were included.
We found that the inclusion of the $\nu_2$=1 rovibrational states has a
dramatic effect on the flux predicted for the observed pure rotational
transitions, when compared with predictions obtained without the inclusion
of vibrationally excited states. However, the additional inclusion of the
$\nu_1$=1, $\nu_2$=2,$\nu_3$=1, and $\nu_4$=1 vibrational states has a
negligible effect on the computed fluxes for the observed transitions, a
maximum increase of only 2 percent being obtained for the flux of the ground
rotational transition of ortho-NH$_{3}$ with even smaller increases for the
remaining emission lines. This conclusion, which is based on a model of
a molecular structure limited to rotational levels up to $J=7$ in each
vibrational state, is explained by the fact that the vibrational transition
moments to the symmetric bending state are significantly higher than those
to the remaining vibrational states. In fact, the radiative pumping rate in
the $\nu_2$ mode dominates over radiative pumping rates in the remaining
modes, even in the innermost parts of the envelope where shorter-wavelength
radiation prevails. At larger distances from the star, dust opacity
decreases the photon density at shorter wavelengths more effectively,
further reducing the relative effect of radiative pumping in other modes.
The IR pumping effects for ammonia have been investigated previously by
\cite{Schoier2011} and \cite{Danilovich2014}.
Solving the radiative transfer, we do not distinguish between photons
produced in the envelope and photons coming from the central star. However,
by comparing the radiative rates from our best fit model with those
obtained when only stellar photons are present, we were able to estimate
their relative importance. Only in the inner part of the envelope is radiative
excitation in the $\nu_2$ transition dominated by the stellar photons.
Already at seven stellar radii, the contribution from the envelope begins to
prevail over that from the central star, and this ratio reaches twenty at
the outer edge.
Our final model for ammonia includes all levels up to $J=15$ in both the
vibrational ground state and the $\nu_2$=1 vibrational state, amounting to a
total of 172 levels and 1579 transitions for ortho-NH$_3$ and 340 levels and
4434 transitions for para-NH$_3$. We include the ground state and
vibrational levels up to 3300 cm$^{-1}$, corresponding to 4750\,K, above
ground. The completeness of the levels used in the computations may be
judged by comparing the partition function calculated for the model with
that obtained
from
the full list of lines from the BYTe computations
\citep{Yurchenko2011}. With the number of levels given above, the partition
function for the ortho species is equal to the BYTe number for temperatures
up to 300~K, and is lower by $2\%$, $28\%$, and $40\%$, respectively, at 600,
1000, and 1200~K. The partition function of the para species is also complete
up to 300~K, and is lower by $10\%$, $40\%$, and $55\%$, respectively, at
600, 1000, and 1200~K. The incompleteness in the levels may result in an
overestimate of their populations in the dense and hot inner parts of
the envelope.
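Such a completeness check amounts to a direct Boltzmann sum over the adopted level list; a schematic sketch, where \texttt{E\_cm} and \texttt{g} are placeholders for the level energies and degeneracies taken from the BYTe line list:
\begin{verbatim}
import numpy as np

HC_OVER_K = 1.438777  # cm K, second radiation constant

def partition_function(E_cm, g, T):
    """Z(T) = sum_i g_i exp(-hc E_i / kT), E_i in cm^-1."""
    E_cm = np.asarray(E_cm, dtype=float)
    g = np.asarray(g, dtype=float)
    return np.sum(g * np.exp(-HC_OVER_K * E_cm / T))

# The ratio Z_truncated(T)/Z_full(T) quantifies the completeness of
# a truncated level list; for our ortho-NH3 model it is ~0.98 at
# 600 K but only ~0.72 at 1000 K (see text).
\end{verbatim}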
We adopted the collisional rate coefficients from \cite{Danby1988}.
The rate coefficients for collisional de-excitation
are available only for low-lying levels below $J=6$ and for
a maximum temperature of 300 K. Collisional rates were extrapolated to
higher temperatures with a scaling proportional to the square root of the gas
temperature. The extension of collisional rates to other levels is necessarily quite
crude. At first we neglected collisional rates between levels not
available in Danby's work. In this case, the ladders of levels with $K$ higher
than 6 in ortho-ammonia and higher than 5 in para-ammonia are populated only
by forbidden radiative transitions. To analyse the influence of the unknown
collisional rates on the excitation of NH$_3$, we carried out additional computations
based on crude estimates of the collisional rates; here, we adopted collisional
depopulation rates of 10$^{-11}$ or 10$^{-10}$\,cm$^{3}$\,s$^{-1}$ for rotational states
in the ground vibrational state, and rates of
$10^{-14}$\,cm$^{3}$\,s$^{-1}$ for excited vibrational states.
None of these approximations seem to
significantly influence our conclusions inferred from the modelling of the
observed transitions.
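The adopted temperature scaling and the crude rate fillers can be summarized in a few lines (hypothetical numerical values; the tabulated rates of \cite{Danby1988} extend only to 300\,K):
\begin{verbatim}
import numpy as np

def extrapolate_rate(k_300, T):
    """Scale a 300 K collisional de-excitation rate coefficient to
    temperature T proportionally to sqrt(T), as adopted here."""
    return k_300 * np.sqrt(T / 300.0)

k_300 = 2.0e-11                         # cm^3 s^-1 (hypothetical)
print(extrapolate_rate(k_300, 1200.0))  # -> 4.0e-11 cm^3 s^-1

# Crude fillers for transitions absent from the tables (see text):
K_FILL_GROUND  = 1.0e-11   # or 1.0e-10, ground vibrational state
K_FILL_EXCITED = 1.0e-14   # excited vibrational states
\end{verbatim}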
\section{Modelling}
Here, we present our numerical code and then describe all the assumptions made and
parameters investigated during our search for the best fits to the observed
rotational transitions of ammonia. After that, we describe the method used to
search for the best fit to the data, and present the results thereby obtained.
\subsection{Numerical code and procedures}
To model circumstellar absorption and emission lines, we developed a
numerical code, MOLEXCSE (Molecular Line EXcitation in CircumStellar
Envelopes), to solve the non-LTE radiative transfer of molecular lines and
the dust continuum. Here, the methodology was to consistently include the effects
of optical pumping by the central star and infrared pumping by
circumstellar dust upon the population of the molecular levels. The first
effect is more critical in the case of post-AGB stars, and the second one
more important in AGB stars \citep[see e.g.][]{TruongBach1987}. The
effect of the dust on the radiation intensity inside the envelope is
included by solving the radiative transfer for the line and
continuum radiation. For this purpose, the critical properties of the dust --
the coefficients of
extinction, scattering, and thermal emission -- are determined by a separate
code by modelling the observed spectral energy distribution of the source,
as described in Section 4.4. Reproducing the
observed continuum fluxes allows the absorption
line profiles for mid-infrared vibrational transitions to be computed.
The radiative transfer equation is formulated in the comoving frame
\citep{Mihalas1975} to include the effects of an expanding spherical
shell. The only difference is the linearization of the differential equations
along tangent rays on a geometrical grid instead of on
grid of optical depths.
The purpose of this modification was to avoid numerical problems when
maser or laser transitions emerge during the iterations.
The simultaneous solution of the statistical equilibrium equations and of
the radiative transfer in lines is a non-linear process and, to be efficient,
requires linearization of the equations. Full linearization of the
radiative transfer equation is very complex and its solution is
time-consuming. In the code we follow the approach presented by
\citet{Schoenberg1986}. The original formulation of their approximate
Newton-Raphson operator was modified to include the geometrical formulation
of radiative transfer mentioned above.
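The iteration principle can be illustrated on the textbook problem of a two-level atom with coherent scattering in a static 1-D slab, where the exact $\Lambda$ operator is the $E_1$ kernel and a diagonal (local) approximate operator drives the Newton-like update. The sketch below is a toy problem with a deliberately crude quadrature, not the comoving-frame scheme of MOLEXCSE:
\begin{verbatim}
import numpy as np
from scipy.special import exp1

eps, B = 1.0e-3, 1.0                 # S = (1-eps)*J + eps*B
tau = np.linspace(0.0, 50.0, 400)
d = tau[1] - tau[0]

# Lambda(t,t') = 0.5*E1(|t-t'|); integrate the singular diagonal cell
dt = np.abs(tau[:, None] - tau[None, :])
Lam = 0.5 * exp1(np.where(dt > 0.0, dt, 1.0)) * d
h = 0.5 * d
np.fill_diagonal(Lam, h * exp1(h) + 1.0 - np.exp(-h))

S = np.full_like(tau, eps * B)       # initial guess
diag = np.diag(Lam).copy()           # diagonal approximate operator
for it in range(1000):
    J = Lam @ S                      # formal solution, J = Lambda[S]
    dS = (eps*B + (1.0 - eps)*J - S) / (1.0 - (1.0 - eps)*diag)
    S += dS                          # approximate Newton-Raphson step
    if np.max(np.abs(dS) / S) < 1e-6:
        break

print(it, S[0] / B)  # surface value near the classical sqrt(eps) law
\end{verbatim}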
The program possesses two additional features useful in the application to
ammonia. First, it enables the simultaneous solution of the radiative transfer
problem for more than one molecule. Second, the radiative transfer may be
solved with the inclusion of line overlap effects between different
molecules. With these features, one can consistently compute the effects of line
blending between ortho- and para-NH$_3$. A prominent example of such a case
is the overlap between rotational transitions of para-NH${_3}$
3$_1$(s)-2$_1$(a) and of ortho-NH${_3}$ 3$_0$(s)-2$_0$(a).
In computing the profiles of the inversion lines, we resolved the hyperfine
structure of ammonia, including the effects of quadrupole and magnetic
splitting, as measured by \citet{Kukolich1967} and updated by
\citet{Rydbeck1977}. Line strengths were computed when necessary using the
formula given by \citet{Thaddeus1964}. These theoretical line strengths are
in good agreement with the observed intensities both in laboratory
experiments \citep{Kukolich1967} and in observations of molecular clouds
\citep{Rydbeck1977}. The magnetic splitting is much smaller than the
quadrupole splitting and has little effect on the line profiles and
therefore was neglected in our computations. When solving the radiative
transfer, the populations of the sublevels are distributed according to the
statistical weights, i.e. assuming local thermodynamical equilibrium (LTE)
between sublevels. The relative LTE ratios of intensities as defined by
\cite{Osorio2009} were applied including effects of line overlap. The code
has been tested by comparison with previously published models of AGB
circumstellar envelopes. A detailed description of our code and the tests
we conducted will be published elsewhere.
\subsection{Thermal and density structure of the envelope}
The detailed structure of IRC$+$10216's envelope has been the subject of many studies.
We mention here only those based on the analysis of CO emission lines
\citep{Crosas1997, Schoier:2002nx, Agundez2012, deBeck2012} or the spectral energy
distribution \citep[e.g.][]{Menshchikov:2001kl}.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig3.eps}
\caption{
Gas temperature within the envelope of IRC$+$10216 adopted for this work
(upper panel) from \cite{Crosas1997} is shown by a solid line, while that
from \cite{deBeck2012} is shown by a blue dotted line.
The dust temperature is shown by a red dashed line.
The lower panel shows the
distribution of ammonia from our best fitting models (solid line),
and its theoretical distribution (dashed line) from \cite{Decin:2010kl}.
The angular distance is given in the top axis of the upper panel for an assumed
distance to IRC$+$10216 of 130 pc.
}
\label{figmod}
\end{figure}
Several models for the thermal structure have been proposed to explain the
CO emissions from IRC$+$10216. These models differ in the assumed distance,
mass loss rate and variation of gas velocity in the inner parts of the
envelope. For the purpose of this paper, we have chosen, for the basic
temperature structure of the envelope, the model of \cite{Crosas1997}, but
with the distance reduced from 150 to 130 pc following
\cite{Groenewegen:2012fu} \citep[see also][]{Menten2012}. The gas
temperature structure of \cite{Crosas1997} is based on self-consistent
computations of the temperature structure and radiative transfer in CO,
constrained by observations of CO emissions up to $J=6-5$.
The adopted distance, expansion velocity and mass loss rate are listed in Table\,2
and the gas temperature structure, $T_{\rm gas}$, is
presented in the upper panel of Figure~\ref{figmod} by the solid line.
The temperature profile has been extended to the inner part of the envelope by
assuming a constant value of 1200~K.
More recently, \cite{deBeck2012} used the GASTRoNOoM model (Decin et al.\ 2006; 2010)
to derive the gas temperature structure shown by the blue dotted line
in the upper panel of Figure~\ref{figmod}, and found it capable
of explaining HIFI observations of CO up to $J=16-15$.
Figure~\ref{figmod} shows these
representative $T_{\rm gas}$ models to have distinct
temperature structures, which differ mainly in the middle part of the
envelope.
The density structure of the envelope is computed from the equation of mass
conservation, assuming that the gas is entirely composed of molecular
hydrogen and that the mass loss rate and outflow velocity are constant. The
microturbulent velocity was set to be constant throughout the envelope and
equal to 1.0\,km\,s$^{-1}$ \citep{Crosas1997}. In fact this value seems to
explain the observed shape of the lowest ortho-NH$_{3}$
1$_0$(s)-0$_0$(a) emission well.
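For reference, the resulting density law is just mass conservation for a constant-velocity wind; a minimal sketch with the Table\,2 parameters (assuming, as stated, that the gas is pure H$_2$, so the mass per particle is $2m_{\rm H}$):
\begin{verbatim}
import numpy as np

M_SUN = 1.989e33     # g
YR    = 3.156e7      # s
M_H   = 1.6726e-24   # g

def n_H2(r_cm, mdot_msun_yr=3.25e-5, v_kms=14.5):
    """H2 number density n = Mdot / (4 pi r^2 v m), m = 2 m_H."""
    mdot = mdot_msun_yr * M_SUN / YR           # g/s
    v = v_kms * 1.0e5                          # cm/s
    return mdot / (4.0 * np.pi * r_cm**2 * v * 2.0 * M_H)

print(n_H2(1.0e16))   # ~3e5 cm^-3 at r = 10^16 cm
\end{verbatim}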
\subsection{Distribution of ammonia}
The distribution of ammonia in the circumstellar envelope is governed by the
poorly-understood process of ammonia formation, and the well-understood
process of its photodissociation in the outer part of the envelope. A
detailed chemical model predicting the formation of ammonia and water as a
result of photo-processes in IRC$+$10216 was proposed by \cite{Agundez2010}
(see also \cite{Decin:2010kl}). This model predicts peak abundances
of ammonia within the observed values but, as we show below, in its present
version cannot explain the observed intensities of the rotational
transitions.
For the purpose of this paper, we assumed that
in the middle parts of the
envelope the maximum abundance of ammonia relative to H$_2$, $f_0$, is
constant, while in the inner and outer layers it is increasing and
decreasing, respectively. To distinguish between ortho- and para-ammonia,
their individual abundances are further designated $f$(ortho-NH$_3$) and
$f$(para-NH$_3$). The abundance profile adopted for
the ammonia in the inner layers is based on the model of
\cite{Decin:2010kl} (see their
supplementary material).
For each kind
of ammonia, we parametrized the increase of its abundance in the inner parts
of the envelope by two free parameters: the formation radius, $R_{\rm f}$, and
$f_{0}$. To describe the increase in the
ammonia abundance in the inner envelope, we used a fit to the predictions of
\cite{Decin:2010kl} of the form
$f(r) = f_{0} \times 10^{-0.434 (R_{{\rm f}}/r)^{3}}$, where $r$ is
radial distance from the central star. This parametrization, despite its
theoretical origin, is a more realistic description of the ammonia
distribution than a rather unphysical sharp increase of the ammonia abundance at
a fixed radius. We note that $R_{\rm f}$ may be formally lower than the inner dust
shell radius. To describe the decrease in the ammonia abundance near
the photodissociation radius, $R_{\rm ph}$, in the
outer layers of the
envelope, we used the analytical parametrization
$f(r) = f_0 \exp\left[-\ln 2\,(r/R_{\rm ph})^{\alpha}\right]$, which is
frequently used in the parametrization of
CO photodissociation \citep[see e.g.][]{Schoier2001}. A slope $\alpha =
3.2$ was adopted here, in accord with the fit to a detailed chemical
model of NH$_{3}$ photodissociation computed using the CSENV code by
\cite{Mamon1988}. The ortho- and para-ammonia abundances, the inner radius
of ammonia formation, $R_{{\rm f}}$, and its photodissociation radius,
$R_{{\rm ph}}$, are free parameters in the fitting procedure.
The distribution of ammonia from our best-fit models and from
\cite{Decin:2010kl} are shown in Fig.~\ref{figmod} by the solid and dashed
lines, respectively.
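For convenience, the full adopted abundance profile (inner rise times outer cutoff) is easily evaluated; a sketch using the ortho-NH$_3$ best-fit values from Table\,2:
\begin{verbatim}
import numpy as np

def f_NH3(r_cm, f0=2.8e-8, R_f=1.0e14, R_ph=3.0e16, alpha=3.2):
    """Adopted NH3 abundance profile: inner rise fitted to the
    Decin et al. (2010) model times an exponential
    photodissociation cutoff in the outer envelope."""
    r = np.asarray(r_cm, dtype=float)
    rise   = 10.0**(-0.434 * (R_f / r)**3)
    cutoff = np.exp(-np.log(2.0) * (r / R_ph)**alpha)
    return f0 * rise * cutoff

r = np.logspace(14.0, 17.0, 4)
print(f_NH3(r))   # ~0.37*f0 at R_f, ~f0 in between, f0/2 at R_ph
\end{verbatim}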
\subsection{Dust shell model of IRC$+$10216}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig4.eps}
\caption{
The fit to the flux for L$_{\rm avg}$=8500\,L$_\odot$ and an effective
temperature of 2300 K is shown in both panels by the black solid line.
The contribution from the central star is shown in the bottom panel
by the dash-dotted magenta line.
SWS and LWS ISO spectra are shown by red dotted lines. Photometric data
below 1\,$\mu$m are from GSC 2.3, those between 1.25 and 5 $\mu$m are
from \cite{LeBertre1992} at three different phases:
0.02 (stars), 0.21 (filled squares) and 0.23 (dotted squares),
while at longer wavelengths the flux is from IRAS
(crosses) and COBE DIRBE (x) measurements.
The dust model obtained for L$_{\rm max}$=11850\,L$_\odot$
is shown by dashed lines in each panel (see text for details).
}
\label{figd}
\end{figure}
As explained in Sect.\,3, the excitation of ammonia is dominated by
radiative pumping via the 10\,$\mu$m $\nu_2$=1 band. Therefore, to model
the ammonia lines we need to estimate the continuum flux at
10\,$\mu$m inside the whole envelope. For that purpose, we used a dust radiative
transfer model \citep{Szczerba:1997eu}, which is able to provide all
necessary physical variables to MOLEXCSE. We fit a combination of flux
measurements from the Guide Star Catalogue ver.\ 2.3 (GSC 2.3) for
$\lambda\,<\,1\mu$m, photometric data between 1.25 and 5 $\mu$m from
\cite{LeBertre1992}, and at longer wavelengths from IRAS and COBE DIRBE. In
addition, as the strongest constraint, we have used ISO Short Wavelength
Spectrograph (SWS) and Long Wavelength Spectrograph (LWS) spectra. They
were obtained on May 31 1996 (JD=2450235) and correspond to phase $\phi$ of
0.24, counted from the reference maximum phase of the light curve, $\phi$=0,
on November 17 1988 (JD=2447483), and given a period of 649 days
\citep{Menshchikov:2001kl}. We have used this older parameterization of the
IRC$+$10216 variability as it was obtained closer to the date of
the ISO observations. For this phase of pulsation and the distance of 130 pc
determined recently by \cite{Groenewegen:2012fu}, the average luminosity of
IRC$+$10216 was estimated to be about 8500 L$_\sun$
\citep{Menshchikov:2001kl, Menten2012}.
As dust constituents, we adopted amorphous carbon of AC type from
\cite{Rouleau:1991nx} and SiC from \cite{Pegourie:1988eu}, both with a
power-law distribution of grain radii with an index of $-$3.5 between 0.1
and 0.43\,$\mu$m. We assumed a maximum allowed dust temperature of 1300\,K.
We assumed a constant outflow velocity equal to the observed terminal
velocity of 14.5\,km\,s$^{-1}$, in spite of a clear variation of the
outflowing velocity in the inner part of envelope seen in higher molecular
transitions \citep{Agundez2012}, and a constant dust mass loss rate. The
derived dust mass loss rates were $1.0\times10^{-7}$ M$_\odot$ yr$^{-1}$ and
$3.0\times10^{-9}$ M$_\odot$ yr$^{-1}$ for AC and SiC dust, respectively.
The inner shell radius corresponding to the assumed maximum dust temperature
is reached at a distance of about 3 stellar radii. The required total
optical depth of the envelope at V is 18.5 magnitudes, which corresponds to
$\tau_{10{\mu}m}$ equal to 0.18. The parameters of our model for ammonia in
IRC$+$10216 are listed in Table~\ref{TableModel} and the mean dust
temperature is shown by the red dashed line in Figure~\ref{figmod}. As is shown
below, the dust temperature determines the radiation intensity in the
continuum, which is important for vibrational pumping.
\begin{table}
\caption{Model for IRC$+$10216}
\label{TableModel}
\begin{tabular}{ll}
\hline\hline
Parameter & Value \\
\hline
Distance & $d=130$ pc \\
Expansion velocity & $v_{\rm exp}=14.5$ km s$^{-1}$ \\
Mass loss rate (total hydrogen) & ${\dot{M}}=3.25\times10^{-5}$ M$_\odot$ yr$^{-1}$\\
Stellar radius & $R_{\star} = 3.9\times10^{13}$ cm \\
Effective temperature & $T_{\rm eff}=2300$ K \\
Luminosity (average) & $L_{\rm avg} = 8500$ L$_{\odot}$ \\
Luminosity at maximum & $L_{\rm max} = 11850$ L$_{\odot}$\\
Inner shell radius & $1.2\times10^{14}$ cm ($\sim$3\,$R_{\star}$)\\
\,
{\it Best-fit parameters:} & \\
Formation radius $R_{\rm f}$ & $1.0\times10^{14}$ cm ($\sim$2.5\,$R_{\star}$)\\
Photodissociation radius $R_{\rm ph}$ & $> 2-3\times10^{16}$ cm \\
$f$(ortho-NH$_3$) & $(2.8\pm0.5)\times10^{-8} (\frac{3.25\times10^{-5}}{\dot{M}})$ \\
$f$(para-NH$_3$) & $(3.2^{+0.7}_{-0.6})\times10^{-8} (\frac{3.25\times10^{-5}}{\dot{M}})$ \\
\hline
\hline
\end{tabular}
\end{table}
The fit obtained for a stellar luminosity $L_{\rm avg}$=8500\,L$_\odot$ and
an effective temperature of 2300 K is shown in both panels of Fig.\,\ref{figd}
by the black solid line. In our modelling, we did not include MgS, which is
commonly used for modelling the 30\,$\mu$m feature, since optical constants
for this material are not available at optical wavelengths, so a precise
determination of
its temperature is impossible \citep[see e.g.][]{Szczerba:1997eu}. However,
since we are interested in the continuum emission at 10\,$\mu$m, this
approach seems to be justified.
Most of the ammonia lines were observed at phase $\phi=0.23$ (see Table\,1),
when the luminosity of the central star was close to its average value
$L_{\rm avg}$=8500\,L$_\odot$. However, three of the lines were observed at
$\phi \sim 0.13$ so, for the purpose of ammonia line modelling, we rescaled
their observed integrated areas down to $\phi$=0.23 using the predicted
variability from models obtained at maximum and at average luminosity.
Since we do not have ISO spectra taken at $\phi$=0, we cannot constrain the
dust properties at the maximum stellar luminosity. Therefore, we
decided to recompute our dust model by keeping all parameters (including the
inner radius of dust shell) constant, except for the stellar luminosity,
which was raised to $L_{\rm max}$. The model results thereby obtained are
shown by dashed lines in both panels of Fig.\,\ref{figd}.
After modelling the dusty envelope, we exported the dust thermal emission
coefficient, along with the total extinction and scattering coefficients, to
the code MOLEXCSE, as a function of wavelength and radial distance.
MOLEXCSE, which was described above in Sect.\,4.1, was then used to solve
the multilevel radiative transfer in lines and continuum. See Eq.\,(1) in
\cite{Szczerba:1997eu} for details concerning the exported physical
quantities.
\subsection{The best-fit models}
Using our code described in Sect\,4.1, the ammonia abundance profile given
in Sect.\,4.3, the model parameters specified in Table\,2, together with the
gas temperature distribution from \cite{Crosas1997} and, taking into account
vibrational pumping by infrared radiation, we computed a grid of separate
models for ortho-NH$_3$ and para-NH$_3$. The grid of models was computed
for formation radii $R_{\rm f}=2.5, 5, 10, 20, 40,$ and 80 R$\star$,
photodissociation radii $R_{\rm ph}$ ranging from $1.5\times10^{16}$\,cm to
$6\times10^{16}$\,cm in steps of $\Delta R_{\rm ph}=0.5\times10^{16}$\,cm,
and $f$(ortho-NH$_3$) and $f$(para-NH$_3$) ranging from $1.4\times10^{-8}$ to
$4.8\times10^{-8}$ in steps of $\Delta f$(NH$_3$)$\,=\,0.2\times10^{-8}$.
For each model we defined the figure-of-merit as \begin{equation} \chi^2 =
\sum_{i=1,N_{\rm lines}} \left(\frac{F_{i}^{\rm obs}-F_{i}^{\rm
th}}{\sigma_{i}}\right)^{2}, \end{equation} where $F_i^{\rm obs}$ (see
Table\,1) and $F_i^{\rm th}$ are the observed and theoretically predicted
integrated fluxes for the $i$-th line, respectively, $\sigma_i$ (see
Table\,1) is the estimated uncertainty in $F_i^{\rm obs}$, and N$_{\rm
lines}$ is 3 for ortho- and 6 for para-NH$_3$ (see Fig.\,2). Here, the
three lines observed in 2011 at phase $\phi$=0.13 were rescaled to phase
$\phi$=0.23, assuming that they vary co-sinusoidally between the maximum and
average luminosity of the central star. From the computed integrated line
fluxes for these stellar luminosities, we obtained the amplitude of their
variations as 4.0, 3.1, and 8.2 K\,km\,s$^{-1}$ for the transitions at
1763.524, 1763.601, and 1763.823 GHz, respectively. Since the value of the
cosine function describing $\phi$ decreases by about 0.6 between phase 0.13
and 0.23, we reduced the integrated fluxes given in Table\,1 by 0.6 times
the estimated amplitude of the line variations, i.e. by 2.4, 1.9, and 5.0
K\,km\,s$^{-1}$. The quality of each fit is measured by the reduced
$\chi^2$ parameter, which for two free parameters is defined here as
$\chi^2_{red} = \chi^2 / (N_{\rm lines}-2)$.
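Schematically, the grid search over the three free parameters reads as follows; \texttt{F\_model} is a placeholder standing in for a full MOLEXCSE run at the given parameters, and the grids match those listed above:
\begin{verbatim}
import numpy as np

def chi2(F_obs, sigma, F_th):
    """Figure of merit of Eq. (1)."""
    return np.sum(((np.asarray(F_obs) - np.asarray(F_th))
                   / np.asarray(sigma))**2)

R_F_GRID  = np.array([2.5, 5.0, 10.0, 20.0, 40.0, 80.0])  # R_star
R_PH_GRID = np.arange(1.5e16, 6.01e16, 0.5e16)            # cm
F0_GRID   = np.arange(1.4e-8, 4.81e-8, 0.2e-8)

def best_fit(F_obs, sigma, F_model, n_lines):
    """Exhaustive search; returns (reduced chi^2, parameters)."""
    best = (np.inf, None)
    for R_f in R_F_GRID:
        for R_ph in R_PH_GRID:
            for f0 in F0_GRID:
                c2 = chi2(F_obs, sigma, F_model(R_f, R_ph, f0))
                if c2 < best[0]:
                    best = (c2, (R_f, R_ph, f0))
    # reduced chi^2 as defined in the text (two free parameters)
    return best[0] / (n_lines - 2), best[1]
\end{verbatim}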
\begin{figure*}[]
\centering
\includegraphics[width=5.8cm]{fig5a.eps}
\includegraphics[width=5.8cm]{fig5b.eps}
\includegraphics[width=5.8cm]{fig5c.eps}
\caption{
Contours of $\chi^2$, indicating the sensitivity of the fit to the
model's free parameters. The left panel shows the dependence of $\chi^2$ on the
photodissociation radius $R_{\rm ph}$ and abundance of ortho-NH$_3$, given the
best fitting value for the formation radius, $R_{\rm f}$ = 2.5 $R_\star$.
The middle and the right
panels show the dependence of $\chi^2$ on the formation radius $R_{\rm
f}$ and the abundances of para- and ortho-NH$_3$, given a photodissociation
radius $R_{\rm ph} = 3\times10^{16}$\,cm. The contours correspond to 1, 2, and 3\,$\sigma$
confidence levels. The best fits are marked with a cross and the
corresponding $\chi^2_{red}$ are shown on the plots.
}
\label{figchisquare}
\end{figure*}
We found that for $R_{\rm ph}\geq$\,3$\times$10$^{16}$\,cm, the minimum
value of $\chi^2_{\rm red}$ is achieved for $R_{\rm
f}=2.5$\,R$\star=1.0\times10^{14}$ cm, which is the minimum formation radius
possible for the assumed inner radius of the envelope considered during
modelling. However, as the assumed $R_{\rm ph}$ decreases, the formation radius
that minimizes $\chi^2_{\rm red}$ increases (from $R_{\rm
f}=10$\,R$\star$ for $R_{\rm ph}=$\,2.5$\times$10$^{16}$\,cm to $R_{\rm
f}=40$\,R$\star$ for $R_{\rm ph}=$\,1.5$\times$10$^{16}$\,cm); at the same time, however,
the minimum $\chi^2_{\rm red}$ also increases. Hence, the value of the
photodissociation radius was found by searching for the minimum $\chi^2_{\rm
red}$ among the models for $R_{\rm f}=2.5$\,R$\star$.
The best fit to the ortho-NH$_3$ lines is achieved for $R_{\rm ph} >
2-3\times10^{16}$\,cm, and $f$(ortho-NH$_3)=(2.8\pm0.5)\times10^{-8}$ at a
3\,$\sigma$ confidence level. There is no strong constraint on the maximum
value that $R_{\rm ph}$ could have. To estimate the errors, we constructed
a $\chi^2$ map for ortho-NH$_3$, which is presented in the left panel of
Fig.\,5, and shows how $\chi^2$ varies as a function of the
photodissociation radius $R_{\rm ph}$ and $f$(ortho-NH$_3$) for the best-fit
formation radius $R_{\rm f}=1.0\times10^{14}$ cm. The contours correspond
to 1, 2, and 3\,$\sigma$ confidence limits. The best fit is indicated with the
cross and the corresponding value of $\chi^2_{\rm red}=1.2$ is shown on the
plot. Rotational transitions of para-NH$_3$ are even less sensitive to the
photodissociation radius, and a plot similar to that shown in the left panel of
Fig.\,5 does not yield any constraints on $R_{\rm ph}$. This behaviour could
be related to the fact that at a given distance from the star the pumping
mid-IR radiation is the same, while at low gas temperatures the excitation
of the 1$_0$(s)-0$_0$(a) transition of ortho-ammonia is much more efficient
than the excitation of the 2$_1$(s)-1$_1$(a) transition of para-ammonia (see
Fig.1). On the other hand, the lowest para-NH$_3$ transition,
$2_1$(s)-$1_1$(a), was observed with HPBW of 18.2\arcsec, corresponding to a
projected radius of 1.8$\times$10$^{16}$ cm at the assumed distance of 130
pc; thus, for values of R$_{\rm ph}$ that are larger than this projected
radius, this line is less sensitive to R$_{\rm ph}$ than the
1$_0$(s)-0$_0$(a) ortho-NH$_3$ transition, for which the HPBW is twice as
large.
The derived abundance of para-NH$_3$ for the best-fit model is
($3.2^{+0.7}_{-0.6})\times10^{-8}$, which means that, to within the error
bars, the ratio of ortho- to para-NH$_3$ is equal to 1, a ratio that is
characteristic of the formation of NH$_3$ at high temperatures
\citep{Umemoto:1999bs}. The average abundance for the two species is
(3.0$\pm0.6)\times 10^{-8}$. The error estimate for the para-NH$_3$
abundance given above is the 3\,$\sigma$ confidence interval obtained from
the $\chi^2$ map for para-NH$_3$, which is presented in the middle panel of
Fig.\,5 and shows how $\chi^2$ varies as a function of the formation radius
$R_{\rm f}$ and abundance of para-NH$_3$ at the best-fitting
photodissociation radius $R_{\rm ph}$ = $3\times10^{16}$\,cm. The meaning
of the contours is the same as in the left panel. The best fit, indicated with
the cross, and the corresponding $\chi^2_{\rm red}$ are also shown on the
plot. The minimum $\chi^2_{\rm red}$ of 0.5 implies an excellent fit to the
six para-NH$_3$ lines (with four degrees of freedom).
While the $\chi^2$ analysis described above made use of integrated line
fluxes instead of the detailed line profiles, the model successfully reproduces the
line shapes. The emission line profiles resulting from the best-fit
models are shown in Fig.\,2 by dashed lines, and the best-fit parameters
are listed in Table\,2. As indicated in Table 2, the abundances derived for
ortho- and para-NH$_3$ are inversely proportional to the assumed mass-loss rate;
this can be understood as a consequence of the facts that the NH$_3$ lines are
optically thin and that the derived ammonia abundance is only weakly dependent
on the exact choice of the gas temperature (see below), so the same total mass
of ammonia is required to explain the observations regardless of the mass-loss rate.
To check how strongly the derived best-fit parameters depend on the assumed
gas temperature inside the envelope, we performed some test computations
using the $T_{\rm gas}$ structure from \cite{deBeck2012}. For the best
fitting parameters found above and the $T_{\rm gas}$ structure of
\cite{deBeck2012}, we found that the line shapes are only slightly
different, but $\chi^2_{\rm red}$ increases to 3.8 for ortho- and to 4.6 for
para-NH$_3$. Searching for the model minimizing $\chi^2_{\rm red}$, we found
that the best fit is obtained for a formation radius $R_{\rm
f}$=$1\times10^{14}$\,cm, a photodissociation radius $R_{\rm ph}$ =
$3\times10^{16}$\,cm, and abundances of ortho-NH$_3$ and para-NH$_3$ equal to
($3.0^{+0.6}_{-0.5})\times10^{-8}$ and ($3.6^{+0.8}_{-0.7})\times10^{-8}$,
respectively. These values lie within the error bounds estimated
using the \cite{Crosas1997} temperature profile, implying that the abundances
and photodissociation radius derived from rotational transitions are only weakly dependent
on the choice of temperature profile.
Although we obtained the best-fit results based solely on rotational
transitions, our model also offers predictions for the other observed
transitions. Therefore to make a quality test of our best-fit models,
we examined whether they can reproduce both the inversion and mid-IR transitions of
ammonia.
While \cite{Gong2015} have obtained observations of the inversion transitions
at several different phases (in range 0.20-0.96), our models do not predict any dependence
on the integrated fluxes
in
the phase (i.e. luminosity) of the central star.
Calculations show that the expected variation of flux in the (1,1)
inversion line is up to 3 percent between
phases of maximum ($\phi$=0) and mean ($\phi$=0.25), and is even lower in higher inversion
transitions.
Thus, our model computed for the average luminosity of the star
can be used for comparison with
the inversion line observations. Figure\,\ref{figinv} shows the observed
transitions as solid lines \citep{Gong2015} and the computed theoretical
profiles from our best-fit models as blue dotted lines.
Vertical sticks mark the positions of the hyperfine components; the height of
the central stick is chosen arbitrarily, while the heights of the other sticks
are scaled according to their relative theoretical intensities.
Our best-fit model for the rotational transitions fits
the upper inversion transitions very well, but overestimates the flux
measured for the para-NH$_3$ (2,2) line, and significantly overestimates
that for the lowest para-NH$_3$ (1,1) line. We have found that
decreasing $R_{\mathrm{ph}}$ to 1.5$\times10^{16}$\,cm gives a very good fit
to all the observed inversion lines (shown by the red dashed lines in
Fig.\,\ref{figinv}). However, this model is not able to match
the rotational lines as successfully as our best-fit model.
Using the best-fit abundance of ortho-NH$_{3}$ and decreasing the
photodissociation radius to 1.5$\times10^{16}$\,cm
significantly increases $\chi^2_{\rm red}$, to a value of 28.
On the other hand, the best-fit model with fixed R$_{\rm
ph}$=1.5$\times10^{16}$\,cm and free formation radius and abundance of
ortho-NH$_{3}$ moves R$_f$ to about 40 R$_\star$ and
f(ortho-NH$_3$) to 4.2$\times$10$^{-8}$ with $\chi^2_{\rm red}$ equal to
3.8.
To investigate this discrepancy, we extended the search for the best-fit model by adding the inversion lines. For this purpose, we took the
integrated fluxes from \cite{Gong2015} and converted them to the K\,km\,s$^{-1}$ scale.
However, our theoretical fit to the inversion lines seems to suggest the
presence of hyperfine components, as we describe below in Section 5.4 of
discussion, at least in the case of the two lowest inversion lines, (1,1)
and (2,2). Therefore, we repeated the calculation of the integrated
flux of these two lines, extending the integration range accordingly. This
increased the integrated flux of those two transitions by up to about 20\%\ in the case of the (1,1) line. The integrated fluxes used for the analysis
of the inversion lines are $0.47\pm0.05$ and $0.15\pm0.03$ K\,km\,s$^{-1}$
for the ortho-NH$_3$ (3,3) and (6,6) transitions, respectively, and
$0.84\pm0.08$, $0.64\pm0.09$, and $0.20\pm0.04$ K\,km\,s$^{-1}$ for the
para-NH$_3$ (1,1), (2,2), and (4,4) transitions, respectively.
Considering the inversion and rotational lines of para-NH$_3$ simultaneously
puts a strong constraint on the photodissociation radius. With the formation
radius fixed at $R_{\rm f}=1.0\times10^{14}$\,cm, we found that the
photodissociation radius is R$_{\rm ph}=(1.5\pm0.2)\times10^{16}$\,cm and
the abundance of para-NH$_{3}$ is f(para-NH$_3$) = $(3.2\pm0.5)\times$10$^{-8}$,
with $\chi^2_{\rm red}$ equal to 0.86. However, the same approach for
ortho-NH$_3$ gives a rather different estimate of the photodissociation
radius, equal to $(5_{-3}^{+\gtrsim1})\times10^{16}$\,cm, and an
abundance of ortho-NH$_{3}$ equal to $(2.4\pm0.4)\times$10$^{-8}$, with
$\chi^2_{\rm red}$ equal to 1.9. Thus, we see that a global fit gives ortho-
and para-NH$_3$ abundances that are quite similar (within error bars) to those
obtained from the analysis limited to the rotational lines only. However,
the problem of requiring different photodissociation radii for ortho- and
para-NH$_3$ remains.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig6.eps}
\caption{
Profiles of the inversion lines as computed for the best-fit models, which
assume $R_{\rm ph} = 3\times10^{16}$ cm and the $T_{\rm gas}$ distribution from
\cite{Crosas1997} (blue dotted lines), are overplotted on the observed inversion
transitions (solid lines) from \cite{Gong2015}.
Theoretical profiles for the same $T_{\rm gas}$ profile, but
with the photodissociation radius reduced to $R_{\rm ph} = 1.5\times10^{16}$\,cm
are shown with red dashed lines. Vertical sticks show positions and relative
intensities of hyperfine components.
}
\label{figinv}
\end{figure}
The infrared transitions to the $\nu_2$ levels are seen in absorption
against the background dust continuum emission. The observed depths of
lines depend on the telescope beam size. In addition, the line spectrum is
smoothed by the instrumental profile. The observed $\nu_2$ line profiles
published by \cite{Keady1993} and synthesized using our best-fit models are
shown in Figure~\ref{figir}. There is reasonable consistency between our
model results (dashed lines) and the observed profiles (solid lines). Some
disagreement with the aQ(6,6) line is most probably due to the neglect of
the inner velocity structure of the envelope or to a higher value of the
formation radius.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig7.eps}
\caption{
Profiles of NH$_3$ lines in the $\nu_2$ band observed by \cite{Keady1993}
(solid lines) and synthesized profiles using data from the best-fit models
(dashed lines). Theoretical lines have been convolved with a Gaussian profile
to achieve a spectral resolution of 2.9\,km\,s$^{-1}$.
}
\label{figir}
\end{figure}
\section{Discussion}
\subsection{Abundance of ammonia}
Over the past 30 years, the determination of the NH$_3$ abundance in the
envelope of IRC$+$10216 has been the subject of observational studies at
radio, mid-infrared, and submillimeter wavelengths. All these observations
appeared to provide divergent abundances but, as we discuss below, the main
reason for the different results comes from the inclusion (or not) of
radiative pumping to the vibrational levels and differences in column
densities.
Prior to this study, the only observation of an NH$_3$ rotational transition
was made with the spectrograph on board the
{\it Odin} satellite
\citep{Hasegawa2006}. The rather poor signal-to-noise ratio obtained for the lowest
$1_0$(s)-0$_0$(a) transition of ortho-NH$_3$ did not provide a
precise measurement of the line profile. The ammonia abundance relative to H$_2$
was determined to be $1\times 10^{-6}$, significantly above our
determination, $f$(ortho-NH$_3$)=$2.8\times10^{-8}$. However, the analysis
of the line was based on a simplified model of a single $K=0$ ladder, limited
to the ground vibrational state and, more importantly, neglecting
radiative pumping by the infrared radiation. For the
adopted model of the IRC$+$10216 envelope, results from our code show that, if
IR pumping is neglected,
the abundance of ammonia has to be increased by a factor of about 20 to
explain the measured 1$_0$(s)-0$_0$(a) line flux. In addition, the
line profile then becomes parabolic, which is characteristic of unresolved
optically thick emission, in disagreement with the profiles observed by {\it
Herschel}. Thus, we can conclude that the largest discrepancy in the
determination of the ammonia abundance seems to be resolved.
The effect of infrared pumping on the molecular abundances derived for
evolved stars was previously noted
by \cite{Agundez2006}, who showed that including the
$\nu_2$ mode of H$_2$O decreases the derived water abundance by a factor of $\ge 10$
with respect to that obtained assuming pure rotational excitation. For the specific case
of NH$_3$, \cite{Schoier2011} have shown that taking IR
pumping via vibrationally excited states into account decreases the derived
ammonia abundance by an order of magnitude. In addition, \cite{Danilovich2014} have derived
the abundance of ammonia in the S-type star W Aql, including the $\nu_2$
state in their analysis of the radiative transfer.
The early observations of the (1,1) and (2,2) inversion transitions of
para-NH$_3$ around 23\,GHz were made by \cite{Kwok1981, Bell1982}, and
\cite{Nguyen-Q-Rieu1984}. For their assumed distance of 200\,pc and mass-loss
rate of $2\times10^{-5}$ M$_{\odot}$\,yr$^{-1}$, they found
$f$(para-NH$_3$) to be between $3\times10^{-8}$ and $2\times10^{-8}$. At
the distance of 130\,pc adopted here, and for optically
thin transitions, the corresponding values of $f$(para-NH$_{3}$) are
$0.8\times10^{-8}$ and $0.5\times10^{-8}$, respectively, for the assumed
mass-loss rate of $3.25\times10^{-5}$ M$_{\odot}$\,yr$^{-1}$. These estimates
are a factor of 4
to 6 lower than our determination of the para-NH$_{3}$ abundance, namely
$(3.2^{+0.7}_{-0.6})\times10^{-8}$. This difference may be
due to simplifying assumptions made in the derivation of the ammonia abundance in
these earlier papers. Recently, \cite{Gong2015} have inferred the abundances of
ortho- and para-NH$_{3}$ from observations of their metastable inversion
transitions $(J, K) = (1,1), (2,2), (3,3), (4,4), (6,6)$. They lie between
$7.1\times10^{-8}$ and $1.4\times10^{-7}$ for ortho-NH$_3$, and
between $1.1\times10^{-7}$ and $1.3\times10^{-7}$ for para-NH$_3$. Here, the
assumed mass-loss rate was $2\times10^{-5}$\,M$_{\odot}$\,yr$^{-1}$ so, for our
adopted mass-loss rate, the abundances correspond to values from $4.4\times 10^{-8}$ to
$8.6\times 10^{-8}$ for ortho-NH$_3$ and from $6.8\times 10^{-8}$ to $8.0\times
10^{-8}$ for para-NH$_3$. Moreover, these values depend on the
column density of molecular hydrogen, which is uncertain by a factor of 2
\citep{Gong2015}.
Observations of the mid-IR $\nu_2$ transitions in absorption performed by
\cite{Betz1979, Keady1993} have yielded another estimate of the total abundance of
ammonia. For an assumed mass-loss rate of $2\times10^{-5}$ M$_{\odot}$\,yr$^{-1}$, these
two studies inferred total ammonia abundances of $1\times10^{-7}$ and
$1.7\times10^{-7}$, respectively. Rescaled to our adopted
mass-loss rate, these values translate to $6\times10^{-8}$ and $1\times10^{-7}$,
in agreement with the total ammonia abundance derived by us,
$f$(NH$_3)=(6.0\pm0.6)\times10^{-8}$.
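The simple mass-loss-rate rescaling used above can be reproduced with a few
lines of code. The following Python sketch is illustrative only (it is not
part of our modelling pipeline) and assumes a purely linear
$f\propto 1/\dot{M}$ scaling, valid for optically thin lines:

\begin{verbatim}
# Illustrative sketch: rescale literature NH3 abundances to the mass-loss
# rate adopted in this work, assuming f scales as 1/Mdot (optically thin).
MDOT_ADOPTED = 3.25e-5   # M_sun/yr, adopted here

literature = {
    # label: (abundance, mass-loss rate assumed in the original study)
    "Gong15 ortho, low":  (7.1e-8, 2.0e-5),
    "Gong15 ortho, high": (1.4e-7, 2.0e-5),
    "Betz79 total":       (1.0e-7, 2.0e-5),
    "Keady93 total":      (1.7e-7, 2.0e-5),
}

for label, (f_orig, mdot_orig) in literature.items():
    f_new = f_orig * mdot_orig / MDOT_ADOPTED
    print(f"{label:20s} f = {f_orig:.1e} -> {f_new:.1e}")
\end{verbatim}

Running the sketch recovers the rescaled values quoted above, for example
$7.1\times10^{-8}\rightarrow4.4\times10^{-8}$ and
$1.7\times10^{-7}\rightarrow1.0\times10^{-7}$.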
Finally, we note that the remaining differences, at the level of a factor of
a few, between the NH$_3$ abundances determined by various methods may not
be so significant. The determination of the NH$_3$ abundance using mid-IR
absorption $\nu_2$ transitions requires an assumption about the column density
of ammonia. The values range from about $2\times10^{15}$ cm$^{-2}$
\citep{Betz1979, Keady1993} to $8\times10^{15}$ cm$^{-2}$
\citep{Monnier2000b}. This may be compared to the total column density
along the line of sight to the central star in our best-fit model,
$1.5\times10^{16}$ cm$^{-2}$. The different observations average the column
density over different parts of the envelope and therefore probe different
regions of the CSE. We note that the column density is very sensitive to the
densest inner parts of the envelope. For example, for our model with
constant velocity and with $R_{\rm f} = 10 R_{\star}$ the total column
density drops to only $5\times10^{15}$ cm$^{-2}$. The assumed velocity
profile in the acceleration zone also plays a role. For example,
\cite{Monnier2000b} arrived at a much higher column density of
$8\times10^{15}$ cm$^{-2}$ solely by modifying the behaviour of the velocity
field in the inner part of the envelope, while using a model of the dusty
envelope similar to the original one of \cite{Keady1993}.
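As a rough consistency check on these numbers, the column density of a
smooth, constant-velocity $1/r^2$ wind can be estimated analytically as
$N = f\,\dot{M}\,(1/R_{\rm f}-1/R_{\rm ph})/(4\pi m_{\rm H_2} v)$. The Python
sketch below is illustrative only: it assumes a constant expansion velocity
of 14.5\,km\,s$^{-1}$ (a typical value for IRC$+$10216, adopted here for the
estimate rather than taken from our fits) and a constant fractional
abundance, and it takes $R_\star\simeq4\times10^{13}$ cm from
$R_{\rm f}=2.5\,R_\star=1.0\times10^{14}$ cm:

\begin{verbatim}
# Rough estimate of N(NH3) toward the star for a constant-velocity wind:
#   N = f * Mdot / (4 pi m_H2 v) * (1/R_f - 1/R_ph).
# v_exp = 14.5 km/s is an assumed typical value, not a fit result.
import math

MSUN, YR = 1.989e33, 3.156e7           # g, s
mdot  = 3.25e-5 * MSUN / YR            # g/s, adopted mass-loss rate
m_h2  = 3.35e-24                       # g, mass of H2
v     = 14.5e5                         # cm/s (assumed)
f_nh3 = 6.0e-8                         # total NH3 abundance from our fit

def column(r_f, r_ph=3.0e16):
    n0 = mdot / (4.0 * math.pi * m_h2 * v)
    return f_nh3 * n0 * (1.0 / r_f - 1.0 / r_ph)

print(f"R_f = 2.5 R_star: N ~ {column(1.0e14):.1e} cm^-2")
print(f"R_f = 10  R_star: N ~ {column(4.0e14):.1e} cm^-2")
\end{verbatim}

The estimate gives $\sim2\times10^{16}$ cm$^{-2}$ for $R_{\rm f}=2.5\,R_\star$,
of the same order as the best-fit value, and $\sim5\times10^{15}$ cm$^{-2}$
for $R_{\rm f}=10\,R_\star$, illustrating the strong sensitivity to the
inner boundary.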
\subsection{Ammonia formation radius}
The best-fit models for ortho- and para-NH$_3$, marked by $+$ signs in the
middle and right panels of Fig.\,5, respectively, are obtained for
$R_{\rm f}=1.0\times10^{14}$ cm or 2.5~$R_{\star}$. This means that a lower
limit on the ammonia formation radius is not provided by our model fits. On
the other hand, the distribution of the 1, 2, and 3\,$\sigma$ contour
levels in these panels shows that ammonia cannot be formed too far from the
stellar photosphere. At the 1$\sigma$ level, the upper limit on the
formation radius must be at most 20\,R$_{\star}$, while at the 2$\sigma$
level it is around 35\,R$_{\star}$. This is in rough agreement
with the formation radius of ammonia as estimated by \cite{Keady1993} and
\cite{Monnier2000b} on the basis of their analysis of mid-IR lines. The
analysis of \cite{Keady1993} allows formation at distances closer than
10\,R$_{\star}$,
whereas \cite{Monnier2000b} conclude that ammonia is formed at a distance
$\geq20$\,R$_{\star}$. This last conclusion was based on interferometric
observations of the mid-IR bands in the continuum and at line centre and
further modelling of the ratio of visibility functions.
The rotational line profiles of NH$_3$ were fit assuming that the
velocity field is constant. However, we performed test computations with
the gas velocity field inside the envelope used by \cite{Crosas1997}. We
found that it produces a narrow emission feature in the centre of the
highest rotational and inversion transitions that is not seen in the
observed line profiles. To remove this feature, created by the acceleration
region, it was necessary to constrain the formation radius by shifting
$R_{\rm f}$ above $5-10$ R$_{\star}$, although the quality of the fit, as
measured by
$\chi^2_{\rm red}$, was worse in this case. Nevertheless, it seems that the
more realistic gas velocity field suggests that ammonia is not formed at the
stellar photosphere, as had been suggested by our formal fit to the
rotational transitions assuming a constant outflow velocity.
\subsection{Photodissociation radius}
The photodissociation radius has been constrained by observations of
the NH$_3$ (1,1) and (2,2) inversion lines \citep{Nguyen-Q-Rieu1984}. Their
analysis of the observed lines required a cut-off of the molecular abundance
beyond a radius of $(0.6-1)\times10^{17}$ cm at their assumed distance of
200\,pc. This corresponds to $(4.0-6.5)\times10^{16}$\,cm at the distance
assumed in this paper, d=130\,pc, and is in the range allowed by our
modelling, $R_{\mathrm ph}>2-3\times10^{16}$ cm. Models for the NH$_3$
distribution in the circumstellar envelope of IRC$+$10216 presented by
\cite{Decin:2010kl}, and shown by the dotted line in Fig.\,3, predict a
decrease in the ammonia abundance in the outer envelope that is more rapid
than is allowed by the observations. With this type of distribution, we cannot
fit the rotational lines of ammonia, especially the lowest transition of
ortho-NH$_3$, since its upper level is underpopulated at still quite high
gas temperatures (see Figs.\,1 and 3). Similarly, this is also the case for
the reduced photodissociation radius inferred from the fit to the
observations of inversion lines by \cite{Gong2015}.
\subsection{Line shapes}
Thanks to the high signal-to-noise ratio (S/N) obtained for the ortho-NH$_3$
$1_0$(s)-0$_0$(a) spectrum, the details of the line profile are
well-constrained (see bottom left
panel in Fig.\,2). Another line, the $3_2$(s)$-2_2$(a) line at 1763.823 GHz,
was also observed at a high S/N ratio and has an identical shape (see the middle
right panel in Fig.\,2). The energies of the upper levels of these two
transitions are quite different (29 and 127\,K, see Table\,\ref{TableObs}),
and the similarity of their shapes suggests that both of them are excited in
a rather similar region within the envelope. Surprisingly, their shapes show two
peaks resembling the profiles of resolved optically thin transitions.
However, this is not possible because, given the {\it Herschel} beam size at the
frequency of the $1_0$(s)-0$_0$(a) transition, it would require an outer
radius for the ammonia-emitting region to be in excess of $10^{17}$ cm, a value close to
the CO photodissociation radius. Very similar shapes of rotational lines of
SiO (J=2-1 through J=7-6 in their ground vibrational state) have been
observed in IRC$+$10216 with the IRAM 30-m telescope by \cite{Agundez2012} (see
their Fig.\,4), who suggested that the observed peaks may be formed
by additional excitation provided by shells with enhanced density observed
in the outer layers of the envelope of IRC$+$10216 \citep[see
e.g.][]{Cernicharo:2015uq}. The line shape can also indicate the presence of
spiral shells \citep{Decin:2015qy} in the inner envelope, and/or a hole in
the middle of the envelope in the ammonia distribution.
Another hypothesis that might explain the slightly double-peaked shape of the
NH$_3$ rotational transitions is additional excitation of the levels by
line overlap. In particular, this effect may couple ortho- and para-NH$_3$
levels. This hypothesis has been tested by simultaneous computations of
both the ortho and para species, including the effects of line overlap. We found,
however, that this process cannot explain the asymmetric double-peak line
profiles that are observed. On the other hand, it does reproduce the
resulting composite profile of the overlapping transitions (para-NH$_3$
$3_1$(s)$-2_1$(a) at 1763.601 GHz and ortho-NH$_3$ 3$_0$(s)-2$_0$(a) at
1763.524 GHz) shown in Figure~\ref{figb} by the long-dashed line.
The theoretical profiles for the inversion transitions show a complex
structure, due to the overlap of emission in the hyperfine components. In
particular, the profile of the para-NH$_3$ (1,1) line (see the lowest panel
in Fig.\,\ref{figinv}) shows at least the central and the two long-wavelength
components of the hyperfine multiplet. The two narrow peaks at
$-32$ and $-20$\,km\,s$^{-1}$, caused by the overlap of the blue edge of one
component and the red edge of the next one, further confirm the presence of at
least one of the short-wavelength components. Observations with higher
S/N would probably reveal the shortest-wavelength
component.
\section{Summary}
We present {\it Herschel}/HIFI observations obtained with high spectral
resolution for all nine rotational transitions up to the $J = 3$ levels for
ortho- and para-NH$_3$ in the envelope of the C-rich AGB star IRC$+$10216.
Using our numerical code MOLEXCSE, which solves the non-LTE radiative
transfer in molecular lines and dust continuum, we searched for the best-fit
parameters that explain all three ortho- and six para-NH$_3$ rotational lines.
Computations were done separately for ortho- and para-NH$_3$. Three free
parameters were constrained by the modelling effort: the ammonia formation
radius, its photodissociation radius, and its abundance. The best fits were
obtained when infrared pumping of NH$_3$ was included, using the gas
temperature structure from \cite{Crosas1997}, which is representative of
self-consistently computed $T_{\rm gas}$ structures. Test computations with
the gas temperature profile of \cite{deBeck2012} showed that, while the fit
obtained is slightly worse, the best-fit parameters are only weakly
sensitive to the assumed temperature structure.
We found that the best fit to the rotational lines is obtained if the
abundances of ortho- and para-NH$_{3}$ are equal to
($2.8\pm0.5$)$\times10^{-8}$ and ($3.2^{+0.7}_{-0.6})\times10^{-8}$,
respectively.
The fit including both rotational and inversion lines
gives ortho- and para-NH$_3$ abundances
that are quite similar (within the error bars) to those obtained from the analysis
limited to the rotational lines only
(f(ortho-NH$_3$) = $(2.4\pm0.4)\times$10$^{-8}$ and
f(para-NH$_3$) = $(3.2\pm0.5)\times$10$^{-8}$).
These values are compatible with an ortho/para ratio of one,
characteristic of the formation of ammonia at high temperatures. The
derived abundance of ortho-NH$_3$ resolves the long-standing problem of the
discrepancy between the abundance derived from the lowest submillimeter
rotational line and those from radio inversion or infrared absorption
transitions. The main process that brings the
abundances of ammonia derived from different spectral transitions into
agreement is the inclusion of IR radiative pumping of ammonia via the
$10\,\mu$m $\nu_2=1$ band. The average abundance of NH$_3$ derived from the
rotational
lines agrees to within a factor of a few with those from radio inversion or
infrared absorption transitions, and we argued that this difference may not
be significant, since both methods rely on an uncertain determination of the ammonia
column density.
In addition, since the MOLEXCSE code also offers predictions for other
NH$_3$ transitions in the radio and mid-IR range, we tested whether our
best-fit models were able to explain the observations of \cite{Gong2015} and
\cite{Keady1993}. In the case of the inversion transitions, we found that
reducing R$_{\rm ph}$ to 1.5$\times$10$^{16}$ cm gives a very good fit to
the lower para-NH$_3$ (1,1) and (2,2) inversion transitions, but with such a
small R$_{\rm ph}$ that the fit to the rotational lines of ortho-NH$_3$
becomes significantly worse. We admit, however, that our best-fit model for
the rotational transitions may overpredict the envelope size as a result of
inaccuracies in the modelling of the inner parts of the envelope. In the case
of the MIR absorption lines, we found that our best-fit model reproduces the
observed profiles from \cite{Keady1993} fairly well. Some disagreement with
the $aQ(6,6)$ line profile is most probably due to our neglect of the inner
velocity structure in the envelope.
The ammonia formation radius is not well-constrained in our approach, and the
best-fitting abundances of ortho- and para-NH$_3$ were obtained at the
minimum value of the formation radius considered in our analysis, 2.5\,R$_\star$.
The best fits were obtained assuming that the outflow velocity is
constant, so we performed test computations with a gas velocity
increasing inside the acceleration zone. We found that for $R_{\rm
f}=2.5$\,R$_\star$ this type of velocity field produces a narrow emission feature in
the centre of the highest inversion transitions that is not seen in the
observed line profiles. To remove this feature, created in the acceleration
region, it was necessary to shift $R_{\rm f}$ above $5-10$ R$_{\star}$,
still inside the 1$\sigma$ limits obtained from fitting the line strengths.
Nevertheless, the more realistic gas velocity
field suggests that ammonia is not formed at the stellar photosphere as had been
suggested by the formal fits to the rotational transitions
assuming a constant gas velocity field.
A lower limit on the ammonia photodissociation radius seems to be implied by
our fit to the rotational transitions, while the maximum value of $R_{\rm
ph}$ is unconstrained. Our value of $(2-3)\times 10^{16}$ cm seems to agree
quite well with other determinations. However, the best fit to both the
rotational and inversion transitions does not give fully consistent
results for ortho- (R$_{\rm ph}=(5_{-3}^{>+1})\times10^{16}$\,cm) and
para-NH$_3$ (R$_{\rm ph}=(1.5\pm0.2)\times10^{16}$\,cm). Furthermore, a
distribution of NH$_3$ with the fast decrease predicted by the model of
\cite{Decin:2010kl} does not yield a good fit to the lowest rotational
transitions of ortho-ammonia.
The two rotational lines that have the highest S/N (1$_0$(s)-0$_0$(a)
of ortho- at 572.498 GHz and $3_1$(s)$-2_1$(a) of para-NH$_3$ at 1763.601
GHz) show two asymmetric peaks in their profiles. These can be due to an
excess of radiation from almost concentric rings or spiral shells, as
suggested in the literature. We also investigated the possibility that they
result from the coupling of the ortho- and para-NH$_3$ levels by the effects
of line overlap. The computations showed that this process cannot explain
the observed peaks, but it does successfully account for the composite
profile of the para-NH$_3$ $3_1$(s)$-2_1$(a) line at 1763.601 GHz and the
ortho-NH$_3$ 3$_0$(s)-2$_0$(a) line at 1763.524 GHz.
\begin{acknowledgements}
MSc and RSz acknowledge support by the National
Science Center under grant N 203 581040.
JHe thanks the support of the NSFC research grant No. 11173056.
Funded by the Chinese Academy of Sciences President's International
Fellowship Initiative, Grant No. 2015VMA015.
We acknowledge support from the EU FP7-PEOPLE-2010-IRSES programme in the
framework of project POSTAGBinGALAXIES (Grant Agreement No.269193).
V.B., J.A. and P.P. acknowledge support by the Spanish MICINN, program CONSOLIDER
INGENIO 2010, grant 'ASTROMOL' (CSD2009-00038), and MINECO, grants
AYA2012-32032 and FIS2012-32096.
J. Cernicharo thanks Spanish MINECO for funding
under grants AYA2009-07304, AYA2012-32032, CSD2009-00038, and
ERC under ERC-2013-SyG, G.A. 610256 NANOCOSMOS.
L.~D. acknowledges funding by the European Research Council under the European
Community’s H2020 program/ERC grant agreement No. 646758 (AEROSOL).
K.~J. acknowledges funding from the Swedish National Space Board.
D.~A.~N. was supported by a grant issued by NASA/JPL.
HIFI has been designed and built by a
consortium of institutes and university departments from across Europe,
Canada and the United States under the leadership of SRON Netherlands
Institute for Space Research, Groningen, The Netherlands and with major
contributions from Germany, France and the US. Consortium members are:
Canada: CSA, U.Waterloo; France: CESR, LAB, LERMA, IRAM; Germany: KOSMA,
MPIfR, MPS; Ireland, NUI Maynooth; Italy: ASI, IFSI-INAF, Osservatorio
Astrofisico di Arcetri- INAF; Netherlands: SRON, TUD; Poland: CAMK, CBK;
Spain: Observatorio Astron\'omico Nacional (IGN), Centro de
Astrobiolog\'{\i}a (CSIC-INTA); Sweden: Chalmers University of Technology -
MC2, RSS \& GARD; Onsala Space Observatory; Swedish National Space Board,
Stockholm University - Stockholm Observatory; Switzerland: ETH Zurich, FHNW;
USA: Caltech, JPL, NHSC. HCSS / HSpot / HIPE is a joint development by the
Herschel Science Ground Segment Consortium, consisting of ESA, the NASA
Herschel Science Center, and the HIFI, PACS and SPIRE consortia.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Based on \cite{Perlings}, the following model describing a nonlinear elastic medium with internal inclusions, cavities, or microcracks was proposed in \cite{VMSS}:
\begin{eqnarray}
u_t+\frac{1}{\nu+2}\partial_x\,\left(\beta+\sigma\partial^2_x\right) \rho^{\nu+2} =0, \label{Perl_1A}\\
\rho_t+\rho^2\,u_x=0, \label{Perl_1B}
\end{eqnarray}
where $\beta>0,$ $\sigma\neq 0$, $\nu>-1$.
In paper \cite{VMSS} a family of traveling wave (TW) solutions of the system (\ref{Perl_1A})-(\ref{Perl_1B}) is investigated and conditions are formulated under which soliton-like TW solutions exist. Depending on the sign of the parameter $\sigma$, the soliton-like solutions describe waves of compression (when $\sigma>0$) or waves of rarefaction (when $\sigma<0$). In addition, the stability of the soliton-like solutions is investigated in \cite{VMSS}, based on a numerical study of the Evans function \cite{Evans3,Evans4,KaPromis}. Unfortunately, numerical studies do not allow one to trace the analytical relationship between stability and the values of the parameters. However, analytical results become accessible when the model under investigation admits a Hamiltonian description. Rigorous studies of the stability properties of TW solutions to various nonlinear models have been carried out in papers \cite{Benjamin,PW,BriDerks_97,BriDerks_02,AS,Zumburn_2006,pap12}, in which some general results are formulated concerning the properties of spectral operators that make it possible to estimate the number of unstable modes. In some cases, it is possible to completely exclude the presence of unstable modes by investigating the function of the spectral parameter put forward by Evans \cite{Evans3,Evans4} and bearing his name. In the general case, this function can only be computed numerically, but for our purposes it is sufficient to study its asymptotic properties, as well as its behavior at the origin, which can be done analytically. Following this approach, we succeed in obtaining restrictions on the parameters which give sufficient conditions for the spectral stability of soliton-like TW solutions.

The structure of this work is as follows. In Section 2 we pass from the system (\ref{Perl_1A})-(\ref{Perl_1B}) to an equivalent system which admits a convenient Hamiltonian representation, and state the conditions assuring the existence of soliton-like TW solutions. In Section 3 we concentrate on the analysis of spectral stability. We study the linearized system obtained by varying the soliton-like solution. Using an approach based on Sturm-like theorems, we first estimate the maximal number of unstable modes and then formulate conditions which guarantee their absence. In this section we use a technique based on a somewhat cumbersome multisymplectic representation of the Hamiltonian system, so, in order not to clutter the main text, we provide some technical details in Appendices A and B. Finally, in Section 4 we summarize the results obtained and discuss further research.
\section{Hamiltonian representation and soliton-like solutions}
Let us consider the following substitution
\begin{eqnarray}\label{nloctransf}
u=\left(\gamma-\kappa\partial^2_x\right) w, \qquad \eta=\frac{1}{\rho},
\end{eqnarray}
where $\gamma=\beta/(\nu+2)>0,$ $\kappa=-\sigma/(\nu+2)>0.$
Inserting (\ref{nloctransf}) into (\ref{Perl_1A})-(\ref{Perl_1B}), we get the equations
\begin{eqnarray*}
-\frac{1}{\eta^2} \left\{\eta_t-\left(\gamma-\kappa\partial^2_x\right) w_x \right\} =0, \label{UP1} \\
\left(\gamma-\kappa\partial^2_x\right) \left\{w_t+\partial_x\,\eta^{-(\nu+2)} \right\} =0. \label{UP2}
\end{eqnarray*}
Under the above assumptions the operator
$
\mathscr{P}=\gamma-\kappa\partial^2_x
$
is invertible, so we can rewrite the initial system in the following equivalent form
\begin{eqnarray}
w_t=-\partial_x\,\eta^{-(\nu+2)}, \label{MGM1} \\
\eta_t=\left(\gamma-\kappa\partial^2_x\right) w_x. \label{MGM2}
\end{eqnarray}
By direct verification, one can check that the system (\ref{MGM1})-(\ref{MGM2}) admits the Hamiltonian representation
\begin{equation}\label{Perl_Hamilt_1}
U_t=\partial_x \,\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \delta H=J\cdot \delta H,
\end{equation}
where $U=\left(w,\,\,\eta \right)^{tr},$
\[
H= \int_{-\infty}^{+\infty}\left\{\frac{1}{2}\left[\gamma\,w^2+\kappa w_x^2\right]- \int_{\eta_{\infty}}^\eta\,\left[p(\xi)-p(\eta_{\infty}) \right]d\,\xi\right\}\,d\,x,
\]
$0<\eta_{\infty}=\lim\limits_{|x| \to \infty }\eta(t,x),\,$ $p(\xi)=1/\xi^{\nu+2}.$
In the sequel we will be interested in a family of TW solutions $w=w_s(z),$ $\eta=\eta_s(z),$ where $z=x-s\,t,$ so we rewrite the system (\ref{Perl_Hamilt_1}) in the traveling wave coordinates $\bar t=t,\,\,\,\bar z=x-s\,t$:
\begin{equation}\label{Hamilt_TW}
U_{\bar t}=\partial_{\bar z} \,\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \delta ( H+s\,Q),
\end{equation}
where
\[
Q=\int_{-\infty}^{+\infty} w (\eta-\eta_{\infty})\,d\,z
\]
is the generalized momentum (we omit bars over the independent variables in what follows). Since in the new coordinates the TW solutions are stationary, they satisfy the system
\begin{equation}\label{Var_TW}
\partial_{z} \,\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)\,\delta ( H+s\,Q)|_{w_s (z),\, \eta_s (z)}=0.
\end{equation}
Now we are going to formulate the conditions which guarantee the existence of homoclinic solutions representing the solitary waves. The system (\ref{Var_TW}) can be rewritten as follows:
\begin{equation}\label{VarTW1}
\partial_z\left\{s \,w_s-\eta_s^{-(\nu+2)}+\eta_\infty^{-(\nu+2)} \right\}=0,
\end{equation}
\begin{equation}\label{VarTW2}
\partial_z\left\{\left(\gamma-\kappa\,\partial_{z}^2 \right) w_s+s\left(\eta_s-\eta_\infty \right) \right\}=0.
\end{equation}
Integrating these equations from $-\infty$ to $z$ and taking into account the asymptotics
\begin{equation}\label{asympt}
\lim\limits_{|z| \to \infty } \eta_s(z)=\eta_\infty, \quad \lim\limits_{|z| \to \infty } w_s(z)=0,
\end{equation}
we get the system
\begin{equation}\label{varsol1}
s \,w_s+\eta_{\infty}^{-(\nu+2)} - \eta_s^{-(\nu+2)} =0,
\end{equation}
\begin{equation}\label{varsol2}
\left(\gamma-\kappa\,\partial_{z}^2 \right) w_s+s\left(\eta_s-\eta_\infty \right)=0.
\end{equation}
Using Eq.~(\ref{varsol1}), we can eliminate the function $w_s$ from Eq.~(\ref{varsol2}). Next, introducing a new variable $\theta=\eta^\prime_s$ and using the integrating factor $\varphi=\eta_s^{-(\nu+3)}$, we can rewrite Eq. (\ref{varsol2}) in the form of a Hamiltonian system
\begin{equation}\label{HDS}\left\{ \begin{array}{l}
\frac{d}{d\,T}\eta_s=\theta\kappa\,(\nu+2)\,\varphi^2=\mathcal{H}_\theta, \\
\frac{d}{d\,T} \theta=\varphi\left\{\kappa(\nu+2)(\nu+3) \theta^2\,\eta_s^{-(\nu+4)}- \right.\\
\left. \hspace{25mm}-\left[s^2\left(\eta_s-\eta_\infty \right)+\gamma\left(\frac{1}{\eta_s^{\nu+2}}-\frac{1}{\eta_\infty^{\nu+2}} \right)\right] \right\}= -\mathcal{H}_{\eta_s}, \end{array} \right.
\end{equation}
where $\frac{d}{d\,T}=\kappa\,(\nu+2)\,\varphi^2\,\frac{d}{d\,z},$
$
\mathcal{H}=E_k(\eta_s,\,\theta)+V(\eta_s),
$
\[
E_k(\eta_s,\,\theta)=\frac{\kappa}{2}\,(\nu+2)\,\eta^{-2(\nu+3)}\theta^2
\]
is the kinetic energy, while
\[
V(\eta_s)=s^2\left[ \frac{\eta_\infty}{(\nu+2)\eta_s^{\nu+2}}-\frac{1}{(\nu+1)\eta_s^{\nu+1}}\right]+\gamma \left[\frac{1}{(\nu+2)(\eta_s\eta_\infty)^{\nu+2}}-\frac{1}{2\,(\nu+2)\eta_s^{2(\nu+2)}} \right]
\]
is the potential energy.
Using the well-known properties of two-dimensional Hamiltonian systems \cite{AndrKhajk}, we can perform an exhaustive qualitative analysis of the system (\ref{HDS}). It is evident that all stationary points of this system lie on the horizontal axis. The coordinate $\eta$ of a stationary point satisfies the equation
\begin{equation}\label{etastac}
s^2 \left(\eta_\infty-\eta\right)=\gamma\left(\frac{1}{\eta^{\nu+2}}-\frac{1}{\eta_\infty^{\nu+2}} \right).
\end{equation}
Eq. (\ref{etastac}) is fulfilled when $\eta=\eta_\infty,$ so $(\eta_\infty,\,0)$ is a stationary point. We are looking for soliton-like solutions satisfying the asymptotic conditions (\ref{asympt}) and thus corresponding to phase trajectories bi-asymptotic to the stationary point $(\eta_\infty,\,0)$, which must therefore be a saddle. This is the case if the eigenvalues of the Jacobi matrix
\begin{equation}\label{Jacmatr}
\mathscr{R}|_{\eta=\eta_\infty,\,\theta=0}=\left( \begin{array}{cc} 0 & \varphi(\eta_\infty)\,\kappa (\nu+2)\,\eta_\infty^{-(\nu+3)} \\ -\varphi(\eta_\infty) \left[s^2-\gamma\,(\nu+2)\,\eta_\infty^{-(\nu+3)}\right] & 0 \end{array} \right),
\end{equation}
are real numbers of different signs, or, in other words, if the following inequality holds:
\begin{equation}\label{rightineq}
s^2<\gamma\,(\nu+2)\,\eta_\infty^{-(\nu+3)}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[totalheight=2.5 in]{figure1.pdf}
\caption{Graphical solution of Eq.~(\ref{etastac}).}\label{Fig:1}
\end{figure}
The fulfillment of the condition (\ref{rightineq}) also implies the existence of a second solution of Eq.~(\ref{etastac}), located to the right of $\eta_\infty$; see Fig.~\ref{Fig:1}. The second solution, which we denote by $\eta_1$, satisfies the inequality
\[
s^2>\gamma\,(\nu+2)\,\eta_1^{-(\nu+3)}.
\]
Under the above condition, the eigenvalues of the Jacobi matrix $\mathscr{R}$ at the stationary point $(\eta_1,\,0)$ are purely imaginary, so it is a center.
An extra condition which, together with (\ref{rightineq}), guarantees the existence of the homoclinic loop is connected with general features of two-dimensional Hamiltonian systems. As is well known \cite{AndrKhajk}, the Hamiltonian function remains constant along the phase trajectories of a Hamiltonian system. The potential energy of the system (\ref{HDS}) has exactly two local extrema, namely, a local maximum at the point $\eta_\infty$ and a local minimum at the point $\eta_1.$ It is easy to check that $\lim\limits_{\eta \to +\infty } V(\eta)=0$ and that, depending on the values of the parameters, two distinct configurations occur. If $V(\eta_\infty)\,\geq\,0,$ then the level line passing through the point of local maximum extends to the right without bound, never intersecting the graph of the function $V(\eta)$ (see Fig. \ref{Fig:2}, left panel). In this case the stable and unstable saddle separatrices do not form a closed loop, and the region of the phase plane $(\eta,\,\theta)$ bounded by these separatrices is filled with periodic trajectories spreading up to infinity. If $V(\eta_\infty)\,<\,0,$ then the level line passing through the point of local maximum intersects the graph of the function $V(\eta)$ at a point $\eta_{*}$, $\eta_1<\eta_{*}<\infty$ (see Fig. \ref{Fig:3}).
\begin{figure}[h]
\centering
\subfigure{\includegraphics[totalheight=2 in]{figure2a.pdf}}\quad
\subfigure{\includegraphics[totalheight=2 in]{figure2b.pdf}}
\caption{Graph of potential energy $V(\eta)$, case $V(\eta_\infty)>0$ (left panel) and the corresponding phase portrait (right panel). All the trajectories shown represent the periodic solutions. }\label{Fig:2}
\end{figure}
\begin{figure}[h]
\centering
\subfigure{\includegraphics[totalheight=2 in]{figure3a.pdf}}\quad
\subfigure{\includegraphics[totalheight=2 in]{figure3b.pdf}}
\caption{Graph of potential energy $V(\eta)$, case $V(\eta_\infty)<0$ (left panel) and the corresponding phase portrait (right panel). Dashed line corresponds to the homoclinic loop; solid lines represent the periodic solutions. }\label{Fig:3}
\end{figure}
\noindent
The phase trajectory cannot have a coordinate $\eta$ greater than $\eta_*$, so at the point $(\eta_*,\,0)$ the outgoing trajectory of the saddle point $(\eta_\infty,\,0)$ is reflected from the horizontal axis. Since the Hamiltonian function is unchanged under the replacement of $\theta$ by $-\theta$, the direct and reflected trajectories are symmetric with respect to the horizontal axis and form a single homoclinic trajectory bi-asymptotic to the saddle. The condition $V(\eta_\infty)\,<\,0,$ together with the condition which guarantees that the stationary point $(\eta_\infty,\,0)$ is a saddle, forms the pair of inequalities
\begin{equation}\label{hclexist}
\frac{\beta (\nu+1)}{2(\nu+2) \eta_\infty^{\nu+3}}<s^2<\frac{\beta}{\eta_\infty^{(\nu+3)}}
\end{equation}
assuring the presence of soliton-like regimes in the set of TW solutions.
\begin{rmk}\label{existHCL}
If we introduce the parameter $\eta_{0,\,\infty}=s^{2/(\nu+3)}\,\eta_\infty$, then the velocity $s$ will be eliminated from (\ref{hclexist}) which acquires the following form
\begin{equation}\label{hclexist0}
\frac{\beta (\nu+1)}{2(\nu+2) }<\eta_{0,\,\infty}^{\nu+3}<{\beta}.
\end{equation}
In what follows we treat the parameter $\eta_{0,\,\infty}$ as independent of $s.$
\end{rmk}
To conclude, let us note that the conditions presented in (\ref{hclexist}) coincide with those obtained in paper \cite{VMSS} after the substitution $R_1=\eta_\infty^{-1}.$
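The existence conditions derived above are easy to check numerically for concrete parameter values. The Python sketch below (with arbitrary illustrative values of $\beta$, $\nu$, $\eta_\infty$, not values advocated in this paper) verifies the window (\ref{hclexist}) for $s^2$, locates the centre $\eta_1$ of Eq.~(\ref{etastac}) by bisection, and confirms that $V(\eta_\infty)<0$:

\begin{verbatim}
# Sketch: check the homoclinic-existence conditions for sample parameters.
# beta, nu, kappa, eta_inf are arbitrary illustrative choices.
import numpy as np
from scipy.optimize import brentq

beta, nu, kappa = 1.0, 0.0, 1.0
eta_inf = 0.9
gamma = beta / (nu + 2)

# admissible window for s^2 from (hclexist)
lo = beta * (nu + 1) / (2 * (nu + 2) * eta_inf**(nu + 3))
hi = beta / eta_inf**(nu + 3)
s2 = 0.5 * (lo + hi)                 # pick a speed inside the window
print(f"window for s^2: ({lo:.4f}, {hi:.4f}); chosen s^2 = {s2:.4f}")

# second root eta_1 > eta_inf of Eq. (etastac)
g = lambda e: (s2 * (eta_inf - e)
               - gamma * (e**-(nu + 2) - eta_inf**-(nu + 2)))
eta1 = brentq(g, eta_inf * 1.001, 50.0)
print(f"centre at eta_1 = {eta1:.4f}")

# potential energy from (HDS); V(eta_inf) < 0 is required for the loop
def V(e):
    return (s2 * (eta_inf / ((nu + 2) * e**(nu + 2))
                  - 1 / ((nu + 1) * e**(nu + 1)))
            + gamma * (1 / ((nu + 2) * (e * eta_inf)**(nu + 2))
                       - 1 / (2 * (nu + 2) * e**(2 * (nu + 2)))))

print(f"V(eta_inf) = {V(eta_inf):.4f}  (< 0 => homoclinic loop exists)")
\end{verbatim}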
\section{Spectral stability of the soliton-like solutions}
\subsection{Restrictions on the number of unstable modes}
In order to study the stability of solitary wave solutions, the following set of perturbations is considered:
\begin{equation}\label{perspec}
\left(\begin{array}{c} w(t,\,z) \\ \eta(t,\,z) \end{array} \right)=
\left(\begin{array}{c} w_s(z) \\ \eta_s(z) \end{array} \right)+
\varepsilon\,e^{\lambda\,t} \left(\begin{array}{c} M(z) \\ N(z) \end{array} \right).
\end{equation}
Inserting (\ref{perspec}) into (\ref{Hamilt_TW}), we get, up to $O(\varepsilon^2)$, the eigenvalue problem
\begin{equation}\label{eigenvalprobl}
\lambda\,\left(\begin{array}{c} M(z) \\ N(z) \end{array} \right)=J \, \mathcal{L}^s\,\left(\begin{array}{c} M(z) \\ N(z) \end{array} \right):=\mathbb{L}\,\left(\begin{array}{c} M(z) \\ N(z) \end{array} \right),
\end{equation}
where
\begin{equation}\label{Ls}
\mathcal{L}^s=\delta^2\,\left( H+s\,Q \right)|_{w_s,\,\eta_s}=
\left(\begin{array}{cc} \gamma-\kappa\,\partial_z^2 & s \\ s & (\nu+2)/{\eta^{\nu+3}_s(z)} \end{array} \right).
\end{equation}
We denote the spectrum of the operator $\mathbb{L}$ by $\sigma(\mathbb{L})$ and accept the following definition:
\begin{dfn}
The soliton-like solution $U_s(z)=\left(w_s(z),\,\eta_s(z)\right)^{tr}$ is said to be spectrally stable if the intersection of $\sigma(\mathbb{L})$ with the open right half-plane $\mathbb{C}^+$ of the complex plane is empty.
\end{dfn}
In this section the following statement will be proved:
\begin{thm}
The set $\sigma(\mathbb{L})\cap\,\mathbb{C}^+$ consists of at most one isolated point $\lambda_0.$ If $\sigma(\mathbb{L})\cap\,\mathbb{C}^+$ is nonempty, then $\lambda_0$ is a real positive number.
\end{thm}
The proof of this theorem is based on a number of auxiliary statements, some of which are quite general and applicable to a wide class of spectral problems. To begin with, let us localize the essential spectrum $\sigma_{ess}(\mathbb{L}).$ In the case under consideration it coincides with the spectrum of the limiting operator \cite{D_Henry,KaPromis}
\[
\mathbb{L}_\infty=\mathbb{L}_{\pm\infty}=\lim\limits_{|z| \to \infty }J \cdot \mathcal{L}^s=
J\cdot \left(\begin{array}{cc} \gamma-\kappa\,\partial_z^2 & s \\ s & \frac{\nu+2}{\eta_\infty^{\nu+3}} \end{array} \right).
\]
The spectrum of the operator $\mathbb{L}_\infty,$ which has constant coefficients, coincides with the set
\[
\sigma_{ess}(\mathbb{L})=\left\{\lambda\in \mathbb{C}: \det\left(\begin{array}{cc} - i\,\xi\,s-\lambda & - i\,\xi(\nu+2)/\eta_\infty^{\nu+3} \\ - i\,\xi (\gamma+\kappa\xi^2) & - i\,\xi\,s-\lambda \end{array} \right)=0, \,\,\xi\in\,\mathbb{R} \right\}.
\]
The set of possible values of the spectral parameter $\lambda$ is given by the formula
\[
\lambda=- i \xi s \pm i \sqrt{\xi^2 (\nu+2)\,(\gamma+\kappa \xi^2)/\eta_\infty^{\nu+3}}, \quad \xi\in\,\mathbb{R}.
\]
It coincides with the imaginary axis.
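A direct numerical scan confirms this; the short sketch below (arbitrary parameter values) evaluates both branches of the dispersion relation and checks that $Re\,\lambda=0$:

\begin{verbatim}
# Sketch: the dispersion relation of the constant-coefficient limiting
# operator yields purely imaginary lambda. Parameters are arbitrary.
import numpy as np

gamma, kappa, s, nu, eta_inf = 0.5, 1.0, 0.8, 0.0, 0.9
xi = np.linspace(-10.0, 10.0, 2001)
root = np.sqrt(xi**2 * (nu + 2) * (gamma + kappa * xi**2)
               / eta_inf**(nu + 3))
lam_p = -1j * xi * s + 1j * root
lam_m = -1j * xi * s - 1j * root
print("max |Re lambda| =",
      max(np.abs(lam_p.real).max(), np.abs(lam_m.real).max()))  # 0.0
\end{verbatim}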
Next, the following general statement is applied to our problem (cf.\ \cite{KaPromis}).
\begin{lem}
The point spectrum $\sigma_{pt}(\mathbb{L})$ is symmetric with respect to the coordinate axes; that is, if $\lambda \in \sigma_{pt}(\mathbb{L}),$ then $-\lambda$ and $\pm\lambda^{*}$ also belong to the point spectrum of the operator $\mathbb{L}.$
\end{lem}
{\bf Proof.}
Suppose that $\lambda\in\sigma_{pt}(\mathbb{L})$ and that $\psi\,\in L^2(\mathbb{R})$ is an eigenvector corresponding to $\lambda$. Then
\[
\left(\mathbb{L}\,\psi\right)^{*}=\mathbb{L} \psi^{*}=\lambda^{*} \psi^{*},
\]
hence $\lambda \in \sigma_{pt}(\mathbb{L})$ implies $\lambda^{*} \in \sigma_{pt}(\mathbb{L})$.
Next,
if $\mathbb{L}\,\psi=J\cdot \mathcal{L}^s\,\psi=\lambda\,\psi,$ then
\[
\left(\psi | J\cdot \mathcal{L}^s \psi \right)=
\left(\psi | \lambda\, \psi \right)=
\lambda\,\left(\psi | \psi \right)=
\left(\lambda^{*}\,\psi | \psi \right).
\]
On the other hand,
\[
\left(\psi | J\cdot \mathcal{L}^s \psi \right)=-\left(J\,\psi | \mathcal{L}^s \psi \right)=
-\left( \mathcal{L}^s \cdot J\,\psi | \psi \right),
\]
hence
$
\mathcal{L}^s \cdot J\,\psi=-\lambda^{*}\,\psi,
$
which implies the equality $\mathbb{L} \left(J \,\psi\right)=- \lambda^{*}\left(J\,\psi\right).$ Thus, $\lambda \in \sigma_{pt}(\mathbb{L})$ implies $-\lambda^{*} \in \sigma_{pt}(\mathbb{L})$ and similarly $\lambda^{*} \in \sigma_{pt}(\mathbb{L})$ implies $-\lambda \in \sigma_{pt}(\mathbb{L}).$
The next statement, borrowed from \cite{PW}, is the following.
\begin{thm}
Suppose that $J$ is a skew-symmetric operator while $\mathcal{L}^s$ is self-adjoint. Suppose in addition that $\mathcal{L}^s$ has exactly $k$ strictly negative eigenvalues, counting multiplicities and $k<\infty.$ Then $\mathbb{L}=J\cdot\,\mathcal{L}^s$ has at most $k$ eigenvalues in the right half-plane of the complex plane.
\end{thm}
So, all the auxiliary assertions needed have been formulated, and we can now concentrate on estimating the number of discrete eigenvalues of the operator $\mathcal{L}^s$ lying on the negative semiaxis $\mathbb{R}^-$. Thus, we consider the spectral problem $\mathcal{L}^s\,\left(M,\,N\right)^{tr}=\mu\,\left(M,\,N\right)^{tr},$ which can be presented as follows:
\begin{equation}\label{spectrLs}
\left\{ \begin{array}{l} (\gamma-\kappa \partial_z^2)\,M+s\,N=\mu\,M, \\ \\
s\,M+\frac{\nu+2}{\eta^{\nu+3}_s(z)}\,N=\mu\,N. \end{array} \right.
\end{equation}
Note that if we put $\mu=0$ in (\ref{spectrLs}) and make the replacement $M=w'_s$, $N=\eta'_s$, then we recover the $z$-derivative of the system (\ref{varsol1})-(\ref{varsol2}). Hence the following statement is true:
\begin{lem}\label{eigenzero}
$U_s^\prime=(w_s^\prime,\,\eta_s^\prime)^{tr}$ is the eigenvector of the operator $\mathcal{L}^s,$ corresponding to the eigenvalue $\mu=0.$
\end{lem}
Now, using the second equation of the system (\ref{spectrLs}), we can express the function $N$ as follows:
\begin{equation}\label{ansatzN}
N=s\,M\,\left(\mu-\frac{\nu+2}{\eta^{\nu+3}_s(z)} \right)^{-1}.
\end{equation}
Inserting (\ref{ansatzN}) into the first equation of the system (\ref{spectrLs}), we get the following generalized eigenvalue problem:
\begin{equation}\label{geneigen}
\kappa \frac{d^2}{d\,z^2} \,M=\left[\gamma-\mu+\frac{s^2}{\mu-\frac{\nu+2}{\eta^{\nu+3}_s(z)}} \right] \,M.
\end{equation}
From lemma \ref{eigenzero} we immediately obtain the following:
\begin{crl}
Function $M(z)=w^\prime_s(z)$ is the eigenvector of the generalized spectral problem (\ref{geneigen}) corresponding to the eigenvalue $\mu=0.$
\end{crl}
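Before proceeding, the algebra that leads from (\ref{spectrLs}) to (\ref{geneigen}) can be verified symbolically. In the sketch below, \texttt{c} stands for $(\nu+2)/\eta_s^{\nu+3}(z)$ and \texttt{Mpp} for $M^{\prime\prime}$; this is merely a consistency check, not part of any proof:

\begin{verbatim}
# Symbolic check: substituting N = s*M/(mu - c) into
# (gamma - kappa d^2/dz^2) M + s N = mu M must reproduce
# kappa M'' = [gamma - mu + s^2/(mu - c)] M.
import sympy as sp

M, Mpp, s, mu, c, gamma, kappa = sp.symbols('M Mpp s mu c gamma kappa')
N = s * M / (mu - c)                           # from the second equation
eq1 = gamma * M - kappa * Mpp + s * N - mu * M # first equation == 0
target = kappa * Mpp - (gamma - mu + s**2 / (mu - c)) * M
print(sp.simplify(eq1 + target))               # prints 0
\end{verbatim}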
Now, let us consider the Wronskian
\begin{equation}\label{WR}
W(z)=M_1^\prime(z)\,M_2(z)-M_2^\prime(z)\,M_1(z),
\end{equation}
on an interval $(a,\,b)\subseteq \mathbb{R}$ (finite or infinite), where $\left\{M_i\right\}_{i=1}^2$ are solutions of Eq.~(\ref{geneigen}) corresponding to the eigenvalues $\mu_i.$ Taking the derivative of (\ref{WR}) with respect to $z$ and then integrating the resulting expression, we get
\[W(\xi)\,|_{\xi=a}^{\xi=z}=\int_a^z W^\prime(\xi)\,d\,\xi,\]
which after some manipulation attains the form
\begin{equation}\label{intWR}
W(z)-W(a)=\frac{\mu_2-\mu_1}{\kappa}\int_a^z M_1(\xi)\,M_2(\xi) \Phi(\xi)\,d\,\xi,
\end{equation}
where
\[
\Phi(\xi)=1+
\frac{s^2\,\eta_s^{2(\nu+3)}(\xi)}{\left[(\nu+2)-\mu_1\,\eta_s^{\nu+3}(\xi) \right]\,\left[(\nu+2)-\mu_2\,\eta_s^{\nu+3}(\xi) \right]}.
\]
Let us note that $\Phi(\xi)>0$ when $\mu_i\,\leq\,0,\,\,i=1,\,2.$
\begin{lem}\label{Sturm_1}
Let us assume that $\mu_1<\mu_2\leq 0$ are the eigenvalues while $M_1(z),\,\,M_2(z)$ are the corresponding eigenfunctions of the generalized spectral problem (\ref{geneigen}), $c\,\in\,(a,\,b)$ and the following conditions hold:
\begin{itemize}
\item
$\lim\limits_{z \to a+0 } M_1(z)=\lim\limits_{z \to a+0 } M_2(z)=0$;
\item
$M_2|_{(a,c)}>0; \quad \exists\, \epsilon>0: \,\,M_1|_{(a,a+\epsilon)}>0.$
\end{itemize}
Then $M_1\,|_{(a,c)}>0$. If in addition the conditions
\begin{itemize}
\item
$M_2(c)=0, \quad M_2^\prime(c)<0$
\end{itemize}
are fulfilled, then $M_1(c)>0.$
\end{lem}
{\bf Proof.}
The proof of the first statement: suppose there exists $d\in (a,\,c)$ such that $M_1(d)=0$ and $M_1^\prime(d)<0$ (we assume that $d$ is the first point at which $M_1(z)$ intersects the horizontal axis). Then the function (\ref{intWR}) is growing and non-negative on $(a,\,d).$ On the other hand, it follows from (\ref{WR}) that, under the above assumption, $W(d)=M_1^\prime(d)\,M_2(d)<0.$ The resulting contradiction eliminates this possibility. Now let us address the second statement. It follows from (\ref{intWR}) that $W(c)>0.$ Using the additional assumptions, we conclude from (\ref{WR}) that $W(c)=-M_2^\prime(c)\,M_1(c).$ But this expression can be positive only if $M_1(c)>0.$
\begin{lem}\label{Sturm_2}
We use the same assumptions as in the first part of the lemma \ref{Sturm_1}. In addition, we assume that
\begin{itemize}
\item
$M_2(c)=0; \quad M_2^\prime(c)<0; \quad \exists\,\, e>c\,: M_2|_{(c,\,e)}<0;$
\item $\lim\limits_{z \to e-0 } M_1(z)=\lim\limits_{z \to e-0 } M_2(z)=0.$
\end{itemize}
Then $M_1\,|_{(c,e)}>0.$
\end{lem}
{\bf Proof.}
The lemma is proved by contradiction. Assume that $M_1(z)$ intersects the horizontal axis $OZ$ for the first time at some point $f\in (c,\,e).$ Then $M_1(f)=0,$ $M_1^\prime(f)<0,$ hence $W(f)=M_1^\prime(f) M_2(f)-M_2^\prime(f) M_1(f)>0.$ Suppose first, in addition, that $M_1(z)$ has no further intersections with the horizontal axis on the segment $(f,\,e).$ Then we get
\[
W(e)=W(f)+\int_f^e{M_1(\xi)\,M_2(\xi)\,\Phi(\xi)\,d\,\xi}>0.
\]
On the other hand, $M_1^\prime(e) M_2(e)-M_2^\prime(e) M_1(e)=0,$ so we get the contradiction.
Now let us assume that there exists $g\in(f,\,e)$ such that $M_1(g)=0$ and $M_1^\prime(g)>0.$ Then
\[
W(g)=W(f)+\int_f^g{M_1(\xi)\,M_2(\xi)\,\Phi(\xi)\,d\,\xi}>0.
\]
On the other hand, $W(g)=M_1^\prime(g) M_2(g)-M_2^\prime(g) M_1(g)<0.$ The contradiction obtained ends the proof.
\begin{lem}\label{Sturm_3}
Suppose that the spectral problem (\ref{geneigen}) has three discrete eigenvalues $\mu_0<\mu_1<\mu_2\leq 0,$ and the corresponding eigenfunctions $M_0(z),\,\,M_1(z),\,\,M_2(z)$ are defined on $(a,b). $ We assume in addition that
\begin{itemize}
\item
$\lim\limits_{z \to a+0 } M_i(z)=\lim\limits_{z \to b-0 } M_i(z)=0, \quad i=0,\,1,\,2;$
\item
there exists $c\in(a,\,b)$ such that $M_2(c)=0,$ $M_2^\prime(c)<0,$ and $M_2(z)$ does not have another points of intersection with the horizontal axis on the segment $(a,\,b).$
\end{itemize}
Then there is no eigenfunction $M_0(z),$ not identically equal to zero, corresponding to the eigenvalue $\mu_0.$
\end{lem}
{\bf Proof.}
Without loss of generality, we can assume that $M_1\,|_{(a,\,b)}>0$ and $M_2|_{(a,\,c)}>0$ (by virtue of lemmas \ref{Sturm_1} and \ref{Sturm_2}, $M_1(z)$ does not intersect the horizontal axis on $(a,\,b)$). Now assume that $M_0(z)$ is not identically zero on $(a,\,b).$ Then, in accordance with lemma \ref{Sturm_1}, $M_0(z)$ does not intersect the horizontal axis on this segment (we compare $M_0$ with the function $M_2$), and we can assume in addition that $M_0|_{(a,b)}>0.$
By virtue of the above assumptions, the function
\[
W(z)=\frac{\mu_1-\mu_0}{\kappa}\int_a^z M_0(\xi)\,M_1(\xi) \Phi(\xi)\,d\,\xi
\]
is growing and non-negative on the segment $(a,\,b),$ hence $W(b)>0.$ But on the other hand, $W(b)=M_0^\prime(b) M_1(b)-M_1^\prime(b) M_0(b)=0,$ so we get a contradiction.
\begin{crl}
The following assertions are true:
\begin{itemize}
\item
The eigenvalue problem (\ref{geneigen}) has at most one discrete eigenvalue $\mu<0,$ corresponding to the nonzero eigenfunction $M(z)$.
\item
If such an eigenvalue does exist, then it is simultaneously the discrete eigenvalue of the operator $\mathcal{L}^s,$ corresponding to the eigenfunction
\[
\left\{M(z),\,s\,M(z)\left(\mu-\frac{\nu+2}{\eta_s(z)^{\nu+3}}\right)^{-1}\right\}^{tr}.
\]
\item
The operator $\mathbb{L}=J\,\cdot\,\mathcal{L}^s$ has at most one discrete eigenvalue lying in $\mathbb{C}^{+}$.
\item
If such an eigenvalue does exist, then it belongs to $\mathbb{R}^{+}.$
\end{itemize}
\end{crl}
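The counting arguments above can also be probed numerically by discretizing the self-adjoint operator $\mathcal{L}^s$ on a finite grid and counting its negative eigenvalues. The sketch below uses a hypothetical sech$^2$-type stand-in for $\eta_s(z)$ (it is not an actual solution of (\ref{varsol1})-(\ref{varsol2})) and arbitrary parameter values; it only illustrates the counting procedure:

\begin{verbatim}
# Sketch: count negative eigenvalues of a finite-difference version of
# L^s. eta_s is a hypothetical stand-in profile, for illustration only.
import numpy as np

gamma, kappa, s, nu, eta_inf = 0.5, 1.0, 0.8, 0.0, 0.9
n, L = 400, 60.0
z = np.linspace(-L / 2, L / 2, n)
h = z[1] - z[0]
eta_s = eta_inf + 0.5 / np.cosh(z / 4.0)**2   # hypothetical profile

# block operator [[gamma - kappa d^2/dz^2, s], [s, (nu+2)/eta_s^(nu+3)]]
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2
A = gamma * np.eye(n) - kappa * D2
B = s * np.eye(n)
C = np.diag((nu + 2) / eta_s**(nu + 3))
Ls = np.block([[A, B], [B, C]])

evals = np.linalg.eigvalsh(Ls)
print("number of negative eigenvalues:", int((evals < 0).sum()))
print("smallest eigenvalues:", np.sort(evals)[:4])
\end{verbatim}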
\subsection{The Evans function and spectral stability}
\subsubsection{Introductory remarks}
We are going to formulate conditions excluding the existence of discrete eigenvalues of the operator $\mathbb{L}$ belonging to $\mathbb{C}^{+}.$ For this purpose, we use a technique based on some properties of the Evans function \cite{Evans3,Evans4,KaPromis}, an analytic function of the spectral parameter $\lambda\in\mathbb{C}^{+}$ which vanishes at those values of the parameter $\lambda$ that belong to the set $\sigma_{pt}(\mathbb{L}) \cap \mathbb{C}^+.$ Usually, $E(\lambda)$ is defined as a Wronskian constructed from the solutions of a dynamical system equivalent to the corresponding spectral problem. The Evans function is most often studied numerically, but some of its asymptotic properties (essentially used in this paper) can be analyzed analytically.
To begin with, let us recall that, by virtue of lemma \ref{eigenzero}, $\,0\,\in\,\sigma_{pt}( \mathbb{L} ).$ Next, we observe that the variational equation
\begin{equation}\label{vareq1}
\delta \left(H+s\,Q \right) \,|_{U_s}=0,
\end{equation}
where $U_s=(w_s(z),\,\eta_s(z))^{tr}$, is equivalent to the traveling wave ODEs (\ref{varsol1})-(\ref{varsol2}). Differentiating (\ref{vareq1}) with respect to $s,$ we get:
\[
\mathcal{L}^s\,\partial\,U_s/\partial\,s=-\left(\begin{array}{lc} 0 & 1 \\ 1 & 0 \end{array}\right) U_s.
\]
Multiplying both sides of this equality by $J$ from the left, we obtain:
\[
\mathbb{L}\left( \partial\,U_s/\partial\,s \right)\,=\,-{U_s}^\prime.
\]
From this we conclude that span$\left\{{U_s}^\prime,\,\partial\,U_s/\partial\,s \right\}$\,$\subset$\,gker($\mathbb{L}$), which, in turn, implies the equality $E^\prime(0)=0.$ If we can estimate sign\,$E^{\prime\prime}(0)$, as well as sign\,$E(+\infty)$ (the latter can be done by several standard methods), then the equality
\begin{equation}\label{sigmin}
sign{E(+\infty)} \,\cdot\,sign{E^{\prime\prime}(0)}=+1,
\end{equation}
implies that the number of intersections of the graph of the function $E(\lambda), \,\,\,\lambda\in\,\mathbb{R}^+,$ with the horizontal axis $Re(\lambda)$ is even. Since more than one such intersection would contradict the results obtained above (see the corollary at the end of the previous subsection), there are in this case no intersections at all. On the contrary, negativity of the product appearing in formula (\ref{sigmin}) indicates the existence of an unstable mode.
\subsubsection{The multi-symplectic representation}
In evaluating the sign of $E^{\prime\prime}(0)$ we follow the papers \cite{BriDerks_97,BriDerks_02}, from which most of the notation is borrowed. The main formula is based on the theory of multi-symplectic systems. We will not cover this rather cumbersome theory in full here, but only concentrate on those fragments that are necessary for deriving the basic formula. Thus, first of all, the Hamiltonian system must be written in the equivalent multi-symplectic form
\begin{equation}\label{multisymgen}
\hat M\,Z_t+\hat K\,Z_x=\nabla S(Z),
\end{equation}
where $Z\in R^{2n}$, $\hat M,\,\hat K$ are $2\,n\,\times\,2\,n$ skew-symmetric constant matrices, $S(Z)$ is a smooth function and $\nabla
$ is the gradient in $R^{2n}.$ The matrices $\hat M,\,\hat K$ generate in $R^{2n}\,\times\,\,R^{2n}$ two-forms
\[
\omega(\zeta_1,\,\zeta_2)=\left(\hat M \zeta_1,\,\zeta_2 \right), \quad
k(\zeta_1,\,\zeta_2)=\left(\hat K \zeta_1,\,\zeta_2 \right)
\]
and
\[
\Omega(\zeta_1,\,\zeta_2)=\left(\hat J_s \zeta_1,\,\zeta_2 \right),
\]
where $\hat{J}_s=\hat{K}-s\,\hat{M}.$ It is assumed that $\det \hat{J}_s \neq 0$ and hence the form $\Omega$ is not degenerate. In the multi-symplectic approach the function $\tilde Z(z; a,\,b,\,s)$ is considered, describing the shape of a multiparameter family of solitary waves and satisfying the dynamical system
\[
\hat J_s \tilde Z^\prime=\nabla V(\tilde Z),
\]
where $V(\cdot)$ is $S(\cdot)$ plus additional features arising from symmetry (we will give the precise definition of them when addressing the system (\ref{MGM1})-(\ref{MGM2})). The linearization $U(z)$ about the solitary wave solutions satisfies the dynamical system
\[
U^\prime(z)=A(z,\,\lambda,a,\,b,\,s) U(z), \quad U\in C^{2\,n},
\]
where $\lambda\in \,\mathbb{C}$ is the spectral parameter,
\[
A(z,\,\lambda,a,\,b,\,s)=\hat{J}_s^{-1}\left\{D^2 V\left( \tilde Z (z;\,a,\,b,\,s) \right)-\lambda\,\hat M \right\}.
\]
It can be shown that the shape function $\tilde Z(z;\,a,\,b,\,s)$ satisfies the variational equation
\[
\frac{\delta}{\delta\,\tilde Z} \left(H(\tilde Z)-s\,I(\tilde Z) \right)=0,
\]
where
\[
H(\tilde Z)=\frac{1}{2} \int_{-\infty}^{+\infty}\left[k(\tilde Z,\,\tilde Z^\prime)+2\,V(\tilde Z) \right]d\,z
\]
is the Hamiltonian function, while
\begin{equation}\label{GenIimpuls}
I(\tilde Z)=\frac{1}{2} \int_{-\infty}^{+\infty} \omega(\tilde Z,\,\tilde Z^\prime)\,d\,z
\end{equation}
is the generalized momentum. In this notation the sign of $E^{\prime\prime}(0)$ is expressed as follows:
\begin{equation}\label{sgnEgen}
sgn\, E^{\prime\prime}(0)=\chi_{00}^{-} \left[\frac{d\,I}{d\,s}-B(s)\right],
\end{equation}
where $\chi_{00}^{-}$ and $B(s)$ are expressed in terms of combinations of the vectors $Z_0^{-}(\,a,\,b,\,s)=\lim\limits_{z \to -\infty }\tilde Z(z;\,a,\,b,\,s)$ and $Z_0^{+}(\,a,\,b,\,s)=\lim\limits_{z \to +\infty }\tilde Z(z;\,a,\,b,\,s).$
The multi-symplectic formalism is described below with reference to the system under study.
\subsubsection{Multi-symplectic representation of the system (\ref{MGM1})-(\ref{MGM2}) and evaluation of the $sign\,E^{\prime\prime}(0)$}
In order to take advantage of the formalism proposed in
\cite{BriDerks_97,BriDerks_02}, we should write down the initial system in the multi-symplectic form. Introducing new functions
\begin{equation*}
q=\eta^{-(\nu+2)}, \quad \Phi_x=q^{-\frac{1}{\nu+2}}, \quad v=w_x, \quad r_x=w-C_0, \quad
p=-\Phi_t+\gamma\,w-\kappa\,v_x,
\end{equation*}
we can rewrite (\ref{MGM1})-(\ref{MGM2}) as the first-order system
\begin{equation}\label{multis1}
-\Phi_t-\kappa v_x=p-\gamma\,w,
\end{equation}
\begin{equation}\label{multis2}
\Phi_x=q^{\frac{-1}{\nu+2}},
\end{equation}
\begin{equation}\label{multis3}
w_x=v,
\end{equation}
\begin{equation}\label{multis4}
w_t+q_x=0,
\end{equation}
\begin{equation}\label{multis5}
p_x=0,
\end{equation}
\begin{equation}\label{multis6}
r_x=w-C_0.
\end{equation}
The multi-symplectic form of the system (\ref{multis1})-(\ref{multis6}) is then as follows
\begin{equation}\label{multisym}
\hat M\,Z_t+\hat K\,Z_x=\nabla S,
\end{equation}
where $Z=\left(w,\,q,\,v,\,\Phi,\,r,\,p\right)^{tr},$
\[
\hat M=\left( \begin{array}{cccccc}
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array} \right), \quad
\hat K=\left( \begin{array}{cccccc}
0 & 0 & -\kappa & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
\kappa & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 1 & 0 \\
\end{array} \right),
\]
\[
S=p(w-C_0)-\frac{\gamma}{2}w^2-\frac{1}{\alpha}\,q^\alpha+\frac{\kappa}{2} v^2, \quad \alpha=\frac{\nu+1}{\nu+2}.
\]
The next step is to use symmetry properties to construct {\it a manifold at infinity} $\mathcal{M}(a,\,b)$ \cite{BriDerks_97,BriDerks_02}. The system (\ref{multisym}) is evidently invariant with respect to the translation group $Z \rightarrow Z+ \epsilon (0,\,0,\,0,1,\,0,\,0)^{tr}$, $\epsilon\in \mathbb{R}$, having the generator $\hat X=\partial/\partial \Phi.$ To this symmetry corresponds a pair of functions \cite{BriDerks_97,BriDerks_02} $P=-w,$ $Q=-q$ with the properties
\[
\hat M\,\hat X (Z)=\nabla P(Z), \qquad \hat K\,\hat X (Z)=\nabla Q(Z).
\]
In addition, the symmetry of the initial system will be used, which allows one to extend the homoclinic solution to a two-parameter family of analogous solutions. A direct verification shows that the following assertion holds
\begin{lem}
The system (\ref{MGM1})-(\ref{MGM2}) is invariant with respect to the family of transformations:
\begin{equation}\label{extend_sol}
\bar t=e^\mu\,t, \quad \bar x=x, \quad \bar w=e^{-\frac{\nu+1}{\nu+3}\mu}w+A, \quad \bar \eta=e^{\frac{2}{\nu+3}\mu}\eta,
\end{equation}
where $\mu,\,A\,\in \mathbb{R}$ are arbitrary parameters.
\end{lem}
The above symmetry induces the following group of invariance of the system
(\ref{VarTW1})-(\ref{VarTW2}) describing the TW solutions:
\begin{equation}\label{TWeq_sym}
\tilde w(z)=e^{-\frac{\nu+1}{\nu+3}\mu}w_s(z)+A, \quad \tilde \eta(z)=e^{\frac{2}{\nu+3}\mu}\eta_s(z), \,\, \tilde \eta_\infty=e^{\frac{2}{\nu+3}\mu}\eta_\infty, \,\,\tilde z=z,\,\, \tilde s=e^{-\mu}\,s.
\end{equation}
Combining the translational symmetry of (\ref{multisym}) with the symmetry of the system (\ref{VarTW1})-(\ref{VarTW2}) which makes it possible to extend the set of homoclinic solutions to a multiparametric family, we can eventually construct a non-degenerate manifold $\mathcal{M}(a,\,b)$, which is necessary for analyzing formula (\ref{sgnEgen}) in our particular case. The vector-function $\tilde Z=\left(\tilde w,\,\tilde q,\, \tilde v,\, \tilde \Phi,\, \tilde r,\, \tilde p \right)^{tr}$ satisfies the following variational equation (cf with \cite{BriDerks_02}):
\begin{equation}\label{extendvar}
\left(\hat K-\tilde s\,\hat M\right)\,\tilde Z^\prime=
\nabla S(\tilde Z)-a\nabla P(\tilde Z)-b\nabla Q(\tilde Z),
\end{equation}
which, when written out componentwise, looks as follows
\begin{equation}\label{TWmulti1}
\tilde s\,\tilde \Phi_z-\kappa \tilde v_z=\tilde p+a-\gamma\,\tilde w,
\end{equation}
\begin{equation}\label{TWmulti2}
-\tilde \Phi_z=b-\tilde q^{\frac{-1}{\nu+2}},
\end{equation}
\begin{equation}\label{TWmulti3}
\kappa\,\tilde w_z=\kappa\, \tilde v,
\end{equation}
\begin{equation}\label{TWmulti4}
-\tilde s\, \tilde w_z+\tilde q_z=0,
\end{equation}
\begin{equation}\label{TWmulti5}
-\tilde p_z=0,
\end{equation}
\begin{equation}\label{TWmulti6}
\tilde r_z=\tilde w-C_0.
\end{equation}
Integrating Eq. (\ref{TWmulti2}) over the segment $(-\infty,\,z)$ and using the requirement that the function $\tilde \Phi(z)$ should be bounded, we obtain the condition
\[
e^\mu=\left(\frac{b}{\eta_\infty} \right)^{\frac{\nu+3}{2}}.
\]
Integrating (\ref{TWmulti2}) and taking into account the above formula, we get
\begin{equation}\label{t_Phi}
\tilde\Phi(z)=\frac{b}{\eta_\infty}\int_{-\infty}^z \left[\eta_s(\xi)-\eta_\infty \right]\,d\,\xi+C_1.
\end{equation}
The requirement of boundedness of the function $\tilde r(z)$ leads to the condition $C_0=A.$ Taking this into account, we get the expression
\begin{equation}\label{t_r}
\tilde r(z)=\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}}\int_{-\infty}^z w_s(\xi)\,d\,\xi+C_2.
\end{equation}
The remaining functions are expressed as follows:
\begin{equation}\label{t_wqp}
\left\{ \begin{array}{l} \tilde w=\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}}
w_{ s}(z)+a(1+\gamma^{-1}), \\
\tilde q= \left(\frac{b\,\eta_s(z)}{\eta_\infty} \right)^{-(\nu+2)}, \\
\tilde v=\left(\frac{\eta_\infty}{b} \right)^{\frac{\nu+1}{2}} w_s^\prime(z), \\
\tilde p=\gamma\,a,
\end{array}
\right.
\end{equation}
and, thus, the vector-valued functions $\tilde Z,$ $\tilde Z^\prime$ are represented in the form
\begin{equation}\label{tld_Z}
\begin{split}
\tilde Z=
\Bigl( \left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}}
w_{ s}(z)+a(1+\gamma^{-1}),\,\left(\frac{b\,\eta_s(z)}{\eta_\infty} \right)^{-(\nu+2)},\,\left(\frac{\eta_\infty}{b} \right)^{\frac{\nu+1}{2}} w_s^\prime(z),
\\
\qquad \qquad\qquad \qquad \theta(z)+C_1,\,\varphi(z)+C_2,\,\gamma \,a \Bigr)^{tr},
\end{split}
\end{equation}
\begin{equation}\label{tld_Zpr}
\begin{split}
\tilde Z^\prime=
\Bigl(\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}}
w_{ s}(z)^\prime,\,-(\nu+2) \frac{\eta_\infty^{\nu+2} \eta_s(z)^\prime}{b^{\nu+2}\eta_s^{\nu+3}},\,\left(\frac{\eta_\infty}{b} \right)^{\frac{\nu+1}{2}} w_s(z)^{\prime\prime},
\\
\qquad \qquad\qquad \qquad\frac{b}{\eta_\infty} \left[\eta_s(z)-\eta_\infty \right],\,\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}} w_s(z),\,0 \Bigr)^{tr},
\end{split}
\end{equation}
where
\begin{equation}\label{theta_varphi}
\theta(z)=\frac{b}{\eta_\infty}\int_{-\infty}^{z} \left[\eta(\xi)-\eta_\infty \right]\,d\,\xi, \quad
\varphi(z)=\left(\frac{b}{\eta_\infty}\right)^{-\frac{\nu+1}{2}}\int_{-\infty}^{z} w_s(\xi)\,d\,\xi.
\end{equation}
From the formula (\ref{tld_Z}) we calculate the vectors $Z^{\pm}_0$, which are as follows:
\begin{equation}\label{Z_0_minus}
Z_0^{-}=\lim\limits_{z \rightarrow -\infty} \tilde Z(z)=
\left(a (1+\gamma^{-1} ),\,b^{-(\nu+2)},\,0,\,C_1,\,C_2,\,\gamma\,a \right)^{tr},
\end{equation}
\begin{equation}\label{Z_0_plus}
Z_0^{+}=\lim\limits_{z \rightarrow +\infty} \tilde Z(z)=
\left(a (1+\gamma^{-1} ),\,b^{-(\nu+2)},\,0,\,\theta_\infty+C_1,\,\varphi_\infty+C_2,\,\gamma\,a \right)^{tr},
\end{equation}
where $\theta_\infty=\lim\limits_{z \rightarrow +\infty} \theta(z),$
$\varphi_\infty=\lim\limits_{z \rightarrow +\infty} \varphi(z).$
Now that almost all the necessary tools have been laid out, we can proceed to an analysis of the sign of the second derivative of the Evans function, which, according to \cite{BriDerks_02}, is expressed by the relation
\begin{equation}\label{secorderE}
E^{\prime\prime}(0)=\chi_{00}^{-} \left( \frac{d}{d\,s}\,I(\tilde Z) -\omega(Z_0^{+},\,\partial_s Z_0^{+}) \right).
\end{equation}
The coefficient $\chi_{00}^{-}$ is obtained from the condition for normalizing the eigenvectors of the matrix $A_\infty=\lim\limits_{z \rightarrow \infty}\,A(z; a,\,b,\,s)$ and the eigenvectors of the adjoint matrix $A_\infty^*$. The computations of this quantity are rather cumbersome, so they are moved to Appendix A, in which the following formula is derived:
\begin{equation}\label{chi_00}
\chi_{00}^{-}=\frac{\left(s\,\eta_\infty^{\nu+3}\right)^2}{2\,\tilde C^2\,\mathcal{D}^{3/2}\,\kappa\,(\nu+2)^2}, \quad \mathcal{D}= \frac{\beta-s^2\,\eta_\infty^{\nu+3}}{\kappa\,(\nu+2)}>0,
\end{equation}
where $\tilde C$ is a constant.
Thus, we proceed to calculate the remaining terms appearing in formula
(\ref{secorderE}). Since we are not interested in the whole extended family
(\ref{extend_sol}), but only in the special case of solutions of system
(\ref{VarTW1})-(\ref{VarTW2}) satisfying the asymptotic conditions (\ref{asympt}), we carry out calculations for $a=0$ and $b=\eta_\infty.$
The generalized impulse is calculated on the basis of formula (\ref{GenIimpuls}):
\begin{equation}\label{Perl_GenImpuls}
I(\tilde Z)|_{b=\eta_\infty}=\int_{-\infty}^{+\infty} w_s(z)\left[\eta_s(z)-\eta_\infty \right]d\,z.
\end{equation}
In order to calculate the derivative of the functional $I(\tilde Z)$ with respect to the variable $s$, we need to obtain the explicit dependence of $w_s(z)$ and $\eta_s(z)$ on the velocity. This can be done by eliminating the velocity from the system
(\ref{varsol1})-(\ref{varsol2}) using the scaling $w_s(z)=s^\alpha w_0(z),$ $\eta_s(z)=s^\delta \eta_0(z).$ Indeed, if we put $\alpha=(\nu+1)/(\nu+3),$ $\delta=-2/(\nu+3),$ then we obtain the system
\begin{equation}\label{varsol1_s_less}
\,w_0-\eta_0^{-(\nu+2)}+\eta_{0,\,\infty}^{-(\nu+2)} =0,
\end{equation}
\begin{equation}\label{varsol2_s_less}
\left(\gamma-\kappa\,\partial_{z}^2 \right) w_0+\eta_0-\eta_{0,\,\infty} =0,
\end{equation}
which does not contain the parameter $s.$ Thus we have:
\begin{equation}\label{derImpulse}
\frac{d}{d\,s} I(\tilde Z)|_{b=\eta_\infty}=s^{-4/(\nu+3)}\,\frac{\nu-1}{\nu+3}\,
\int_{-\infty}^{+\infty} w_0(z)\left[\eta_0(z)-\eta_{0,\,\infty} \right]d\,z.
\end{equation}
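For the reader's convenience, we unpack where the prefactor comes from (a short bookkeeping check; the two balance relations below simply encode that each equation of the system must lose its explicit $s$-dependence under the above scaling):
\[
1+\alpha=-(\nu+2)\,\delta, \qquad \alpha=1+\delta,
\]
whose unique solution is $\delta=-2/(\nu+3),$ $\alpha=(\nu+1)/(\nu+3).$ Moreover, since $\eta_\infty=s^{\delta}\,\eta_{0,\,\infty}$ (cf. the asymptotics stated in Theorem \ref{Mainth} below) and the variable $z$ is not rescaled, the integrand of (\ref{Perl_GenImpuls}) scales as $s^{\alpha+\delta}$, i.e.,
\[
I(\tilde Z)|_{b=\eta_\infty}=s^{\frac{\nu-1}{\nu+3}}\,
\int_{-\infty}^{+\infty} w_0(z)\left[\eta_0(z)-\eta_{0,\,\infty} \right]d\,z,
\]
and differentiation with respect to $s$ reproduces (\ref{derImpulse}), since $\alpha+\delta=(\nu-1)/(\nu+3)$ and $(\nu-1)/(\nu+3)-1=-4/(\nu+3).$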
Since the homoclinic loop representing the solitary wave solution lies to the right of the saddle point $(\eta_\infty,\,0)$, we have $\eta_s(z)-\eta_{\infty}>0,$ which implies the inequality $\eta_0(z)-\eta_{0,\,\infty}>0.$ The inequality $w_0(z)<0$, in turn, follows directly from Eq.~(\ref{varsol1_s_less}). So the integral in formula (\ref{derImpulse}) is negative, and the whole expression is positive if $\nu\in\,(-3,\,1).$
We must also calculate the expression $\omega(Z_0^{+},\,\partial_s Z_0^{+})$, which appears in formula (\ref{secorderE}). Taking the derivative of (\ref{Z_0_plus}) with respect to $s$, we get:
\[
\partial_s\,Z_0^{+}=\left(0,\,0,\,0,\,-\frac{2}{\nu+3}s^{-\frac{\nu+5}{\nu+3}}\,\int_{-\infty}^{+\infty} \left[\eta_0(z)-\eta_{0,\,\infty} \right]d\,z, \right.
\]
\[
\left. \frac{\nu+1}{\nu+3}\,s^{-\frac{2}{\nu+3}}\,\int_{-\infty}^{+\infty} w_0(z) \,d\,z,\, 0 \right)^{tr}.
\]
Thus,
\[
\omega(Z_0^{+},\,\partial_s Z_0^{+})=-\frac{2}{\nu+3}a\,\left(1+\gamma^{-1} \right)\,
s^{-\frac{\nu+5}{\nu+3}} \int_{-\infty}^{+\infty} \left[\eta_0(z)-\eta_{0,\,\infty} \right]d\,z,
\]
which is zero when $a=0.$ Combining (\ref{derImpulse}) with (\ref{hclexist0}) and taking into account that the homoclinic loop exists when $\nu>-1,$ we can formulate the following assertion:
\begin{thm}\label{Mainth}
The solitary wave solution of the system (\ref{MGM1})-(\ref{MGM2}) moving with velocity $s>0$ and having the asymptotics $\lim\limits_{|x|\rightarrow +\infty}w(t,\,x)=0,$ $\lim\limits_{|x|\rightarrow +\infty}\eta(t,\,x)=s^{-\frac{2}{\nu+3}}\,\eta_{0,\,\infty}\,>0$ is spectrally stable if $\nu\in(-1,\,1)$ and the inequality (\ref{hclexist0}) holds.
\end{thm}
Thus, we have established that under the above conditions the operator of the spectral problem (\ref{eigenvalprobl}) has no eigenvalues belonging to $\mathbb{C}^{+}.$ In connection with this, the question arises: can the result obtained be used to formulate stability conditions for the soliton-like TW solutions of the system (\ref{Perl_1A})-(\ref{Perl_1B})? As was mentioned before, the stability of soliton-like solutions supported by this system was investigated in \cite{VMSS} using numerical methods, which cannot deliver a qualitative result. In order to formulate the spectral problem, let us consider the system (\ref{Perl_1A})-(\ref{Perl_1B}) written in the TW coordinates $t,\,z=x-s\,t$:
\begin{eqnarray}
u_t=s\,u_z-\partial_z\,\left(\gamma-\kappa\partial^2_z\right) \rho^{\nu+2}, \label{Perl_1TWcoord}\\
\rho_t=s\,\rho_z-\rho^2\,u_z, \label{Perl_2TWcoord}
\end{eqnarray}
where we assume, as before, that $\gamma=\beta/(\nu+2)>0,$ $\kappa=-\sigma/(\nu+2)> 0$. In accordance with \cite{VMSS}, we denote the soliton solutions of the system (\ref{Perl_1TWcoord})-(\ref{Perl_2TWcoord}) by the symbols $u=u_s(z),\,\,\rho=R_s(z).$
Inserting the perturbations of the form
\[
u(t,\,z)=u_s(z)+\epsilon\,e^{\lambda\,t}\,\hat U(z), \quad
\rho(t,\,z)=R_s(z)+\epsilon\,e^{\lambda\,t}\,\hat \rho(z)
\]
into (\ref{Perl_1TWcoord})-(\ref{Perl_2TWcoord}) and dropping the terms of order $O(\epsilon^2)$, we get the following spectral problem:
\begin{equation}\label{Perl_spectr}
\left\{ \begin{array}{l} \lambda\,\hat U=\partial_z\left[ s\,\hat U-\left(\gamma-\kappa\partial_z^2\right)\,(\nu+2) R_s^{\nu+1}\,\hat\rho\right],
\\ \\
\lambda\,\hat \rho=s\,\partial_z \hat\rho-2\,R_s \,u_s^\prime \hat\rho-R_s^2\partial_z \hat U. \end{array} \right.
\end{equation}
Below we show that the following assertion holds:
\begin{thm}
The point spectra of the problems (\ref{Perl_spectr}) and (\ref{eigenvalprobl}) are identical.
\end{thm}
{\bf Proof.}
In analysing the connection between the two spectral problems, we will use the following easily verifiable identities:
\begin{equation}\label{identities} \begin{array}{l}
R_s(z)=\eta_s^{-1}(z), \qquad\qquad u_s(z)=\left(\gamma-\kappa\partial^2_z\right)\,w_s(z), \\ \\
\hat \rho(z)=-\frac{1}{\eta_s^{2}(z)}\,N(z), \qquad\qquad \hat U(z)=\left(\gamma-\kappa\partial^2_z\right) \,M(z). \end{array}
\end{equation}
Taking into account the invertibility of the operator $\left(\gamma-\kappa\partial^2_z\right) $ and using the relations (\ref{identities}), we can rewrite the first equation of the system (\ref{eigenvalprobl}) in the form
\[
\lambda \left(\gamma-\kappa\partial^2_z\right)^{-1} \hat U=
\left(\gamma-\kappa\partial^2_z\right)^{-1}\,\partial_z \left[s\,\hat U-\left(\gamma-\kappa\partial^2_z\right) \,(\nu+2) R_s^{\nu+1} \hat\rho \right],
\]
which is identical with the first equation of the system (\ref{Perl_spectr}). The second equation of the system (\ref{eigenvalprobl}) can be converted in the following way:
\[
\lambda N=\partial_z\left[\left(\gamma-\kappa\partial^2_z\right) M+s\,N \right]
\]
implies
\[
-\lambda \eta_s^2\,\hat\rho=\partial_z\left[\hat U- s\,\eta_s^2\,\hat\rho \right],
\]
or
\[
\lambda \hat\rho={R_s^2}\partial_z\left[s\,\frac{1}{R_s^2}\,\hat\rho-\hat U \right],
\]
or
\[
\lambda \hat\rho={R_s^2}\left[s\left(\frac{1}{R_s^2}\,\partial_z\hat\rho -\frac{2}{R_s^3}R_s^\prime\hat\rho\right)-\partial_z\hat U \right]=
s\,\partial_z\hat\rho-R_s^2\partial_z\hat U-2\,s\,\frac{R_s^\prime}{R_s}\hat\rho,
\]
which is identical with the second equation of the system (\ref{Perl_spectr}) by virtue of the identity $s\,R'_s=R^2_s\,u'_s.$
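For completeness, the identity used in the last step follows immediately from (\ref{identities}) together with $u_s^\prime=-s\,\eta_s^\prime$ (invoked again at the end of the proof):
\[
R_s^\prime=-\eta_s^{-2}\,\eta_s^\prime, \qquad
R_s^2\,u_s^\prime=-s\,\eta_s^{-2}\,\eta_s^\prime=s\,R_s^\prime.
\]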
Moving in the opposite direction, we can rewrite the equation
\[ \lambda\,\hat U=\partial_z\left[ s\,\hat U-\left(\gamma-\kappa\partial_z^2\right)\,(\nu+2) R_s^{\nu+1}\,\hat\rho\right],
\]
as
\[
\lambda\,\left(\gamma-\kappa\partial^2_z\right)\,M=
\partial_z\left(\gamma-\kappa\partial^2_z\right)\left[ s\,M+\frac{\nu+2}{\eta_s^{\nu+3}}\,N \right],
\]
which is equivalent to the first equation of the system (\ref{eigenvalprobl}).
The equation
\[
\lambda\,\hat \rho=s\,\partial_z \hat\rho-2\,R_s \,u_s^\prime \hat\rho-R_s^2\partial_z \hat U\]
is equivalent to
\[
-\lambda \frac{N}{\eta_s^2}=-s\,\partial_z \,\left( \frac{N}{\eta_s^2}\right)+2\,R_s\,u_s^\prime \left( \frac{N}{\eta_s^2} \right)- R_s^2\partial_z \left(\gamma-\kappa\partial^2_z\right)\,M,
\]
or
\[
\lambda \frac{N}{\eta_s^2}=s\,\partial_z \,\left( \frac{N}{\eta_s^2}\right)-2\,R_s\,u_s^\prime \left( \frac{N}{\eta_s^2} \right)+\frac{1}{\eta_s^2}\partial_z \left(\gamma-\kappa\partial^2_z\right)\,M,
\]
or
\[
\lambda \,{N}={\eta_s^2}\left\{s\,\left[\frac{1}{\eta_s^2}\partial_z N-2\,\frac{N}{\eta_s^3}\partial_z\eta_s \right]+2\,s\,N\frac{\eta_s^\prime}{\eta_s^3} +\frac{1}{\eta_s^2}\,\partial_z\left(\gamma-\kappa\partial^2_z\right)\,M \right\},
\]
which is equivalent to the second equation of the system (\ref{eigenvalprobl}).
To obtain the last equality, we took advantage of the identity $u_s^\prime=-s\,\eta^\prime_s.$
Since it was shown earlier in \cite{VMSS} that the essential spectrum of the operator appearing in formula (\ref{Perl_spectr}) coincides with the imaginary axis, on the basis of the results obtained above it is possible to formulate
\begin{crl}
Under the assumptions of Theorem \ref{Mainth}, the soliton-like TW solutions of the system (\ref{Perl_1A})-(\ref{Perl_1B}) are spectrally stable.
\end{crl}
\section{Discussion}
Thus, we have shown that fulfillment of the inequality $\nu\,\in\,(-1,\,1)$ assures that the soliton-like TW solutions to the system (\ref{Perl_1A})-(\ref{Perl_1B}) describing the wave of rarefaction are spectrally stable. The result obtained in this work is in agreement with that obtained numerically in \cite{VMSS}. In conclusion, we would like to note that the presence of higher derivatives in the first equation of the system (\ref{Perl_1A})-(\ref{Perl_1B}) is related to the spatial nonlocality in the lowest approximation. In this connection it is of interest to consider the problem in the next approximation and to trace how the inclusion of additional terms affects the dynamics and stability of soliton-like solutions.
\section*{ Appendix A}
In order to trace the behavior of the vectors $\tilde Z,\,\,\tilde
Z^\prime$ for large values of the arguments, we consider the linearization of the dynamical system
\begin{equation}\label{App_DS1}
\left\{\begin{array}{l} \frac{d}{d\,z}\eta_s=\dot \eta_s, \\ \\
\frac{d}{d\,z}\dot\eta_s=\frac{\eta_s^{\nu+3}}{\kappa(\nu+2)}\left[
\kappa(\nu+2)(\nu+3)\dot\eta_s^2/\eta_s^{\nu+4}-s^2(\eta_s-\eta_\infty)
+\gamma \frac{\eta_s^{\nu+2}-\eta_\infty^{\nu+2}}{\left(\eta_s\,\eta_\infty\right)^{\nu+2}}\right], \end{array} \right.
\end{equation}
which is equivalent to (\ref{HDS}). The linear part of the system (\ref{App_DS1}) in variables $x=\eta_s-\eta_\infty,\,\,\,y=\dot\eta_s$ will have the following form:
\begin{equation}\label{App_DS_lin}
\left(\begin{array}{c} x \\ y \end{array}\right)^\prime=
\left(\begin{array}{cc} 0 & 1 \\ \mathcal{D} & 0 \end{array}\right)\,\left(\begin{array}{c} x \\ y \end{array}\right),
\end{equation}
where $\mathcal{D}=
\frac{1}{\kappa (\nu+2)}\,\left[\beta-s^2\,\eta_\infty^{\nu+3}\right]>0.$ Thus, for $|z|\gg 1$ we obtain the asymptotics
\[
x=\eta_s-\eta_\infty \cong\left\{\begin{array}{c} \tilde C e^{\sqrt{\mathcal{D}}\,z}, \qquad z\ll -1, \\ \\ \tilde C e^{-\sqrt{\mathcal{D}}\,z}, \qquad z \gg 1. \end{array} \right.
\]
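The entry $\mathcal{D}$ of the matrix in (\ref{App_DS_lin}) can be verified in two lines: rewriting the last term of the bracket in (\ref{App_DS1}) as $\gamma\left(\eta_\infty^{-(\nu+2)}-\eta_s^{-(\nu+2)}\right)$ and differentiating the right-hand side at the fixed point $(\eta_\infty,\,0)$, where the term quadratic in $\dot\eta_s$ drops out, we get
\[
\frac{\eta_\infty^{\nu+3}}{\kappa(\nu+2)}\left[-s^2+\gamma\,(\nu+2)\,\eta_\infty^{-(\nu+3)}\right]=
\frac{\beta-s^2\,\eta_\infty^{\nu+3}}{\kappa\,(\nu+2)}=\mathcal{D},
\]
where we used $\beta=\gamma\,(\nu+2).$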
The constants multiplying the exponentials in the above asymptotics are the same because of the symmetry of the homoclinic trajectory with respect to the horizontal axis. Using Eq.~(\ref{varsol1}), we get the asymptotics for the other component of the homoclinic solution:
\[
w_s \cong -\frac{\nu+2}{s\,\eta_\infty^{\nu+3}}\left\{\begin{array}{c} \tilde C e^{\sqrt{\mathcal{D}}\,z}, \qquad z \ll-1, \\ \\ \tilde C e^{-\sqrt{\mathcal{D}}\,z}, \qquad z \gg 1. \end{array} \right.
\]
From this we get the asymptotics (cf.\ \cite{BriDerks_02}):
\[
\begin{split}
\Psi^{\pm}=\lim\limits_{z \rightarrow \pm \infty}e^{\pm \sqrt{\mathcal{D}} z} \tilde Z^\prime=\tilde C \sqrt{\mathcal{D}}
&\left[ \pm\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}} \frac{\nu+2}{s\,\eta_\infty^{\nu+3}};\, \pm \left(\frac{\eta_\infty}{b}\right)^{\nu+2} \frac{\nu+2}{\eta_\infty^{\nu+3}};\,
-\sqrt{\mathcal{D}}\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}} \frac{\nu+2}{s\,\eta_\infty^{\nu+3}}; \right. \\
&\left. \frac{b}{\eta_\infty\,\sqrt{\mathcal{D}}};\,
-\left(\frac{\eta_\infty}{b}\right)^{\frac{\nu+1}{2}} \frac{\nu+2}{s\,\eta_\infty^{\nu+3}\sqrt{\mathcal{D}}};\,0
\right].
\end{split}
\]
The coefficient $\chi_{00}^{-}$ is obtained from the normalization condition
\[
1=\left(\hat J_s\,\eta_1^{-},\,\Psi^{+}\right),
\]
where $\eta_1^{-}=\chi_{00}^{-}\Psi^{-},$ $\hat J_s=\hat K-s\,\hat M$ (see \cite{BriDerks_02}, section 3).
In the general case, the above formula is very cumbersome, but in the case we are interested in, i.e., when $b=\eta_\infty$, a more straightforward expression emerges:
\[
1=\chi_{00}^{-}\,\left(\hat J_s \Psi^{-} \right)^{tr}\,\Psi^{+}=
\]
\[
=\tilde C^2\,\mathcal{D}\,\chi_{00}^{-}\left[
\frac{\kappa (\nu+2)\mathcal{D}+s^2\eta_\infty^{\nu+3}}{s\eta_\infty^{\nu+3}\sqrt{\mathcal{D}}};-\frac{1}{\sqrt{\mathcal{D}}};-\frac{\kappa(\nu+2)}{s\eta_\infty^{\nu+3}};0;0;-\frac{\nu+2}{s\sqrt{\mathcal{D}}\eta_\infty^{\nu+3}} \right]\,\times
\]
\[
\times\left[\frac{\nu+2}{s\eta_\infty^{\nu+3}};\frac{\nu+2}{\eta_\infty^{\nu+3}};
-\frac{\sqrt{\mathcal{D}}(\nu+2)}{s\eta_\infty^{\nu+3}};\frac{1}{\sqrt{\mathcal{D}}};
-\frac{\nu+2}{s\eta_\infty^{\nu+3}\sqrt{\mathcal{D}}};0\right]^{tr}=
\frac{2\chi_{00}^{-}\tilde C^2\,\mathcal{D}^{3/2}\,\kappa\,(\nu+2)^2}{\left(s\,\eta_\infty^{\nu+3}\right)^2}.
\]
Hence
\[
\chi_{00}^{-}=\frac{\left(s\,\eta_\infty^{\nu+3}\right)^2}{2\,\tilde C^2\,\mathcal{D}^{3/2}\,(\nu+2)^2\kappa}>0.
\]
\section*{ Appendix B}
Here we analyze the fulfillment of the hypotheses from \cite{BriDerks_02} which guarantee the existence of a normalization of the Evans function such that $\lim\limits_{\lambda \rightarrow +\infty}E(\lambda)=1.$ Substituting into Eq. (\ref{multisym}) a perturbation of the form
$Z(t,\,z)=Z(z)+\varepsilon\,e^{\lambda\,t}\,U(z),$ performing elementary algebraic transformations, and dropping the higher-order terms in $\varepsilon$, we get the linear dynamical system:
\begin{equation}\label{PertMult}
U^\prime=\hat A(z;\,\lambda)\,U(z), \qquad U\in R^{2\,n},
\end{equation}
where
\[
\hat A(z;\,\lambda)=\hat J_s^{-1}\left(\hat B(z)-\lambda\,\hat M \right)=\left( \begin{array}{cccccc}
0 & 0 & 1 & 0 & 0 & 0 \\
-\lambda & 0 & s & 0 & 0 & 0 \\
\gamma/\kappa & -s\,R(z)/\kappa & 0 & -\lambda/\kappa & 0 & -1/\kappa \\
0 & -R(z) & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array} \right),
\]
$\hat B(z)=D^2 S(Z),$ $R(z)=\eta_s^{\nu+3}(z)/(\nu+2).$ The matrix $\hat A(z;\,\lambda)$ is related to a pair of constant matrices
\[\hat A^{\pm}(\lambda)=\lim\limits_{z\rightarrow \pm\infty} \hat A(z;\,\lambda).\] In our case
\[
A^{+}(\lambda)=A^{-}(\lambda)=A_{\infty}(\lambda)=
\left( \begin{array}{cccccc}
0 & 0 & 1 & 0 & 0 & 0 \\
-\lambda & 0 & s & 0 & 0 & 0 \\
\gamma/\kappa & -s\,R_\infty/\kappa & 0 & -\lambda/\kappa & 0 & -1/\kappa \\
0 & -R_\infty & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array} \right),
\]
where $R_\infty=\eta_\infty^{\nu+3}/(\nu+2).$ The spectral problem for the matrix $A_{\infty}(\lambda)$ can be written as follows:
\begin{equation}\label{eigAinf}
\det\left[A_{\infty}(\lambda)-\mu\,I \right]=\mu^2 \left[\kappa\mu^4-\gamma\mu^2+R_\infty (\lambda-s\mu)^2 \right]=0.
\end{equation}
We prove the following assertions necessary for applying the results of \cite{BriDerks_02} to the investigation of the asymptotics of the Evans function at infinity.
\begin{lem} The spectrum of the matrix $A_{\infty}(0)$ contains a pair of real eigenvalues $\pm \zeta \neq 0,$ whereas the remaining eigenvalues vanish.
If $\lambda\in \mathbb{C}^{+},$ then the spectrum of $A_{\infty}(\lambda)$ contains a pair of eigenvalues $\mu_1^-,\,\mu_2^-$ with negative real parts, while the remaining eigenvalues have non-negative real parts.
\end{lem}
{\bf Proof.}
The first assertion is obvious, since for $\lambda=0$ it is not difficult to calculate the eigenvalues from the formula (\ref{eigAinf}):
\[
\mu_{1,\,2}=\pm\sqrt{\frac{\beta-s^2\,\eta_\infty^{\nu+3}}{\kappa (\nu+2)}}=\pm \zeta,
\]
while $\mu_{3,\dots,6}=0.$
To prove the second assertion, we apply the method of asymptotic expansions, representing the eigenvalues in the form of a series $\mu=a_0+a_1\,\lambda+\dots$. Substituting this expression into (\ref{eigAinf}) and equating the coefficients of the corresponding powers of $\lambda$ to zero, we obtain a system of algebraic equations. Since the computations are rather cumbersome, we used the {\it Mathematica} package to derive these equations. Thus, setting the zero-order coefficient to zero, we get the equation
\begin{equation}\label{Mpacksys0}
a_0^2\left[a_0^2\kappa+\left(s^2\,R_\infty-\gamma\right)\right]=0,
\end{equation}
having the pair of nonzero solutions
\[
a_{0}^\pm=\pm\sqrt{\frac{\beta-s^2\,\eta_\infty^{\nu+3}}{\kappa (\nu+2)}}=\pm \zeta.
\]
Equating to zero the coefficient of $\lambda^1$, we get:
\begin{equation}\label{Mpacksys1}
2\,a_0\,\left[a_1\left(s^2\,R_\infty-\gamma \right)+2\,a_0^2 \,a_1\kappa-s\,R_\infty\right]=0.
\end{equation}
For $a_0\neq 0$ Eq. (\ref{Mpacksys1}) gives the expression
\[
a_1\,|_{a_0^{\pm}}=\frac{s\eta_\infty^{\nu+3}}{\beta-s^2\eta_\infty^{\nu+3}}.
\]
So for $0<\lambda\ll 1,$ we have the following pair of roots:
\[
\mu_{1}^-=-\zeta+a_1\,\lambda +O(\lambda^2)<0
\]
and
\[
\mu_{1}^+=\zeta+a_1\,\lambda +O(\lambda^2)>0.
\]
The second pair of roots, corresponding to $a_0=0,$ is obtained from the next approximation. Setting the coefficient
of $\lambda^2$ equal to zero, we obtain the equation
\begin{equation}\label{Mpacksys3}
R_\infty (s a_1-1)^2-a_1^2\,\gamma=0,
\end{equation}
whose solutions are expressed as follows:
\[
a_{11}=\frac{s\,R_\infty-\sqrt{\gamma R_\infty}}{s^2 R_\infty-\gamma}>0,
\]
\[
a_{12}=\frac{s\,R_\infty+\sqrt{\gamma R_\infty}}{s^2 R_\infty-\gamma}<0.
\]
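The signs indicated here admit a one-line justification: since $\gamma-s^2R_\infty=\left(\beta-s^2\,\eta_\infty^{\nu+3}\right)/(\nu+2)=\kappa\,\mathcal{D}>0,$ we have $\sqrt{\gamma R_\infty}>s\,R_\infty,$ so that the numerator of $a_{11}$ and the common denominator $s^2R_\infty-\gamma$ are both negative, while the numerator of $a_{12}$ is positive.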
Using these solutions, we obtain the second pair of the roots:
\[
\mu_{2}^{-}=a_{12}\,\lambda+O(\lambda^2)<0,
\]
\[
\mu_{2}^{+}=a_{11}\,\lambda+O(\lambda^2)>0.
\]
Since the characteristic Eq.~(\ref{eigAinf}) always has a pair of zero solutions, the above construction exhausts all possible cases corresponding to small values of the parameter $\lambda>0.$ This suffices to complete the proof of the second assertion since, as is easily seen, $Re(\mu_i)$ can change sign only when the parameter $\lambda$ crosses the imaginary axis. Thus, Eq.~(\ref{eigAinf}) has exactly two solutions with negative real part for any $\lambda$ with positive real part.
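Since the expansion above was generated with a computer-algebra package, we also include a minimal SymPy sketch reproducing Eqs. (\ref{Mpacksys0}), (\ref{Mpacksys1}), and (\ref{Mpacksys3}) (an independent check, not the original {\it Mathematica} code; the symbol \texttt{R} stands for $R_\infty$):
\begin{verbatim}
import sympy as sp

mu, lam = sp.symbols('mu lambda')
kappa, gamma, R, s = sp.symbols('kappa gamma R s', positive=True)
a0, a1 = sp.symbols('a0 a1')

# quartic factor of the characteristic polynomial (eigAinf)
quartic = kappa*mu**4 - gamma*mu**2 + R*(lam - s*mu)**2

# substitute the series mu = a0 + a1*lam and collect powers of lam
series = sp.expand(quartic.subs(mu, a0 + a1*lam))
c0 = series.coeff(lam, 0)       # zero-order coefficient, cf. (Mpacksys0)
c1 = series.coeff(lam, 1)       # first-order coefficient, cf. (Mpacksys1)

zeta = sp.sqrt((gamma - s**2*R)/kappa)    # nonzero roots at lam = 0
print(sp.simplify(c0.subs(a0, zeta)))     # -> 0
print(sp.solve(c1.subs(a0, zeta), a1))    # -> [R*s/(gamma - R*s**2)]

# branch a0 = 0: the lam**2 coefficient reproduces (Mpacksys3)
print(sp.solve(series.coeff(lam, 2).subs(a0, 0), a1))
\end{verbatim}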
\section*{ Acknowledgements} The authors gratefully acknowledge Maxim Pavlov for drawing their attention to the transformation leading to the Hamiltonian representation of the source system. We are also greatly indebted to Sergij Kuzhel for valuable discussions during the preparation of this manuscript. One of the authors (VV) acknowledges support from the Polish Ministry of Science and Higher Education.
\begin{document}
\preprint{APS/123-QED}
\title{Constraint-free wavelength conversion supported by giant optical refraction \\ in a 3D perovskite supercrystal}
\author{Ludovica Falsi}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\affiliation{Dipartimento S.B.A.I., Sezione di Fisica, ``Sapienza'' Universit\`{a} di Roma, I-00161 Roma, Italy}
\author{Luca Tartara}
\affiliation{Dipartimento di Ingegneria Industriale e dell'Informazione, Universit\`{a} di Pavia, I-27100 Pavia, Italy}
\author{Fabrizio Di Mei}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\author{Mariano Flammini}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\author{Jacopo Parravicini}
\affiliation{Dipartimento di Scienza dei Materiali, Universit\`{a} di Milano-Bicocca, I-20125 Milano, Italy}%
\author{Davide Pierangeli}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\author{Gianbattista Parravicini}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Pavia, I-27100 Pavia, Italy}%
\author{Feifei Xin}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\affiliation{College of Physics and Materials Science, Tianjin Normal University, Tianjin, China, 300387}
\author{Paolo Di Porto}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\author{Aharon J. Agranat}
\affiliation{Applied Physics Department, Hebrew University of Jerusalem, IL-91904 Jerusalem, Israel}
\author{Eugenio DelRe}
\affiliation{Dipartimento di Fisica, Universit\`{a} di Roma ``La Sapienza'', 00185 Rome, Italy}
\affiliation{ISC-CNR, Universit\`a di Roma ``La Sapienza'', 00185 Rome, Italy}
\date{\today}
\begin{abstract}
\noindent
\textbf{Abstract} Nonlinear response in a material increases with its index of refraction as $n^4$. Commonly, $n \sim$ 1 so that diffraction, dispersion, and chromatic walk-off limit nonlinear scattering. Ferroelectric crystals with a periodic 3D polarization structure overcome some of these constraints through versatile Cherenkov and quasi-phase-matching mechanisms. Three-dimensional self-structuring can also lead to a giant optical refraction. Here, we perform second-harmonic-generation experiments in KTN:Li in conditions of giant broadband refraction. Enhanced response causes wavelength conversion to occur in the form of bulk Cherenkov radiation without diffraction and chromatic walk-off, even in the presence of strong wave-vector mismatch and highly focused beams. The process occurs with a wide spectral acceptance of more than 100 nm in the near infrared spectrum, an ultra-wide angular acceptance of up to $\pm 40^{\circ}$, with no polarization selectivity, and can be tuned to allow bulk supercontinuum generation. Results pave the way to highly efficient and adaptable nonlinear optical devices with the promise of single-photon-to-single-photon nonlinear optics.
\end{abstract}
\pacs{Valid PACS appear here}
\maketitle
\section*{Introduction}
\noindent
Frequency conversion and parametric amplification are fundamental ingredients for a wide family of applications, including light sources, detection, optical processing, and quantum-state generation \cite{Boyd2008,Shen1984, Loudon2010, Brown2003}. For quantum technology, a versatile and super-efficient nonlinear process is the key to photon-based quantum computing \cite{Tiecke2014,Reiserer2014,Chang2014}. In most schemes, optical nonlinearity can only be effectively harnessed when the coupling mechanism is driven by a cumulative wave interaction based on constructive interference. This imposes specific constraints on the available conversion schemes, the so-called phase-matching conditions, which depend both on the polarization, wavelength, and direction of propagation of the interacting waves and on the specific nonlinear susceptibility of the medium \cite{Boyd2008}.\\
These constraints can be overcome in engineered ferroelectric crystals with a full 3D periodic spontaneous polarization distribution through quasi-phase-matching \cite{Wei2018,Xu2018,Zhang2019,Stoica2019,Liu2019} and Cherenkov phase-matching \cite{Jelley1958,Mathieu1969,Tien1970,Zhang2008,Sheng2010,Sheng2012,Roppo2013,Ni2016}.
3D lattices of spontaneous polarization also occur naturally in nanodisordered ferroelectrics in the form of supercrystals \cite{Pierangeli2016}, in which case the highly ordered domain mosaic also leads to giant broadband optical refraction \cite{DiMei2018}. This has a direct effect on nonlinear scattering. Considering material polarization $P$ in terms of the Taylor series expansion in the propagating optical field $E_{opt}$, i.e., $P=\epsilon_0\left(\chi^{(1)}E_{opt}+\chi^{(2)}E^2_{opt}+...\right)$ \cite{Armstrong1962,Boyd2008}, the first term describes linear response through the first order susceptibility $\chi^{(1)}= n^2-1$, while higher-order terms describe nonlinear effects. The validity of the expansion implies $\chi^{(m+1)}/\chi^{(m)} \sim 1/E_{at}$, where $E_{at}$ is the scale of the atomic electric field of the substance. It follows that the intensity of an arbitrary allowed nonlinear scattering process scales with $(\chi^{(m)})^2(E_{opt})^{2m} \sim (\chi^{(1)}E_{opt})^2(E_{opt}/E_{at})^{2m}$, and the intensity of any higher-order scattering process scales with $n^4$ \cite{Loudon2010}. In these terms, giant refraction, i.e., an index of refraction $n\gg1$ across the visible and near infrared spectrum, forms a direct route to strongly enhanced nonlinear response. \\
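As a rough numerical illustration of this scaling (a back-of-the-envelope estimate of ours, based on the index values quoted later in this work: $n\simeq 2.3$ for standard KTN and $n_{GR}\gtrsim 26$ in the giant-refraction regime), the enhancement of the scattered intensity at a fixed driving field is of order
\[
\left(\frac{\chi^{(1)}_{GR}}{\chi^{(1)}_{KTN}}\right)^{2}=\left(\frac{n_{GR}^2-1}{n_{KTN}^2-1}\right)^{2}\simeq\left(\frac{675}{4.3}\right)^{2}\approx 2\times 10^{4}.
\]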
\noindent
Here we investigate second-harmonic-generation in a ferroelectric supercrystal manifesting giant refraction. Enhanced response allows the process to occur through bulk nonlinear Cherenkov radiation even for highly focused non-phase-matched beams, a method to achieve constraint-free wavelength conversion.
\section*{Results and Discussion}
\subsection{Giant Refraction Cherenkov SHG}
\noindent
In the paradigm nonlinear optical process, second-harmonic-generation (SHG), waves are generated at wavelength $\lambda/2$ (and angular frequency $2\omega$) by the anharmonic response of dipoles driven by the pump at wavelength $\lambda$ \cite{Kleinman1962}. The process occurs most efficiently when the converted signal interferes constructively with the pump itself, a phase-matching condition that embodies momentum conservation for the interaction. For any given material, dispersion causes phase-matching to occur naturally in the direction of the pump only for converted light (signal beam) whose wavevector $\mathbf{k}_{2\omega}$ forms a finite angle $\theta_C'$ relative to the pump itself $\mathbf{k}_{\omega}$. This leads to wavelength-dependent constraints on the process geometry, while the wavevector mismatch $\Delta \mathbf{k}=\mathbf{k}_{2\omega}-2\mathbf{k}_{\omega}$ is accompanied by chromatic walk-off \cite{Shen1984} (see Fig. 1a top panels). Collinear phase-matching ($\Delta \mathbf{k}=0$) can, in turn, be achieved using material birefringence, which introduces wavelength and polarization constraints \cite{Boyd2008}, and quasi-phase-matching, which requires periodic material microstructuring and is also wavelength-selective \cite{Fejer1992,Saltiel2009}. With $n\gg1$, the angle at which Cherenkov phase-matching occurs is greatly reduced ($\theta_C'\simeq 0$) so that chromatic walk-off does not intervene (see Giant refraction Cherenkov phase-matching in Methods and in Fig. 1a (bottom panels)).
The specific geometrical structure of Cherenkov SHG combined with giant refraction is illustrated in Fig. 1b. The pump propagates inside the sample along the normal to its input facet irrespective of launch angle $\theta_i$ (left panels) and the Cherenkov SHG copropagates with the pump inside the sample ($\theta_C'\simeq0$, central panels). At the output facet the pump and signal separate at a now finite angle $\theta_C$, as illustrated for the two cases of transverse electric (TE) and transverse magnetic (TM) polarizations (central and right panels, respectively). For the TE case, while the pump exits with an angle to the normal $\theta_0=\theta_i$, the Cherenkov SHG forms two beams angled with respect to the pump by $\theta_0\pm \theta_C$, with the same polarization as the pump and on the incidence plane (the xz plane). In turn, for the TM case, the SHG Cherenkov radiation separates at the output in the orthogonal plane (the yz plane) (see Giant refraction Cherenkov SHG in Methods).
\begin{figure*}[!ht]
\centering
\includegraphics[width=2.0\columnwidth]{figure_1.pdf}
\caption{\textbf{Giant refraction Cherenkov Second-Harmonic-Generation.} (a) For $n\sim1$, a finite Cherenkov phase-matching $\theta_C'$ leads to a limited beam interaction region associated with a finite beam width and chromatic walk-off. For $n\gg1$, the chromatic walk-off angle $\theta_C' \simeq 0$, so that the interaction region is expanded. (b) Geometry of giant refraction Cherenkov SHG for the TE and TM cases (see Giant refraction Cherenkov SHG Section in Methods). (c) Giant refraction is observed in a nanodisordered KTN:Li crystal cooled 15 K below the $T_C=313$K Curie point using white light from a commercial projector, leading to a signature achromatic propagation orthogonal to the input facet (along $z$) irrespective of the launch direction $z'$ (and launch angle $\theta_0$) with diffraction only occurring as the beam leaves the sample (see Giant refraction Experiments Section in Methods). (d) Top view of basic evidence of giant refraction for white light propagation in KTN:Li. (e), (f) SHG is observed using a mode-locked Ti:sa laser (see SHG Setup Section in Methods). (g) Average output SHG power versus pump input power along two different lengths of one sample (sample 1). Conversion scales with $P_\omega^2$ and with $L_z^2$ as would occur for bulk SHG conversion \cite{Armstrong1962}. Super-broad SHG (h) wavelength and (i) angular acceptance (see Acceptance Section in Methods).
}
\label{figure1}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{figure_2.pdf}
\caption{\textbf{SHG in a supercrystal.} (a) Illustration of a specific realization of a supercrystal. The spontaneous polarization (white arrows) determines the specific $\chi^{(2)}$ response of each composing tetrahedral domain (a color coding is implemented). The actual structure of a fundamental $\Lambda \times \Lambda \times \Lambda$ cube is illustrated through a sequential build-up adding groups of tetrahedral domains. The lattice constant $\Lambda$ ($\simeq 50 \mu m$) is fixed by the sample growth process (see Materials Section in Methods). The supercrystal can have many different chiral realizations. The one illustrated here serves to explain the specific results found, relative to the crystal growth axis $a$. Indicated are the regions leading to the strongest SHG, i.e., the core of polarization vortices in the $ab$ facet and the core of anti-vortices in the $ac$ and $bc$ facets. (b), (c) Non-zero contributions to $\chi^{(2)}$ are illustrated for the TM (b) and TE cases (c). (d)-(f) Output spatial distribution and polarization distribution for a pump polarized at 45 degrees (d), TE (e), and TM (f). (g) Schematic illustration of the experiments for a pump focused into a single polarization-vortex in the $ab$ facet and into an anti-vortex both in the $bc$ and $ac$ facets. (h) SHG for a pump focused in an anti-vortex in the $ac$ facet. As illustrated in the inset, the only finite contribution to SHG is mediated by $b$-polarized domains through the $d_{15}$ component for TM. (i) SHG for a pump focused in an anti-vortex in the $bc$ facet. Results are analogous to those for the $ac$ facet, while here a full-angle Cherenkov emission is clearly visible ($2\theta_C=\pi$). Note how, in distinction to random-phase-matching, no lateral emission is observed \cite{Roppo2010,Ayoub2011,Molina2008}, and all SHG originates, as expected, solely from the output facet (see inset photographs).}
\label{figure2}
\end{figure*}
We perform experiments in two samples of nanodisordered oxide ferroelectric KTN:Li perovskites (see Materials Section in Methods). These manifest giant refraction, with a record-high broadband index of refraction ($n>26$) at visible wavelengths. The effect is associated with the emergence of an underlying supercrystal \cite{Pierangeli2016}. Each lattice site of the supercrystal is the core of a periodic 3D vortex and anti-vortex structure, a mesh of spontaneous polarization that forms below the Curie point (see Supercrystal Preparation Section in Methods). Typical broadband giant refraction for sample 1 is reported in Figs. 1c, d (see Giant refraction Experiments Section in Methods). In Figs. 1e, f we illustrate the scheme used to investigate SHG using 190 fs pulses from a mode-locked Ti:Sa source (see SHG Setup Section in Methods). In Fig. 1g we report output SHG power versus pump input power. The observed scaling $P_{2\omega} \propto (P_{\omega})^2L_{\mathit z}^2$, where $L_{\mathit z}$ is the length of the sample in the $z$ direction, is reminiscent of the undepleted pump regime of standard SHG \cite{Armstrong1962, Boyd2008}. The $P_{2\omega}/P_{\omega}$ ratio is independent of input polarization, and the output polarization is found to coincide with the input. In Fig. 1h we report SHG conversion varying the pump wavelength in the available pump spectrum (see Acceptance Section in Methods). As reported in Fig. 1i, SHG conversion is observed for all accessible input angles $\theta_i$ ($=\theta_0$), indicating that the conversion is not limited by input angular acceptance (see Acceptance Section in Methods).
\subsection{An underlying 3D nonlinear lattice.}
\noindent
Wavelength conversion is mediated by the second-order nonlinear susceptibility response $\chi^{(2)}$ of the KTN:Li perovskite in its noncentrosymmetric tetragonal 4mm state. In distinction to single-domain or to quasi-phase-matching schemes, the nonlinear process is mediated by a supercrystal with its specific 3D geometry, giant refraction, and underlying ferroelectric domain structure \cite{Sheng2010,Wang2017}. Hence, while giant refraction causes conversion efficiency to be essentially independent of polarization, input angle, and wavelength, the details of the SHG output strongly depend on input parameters and the supercrystal structure. As illustrated in Fig. 2a, the structure of the 3D supercrystal is a volume lattice of 3D polarization vortices that emerge as the cubic symmetry is broken and polarization charge is screened \cite{Pierangeli2016,DiMei2018}. The supercrystal forms from the periodic compositional disorder along the growth direction (the $a$ axis). Each domain has its spontaneous polarization along one of the 6 principal directions (the direction of the spontaneous polarization is labeled using different colors; see white arrows and colored solids in Fig. 2a). In each domain (of a given color), the corresponding nonlinear susceptibility tensor $d$ depends on its orientation. Consider now the pump focused into a vortex site on the $ab$ facet of the supercrystal (Fig. 2b, c). For a TM polarization, most of the component solids lead to a net zero $\chi^{(2)}$ effect, as light experiences a sequence of oppositely polarized tetrahedral domains. The tetrahedral domains that dominate the $\chi^{(2)}$ response are those with a spontaneous polarization in the $a$ direction, provided light propagates along the $c$ direction shifted in the $b$ direction above and below a single polarization vortex. Here conversion occurs through a sequence of solids with identically oriented polarization (see Fig. 2b). In the TE case, the situation is analogous, but the SHG signal is now produced for light propagating in the $c$ direction in regions shifted in the $a$ direction in proximity of the vortex (see Fig. 2c). Focusing the pump on the $ab$ facet into a polarization vortex leads to the output intensity distribution reported in Fig. 2d-f. For a $\lambda=$810 nm pump beam polarized at 45 degrees with respect to the crystal $a$ and $b$ axes, a signature SHG Cherenkov output peaked at $\lambda/2= 405$ nm is detected, formed by two TE components in the $a$ direction and two TM components in the $b$ direction, the pump beam being at the center of this diamond-like distribution (Fig. 2d). For a TE pump, two TE components in the $a$ direction are dominant (Fig. 2e), while for a TM pump, two TM components along the $b$ axis form (Fig. 2f). Similar results are observed in both samples 1 and 2. An illustration of the SHG experiments for a pump focused into a single polarization-vortex in the $ab$ facet and into an anti-vortex both in the $bc$ and $ac$ facets is reported in Fig. 2g. The situation for a pump focused onto the $ac$ facet is reported in Fig. 2h. Here, only the $b$-oriented ferroelectric tetrahedral domains can contribute to giant refraction Cherenkov SHG, and this only for the TM polarization, a condition that is achieved by focusing the pump on an anti-vortex as opposed to a vortex. The output structure preserves the TM polarization and has a greatly enhanced output angular spectrum that no longer manifests localized peaks. A similar situation occurs also for light focused onto the $bc$ facet, as reported in Fig. 2i, where the output SHG is emitted at all available angles (see Fig. 2i inset photographs). The observed SHG follows the basic giant refraction SHG Cherenkov mechanism illustrated in Fig. 1b (see Cherenkov SHG Experiment Section in Methods).
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\columnwidth]{figure_3.pdf}
\caption{\textbf{Chromatic dispersion and output SHG power vs. input pump power.} (a) Supercrystal versus cubic phase chromatic dispersion measured using group velocity dispersion experiments in sample 2. (b) Output SHG power $P_{2\omega}$ normalized to $L_z^2$ versus input pump $P_{\omega}$ at a $\lambda =810$nm and $\theta_0=0$ for sample 1 and sample 2.}
\label{figure3}
\end{figure}
Results on Cherenkov SHG reported in Fig. 2, which include conversion for light propagating along all three principal crystal axes, provide a nonlinear corroboration of the evidence of a 3D ferroelectric lattice obtained through transmission microscopy \cite{Pierangeli2016, DiMei2018} and polarization transmission microscopy \cite{Ferraro2017}. They also provide, through a direct measurement of $\theta_C$, an estimate of the supercrystal parameter 2$\Delta n\, n_{2\omega}=(\sin{\theta_C})^2$ (see Cherenkov SHG Experiment Section in Methods). In the case of Fig. 2b, 2$\Delta n\, n_{2\omega} \simeq 0.08$. Snell refraction experiments in this direction provide $n_{2\omega}> 26$, so that we expect a $\Delta n < 0.001$, corresponding to an ultra-low approximate dispersion of $dn/d\lambda <-0.002 \mu\text{m}^{-1}$. The prediction fits well with our understanding of the supercrystal phase, for which chromatic dispersion is expected to be strongly reduced. To investigate this further, we directly measured supercrystal chromatic dispersion using group-velocity-dispersion measurements \cite{Bor1985} for $T>T_C$, where no supercrystal forms, and for $T<T_C$, where the supercrystal forms. Results are reported in Fig. 3a. As expected, the onset of the supercrystal structure is accompanied by a sharp reduction in the average values of dispersion, from $dn/d\lambda \simeq -0.10 \mu\text{m}^{-1}$ to $dn/d\lambda \simeq -0.06 \mu\text{m}^{-1}$. This, in turn, is not sufficiently small to circumvent the need for Cherenkov phase-matching ($\theta_{C} \simeq$ 0.28 in Figs. 2d-f). Strong SHG conversion does not allow a local vortex and anti-vortex dispersion measurement.
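Spelling out the arithmetic behind the dispersion bound quoted above (our explicit restatement, taking the pump-to-SH wavelength span $\lambda_{\omega}-\lambda_{2\omega}=0.405\,\mu$m at $\lambda=810$ nm):
\[
\left|\frac{dn}{d\lambda}\right|\simeq\frac{\Delta n}{\lambda_{\omega}-\lambda_{2\omega}}<\frac{0.001}{0.405\,\mu\text{m}}\approx 2.5\times10^{-3}\,\mu\text{m}^{-1},
\]
consistent in order of magnitude with the figure quoted above.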
We compared supercrystal SHG in the two samples to identify possible growth- and composition-related effects. We found that samples 1 and 2 manifest the same geometrical behavior as regards giant refraction and Cherenkov SHG, while their net SHG conversion efficiency is considerably different, as reported in Fig. 3b. This may be connected to the different values of Curie temperature and/or different values of $\Lambda$ (70 $\mu$m for sample 1 and 50 $\mu$m for sample 2). An estimate of the effective $\chi^{(2)}_{GR}$ is provided in the $\chi^{(2)}_{GR}$ Evaluation Section in Methods.
\subsection{Spectral and angular acceptance.}
\noindent
The angle at which Cherenkov phase-matching is achieved is wavelength-dependent ($\theta_C(\lambda)$). To characterize this, we report in Fig. 4a measurements of spectral acceptance for a detector able to collect light only from a limited cone at two fixed angles $\theta_1$ (yellow dots) and $\theta_2$ (magenta dots). The result is a spectral bandwidth whose peak follows $\theta_C(\lambda)$ and whose width is in agreement with the angular acceptance (see Angular versus Wavelength Acceptance Section in Methods). Since giant refraction allows no diffraction or pump-signal walk-off, Cherenkov phase-matching will occur for all wavelengths. In turn, not all Cherenkov SHG can actually leave the output facet of the sample, as total internal reflection occurs for wavevectors that have an internal incidence angle $\theta_i'>1/n_{2\omega}$ with the output facet. Hence, for $\theta_i=0$, $\theta_i'=\theta_C'=\arccos{(n_\omega/n_{2\omega})}$, and no emitted SHG will result when $\arccos{(n_\omega/n_{2\omega})}>1/n_{2\omega}$. The effect can be appreciated by recalling the full angle-integrated measurement reported in Fig. 1h (blue circles). For a given pump wavelength, the same effect will occur as a function of $\theta_0$ ($\theta_i$): assuming the previously evaluated $\sqrt{2\Delta nn_{2\omega}} \simeq 0.28$, we expect to observe total internal reflection for an input $|\theta_0|>46^{\circ}$ (see Total Internal Reflection Section in Methods). Measured values of $\theta_C(\lambda)$ are reported in Fig. 4b and are in agreement with the chromatic dispersion results of Fig. 3a. In Fig. 4c we report the SHG output, for a $\lambda=810$nm pump, for different pump launch angles, as in Fig. 1i, but distinguishing between the two Cherenkov components $+\theta_C$ (violet circles) and $-\theta_C$ (magenta circles). SHG suppression is observed for $|\theta_0| > 25^{\circ}$. An illustration of the geometry leading to SHG suppression caused by total internal reflection of the Cherenkov radiation is reported in Fig. 4d. Once again, the broad spectral and angular acceptance underline that the Cherenkov mechanism at work is not Bragg in nature, nor does it relate to quasi-phase-matching.
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{figure_4.pdf}
\caption{\textbf{Cherenkov spectral and angular acceptance.} (a) Spectral acceptance for a detector placed at two fixed angles (yellow and magenta circles) compared to the super-broad spectral acceptance capturing all emitted light (blue circles). (b) Observed Cherenkov angle versus wavelength. (c) Angular acceptance considering the two TE Cherenkov radiation beams separately (magenta and violet circles). (d) Illustration of the geometry leading to SHG suppression caused by total internal reflection of the Cherenkov radiation. }
\label{figure4}
\end{figure*}
\subsection{Enhanced Fresnel reflection and extreme nonlinearity}
\noindent
The $n\gg1$ regime leading to SHG (as discussed in Fig. 2) is accompanied by strong Fresnel reflection at the input and output facets. This does not allow a direct evaluation of the enhanced wavelength conversion occurring inside the sample by detecting the converted light transmitted outside the sample. Fresnel reflection can be measured directly for the pump, which experiences a conventional $R \simeq$ 0.2, compatible with an average index of refraction $\sim$ 2.6, as expected for light focused onto the vortex and antivortex core \cite{DiMei2018}. By aligning the pump in different positions, a maximum $R \simeq$ 0.45 is observed, compatible with an average $n\sim 5$. In these conditions, conversion can be analyzed through the observation of beam dynamics along the propagation direction. In a standard system, this leads to an evolution governed by the so-called Manley-Rowe relationships \cite{Armstrong1962,Boyd2008}. In Fig.5a, b we report scattered SHG light from the body of the sample. Observed scattering in our experiment disappears altogether only when the sample is heated above the Curie point $T_C$ (Fig.5c) and for $T<T_C$ leads to an almost constant SHG signal from the input facet of the sample to the output facet. Analysis of the scattered light versus propagation distance in the sample for different temperatures is reported in Fig. 5d. The transition from SHG to supercontinuum generation is achieved by doubling the input pump numerical aperture, as reported in Fig. 5e, f (see the Supercontinuum Generation Section in Methods). The origin of this broadband emission and its relation to specific nonlinear processes is still unclear. We recall that SHG with remarkably wide angular and spectral acceptance can be observed in multidomain ferroelectrics \cite{Molina2008, Molina2009}. The enhanced tunability in these studies is associated with phase-matching supported by the underlying disordered domain structure, while in our experiments, phase-mismatch (the transverse component $\Delta k$ in Fig.1) persists. The tunability here is then a product of giant refraction that does not involve Bragg scattering. It is also useful to compare our findings with the highly versatile SHG in nonlinear photonic crystals \cite{Mateos2012, Li2012}. Here the physical origin of efficient SHG is phase-matching mediated by vector components of the reciprocal lattice in the linear or nonlinear response. It follows that the extent of the pump beam must be larger than the lattice constant. In our case the 15 $\mu$m beam hardly occupies even a single lattice site ($\Lambda \simeq$ 50 $\mu$m).
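For reference, the Manley-Rowe benchmark mentioned above takes the form (a standard textbook statement, following \cite{Armstrong1962,Boyd2008}, not a result specific to our sample)
\[
\frac{d}{dz}\left(\frac{I_{\omega}}{\hbar\omega}\right)=-2\,\frac{d}{dz}\left(\frac{I_{2\omega}}{2\hbar\omega}\right),
\]
i.e., two pump photons are annihilated for each SH photon created, fixing the longitudinal dynamics against which the nearly constant scattered SH signal of Fig. 5d can be compared.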
\section*{Conclusions}
The $n\gg1$ regime opens up a wide range of hitherto unobserved and highly versatile nonlinear effects that complement other pioneering experiments, such as mismatch-free nonlinear propagation in zero-index materials \cite{Suchowski2013}. In this paper we have reported our investigation of SHG in conditions of giant refraction. The converted light appears in the form of Cherenkov radiation even in the presence of phase-mismatch.
This reduces constraints on launch angle, a feature that can considerably mitigate alignment requirements in nonlinear-based light sources. Furthermore, the SHG manifests increased tolerances in wavelength and polarization, a property that can be implemented to support multiple simultaneous nonlinear processes, with specific impact, for example, in the conversion of infrared images to the visible spectrum.
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{figure_5.pdf}
\caption{\textbf{Strong SHG conversion versus limited net conversion efficiency.} (a), (b) Top view of pump and SHG-signal scattered light ($T=T_C-35K$, sample 2). (c) Scattered SHG light from the body of sample 2, almost constant for $T<T_C$ from the input facet of the sample to the output facet, while it disappears when the sample is heated above the Curie point $T_C$. (d) Analysis of the scattered light versus propagation distance in the sample for different temperatures indicates a characteristic absence of propagation dynamics for the ferroelectric case ($T=T_C-35K$, magenta full circles), for the Curie temperature on heating from the ferroelectric phase ($T_C^-$, heating from $T_C-35K$ to $T_C$), and on cooling from the paraelectric phase ($T_C^+$, cooling from $T_C+10K$ to $T_C$). (e) Evidence of supercontinuum generation (see Supercontinuum Generation Section in Methods). For further details see the Video in the Supplementary Movie. (f) Spectrum and multispectral images ($\sim$ 10 nJ/pulse pump at input).}
\label{figure5}
\end{figure*}
\section*{Methods}
\noindent\textbf{Giant refraction Cherenkov phase-matching}. In a material with giant broadband refraction, $n_{\omega},n_{2\omega} \gg 1$. At the sample input facet the plane-wave components of the pump refract according to the Snell law $\theta_r=\arcsin{(\sin{\theta_i}/n_{\omega})}\simeq 0$, where $\theta_i$ and $\theta_r$ are the incidence and refraction angles. Cherenkov phase-matching occurs for SHG wavevectors at an angle $\theta_C'=\arccos{(n_{\omega}/n_{2\omega})}\simeq 0$ relative to the pump, inasmuch as the relative mismatch $(n_{2\omega}-n_{\omega})/n_{\omega}\ll 1$ even for a finite $n_{2\omega}-n_{\omega}$.
\noindent\textbf{Giant refraction Cherenkov SHG}. SHG polarization is $P_i^{2\omega}=d_{ijk}E_j^{\omega}E_k^\omega$, where $\textbf{E}^\omega$ is the pump field and $d_{ijk}$ is the nonlinear optical susceptibility tensor, which has nonzero components $d_{31},d_{33}$, and $d_{15}$ for the tetragonal 4mm symmetry of KTN:Li \cite{Yariv2002}. Considering the TE case for a spontaneous polarization parallel to the optical polarization (along the y axis), $P_{\mathit y}^{2\omega}=d_{33}(E_{\mathit y}^{\omega})^2$. The emitted Cherenkov radiation must then have a $\textbf{k}_{2\omega}$ in a plane orthogonal to $\textbf{P}^{2\omega}$, i.e., in the incidence plane (xz) (central panels in Fig.1b). Analogously for the TM case, in which, for a spontaneous polarization parallel to the optical polarization (along the x axis), the nonlinear polarization is dominated by the x component $P_{\mathit x}^{2\omega}=d_{33}(E_{\mathit x}^{\omega})^2$, so that Cherenkov SHG occurs in the yz plane (right panels in Fig.1b). In the TM case, an SHG contribution arises also for a domain with a spontaneous polarization along the z axis, i.e., $P_{\mathit x}^{2\omega}=2d_{15}E_{\mathit x}^{\omega}E_{\mathit z}^{\omega}$ and $P_{\mathit z}^{2\omega}=d_{31}(E_{\mathit x}^{\omega})^2+d_{33}(E_{\mathit z}^{\omega})^2$. The emitted Cherenkov SHG will then be TM polarized and have a $\textbf{k}_{2\omega}$ orthogonal to $\textbf{P}^{2\omega}$, which lies in the incidence plane xz (i.e., $\textbf{k}_{2\omega}$ along the y axis). This situation is particularly relevant for the results reported in Fig.2h and Fig.2i. In this case the spontaneous polarization is oriented orthogonal to the input facet, so that while the pump prevalently experiences a standard index of refraction $n_\omega$, the SHG is dominated by $d_{33}$ and has a stronger component along the direction of spontaneous polarization. The result then is that $n_\omega/n_{2\omega}\ll 1$, so that $\theta'_{C}$ inside the sample remains finite, while all waves still have their Poynting vectors along the normal to the input facet (giant refraction). The angular amplification caused by the angular spectrum of the focused pump then populates, in a continuous manner, a wide cone of SHG emission around the average pump propagation direction that, at output, can even occupy the entire angular spectrum ($2 \theta_C =\pi$, Fig. 2i).
\noindent\textbf{Materials}.
The two samples (sample 1 and sample 2) are zero-cut polished lithium-enriched solid-solutions of potassium-tantalate-niobate (KTN:Li). They have the same composition K$_{0.997}$Ta$_{0.64}$Nb$_{0.36}$O$_3$:Li$_{0.003}$, while in the flux of sample 2 traces of Mo impurities are introduced. Sample 1 measures along its three axes 4.62$^{\mathit (a)}$ x 3.86$^{\mathit (b)}$ x 1.6$^{\mathit (c)}$ mm while sample 2 is 6.96$^{\mathit (a)}$ x 3.86$^{\mathit (b)}$ x 1.6$^{\mathit (c)}$ mm. The samples form perovskites with room-temperature cubic-to-tetragonal (m3m to 4mm) ferroelectric phase-transition temperatures $T_{C,1}=315$ K and $T_{C,2}=333$ K. Both are grown through the top-seeded method that causes them to have a built-in spatially periodic oscillation in composition along the growth axis (the $a$ axis) that translates into an approximately periodic $\Lambda=50$ $\mu$m striation grating (for sample 2) that then determines the lattice constant of the underlying super-crystal \cite{Pierangeli2016}.
\noindent\textbf{Supercrystal Preparation}.
Each sample, initially equilibrated at $T=298$K and unbiased, is heated to $\simeq 373$ K at a rate of 0.6 K/min and is DC-biased by an electric field that increases at a constant rate from 0 to 4 kV cm$^{-1}$. The sample is then cooled back down to $T=298$K while the bias field remains constant at 4 kV cm$^{-1}$. The DC field is applied between the two parallel faces along the $a$ axis (growth axis). To minimize the temperature gradient, the sample is dipped into a Teflon holder that contains temperature-resistant mineral oil. The supercrystal can now be further modified by having the sample undergo successive thermal cycles, composed of a first stage in which the unbiased sample is heated to $T_C+10 K$ at a rate of $0.35$K$s^{-1}$ immediately followed by a second cooling stage to $T_C-35$K at a rate of $0.1$ K$s^{-1}$. Once the thermal protocol is completed, each sample is used for optical experiments at a given temperature $T<T_C$.
\noindent\textbf{Giant refraction Experiments}. The sample is cooled using a current-controlled Peltier junction to $T_C-35$K and rotated by a tunable angle $\theta_0$ with respect to the optical propagation axis $z'$ (see scheme illustrated in Fig.1c). Light is collected from a commercial projector (NEC-VE281X, XGA, 2800 lumens) polarized using a linear polarization filter and focused onto the input facet of the sample using a high-aperture long-working distance microscope objective (Edmund Optics, x100, 3mm
working distance, achromatic, NA$=0.8$) positioned $\simeq$ 30 cm from the output lens of the projector. The top-view image in Fig. 1d is taken using an Apple iPhone7. Top-view scattered light from within the sample and from the lower metallic support indicates strong reflection from the input facet and a non-spreading propagation inside the sample normal to the input facet irrespective of wavelength, $\theta_0$, and launch polarization, and a regular diffraction of the beam exiting the sample, as expected for giant refraction.
\noindent\textbf{SHG Setup}.
SHG experiments (see scheme and photo of apparatus in Fig. 1e) are carried out in the 790-880 nm range using a Tsunami Spectra Physics Ti:Sa CW mode-locked laser (maximum output power of 0.6W at $\lambda = 810 \pm 7$ nm), with a repetition rate of 80 MHz and a pulsewidth of 190 fs. Laser beam linear polarization, TM or TE, or a superposition of the two, is set using a $\lambda /2$ waveplate. The beam is focused onto the input facet of the $\theta_0$-rotated sample using a 50-mm-focal-length lens. The pump beam is focused to an input FWHM $\simeq 15 \mu$m. The SHG pattern is detected on a white screen placed at $d$= 7.0 cm from the output facet of the sample using a Canon EOS 50d. SHG power $P_{2\omega}$ is measured in Fig. 1g (and Fig. 3b) filtering and focusing converted light onto a power meter for a TM pump (and SHG).
\noindent\textbf{Acceptance}. Spectral acceptance is reported for $\theta_0=0$ in arbitrary power units $P_{2\omega}$ normalized to the peak spectral value. Since each measurement at different wavelengths is carried out with different pump power, the output signal is rescaled appropriately, i.e., divided by the input power squared. Angular acceptance is evaluated for a $810$nm pump and for all accessible launch angles. In both spectral and angular acceptance experiments, output SHG is collected by a lens and focused onto a power meter. Note that, as a consequence of giant refraction, the effective propagation length in the sample is launch-angle independent and equal to the length of the sample in the propagation direction (see Fig. 1b). Wavelength dependence, due to the Fresnel reflection at output, is analyzed in Fig.4.
\noindent
\textbf{Cherenkov SHG Experiment.}
$\cos{(\theta_C')}=(2k_{\omega}/k_{2\omega})=n_{\omega}/n_{2\omega}$. For normal dispersion ($n_{2\omega}>n_{\omega}$) $n_{\omega}=n_{2\omega}-\Delta n$, so that, since $\theta_C'\ll 1$, $\theta_C'\simeq \sqrt{2\Delta n/n_{2\omega}}$. Outside the sample, $\sin{(\theta_C)}=n_{2\omega}\sin{\theta_C'}\simeq n_{2\omega}\theta_C'$. Measuring $\theta_C$ leads to an estimate of $\Delta n=(\sin{(\theta_C)})^2/(2n_{2\omega})$. For a pump focused on the $ab$ facet in Fig.2b, for both the TE and TM cases, the two SH beams emerge in the x-z (i.e., $ac$) and y-z (i.e., $bc$) planes at an angle $\theta_C \simeq 0.28$ rad with respect to the pump (for all accessible values of $\theta_i$). According to the Cherenkov model, this implies that $\sqrt{2\Delta n n_{2\omega}}\simeq 0.28$. For light focused on the $ac$ facet in Fig. 2h and $bc$ facet in Fig. 2i, SHG is generated from tetrahedral domains oriented along the $b$ and $a$ axis, respectively, i.e., with a spontaneous polarization orthogonal to the input facet. Involving both $d_{31},d_{33},d_{15}$, the result is a TM SHG in the $bc$ and $ac$ plane, respectively. The observed $\theta_C \simeq \pi/2$ (see detailed photos in the inset of Fig. 2i) corresponds to $\sqrt{2\Delta n n_{2\omega}}\simeq 1$. A similar contribution, associated with $d_{15}$, arises in the case reported in Fig. 2b, c, where $d_{33}$ contributions are dominant, and leads to the ``spurious" SHG scattering in the TM case (Fig. 2f) as opposed to the TE case (Fig. 2e).
\noindent\textbf{\bm{$\chi^{(2)}_{GR}$} Evaluation}.
To provide an estimate of the effective nonlinear susceptibility $\chi^{(2)}_{GR}$ we make use of the simplified plane-wave model (diffraction is absent in the GR regime) described in Ref. \cite{Boyd2008}, pages 77-79. For sample 1, the time-averaged powers at the output are $P_{\omega}= 510$ mW and $P_{2\omega}= 2\ \mu$W for the 190 fs pulse train operating at 80 MHz repetition rate, while the beam waist is taken to be $w_{0} \simeq 15\ \mu$m. For the purpose of evaluating the peak intensity and Fresnel reflection, the regions leading to SHG for the TM case have $n_{\omega} \sim n_{2\omega} \sim n_{GR}$, where
$n_{GR}\gg 1$. Of the input pump beam, only a portion actually interacts with the tetrahedral structures allowing SHG (the $d_{33}$ solids in Fig. 2b). The fraction of active area can be evaluated by measuring the Fresnel reflection at the input and output facets, comparing it to what is expected for regions with standard reflection (i.e., for $n \simeq 2.2$) and for those regions that have $d_{33}$ and hence an enhanced reflection associated with $n_{GR}$. While longitudinal phase-matching is guaranteed by the Cherenkov-like geometry (Fig. 1a), the aperture of the pump and SHG beams remains finite ($\Delta \theta '\sim\Delta\theta_{ext}/n_{GR}$, where $\Delta \theta_{ext} \simeq 0.07$). This introduces a residual longitudinal mismatch. The transverse wavevector mismatch can be evaluated considering $\theta_{c}\simeq 0.28$ rad measured outside the sample, leading to $\Delta k \simeq 4.4\ {\mu m}^{-1}$. The result is $\chi^{(2)}_{GR} \simeq 5.2$ pm V$^{-1}$ ${n^{2}}_{GR}$. Considering even the minimum value of $n_{GR}$ as measured from diffraction and Snell refraction ($n_{GR} \simeq 26$), we obtain an effective $\chi^{(2)}_{GR} \simeq 3.5 \times 10^3$ pm V$^{-1}$, to be compared to the measured value of standard KTN, i.e., $\chi^{(2)}_{KTN} \simeq 168$ pm V$^{-1}$ ($n\simeq 2.3$) \cite{Zhang1997}.
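For reference, an order-of-magnitude sketch of the pump peak power and intensity entering this estimate is given below; the top-hat beam-area approximation is an assumption made for illustration only.
\begin{verbatim}
import numpy as np

P_avg  = 0.510     # W, average pump power
f_rep  = 80e6      # Hz, repetition rate
tau    = 190e-15   # s, pulse width
w_fwhm = 15e-6     # m, input FWHM

P_peak = P_avg / (f_rep * tau)        # ~ 34 kW peak power
area   = np.pi * (w_fwhm / 2) ** 2    # rough (top-hat) beam area
I_peak = P_peak / area                # W / m^2
print(f"P_peak ~ {P_peak/1e3:.0f} kW, "
      f"I_peak ~ {I_peak:.1e} W/m^2")
\end{verbatim}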
\noindent\textbf{Angular versus Wavelength Acceptance}. To test the interplay between angular and spectral acceptance, we maximize SHG efficiency, i.e., Cherenkov phase-matching is established for the specific pump wavelength $\lambda$, and the SHG signal detector is placed so as to capture a single output diffraction-limited mode. As reported in Fig. 4a, changing the pump wavelength without altering the crystal and detector geometry leads to a relative spectral acceptance $\Delta \lambda/ \lambda \simeq 0.047$, in agreement with the input pump numerical aperture $2\lambda/ \pi w_{0} \simeq 0.05$.
\noindent\textbf{Total internal reflection}. Total internal reflection of the SHG signal occurs at the output facet when approximately $\theta_C'+\theta_r>1/n$, where $\theta_r=\sin{(\theta_0)}/n$ is the internal refraction angle. Assuming $\theta_C'=\sqrt{2\Delta n/n_{2\omega}}$, total internal reflection occurs for $|\sin{(\theta_0)}|>1-\sqrt{2\Delta n n_{2\omega}}$. Taking the value $\sqrt{2\Delta n n_{2\omega}}\simeq 0.28$ gives $|\theta_0|>46^{\circ}$.
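Numerically, this threshold follows as a one-line check using the measured $\sqrt{2\Delta n\, n_{2\omega}}\simeq 0.28$:
\begin{verbatim}
import numpy as np
print(np.degrees(np.arcsin(1 - 0.28)))   # ~ 46 deg launch-angle threshold
\end{verbatim}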
\noindent\textbf{Supercontinuum generation}. Experiments are carried out replacing the 50 mm lens with a 25 mm one. The pump is now focused near the input facet of sample 2 (Fig. 5e) and a characteristic white plume is detected.
\section*{Data availability }
\noindent
The data that support the plots within this paper and other findings of this study
are available from the corresponding author upon reasonable request.
\vspace*{0.2cm}
\section{System Model}
\label{model_mec}
\subsection{Mobile Application Model}
We consider a mobile application that is composed of mutually dependent computation task modules (i.e., procedures/components), which need to be executed in a specific order. Thus, a mobile application can generally be modeled as a directed acyclic graph (DAG) $G=\{\mathcal{N}, \mathcal{E}\}$, where $\mathcal{N}$ and $\mathcal{E}$ are the sets of nodes (i.e., task modules) and directed edges (i.e., indicating the dependency between two task modules), respectively. This modeling has been widely adopted in the computation offloading literature, e.g., \cite{wu2019efficient,luo2017energy,zhang2015collaborative,mao2017mobile,kao2017hermes}, which captures the inter-dependency among different task modules using a directed acyclic call-graph. In practice, the partition of an application into task modules can be obtained using task profilers \cite{melendez2017computation,miettinen2010energy} or manually decided by the mobile user. In Fig. \ref{fig:tag}, we show an example of a DAG-modeled real-world mobile application called Smart Diagnosis.
\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{figures/diseaes3.eps}
\caption{An example of a DAG-modeled mobile application, Smart Diagnosis.}
\label{fig:tag}
\end{figure}
\begin{table*}[h]
\caption{Frequently used notations.}
\centering
\begin{tabular}{c|c | c |c }
\hline
Notations & Descriptions & Notations & Descriptions\\
\hline
$\mathcal{N}$ & set of task modules in an application & $c_n\in\{0,1\}$ & whether or not module $n$ is executed at local device\\
\hline
$\mathcal{E}$ & set of dependency connections among modules & $s_n\in\{0,1\}$ & whether or not module $n$ is offloaded to edge server\\
\hline
$\omega_n$ & workload (in CPU cycles) of task module $n\in\mathcal{N}$ & $Q_{m,n}^{u}$ & uncertain existing queue lengths at the client device\\
\hline
$o_{n,k}$ & output data size (in bits) from module $n$ and to $k$ & $Q_{m,n}^{d}$ & uncertain existing queue lengths at the edge server\\
\hline
$\mathcal{C}(n)$ & set of children nodes of module $n$ & $f_c$ ($f_s$) & CPU frequencies at local device (edge server) \\
\hline
$\mathcal{M}(n)$ & set of parent nodes of module $n$ & $R_u$ ($R_d$) & uncertain uploading (downloading) bit rates\\
\hline
$\Delta$ & duration of a time slot in the MEC system & $P_u$ ($P_d$) & uncertain uploading (downloading) power consumption\\
\hline
$T$ & completion deadline of the mobile application & $\epsilon_m$ & upper bound of extreme event occurrence probability\\
\hline
$\mathbf{x}_{n,t}^p$ & whether or not module $n$ terminates execution in & $\epsilon$ & approximation factor of the optimal solution\\
\cline{3-4}
$\in\{0,1\}$ & $t$th slot at client device $p=0$ (or edge $p=1$) & $\Psi$ & worst-case expected energy consumption of local device\\
\hline
\end{tabular}
\label{freq_notations}
\end{table*}
A task module $n\in\mathcal{N}$ of a mobile application is defined by $\omega_n$, where $\omega_n$ is the computation workload (in CPU cycles) of module $n$. The dependency between two task modules is defined by $o_{n, k}$, for $k \in \mathcal{C}(n)$, where $o_{n, k}$ is the output data size (in bits) from $n$ to $k$ and $\mathcal{C}(n)$ is the set of child nodes of module $n$. Likewise, we also denote by $\mathcal{M}(n)$ the set of parent nodes of module $n$. Note that, $\omega_n$'s and $o_{n,k}$'s can be inferred by task profilers \cite{melendez2017computation,miettinen2010energy} before running any offloading scheme. Table \ref{freq_notations} lists the frequently used notations.
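To make the graph model concrete, the following Python sketch encodes $\omega_n$, $o_{n,k}$, $\mathcal{C}(n)$, and $\mathcal{M}(n)$ for a toy DAG; the module indices, workloads, and output sizes are hypothetical, not those of Smart Diagnosis.
\begin{verbatim}
workload = {1: 2e6, 2: 5e6, 3: 4e6, 4: 1e6}   # omega_n, CPU cycles
output   = {(1, 2): 8e4, (1, 3): 6e4,         # o_{n,k}, bits
            (2, 4): 5e4, (3, 4): 7e4}

def children(n):
    """Set C(n) of child modules of n."""
    return {k for (i, k) in output if i == n}

def parents(n):
    """Set M(n) of parent modules of n."""
    return {i for (i, k) in output if k == n}

print(children(1), parents(4))   # {2, 3} {2, 3}
\end{verbatim}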
\subsection{MEC System Model}
We consider a typical MEC system, in which task modules of a mobile application are offloaded over uncertain radio channels to edge servers. A task module waits in a queue before it is transmitted over a wireless channel. In addition, to support computation offloading, the edge server assigns virtual machines to execute task modules offloaded by client devices. We consider a time-slotted MEC system: time is divided into slots of equal duration $\Delta$, and the discrete time period $((t-1)\Delta, t\Delta]$ is referred to as slot $t$.
To guarantee the quality of service, the entire mobile application is required to complete within $T$ time slots.
\subsection{Computation Execution in the MEC System}
\subsubsection{The Location of Executing A Task Module}
To model the module execution location, we first define a binary variable $\mathbf{x}_{n, t}^p$ to indicate whether module $n$ terminates its execution at the $t$th slot at a client device (i.e., $p = 0$) or at an edge server (i.e., $p = 1$). $\mathbf{x}_{n}^0\in\{0,1\}^T$ and $\mathbf{x}_{n}^1\in\{0,1\}^T$ are 1-sparse vectors that record the values of $\mathbf{x}_{n, t}^0$ and $\mathbf{x}_{n, t}^1$, respectively. As a result, we have $c_n = \sum_{t=1}^T\mathbf{x}_{n, t}^0=1$ and $s_n = \sum_{t=1}^T\mathbf{x}_{n, t}^1=1$ for executing task module $n$ at a client device and an edge server, respectively. Since a task module can be executed at either a client device or the edge server, we have
\begin{equation}
c_n+s_n = 1, \forall n \in\mathcal{N}.
\label{other_node}
\end{equation}
In particular, for practical mobile applications, the first and last task modules are usually used for initializing the mobile application and displaying computation results \cite{mao2017mobile}. Both of them need to be executed at the client. Hence,
\begin{equation}
c_1 = 1 \quad{\rm{and}} \quad c_N = 1.
\label{node_1}
\end{equation}
\subsubsection{Module Execution Completion}
The execution completion time of a module $n$ can be calculated as $\sum_{p=0}^1\sum_{t=1}^Tt\mathbf{x}_{n,t}^p$. Since the mobile application is time-critical, the completion time of the last task module is subject to
\begin{equation}
\sum_{t=1}^Tt\mathbf{x}_{N,t}^0\leq T.
\label{deadline}
\end{equation}
\subsubsection{Task Module Execution Dependency}
\label{dependency}
Due to the interactions among task modules, the completion of executing a task module depends on its parent modules. Specifically, the starting time of executing a module $n$ should be no earlier than the time when each of its parent modules $m$ ($\forall m \in \mathcal{M}(n)$) completes execution, plus the time for possible output data queuing and transmission between the client device and the edge server. Thus, we can model the module execution dependency by
\begin{equation}
\begin{aligned}
\label{dependence}
&\sum_{p=0}^1\sum_{t=1}^Tt\mathbf{x}_{m,t}^p+c_ms_n \frac{Q_{m,n}^u+o_{m,n}}{R_{u}}+s_mc_n\frac{Q_{m,n}^d+o_{m,n}}{R_{d}}\\
&\leq\sum_{p=0}^1\sum_{t=1}^T t\mathbf{x}_{n,t}^p-s_n\frac{\omega_n}{f_s}-c_n\frac{\omega_n}{f_c}, \forall m \in \mathcal{M}(n), \forall n\in\mathcal{N},
\end{aligned}
\end{equation}
where $Q_{m,n}^u$ (resp. $Q_{m,n}^d$) is a random variable representing the existing queue length of the out-going buffer at the client device (resp. edge server) when transmitting $o_{m, n}$ over uncertain wireless channels, $f_c$ and $f_s$ are the CPU frequencies of the client device and virtual machine, respectively, and $R_u$ (resp. $R_d$) is a random variable representing the bit rate for uploading data to (resp. downloading data from) the edge server. For instance, $\frac{Q_{m,n}^u+o_{m,n}}{R_{u}}$ calculates the overall time of transmitting the output data of size $o_{m,n}$ and the existing buffered data of size $Q_{m,n}^u$ to the edge server under a random transmission rate $R_u$. Based on the Shannon-Hartley theorem, the bit rates $R_u$ and $R_d$ can be obtained as $R= B\log_2(1+SNR)$, where $B$ is the bandwidth and $SNR$ is the nondeterministic signal-to-noise ratio. Note that (\ref{dependence}) does not rely on any assumptions on the queuing models (e.g., Poisson distributed task arrival rates and exponentially distributed job processing times assumed in the M/M/1 model \cite{ganesh2004big}); the uncertainties introduced by the queue lengths and bit rates will be addressed by the extreme value theory in the following section.
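As an illustration of the queuing-plus-transmission term $(Q_{m,n}^u+o_{m,n})/R_u$ in (\ref{dependence}), the following Python sketch evaluates it under assumed bandwidth, SNR, queue length, and output size (all values are placeholders):
\begin{verbatim}
import numpy as np

B    = 2e6                    # Hz, channel bandwidth (assumed)
snr  = 10 ** (15 / 10)        # linear SNR from 15 dB (assumed)
R_u  = B * np.log2(1 + snr)   # Shannon-Hartley uplink rate, bit/s

Q_u   = 3.3e3 * 8             # bits already queued (assumed)
o_mn  = 1.2e4                 # bits output from module m to n (assumed)
delay = (Q_u + o_mn) / R_u
print(f"R_u = {R_u/1e6:.2f} Mbit/s, delay = {delay*1e3:.2f} ms")
\end{verbatim}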
\section{Handling the Uncertainties}
\label{handlernd}
Due to the uncertain radio channels and network queue lengths, $Q_{m,n}^u, Q_{m,n}^d, R_{u}$, and $R_{d}$ in (\ref{dependence}) are random variables. To deal with the uncertainties, we propose to apply the extreme value theory and bound the time for data queuing and transmission in the worst case. In so doing, we can draw broad conclusions about computation offloading in MEC systems without relying on a specific radio channel or queuing model (e.g., the block-fading channel, constant or Poisson distributed queue size). The reason is that extreme value theory describes the unusual rather than the usual: even if the network parameters change over time, their values will still be upper-bounded by their extreme values with a large probability.
By explicitly accounting for the uncertainties, our methodology is also superior to classical ones, e.g., \cite{ganesh2004big}, which applies large deviation theory to handle rare events in the network.
Specifically, we consider the Generalized Extreme Value (GEV) distribution instead of its specific cases commonly used by existing works \cite{liu2007application,xiao2012reliable,swapna2013throughput,liu2017latency} when applying extreme value theory to wireless radio channels. This is because those methods need to choose the most appropriate special case of the GEV distribution, and the subsequent statistical inference does not allow any flexibility once the special case is determined.
We start with the case that a client device offloads module $n$ to the edge server, i.e., $s_n=1$, $c_n=0$, and $\mathbf{x}_n^0 = \mathbf{0}$,
and rewrite (\ref{dependence}) as
\begin{equation}
\begin{aligned}
c_m\frac{Q_{m,n}^u+o_{m,n}}{R_{u}} \leq \sum_{t=1}^{T}t\mathbf{x}_{n,t}^1- \sum_{p=0}^{1}\sum_{t=1}^{T} t\mathbf{x}_{m,t}^p-\frac{\omega_n}{f_s},
\end{aligned}
\label{QR}
\end{equation}
$\forall m \in\mathcal{M}(n), \forall n\in\mathcal{N}$.
Define the random variable $V = \frac{Q_{m,n}^u+o_{m,n}}{R_{u}}\in[0,+\infty)$, which is the quotient of two random variables. According to \cite{curtiss1941distribution}, one generic way to derive the distribution of the quotient from the joint distribution of the two involved random variables (i.e., $Q_{m,n}^u+o_{m,n}$ and $R_u$ in our case) is by integration of the form $f_V(v) = \int |r|f_{Q,R}(vr,r)dr$, where $f_{Q,R}(vr,r)$ is the joint PDF of $Q_{m,n}^u+o_{m,n}$ and $R_{u}$, $v$ and $r$ denote one pair of instances of the random variables $V$ and $R_u$, and the product $vr$ represents one instance of the size of the buffered data ($Q_{m,n}^u$) plus the size of $o_{m,n}$.
The common way of handling the randomness is to bound the expected time of the data transmission and queuing by taking the expected value of $V$. However, with no prior knowledge of the distributions of $Q_{m,n}^u$ and $R_{u}$, it is impossible to obtain $f_V(v)$ in practice. Moreover, relying only on the expected value is insufficient for ultra-reliable low-latency communication (URLLC) applications, as it may fail to quantify extreme events in wireless communication systems \cite{liu2017latency,abdel2018ultra}. To address this issue, we consider the $k$th order statistic, i.e., $V_{(k)} = \underset{1\leq i \leq k}{\text{max}}V_i$,
to bound the time of data transmission and queuing in the worst case, where $\{V_1,V_2,\cdots V_k\}$ are samples drawn from $f_V(v)$. Thus, we can rewrite (\ref{QR}) as
\begin{equation*}
\begin{aligned}
c_ms_nV_{(k)} \leq \sum_{t=1}^{T}t\mathbf{x}_{n,t}^1-\sum_{p=0}^{1}\sum_{t=1}^{T} t\mathbf{x}_{m,t}^p-s_n\frac{\omega_n}{f_s},
\end{aligned}
\label{Vn}
\end{equation*}
$\forall m \in\mathcal{M}(n), \forall n\in\mathcal{N}$,
and bound the probability that $V_{(k)}$ exceeds its upper bound by
\begin{equation}
\Pr(c_mV_{(k)} \geq b_{m,n}^{c,s})\leq \epsilon_m, \forall m \in\mathcal{M}(n), \forall n\in\mathcal{N},
\label{pr}
\end{equation}
where $b_{m,n}^{c,s} =\sum_{t=1}^{T}t\mathbf{x}_{n,t}^1-\sum_{p=0}^{1}\sum_{t=1}^{T} t\mathbf{x}_{m,t}^p-\frac{\omega_n}{f_s}$ and $\epsilon_m\ll1$. To characterize the PDF of $V_{(k)}$, we resort to extreme value theory and invoke the Generalized Fisher-Tippett-Gnedenko theorem to bound the worst-case data transmission time and queuing delay in the MEC system.
\begin{theorem}[Generalized Fisher-Tippett-Gnedenko Theorem \cite{coles2001introduction}]
\label{theorem_gev}
Let $V_i, 1\leq i \leq k$, be a sequence of random variables with a common PDF $f_V(v)$, and let $V_{(k)} = \underset{1\leq i \leq k}{\text{max}}V_i$. If there exist two sequences of constants $a_k\in\mathcal{R}^+$ and $b_k\in\mathcal{R}$ such that $\Pr(\frac{V_{(k)}-b_k}{a_k}\leq z)\approx G(z)$ as $k\rightarrow \infty$
for a non-degenerate distribution function $G$, where $\approx$ stands for ``asymptotic to", then $G$ is a member of the GEV family with a cumulative distribution function (CDF)
\begin{equation*}
G(z) =\left\{
\begin{aligned}
& \exp\bigg\{-\Big[1+\xi\Big(\frac{z-\mu}{\sigma}\Big)\Big]^{-1/\xi}\bigg\}, & \text{if $\xi \neq 0$,} \\
& \exp\bigg\{ -\exp\Big[-\frac{z-\mu}{\sigma}\Big] \bigg\}, & \text{if $\xi = 0$,}
\end{aligned}
\right.
\end{equation*}
defined over the set $\{z:1+\xi(z-\mu)/\sigma>0\}$, where $\mu\in \mathcal{R}$, $\xi\in \mathcal{R}$, and $\sigma\in\mathcal{R}^+$ are the location parameter, shape parameter, and scale parameter, respectively.
\end{theorem}
\begin{remark}\label{remark:cdf_remark}
In particular, $G(z)$ becomes the Gumbel distribution, Fr\'echet distribution, and Weibull distribution, when $\xi=0$, $\xi>0$, and $\xi<0$, respectively. The three distributions are all members of GEV family distribution and can be written in the form of $G(z)$ (with different parameters, i.e., $\mu$, $\xi$, and $\sigma$). Theorem \ref{theorem_gev} establishes the CDF in terms of $\Pr(\frac{V_{(k)}-b_k}{a_k}\leq z)$ instead of $\Pr(V_{(k)}\leq z)$. This is because as $k$ increases the corresponding probability density function may degenerate to a point mass (page 46 of \cite{coles2001introduction}). It can be avoided by allowing a linear renormalization of the block maximum $V_{(k)}$, i.e., $\frac{V_{(k)}-b_k}{a_k}$, and the renormalized variable is asymptotically distributed as $G(z)$ as $k$ increases. Note that if the renormalization of a variable follows a GEV family distribution, the variable itself also follows a GEV family distribution with different parameters (page 49 of \cite{coles2001introduction}), i.e., the CDF of $V_{(k)}$ takes the same form as $\Pr(\frac{V_{(k)}-b_k}{a_k}\leq z)$ (i.e., $G(z)$), but with different $\mu$, $\xi$, and $\sigma$.
\end{remark}
Based on Remark \ref{remark:cdf_remark}, to bound the probability (in (\ref{pr})) by $\epsilon_m$, we can bound the corresponding extreme quantile of $V_{(k)}$, i.e., $z_{\epsilon_m}^u$.
Then, we have $\Pr(V_{(k)}\geq z_{\epsilon_m}^u)=\epsilon_m^u$.
Thus, (\ref{pr}) becomes
\begin{equation}
c_mz_{\epsilon_m}^u\leq b_{m,n}^{c,s}, \forall m \in\mathcal{M}(n), \forall n \in\mathcal{N}.
\label{extreme_quantiles}
\end{equation}
Note that
$b_{m,n}^{c,s}$ is determined by the binary variables $\mathbf{x}_{n}^p$, and $z_{\epsilon_m}^u$ is a constant given the probability $\epsilon_m$ and a GEV distribution. By inverting $G$ (which can also describe the CDF of $V_{(k)}$, but with different $\mu$, $\xi$, and $\sigma$), the extreme quantile is
\begin{equation}
z_{\epsilon_m}^u=\left\{
\begin{aligned}
&\mu-\frac{\sigma}{\xi}[1-\{-\log(1-\epsilon_m)\}^{-\xi}], & \text{if $\xi \neq 0$,} \\
&\mu-\sigma\log\{-\log(1-\epsilon_m)\}, & \text{if $\xi = 0$.}
\end{aligned}
\right.
\label{z}
\end{equation}
Parameters of the GEV distribution $\mathbf{\Theta} = (\xi,\sigma,\mu)$ can be inferred via the maximum likelihood estimation (MLE), which will be elaborated in Section \ref{inf}.
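For concreteness, the quantile in (\ref{z}) can be evaluated with a few lines of Python; the GEV parameters below are placeholders for the MLE estimates of Section \ref{inf}.
\begin{verbatim}
import numpy as np

def gev_quantile(eps, xi, sigma, mu):
    """Extreme quantile z_eps obtained by inverting the GEV CDF."""
    if xi != 0:
        return mu - (sigma / xi) * (1 - (-np.log(1 - eps)) ** (-xi))
    return mu - sigma * np.log(-np.log(1 - eps))

print(gev_quantile(0.1, 0.1, 0.05, 0.2))   # 90% quantile, assumed params
\end{verbatim}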
Now, we consider $n$ is executed at the client device, i.e., $c_n=1, s_n=0$ and $\mathbf{x}_n^1=\mathbf{0}$. We can rewrite (\ref{dependence}) as
\begin{equation*}
\begin{aligned}
s_m\frac{Q_{m,n}^d+o_{m,n}}{R_{d}} \leq \sum_{t=1}^{T}t\mathbf{x}_{n,t}^0-\sum_{p=0}^{1}\sum_{t=1}^{T} t\mathbf{x}_{m,t}^p-c_n\frac{\omega_n}{f_c},
\label{sc}
\end{aligned}
\end{equation*}
$\forall m \in\mathcal{M}(n), \forall n\in\mathcal{N}$.
We notice that $\frac{Q_{m,n}^d+o_{m,n}}{R_{d}}$ is also a random variable. Thus, we again apply the $k$th order statistic and the extreme value theory to obtain
\begin{equation}
s_mz_{\epsilon_m}^d \leq b_{m,n}^{s,c}, \forall m \in\mathcal{M}(n), \forall n\in\mathcal{N},
\label{extreme_s}
\end{equation}
where $z_{\epsilon_m}^d$ is the extreme quantile and $b_{m,n}^{s,c} = \sum_{t=1}^{T}t\mathbf{x}_{n,t}^0-\sum_{p=0}^{1}\sum_{t=1}^{T} t\mathbf{x}_{m,t}^p-\frac{\omega_n}{f_c}$ is the nondeterministic upper bound.
\section{Problem Formulation}
\label{PF}
We aim to minimize the client device's energy consumption under a constraint on the completion time of the mobile application. Specifically, the client device's energy consumption for task execution and data transmission is given by
\begin{equation}
\begin{aligned}
E = \sum_nc_n\kappa \omega_nf_c^2 &+ P_{u}\sum_{m\in\mathcal{M}(n)}\sum_n c_ms_n\frac{o_{m,n}}{R_{u}}\\
&+P_{d}\sum_{m\in\mathcal{M}(n)}\sum_ns_mc_n\frac{o_{m,n}}{R_{d}},
\end{aligned}
\label{energu}
\end{equation}
where $\kappa$ is a constant related to the hardware architecture of the client device \cite{mao2017mobile,miettinen2010energy}, and $P_{u}$ and $P_{d}$ are random variables representing the power consumption of the device for transmitting data to and receiving data from the server, respectively. Due to the uncertain radio channels, the client device's energy consumption is nondeterministic as well.
To design a robust computation offloading scheme for transmitting offloaded computation over uncertain radio channels, we first formulate the energy consumption in the worst case:
\begin{equation}
\begin{aligned}
\bar{E} = \sum_nc_n\kappa \omega_nf_c^2 &+ \sum_{m\in\mathcal{M}(n)}\sum_n c_ms_n o_{m,n} J_{(k)}\\
&+\sum_{m\in\mathcal{M}(n)}\sum_ns_mc_n o_{m,n}H_{(k)}
\end{aligned}
\label{energy_worst}
\end{equation}
where $J_{(k)} = \underset{1\leq i \leq k}{\text{max}}J_i$ and $H_{(k)} = \underset{1\leq i \leq k}{\text{max}}H_i$ are the $k$-th order statistics.
$\{J_1,J_2,\cdots, J_k\}$ and $\{H_1,H_2,\cdots, H_k\}$ are the sample sets drawn from $f_{P_u/R_u}(p_u/r_u)$ and $f_{P_d/R_d}(p_d/r_d)$, respectively (by following statistical conventions, $P_u/R_u$ and $P_d/R_d$ denote the random variables, and $p_u/r_u$ and $p_d/r_d$ are the instances).
Then, we define the expected energy consumption of the client device in the worst case as follows:
\begin{equation}
\begin{aligned}
\Psi =\ &\mathbb{E}_{\substack{J_{(k)}\sim GEV(j_{(k)})\\ H_{(k)}\sim GEV(h_{(k)})}}(\bar{E}) = \sum_nc_n\kappa \omega_nf_c^2
+\sum_{m\in\mathcal{M}(n)}\sum_n\\
&\Big( c_ms_no_{m,n}\,\mathbb{E}_{J_{(k)}\sim GEV(j_{(k)})}(J_{(k)})+ s_mc_no_{m,n}\,\mathbb{E}_{H_{(k)}\sim GEV(h_{(k)})}(H_{(k)})\Big),
\end{aligned}
\label{energy_worst_ex}
\end{equation}
where $GEV(j_{(k)})$ and $GEV(h_{(k)})$ are the inferred probability distributions of $J_{(k)}$ and $H_{(k)}$, respectively.
The expected value of a GEV distributed variable can be calculated as \cite{coles2001introduction}
\begin{equation}
\mathbb{E}_{X\sim GEV(x)}(X) =
\begin{cases}
\mu+\sigma(g_1-1)/\xi, & \text{if $\xi \neq 0$, $\xi <1$}, \\
\mu+\sigma\gamma, & \text{if $\xi = 0$},\\
\infty, & \text{if $\xi \geq 1$},
\end{cases}
\label{mean}
\end{equation}
where $g_1 = \Gamma(1-\xi)$ and $\gamma$ is the Euler-Mascheroni constant. For notational simplicity, we write $\theta_u = \mathbb{E}_{J_{(k)}\sim GEV(j_{(k)})}(J_{(k)})$ and $\theta_d = \mathbb{E}_{H_{(k)}\sim GEV(h_{(k)})}(H_{(k)})$.
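A short Python sketch of (\ref{mean}), with placeholder parameters, reads:
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gamma

EULER_GAMMA = 0.5772156649015329

def gev_mean(xi, sigma, mu):
    """Expected value of a GEV variable, following Eq. (mean)."""
    if xi >= 1:
        return np.inf
    if xi == 0:
        return mu + sigma * EULER_GAMMA
    return mu + sigma * (Gamma(1 - xi) - 1) / xi

theta_u = gev_mean(0.1, 0.05, 0.2)   # stands in for E[J_(k)]
print(theta_u)
\end{verbatim}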
Therefore, we formulate the energy efficient computation offloading problem as
\begin{equation}
\begin{aligned}
\text{MP}:\qquad& \underset{\mathbf{x}_{n}^p\in\{0,1\}^T,\ p\in\{0,1\}}{\text{min}} \quad \Psi\\
&\text{\textbf{s. t.}}\quad (\ref{other_node}),(\ref{node_1}),(\ref{deadline}),(\ref{extreme_quantiles}),(\ref{extreme_s}).
\label{formulation}
\end{aligned}
\end{equation}
\section{Introduction}
\label{intro}
\IEEEPARstart{T}{he} breakthroughs of hardware and software technologies have resulted in the surge of various mobile applications, such as real-time online gaming and anywhere anytime online social interactions. Mobile devices (e.g., smartphones and wearable devices) executing these applications are usually computation-constrained and battery-limited \cite{satyanarayanan1996fundamental}, which can hinder the development of mobile applications. To address this issue, academia and industry propose to employ mobile edge computing (MEC) to let mobile devices offload their computations to edge servers with abundant resources in their proximity \cite{mao2017mobile,wang2020thirty}. As a result, mobile devices are able to support computation-intensive and energy-hungry mobile applications in MEC systems.
So far, researchers have conducted extensive works on energy-efficient computation offloading schemes \cite{geng2018energy,wang2016mobile,zhang2016energy,yu2016joint,salinas2016efficient,sardellitti2015joint,kao2017hermes,luo2017energy,jia2014heuristic,zhang2015collaborative,mahmoodi2016optimal,liu2016delay,wu2019efficient}. Specifically, some works are based on the physical-layer design of wireless communications \cite{wang2016mobile,sardellitti2015joint}; some design schemes from the perspective of cross-layer design \cite{barbarossa2013computation,zhang2016energy,yu2016joint}; and some focus on designing offloading schemes for time-critical applications \cite{kao2017hermes,jia2014heuristic,zhang2015collaborative,ji2021economy,mahmoodi2016optimal,liu2016delay}.
Previous studies generally impose strong assumptions on communication channels and network queue sizes; they develop computation offloading schemes based on specific radio channels and network queue models. For example, Zhang et al. \cite{zhang2015collaborative} proposed the one-climb policy to offload computations under independent and identically distributed (i.i.d.) stochastic channels. Geng et al. \cite{geng2018energy} proposed a critical path based solution to allocate computation tasks to a mobile multi-core device and an edge server without considering dynamic network queues.
However, a practical MEC system is always subject to intrinsic uncertainties such as unknown communication channels and network queue sizes. Factors like weather, obstacles, and movement can easily affect radio channels \cite{tse2005fundamentals}.
For example, it has been shown that the varying locations of the local mobile transmitter and receiver can lead to uncertainties in channel law as well as arbitrarily varying communication bit rates and energy consumption \cite{lapidoth1998reliable}. The network queue size is also highly dependent on data volume, network condition, arrival/processing methods, etc., \cite{rappaport1996wireless}. Recent studies have shown that sudden burst of serving requests can overflow the edge servers' resource and cause MEC failure \cite{satria2017recovery,babou2018home}.
Consequently, it is impossible to use approximated mathematical models to accurately capture the underlying dynamics of radio channel and network queue size \cite{ezio2016impact}. Since the strong assumptions on radio channels and network queue size cannot hold in practical MEC systems, simply applying computation offloading schemes developed based on the two assumptions can cause improper parameter deployment at the wireless link layer and even upper layers, which incurs performance degradation. This is because the parameter deployment at the wireless link layer and upper layers, e.g., transmission and reception power, channel allocation, and routing, are highly related to the radio channels and network queue size.
Particularly, in Section \ref{exp:performance_degrade}, we will empirically corroborate the fact that strong assumptions on channel conditions and queuing delay will result in much higher energy consumption at the client device in computation offloading.
In light of this, we are motivated to investigate the computation offloading problem by taking into account the intrinsic uncertainties in practical MEC systems. We model the mobile application as a directed acyclic graph (DAG), and handle the execution dependency by explicitly considering the parent and children sets of each computation module. To efficiently solve the formulated energy-efficient computation offloading problem, we develop a column generation based $\epsilon$-bounded algorithm with a theoretical optimality guarantee. The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We study energy-efficient computation offloading by lifting strong assumptions on communication channels and network queue size imposed by previous studies.
\item We design an energy-efficient computation offloading scheme for executing a mobile application modeled as a DAG in a practical MEC system with uncertainties. Particularly, we employ the extreme value theory to bound the occurrence probability of uncertain events.
\item We formulate the energy-efficient computation offloading problem that is subject to the application execution time and develop an efficient column generation based algorithm to solve it. In particular, we also provide an $\epsilon$-bounded approximate solution with theoretical guarantee on the optimality of the proposed scheme.
\item We propose computation offloading principles for mobile applications with only sequential or parallel module dependency in practical MEC systems with uncertainties.
\item We implement our proposed offloading scheme on an Android platform, and conduct extensive experiments by executing a real-world application, i.e., Smart Diagnosis. Experiment results corroborate that we can reduce the energy consumption of the client device by explicitly considering the intrinsic uncertainties in computation offloading, and also demonstrate that our proposed scheme significantly outperforms other state-of-the-art computation offloading schemes in terms of energy saving.
\end{itemize}
\textbf{Roadmap.} We introduce the related works in Section \ref{RW}. Section \ref{model_mec} describes the considered system model. We present the process of handling the randomness caused by uncertainties of MEC systems in Section \ref{handlernd}, followed by the formulation of the energy-efficient computation offloading problem in Section \ref{PF}. We develop the column generation based algorithm to solve the formulated problem, and provide computation offloading principles for special mobile applications
in Section \ref{CG}. We show our experiment results in Section \ref{ex}, and conclude the paper in Section \ref{con}.
\section{Experimental Results}
\label{ex}
Now, we will describe the experiment setup, discuss parameter estimation for the considered GEV distributions, evaluate the performance of the proposed computation offloading scheme, empirically corroborate the performance degradation caused by simple assumptions on network condition and queues, and compare with the state-of-the-art mechanism. Furthermore, we will also study the scalability of the proposed scheme, and investigate the offloading performance on simulated applications with sequential or parallel dependence.
\subsection{Experimental Setup}
\label{expsetup}
We conduct extensive experiments including both testbed experiments (Section \ref{exp:performance}, \ref {exp:performance_degrade}, and \ref{exp:compare}) and simulations (Section \ref{exp:scale} and \ref{exp:special}) to evaluate the performance of our proposed scheme. Specifically, we establish a MEC architecture in which four servers (provided by the Accipiter System\footnote{\url{https://www.accipitersystems.com/}}), each having $2.4$GHz CPU with 6 cores,
are deployed in a $40\times 70$ m$^2$ floor area in an office building. The servers are directly connected to a switch. The client devices used in the experiments are all Motorola moto $G^4$ with $1.5$GHz CPU.
In Fig. \ref{fig:floorplan}, we provide the floor plan of the environment along with the edge server used in the testbed experiments.
\begin{figure}[h]
\centering
\includegraphics[width=.3 \textwidth]{figures/floorplan.eps}
\caption{Floor plan of the experiment environment. The edge server is located at the second office room (indicated by the red arrow), top right corner.}
\label{fig:floorplan}
\end{figure}
We execute a real-world mobile application, i.e., Smart Diagnosis, visualized in Fig. \ref{fig:tag}. We implement this application on Android devices and encapsulate the proposed scheme as an API that can be called when the application is invoked. We set the completion deadline $T=0.5\times 10^4$ milliseconds (approximately the average execution time of a mobile device running a computer vision application) and
$\Delta=1$ millisecond.
To measure the communication parameters in the MEC system, we first execute Smart Diagnosis locally on several mobile devices, randomly walk in the considered area with varying speeds, make video phone calls, and run programs at these devices to generate random matrices with varying sizes, offload them to the servers to calculate the singular values, and then receive the results. At the same time, we use available Android tools, i.e., tPacketCapture, WiFi SNR, and PETrA \cite{di2017software}, to record the statistics of the queue length, the channel SNR, and the client device's energy consumption, respectively. Specifically, tPacketCapture captures out-going and incoming packets using the VpnService provided by Android OS, and the captured data are saved as PCAP files and further analyzed using Wireshark. WiFi SNR records the varying SNR, from which we calculate the corresponding bit rates using the Shannon-Hartley theorem given the predetermined bandwidth for data transmission. PETrA provides the client device's energy consumption profile in real time when executing applications. In Table \ref{table:samle_statistics}, we show statistics of the out-going and incoming packet sizes (i.e., $S_u$ and $S_d$), uploading and downloading data rates (i.e., $R_u$ and $R_d$), and uploading and downloading power consumption (i.e., $P_u$ and $P_d$).
\begin{table}[h]
\caption{Statistics of collected samples.}
\centering
\begin{tabular}{c|c| c}
\hline
parameters& mean & standard deviation\\
\hline
$S_u$ ($\times 10^5$ bit) & $1.01$ & 0.09\\
$S_d$ ($\times 10^5$ bit) &1.32 & $0.01$ \\
$R_u$ ($\times 10^5$ bit/s) & 4.5 & 0.5\\
$R_d$ ($\times 10^5$ bit/s)&13.1 & 1.1\\
$P_u$ (mW)& 0.14 & 0.04\\
$P_d$ (mW)&0.16 &0.06 \\
\hline
\end{tabular}
\label{table:samle_statistics}
\end{table}
\subsection{Parameter Estimation for GEV Distribution}
\label{inf}
Based on the statistics collected in Section \ref{expsetup}, we calculate the communication parameters, i.e., $z_{\epsilon_m}^u$, $z_{\epsilon_m}^d$, $\theta_u$ and $\theta_d$, which involve the inference of GEV distribution
of $V_{(k)}$, $J_{(k)}$, and $H_{(k)}$. Since all of them follow the same process, we only show the process of inferring $V_{(k)}$. We use tPacketCapture and WiFi SNR to capture a series of independent samples of the real-time queue length and SNR, and then form a set $\{V_1,V_2,\cdots,V_N\}$, where $V_n = \frac{Q_n^u+o_n}{R_u}$, $o_n$ is a generated matrix data size measured in bits, $Q_n^u$ is the queue length when transmitting $o_n$ to the edge server and is determined based on the time tag when the matrix transmission command is invoked and executed, and $R_u$ is the corresponding bit rate. The samples are blocked into $\frac{N}{k}$ sequences of length $k$, for some large value of $k$, generating a series of block maxima, ${V_{(k)}}_1,{V_{(k)}}_2,\cdots,{V_{(k)}}_{\frac{N}{k}}$, to which the GEV distribution for $V_{(k)}$ in the case of transmitting data to the edge server is fitted. Please refer to \cite{coles2001introduction} for the details of using MLE to find the GEV distribution given a series of block maxima. In the testbed experiment, we set $k$ (the order of the statistic) to 1500, because by the end of executing Smart Diagnosis, each device can generate about 1500 random matrices, transmit them to the server, and receive the singular values. The data collection period is repeated 100 times, which means $\frac{N}{k}=100$.
Fig. \ref{fig:gev} shows the histogram of the block maxima (indicated by the blue bars) and the inferred GEV distributions (indicated by the red curves).
Specifically, by fixing $\epsilon_m^u = \epsilon_m^d = 0.1$, according to (\ref{z}), we calculate the $90\%$ quantile of $f_{V_k}(v_k)$ for transmitting data to the edge server and client device as $z_{\epsilon_m}^u=0.349$ and $z_{\epsilon_m}^d = 0.107$, respectively. By applying (\ref{mean}), we calculate $\theta_u = 4.81\times 10^{-4}$ and $\theta_d = 1.11\times 10^{-5}$. The parameters used for executing Smart Diagnosis in the MEC system are summarized in Table \ref{para}.
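The block-maxima construction and MLE fit can be sketched in Python as follows; the samples below are synthetic stand-ins for the recorded traces, and note that scipy parameterizes the GEV shape as $c=-\xi$ relative to Theorem \ref{theorem_gev}.
\begin{verbatim}
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
V = rng.exponential(scale=0.05, size=150_000)   # synthetic V_i samples
k = 1500
block_max = V.reshape(-1, k).max(axis=1)        # N/k = 100 block maxima

c, mu, sigma = genextreme.fit(block_max)        # MLE fit (c = -xi)
z90 = genextreme.ppf(0.9, c, loc=mu, scale=sigma)
print(f"xi = {-c:.3f}, mu = {mu:.3f}, "
      f"sigma = {sigma:.3f}, z_0.9 = {z90:.3f}")
\end{verbatim}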
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .47\columnwidth]{figures/Vu2.eps}
&
\includegraphics[width= .47\columnwidth]{figures/Vd2.eps}
\\
{\small (a) GEV distribution of} &
{\small (b) GEV distribution of}
\\
{\small $V_{(k)}$ during data uploading} &
{\small $V_{(k)}$ during data downloading}
\\
\includegraphics[width= .47\columnwidth]{figures/J2.eps}
&
\includegraphics[width= .47\columnwidth]{figures/H2.eps}
\\
{\small (c) GEV distribution of $J_{(k)}$} &
{\small (d) GEV distribution of $H_{(k)}$}
\end{tabular}
\end{center}
\caption{\label{fig:gev} Inferred GEV distributions.}
\end{figure}
\begin{table*}[h]
\caption{The Summary of Experiment Settings.}
\centering
\begin{tabular}{c|c| c| c| c|c|c| c| c| c}
\hline
$f_c$ & $f_s$ &$\kappa$& $z_{\epsilon_m}^u$ & $z_{\epsilon_m}^d$ & T&$\Delta$&$\theta_u$ & \multicolumn{2}{c}{$\theta_d$} \\ [0.5ex]
\hline
$1.5$GHz & $2.4$GHz &$10^{-24}$mW$/Hz^3$ & $0.349$s &$0.107$s & $0.5\times 10^4$ms&1&$4.81\times 10^{-4}$mW$\cdot$s/bit & \multicolumn{2}{c}{$1.11\times 10^{-5}$mW$\cdot$s/bit} \\
\hline
\end{tabular}
\label{para}
\end{table*}
\subsection{Performance of the Proposed Scheme}
\label{exp:performance}
To evaluate the efficacy of our proposed scheme, we calculate the offloading percentage and the energy consumption of Smart Diagnosis using different computation offloading decisions obtained under varying approximation parameter ($\epsilon$), while fixing the probabilities ($\epsilon_m^u$ and $\epsilon_m^d$) of the extreme case events happening during the period of data uploading and downloading.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .45\columnwidth]{figures/performance_of_ours/q001_2nd.eps}
&
\includegraphics[width= .45\columnwidth]{figures/performance_of_ours/q01_2nd.eps}
\\
{\small (a) Fix $\epsilon_m^d = \epsilon_m^u = 0.01$.} &
{\small (b) Fix $\epsilon_m^d = \epsilon_m^u = 0.1$.}
\\
\includegraphics[width= .45\columnwidth]{figures/performance_of_ours/u2.eps}
&
\includegraphics[width= .45\columnwidth]{figures/performance_of_ours/d2.eps}
\\
{\small (c) Fix $\epsilon = 0.03, \epsilon_m^d = 0.01$.} &
{\small (d) Fix $\epsilon = 0.03, \epsilon_m^u = 0.01$.}
\end{tabular}
\end{center}
\caption{\label{fig:approx} Offloading percentage and energy consumption under various approximation parameter and queueing delay.}
\end{figure}
We show the results in Fig. \ref{fig:approx}, where the blue dotted line represents the offloading percentage and the red dotted line the energy consumption. Specifically, Fig. \ref{fig:approx}(a) and (b) show the results of setting $\epsilon_m^d = \epsilon_m^u = 0.01$ and $\epsilon_m^d = \epsilon_m^u = 0.1$, respectively.\footnote{In practice, we can choose large values of $\epsilon_m^d$ and $\epsilon_m^u$ if there are multiple mobile devices utilizing the computation resource at the edge, because in this case, the probability of experiencing a large queuing size and a low communication rate will be higher.} We see that our scheme is very robust under various $\epsilon$. For example, in Fig. \ref{fig:approx}(a), when $\epsilon_m^d = \epsilon_m^u = 0.01$, the offloading policy is optimal as long as the approximation parameter does not exceed $0.05$. This is because the client device favors an energy-friendly computation offloading policy that offloads more computing nodes, and when the extreme case events happen with a lower probability, the radio channel is able to support more data transfer between the client device and the edge server. Even when increasing the probability that extreme case events occur, we can still obtain an optimal offloading policy for $0\leq \epsilon \leq 0.03$, as shown in Fig. \ref{fig:approx}(b). Based on the obtained optimal offloading decision for Smart Diagnosis when $\epsilon = 0.03, \epsilon_m^d = \epsilon_m^u = 0.1$, the modules named ``SURF Description'', ``SIFT Description'', ``KNN1'', ``KNN2", ``Tokenization'', and ``POS Tagging'' in Fig. \ref{fig:tag} are offloaded.
We further evaluate the impact of queuing delay on the performance by varying the probabilities $\epsilon_m^u$ and $\epsilon_m^d$, and show the results in Fig. \ref{fig:approx}(c) and (d). We can see that in both cases, the offloading percentage decreases and $\Psi$ increases with increasing values of $\epsilon_m^u$ and $\epsilon_m^d$. The reason is that as $\epsilon_m^u$ and $\epsilon_m^d$ increase, the channel conditions and queuing delay cause the data transmission time to increase, and thus the proposed scheme decreases the number of modules to be offloaded.
\subsection{Performance Degradation Caused by Simple Network Condition and Queueing Assumptions}
\label{exp:performance_degrade}
Having demonstrated the efficacy of the proposed scheme, we now empirically corroborate that it will lead to higher energy consumption for the client device if we simply adopt strong assumptions on the network conditions or the queuing delays when deciding which task module to offload. Specifically, we consider the offloading decisions for the Smart Diagnosis obtained under the
following scenarios: (i) a block-fading communication channel, and (ii) constant queuing size of the out-going buffers at the client device and server.
\noindent\textbf{Scenario (i).}
The block-fading channel assumes that the channel condition does not change over the duration of application execution, thus, according to the Shannon-Hartley theorem, the client device can have constant uploading and downloading bit rates (i.e., $R_u$ and $R_d$). As a result, the unknown conditions when making computation offloading decisions are $Q_{m,n}^u$, $Q_{m,n}^d$ (cf. (\ref{dependence})), $P_u$, and $P_d$ (cf. (\ref{energu})). The offloading decision can still be obtained by the developed column generation based algorithm by considering constant $R_u$ and $R_d$ when fitting the required GEV distributions in Section \ref{inf}.
\begin{figure}[!htb]
\centering
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .45\columnwidth]{figures/experiment_w_assumptions/scenario_i_vary_Ru.eps}
&
\includegraphics[width= .45\columnwidth]{figures/experiment_w_assumptions/scenario_i_vary_Rd.eps}
\\
{\small (a) $R_d = 22.7$MB/s, vary $R_u$} &
{\small (b) $R_u = 10.7$MB/s, vary $R_d$}
\end{tabular}
\end{center}
\caption{\label{fig:scenario_i} Energy consumption comparison between offloading decisions obtained with and without the block-fading channel assumption.}
\end{figure}
In Fig. ~\ref{fig:scenario_i} (a) and (b), we present the empirical energy consumption of the client device when executing Smart Diagnosis using the offloading decisions obtained with the block-fading channel assumption (the blue curves). We also plot the energy consumption obtained without assuming block-fading channel (the red lines) for comparison. In this experiment, we fix $\epsilon_m^d = \epsilon_m^u = 0.01$ and $\epsilon=0.05$. More specifically, in Fig. ~\ref{fig:scenario_i} (a), we fix $R_d$ to be the average value computed using SNR measured by WiFiSNR in Section \ref{expsetup}, and vary $R_u$ in the range of $\mathbb{E}[R_u]\pm4.5$MB/s, where $\mathbb{E}[R_u] = 10.7$MB/s is obtained from Section \ref{expsetup}.
Similarly, in Fig.~\ref{fig:scenario_i}(b), we fix $R_u$ to be the average value obtained in Section \ref{expsetup}, and vary $R_d$ in the range of $\mathbb{E}[R_d]\pm4.5$MB/s, where $\mathbb{E}[R_d] = 22.7$MB/s is also obtained from Section \ref{expsetup}. Clearly, by considering the uncertainty in downloading and uploading bit rates, i.e., without the strong assumptions on the communication channel, the offloading decisions achieved by our proposed scheme always lead to much lower empirical energy consumption. The reason is that under the block-fading channel assumption, when $R_u$ or $R_d$ is low, the offloading decision assigns more task modules to the client device, which increases its energy consumption. On the other hand, when $R_u$ or $R_d$ is high, the offloading decision assigns more task modules to the server. However, in practice, it may take the client device more time to transmit or receive the data if the data rates drop in the course of the application execution, because we carry the mobile device and walk at various speeds during all the experiments. A similar phenomenon has also been shown in \cite{lapidoth1998reliable}, i.e., the varying locations of the local mobile transmitter and receiver can lead to uncertainties in the channel law as well as arbitrarily varying communication bit rates.
\noindent\textbf{Scenario (ii).} In this case, the unknown conditions in our computation offloading scheme are $R_u$, $R_d$, $P_u$, and $P_d$. The offloading decision is obtained using the column generation based algorithm by considering the constant $Q^u_{m,n}$ and $Q^d_{m,n}$.
\begin{figure}[!htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .45\columnwidth]{figures/experiment_w_assumptions/vary_Qu.eps}
&
\includegraphics[width= .45\columnwidth]{figures/experiment_w_assumptions/vary_Qd.eps}
\\
{\small (a) $Q^d_{m,n} = 3.3$KB, vary $Q^u_{m,n}$} &
{\small (b) $Q^u_{m,n} = 5.4$KB, vary $Q^d_{m,n}$}
\end{tabular}
\end{center}
\caption{\label{fig:scenario_ii} Energy consumption comparison between offloading decisions obtained with and without the constant queuing size assumption.}
\end{figure}
In Fig. ~\ref{fig:scenario_ii} (a) and (b), we present the empirical energy consumption of the client device when executing Smart Diagnosis using the offloading decisions obtained with the constant queuing size assumption (the blue curves). Additionally, the energy consumption (the red lines) obtained without such assumption is shown for comparison. In this experiment, we still fix $\epsilon_m^d = \epsilon_m^u = 0.01$ and $\epsilon=0.05$. Particularly, in Fig. ~\ref{fig:scenario_ii} (a), we fix $Q^d_{m,n}$ to be the average value measured by tPacketCapture in Section \ref{expsetup}, and vary $Q^u_{m,n}$ in the range of $\mathbb{E}[Q^u_{m,n}]\pm1.5$KB, where $\mathbb{E}[Q^u_{m,n}] = 3.3$KB is obtained from Section \ref{expsetup}.
Similarly, in Fig.~\ref{fig:scenario_ii}(b), we fix $Q^u_{m,n}$ to be the average value obtained in Section \ref{expsetup}, and vary $Q^d_{m,n}$ in the range of $\mathbb{E}[Q^d_{m,n}]\pm1.5$KB, where $\mathbb{E}[Q^d_{m,n}] = 5.4$KB is also obtained from Section \ref{expsetup}. Clearly, by considering the uncertainty of the queuing size, i.e., without the constant queuing size assumption, the offloading decisions achieved by the proposed scheme always lead to much lower empirical energy consumption. Similar analysis also applies to this scenario: under the assumption of constant queuing size at the out-going buffers of the client device and the server, when $Q^u_{m,n}$ or $Q^d_{m,n}$ is low, the offloading decision assigns more task modules to the server to save local energy consumption, but it may take the client device more time to transmit or receive the data, which incurs extra communication energy, if the queuing size increases during the application execution. The reason is that a sudden burst of serving requests can overflow the edge servers' resources and may even cause MEC failure \cite{satria2017recovery,babou2018home}. On the other hand, if $Q^u_{m,n}$ or $Q^d_{m,n}$ is high, the offloading decision assigns more task modules to the local device and increases its energy consumption.
\subsection{Performance Improvement over Existing Schemes}
\label{exp:compare}
We compare our proposed algorithm with two state-of-the-art offloading approaches, i.e., Hermes \cite{kao2017hermes} and JSCO \cite{mahmoodi2016optimal} discussed in Section \ref{RW}.
The experiments are conducted by varying $\epsilon$ when $\epsilon_m^d = \epsilon_m^u = 0.01$, and the offloading percentage and energy consumption are calculated for all algorithms. The results are shown in Fig. \ref{fig:cmp_soa}.
We see that our scheme always leads to the lowest energy consumption by offloading more computation modules as long as the approximation parameter $\epsilon$ is less than $0.05$. Particularly, when $\epsilon\leq 0.05$, our scheme can save more than $50\%$ energy for the local device, as the energy consumption of executing Smart Diagnosis using Hermes, JSCO, and our scheme is about $0.03$ J, $0.028$ J, and $0.014$ J, respectively. The reason is that JSCO fails to consider uncertainties in dynamic radio channels with queueing delay, and Hermes introduces extra energy consumption due to the communication overhead caused by continuously probing the channel.
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{figures/compare_with_soa/q001_w_hermes_jsco_2nd2.eps}
\caption{Performance comparison with state-of-the-art offloading approaches.}
\label{fig:cmp_soa}
\end{figure}
\subsection{The Scalability of The Proposed Scheme}\label{exp:scale}
To explore the scalability of the proposed scheme, we measure the number of offloaded nodes when running simulated DAGs of different sizes. Specifically, we simulate DAGs with a layer-by-layer structure, where a random number of nodes is generated for each layer and edges are added between nodes across layers with a given probability $p$. In this experiment, the computation workload and output data size of each module in the simulated DAGs all follow half-normal distributions (nonnegative support) with different parameters. Besides, the communication parameters used in this simulation experiment are the same as those in Table \ref{para}.
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{figures/varyN4_2nd_3.eps}
\caption{Evaluation of the scalability of the proposed computation offloading algorithm.}
\label{fig:compare_scale}
\end{figure}
We vary the number of nodes in a DAG from $10^1$ to $10^3$ and consider four connecting probabilities, i.e., $p\in\{0.05, 0.15, 0.2, 0.25\}$. In the left panel of Fig. \ref{fig:compare_scale}, we plot the number of offloaded nodes versus the size of DAG when having $\epsilon = 0.03$ and $\epsilon_m^u = \epsilon_m^d = 0.01$. We see that by fixing a DAG size, i.e., $N=10^2$, the number of offloaded modules decreases with the increasing $p$. This is because the denser the DAG, the more dependency among modules, and offloading will cost more energy in dense DAGs than that in sparse ones.
Moreover, we also observe that by fixing a connecting probability, i.e., $p = 0.05$, the number of offloaded nodes grows linearly with the size of DAG. Since the proposed scheme identifies one column at each iteration, it scales linearly with the graph size. This indicates that our scheme is computationally efficient. In the right panel of Fig. \ref{fig:compare_scale}, we also plot the time cost
of running our proposed algorithm on these simulated DAGs. We can see that the computation time also scales linearly with the graph size, which suggests that our proposed algorithm can work in real-world MEC systems.
\subsection{Offloading Applications with Special Dependency}
\label{exp:special}
In this section, to corroborate the findings in Theorem \ref{seq_dep} and \ref{para_dep}, we also simulate special applications with only sequential or parallel dependency.
Specifically, we consider an application composed of $14$ task modules. The computation workload and output data size for each task module are sampled from Poisson distributions with $\lambda_1 = 10^6$ Hz and $\lambda_2 = 12$ kb, respectively.
Fig. \ref{fig:seq_e0} shows the offloading decision for task modules with only sequential dependency. In particular, we compare the offloading decisions under two approximation parameters, i.e., $\epsilon = 0$ and $\epsilon = 0.03$, and for each $\epsilon$, we consider the effect of $\epsilon_m^d$ and $\epsilon_m^u$ on the offloading policy. In Fig. \ref{fig:seq_e0}, we use the blue dotted line to show the offloading policy under $\epsilon_m^d = \epsilon_m^u=0.01$, use the red circled line to show the offloading policy under $\epsilon_m^d = \epsilon_m^u=0.1$, and use $1$ or $0$ to indicate whether a particular module is offloaded or not. We can see that in both cases, the offloading policies for the sequential DAG offload only one subsequence. Besides, when the extreme events happen with low probabilities, the offloading policy offloads a longer subsequence. This is because the better the radio conditions, the fewer task modules will be executed locally.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .45\columnwidth]{figures/1_seq.eps}&
\includegraphics[width= .45\columnwidth]{figures/seq003.eps}
\\
{\small (a) $\epsilon=0$}
&
{\small (b) $\epsilon=0.03$}
\end{tabular}
\end{center}
\caption{\label{fig:seq_e0} Offloading decision for sequential dependency DAG.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= 0.45\columnwidth]{figures/0_para.eps}&
\includegraphics[width= .45\columnwidth]{figures/003_para.eps}
\\
{\small (a) $\epsilon=0$, $\epsilon_m^d=\epsilon_m^u=0.01$}
&
{\small (b) $\epsilon=0.03$, $\epsilon_m^d=\epsilon_m^u=0.1$}
\end{tabular}
\end{center}
\caption{\label{fig:para_ex} Offloading decision for parallel dependency DAG.}
\end{figure}
Fig. \ref{fig:para_ex} shows the offloading decision for tasks with only parallel dependency. We compare the offloading decisions under two sets of approximation parameters and probabilities, i.e., $\epsilon=0$, $\epsilon_m^d=\epsilon_m^u=0.01$ and $\epsilon=0.03$, $\epsilon_m^d=\epsilon_m^u=0.1$. In Fig. \ref{fig:para_ex}, the blue dotted lines indicate the offloading policy decided by running our column generation based algorithm, and the red circled lines indicate the policy decided by Theorem \ref{para_dep}. We can see that in Fig. \ref{fig:para_ex}(a), the obtained offloading policies are identical for both methods, but in Fig. \ref{fig:para_ex}(b), the policy decided by Theorem \ref{para_dep} offloads more nodes than that decided by the algorithm. The reason is that when the approximation parameter is large, e.g., $\epsilon=0.03$ in Fig. \ref{fig:para_ex}(b), the solution to the proposed algorithm is an $\epsilon$-bounded approximate solution, while the solution given by Theorem \ref{para_dep} is the optimal offloading decision. This implies that when applications have only parallel dependency, we can directly obtain the optimal offloading policy by evaluating the energy consumption and expected worst-case energy consumption for data transmission of each task module.
Additionally, we also conduct experiments to offload the simulated DAGs with special dependency structures in the uncertain MEC system described in Section \ref{expsetup}. The communication parameters are still set as listed in Table \ref{para}. In Table \ref{table:special_compare}, we compare the energy consumption achieved by our scheme, under different choices of $\epsilon$, $\epsilon_m^d$, and $\epsilon_m^u$, with Hermes and JSCO. Clearly, our scheme still achieves the lowest local energy consumption, which suggests again that by explicitly addressing the uncertainties involved in computation offloading using the extreme value theory, we can make offloading decisions that are close to the optimal ones (i.e., those obtained when $\epsilon=0$).
\begin{table}[h]
\caption{Comparison of energy consumption caused by our algorithm and other schemes on DAG with Special Dependency.}
\centering
\begin{tabular}{c|c| c| c| c }
\hline
& \multicolumn{2}{c|}{ sequential DAG} & \multicolumn{2}{c}{ parallel DAG} \\ [0.5ex]
\cline{2-5}
& $\epsilon=0$ & $\epsilon=0.03$ & $\epsilon=0$ & $\epsilon=0.03$\\
\hline
$\epsilon_m^d=\epsilon_m^u=0.01$ & 0.011 J & 0.014 J & 0.017 J & 0.018 J \\
\hline
$\epsilon_m^d=\epsilon_m^u=0.1$ & 0.016 J & 0.018 J & 0.020 J & 0.022 J \\
\hline
& \multicolumn{2}{c|}{0.019 J by Hermes} & \multicolumn{2}{c}{ 0.022 J by Hermes} \\
& \multicolumn{2}{c|}{0.020 J by JSCO} & \multicolumn{2}{c}{0.024 J by JSCO} \\
\hline
\end{tabular}
\label{table:special_compare}
\end{table}
\section{Conclusions}
\label{con}
In this paper, we investigate the problem of energy-efficient computation offloading in practical MEC systems. In contrast to existing works, which usually impose strong assumptions on communication channels and network queues, we remove these assumptions and address the uncertainties in the MEC system. First, we handle the uncertainties and bound the occurrence probability of extreme events using extreme value theory. Then, we formulate the worst-case expected energy consumption for executing time-sensitive applications. Next, since the formulated optimization problem is a quadratically constrained binary quadratic program, we develop an $\epsilon$-bounded approximation algorithm based on the column generation technique to solve it. In addition, we also tailor the proposed computation offloading scheme to accommodate special applications with only sequential or parallel task module dependency. We implement the proposed scheme on the Android platform and conduct extensive experiments by executing a real-world mobile application. Experiment results corroborate the fact that explicitly considering the uncertainties in communication channels and network queues leads to lower energy consumption, and show that our scheme outperforms state-of-the-art schemes in terms of energy saving.
\section{A Column Generation Based Efficient $\epsilon$-bounded Approximation Algorithm}
\label{CG}
The formulated optimization problem in (\ref{formulation}) is a quadratically constrained binary quadratic program. In particular, the decision variables are $2N$ 1-sparse binary vectors in $\{0,1\}^T$, resulting in a total of $2^{2NT}$ degrees of freedom, which makes the formulated problem computationally intractable. To address this issue, we propose an $\epsilon$-bounded approximation algorithm based on the column generation (CG) technique, which is commonly used to solve linear or nonlinear programs iteratively \cite{bertsimas1997introduction} and is widely applied in wireless communication \cite{li2014energy,li2014optimal}. In particular, the proposed scheme finds an $\epsilon$-bounded approximate offloading policy, and the optimal policy when $\epsilon=0$.
Specifically, we consider MP in (\ref{formulation}) as the master problem, and solve it by starting with a restricted master problem (RMP), which minimizes $\Psi$ given only a fraction of the columns. A new column is added to RMP only if a price problem (PP) determines that it is profitable for further reducing $\Psi$. More specifically, at each iteration, PP first determines whether any column ($\mathbf{x}_n^p$) not yet in RMP can lead to a negative reduced cost, and the one with the most negative reduced cost is then added to RMP. The iteration terminates at the optimal solution or satisfactorily close to it.
\subsection{Restricted Master Problem}
The general RMP that minimizes $\Psi$ by considering only a subset of all decision columns is
\begin{equation}
\begin{aligned}
\text{RMP}:\qquad& \underset{\mathbf{x}_n^p\in\mathcal{S}_I}{\text{min}}\quad\widetilde{\Psi}\\
&\ \ \text{\textbf{s. t.}}\quad (\ref{other_node}), (\ref{node_1}), (\ref{deadline}), (\ref{extreme_quantiles}), (\ref{extreme_s}),
\label{rmp}
\end{aligned}
\end{equation}
where $\mathcal{S}_I$ is a subset of the full column set considered in MP, denoted as $\mathcal{S} = \{\mathbf{x}_1^0\}\cup\{\mathbf{x}_N^0\}\cup\big\{\mathbf{x}_n^p|\mathbf{x}_n^p\in\{0,1\}^T, n\in\{2,\cdots,N-1\}, p\in\{0,1\}\big\}$, and $\widetilde{\Psi}$ is the expected worst-case energy consumption calculated with respect to $\mathcal{S}_I$. At the same time, we set all the elements in the set $\mathcal{S}_I^c = \mathcal{S}\backslash\mathcal{S}_I$ (the complement of $\mathcal{S}_I$) to zero vectors. Given a feasible offloading policy determined by $\big\{\mathbf{x}_n^p, n\in\mathcal{N}, p\in\{0,1\}\big\}$, $\widetilde{\Psi}$ becomes deterministic. Thus, RMP reduces to a feasibility-checking problem: given an offloading policy, we need to check whether there exists at least one feasible solution in the set defined by the constraints in (\ref{rmp}).
Without loss of generality (w.l.o.g.), we can start with a special RMP in which all computation task modules are assigned to the client device ($c_n=1, s_n=0$, $\forall n\in\mathcal{N}$). As a result, $\mathcal{S}_I = \big\{\mathbf{x}_n^0|\forall n\in\mathcal{N}\big\}$ and $\widetilde{\Psi} = \sum_n\kappa \omega_nf_c^2$. Then, one feasible solution to this special RMP is $\big\{\mathbf{x}_{n,t}^0=1\ {\mathrm{if}}\ t = \sum_{i=1}^n\ceil{\frac{\omega_i}{f_c}},\ \mathbf{x}_n^1=\mathbf{0}\,|\,n\in\mathcal{N}\big\}$.
Since RMP involves only a subset $\mathcal{S}_I\subset\mathcal{S}$ of the columns used by MP, $\widetilde{\Psi}$ serves as an upper bound on the optimal result of MP \cite{bertsimas1997introduction}. By introducing more columns into RMP, this upper bound may be lowered further. Therefore, we need to determine which column is the most profitable to MP in terms of negative reduced cost, and when the optimal result of RMP is exactly the same as or satisfactorily close to the optimal result of MP. This is achieved by solving the corresponding price problem, described in the following.
\subsection{Adding A New Column to RMP via PP}
In each iteration, when RMP is solved, we need to check whether adding any new column can lead to a better solution to MP. In particular, for each $\mathbf{x}_n^1$ in $\mathcal{S}_I^c$ (i.e., $\mathcal{S}\backslash\mathcal{S}_I$),
we need to examine whether it has a negative reduced cost. The reduced cost $\zeta_n$ for a column $\mathbf{x}_n^1\in\mathcal{S}_I^c$ is, according to \cite{bertsimas1997introduction},
\begin{equation}
\begin{aligned}
\zeta_n=&\sum_{m\in\mathcal{M}(n)} c_ms_no_{m,n}\theta_u+\sum_{k\in\mathcal{C}(n)} s_nc_ko_{n,k} \theta_d\\
&-\sum_{m\in\mathcal{M}(n)}\pi_{m,n} b_{m,n}^{c,s}-\sum_{k\in\mathcal{C}(n)}\pi_{n,k} b_{n,k}^{s,c},
\label{pp}
\end{aligned}
\end{equation}
where $\pi_{m,n}$'s and $\pi_{n,k}$'s are the Lagrangian dual optimal solution corresponding to constraints (\ref{extreme_quantiles}) and (\ref{extreme_s}) in (\ref{rmp}).
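To make the pricing step concrete, the following Python sketch evaluates the reduced cost $\zeta_n$ of (\ref{pp}) for a candidate column; all function and variable names are illustrative assumptions rather than part of a released implementation.
\begin{verbatim}
def reduced_cost(n, parents, children, c, s, o,
                 theta_u, theta_d, pi_up, pi_dn, b_up, b_dn):
    # Sketch of Eq. (pp): parents = M(n), children = C(n); c, s are the
    # execution-location indicators; o[(m, n)] is the data volume on edge
    # (m, n); pi_up/pi_dn are the dual optima of RMP; b_up/b_dn are the
    # auxiliary transmission terms. All names are illustrative.
    cost = 0.0
    for m in parents:   # uplink terms for data received from parents
        cost += c[m] * s[n] * o[(m, n)] * theta_u \
                - pi_up[(m, n)] * b_up[(m, n)]
    for k in children:  # downlink terms for data sent to children
        cost += s[n] * c[k] * o[(n, k)] * theta_d \
                - pi_dn[(n, k)] * b_dn[(n, k)]
    return cost
\end{verbatim}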
Then, we need to find the column that can produce the most negative reduced cost. Hence, the column to be generated and added to RMP is obtained by solving the price problem:
\begin{equation}
\begin{aligned}
\text{PP}:\qquad& \underset{\mathbf{x}_n^1\in\mathcal{S}_I^c}{\text{min}}\quad\zeta_n\\
&\ \ \text{\textbf{s. t.}}\ \ \ c_ms_nz_{\epsilon_m}^u\leq b_{m,n}^{c,s}, \forall m \in\mathcal{M}(n),\\
&\ \quad\quad\ \ \ s_nc_kz_{\epsilon_m}^d \leq b_{n,k}^{s,c},\ \ \forall k \in\mathcal{C}(n).
\label{pp2}
\end{aligned}
\end{equation}
In PP, the new column $\mathbf{x}_n^1$ is embedded in $b_{m,n}^{c,s}$ and $b_{n,k}^{s,c}$, because all execution-location indicators $c_m$'s, $s_n$'s, and $c_k$'s have been determined by solving RMP. Thus, PP is a binary integer program (BIP). Denote by $r^*$ the optimal solution to PP. If $r^*\geq 0$, then no column can generate a negative reduced cost, and the current solution to RMP is the optimal solution to MP. Otherwise, we add to RMP the column with the most negative reduced cost identified by (\ref{pp2}).
We first discuss the feasible solution to RMP with the added new column and solve PP in the next subsection. Recall that the objective function in (\ref{rmp}) becomes deterministic under a new offloading policy. The new feasible solution can be obtained by setting
\begin{equation}
\mathbf{x}_g^0 = \mathbf{0}\quad{\rm{and}}\quad \mathbf{x}_{g,t}^1 = 1,\ \exists_{=1} t\in\{t_{min}, \cdots, t_{max}\},
\label{rmp_f}
\end{equation}
where $g$ is the node index of the new column identified by (\ref{pp2}), $\exists_{=1}$ means ``exists and choose one'', $t_{min} = \underset{m\in\mathcal{M}(g)}{\text{max}}\sum_{p=0}^{1}\sum_{t=1}^{T}t\mathbf{x}_{m,t}^p+c_ms_go_{m,g}\theta_u+\ceil{\frac{\omega_g}{f_s}}$ and $t_{max} = \underset{k\in\mathcal{C}(g)}{\text{min}}\sum_{p=0}^{1}\sum_{t=1}^{T}t\mathbf{x}_{k,t}^p-s_gc_ko_{g,k}\theta_d-\ceil{\frac{\omega_k}{f_c}}$ are constants indicating the range of the execution completion time of node $g$ at the server; a sketch of this computation is given below. $t_{min}\leq t_{max}$ is guaranteed when solving PP.
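As a sketch, the completion-time window $[t_{min}, t_{max}]$ of (\ref{rmp_f}) can be computed as follows, assuming the maximization and minimization range over the parent- and child-dependent terms, respectively; \texttt{slot(n)} is an assumed helper returning $\sum_{p}\sum_{t}t\mathbf{x}_{n,t}^p$, and all names are illustrative.
\begin{verbatim}
import math

def completion_window(g, parents, children, slot, c, s, o, w,
                      f_s, f_c, theta_u, theta_d):
    # Sketch of Eq. (rmp_f); slot(n) is the completion slot of node n
    # under the current RMP solution. Names are illustrative only.
    t_min = max(slot(m) + c[m] * s[g] * o[(m, g)] * theta_u
                for m in parents) + math.ceil(w[g] / f_s)
    t_max = min(slot(k) - s[g] * c[k] * o[(g, k)] * theta_d
                - math.ceil(w[k] / f_c) for k in children)
    return t_min, t_max
\end{verbatim}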
\subsection{Solving PP}
\label{PP_solve}
To solve PP, we need to evaluate the reduced cost of every $\mathbf{x}_n^1\in \mathcal{S}_I^c$ by solving $N-2$ BIPs and checking their signs. Yet, solving multiple BIPs is time-consuming when a DAG has hundreds or thousands of nodes. To circumvent this problem, we formulate a new price problem (NPP):
\begin{equation}
\begin{aligned}
\text{NPP}:\quad& \underset{\lambda_n,\ \mathbf{x}_n^1,\ n\in\mathcal{S}_I^c}{\text{min}}\ \ \ \ \ r =\sum_{n\in\mathcal{S}_I^c}\lambda_n\zeta_n\\
&\text{\textbf{s. t.}}\quad\ c_ms_nz_{\epsilon_m}^u\leq \sum_{n\in\mathcal{S}_I^c}\lambda_nb_{m,n}^{c,s}, \forall m \in\mathcal{M}(n),\\
& \quad \quad\quad\ s_nc_kz_{\epsilon_m}^d \leq \sum_{n\in\mathcal{S}_I^c}\lambda_nb_{n,k}^{s,c},\ \ \forall k \in\mathcal{C}(n),\\
& \quad \qquad\sum_{n\in\mathcal{S}_I^c}\lambda_n = 1,\ \ \ \lambda_n\in\{0,1\},
\label{pp3}
\end{aligned}
\end{equation}
where the decision variables are $\lambda_n$'s and $\mathbf{x}_n^1$'s (absorbed in $b_{m,n}^{c,s}$, $b_{n,k}^{s,c}$ and $\zeta_n$). (\ref{pp3}) is equivalent to (\ref{pp2}) because in (\ref{pp3}) only a single $\lambda_n$ can be activated (equal to $1$) in both the objective function and the constraints, and the number of $\lambda_n$'s equals the cardinality of $\mathcal{S}_I^c$ in (\ref{pp2}). Although the NPP formulated in (\ref{pp3}) is a quadratic integer program with quadratic constraints, which is generally more complicated than (\ref{pp2}), we can solve it efficiently by decomposing (\ref{pp3}) into two separate problems: a column selection (CS) problem with fixed $b_{m,n}^{c,s}$ and $b_{n,k}^{s,c}$, and a completion time decision (TD) problem with the selected columns.
W.l.o.g., we can initially set $b_{m,n}^{c,s} = z_{\epsilon_m}^u$ and $b_{n,k}^{s,c} = z_{\epsilon_m}^d$, and optimize (\ref{pp3}) over $\lambda_n$ as
\begin{equation}
\begin{aligned}
\text{CS}:\qquad& \underset{\lambda_n,\ n\in\mathcal{S}_I^c}{\text{min}}\ \ \ \ \ r =\sum_{n\in\mathcal{S}_I^c}\lambda_n\zeta_n\\
&\text{\textbf{s. t.}}\quad\ \sum_{n\in\mathcal{S}_I^c}\lambda_n = 1,\ \ \ \lambda_n\in\{0,1\}.
\label{CS}
\end{aligned}
\end{equation}
Then (\ref{pp3}) reduces to a trivial problem, i.e., a $0$-$1$ knapsack problem with maximum weight capacity $W=1$. The solution to CS is $\lambda_l=1$, for $l = \underset{n}{\text{arg\ min}}\ \zeta_n$.
Next, we fix $\lambda_n$'s, and formulate the TD problem as follows:
\begin{equation}
\begin{aligned}
\text{TD}:\qquad& \underset{\mathbf{x}_l^1,\ l\in\mathcal{S}_I^c}{\text{min}}\ \ \ \ \ r =\lambda_l\zeta_l\\
& \text{\textbf{s. t.}}\qquad c_ms_lz_{\epsilon_m}^u\leq \lambda_lb_{m,l}^{c,s}, \forall m \in\mathcal{M}(l),\\
& \qquad\quad\ \ s_lc_kz_{\epsilon_m}^d \leq \lambda_lb_{l,k}^{s,c},\ \ \forall k \in\mathcal{C}(l),\\
&\qquad\quad\ \ \lambda_l=1.
\label{TD}
\end{aligned}
\end{equation}
TD is a BIP problem with the only decision variable $\mathbf{x}_l^1$ (absorbed in $b_{m,l}^{c,s}$, $b_{l,k}^{s,c}$ and $\zeta_l$).
By relaxing $\mathbf{x}_l^1$ as $0\leq \mathbf{x}_{l,t}^1\leq 1$, TD can be relaxed to a linear program that can be solved efficiently by a polynomial-time interior-point algorithm \cite{bertsimas1997introduction}. Then, we set the element with the largest value in the solution vector to $\mathbf{x}_{l,t}^1=1$, and all other elements to $\mathbf{x}_{l,t'}^1=0$ ($t'\neq t$); this relax-and-round step is sketched below.
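The relax-and-round step can be sketched in Python with \texttt{scipy.optimize.linprog}; the inputs \texttt{cost}, \texttt{A\_ub}, and \texttt{b\_ub} are assumed to encode the objective and the window constraints of TD, so the snippet illustrates the mechanics rather than a definitive implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_relaxed_td(cost, A_ub, b_ub):
    # Relax x in {0,1}^T to [0,1]^T with sum(x) = 1, solve the LP,
    # then round the largest entry to 1 (all other entries to 0).
    T = len(cost)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, T)), b_eq=np.array([1.0]),
                  bounds=[(0.0, 1.0)] * T, method="highs")
    x_int = np.zeros(T)
    x_int[int(np.argmax(res.x))] = 1.0
    return x_int, res.fun  # res.fun lower-bounds the optimal TD value
\end{verbatim}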
We use $\underline{r}^*$ to denote the solution of NPP achieved by solving CS and relaxed TD iteratively, and summarize the above process of solving NPP in Algorithm~\ref{solvepp}.
\begin{algorithm}
\caption{Solving NPP}
\begin{flushleft}
\textbf{Input:} Dual solutions $\pi_{m,n}$'s and $\pi_{n,k}$'s of RMP, $\mathcal{S}_I^c$, maximum iteration number $max\_iter\_num$, $iter\_num=0$\\
\textbf{Output:} $\mathbf{x}_g^1\in \mathcal{S}_I^c$ (column to be added to RMP)
\end{flushleft}
\begin{algorithmic}[1]
\State Initialize $b_{m,n}^{c,s}$'s as $z_{\epsilon_m}^u$ and $b_{n,k}^{s,c}$'s as $z_{\epsilon_m}^d$;
\While {$iter\_num<max\_iter\_num$ and $\underline{r}^*$ changes}
\State {Given $b_{m,n}^{c,s}$'s and $b_{n,k}^{s,c}$'s, solve CS by setting $\lambda_l=1$, for $l = \underset{n}{\text{argmin}}\ \zeta_n$};
\State Given $\lambda_l=1$, relax $0\leq \mathbf{x}_{l,t}^1\leq 1$ and solve the relaxed TD by polynomial interior algorithm;
\State Set the maximum $\mathbf{x}_{l,t}^1$ to $1$ and other $\mathbf{x}_{l,t}^1$'s as $0$;
\State Evaluate $\underline{r}^*$;
\State $iter\_num = iter\_num+1$;
\EndWhile
\State Set $g=l$ and return $\mathbf{x}_g^1$.
\end{algorithmic}
\label{solvepp}
\end{algorithm}
\subsection{$\epsilon$-Bounded Approximate Solution}
The solver developed in Algorithm~\ref{solvepp} does not necessarily achieve the optimal solution to PP, i.e., $r^*$. However, when the result of NPP, i.e., $\underline{r}^*$, is larger than or equal to $0$, i.e., $r^*\geq \underline{r}^*\geq 0$, MP can still be solved optimally. Even if $\underline{r}^*<0$, we can still find an $\epsilon$-bounded approximate solution to MP. First, we define the $\epsilon$-bounded approximate solution as follows.
\begin{mydef}
The $\epsilon$-Bounded Approximate Solution. Let $0\leq \epsilon <1$ be a predefined parameter and $\Psi^*$ be the optimal result. Then a solution $\Psi$ is called an $\epsilon$-bounded approximate solution if it satisfies $\Psi^*\leq \Psi \leq (1+\epsilon)\Psi^*$.
\label{def}
\end{mydef}
We have the following lemma on determining the $\epsilon$-bounded approximate offloading policy.
\begin{lemma}\label{lemma:condition}
Denote by $\Psi_l$ and $\Psi_u$ the lower and upper bounds on the optimal result $\Psi^*$ of MP, respectively. Then, the $\epsilon$-bounded approximate solution can be obtained when no new column ($\mathbf{x}_n^1$) can be found by solving PP, or the iteration stops at $\underline{r}^*\geq 0$, or $\frac{\Psi_l}{\Psi_u}\geq \frac{1}{1+\epsilon}$.
\end{lemma}
\begin{IEEEproof}[Proof]
When $\frac{\Psi_l}{\Psi_u}\geq \frac{1}{1+\epsilon}$, we have $\Psi_u\leq (1+\epsilon) \Psi_l\leq (1+\epsilon)\Psi^*$. Then, given any result $\Psi$ between the lower and upper bounds, i.e., $\Psi_l\leq \Psi \leq \Psi_u$, we can have $\Psi^*\leq\Psi\leq\Psi_u\leq(1+\epsilon)\Psi^*$, which is the $\epsilon$-bounded approximate solution by Definition \ref{def}. Besides, when $\underline{r}^*\geq0$ or no new column can be found by solving PP, as discussed before, the obtained solution is the optimal solution and hence an $\epsilon$-bounded approximate solution as well ($\epsilon=0$).
\end{IEEEproof}
The optimal result of RMP at each iteration serves as an upper bound on that of MP, i.e., $\Psi_u$. A lower bound can be obtained by setting $\Psi_l =\Psi_u+\mathcal{K}r^*\leq \Psi^*$ according to \cite{bertsimas1997introduction}, where $r^*$ is the optimal solution to PP and $\mathcal{K}\geq \sum_{n}s_n$ holds for the optimal solution to RMP. Since we do not actually obtain $r^*$ after decomposing NPP into CS and relaxed TD, the lower bound can be set to $\Psi_u + \mathcal{K}\underline{r}^*$, which is no greater than $\Psi_u+\mathcal{K}r^*$ and hence $\Psi^*$. Additionally, because $r^*$ is negative, $\Psi_l$ may be negative as well; thus, in practice, we set $\Psi_l = {\rm{max}}\{0, \Psi_u+\mathcal{K}\underline{r}^*\}$.
At each iteration, by solving RMP and NPP we can obtain a new pair of lower and upper bounds. By evaluating their ratio, i.e., $\frac{\Psi_l}{\Psi_u}$, we can determine whether we have obtained an $\epsilon$-bounded approximate solution. The process to find an $\epsilon$-bounded approximate offloading solution for offloading a DAG modeled application is summarized in Algorithm~\ref{offload}.
\begin{algorithm}
\caption{An $\epsilon$-bounded approximate solution for DAG-modeled application offloading}
\begin{flushleft}
\textbf{Input:} initialize $\epsilon$, $\mathcal{S}_I$, $\Psi_l=0$, $\Psi_u=\infty$, $\underline{r}^* = -\infty$\\
\textbf{Output:} offloading decisions $c_n$'s and $s_n$'s
\end{flushleft}
\begin{algorithmic}[1]
\While {PP generates a new column, and $\frac{\Psi_l}{\Psi_u}<\frac{1}{1+\epsilon}$, and $\underline{r}^*<0$}
\State Solve RMP under current $\mathcal{S}_I$ by checking the feasibility, obtain $\Psi_u$ and the dual optimal solutions $\pi_{m,n}$'s and $\pi_{n,k}$'s;
\State Solve NPP using Algorithm~\ref{solvepp}, and obtain new column $\mathbf{x}_g^1$ according to (\ref{rmp_f});
\State Obtain the result of NPP, i.e., $\underline{r}^*$;
\State Update $\mathcal{S}_I = \mathcal{S}_I\cup\mathbf{x}_g^1$, set $\mathbf{x}_g^0 = \mathbf{0}$, $\mathcal{S}_I^c=\mathcal{S}\backslash\mathcal{S}_I$;
\State $\Psi_l = \max\{0, \Psi_u+\mathcal{K}\underline{r}^*\}$;
\EndWhile
\For{$n\in\{1,\cdots,N\}$}
\State $c_n = \sum_t\mathbf{x}_{n,t}^0$; $s_n = \sum_t\mathbf{x}_{n,t}^1$;
\EndFor
\end{algorithmic}
\label{offload}
\end{algorithm}
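For readers who prefer code, the following Python skeleton mirrors the structure of Algorithm~\ref{offload}; the callbacks \texttt{solve\_rmp} and \texttt{solve\_npp} stand for the feasibility-checking RMP and the heuristic NPP solver of Algorithm~\ref{solvepp}, and all names are illustrative assumptions rather than a definitive implementation.
\begin{verbatim}
def cg_offloading(solve_rmp, solve_npp, K_bound, eps, max_iter=100):
    # solve_rmp(cols) -> (psi_u, duals): solve RMP on columns cols.
    # solve_npp(duals) -> (col, r_low): heuristic price problem; col is
    # None if no improving column exists. K_bound is K >= sum_n s_n.
    cols, psi_l, psi_u = [], 0.0, float("inf")
    for _ in range(max_iter):
        psi_u, duals = solve_rmp(cols)        # upper bound on MP optimum
        col, r_low = solve_npp(duals)
        if col is None or r_low >= 0:         # stopping: solution optimal
            break
        cols.append(col)                      # enlarge restricted set S_I
        psi_l = max(0.0, psi_u + K_bound * r_low)  # lower bound on MP
        if psi_u > 0 and psi_l / psi_u >= 1.0 / (1.0 + eps):
            break                             # eps-bounded per Lemma 1
    return cols, psi_u
\end{verbatim}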
\subsection{Computational Complexity Analysis}
The computational complexity is closely related to the DAG's structure, and the number of parent modules of a node is upper bounded by $N$.
Since Algorithm \ref{offload} solves the formulated problem iteratively, we assume that it terminates after $K$ iterations. There are a total of $N-2$ task modules that can be offloaded, so $K\leq N-2$. At each iteration, we need to solve both RMP and NPP, so the computational complexity can be analyzed as follows. First, solving RMP reduces to feasibility checking, which requires complexity $\mathcal{O}(NT)$. Second, NPP comprises CS and TD. Solving CS requires complexity $\mathcal{O}(N|\mathcal{S}_I^c|)$, where $|\mathcal{S}_I^c|$ decreases by one at each iteration. Solving TD by the polynomial-time interior-point algorithm requires complexity $\mathcal{O}(n^3)$, where $n$ is the number of variables; since $\mathbf{x}_l^1$ with $T$ dimensions is the only decision variable, we have $n = T$. Therefore, the computational complexity of solving MP in (\ref{formulation}) by the proposed column generation based scheme is $\mathcal{O}\big(K(NT+N^2+T^3)\big)$. The computational complexity also implicitly depends on $\epsilon$ through $K$, because $K$ depends on whether $\frac{\Psi_l}{\Psi_u}<\frac{1}{1+\epsilon}$ holds (i.e., one of the conditions for obtaining an $\epsilon$-bounded approximate solution in Lemma \ref{lemma:condition}). As a result, the value of $K$ varies with the choice of $\epsilon$.
\subsection{Offloading Mobile Applications with Special Structures}
Some mobile applications have simple structures and can be modeled as DAGs with only sequential or parallel task module dependency \cite{zhang2015collaborative,ra2011odessa}. In this section, we provide principles for computation offloading for these special mobile applications.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width= .4\columnwidth]{figures/seq.eps}
&
\includegraphics[width= .4\columnwidth]{figures/para.eps}
\\
{\small (a) Sequential dependency} &
{\small (b) Parallel dependency}
\end{tabular}
\end{center}
\caption{\label{fig:special} DAGs with only sequential or parallel task module dependency.}
\end{figure}
\subsubsection{Case $\uppercase\expandafter{\romannumeral1}$: Applications with Only Sequential Module Dependency}
For applications that can be modeled as a DAG with only sequential dependency, i.e., Fig. \ref{fig:special}(a), we develop an efficient way to offload their computations. We first establish a theorem about the optimal computation offloading policy for executing applications with only sequential dependency under uncertain radio channels and network queues.
\begin{theorem}
Under an uncertain radio channel with queuing delay, the optimal offloading policy for executing applications with only sequential dependency migrates computations from the client device to the edge server only once, if needed.
\label{seq_dep}
\end{theorem}
\begin{IEEEproof}[Proof]
W.l.o.g., suppose an application has $N$ sequential modules, and two subsequences, i.e., modules $u$ to $v$ and $p$ to $q$, are migrated to the server for execution, where $1<u<v<p<q<N$. Then, the expected worst-case energy consumption for this offloading policy is $\Psi_1 =\sum_{n\in S_1} c_n\kappa \omega_nf_c^2+P_u(o_{u-1}+o_{p-1})\theta_u+P_d(o_{v+1}+o_{q+1})\theta_d$,
where $S_1 = \{2,\cdots, u-1\} \cup \{v+1,\cdots, p-1\} \cup \{q+1,\cdots, N-1\}$. If modules $v+1$ to $p-1$ are executed at the server instead, then the expected worst-case energy consumption for the new policy is $\Psi_2 =\sum_{n\in S_2} c_n\kappa \omega_nf_c^2+P_uo_{u-1}\theta_u+P_do_{q+1}\theta_d$,
where $S_2 = \{2,\cdots, u-1\} \cup \{q+1,\cdots, N-1\}$. Obviously, $\Psi_2<\Psi_1$; thus offloading only one subsequence, i.e., migrating only once, provides a better solution.
\end{IEEEproof}
With Theorem \ref{seq_dep}, we have a more efficient way to solve PP for applications with only sequential dependency. Since the offloaded modules must be consecutive in a DAG with only sequential dependency, once we find one column with a negative reduced cost, we can keep adding its neighboring modules with increasing hops as new columns until the reduced cost becomes positive. The resulting subsequence (in which each node has a negative reduced cost) is then migrated to the server for execution, as sketched in the code below.
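A minimal sketch of this one-segment expansion is given below; \texttt{zeta(n)} is an assumed callback returning the reduced cost of module $n$, and modules $1$ and $N$ are kept on the client device.
\begin{verbatim}
def grow_offload_segment(start, zeta, N):
    # Expand from a seed module with zeta(start) < 0 in both directions
    # while neighbors also have negative reduced cost; Theorem seq_dep
    # guarantees a single consecutive segment is offloaded.
    left = right = start
    while left - 1 >= 2 and zeta(left - 1) < 0:
        left -= 1
    while right + 1 <= N - 1 and zeta(right + 1) < 0:
        right += 1
    return list(range(left, right + 1))  # modules migrated to the server
\end{verbatim}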
\subsubsection{Case $\uppercase\expandafter{\romannumeral2}$: Applications with Only Parallel Dependency}
We now consider DAGs with only parallel dependency, i.e., Fig. \ref{fig:special}(b). We also have a theorem on the computation offloading policy for executing such applications.
\begin{theorem}
Under uncertain radio channel with queuing delay, the optimal offloading policy for executing applications with only parallel dependency is the one that offloads all modules whose energy consumption for execution is larger than the expected worst-case energy consumption for data transmission, i.e., $\kappa \omega_nf_c^2>P_u\theta_uo_1+P_d\theta_do_n, n\in\mathcal{N}$.
\label{para_dep}
\end{theorem}
\begin{IEEEproof}[Proof]
Suppose the minimum expected worst-case energy consumption without the $n$th module is $\Psi^*_{\mathcal{N}\backslash n}$. If $n$ is executed locally, then $\Psi = \Psi^*_{\mathcal{N}\backslash n}+\kappa \omega_nf_c^2$; if $n$ is executed remotely, then $\Psi = \Psi^*_{\mathcal{N}\backslash n}+P_u\theta_uo_1+P_d\theta_do_n$. Hence, offloading a module whose local execution consumes more energy than its data transmission results in a lower $\Psi$.
\end{IEEEproof}
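Theorem \ref{para_dep} translates directly into a per-module threshold test, sketched below with illustrative argument names; \texttt{o\_in[n]} and \texttt{o\_out[n]} are assumed to denote the input and output data volumes of module $n$.
\begin{verbatim}
def offload_parallel(kappa, w, f_c, P_u, theta_u, P_d, theta_d,
                     o_in, o_out):
    # Offload module n iff local execution energy exceeds the expected
    # worst-case transmission energy (Theorem para_dep).
    return [n for n in range(len(w))
            if kappa * w[n] * f_c ** 2
               > P_u * theta_u * o_in[n] + P_d * theta_d * o_out[n]]
\end{verbatim}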
\section{Related Works}
\label{RW}
Quite a few works have studied the problem of energy-efficient computation offloading in MEC systems \cite{geng2018energy,wang2016mobile,zhang2016energy,yu2016joint,sardellitti2015joint,kao2017hermes,jia2014heuristic,zhang2015collaborative,mahmoodi2016optimal,liu2016delay,hou2020reliable,wang2020thirty}. From the communication perspective, these existing studies can be generally categorized into physical-layer-design-based, cross-layer-design-based, and time-critical-application-based schemes. We discuss them as follows.
\subsection{Physical layer design based schemes.} This line of work focuses on the impact of electronic circuit transmission on offloading decisions and energy consumption. For example, Geng et al. \cite{geng2018energy} propose a critical-path-based solution that recursively checks the task modules and moves them to the right CPU cores of a multicore mobile device to save its energy consumption.
Wang et al. \cite{wang2016mobile} take advantage of dynamic voltage scaling technology to adjust the transmission power at the physical layer in the course of computation offloading. Sardellitti et al. \cite{sardellitti2015joint} jointly optimize the radio resources, constellation size, and CPU cycles per second of the mobile device to minimize the energy consumption of local device.
\subsection{Cross-layer design based schemes.} This category of works devises offloading systems by exploiting the resources in different layers of MEC systems. For example, Barbarossa et al. \cite{barbarossa2013computation} propose a joint framework encompassing a fading channel depending on the number of antennas and packet retransmission strategies to determine the joint allocation of radio resources and computation tasks. Zhang et al. \cite{zhang2016energy} incorporate the multi-access characteristics of the 5G heterogeneous network to jointly optimize offloading and radio resource allocation.
Yu et al. \cite{yu2016joint} minimize the energy consumption of local execution by jointly investigating the subcarrier allocation for offloaded tasks and local CPU time allocation.
\subsection{Time-critical mobile application based schemes.} These works concentrate on developing proper delay-sensitive scheduling mechanisms to meet the quality-of-service requirements of mobile users
in computation offloading. For example, Hermes \cite{kao2017hermes}, a fully polynomial time approximation scheme, minimizes the application execution latency while meeting the prescribed resource utilization constraints.
Jia et al. \cite{jia2014heuristic} propose a heuristic programming partition scheme to maximize the parallelism so as to minimize the completion time of an application. Zhang et al. \cite{zhang2015collaborative} propose a one-climb-policy to offload application with only sequential dependency while meeting a time deadline.
Mahmoodi et al. \cite{mahmoodi2016optimal} apply integer programming to develop a wireless-aware joint scheduling and computation offloading scheme that maximizes energy savings for devices while satisfying the execution deadline. Liu et al. \cite{liu2016delay} propose a latency-optimal offloading scheme by jointly considering the queueing state of the task buffer and the execution states of the device's processing and transmission units. Meng et al. \cite{meng2019closed} use an infinite-horizon average-cost Markov
decision process to characterize the interactions between local device and edge server, and develop a delay-optimal multilevel water-filling computation offloading solution. Hou et al. \cite{hou2020reliable} consider computation offloading for latency-sensitive applications in Internet of Vehicles.
All these previous studies impose strong assumptions on communication channels and network queue sizes. However, in practical MEC systems, radio channels and network queue sizes are usually dynamic, and the patterns of the dynamics cannot be captured accurately \cite{ezio2016impact}. In particular, channel status can be very unpredictable even during the offloading of the task modules of a single application. As a result, those assumptions do not hold in real-world applications. Although some works, e.g., \cite{kao2017hermes}, adopt an online optimization scheme to continuously probe the radio channels of the unknown dynamic environment, such a scheme cannot sufficiently account for potential extreme cases and introduces extra communication overhead. Therefore, developing an energy-efficient computation offloading scheme for practical MEC systems remains an open issue.
\section{Introduction}
Model predictive control (MPC) is currently the most popular control methodology employed in process control, and its impact on industry has been recognized \cite{Samad2017survey}. In MPC, the control action is calculated by solving a finite-horizon open-loop optimal control problem at each sampling instant, which is computationally expensive and hence only suitable for systems with slow dynamics. Therefore, in order to use MPC for fast dynamical systems, the complexity of the online optimization should be reduced. A natural thought is to move the online optimization offline, which is the idea of explicit MPC. Explicit MPC was first introduced in \cite{Bemporad2002explicit}, in which the linear MPC problem is formulated as a multi-parametric quadratic programming (mpQP) problem and solved offline. The optimal control law is proved to be continuous piecewise affine (PWA) with respect to the state. The subregions as well as the corresponding affine functions defined on them are recorded. For online implementation, given the current state, one only has to find the subregion in which the state lies; evaluating the corresponding affine function then yields the optimal control input.
However, the offline construction of such subregions, the memory required to store them, and the online search for the right subregion are the main limitations of explicit MPC \cite{Kvasnica2015region}. Much work has been done to solve these three problems. To overcome the complex geometric computations offline, combinatorial approaches are proposed that are based on implicitly enumerating all possible combinations of active constraints \cite{Gupta2011novel,Moshkenani2018combinatorial}. To reduce the memory required to store the subregions as well as the control laws, region-free explicit MPC is proposed \cite{Borrelli2010computation,Kvasnica2015region}. Moreover, the online search complexity can be reduced by storing additional information \cite{Herceg2013evaluation,Christophersen2007efficient}. The lattice PWA representation has also been used to {exactly} express explicit MPC control law, resulting in a much lower storage requirement \cite{Wen2009analytical,Xu2016irredundant}. As the complexity of solving explicit MPC problem increases exponentially with the size of the optimization problem, all these methods can only alleviate the computational burden to some extent.
Another idea is to formulate the approximate MPC controller \cite{Bemporad2001suboptimal,Bemporad2011ultra} or semi-explicit MPC controller \cite{Goebel2017semi}.
In these methods, training data containing the values of states and the corresponding optimal control laws of the MPC problem are generated, and the approximated controller is constructed using these data. In general, the samples are required to be distributed sufficiently evenly over the domain \cite{Chakrabarty2016support}. Different units have been used to generate the approximation, such as the canonical piecewise linear function \cite{Karg2020efficient}, radial basis functions \cite{CsekHo2015explicit}, wavelets \cite{Summers2011mutlresolution}, and so on. In addition, reinforcement learning has been used to derive a data-driven MPC control law in \cite{Gros2020data}. In the works of \cite{Scibilia2009approximate}, \cite{Bemporad2011ultra}, and \cite{Summers2011mutlresolution}, the approximations are based on particular partitions of the domain of interest, and interpolation-based algorithms can be developed \cite{Pavlov2020minimax}. In fact, the partitions of the domain of interest employed in these works differ from the domain partitions in explicit linear MPC, i.e., the structural properties of the linear MPC problem have not been exploited in the approximation.
To resemble the explicit MPC control law to a larger extent, we study the lattice PWA approximation of the optimal control law. In our previous work, the lattice PWA \emph{representation} of the explicit MPC control law was derived, which also scales poorly with the dimension of the parameters. In this paper, instead of an exact representation, we present an \emph{error-free approximation} that consists of disjunctive and conjunctive lattice PWA approximations. A preliminary version of the disjunctive lattice PWA approximation of explicit linear MPC was presented in \cite{Xu2021lattice}, in which the approximated control law is not guaranteed to be error-free. In this work, under mild assumptions, the equivalence of the disjunctive and conjunctive approximations guarantees that the two approximations are identical to the optimal control law in the domain of interest. The approximation can also be simplified to further lower the storage requirement and online computational complexity.
The rest of this paper is organized as follows. Section \ref{sec:pre} gives preliminaries on the explicit linear MPC problem and the lattice PWA expression. The offline approximation of the explicit linear MPC control law through the lattice PWA expression is described in detail in Section 3, including the sampling and re-sampling procedures as well as the simplification of the approximation. In Section 4, the approximation error and the complexity of the proposed procedure are analyzed. Section 5 provides simulation results, and the paper ends with conclusions and plans for future work in Section 6.
\section{Preliminary}\label{sec:pre}
\subsection{Explicit linear MPC problem}\label{sec:explicit_mpc}
MPC for a discrete-time linear time-invariant system can be cast as the following optimization problem at time step $t$:
\begin{subequations}\label{optimproblem}
\begin{align}
\min\limits_{U} & \Bigg\{J(U,\bm x_0)= v_{N_p}(\bm x_{N_p})+\sum\limits_{k=0}^{N_p-1}v(\bm x_k, \bm u_k) \Bigg\}\\
\mbox{s.t.}~ & \bm x_{k+1}=A\bm x_k+B\bm u_k, k=0, \ldots, N_p-1\label{linear_system}\\
{}&(\bm x_k, \bm u_k) \in \mathcal{G}, k=0, \ldots, N_p-1\\
{}&\bm x_{N_p} \in \mathcal{F}
\end{align}
\end{subequations}
in which the optimized variable is $U=[\bm u_0^T, \ldots, \bm u_{N_p-1}^T]^T$, $N_p$ is the prediction horizon, and the variables $\bm x_{k} \in \mathbb{R}^{n_x}$ and $\bm u_k \in \mathbb{R}^{n_u}$ denote the predicted state and input at time step $k$, respectively, obtained through (\ref{linear_system}). The terminal penalty is denoted as $v_{N_p}$ and $v(\cdot, \cdot)$ is the stage cost; $\mathcal{G}$ and $\mathcal{F}$ are full-dimensional polyhedral sets of appropriate dimensions. In this paper, we assume a strictly convex cost, i.e., $v_{N_p}(\bm x_{N_p})=\bm x_{N_p}^T Q_N \bm x_{N_p}$ and $v(\bm x_k, \bm u_k)=\bm x_k^TQ_k \bm x_k+\bm u_k^TQ_u\bm u_k$, in which $Q_u \succ 0$ and $Q_k, Q_N\succeq 0$.
After solving the optimization problem (\ref{optimproblem}), the optimal $U^*=[({\bm u_0^*})^T, \ldots, ({\bm u_{N_p-1}^*)}^T]^T$ is obtained, and only $\bm u_0^*$ is applied to the system. The optimization problem is subsequently reformulated and solved at the next time steps $t=1,2,\ldots$ by updating the given state vector $\bm x_0$.
It has been proved in \cite{Bemporad2002explicit} that the solution $U^*$ is a \emph{\textbf{continuous PWA function}} of the state $\bm x_0$, and we use $\bm x$ instead hereafter in this paper.
In fact, this conclusion is obtained through solving a mpQP problem of the form
\begin{equation}\label{mp-qp}
\begin{array}{rl}
\min\limits_U & \frac{1}{2}U^THU+\bm x^T FU\\
s.t. & GU \leq \bm w+E\bm x
\end{array}
\end{equation}
where $U \in \mathbb{R}^{N_p\cdot n_u}$ is the vector of optimization variables, the parameter vector is $\bm x \in \mathbb{R}^{n_x}$, and the matrices $H, F, G$ and $E$ are calculated through the optimization problem (\ref{optimproblem}) \cite{Bemporad2002explicit}. Under the assumption that $Q_k,Q_N \succeq 0, Q_u\succ 0$, we have $H \succ 0$.
The definition of a continuous PWA function as well as the lemma concerning the continuous PWA property of the solution to the mpQP problem is presented as follows.
\begin{defn}\label{def_cpwa}\cite{Chua1988}
A function $f:\Omega \rightarrow \mathbb{R}^m$, where $\Omega \subseteq \mathbb{R}^{n_x}$ is convex, is said to be continuous PWA if it is continuous on the domain $\Omega$ and the following conditions are satisfied,
\begin{enumerate}
\item The domain space $\Omega $ is divided into a finite number of nonempty convex polyhedra, i.e., $\Omega=\cup_{i=1}^{\hat{N}} \Omega_{i},~\Omega_i \neq \emptyset$; the polyhedra are closed and have non-overlapping interiors, $\mathrm{int}(\Omega_i) \cap \mathrm{int}(\Omega_j) = \emptyset, ~\forall i,j \in \{1,\ldots,\hat{N}\}, i \neq j$. These polyhedra are also called local regions. The boundaries of the polyhedra are nonempty sets in ($n_x-1$)-dimensional space.
\item In each local region $\Omega_i$, $f$ equals a local affine function $u_{\mathrm{loc}(i)}$,
\begin{equation*}
f(\bm x)=u_{\mathrm{loc}(i)}(\bm x), ~\forall x \in \Omega_i.
\end{equation*}
\end{enumerate}
\end{defn}
\begin{lem}\label{lem:mpc_cpwa}\cite{Bemporad2002explicit}
Considering the mpQP problem (\ref{mp-qp}) and assuming that $H \succ 0$, the set of feasible parameters $\Omega \subset \mathbb{R}^{n_x}$ is convex, the optimizer $U^* : \Omega \rightarrow \mathbb{R}^{N_p\cdot n_u}$ is continuous PWA, and the value function $J^*: \Omega \rightarrow \mathbb{R}$ is continuous, convex, and piecewise quadratic.
\end{lem}
The details of constructing such continuous PWA function are given as follows.
First, the mpQP problem can be rewritten in the form,
\begin{equation}\label{mp-qp2}
\begin{array}{rl}
\min\limits_{\bm z} & \frac{1}{2}\bm z^T H \bm z\\
s.t. & G\bm z \leq \bm w+S\bm x
\end{array}
\end{equation}
by letting $\bm z=U+H^{-1}F^T\bm x$ and $S=E+GH^{-1}F^T$. Once the optimal solution $\bm z^*$ of the optimization problem (\ref{mp-qp2}) is available, we can easily obtain the optimal $U^*$ as
\[
U^*=\bm z^*-H^{-1}F^T\bm x.
\]
The optimal solution $\bm z^*$ for a fixed $\bm x$ is fully characterized by the Karush-Kuhn-Tucker (KKT) conditions:
\begin{subequations}\label{eq:KKT}
\begin{align}
&H\bm z^*+G_{\mathcal{A}^*}^T\bm \lambda^*+G_{\mathcal{N}^*}^T\bm \mu^*=0 \label{first_order_condition}\\
&G_{\mathcal{A}^*}\bm z^*=\bm w_{\mathcal{A}^*}+S_{\mathcal{A}^*}\bm x \label{active_constraints}\\
&G_{\mathcal{N}^*}\bm z^*<\bm w_{\mathcal{N}^*}+S_{\mathcal{N}^*}\bm x \label{inactive_constraints}\\
&\bm\lambda^* \geq 0 \label{dual_feasibility}\\
&\bm \mu^*\geq 0\\
&{\bm \lambda^*}^T(G_{\mathcal{A}^*}\bm z^*-\bm w_{\mathcal{A}^*}-S_{\mathcal{A}^*}\bm x)=0\\
&{\bm \mu^*}^T(G_{\mathcal{N}^*}\bm z^*-\bm w_{\mathcal{N}^*}-S_{\mathcal{N}^*}\bm x)=0
\end{align}
\end{subequations}
in which (\ref{active_constraints}) and (\ref{inactive_constraints}) are the active and inactive constraints at $\bm z^*$, respectively. Assuming that $G\in \mathbb{R}^{p\times N_p n_u}$, $\bm w\in \mathbb{R}^{p}$, $S \in \mathbb{R}^{p\times n_x}$, and letting $G_j$, $\bm w_j$, and $S_j$ denote the $j$-th rows of $G$, $\bm w$, and $S$, respectively, the active and inactive index sets can be written as
\[
\mathcal{A}^*=\{j\in \{1, \ldots,p\}|G_j\bm z^*=\bm w_j+S_j\bm x\}
\]
and
\[
\mathcal{N}^*=\{j \in \{1, \ldots, p\}|G_j \bm z^*<\bm w_j+S_j\bm x\},
\]
respectively. It is apparent that $\mathcal{A}^*=\{1, \ldots, p\} \setminus \mathcal{N}^*$. For the inactive constraint, we have $\bm \mu^*=0$. If $\mathcal{A}^*$ is fixed and $G_{\mathcal{A}^*}$ is full row rank, we have
\begin{equation}\label{opt_lambda}
\bm \lambda^*=-(G_{\mathcal{A}^*}H^{-1}G_{\mathcal{A}^*}^T)^{-1}(\bm w_{\mathcal{A}^*}+S_{\mathcal{A}^*}\bm x),
\end{equation}
as well as
\begin{equation}\label{opt_z}
\bm z^*=H^{-1}G_{\mathcal{A}^*}^T(G_{\mathcal{A}^*}H^{-1}G_{\mathcal{A}^*}^T)^{-1}(\bm w_{\mathcal{A}^*}+S_{\mathcal{A}^*}\bm x).
\end{equation}
The local region for which the local affine function (\ref{opt_z}) is defined is called \emph{critical region}, and can be constructed by the constraints of primal feasibility (\ref{inactive_constraints}) and dual feasibility (\ref{dual_feasibility}).
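As a small illustration, the local affine map (\ref{opt_z}) can be assembled from a given active set with a few lines of Python; the code is a sketch assuming $G_{\mathcal{A}^*}$ has full row rank, and the names are not taken from any particular toolbox.
\begin{verbatim}
import numpy as np

def affine_law_from_active_set(H, G, w, S, act):
    # Returns (Kz, kz) with z*(x) = Kz @ x + kz, per Eq. (opt_z);
    # U*(x) is then z*(x) - inv(H) @ F.T @ x. Assumes G[act] has
    # full row rank (see Remark 1 for the degenerate case).
    Ga = G[act]
    Hi = np.linalg.inv(H)
    M = Ga @ Hi @ Ga.T                      # Gram matrix of active rows
    K = Hi @ Ga.T @ np.linalg.solve(M, np.column_stack([S[act], w[act]]))
    return K[:, :-1], K[:, -1]              # slope Kz and offset kz
\end{verbatim}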
\begin{rem}\label{rem1}
For the case in which $G_{\mathcal{A}^*}$ is not full row rank, i.e., the rows of $G_{\mathcal{A}^*}$ are linearly dependent, the linear independence constraint qualification (LICQ) is violated according to \cite{Nocedal2006numerical}, and this is referred to as primal degeneracy \cite{Bemporad2002explicit} (dual degeneracy cannot occur as $H \succ 0$). Assuming that the rank of $G_{\mathcal{A}^*}$ is $r$, we can arbitrarily select $r$ independent constraints and proceed with the new reduced active index set \cite{Borrelli2003constrained}.
\end{rem}
To search for all the local affine functions and critical regions, one must enumerate all possible active index sets $\mathcal{A}^*$, apply the KKT conditions accordingly, and then the continuous PWA control law can be obtained.
In the next subsection, the lattice PWA representation is presented, which was used to express the resulting continuous PWA control law in our previous work \cite{Xu2016irredundant}.
\subsection{Lattice PWA representation}\label{sec:lpwa_representation}
It is stated in \cite{Xu2016irredundant} that any continuous PWA function can be represented by the lattice PWA representation.
\begin{lem}\label{lem:full_lattice}
Let $f$ be a continuous PWA function as defined in Definition \ref{def_cpwa}. Then $f$ can be represented as
\begin{equation}\label{eq:ful_lat_representation_d}
f(\bm x)=f_{\mathrm{L,d}}(\bm x)=\max\limits_{i=1,\ldots,N}\min_{j \in I_{\geq,i}}u_j(\bm x), ~\forall \bm x \in \Gamma,
\end{equation}
or
\begin{equation}\label{eq:ful_lat_representation_c}
f(\bm x)=f_{\mathrm{L,c}}(\bm x)=\min\limits_{i=1,\ldots,N}\max_{j \in I_{\leq,i}}u_j(\bm x), ~\forall \bm x \in \Gamma,
\end{equation}
in which $I_{\geq,i}=\{j|u_j(\bm x)\geq u_{i}(\bm x), \forall \bm x \in \Gamma_i\}$, $I_{\leq, i}=\{j|u_j(\bm x)\leq u_{i}(\bm x), \forall \bm x \in \Gamma_i\}$, and the expressions $\min_{j \in I_{\geq,i}}u_j(\bm x)$ and $\max_{j \in I_{\leq,i}}u_j(\bm x)$ are called terms of $f_{\mathrm{L,d}}$ and $f_{\rm L, c}$, respectively. The affine function $u_j(\bm x)$ is called a literal. The region $\Gamma_i$ is a \textbf{unique order (UO) region}, which is a subset of a local region and in which the order of the affine functions
\begin{equation}\label{cond_base_region}
u_1(\bm x), \ldots, u_N(\bm x),
\end{equation}
remains unchanged in the interior of $\Gamma_i$. The expressions (\ref{eq:ful_lat_representation_d}) and (\ref{eq:ful_lat_representation_c}) are called the full disjunctive and conjunctive lattice PWA representations, respectively, in which the names ``disjunctive'' and ``conjunctive'' come from the terminology of Boolean algebra.
\end{lem}
Consider the two-dimensional continuous PWA function (\ref{2dfunction}) with four affine pieces; Fig. \ref{fig_ex1} illustrates the UO regions and the corresponding lattice PWA representations.
\begin{exmp}
\begin{equation}\label{2dfunction}
f=\left\{\begin{array}{ll}
\ell_1(\bm x)=-x_2+1& \mathrm{if}~\bm x \in \Gamma_1,\\
\ell_2(\bm x)=-x_1+1& \mathrm{if}~\bm x \in \Gamma_2,\\
\ell_3(\bm x)=x_2+1& \mathrm{if}~\bm x\in \Gamma_3,\\
\ell_4(\bm x)=x_1+1&\mathrm{if}~\bm x\in \Gamma_4.
\end{array}\right.
\end{equation}
The polyhedral regions $\Gamma_1, \Gamma_2, \Gamma_3, \Gamma_4$, and the two-dimensional function $f$ are shown in Fig. \ref{fig_ex1}.
\begin{figure}[htbp]
\centering
\psfrag{gamma1}[c]{$\Gamma_1$}
\psfrag{gamma11}[c]{$\Gamma_{11}$}
\psfrag{gamma12}[c]{$\Gamma_{12}$}
\psfrag{gamma2}[c]{$\Gamma_2$}
\psfrag{gamma21}[c]{$\Gamma_{21}$}
\psfrag{gamma22}[c]{$\Gamma_{22}$}
\psfrag{gamma3}[c]{$\Gamma_3$}
\psfrag{gamma31}[c]{$\Gamma_{31}$}
\psfrag{gamma32}[c]{$\Gamma_{32}$}
\psfrag{gamma4}[c]{$\Gamma_{4}$}
\psfrag{gamma41}[c]{$\Gamma_{41}$}
\psfrag{gamma42}[c]{$\Gamma_{42}$}
\psfrag{x1}[c]{$x_1$}
\psfrag{x2}[c]{$x_2$}
\psfrag{l1}[c]{\color[RGB]{255,0,0}$\ell_1$}
\psfrag{l2}[c]{\color[RGB]{255,0,0}$\ell_2$}
\psfrag{l3}[c]{\color[RGB]{255,0,0}$\ell_3$}
\psfrag{l4}[c]{\color[RGB]{255,0,0}$\ell_4$}
\psfrag{f}[c]{$f$}
\subfigure[Function.]{
\includegraphics[width=0.9\columnwidth]{fig/fig_2d_fun.eps}\label{fig_ex1_fun}}\\
\subfigure[Regions.]{
\includegraphics[width=0.9\columnwidth]{fig/fig_2d_region.eps}
\label{fig_ex1_region}}
\caption{Continuous PWA function in Example 1.}
\label{fig_ex1}
\end{figure}
The regions $\Gamma_1, \Gamma_2, \Gamma_3$, and $\Gamma_4$ are local affine regions, and can be divided into UO regions $\Gamma_{11}, \Gamma_{12}, \ldots, \Gamma_{41}$, and $\Gamma_{42}$. Taking the UO region $\Gamma_{31}$ as an example, the order of affine functions is
\[
\ell_3<\ell_2<\ell_1<\ell_4.
\]
For this continuous PWA function, as it is concave, both the disjunctive and conjunctive lattice PWA representations are
\[
f=\min\{\ell_1, \ell_2, \ell_3, \ell_4\}.
\]
\end{exmp}
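A lattice PWA function is evaluated as a max of mins over its literals; the following Python sketch reproduces the value of the function in Example 1 at a test point (illustrative code, with affine pieces stored as pairs $(\bm a, b)$ for $u(\bm x)=\bm a^T\bm x+b$).
\begin{verbatim}
import numpy as np

def eval_lattice(x, terms):
    # Disjunctive lattice PWA value: max over terms, min over literals.
    return max(min(a @ x + b for (a, b) in term) for term in terms)

# Example 1: f = min(l1, l2, l3, l4) is a single term with 4 literals.
ells = [(np.array([0., -1.]), 1.), (np.array([-1., 0.]), 1.),
        (np.array([0., 1.]), 1.), (np.array([1., 0.]), 1.)]
print(eval_lattice(np.array([0.5, 0.25]), [ells]))  # 0.5 (l2 active)
\end{verbatim}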
According to Lemma \ref{lem:full_lattice}, we can represent a continuous PWA control law using a lattice PWA function (either disjunctive or conjunctive). The disjunctive lattice PWA representation of explicit linear MPC was investigated in \cite{Xu2016irredundant} and \cite{Wen2009analytical}, in which the continuous PWA control law was obtained in advance through the MPT toolbox \cite{MPT3}. However, as explained in Section \ref{sec:explicit_mpc}, for problems with a large number of constraints and a high-dimensional state, the number of possible combinations of active constraints increases exponentially, and the derivation of the explicit MPC solution is extremely computationally expensive. Hence, in this paper, we propose an approximated continuous PWA control law obtained by sampling only a set of states in the domain of interest. We show that this approximation utilizes the local affine property of the original explicit MPC control law and is identical to the original control law at the sample points as well as in the UO regions in which the sample points lie. In addition, under mild assumptions, the lattice PWA approximations are identical to the explicit MPC control law in the entire domain of interest.
\section{Lattice PWA approximation of explicit linear MPC control law}
\subsection{Generation of sample points in interior of UO regions}\label{sec:generate_points}
As indicated in Lemma \ref{lem:mpc_cpwa}, the explicit linear MPC control law $U^*$ is a continuous PWA function of the state $\bm x$. Apparently, the first element of $U^*$, i.e., $\bm u_0^*$, is also a continuous PWA function of $\bm x$, i.e., $\bm u_0^*$ is affine in the local region in which $\bm x$ lies. For simplicity, we omit the subscript of $\bm u_0^*$ hereafter in the paper and write $\bm u^*$ instead. For $i \in \mathbb{N}_1=\{1,\ldots,N_1\}$, the sample points $(\bm x_i, \bm u_i(\bm x_i)) \in \mathcal{X}_1 \times \mathcal{U}_1$ are generated in the domain of feasible parameters, in which $\bm u_i(\bm x)$ is the affine function at $\bm x_i$ such that
\[
\bm u_i(\bm x_i)=\bm u^*(\bm x_i).
\]
For simplicity, we consider the case $n_u=1$, and the proposed methodology can be easily extended to the case when $n_u>1$. Moreover, the domain of interest is assumed to be regular.
In this subsection, the training points $\bm x_i \in \mathcal{X}_1$ are required to be in the interior of UO regions, i.e., $\bm x_i \in \mathrm{int}(\Gamma(\bm x_i))$, in which $\Gamma(\bm x_i)$ is the corresponding UO region. This means that if $u_{j_1}(\bm x_i)>u_{j_2}(\bm x_i)$ holds, then we have
\[
u_{j_1}(\bm x)>u_{j_2}(\bm x), \forall \bm x \in \Gamma(\bm x_i).
\]
As a matter of fact, if there are no two affine functions $u_{j_1}(\bm x)$ and $u_{j_2}(\bm x)$, $j_1\neq j_2$, such that $u_{j_1}(\bm x_i)=u_{j_2}(\bm x_i)$, then $\bm x_i \in \mathrm{int}(\Gamma(\bm x_i))$.
Algorithm \ref{alg:sampling} describes the sampling of training points.
\algrrule[0.8pt]
\begin{alg}
{Sampling of training points in explicit linear MPC control law.}
\label{alg:sampling}
\algrule[0.5pt]
\begin{algorithmic}[1]
\hspace{-4ex} \textbf{Input:} Linear MPC problem, the number of sample points $N_1$, sample domain $\Omega$.\\
\hspace{-4ex} \textbf{Output:} Sample data set $\mathcal{X}_1 \times \mathcal{U}_1$.
\STATE $\mathcal{X}_1=\emptyset$, $\mathcal{U}_1=\emptyset$.
\FOR{$i=1$ to $N_1$}
\STATE Generate a grid point $\bm x_i$ in $\Omega$.
\IF {$\bm x_i$ is not an interior point of some UO region}
\STATE Apply a perturbation until $\bm x_i$ is an interior point of $\Gamma(\bm x_i)$.\label{line:perturbation}
\ENDIF
\STATE Solve corresponding QP problem (\ref{mp-qp2}) by letting $\bm x=\bm x_i$.\label{line:qp}
\STATE Determine active and inactive index sets $\mathcal{A}^*$ and $\mathcal{N}^*$, respectively.
\STATE Solve KKT conditions (\ref{eq:KKT}) to obtain the affine function $\bm z_i^*$ through (\ref{opt_z}).
\STATE Obtain optimal input, i.e., $U_i(\bm x_i)$ and $u_i(\bm x_i)$.
\ENDFOR
\algrule[0.5pt]
\end{algorithmic}
\end{alg}
For a feasible state $\bm x_i$, line \ref{line:qp} in Algorithm \ref{alg:sampling} states that the optimal $\bm z_i^*=U_i^*+H^{-1}F^T\bm x_i$ can be obtained through the QP problem (\ref{mp-qp2}), which, together with the information of $\bm x_i$, determines the active and inactive constraints (\ref{active_constraints}) and (\ref{inactive_constraints}), respectively. Therefore, the index sets $\mathcal{A}_i^*$ and $\mathcal{N}_i^*$ are fixed, and if the matrix $G_{\mathcal{A}_i^*}$ has full row rank, the affine function $\bm z_i(\bm x)$ can be calculated through (\ref{opt_z}) (the rank-deficient case can be handled as in Remark \ref{rem1}). Then we have
\begin{equation}\label{U_affine}
U_i(\bm x_i)=\bm z_i^*(\bm x_i)-H^{-1}F^T\bm x_i,
\end{equation}
and
\begin{equation}\label{uU_transform}
u_i(\bm x_i)=\left[\begin{array}{cccc}
\mathbf{I}_{n_u \times n_u}&\bm 0&\cdots&\bm 0
\end{array}\right]U_i(\bm x_i)
\end{equation}
in which $\mathbf{I}_{n_u\times n_u}$ is the identity matrix with size $n_u \times n_u$.
After evaluating Algorithm \ref{alg:sampling}, we obtain the sample dataset $\mathcal{X}_1 \times \mathcal{U}_1$, in which $\mathcal{U}_1$ is a set of affine functions $u_i(\bm x)$. It is noted that, compared with ordinary sampling, in which only the value $u^*(\bm x_i)$ is available, here the corresponding affine function is also recorded, which is used for the lattice PWA approximation in Section \ref{sec:lattice_sample}.
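Lines 7--10 of Algorithm \ref{alg:sampling} can be sketched in Python as follows; \texttt{qp\_solve} is an assumed QP-solver callback (e.g., a wrapper around \texttt{cvxpy} or \texttt{quadprog}), the rank-deficient case of Remark \ref{rem1} is not handled, and all names are illustrative.
\begin{verbatim}
import numpy as np

def sample_affine_law(x, H, F, G, w, S, qp_solve, tol=1e-8):
    # qp_solve(H, G, h) -> argmin 0.5 z'Hz s.t. Gz <= h (assumed).
    h = w + S @ x
    z = qp_solve(H, G, h)                  # solve Eq. (mp-qp2) at x = x_i
    act = np.where(h - G @ z <= tol)[0]    # active index set A_i^*
    Ga, Hi = G[act], np.linalg.inv(H)
    K = Hi @ Ga.T @ np.linalg.solve(
        Ga @ Hi @ Ga.T, np.column_stack([S[act], w[act]]))
    Kz, kz = K[:, :-1], K[:, -1]           # z*(x) = Kz x + kz, Eq. (opt_z)
    return act, (lambda xx: Kz @ xx + kz - Hi @ F.T @ xx)  # Eq. (U_affine)
\end{verbatim}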
\subsection{Lattice PWA approximation based on sample points}\label{sec:lattice_sample}
We now derive both the disjunctive and conjunctive lattice PWA approximations based on the sample dataset $\mathcal{X}_1 \times \mathcal{U}_1$.
The disjunctive lattice PWA approximation is constructed via the sample points and can be expressed as follows,
\begin{equation}\label{eq:lattice_approximation_d}
\hat{f}_{\mathrm{L, d}}(\bm x)=\max\limits_{i=1,\ldots,N_1}\min_{j \in J_{\geq,i}}u_j(\bm x),
\end{equation}
in which the index set $J_{\geq,i}$ is described as
\begin{equation}\label{index_J}
J_{\geq,i}=\{j|u_j(\bm x_i)\geq u_i(\bm x_i)\}.
\end{equation}
Similarly, the conjunctive lattice PWA approximation can be described as follows,
\begin{equation}\label{eq:lattice_approximation_c}
\hat{f}_{\mathrm{L, c}}(\bm x)=\min\limits_{i=1,\ldots,N_1}\max_{j \in J_{\leq,i}}u_j(\bm x),
\end{equation}
in which the index set $J_{\leq,i}$ is described as
\begin{equation}\label{index_J_c}
J_{\leq,i}=\{j|u_j(\bm x_i)\leq u_i(\bm x_i)\}.
\end{equation}
Compared with the full disjunctive and conjunctive lattice PWA representations (\ref{eq:ful_lat_representation_d}) and (\ref{eq:ful_lat_representation_c}), respectively, the lattice PWA approximations (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}) only consider the order of the local affine control laws at each sample point. Under the condition in Assumption \ref{assump1}, the lattice PWA approximations (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}) coincide with the explicit linear MPC control law at the sample points.
\begin{assum}\label{assump1}
We assume that all of the distinct local affine functions of $u^*$ have been sampled.
\end{assum}
\begin{lem}\label{lem:lattice_approx}
Assume that the disjunctive and conjunctive lattice PWA approximations are constructed through (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}), respectively, and suppose that Assumption \ref{assump1} holds.
Then we have
\begin{equation}\label{lattice==pwa}
\hat{f}_{\mathrm{L, d}}(\bm x)=u^*(\bm x), \forall \bm x \in \Gamma(\bm x_i), \forall \bm x_i\in \mathcal{X}_1
\end{equation}
and
\begin{equation}\label{eq:lattice==pwa2}
\hat{f}_{\mathrm{L, c}}(\bm x)=u^*(\bm x), \forall \bm x \in \Gamma(\bm x_i), \forall \bm x_i\in \mathcal{X}_1.
\end{equation}
\end{lem}
\begin{pf}
For a UO region $\Gamma(\bm x_i)$, as the order of affine functions remains unchanged in the UO region, the set
\[
I_{\geq,i}=\{j|u_j(\bm x)\geq u_i(\bm x), \forall \bm x \in \Gamma(\bm x_i)\}
\]
is identical to $J_{\geq,i}$ for any $\bm x_i \in \mathrm{int}(\Gamma(\bm x_i))$. Similarly, the sets $I_{\leq, i}$ and $J_{\leq, i}$ are equivalent for all $\bm x \in \Gamma(\bm x_i)$. Therefore, if for some sample dataset all the UO regions have been identified, i.e., the sample index set $\mathcal{N}$ satisfies $\bigcup\limits_{i \in \mathcal{N}}\Gamma(\bm x_i)=\Omega$, then we have
\begin{equation}\label{eq-full-lattice}
u^*(\bm x)=\max\limits_{i \in \mathcal{N}}\min\limits_{j \in J_{\geq,i}}u_j(\bm x), \forall \bm x \in \Omega,
\end{equation}
\begin{equation}\label{eq:full-lattice}
u^*(\bm x)=\min\limits_{i \in \mathcal{N}}\max\limits_{j \in J_{\leq, i}}u_j(\bm x), \forall \bm x \in \Omega.
\end{equation}
Therefore, for all $\bm x \in \Omega$ and all $i \in \mathcal{N}$, the following inequalities hold,
\begin{equation}\label{eq:term_less_f}
\min\limits_{j \in J_{\geq,i}}u_j(\bm x)\leq u^*(\bm x),
\end{equation}
and
\begin{equation}\label{eq:term_ge_f}
\max\limits_{j \in J_{\leq, i}} u_j(\bm x) \geq u^*(\bm x).
\end{equation}
As the sampled UO regions are only a subset of all UO regions, we have
\[
\mathbb{N}_1 \subset \mathcal{N},
\]
which means that for $\bm x \in \Omega$ the inequalities (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) are valid for all $i \in \mathbb{N}_1$.
According to the definition of UO regions, we have
\[
\begin{array}{r}
\min\limits_{j \in J_{\geq,i}} u_j(\bm x)=\max\limits_{j \in J_{\leq, i}} u_j(\bm x)=u^*(\bm x),\\
\forall \bm x \in \Gamma(\bm x_i), \forall i \in \mathbb{N}_1
\end{array}
\]
holds, and
together with (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}), we have (\ref{lattice==pwa}) and (\ref{eq:lattice==pwa2}).
\end{pf}
\begin{rem}
It is noted that the lattice PWA approximation differs from other approximations in that it equals the original control law not only at the sample points, but also in the UO regions containing the sample points as interior points, as (\ref{lattice==pwa}) and (\ref{eq:lattice==pwa2}) show. To achieve this, for each sample point, not only is the value of the corresponding control law recorded, but also the specific affine expression, as Algorithm \ref{alg:sampling} shows.
\end{rem}
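A minimal sketch of constructing the terms of (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}) from the recorded dataset is shown below; \texttt{X} and \texttt{laws} correspond to $\mathcal{X}_1$ and $\mathcal{U}_1$, with each law stored as a pair $(\bm a, b)$, and the tolerance is an assumption to handle finite-precision ties.
\begin{verbatim}
import numpy as np

def build_terms(X, laws, disjunctive=True, tol=1e-9):
    # vals[i, j] = u_j(x_i); row i yields the index set J_{>=,i}
    # (or J_{<=,i}) of the sampled lattice PWA approximation.
    vals = np.array([[a @ x + b for (a, b) in laws] for x in X])
    terms = []
    for i in range(len(X)):
        if disjunctive:
            J = np.where(vals[i] >= vals[i, i] - tol)[0]
        else:
            J = np.where(vals[i] <= vals[i, i] + tol)[0]
        terms.append([laws[j] for j in J])
    return terms  # evaluate with max-min (disjunctive) or min-max
\end{verbatim}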
\subsection{Re-sampling}
It is noted that it is actually not easy to guarantee the validity of Assumption \ref{assump1}, i.e., that all the local affine functions have been sampled. This is because some critical regions may be very small, and it is difficult to sample a point in these regions through a uniform sampling procedure. In this section, additional points are sampled according to specific rules, which improves the lattice PWA approximations (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}) such that more local affine functions are identified.
As indicated before, if all the local affine functions have been identified, (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) hold for all $\bm x \in \Omega$ and $i \in \mathcal{N}$. Hence, if (\ref{eq:term_less_f}) or (\ref{eq:term_ge_f}) is violated for some $\bm x$ and $i$, there must be some local affine functions that have not been sampled.
In fact, it is difficult to check the validity of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) for all $\bm x \in \Omega$ and $i \in \mathbb{N}_1$, as the information of $u^*(\bm x)$ is not available. Hence, instead, we check the validity of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) at the sample points $\bm x_i, i\in \mathbb{N}_1$. Moreover, we check the validity of
\begin{equation}\label{eq:cond_gen}
\min\limits_{j \in J_{\geq,i}}u_j(\bm x)\leq \max\limits_{j \in J_{\leq,k}}u_j(\bm x), \forall i, k \in \mathbb{N}_1, \forall \bm x \in \Omega,
\end{equation}
which is a direct result of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}).
\subsubsection{Guaranteeing the validity of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) at sample points}\label{sec:guarantee1}
Taking the disjunctive lattice PWA approximation as an example, if (\ref{eq:term_less_f}) is violated for some $\bm x_{\alpha}, \bm x_{\beta} \in \mathcal{X}_1$, i.e.,
\begin{equation}\label{cond_violate}
\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x_{\beta})>u^*(\bm x_{\beta})=u_{\beta}(\bm x_{\beta}),
\end{equation}
we can add sample points in the line segment
\begin{equation}\label{def:line-segment}
\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta}) =\lambda \bm x_{\alpha}+(1-\lambda) \bm x_{\beta}, \lambda \in (0,1)
\end{equation}
such that (\ref{eq:term_less_f}) is satisfied for $\bm x_{\alpha}$ and $\bm x_{\beta}$, as shown in Lemma \ref{lem_find_points}.
\begin{lem}\label{lem_find_points}
Assuming that there are two points $\bm x_{\alpha}$ and $\bm x_{\beta}$ such that (\ref{cond_violate}) holds, then there must be some point $\bm x_{\gamma}\in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$, which is defined in (\ref{def:line-segment}), and the corresponding control solution $u_{\gamma}$, such that the following inequality,
\begin{equation}\label{concl_lem8}
u_{\gamma}(\bm x_{\alpha})\geq u_{\alpha}(\bm x_{\alpha}), u_{\gamma}(\bm x_{\beta}) \leq u_{\beta}(\bm x_{\beta}),
\end{equation}
holds.
Furthermore, by adding $\bm x_{\gamma}$ to the sample dataset, we have
\begin{equation}\label{eq:concl-lem8.2}
\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x_{\beta})\leq u_{\beta}(\bm x_{\beta}).
\end{equation}
\end{lem}
\begin{pf}
As the optimal control solution $u^*$ is continuous PWA, it is still continuous PWA when restricted to the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$.
Defining an index set $\mathrm{aff}(\bm x_{\alpha}, \bm x_{\beta})$ as
\[
\mathrm{aff}{(\bm x_{\alpha}, \bm x_{\beta})}=\{j| \exists \bm x \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})~\mbox{such that}~u^*(\bm x)=u_j(\bm x)\},
\]
i.e., the index set $\mathrm{aff}{(\bm x_{\alpha}, \bm x_{\beta})}$ includes all the indices of affine functions in $u^*(x)$ when restricted to $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$.
According to \cite{Xu2016irredundant}, we have
\[
\min\limits_{j \in S_{\geq,\alpha}}u_j (\bm x) \leq u_{\beta}(\bm x_{\beta}), \forall \bm x \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta}),
\]
in which $S_{\geq,\alpha}$ is the index set such that
\[
S_{\geq,\alpha}=\{j \in \mathrm{aff}{(\bm x_{\alpha}, \bm x_{\beta})}|u_j(\bm x_{\alpha}) \geq u_{\alpha}(\bm x_{\alpha})\}.
\]
Clearly, there is some $\gamma \in S_{\geq,{\alpha}}$ such that (\ref{concl_lem8}) is valid.
Therefore, if we add the corresponding point $\bm x_{\gamma}$ to the sample point set, we have
\[
\gamma \in J_{\geq, \alpha}.
\]
Hence (\ref{eq:concl-lem8.2}) is valid.
\end{pf}
A similar result holds for the case when (\ref{eq:term_ge_f}) is violated.
\begin{cor}
Assuming that there are two points $\bm x_{\alpha}$ and $\bm x_{\beta}$ such that the following inequality holds,
\begin{equation}\label{eq:violate2}
\max\limits_{j \in J_{\leq, \alpha}}u_j(\bm x_{\beta})<u^*(\bm x_{\beta})=u_{\beta}(\bm x_{\beta}),
\end{equation}
then there must be some point $\bm x_{\gamma}\in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$ whose corresponding control solution $u_{\gamma}$ satisfies the following inequalities,
\begin{equation}\label{eq:concl_cor}
u_{\gamma}(\bm x_{\alpha})\leq u_{\alpha}(\bm x_{\alpha}), u_{\gamma}(\bm x_{\beta}) \geq u_{\beta}(\bm x_{\beta}).
\end{equation}
Additionally, by adding $\bm x_{\gamma}$ to the sample dataset, we have
\begin{equation}\label{eq:concl_cor2}
\max\limits_{j \in J_{\leq, \alpha}}u_j(\bm x_{\beta})\geq u_{\beta}(\bm x_{\beta}).
\end{equation}
\end{cor}
When (\ref{cond_violate}) or (\ref{eq:violate2}) holds, in order to construct a continuous PWA function and ensure the validity of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) for every \emph{sample point} in the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$, the line segment is recursively partitioned to generate new sample points, as Algorithm \ref{alg:addpoints} shows.
\algrrule[0.8pt]
\begin{alg}
{Recursive partitioning of line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$ in case that (\ref{cond_violate}) or (\ref{eq:violate2}) holds.}
\label{alg:addpoints}
\algrule[0.5pt]
\begin{algorithmic}[1]
\hspace{-4ex} \textbf{Input:} Linear MPC problem, initial sample dataset $\mathcal{X}_1 \times \mathcal{U}_1$, the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$. \\
\hspace{-4ex} \textbf{Output:} Additional sample dataset $\mathcal{X}_2 \times \mathcal{U}_2$.
\STATE Initialize $flag=1$, $\mathcal{X}_2=\emptyset$, $\mathcal{U}_2=\emptyset$.
\WHILE{$flag=1$}
\STATE $N_a=0$;
\FOR{$\bm x_i \in \mathcal{X}_1\cup \mathcal{X}_2$}
\STATE Select the corresponding $u_i \in \mathcal{U}_1 \cup \mathcal{U}_2$.
\IF {${\mathrm sign}(u_i(\bm x_i)-u_{i+1}(\bm x_i))=\mathrm{sign} (u_{i}(\bm x_{i+1})-u_{i+1}(\bm x_{i+1}))$}
\STATE $N_a=N_a+1$.
\STATE Add a point $\bm {x}_{\rm new} = 0.5(\bm x_i+\bm x_{i+1})$ to $\mathcal{X}_2$.
\STATE Calculate the corresponding affine function $u^*(\bm x_{\rm new})$ through lines 7--9 of Algorithm \ref{alg:sampling} and add it to $\mathcal{U}_2$.
\ENDIF
\ENDFOR
\IF {$N_a=0$}
\STATE $flag=0$.
\ENDIF
\ENDWHILE
\algrule[0.5pt]
\end{algorithmic}
\end{alg}
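The core of Algorithm \ref{alg:addpoints} is a midpoint-insertion loop. The Python sketch below is our minimal rendering of it, in which \verb|solve_mpc| stands in for lines 7--9 of Algorithm \ref{alg:sampling} (returning the affine piece active at a point) and is assumed to be available:
\begin{verbatim}
import numpy as np

def refine_segment(xs, us, solve_mpc, max_iter=50):
    # xs: points on L(x_alpha, x_beta), ordered along the segment;
    # us: affine functions with us[i](xs[i]) = u*(xs[i]).
    for _ in range(max_iter):
        added, i = 0, 0
        while i < len(xs) - 1:
            d0 = us[i](xs[i]) - us[i + 1](xs[i])
            d1 = us[i](xs[i + 1]) - us[i + 1](xs[i + 1])
            if d0 != 0 and np.sign(d0) == np.sign(d1):
                x_new = 0.5 * (xs[i] + xs[i + 1])   # bisect
                xs.insert(i + 1, x_new)
                us.insert(i + 1, solve_mpc(x_new))
                added += 1
                i += 2        # skip the freshly inserted point
            else:
                i += 1
        if added == 0:        # sign condition holds everywhere
            break
    return xs, us
\end{verbatim}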
Lemma \ref{lem_add_points_line} shows that, if we add points according to Algorithm \ref{alg:addpoints}, (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) are satisfied for all the sample points in $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$.
\begin{lem}\label{lem_add_points_line}
Given the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$ such that (\ref{cond_violate}) holds, if we add points according to Algorithm \ref{alg:addpoints}, then we have
\begin{equation}\label{eq:conl_lem10}
\begin{array}{r}
\min\limits_{j \in J_{\geq,i}}u_j(\bm x_k)\leq u_k(\bm x_k), \forall \bm x_i, \bm x_k \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta}) \cap (\mathcal{X}_1\cup \mathcal{X}_2)
\end{array}
\end{equation}
and
\begin{equation}\label{eq:concl2_lem10}
\max\limits_{j \in J_{\leq,i}}u_j(\bm x_k)\geq u_k(\bm x_k), \forall \bm x_i, \bm x_k \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta}) \cap (\mathcal{X}_1\cup \mathcal{X}_2).
\end{equation}
\end{lem}
\begin{pf}
After evaluating Algorithm \ref{alg:addpoints}, the condition
\begin{equation}\label{cond:uiuj}
(u_i(\bm x_i)-u_{i+1}(\bm x_i))\cdot (u_{i}(\bm x_{i+1})-u_{i+1}(\bm x_{i+1}))\leq 0
\end{equation}
is satisfied for all neighbouring sample points in $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$, which corresponds to the two cases shown in Fig. \ref{fig_uiuj}. Without loss of generality, we only consider the case $u_i \neq u_{i+1}$; when $u_i=u_{i+1}$, the affine function $u_i$ itself connects the two points. We can then construct a continuous PWA function connecting the two points $\bm x_i$ and $\bm x_{i+1}$, i.e., $\min\{u_i, u_{i+1}\}$ for case 1 and $\max\{u_i, u_{i+1}\}$ for case 2.
\begin{figure}[htbp]
\centering
\psfrag{xi}[c]{$\bm x_i$}
\psfrag{xj}[c]{$\bm x_{i+1}$}
\psfrag{ui}[c]{$u_i$}
\psfrag{uj}[c]{$u_{i+1}$}
\subfigure[Case 1.]{
\includegraphics[width=0.46\columnwidth]{fig/uiuj1.eps}}
\quad
\subfigure[Case 2.]{
\includegraphics[width=0.46\columnwidth]{fig/uiuj2.eps}}
\caption{Two cases satisfying (\ref{cond:uiuj}).}
\label{fig_uiuj}
\end{figure}
Supposing that the constructed disjunctive and conjunctive continuous PWA functions connecting $\bm x_{\alpha}$ and $\bm x_{\beta}$ are $\hat{f}_{1}$ and $\hat{f}_2$, respectively, then we have
\[
\hat{f}_1(\bm x_k)=\hat{f}_2(\bm x_k)=u_k(\bm x_k), \forall \bm x_k \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta}) \cap (\mathcal{X}_1\cup \mathcal{X}_2).
\]
Defining the index sets $\mathrm{aff}_1(\bm x_{\alpha}, \bm x_{\beta})$ and $\mathrm{aff}_2(\bm x_{\alpha}, \bm x_{\beta})$ as
\[
\mathrm{aff}_1{(\bm x_{\alpha}, \bm x_{\beta})}=\{j| \exists \bm x \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})~\mbox{such that}~\hat{f}_1(\bm x)=u_j(\bm x)\},
\]
and
\[
\mathrm{aff}_2{(\bm x_{\alpha}, \bm x_{\beta})}=\{j| \exists~ \bm x \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})~\mbox{such that}~\hat{f}_2(\bm x)=u_j(\bm x)\},
\]
we then have
\[
\begin{array}{l}
\min\limits_{j \in J_{\geq,i}\cap \mathrm{aff}_1(\bm x_{\alpha}, \bm x_{\beta})}u_j(\bm x_k)\leq u_k(\bm x_k)\\
\quad\quad\quad\quad\quad\quad\quad\quad \forall \bm x_i, \bm x_k \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})\cap (\mathcal{X}_1\cup \mathcal{X}_2)
\end{array}
\]
and
\[
\begin{array}{l}
\max\limits_{j \in J_{\leq, i} \cap \mathrm{aff}_2(\bm x_{\alpha}, \bm x_{\beta})} u_j(\bm x_k) \geq u_k(\bm x_k)\\
\quad\quad\quad\quad\quad\quad\quad\quad\forall \bm x_i, \bm x_k \in \mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})\cap (\mathcal{X}_1\cup \mathcal{X}_2).
\end{array}
\]
According to the above inequalities, we have
(\ref{eq:conl_lem10}) and (\ref{eq:concl2_lem10}).
\end{pf}
Algorithm \ref{alg:addpoints} can be run repeatedly until (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) hold for all the sample points $(\bm x_i, u_i)\in \left(\mathcal{X}_1\cup \mathcal{X}_2\right) \times \left(\mathcal{U}_1\cup \mathcal{U}_2\right)$; thus, the resulting lattice PWA approximation equals the original control solution at all sample points and in the UO regions that the sample points lie in, as Lemma \ref{lem:lattice_approx} shows.
\subsubsection{Guaranteeing validity of (\ref{eq:cond_gen})}
If (\ref{eq:cond_gen}) is violated, i.e., there is some $\alpha, \beta$, and a point $\bm x_{\gamma} \in \Omega$ such that
\begin{equation}\label{eq:dlargerc}
\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x_{\gamma})>\max\limits_{j \in J_{\leq, \beta}}u_j(\bm x_{\gamma}),
\end{equation}
then at least one of the inequalities
\begin{equation}\label{eq:dlargeru}
\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x_{\gamma})>u^*(\bm x_{\gamma})
\end{equation}
and
\begin{equation}\label{eq:clessu}
\max\limits_{j \in J_{\leq, \beta}}u_j(\bm x_{\gamma})<u^*(\bm x_{\gamma})
\end{equation}
is valid.
This is apparent since, if neither (\ref{eq:dlargeru}) nor (\ref{eq:clessu}) holds, we have
\[
\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x_{\gamma})\leq \max\limits_{j \in J_{\leq, \beta}}u_j(\bm x_{\gamma}),
\]
which contradicts (\ref{eq:dlargerc}).
If (\ref{eq:dlargeru}) holds, then sample points can be added to the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\gamma})$ as in Section \ref{sec:guarantee1} to ensure that
\[
\min\limits_{j \in J_{\geq,i}}u_j(\bm x_k)\leq u_k(\bm x_k)
\]
for all the sample points in $\mathcal{L}(\bm x_{\alpha}, \bm x_{\gamma})$.
For the conjunctive case, if (\ref{eq:clessu}) is valid, we can also add sample points in the line segment $\mathcal{L}(\bm x_{\beta}, \bm x_{\gamma})$ such that
\[
\max\limits_{j \in J_{\leq,i}}u_j(\bm x_k)\geq u_k(\bm x_k)
\]
for all the sample points in $\mathcal{L}(\bm x_{\beta}, \bm x_{\gamma})$.
To check whether (\ref{eq:cond_gen}) is satisfied, the following optimization problem can be solved,
\[
\begin{array}{rl}
\min\limits_{\bm x} & \max\limits_{j \in J_{\leq, i}}u_j(\bm x)-\min\limits_{j \in J_{\geq, k}}u_j(\bm x)\\
s.t. & \bm x \in \Omega,
\end{array}
\]
with $i, k \in \mathbb{N}_1$. If the optimal value is nonnegative for all $i, k \in \mathbb{N}_1$, then (\ref{eq:cond_gen}) holds. The optimization problem can be transformed into an equivalent linear programming (LP) problem,
\begin{equation}\label{eq:lp}
\begin{array}{rl}
\min\limits_{\bm x, y_1, y_2} &y_1+y_2 \\%-\min\limits_{j \in J_{\geq, \alpha}}u_j(\bm x)\\
s.t. & \bm x \in \Omega,\\
{}&u_j(\bm x)\leq y_1, \forall j \in J_{\leq, i}\\
{}&u_j(\bm x)\geq -y_2, \forall j \in J_{\geq, k},
\end{array}
\end{equation}
which is easy to solve as $\Omega$ is a polyhedron.
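As an illustration, (\ref{eq:lp}) can be assembled and solved with an off-the-shelf LP solver. The sketch below (ours) assumes the parameterization $u_j(\bm x)=G_j\bm x+g_j$ and $\Omega=\{\bm x\,|\,H\bm x\leq \bm h\}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def min_gap(G, g, J_le, J_ge, H, h):
    # minimise y1 + y2 s.t. u_j(x) <= y1 (j in J_le),
    # u_j(x) >= -y2 (j in J_ge), H x <= h; variables (x, y1, y2)
    n = G.shape[1]
    c = np.r_[np.zeros(n), 1.0, 1.0]
    rows, rhs = [], []
    for j in J_le:                            #  G_j x - y1 <= -g_j
        rows.append(np.r_[G[j], -1.0, 0.0]); rhs.append(-g[j])
    for j in J_ge:                            # -G_j x - y2 <=  g_j
        rows.append(np.r_[-G[j], 0.0, -1.0]); rhs.append(g[j])
    for row, b in zip(H, h):                  # domain constraints
        rows.append(np.r_[row, 0.0, 0.0]); rhs.append(b)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * (n + 2))
    return res.fun, res.x[:n]   # optimal value and minimiser x
\end{verbatim}
A negative returned value corresponds to a violating point $\bm x_{\gamma}$.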
If a negative optimal value is found, i.e., (\ref{eq:dlargerc}) holds, then, as indicated before, more sample points can be generated until the optimal value is nonnegative for all sample indices. The process of guaranteeing the validity of (\ref{eq:cond_gen}) is shown in Algorithm \ref{alg:addpoints2}.
\algrrule[0.8pt]
\begin{alg}
{Re-sampling in order to guarantee validity of (\ref{eq:cond_gen}).}
\label{alg:addpoints2}
\algrule[0.5pt]
\begin{algorithmic}[1]
\hspace{-4ex} \textbf{Input:} Linear MPC problem, initial sample dataset $\mathcal{X}_1 \times \mathcal{U}_1$. \\
\hspace{-4ex} \textbf{Output:} Additional sample dataset $\mathcal{X}_3 \times \mathcal{U}_3$.
\STATE Initialize $\mathcal{X}_3=\emptyset$, $\mathcal{U}_3=\emptyset$.
\STATE Solve LP problem (\ref{eq:lp}) for all $\bm x_i, \bm x_k \in \mathcal{X}_1$.
\STATE Record the pairs $(i, k)$ and the optimal $\bm x$ for which the optimal value is negative; denote them by $(\alpha, \beta)$ and $\bm x_{\gamma}$, respectively.
\WHILE{$(\alpha, \beta)\neq \emptyset$ }
\FOR{all $(\alpha, \beta)$}
\IF {(\ref{eq:dlargeru}) holds}
\STATE Add points in $\mathcal{L}(\bm x_{\alpha}, \bm x_{\gamma})$ according to Algorithm \ref{alg:addpoints}.
\ELSE
\STATE Add points in $\mathcal{L}(\bm x_{\gamma}, \bm x_{\beta})$ according to Algorithm \ref{alg:addpoints}.
\ENDIF
\STATE Update $\mathcal{X}_3$ and $\mathcal{U}_3$ by incorporating the added points.
\ENDFOR
\STATE Solve LP problem (\ref{eq:lp}) for all $\bm x_i, \bm x_k \in \mathcal{X}_1\cup \mathcal{X}_3$ to determine whether there is a combination $(\alpha, \beta)$ such that (\ref{eq:dlargerc}) holds.
\ENDWHILE
\algrule[0.5pt]
\end{algorithmic}
\end{alg}
\begin{rem}
It is noted that, although (\ref{eq:cond_gen}) is a relaxation of (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}), it is checked over the entire domain $\Omega$, not only at the sample points. Furthermore, in general, $\bm x_{\gamma}$ in (\ref{eq:dlargerc}) is not in the sample set $\mathcal{X}_1$, and to check whether (\ref{eq:dlargeru}) or (\ref{eq:clessu}) holds, we must recompute $u^*(\bm x_{\gamma})$ through lines 7--9 of Algorithm \ref{alg:sampling}.
\end{rem}
A simple one-dimensional example is used to illustrate the process of constructing the disjunctive and conjunctive lattice PWA approximations and the re-sampling procedure.
\begin{exmp}\label{ex1}
Consider the one-dimensional continuous PWA function from \cite{Xu2016irredundant},
\[
u(x)=\left\{\begin{array}{lr}
\ell_1(x)=0.5x+0.5&x \in [0, 1],\\
\ell_2(x)=2x-1&x \in [1, 1.5],\\
\ell_3(x)=2&x \in [1.5, 3.5],\\
\ell_4(x)=-2x+9&x \in [3.5, 4],\\
\ell_5(x)=-0.5x+3&x \in [4, 5],
\end{array}\right.
\]
the plot of which is shown in Fig. \ref{fig:ex1_plot}.
\begin{figure}[htbp]
\centering
\psfrag{x}[c]{$x$}
\psfrag{f(x)}[c]{$u(x)$}
\psfrag{l1}[c]{$\ell_1$}
\psfrag{l2}[c]{$\ell_2$}
\psfrag{l3}[c]{$\ell_3$}
\psfrag{l4}[c]{$\ell_4$}
\psfrag{l5}[c]{$\ell_5$}
\psfrag{x1}[c]{\tiny $(x_1, u(x_1))$}
\psfrag{x2}[c]{\tiny $(x_2, u(x_2))$}
\psfrag{x3}[c]{\tiny $(x_3, u(x_3))$}
\psfrag{x4}[c]{\tiny $(x_4, u(x_4))$}
\psfrag{x5}[c]{\tiny $(x_5, u(x_5))$}
\includegraphics[width=0.8\columnwidth]{fig/ex1_plot.eps}
\caption{One-dimensional continuous PWA function.}
\label{fig:ex1_plot}
\end{figure}
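For later reference, the function and its five affine pieces can be transcribed directly (a plain Python transcription, ours):
\begin{verbatim}
l1 = lambda x: 0.5 * x + 0.5
l2 = lambda x: 2.0 * x - 1.0
l3 = lambda x: 2.0 + 0.0 * x
l4 = lambda x: -2.0 * x + 9.0
l5 = lambda x: -0.5 * x + 3.0

def u(x):
    # the continuous PWA function of this example on [0, 5]
    if x <= 1.0:  return l1(x)
    if x <= 1.5:  return l2(x)
    if x <= 3.5:  return l3(x)
    if x <= 4.0:  return l4(x)
    return l5(x)
\end{verbatim}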
Suppose that we choose the sample points $(x_1, u(x_1))=(0.5, 0.75)$, $(x_2, u(x_2))=(2.4, 2)$, $(x_3, u(x_3))=(3.75, 1.5)$, and $(x_4, u(x_4))=(4.5, 0.75)$. Then the disjunctive lattice PWA approximation is
\[
\begin{array}{r}
\hat{f}_{\rm L, d}=\max\{\min\{\ell_1, \ell_3, \ell_4, \ell_5\}, \min\{\ell_3, \ell_4\},\\
\min\{\ell_1, \ell_3, \ell_4\}, \min\{\ell_1, \ell_3, \ell_5\}\}.
\end{array}
\]
The conjunctive lattice PWA approximation is constructed as
\[
\begin{array}{r}
\hat{f}_{\rm L, c}=\min\{\ell_1, \max\{\ell_3, \ell_5\}, \max\{\ell_4, \ell_5\},
\max\{\ell_4, \ell_5\}\}.
\end{array}
\]
It is apparent that the affine function $\ell_2$ has not been sampled, and we now check whether (\ref{eq:term_less_f}), (\ref{eq:term_ge_f}), and (\ref{eq:cond_gen}) are satisfied.
For the sample point $(x_1, u(x_1))=(0.5, 0.75)$, we have
\[
\min\limits_{j \in J_{\geq, 2}}u_j(x_1)=\min\{\ell_3(x_1), \ell_4(x_1)\}>\ell_1(x_1),
\]
which means that not all affine functions have been sampled and more sample points should be added between $x_1=0.5$ and $x_2=2.4$. Applying Algorithm \ref{alg:addpoints}, we obtain the newly sampled point $x_5=1.2$ with $u(x_5)=2\times 1.2-1=1.4$.
Moreover, comparing the term $\ell_1$ in the conjunctive case and $\min\{\ell_3, \ell_4\}$ in the disjunctive case, we formulate the optimization problem as in (\ref{eq:lp}),
\[
\min\limits_{x \in [0, 5]} \ell_1-\min\{\ell_3, \ell_4\},
\]
which reaches the optimal value $-1.5$ at $x=0$. This means that either (\ref{eq:dlargeru}) or (\ref{eq:clessu}) holds for $x_{\alpha}=2.4$ and $x_{\gamma}=0$. As $x_{\gamma}=0$ is not a sample point, the value $u(x_{\gamma})$ must be calculated through the original continuous PWA function (for the explicit optimal control law, lines 7--9 of Algorithm \ref{alg:sampling} are evaluated). As $u(0)=0.5$, we have
\[
\min\{\ell_3(0), \ell_4(0)\}>u(0),
\]
which means that not all affine functions have been sampled and more sample points should be added between $x_{\gamma}=0$ and $x_2=2.4$. Applying Algorithm \ref{alg:addpoints2}, we add the sample point $x_6=1.3$ with $u(x_6)=1.6$.
By now, all of the distinct affine functions $\ell_1, \ldots, \ell_5$ have been sampled, and we obtain the following lattice PWA approximations:
\[
\begin{array}{r}
\hat{f}_{\rm L, d}=\max\{\min\{\ell_1, \ell_3, \ell_4, \ell_5\}, \min\{\ell_2, \ell_3, \ell_4\},\\
\min\{\ell_2, \ell_3, \ell_4, \ell_5\}, \min\{\ell_1, \ell_2, \ell_3, \ell_4\}, \min\{\ell_1, \ell_2, \ell_3, \ell_5\}\}
\end{array}
\]
and
\[
\begin{array}{r}
\hat{f}_{\rm L, c}=\min\{\max\{\ell_1, \ell_2\}, \max\{\ell_1, \ell_2\},\\
\max\{\ell_1, \ell_3, \ell_5\}, \max\{\ell_4, \ell_5\}, \max\{\ell_4, \ell_5\}\}.
\end{array}
\]
For all $x$, as $\min\{\ell_2, \ell_3, \ell_4\}\geq \min\{\ell_2, \ell_3, \ell_4, \ell_5\}$ and $\min\{\ell_2, \ell_3, \ell_4\}\geq \min\{\ell_1, \ell_2, \ell_3, \ell_4\}$, the disjunctive approximation $\hat{f}_{\rm L, d}$ can be further expressed as
\[
\begin{array}{r}
\hat{f}_{\rm L, d}=\max\{\min\{\ell_1, \ell_3, \ell_4, \ell_5\}, \min\{\ell_2, \ell_3, \ell_4\}, \\
\min\{\ell_1, \ell_2, \ell_3, \ell_5\}\}.
\end{array}
\]
Similarly, the conjunctive approximation can be rewritten as
\[
\hat{f}_{\rm L, c}=\min\{\max\{\ell_1, \ell_2\}, \max\{\ell_1, \ell_3, \ell_5\}, \max\{\ell_4, \ell_5\}\}.
\]
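With the transcription given earlier, the two simplified approximations can be compared with $u$ on a fine grid; such a check confirms numerically that $\hat{f}_{\rm L, d}$ coincides with $u$ while $\hat{f}_{\rm L, c}$ deviates on part of the domain (e.g., near $x=1.75$):
\begin{verbatim}
import numpy as np

xs = np.linspace(0.0, 5.0, 501)
f_d = [max(min(l1(x), l3(x), l4(x), l5(x)),
           min(l2(x), l3(x), l4(x)),
           min(l1(x), l2(x), l3(x), l5(x))) for x in xs]
f_c = [min(max(l1(x), l2(x)),
           max(l1(x), l3(x), l5(x)),
           max(l4(x), l5(x))) for x in xs]
err_d = max(abs(v - u(x)) for v, x in zip(f_d, xs))  # 0
err_c = max(abs(v - u(x)) for v, x in zip(f_c, xs))  # > 0
\end{verbatim}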
Figs. \ref{fig:ex1_fd} and \ref{fig:ex1_fc} show the plots of $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$, respectively.
\begin{figure}[htbp]
\centering
\psfrag{x}[c]{$x$}
\psfrag{fd}[c]{$\hat{f}_{\rm L, d}$}
\psfrag{x1}[c]{\tiny $(x_1, u(x_1))$}
\psfrag{x2}[c]{\tiny $(x_2, u(x_2))$}
\psfrag{x3}[c]{\tiny $(x_3, u(x_3))$}
\psfrag{x4}[c]{\tiny $(x_4, u(x_4))$}
\psfrag{x5}[c]{\tiny $(x_5, u(x_5))$}
\psfrag{x6}[c]{\tiny $(x_6, u(x_6))$}
\includegraphics[width=0.8\columnwidth]{fig/ex1_fd.eps}
\caption{Disjunctive lattice PWA approximation.}
\label{fig:ex1_fd}
\end{figure}
\begin{figure}[htbp]
\centering
\psfrag{x}[c]{$x$}
\psfrag{fc}[c]{$\hat{f}_{\rm L, c}$}
\psfrag{x1}[c]{\tiny $(x_1, u(x_1))$}
\psfrag{x2}[c]{\tiny $(x_2, u(x_2))$}
\psfrag{x3}[c]{\tiny $(x_3, u(x_3))$}
\psfrag{x4}[c]{\tiny $(x_4, u(x_4))$}
\psfrag{x5}[c]{\tiny $(x_5, u(x_5))$}
\psfrag{x6}[c]{\tiny $(x_6, u(x_6))$}
\includegraphics[width=0.8\columnwidth]{fig/ex1_fc.eps}
\caption{Conjunctive lattice PWA approximation.}
\label{fig:ex1_fc}
\end{figure}
It is apparent that for this simple example, after re-sampling, the disjunctive approximation $\hat{f}_{\rm L, d}$ equals $u(x)$, whereas there is a deviation between the conjunctive approximation $\hat{f}_{\rm L, c}$ and $u(x)$. However, both $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ are identical to $u(x)$ at the sample points $x_1, \ldots, x_6$ and in $\Gamma(x_1), \ldots, \Gamma(x_6)$, confirming Lemma \ref{lem:lattice_approx}.
\end{exmp}
\subsection{Simplification of lattice PWA approximation}
In Example \ref{ex1}, duplicated or redundant terms have been removed from the lattice PWA approximations, which simplifies them. When the number of elements in $\mathbb{N}_1$ is large, the evaluation of (\ref{eq:lattice_approximation_d}) and (\ref{eq:lattice_approximation_c}) becomes expensive, and hence simplification is considered in this section.
The simplification of a disjunctive lattice PWA function was addressed in \cite{Xu2016irredundant}, where the subregions of the PWA function are known in detail. In this paper, the subregion information is unavailable, and it is also difficult to derive expressions for the subregion polyhedra from the lattice PWA approximation.
Hence, in this section, the disjunctive and conjunctive lattice PWA approximations are simplified according to the following axiom. Let the set $C$ denote the codomain of the affine functions $u_1, \ldots, u_M$, and define the operations $\bigvee$ and $\bigwedge$ as follows,
\[
u_i \bigvee u_j = \max\{u_i, u_j\}, u_i \bigwedge u_j = \min\{u_i, u_j\}.
\]
It has been shown in \cite{Tarela1975minimization} that the set $C$, together with the operations $\bigvee$ and $\bigwedge$, constitutes a distributive lattice, and the following absorption axiom holds for all $u_i, u_j \in C$:
\[
A1: \begin{array}{l}
u_i \bigvee (u_i \bigwedge u_j)=u_i\\
u_i \bigwedge (u_i \bigvee u_j)=u_i.
\end{array}
\]
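In terms of the stored index sets, A1 means that a term whose index set properly contains another term's index set is redundant and can be dropped; duplicated terms are removed in the same pass. A small Python sketch of this pruning (ours):
\begin{verbatim}
def simplify_terms(index_sets):
    # For max_i min_{j in J_i} u_j: if J_k is a proper subset of
    # J_i, then the min over J_i is pointwise smaller and term i
    # is absorbed (axiom A1); dually for the conjunctive form.
    sets = [frozenset(J) for J in index_sets]
    kept = []
    for i, Ji in enumerate(sets):
        if any(Jk < Ji for Jk in sets):   # absorbed by a subset
            continue
        if Ji in sets[:i]:                # duplicate term
            continue
        kept.append(sorted(Ji))
    return kept
\end{verbatim}
Applied to the disjunctive terms of Example \ref{ex1}, this pruning recovers exactly the three retained terms.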
As noted, the above axiom underlies the simplification of the lattice PWA approximations $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ in Example \ref{ex1}.
The disjunctive and conjunctive lattice PWA approximations of the explicit MPC control law are now illustrated using a small example of a linear discrete-time system.
\begin{exmp}\label{ex2}
Consider the discrete-time double integrator introduced in \cite{Johansson2003piecewise}, whose dynamics can be written as
\[
x_{k+1}=\left[\begin{array}{cc}
1&T_s\\
0&1
\end{array}\right]x_k+\left[\begin{array}{c}
T_s^2\\
T_s
\end{array}\right]u_k,
\]
where the sampling interval is $T_s=0.3$. Consider the MPC problem with $Q=\mathrm{diag}(1,0)$ and $R=1$, where $P$ is the solution of the discrete-time algebraic Riccati equation. The system constraints are $-1 \leq u_k \leq 1$ and $-0.5 \leq x_{k, 2}\leq 0.5$, and the region of interest is set to $[-2.8, 2.8] \times [-0.8, 0.8]$.
To derive the lattice PWA approximation, 441 points ($21\times 21$) are uniformly generated in the region $[-1,1]^2$. Of these 441 points, 68 lie on the boundary of some subregion, and we perturb them so that they move to the interior of a UO region.
There are five distinct affine functions, which are
\[
\begin{array}{l}
u_1=-0.8082x_1-1.1559x_2;\\
u_2 = -3.3333x_2-2.6667;\\
u_3=-3.3333x_2+2.6667; \\
u_4 = -1; u_5=1.
\end{array}
\]
We generate 441 terms; after simplification, the following lattice PWA approximations are obtained,
\begin{equation*}
\hat{f}_{\mathrm{L, d}}(\bm x)=\max\{u_2(\bm x), u_4(\bm x), \min(u_1(\bm x), u_3(\bm x), u_5(\bm x))\}
\end{equation*}
and
\[
\hat{f}_{\mathrm{L, c}}(\bm x)=\min\{\max(u_1(\bm x), u_2(\bm x), u_4(\bm x)), u_3(\bm x), u_5(\bm x)\}.
\]
Readers can verify that the disjunctive and conjunctive approximations are equivalent. Fig. \ref{fig_ex2}(a) shows the optimal MPC controller generated by MPT3 \cite{MPT3}, and the linear subregions are shown in Fig. \ref{fig_ex2}(b). For this example, the lattice PWA approximations are identical to the explicit MPC controller. In the next section, we demonstrate that if all the affine functions have been sampled and the two lattice PWA approximations are identical, then both equal the optimal MPC controller.
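A quick numerical check of this equivalence (ours; the coefficients as printed are rounded, so the observed gap is only zero up to that rounding):
\begin{verbatim}
import numpy as np

u = [lambda x: -0.8082 * x[0] - 1.1559 * x[1],
     lambda x: -3.3333 * x[1] - 2.6667,
     lambda x: -3.3333 * x[1] + 2.6667,
     lambda x: -1.0,
     lambda x: 1.0]

rng = np.random.default_rng(0)
X = rng.uniform([-2.8, -0.8], [2.8, 0.8], size=(10**5, 2))
gap = max(abs(max(u[1](x), u[3](x),
                  min(u[0](x), u[2](x), u[4](x)))
            - min(max(u[0](x), u[1](x), u[3](x)),
                  u[2](x), u[4](x))) for x in X)
\end{verbatim}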
\begin{figure}[htbp]
\centering
\psfrag{x1}[c]{$x_1$}
\psfrag{x2}[c]{$x_2$}
\psfrag{u}[c]{$u^*$}
\subfigure[Controller.]{
\includegraphics[width=0.47\columnwidth]{fig/ex2_controller.eps}}
\subfigure[Region.]{
\includegraphics[width=0.47\columnwidth]{fig/ex2_partition.eps}
}
\caption{Explicit MPC controller in Example \ref{ex2}.}
\label{fig_ex2}
\end{figure}
\end{exmp}
\section{Approximation error and computational complexity}
After evaluating Algorithms \ref{alg:sampling}--\ref{alg:addpoints2}, we obtain the sample dataset $\mathcal{X} \times \mathcal{U}$, where $\mathcal{X}=\mathcal{X}_1\cup \mathcal{X}_2\cup \mathcal{X}_3$ and $\mathcal{U}=\mathcal{U}_1 \cup \mathcal{U}_2 \cup \mathcal{U}_3$. Assuming that $\mathcal{X}=\{\bm x_1, \ldots, \bm x_{N_s}\}$, and given both the disjunctive and conjunctive lattice PWA approximations, the deviation between the approximations and the optimal control law can be derived.
\begin{thm}\label{thm:1}
Suppose that
\[
\hat{f}_{\rm L, d}(\bm x)=\max\limits_{i \in \{1, \ldots, N_s\}}\min\limits_{j \in J_{\geq, i}}u_j(\bm x)
\] and
\[
\hat{f}_{\rm L, c}(\bm x)=\min\limits_{i \in \{1, \ldots, N_s\}}\max\limits_{j \in J_{\leq, i}}u_j(\bm x)
\] are the disjunctive and conjunctive lattice PWA approximations of the optimal control law $u^*(\bm x)$ over the domain $\Omega$, and that Assumption \ref{assump1} holds. Denoting
\begin{equation}\label{eq:epsilon}
\varepsilon = \max\limits_{\bm x \in \Omega}\left(\hat{f}_{\rm L, c}(\bm x)-\hat{f}_{\rm L, d}(\bm x)\right),
\end{equation}
we then have
\begin{equation}\label{eq:concl1_thm2}
-\varepsilon\leq \hat{f}_{\rm L, d}(\bm x)-u^*(\bm x) \leq 0
\end{equation}
and
\begin{equation}\label{eq:concl2_thm2}
0\leq \hat{f}_{\rm L, c}(\bm x)-u^*(\bm x) \leq \varepsilon.
\end{equation}
Furthermore, if $\varepsilon=0$, we have
\begin{equation}\label{eq:identical}
\hat{f}_{\rm L, d}(\bm x)=\hat{f}_{\rm L, c}(\bm x)=u^*(\bm x), \forall \bm x \in \Omega.
\end{equation}
\end{thm}
\begin{pf}
According to the proof of Lemma \ref{lem:lattice_approx}, if Assumption \ref{assump1} holds, we have (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}). Then the following is valid,
\[
\hat{f}_{\rm L, d}(\bm x) \leq u^*(\bm x)\leq \hat{f}_{\rm L, c}(\bm x) , \forall \bm x\in \Omega.
\]
According to (\ref{eq:epsilon}), we have (\ref{eq:concl1_thm2}) and (\ref{eq:concl2_thm2}).
Furthermore, if $\varepsilon=0$, then both approximations are identical to the optimal control law in the region $\Omega$, i.e., (\ref{eq:identical}) holds.
\end{pf}
For problems of high dimension, it is difficult to check whether $\hat{f}_{\rm L, d}(\bm x)=\hat{f}_{\rm L, c}(\bm x)$ for all $\bm x \in \Omega$. In practice,
we perform this check by generating a large number of i.i.d. sample points, which constitute a validation dataset $\mathcal{X}_{\rm validate}=\{\bm x_i, i=1, \ldots, N_v\}$. For each sample point, we define an indicator function as
\[
I(\bm x_i):=\left\{
\begin{array}{cc}
1&\mbox{if}~ \hat{f}_{\rm L, d}(\bm x_i)=\hat{f}_{\rm L, c}(\bm x_i)\\
0& \mbox{if}~ \hat{f}_{\rm L, d}(\bm x_i)\neq \hat{f}_{\rm L, c}(\bm x_i),
\end{array}
\right.
\]
and then the random variables $I(\bm x_i), i=1,\ldots, N_v$ are also i.i.d. Denoting the probability of $I(\bm x_i)=1$ by $\mu$, i.e., $\mathbb{P}[I(\bm x_i)=1]=\mu$, then, according to Hoeffding's inequality \cite{Hoeffding1994probability}, we have
\[
\mathbb{P}[|\mu-\bar{I}|\geq \epsilon]\leq 2\exp(-2N_v \epsilon^2),
\]
in which $\bar{I}=\frac{1}{N_v}\sum\limits_{k=1}^{N_v}I(\bm x_k)$. Therefore, we have
\[
\mathbb{P}[\mu \geq \bar{I}-\epsilon]>1-2\exp(-2N_v \epsilon^2),
\]
meaning that with confidence $1-2\exp(-2N_v \epsilon^2)$, the probability that the lattice PWA approximations $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ are identical is larger than $\bar{I}-\epsilon$. In particular, if $\bar{I}=1$, then
by choosing a small enough threshold $\epsilon$ we can say that, with confidence $1- 2\exp(-2N_v \epsilon^2)$, the probability that the lattice PWA approximations $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ are identical, and hence both equal the optimal control law, is larger than $1-\epsilon$. For example, if $\epsilon=10^{-3}$, then $N_v\geq 5\times 10^6$ ensures that the confidence is almost 1.
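The required validation-set size for a target confidence follows directly from this bound; a small helper function (ours):
\begin{verbatim}
import math

def validation_size(eps, delta):
    # smallest N_v with 1 - 2*exp(-2*N_v*eps^2) >= delta
    return math.ceil(math.log(2.0 / (1.0 - delta))
                     / (2.0 * eps ** 2))

print(validation_size(1e-3, 0.9999))  # about 4.95e6
\end{verbatim}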
The process for checking whether the lattice PWA approximations are equal is shown in Algorithm \ref{alg:check_equiv}, which also generates additional sample data if the approximations are not equal. After evaluating Algorithm \ref{alg:check_equiv}, we obtain statistically error-free lattice PWA approximations to the linear explicit MPC control law.
\algrrule[0.8pt]
\begin{alg}
{Deriving statistically error-free lattice PWA approximations to linear explicit MPC control law.}
\label{alg:check_equiv}
\algrule[0.5pt]
\begin{algorithmic}[1]
\hspace{-4ex} \textbf{Input:} Disjunctive and conjunctive lattice PWA approximations $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$.\\
\hspace{-4ex} \textbf{Output:} Confidence $\delta$ and probability $\mathbb{P}$, such that with confidence $\delta$, the probability that $\hat{f}_{\rm L, d}(\bm x)=\hat{f}_{\rm L, c}(\bm x), \forall \bm x \in \Omega$ is larger than $\mathbb{P}$.
\STATE Initialize $\bar{I}=0$ and $\mathcal{X}_{\rm resample}=\emptyset$.
\WHILE{$\bar{I}<1$}
\STATE Randomly generate $N_v$ validation points $\bm x_1, \ldots, \bm x_{N_v}$ in $\Omega$ and calculate the corresponding function values $\hat{f}_{\rm L, d}(\bm x_k)$ and $\hat{f}_{\rm L, c}(\bm x_k)$.
\FOR{$k=1:N_v$}
\STATE Evaluate the indicator $I(\bm x_k)$ from the difference $\hat{f}_{\rm L, d}(\bm x_k)-\hat{f}_{\rm L, c}(\bm x_k)$.
\IF{$I(\bm x_k) \neq 1$}
\STATE Add $\bm x_k$ to $\mathcal{X}_{\rm resample}$.
\ENDIF
\ENDFOR
\STATE $\bar{I}=\frac{1}{N_v}\sum\limits_{k=1}^{N_v}I(\bm x_k)$.
\STATE Update $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ according to $\mathcal{X}_{\rm resample}$.
\STATE Set $\mathcal{X}_{\rm resample}=\emptyset$.
\ENDWHILE
\STATE The confidence $\delta=1- 2\exp(-2N_v \epsilon^2)$, the probability $\mathbb{P}=1-\epsilon$, in which $\epsilon$ is a user-defined value.
\algrule[0.5pt]
\end{algorithmic}
\end{alg}
After the evaluation of Algorithm \ref{alg:check_equiv}, we can guarantee that with confidence $\delta=1- 2\exp(-2N_v \epsilon^2)$ the probability that the approximated lattice PWA control laws $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ coincide with the optimal control law is larger than $1-\epsilon$. This is because, if $\bar{I}\neq 1$, updating $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ according to $\mathcal{X}_{\rm resample}$ brings the lattice PWA approximations closer to $u^*$, as Lemma \ref{lem:closer} shows.
\begin{lem}\label{lem:closer}
Given $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$, and supposing that Assumption \ref{assump1} holds, if $\bar{I}\neq 1$, for any $\bm x_k \in \mathcal{X}_{\rm resample}$, we denote
\[
\hat{f}_1=\max\{\hat{f}_{\rm L, d}, \min\limits_{j \in J_{\geq,k}}u_j(\bm x)\}
\]
and
\[
\hat{f}_2=\min\{\hat{f}_{\rm L, c}, \max\limits_{j \in J_{\leq,k}}u_j(\bm x)\}
\]
and then we have
\begin{equation}\label{eq:concl_lem22}
\|\hat{f}_1-u^*\|_{\infty}\leq \|\hat{f}_{\rm L, d}-u^*\|_{\infty}, \|\hat{f}_2-u^*\|_{\infty}\leq \|\hat{f}_{\rm L, c}-u^*\|_{\infty}.
\end{equation}
\end{lem}
\begin{pf}
As Assumption \ref{assump1} holds, for all $\bm x_k \in \mathcal{X}_{\rm resample}$, we have
\[
\min\limits_{j \in J_{\geq, k}}u_j(\bm x) \leq u^*(\bm x), \forall \bm x \in \Omega
\]
and
\[
\max\limits_{j \in J_{\leq, k}}u_j(\bm x) \geq u^*(\bm x), \forall \bm x \in \Omega.
\]
Hence we have
\[
\hat{f}_{\rm L, d}(\bm x)\leq \hat{f}_1(\bm x)\leq u^*(\bm x),
\]
and the following inequality,
\begin{equation}\label{eq:uf_disjunctive}
u^*(\bm x)-\hat{f}_1(\bm x) \leq u^*(\bm x)-\hat{f}_{\rm L, d}(\bm x), \forall \bm x \in \Omega.
\end{equation}
For the conjunctive case, similarly we have
\[
u^*(\bm x)\leq \hat{f}_2(\bm x) \leq \hat{f}_{\rm L, c}(\bm x), \forall \bm x \in \Omega,
\]
which means that
\[
\hat{f}_2(\bm x)-u^*(\bm x) \leq \hat{f}_{\rm L, c}(\bm x)-u^*(\bm x), \forall \bm x \in \Omega.
\]
Together with (\ref{eq:uf_disjunctive}), we have (\ref{eq:concl_lem22}).
\end{pf}
\subsection{Complexity analysis}
\subsubsection{Online evaluation}\label{sec:online}
Assuming that there are $\tilde{N}$ terms in the final approximation, according to \cite{Wen2009analytical}, the worst-case online evaluation complexity is $O(\tilde{N}^2)$. In general, we have $\tilde{N}\ll N_s$, and hence the online evaluation is very fast.
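Concretely, the online evaluation of the disjunctive form is a plain max--min scan over the stored index sets; with the parameterization $u_j(\bm x)=G_j\bm x+g_j$, a sketch (ours) reads:
\begin{verbatim}
import numpy as np

def eval_lattice_d(x, G, g, terms):
    # terms: list of index sets J_{>=, i};
    # returns max_i min_{j in terms[i]} (G[j] @ x + g[j])
    vals = G @ x + g
    return max(vals[list(J)].min() for J in terms)
\end{verbatim}
The conjunctive form is evaluated analogously with min and max exchanged.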
\subsubsection{Storage requirements}
Assuming that the disjunctive lattice PWA approximation has $\tilde{N}$ terms, we must store $(n_x+1)\cdot M$ real numbers and $\sum\limits_{i=1}^{\tilde{N}}|J_{\geq, i}|$ integer numbers, in which $|J_{\geq, i}|$ is the number of elements in the set $J_{\geq, i}$.
As $|J_{\geq,i} |\leq M$, at most $(n_x+1)\cdot M$ real numbers and $M\cdot \tilde{N}$ integer numbers must be stored in total.
In many cases, we have $\tilde{N}\ll N_s$, and hence the storage requirement for the disjunctive lattice PWA approximation is very small.
For the conjunctive lattice PWA approximation, the same result holds.
\subsubsection{Offline complexity}
The offline time complexity of deriving equal disjunctive and conjunctive lattice PWA approximations consists of two parts: one concerns the sampling and re-sampling of training points needed to obtain the preliminary lattice PWA approximations, and the other is the complexity of evaluating Algorithm \ref{alg:check_equiv}. Lemma \ref{lem:last} describes this offline time complexity.
\begin{lem}\label{lem:last}
Assuming that the sample domain is regular, then the worst-case complexity of deriving the disjunctive and conjunctive lattice PWA approximations is $O(N_v\cdot N_s^2)$, in which $N_s$ and $N_v$ are the numbers of sample points in $\mathcal{X}$ and $\mathcal{X}_{\rm validate}$, respectively.
\begin{pf}
As indicated previously, the offline complexity comes from generating the sample set $\mathcal{X}_1$, as Algorithm \ref{alg:sampling} shows, re-sampling in order to make (\ref{eq:term_less_f}) and (\ref{eq:term_ge_f}) hold (Algorithms \ref{alg:addpoints} and \ref{alg:addpoints2}), simplification according to Axiom 1, and generating statistically identical $\hat{f}_{\rm L, d}$ and $\hat{f}_{\rm L, c}$ according to Algorithm \ref{alg:check_equiv}.
The number of sample points in Algorithm \ref{alg:sampling} can be calculated as $N_1=\prod\limits_{i=1}^n \frac{b_i-a_i}{\delta_i}$, in which $[a_i, b_i]$ is the sample bound for the $i$-th component and $\delta_i$ is the sample interval. The complexity of evaluating Algorithm \ref{alg:sampling} consists of solving $N_1$ convex quadratic programming problems and the corresponding KKT conditions, which basically amounts to solving linear equations.
Solving $N_1$ convex quadratic programs with $N_p\cdot n_u$ decision variables costs approximately $O(N_1 \cdot L^2(N_p\cdot n_u)^4)$ using an interior-point algorithm \cite{Ye1989extension}, in which $L$ is the bit length of the quadratic programming problem. The dominant algorithmic operation in solving the KKT conditions is solving $N_1$ matrix inversion problems, the worst-case complexity of which is $O(N_1|\mathcal{A}^*|^3)$ using the Gauss--Jordan elimination algorithm, in which $|\mathcal{A}^*|$ is the number of active constraints. As $|\mathcal{A}^*|\leq p$, where $p$ is the number of constraints in QP (\ref{mp-qp2}), the worst-case complexity for solving the KKT conditions is $O(N_1 p^3)$.
We now discuss the worst-case complexity of evaluating Algorithm \ref{alg:addpoints}. For two points $\bm x_{\alpha}$ and $\bm x_{\beta}$, if (\ref{eq:term_less_f}) is violated, the evaluation of Algorithm \ref{alg:addpoints} is basically a binary search method for identifying omitted subregions in the line segment $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$. The maximum number of subregions appearing in $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$ is $\frac{d_{\alpha, \beta}}{\delta_M}$, where $d_{\alpha, \beta}$ is the length of $\mathcal{L}(\bm x_{\alpha}, \bm x_{\beta})$ and $\delta_M$ the minimum measure of subregions. The binary searching of the subregions yields a worst-case complexity of $O(\log_2\frac{d_{\alpha, \beta}}{\delta_M})$. Supposing that there are $N_t$ point pairs such that (\ref{eq:term_less_f}) are violated, then the worst-case complexity is $O(N_t \cdot \log_2\frac{d_{\alpha, \beta}}{\delta_M})$. In general, the number $N_t$ is closely related to the sample interval $\delta$, which then depends on the number of sample points, $N_1$ in Algorithm 1. A larger $N_1$ will result in a smaller number of $N_t$ and $d_{\alpha, \beta}$; hence, the complexity of Algorithm \ref{alg:addpoints} can be decreased by increasing the complexity of Algorithm 1.
The complexity of evaluating Algorithm \ref{alg:addpoints2} can be discussed similarly. In Algorithm \ref{alg:addpoints2}, assuming that $L$ is the bit length of the LP problem (\ref{eq:lp}), at most $N_1^2$ LP problems must be solved, which in the worst case has a complexity of $O(N_1^2 \cdot n_x^{3.5}\cdot L)$. Together with the evaluation of Algorithm \ref{alg:addpoints} in line 7 or 9, the worst-case complexity of Algorithm \ref{alg:addpoints2} is $O(N_1^2 \cdot n_x^{3.5}\cdot L)+O(N_1^2 \log_2\frac{d_{\alpha, \beta}}{\delta_M})$. As in general $\log_2\frac{d_{\alpha, \beta}}{\delta_M} \ll n_x^{3.5}$, the worst-case complexity of evaluating Algorithm \ref{alg:addpoints2} is $O(N_1^2 \cdot n_x^{3.5}\cdot L)$.
After evaluating Algorithms \ref{alg:sampling}--\ref{alg:addpoints2}, there are $N_s$ sample points. The simplification procedure requires the comparison of the sets $J_{\geq, i}$ ($J_{\leq, i}$) for $i=1, \ldots, N_s$, which yields at most $\binom{N_s}{2}=\frac{N_s(N_s-1)}{2}$ comparisons. For each comparison, at most $M^2$ literals need to be compared. Hence the worst-case complexity of the simplification is $O(M^2N_s^2)$.
For the last part, the most time-consuming part of Algorithm \ref{alg:check_equiv} is the sampling of $N_v$ validation points in the lattice PWA approximations. As indicated in Section \ref{sec:online}, the function evaluation process of lattice PWA approximations has a worst-case complexity of $O(\tilde{N}^2)$, in which $\tilde{N}<N_s$. There may be several cycles for the generation of $N_v$ validation points, and we find in the simulations that the number of cycles is small, usually less than 10, and hence the worst-case complexity of evaluating Algorithm \ref{alg:check_equiv} can be written as $O(N_v\cdot N_s^2)$.
In general, $N_1<N_s$, $p\ll N_s$, $N_t \ll N_1$, and $M\ll N_v$, so the total worst-case complexity is $O(N_v \cdot N_s^2)$.
\end{pf}
\end{lem}
It is noted that $O(N_v \cdot N_s^2)$ is a worst-case bound; the simulation results show that the offline calculation is not time-consuming in practice.
\section{Simulation results}
\begin{exmp}\label{example1}
Consider the following linear system taken from \cite{Borrelli2003constrained}:
\[
x_{k+1}=\left[
\begin{array}{cccc}
4&-1.5&0.5&-0.25\\
4&0&0&0\\
0&2&0&0\\
0&0&0.5&0
\end{array}
\right]x_k+\left[
\begin{array}{c}
0.5\\
0\\
0\\
0
\end{array}
\right]u_k
\]
The system is subject to input constraints $-1 \leq u_k\leq 1$ and componentwise state constraints $|x_k|\leq \left[\begin{array}{cccc}
10&10&10&10
\end{array}
\right]^T$. The MPC controller is designed with $Q = \mathrm{diag}\{1, 1, 1, 1\}$, $R=0.01$, and $P = 0$. For this example, the explicit solution is computed with horizon $N_p=10$, resulting in a PWA function with 767 polyhedral regions and 108 distinct affine functions, and it is extremely time consuming to derive an accurate representation of the optimal control law \cite{Xu2016irredundant}.
To construct the disjunctive and conjunctive lattice PWA approximations, 4,096 samples are generated uniformly in the region $\Omega = [-1, 1]^4$, and 28 distinct affine functions are sampled. By running Algorithm \ref{alg:addpoints}, one additional affine function is identified. After evaluating Algorithm \ref{alg:addpoints}, (\ref{eq:cond_gen}) is satisfied, and there is no need to run Algorithm \ref{alg:addpoints2}.
The evaluation of Algorithm \ref{alg:check_equiv} results in equivalent disjunctive and conjunctive lattice PWA approximations, both with 16 terms. All the computations in this paper are implemented in MATLAB 2016b (MathWorks, USA) on a 2.7-GHz Intel Core i7 computer.
The entire offline calculation time is 100.1507s.
The number of parameters for both approximations is 208. We also use the MPT solution for comparison; the number of parameters stored in the MPT solution is 39,230 (including the parameters for local affine functions and local regions). The average online evaluation times (over 20,000 runs) for the MPT solution, the disjunctive lattice PWA approximation, and the conjunctive lattice PWA approximation are 0.0036s, $1.3263\times 10^{-4}$s, and $1.3620 \times 10^{-4}$s, respectively.
Testing on $5\times 10^6$ data points shows that the two lattice PWA approximations are identical in the region $[-1, 1]^4$. By setting $\epsilon=10^{-3}$, it can be concluded that with confidence $\delta =0.9999$ the probability that the approximated lattice PWA control laws equal the optimal control law is larger than 0.999. For the $5\times 10^6$ points, the optimal control law is also calculated and the conclusion is verified, i.e., the lattice PWA approximations are error-free.
\end{exmp}
\begin{exmp}\label{example2}
Consider an example taken from \cite{Karg2020efficient}: the inverted pendulum on a cart. The state consists of the angle and angular speed of the pole, $\Phi$ and $\dot{\Phi}$, and the position and speed of the cart, $s$ and $\dot{s}$. The state is constrained componentwise by $|\bm x| \leq [1, 1.5, 0.35, 1.0]^T$. The input is the force applied to the cart, constrained by $|u|\leq 1$. The discrete-time dynamics are given by
\[
A=\left[
\begin{array}{cccc}
1&0.1&0&0\\
0&0.9818&0.2673&0\\
0&0&1&0.1\\
0&-0.0455&3.1182&1
\end{array}
\right], B=\left[\begin{array}{c}
0\\
0.1818\\
0\\
0.4546
\end{array}\right].
\]
The prediction horizon is taken to be $N=10$. The cost matrices are $Q = \mathrm{diag}\{2, 2, 2, 2\}$, $R=0.01$, and $P = 0$. According to the MPT toolbox, the optimal control solution is a PWA function of the state $\bm x$ with 2,271 polyhedral regions, which is much more complex than the PWA solution in Example \ref{example1}.
In \cite{Karg2020efficient}, 88,341 samples were generated to train an approximated controller, which is basically a deep PWL neural network. Here,
in order to construct the disjunctive and conjunctive lattice PWA approximations, similar to Example \ref{example1}, only 4,096 samples are generated uniformly in the region $\Omega =[-0.6, 0.6]\times [-1, 1]\times [-0.2, 0.2]\times [-0.6, 0.6]$. In this case, there are only five distinct affine functions, which is surprising, as the full PWA function is complex. This is due to the choice of the region. For a larger region, say $\Omega_2= [-0.9, 0.9]\times [-1.2, 1.2]\times [-0.25, 0.25]\times [-0.9, 0.9]$, there are 47 distinct affine functions. However, there are a significant number of infeasible state points in $\Omega_2$, and according to the state trajectory in \cite{Karg2020efficient}, the region $\Omega$ suffices. It is noted that the domain of interest should be selected carefully in the future, and perhaps a data-driven method can be used to estimate the recursively feasible set.
The evaluation of Algorithm \ref{alg:sampling}-\ref{alg:check_equiv} results in the disjunctive and conjunctive lattice PWA approximations, both with three terms.
The entire offline calculation time is 210.5173s.
The number of parameters for both approximations is 33, while the number of parameters stored in the MPT solution is 104,535. The average online evaluation times for the MPT solution, the disjunctive lattice PWA approximation, and the conjunctive lattice PWA approximation are 0.0105s, $3.0607\times 10^{-5}$s, and $2.5525 \times 10^{-5}$s, respectively. It is apparent that the much simpler approximation results in a far lower online computational burden.
Testing on $5\times 10^6$ data points shows that the two lattice PWA approximations are identical in the region $\Omega$. By setting $\epsilon=10^{-3}$, it can be concluded that with confidence $\delta =0.9999$ the probability that the approximated lattice PWA control laws equal the optimal control law is larger than 0.999. For the $5\times 10^6$ points, the optimal explicit linear MPC control law is also calculated, and it is found that the lattice PWA approximations are error-free in $\Omega$.
Fig. \ref{ex2_fig} shows one exemplary closed-loop simulation of the example, and we can see from the figure that the optimal state trajectory and the trajectory with the lattice PWA approximations as inputs are identical.
\begin{figure}[htbp]
\centering
\psfrag{Phi}[c]{$\Phi$}
\psfrag{u}[c]{$u$}
\subfigure[]{
\includegraphics[width=0.8\columnwidth]{fig/ex2_state.eps}}
\quad
\subfigure[]{
\includegraphics[width=0.8\columnwidth]{fig/ex2_control.eps}}
\caption{One exemplary closed-loop simulation of Example \ref{example2}.}
\label{ex2_fig}
\end{figure}
It can be seen from the figure that even when the state trajectory leaves the predefined region $\Omega$, the approximations still equal the optimal control law. However, in this paper, we can only guarantee that the approximated control law equals the optimal control law within the predefined region. The conditions under which the approximations and the optimal control law coincide outside the predefined region will be explored in our future work.
\end{exmp}
\section{Conclusions and Future work}
In this paper, we have presented disjunctive and conjunctive lattice PWA approximations of the explicit linear MPC control law. The lattice PWA approximations and the exact control law are identical at the sample points and in the UO regions that contain the sample points as interior points. Furthermore, under the assumption that all the affine functions have been identified in the domain of interest, if the disjunctive and conjunctive lattice PWA approximations are identical, then both are equivalent to the optimal control law. A re-sampling procedure for ensuring satisfaction of this assumption has also been provided. The two kinds of lattice PWA approximations have been simplified to further reduce the storage and online evaluation complexity. The online and offline computational complexity, as well as the storage requirements, have been analyzed. Simulation results show that with a moderate number of sample points we can obtain statistically error-free lattice PWA approximations at relatively small computational cost.
In the future, the domain of interest should be treated carefully and lattice PWA approximations with bounded approximation error will be considered, for which the corresponding feasibility and stability analysis will be provided.
\bibliographystyle{plain}
\section{\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\bfserie
\centering
}}
\def\@secnumfont{\bfseries}
\makeatother
\setlength{\textheight}{19.5 cm}
\setlength{\textwidth}{12.5 cm}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\numberwithin{equation}{section}
\setcounter{page}{1}
\usepackage{amsmath,amsthm,amssymb,amsbsy}
\usepackage[all]{xy}
\usepackage{chngcntr}
\counterwithin{figure}{section}
\newcommand{\mathbb{A}}{\mathbb{A}}
\newcommand{\mathbb{B}}{\mathbb{B}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathbb{D}}{\mathbb{D}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{F}}{\mathbb{F}}
\newcommand{\mathbb{G}}{\mathbb{G}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{J}}{\mathbb{J}}
\newcommand{\mathbb{K}}{\mathbb{K}}
\newcommand{\mathbb{L}}{\mathbb{L}}
\newcommand{\mathbb{M}}{\mathbb{M}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{O}}{\mathbb{O}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{\mathbb{U}}{\mathbb{U}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathbb{W}}{\mathbb{W}}
\newcommand{\mathbb{X}}{\mathbb{X}}
\newcommand{\mathbb{Y}}{\mathbb{Y}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathfrak{A}}{\mathfrak{A}}
\newcommand{\mathfrak{B}}{\mathfrak{B}}
\newcommand{\mathfrak{C}}{\mathfrak{C}}
\newcommand{\mathfrak{D}}{\mathfrak{D}}
\newcommand{\mathfrak{E}}{\mathfrak{E}}
\newcommand{\mathfrak{F}}{\mathfrak{F}}
\newcommand{\mathfrak{G}}{\mathfrak{G}}
\newcommand{\mathfrak{H}}{\mathfrak{H}}
\newcommand{\mathfrak{I}}{\mathfrak{I}}
\newcommand{\mathfrak{J}}{\mathfrak{J}}
\newcommand{\mathfrak{K}}{\mathfrak{K}}
\newcommand{\mathfrak{L}}{\mathfrak{L}}
\newcommand{\mathfrak{M}}{\mathfrak{M}}
\newcommand{\mathfrak{N}}{\mathfrak{N}}
\newcommand{\mathfrak{O}}{\mathfrak{O}}
\newcommand{\mathfrak{P}}{\mathfrak{P}}
\newcommand{\mathfrak{Q}}{\mathfrak{Q}}
\newcommand{\mathfrak{R}}{\mathfrak{R}}
\newcommand{\mathfrak{S}}{\mathfrak{S}}
\newcommand{\mathfrak{T}}{\mathfrak{T}}
\newcommand{\mathfrak{U}}{\mathfrak{U}}
\newcommand{\mathfrak{V}}{\mathfrak{V}}
\newcommand{\mathfrak{W}}{\mathfrak{W}}
\newcommand{\mathfrak{X}}{\mathfrak{X}}
\newcommand{\mathfrak{Y}}{\mathfrak{Y}}
\newcommand{\mathfrak{Z}}{\mathfrak{Z}}
\newcommand{\ldb}{[\hspace{-1.5pt}[}
\newcommand{\rdb}{]\hspace{-1.5pt}]}
\newcommand{\biggldb}{\bigg[\hspace{-3.5pt}\bigg[}
\newcommand{\biggrdb}{\bigg]\hspace{-3.5pt}\bigg]}
\newcommand{\ldp}{(\hspace{-1.5pt}(}
\newcommand{\rdp}{)\hspace{-1.5pt})}
\newcommand{\eqlaw}{\stackrel{\mathcal{D}}=}
\newcommand{\claw}{\stackrel{\mathcal{D}}\longrightarrow}
\newcommand{\cas}{\stackrel{\mathrm{a.s.}}\longrightarrow}
\newcommand{\cmean}{\stackrel{\mathcal{L}_1}\longrightarrow}
\newcommand{\cp}{\stackrel{P}\longrightarrow}
\newcommand{\cuc}{\stackrel{uc}\longrightarrow}
\newcommand{\cucp}{\stackrel{ucp}\longrightarrow}
\newcommand{\clp}[1]{\stackrel{\mathcal{L}^{#1}}\longrightarrow}
\newcommand{\cmart}{\stackrel{\mathcal{M}^2}\longrightarrow}
\newcommand{\cwk}{\stackrel{wk}\longrightarrow}
\newcommand{\io}{\textrm{ i.o.}}
\newcommand{\abf}{\textrm{ a.b.f.}}
\newcommand{\df}[1]{\,\mathrm{d}#1}
\newcommand{\id}{\mathrm{id}}
\newcommand{\eps}{\varepsilon}
\newcommand{\foralls}{\;\forall\;}
\newcommand{\existss}{\;\exists\;}
\newcommand{\spann}{\mathrm{span}\;}
\newcommand{\spanc}{\overline{\mathrm{span}}\;}
\newcommand{\supp}{\mathrm{supp }}
\newcommand{\diam}{\mathrm{diam }}
\newcommand{\cnvup}{\uparrow\uparrow}
\newcommand{\cnvdn}{\downarrow\downarrow}
\newcommand{\Log}{\mathrm{Log}}
\newcommand{\sgn}{\mathrm{sgn}}
\newcommand{\diag}{\mathrm{diag}}
\newcommand{\tr}{\mathrm{tr\;}}
\newcommand{\cl}{\mathrm{cl\;}}
\newcommand{\cov}{\mathrm{Cov}}
\newcommand{\corr}{\mathrm{Corr}}
\newcommand{\sbot}{\bot\hspace{-7pt}\bot}
\newcommand{\sigam}{\Sigma^\mathcal{A}}
\newcommand{\sigpr}{\Sigma^p}
\newcommand{\sigpo}{\Sigma^\pi}
\newcommand{\contc}{\textbf{\upshape c}}
\newcommand{\discd}{\textbf{\upshape d}}
\newcommand{\boundb}{\textbf{\upshape b}}
\newcommand{\locall}{\ell}
\newcommand{\fvfv}{\textbf{\upshape fv}}
\newcommand{\iviv}{\textbf{\upshape iv}}
\newcommand{\svsv}{\textbf{\upshape sv}}
\newcommand{\msq}{\mathcal{M}^2}
\newcommand{\msp}{\mathcal{M}}
\newcommand{\um}{\mathcal{M}^u}
\newcommand{\aaf}{\mathcal{A}}
\newcommand{\aai}{\mathcal{A}^i}
\newcommand{\aal}{\mathcal{A}^i_\locall}
\newcommand{\aab}{\mathcal{A}^b}
\newcommand{\vvf}{\mathcal{V}}
\newcommand{\vvi}{\mathcal{V}^i}
\newcommand{\vvl}{\mathcal{V}^i_\locall}
\newcommand{\vvb}{\mathcal{V}^b}
\begin{document}
\setlength{\parindent}{0cm}
\setlength{\parskip}{0.5cm}
\title{Intervention in Ornstein-Uhlenbeck SDEs}
\author{Alexander Sokol}
\address{Alexander Sokol: Institute of Mathematics, University of
Copenhagen, 2100 Copenhagen, Denmark}
\email{[email protected]}
\urladdr{http://www.math.ku.dk/$\sim$alexander}
\subjclass[2010] {Primary 60G15}
\keywords{Causality, Intervention, SDE, Ornstein-Uhlenbeck process, Stationary
distribution.}
\begin{abstract}
We introduce a notion of intervention for stochastic differential
equations and a corresponding causal interpretation. For the case of
the Ornstein-Uhlenbeck SDE, we show that the SDE resulting from a
simple type of intervention again is an Ornstein-Uhlenbeck SDE. We
discuss criteria for the existence of a stationary distribution for
the solution to the intervened SDE. We illustrate the effect of
interventions by calculating the mean and variance in the stationary
distribution of an intervened process in a particularly simple case.
\end{abstract}
\maketitle
\noindent
\section{Introduction}
Causal inference for continuous-time processes is a field in ongoing
development. Similar to causal inference for graphical models, see
\cite{MR2548166}, one of the primary objectives for causal inference for continuous-time
processes is to identify the effect of an intervention given
assumptions on the distribution and causal structure of the observed
continuous-time process.
Several flavours of causal inference are available for continuous-time
processes, see for example \cite{MR1403234,MR2575938,MR2811860}. In this
paper, we outline a causal interpretation of stochastic differential
equations and a corresponding notion of intervention, we calculate the distribution of an intervened
Ornstein-Uhlenbeck SDE, and we calculate analytical expressions
for the mean and variance of the stationary distribution of the
resulting process for particular examples of interventions.
\section{Causal interpretation of stochastic differential equations}
\label{sec:CausalInterpretation}
Consider a filtered probability space
$(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},P)$ satisfying the usual conditions,
see \cite{MR2273672} for the definition of this and other notions related to
continuous-time stochastic processes. Let $Z$ be a $d$-dimensional
semimartingale and assume that $a:\mathbb{R}^p\to\mathbb{M}(p,d)$ is a Lipschitz
mapping, where $\mathbb{M}(p,d)$ denotes the space of real $p\times d$
matrices. Consider the stochastic differential equation (SDE)
\begin{align}
X^i_t &= x^i_0 + \sum_{j=1}^d \int_0^t a_{ij}(X_{s-})\df{Z^j}_s,
\qquad i\le p.\label{eq:MainSDE}
\end{align}
By the Lipschitz property of $a$, it holds by Theorem V.7 of \cite{MR2273672} that there exists a
pathwisely unique solution to (\ref{eq:MainSDE}). The following
definition yields a causal interpretation of (\ref{eq:MainSDE}) based
on simple substitution and inspired by ideas outlined in Section 4.1 of \cite{MR2993496}.
\begin{definition}
\label{def:SDEIntervention}
Consider some $m\le p$ and $c\in\mathbb{R}$. The $(p-1)$-dimensional intervened SDE arising from the
intervention $X^m := c$ is defined to be
\begin{align}
U^i_t &= x^i_0 + \sum_{j=1}^d \int_0^t b_{ij}(U_{s-})\df{Z^j}_s
\textrm{ for } i\le p\textrm{ with } i\neq m, \label{eq:IntervenedSDE}
\end{align}
where $b_{ij}(y_1,\ldots,y_{m-1},y_{m+1},\ldots,y_p)=a_{ij}(y_1,\ldots,c,\ldots,y_p)$, with
the $c$ in the $m$'th coordinate. Letting $U$ be the unique
solution to the SDE and defining $Y$ by putting $Y=(U^1,\ldots,U^{m-1},c,U^{m+1},\ldots,U^p)$, we refer to $Y$ as the intervened process and write $(X|X^m := c)$ for $Y$.
\end{definition}
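In code, the substitution defining $b$ amounts to freezing the $m$'th argument of the coefficient map; the following Python sketch (ours, with 0-based indexing) illustrates this:
\begin{verbatim}
import numpy as np

def intervene(a, m, c):
    # a: map from a p-vector to a (p, d) matrix; m: intervened
    # coordinate (0-based); c: the value it is fixed to.
    def b(y):                      # y is a (p-1)-vector
        x = np.insert(np.asarray(y, float), m, c)
        return np.delete(np.asarray(a(x)), m, axis=0)
    return b
\end{verbatim}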
By Theorem V.16 and Theorem V.5 of \cite{MR2273672}, the solutions to both (\ref{eq:MainSDE})
and (\ref{eq:IntervenedSDE}) may be approximated by the Euler schemes
for their respective SDEs. Making these approximations and applying Pearl's
notion of intervention in an appropriate sense, see \cite{MR2548166}, we may
interpret Definition \ref{def:SDEIntervention} as intervening in the
system (\ref{eq:MainSDE}) under the assumption that the driving semimartingales $Z^1,\ldots,Z^d$ are
noise processes unaffected by interventions, while the processes
$X^1,\ldots,X^p$ are affected by interventions. Note that the
operation of making an intervention takes a $p$-dimensional SDE as its input and
yields a $(p-1)$-dimensional SDE as its output, and this operation is crucially dependent
on the coefficients in the SDE: These coefficients in a sense
corresponds to the directed acyclic graphs of \cite{MR2548166}. A major benefit of
causality in systems such as (\ref{eq:MainSDE}) as compared to the theory
of \cite{MR2548166} is the ability to capture feedback systems and interventions in
such feedback systems.
As the solutions to (\ref{eq:MainSDE}) and (\ref{eq:IntervenedSDE}) are defined on the same
probability space, we may even consider the process $Y - X$, where $Y=(X|X^m :=
c)$, allowing us to calculate for example the variance of the effect
of the intervention. As $Y$ and $X$ are never observed simultaneously
in practice, however, we will concentrate on analyzing the differences
between the laws of $Y$ and $X$ separately.
\section{Intervention in Ornstein-Uhlenbeck SDEs}
\label{sec:OUIntervention}
Recall that for an $\mathcal{F}_0$ measurable variable $X_0$ and for $A\in\mathbb{R}^p$, $B\in\mathbb{M}(p,p)$ and $\sigma\in\mathbb{M}(p,d)$,
the Ornstein-Uhlenbeck SDE with initial value
$X_0$, mean reversion level $A$, mean reversion speed $B$, diffusion
matrix $\sigma$ and $d$-dimensional driving noise is
\begin{align}
X_t &= X_0 + \int_0^t B(X_s-A)\df{s}+\sigma W_t,\label{eq:OUSDE}
\end{align}
where $W$ is a $d$-dimensional $(\mathcal{F}_t)$ Brownian motion, see Section
II.72 of \cite{MR1796539}. The unique solution to this equation is
\begin{align}
X_t =&\exp(tB)\left(X_0-\int_0^t \exp(-sB)BA\df{s}+\int_0^t\exp(-sB)\sigma\df{W}_s\right)
\end{align}
where the matrix exponential is defined by $\exp(A)=\sum_{n=0}^\infty
A^n/n!$. This is a Gaussian homogeneous
Markov process with continuous sample paths. The following lemma shows
that making an intervention in an Ornstein-Uhlenbeck SDE yields an
SDE whose nontrivial coordinates solve another Ornstein-Uhlenbeck SDE.
\begin{lemma}
\label{lemma:SimpleOUIntervention}
Consider the Ornstein-Uhlenbeck SDE (\ref{eq:OUSDE}) with initial
value $x_0$. Fix $m\le p$ and $c\in\mathbb{R}$, and let $X$ be the unique
solution to (\ref{eq:OUSDE}). Furthermore, let $Y=(X|X^m:=c)$ and let $Y^{-m}$ be the $(p-1)$-dimensional
process obtained by removing the $m$'th coordinate from $Y$. Let
$\tilde{B}$ be the submatrix of $B$ obtained by removing the $m$'th
row and column of $B$, and assume that $\tilde{B}$ is invertible. Then $Y^{-m}$ solves
\begin{align}
Y^{-m}_t &= y_0 + \int_0^t \tilde{B}(Y^{-m}_s-\tilde{A})\df{s}+\tilde{\sigma} W_t,\label{eq:OUSDEInter}
\end{align}
where $y_0$ is obtained by removing the $m$'th coordinate
from $x_0$, $\tilde{\sigma}$ is obtained by removing the $m$'th row of
$\sigma$ and $\tilde{A}=\alpha-\tilde{B}^{-1}\beta$, where $\alpha$
and $\beta$ are obtained by removing the $m$'th coordinate from $A$
and from the vector whose $i$'th component is $b_{im}(c-a_m)$,
respectively, where $b_{im}$ is the entry corresponding to the $i$'th row
and the $m$'th column of $B$, and $a_m$ is the
$m$'th element of $A$.
\end{lemma}
\textit{Proof.}
By Definition \ref{def:SDEIntervention}, we have
\begin{align}
Y^i_t =& y^i_0 + \int_0^t \Big(b_{im}(c-a_m)+\sum_{j\neq m} b_{ij}(Y^j_s-a_j)\Big)\df{s} + \sum_{j=1}^d\sigma_{ij}W^j_t
\end{align}
for $i\neq m$. Note that for any vector $y$, the
system of equations in $\tilde{a}$
\begin{align}
b_{im}(c-a_m)+\sum_{j\neq m} b_{ij}(y_j-a_j) = \sum_{j\neq
m}b_{ij}(y_j-\tilde{a}_j)\textrm{ for }i\neq m,
\end{align}
is equivalent to the system of equations
\begin{align}
\sum_{j\neq m}b_{ij}\tilde{a}_j
=\left(\sum_{j\neq m} b_{ij}a_j\right)-b_{im}(c-a_m) \textrm{ for }i\neq m.
\end{align}
Since we have assumed $\tilde{B}$ to be invertible, this system of
equations has the unique solution
$\tilde{A}=\tilde{B}^{-1}(\tilde{B}\alpha-\beta)=\alpha-\tilde{B}^{-1}\beta$. For
$i\neq m$, we therefore obtain that $Y^i_t = y^i_0 + \int_0^t \sum_{j\neq m}
b_{ij}(Y^j_s-\tilde{a}_j)\df{s}+ \sum_{j=1}^d \sigma_{ij}W^j_t$, proving the result.
\hfill$\Box$
Recall that a principal submatrix of a matrix is a submatrix obtained by
removing rows and columns with the same indices. In words, Lemma
\ref{lemma:SimpleOUIntervention} states that if a particular principal
submatrix $\tilde{B}$ of the mean reversion speed is invertible, then making the
intervention $X^m:=c$ in an Ornstein-Uhlenbeck SDE results in a
new Ornstein-Uhlenbeck SDE with mean reversion speed $\tilde{B}$ and
modified mean reversion level involving the inverse of $\tilde{B}$. Now
assume that an Ornstein-Uhlenbeck SDE is given such that the solution
has a stationary initial distribution. A natural question to ask is
what interventions will yield intervened processes where stationary
initial distributions also exist. In the following, we consider this question.
Recall that a square matrix is called stable if
its eigenvalues have negative real parts and semistable if its
eigenvalues have nonpositive real parts, see \cite{MR0480586}. Theorem 4.1
of \cite{MR0260056} yields necessary and sufficient criteria for the
existence of a stationary probability measure for the solution of
(\ref{eq:OUSDE}). One criterion is
expressed in terms of the controllability subspace of the matrix
pair $(B,\sigma)$, which is the span of the columns in the matrices
$\sigma, B\sigma, \ldots, B^{p-1}\sigma$. In the case where $\sigma$ has full column span, meaning
that the columns of $\sigma$ span all of $\mathbb{R}^p$, the controllability
subspace is all of $\mathbb{R}^p$, and Theorem 4.1 of \cite{MR0260056} shows that the existence of a stationary
probability measure is equivalent to $B$ being stable. The case where
$\sigma$ is not required to have full column span is more involved.
In the following, we will restrict our attention to Ornstein-Uhlenbeck processes with
$\sigma$ having full column span. By Theorem 4.1 of \cite{MR0260056}, it then
holds that there exists a stationary distribution if and only if $B$ is stable. Furthermore,
applying Theorem 2.4 and Theorem 2.12 of \cite{MJ}, it holds
in the affirmative case that the stationary
distribution is the normal distribution with mean $\mu$ and variance
$\Gamma$ solving $B\mu=BA$ and $\sigma\sigma^t+B\Gamma+\Gamma B^t = 0$. Note that
as $B$ is stable, zero is not an eigenvalue of $B$, thus $B$ is
invertible and $\mu=A$. Also, stability of $B$ yields that $\Gamma
=\int_0^\infty e^{sB}\sigma\sigma^te^{sB^t}\df{s}$. For the
$(p-1)$-dimensional Ornstein-Uhlenbeck process resulting from an
intervention according to Lemma \ref{lemma:SimpleOUIntervention}, the
diffusion matrix $\tilde{\sigma}$ is obtained by removing the $m$'th row of
$\sigma$. As the columns of $\sigma$ span $\mathbb{R}^p$, the columns of
$\tilde{\sigma}$ span $\mathbb{R}^{p-1}$. Therefore, it also holds for the
intervened process that there exists a stationary distribution if and only
if the mean reversion speed is stable. We conclude that for diffusion
matrices with full column span, the existence of stationary distributions
for both the original and the intervened SDE is determined solely by
stability of the mean reversion speed matrix $B$ and the corresponding principal submatrices.
Consider a stable matrix $B$. It then holds that if all principal
submatrices of $B$ are stable, all interventions will preserve
stability of the system. We are thus led to the question of when a principal submatrix of a
matrix is stable. That stability does not in general lead to stability
of principal submatrices may be seen from the following example. Define $B$ by putting
\begin{displaymath}
B = \left[\begin{array}{cc} 1 & 7 \\ -1 & -3 \end{array}\right].
\end{displaymath}
The matrix $B$ has eigenvalues $-1\pm i\sqrt{3}$ and is thus
stable, while the principal submatrix obtained by removing the second
row and second column trivially has the single eigenvalue $1$ and thus
is not stable, in fact not even semistable. Conversely, $-B$ has
eigenvalues $1\pm i\sqrt{3}$ and thus is neither stable nor
semistable, while the principal submatrix obtained by removing the second
row and second column of $-B$ is stable.
There are classes of matrices satisfying that all principal
submatrices are stable. For example, by the inclusion principle for symmetric matrices, see Theorem
4.3.15 of \cite{MR832183}, it follows that a principal submatrix of any symmetric stable matrix
is again stable. In general, though, it is difficult
to ensure that all principal submatrices are stable. However, there
are criteria ensuring that all principal submatrices are
semistable. For example, Lemma 2.4 of \cite{MR1971090} shows that if $B$ is stable and sign symmetric, then all
principal submatrices of $B$ are semistable. Here, sign symmetry is a
somewhat involved matrix criterion; it does, however, hold that any
stable symmetric matrix is also sign symmetric. Furthermore, by Theorem 1 of
\cite{MR0480586}, any of the following three properties is also
sufficient for all principal submatrices of $B$ to be semistable:
\begin{enumerate}
\item $B-D$ is stable for all nonnegative diagonal $D$.
\item $DB$ is stable for all positive diagonal $D$.
\item There is a positive diagonal $D$ such that $BD+DB^t$ is negative definite.
\end{enumerate}
\section{An example of a particular intervention}
\label{sec:Examples}
Consider now a three-dimensional Ornstein-Uhlenbeck process $X$ with
$\sigma$ being the identity matrix of order three and upper triangular mean reversion speed matrix $B$, and
assume that the diagonal elements of $B$ are all negative. As the
diagonal elements of $B$ are in this case also the eigenvalues, $B$ is then
stable, and all principal submatrices are stable as well. The interpretation of having $B$ upper triangular is that the levels of
$X^1$, $X^2$ and $X^3$ all influence the average change in $X^1$, while only
the levels of $X^2$ and $X^3$ influence the average change in $X^2$,
and only $X^3$ influences the average change in $X^3$. Figure
\ref{figure:SimpleOUDAG} illustrates this, as well as the changes to
the dependence structure obtained by making interventions
$X^2:=c$ or $X^3:=c$.
\begin{figure}[htb]
\begin{minipage}[b]{0.300\linewidth}
\vspace{0.5cm}
\centering
\begin{displaymath}
\xymatrix@C=0.5cm{& X^1 \ar@(ul,ur)[] & \\
X^2 \ar@(dl,dr)[] \ar@/^8pt/[ur] & &
X^3 \ar@(dl,dr)[] \ar@/_8pt/[ul] \ar@/^8pt/[ll]
}
\end{displaymath}
\vspace{0.5cm}
\end{minipage}
\begin{minipage}[b]{0.300\linewidth}
\vspace{0.5cm}
\centering
\begin{displaymath}
\xymatrix@C=0.5cm{& X^1 \ar@(ul,ur)[] & \\
X^2 \ar@/^8pt/[ur] & &
X^3 \ar@(dl,dr)[] \ar@/_8pt/[ul]
}
\end{displaymath}
\vspace{0.5cm}
\end{minipage}
\begin{minipage}[b]{0.300\linewidth}
\vspace{0.5cm}
\centering
\begin{displaymath}
\xymatrix@C=0.5cm{& X^1 \ar@(ul,ur)[] & \\
X^2 \ar@(dl,dr)[] \ar@/^8pt/[ur] & &
X^3 \ar@/_8pt/[ul] \ar@/^8pt/[ll]
}
\end{displaymath}
\vspace{0.5cm}
\end{minipage}
\caption{Graphical illustrations of the dependence structures of
$(X^1,X^2,X^3)$ (left), of the dependence when making the
intervention $X^2:=c$ (middle) and of the dependence when making the
intervention $X^3:=c$ (right).}
\label{figure:SimpleOUDAG}
\end{figure}
We will investigate the details of what happens to the system when
making the intervention $X^2:=c$ or $X^3:=c$. To this end, we
calculate the mean and variance in the stationary distribution for
the nontrivial coordinates in each of the intervened processes. Consider first the case of the intervention $X^2:=c$. Let $\mu$ and
$\Gamma$ denote the mean and variance in the stationary
distribution after intervention. Applying Lemma \ref{lemma:SimpleOUIntervention}, the
SDE resulting from making this intervention is a two-dimensional Ornstein-Uhlenbeck SDE
with mean reversion speed and mean reversion level
\begin{align}
\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]
\quad\textrm{ and }\quad
\left[\begin{array}{c} a_1 \\ a_3 \end{array}\right]
-\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]^{-1}
\left[\begin{array}{c} b_{12}(c-a_2) \\ 0 \end{array}\right].
\end{align}
As we have
\begin{align}
\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]^{-1}
&=\left[\begin{array}{cc} \frac{1}{b_{11}} & -\frac{b_{13}}{b_{11}b_{33}} \\ 0 & \frac{1}{b_{33}} \end{array}\right],
\end{align}
this immediately yields that
\begin{align}
\mu &= \left[\begin{array}{c} a_1-\frac{b_{12}}{b_{11}}(c-a_2) \\
a_3
\end{array}\right].\label{eq:AsympMean}
\end{align}
As for the variance, recall that we have the representation
\begin{align}
\Gamma &= \int_0^\infty
\exp\left(s\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]\right)
\exp\left(s\left[\begin{array}{cc} b_{11} & 0 \\ b_{13} & b_{33} \end{array}\right]\right)
\df{s}.
\end{align}
In order to calculate this integral, first consider the case
$b_{11}=b_{33}$. By Theorem 4.11 of \cite{MR2396439}, in this case we obtain
\begin{align}
\exp\left(s\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]\right)
&=e^{sb_{11}}\left[\begin{array}{cc} 1 & sb_{13} \\ 0 & 1 \end{array}\right],
\end{align}
and similarly for the transpose. Using the identity $\int_0^\infty x^\alpha e^{\beta x}\df{x} =
\Gamma(\alpha+1)/(-\beta)^{\alpha+1}$, valid for all $\alpha>-1$ and
$\beta<0$, we conclude
\begin{align}
\Gamma
&= \int_0^\infty e^{2sb_{11}}
\left[\begin{array}{cc} 1 & sb_{13} \\ 0 & 1 \end{array}\right]
\left[\begin{array}{cc} 1 & 0 \\ sb_{13} & 1 \end{array}\right]\df{s}\notag\\
&= \int_0^\infty e^{2sb_{11}}
\left[\begin{array}{cc} 1+s^2b^2_{13} &
sb_{13} \\
sb_{13} &
1 \end{array}\right]\df{s}
= \left[\begin{array}{cc}
-\frac{1}{2b_{11}}-\frac{b^2_{13}}{4b_{11}^3} &
\frac{b_{13}}{4b_{11}^2} \\
\frac{b_{13}}{4b_{11}^2} &
-\frac{1}{2b_{11}}
\end{array}\right].
\end{align}
In the case $b_{11}\neq b_{33}$, we put $\zeta=b_{13}/(b_{11}-b_{33})$
and Theorem 4.11 of \cite{MR2396439} yields
\begin{align}
\exp\left(s\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]\right)
&=\left[\begin{array}{cc} e^{sb_{11}} &
\zeta(e^{sb_{11}}-e^{sb_{33}}) \\
0 &
e^{sb_{33}} \end{array}\right],
\end{align}
and we then obtain
\begin{align}
& \exp\left(s\left[\begin{array}{cc} b_{11} & b_{13} \\ 0 & b_{33} \end{array}\right]\right)
\exp\left(s\left[\begin{array}{cc} b_{11} & 0 \\ b_{13} & b_{33} \end{array}\right]\right)\notag\\
&=\left[\begin{array}{cc} e^{sb_{11}} &
\zeta(e^{sb_{11}}-e^{sb_{33}}) \\
0 &
e^{sb_{33}} \end{array}\right]
\left[\begin{array}{cc} e^{sb_{11}} &
0 \\
\zeta(e^{sb_{11}}-e^{sb_{33}}) &
e^{sb_{33}} \end{array}\right]\notag\\
&=\left[\begin{array}{cc} (1+\zeta^2)e^{2sb_{11}}
-2\zeta^2 e^{s(b_{11}+b_{33})}
+\zeta^2e^{2sb_{33}} &
\zeta e^{s(b_{11}+b_{33})}
-\zeta e^{2sb_{33}} \\
\zeta e^{s(b_{11}+b_{33})}
-\zeta e^{2sb_{33}} &
e^{2sb_{33}} \end{array}\right],
\end{align}
implying that
\begin{align}
\Gamma
&=\left[\begin{array}{cc}
-\frac{(1+\zeta^2)}{2b_{11}}+\frac{2\zeta^2}{b_{11}+b_{33}}-\frac{\zeta^2}{2b_{33}} &
-\frac{\zeta}{b_{11}+b_{33}}+\frac{\zeta}{2b_{33}} \\
-\frac{\zeta}{b_{11}+b_{33}}+\frac{\zeta}{2b_{33}} &
-\frac{1}{2b_{33}}
\end{array}\right]\notag\\
&=\left[\begin{array}{cc}
-\frac{1}{2b_{11}}-\zeta^2\left(\frac{1}{2b_{11}}-\frac{2}{b_{11}+b_{33}}+\frac{1}{2b_{33}}\right) &
\frac{\zeta(b_{11}-b_{33})}{2b_{33}(b_{11}+b_{33})} \\
\frac{\zeta(b_{11}-b_{33})}{2b_{33}(b_{11}+b_{33})} &
-\frac{1}{2b_{33}}
\end{array}\right]\notag\\
&=\left[\begin{array}{cc}
-\frac{1}{2b_{11}}-\frac{b_{13}^2}{2b_{11}b_{33}(b_{11}+b_{33})} &
\frac{b_{13}}{2b_{33}(b_{11}+b_{33})} \\
\frac{b_{13}}{2b_{33}(b_{11}+b_{33})} &
-\frac{1}{2b_{33}}
\end{array}\right].\label{eq:AsympVar}
\end{align}
Note in particular that (\ref{eq:AsympVar}) also yields the correct
result in the case $b_{11}=b_{33}$. Next, considering the intervention
$X^3:=c$, we let $\nu$ and
$\Sigma$ denote the mean and variance in the stationary
distribution of the nontrivial coordinates after intervention. By Lemma \ref{lemma:SimpleOUIntervention}, the
result of making this intervention is an Ornstein-Uhlenbeck SDE
with mean reversion speed and mean reversion level
\begin{align}
\left[\begin{array}{cc} b_{11} & b_{12} \\ 0 & b_{22} \end{array}\right]
\quad\textrm{ and }\quad
\left[\begin{array}{c} a_1 \\ a_2 \end{array}\right]
-\left[\begin{array}{cc} b_{11} & b_{12} \\ 0 & b_{22} \end{array}\right]^{-1}
\left[\begin{array}{c} b_{13}(c-a_3) \\ b_{23}(c-a_3) \end{array}\right],
\end{align}
yielding by calculations similar to the previous case that
\begin{align}
\nu &=\left[\begin{array}{c}
a_1-\left(\frac{b_{13}}{b_{11}}-\frac{b_{12}b_{23}}{b_{11}b_{22}}\right)(c-a_3) \\
a_2-\frac{b_{23}}{b_{22}}(c-a_3)
\end{array}\right]\label{eq:AsympMean2}
\end{align}
and
\begin{align}
\Sigma &= \left[\begin{array}{cc}
-\frac{1}{2b_{11}}-\frac{b_{12}^2}{2b_{11}b_{22}(b_{11}+b_{22})} &
\frac{b_{12}}{2b_{22}(b_{11}+b_{22})} \\
\frac{b_{12}}{2b_{22}(b_{11}+b_{22})} &
-\frac{1}{2b_{22}}
\end{array}\right].\label{eq:AsympVar2}
\end{align}
We have now calculated the mean and variance in the stationary
distribution for both intervened processes. We next take a moment to
interpret our results.
In the original system, all of $X^1$, $X^2$ and $X^3$
negatively influenced themselves, and in addition to this, $X^2$
influenced $X^1$ and $X^3$ influenced $X^1$ both directly and through
its influence on $X^2$. Based on this, we would expect that,
upon making the intervention $X^2:=c$, the steady state of $X^3$ would not
be changed, while the steady state of $X^1$ would change, depending on
the level of influence $b_{12}$ of $X^2$ on $X^1$. This is what we see
in (\ref{eq:AsympMean}). When making the intervention $X^3:=c$,
however, we obtain a change in the steady state of $X^1$ based both on
the direct influence of $X^3$ on $X^1$, depending on $b_{13}$, but
also on the indirect influence of $X^3$ on $X^1$ through $X^2$,
depending also on $b_{23}$ and $b_{12}$. Furthermore, the steady state
of $X^2$ also changes. These results show themselves in (\ref{eq:AsympMean2}).
As for the steady state variance, the changes resulting from interventions are in
both cases of the same type, yielding moderately complicated
analytical expressions, both independent of $c$. This implies that
while in most cases we will be able to obtain any steady state mean
for, say, $X^1$ by picking $c$ suitably, the steady state variance
can be influenced only by the type of intervention made, that is, by
which parts of the system the interventions target. Furthermore, by
considering explicit formulas for the steady state variance in the
original system, it may be seen that for example positive covariances
may turn negative and vice versa when making interventions.
\bigskip
\textbf{Acknowledgements.} The development of the notion of intervention for
SDEs is joint work with my thesis advisor, Niels Richard
Hansen, whom I also thank for valuable discussions and advice.
\bibliographystyle{amsplain}
\section{Introduction}
Quantum theory provides, for a given state preparation,
expectation values and distributions for a number of observables
whose operators have been identified by a combination of heuristic
arguments (e.g. quantization rules)
and consistency arguments. Their relevance and validity are eventually
put to the test or motivated by experiments. Time observables, i.e.,
random variables
such as the arrival times of particles at a detector for a given state
preparation, have been more problematic than other observables like
``energy'',
``momentum'' or ``position'' evaluated at a fixed instant. In fact,
almost a century after the creation of the basic quantum formalism, the
theoretical framework to deal with time observables, which have a relatively
straightforward operational definition in the laboratory,
is still being debated.
Reviews of several aspects of the difficulties and
efforts to formalize time in quantum mechanics may be found in two recent
books \cite{b1,b2}.
Some of these difficulties may be traced back to a lack of a general
framework to generate and define ``time operators''.
An important point, frequently overlooked, is that, for a given system,
there is no single time operator. There are infinitely
many time operators corresponding to different observables and apparatus.
``Canonical time operators'' have been defined \cite{Holevo,Hall},
but, as we shall stress, the definition of ``canonical'' is basis
dependent, even without energy degeneracy. Thus, further analysis is
necessary
to set ideal operators and possibly uniqueness in some cases
by imposing the physical conditions to be satisfied (e.g. symmetries)
or optimal properties, such as a minimal variance.
Time operators can be classified into two main groups on physical grounds,
depending on their association with time durations or time
instants. An example of a duration is the dwell time of a particle in
a region of space.
The corresponding operator commutes with the Hamiltonian since the
duration of a future process does not depend on the instant
that we choose to predict it \cite{dwell}. Instead, the observables in the
other group are shifted by the same amount as the preparation
time, either forward
(clocks) \cite{clocks} or backward (event times recorded with a
stopwatch, the simplest case being the time of arrival),
and are conjugate to the Hamiltonian.
We shall set here a framework for these ``covariant'' observables \cite{Holevo}
associated with instants and analyze their multiplicity and physical
properties. Applications are discussed
for quantum clocks and the time of arrival. The relation to Lyapunov operators
is also spelled out.
The plan of the paper is as follows.
After introducing the main concepts and notation in Sec. \ref{notation},
in Section \ref{general} the most general form of a covariant time
operator is determined for a Hamiltonian with only continuous,
possibly degenerate, eigenvalues. In Section \ref{uniqueness} it
is shown that, for a time-reversal invariant Hamiltonian, one arrives
at a unique and natural form of time operator by imposing time
reversal covariance, invariance under
additional symmetries and minimality of the variance.
In Section \ref{arrival} the results are applied to arrival times for
a particle moving on a half-line and a
connection with the delay time of Smith \cite{Smith} is established.
In Section \ref{Lyapunov}
the results are applied to Lyapunov operators which were considered in
Ref. \cite{Strauss}. It is shown that the expression given there is
a special case, and the general form as well as possible uniqueness
conditions are presented. In particular it is shown that for a time
reversal invariant Hamiltonian there is no time reversal invariant
Lyapunov operator. This is of interest because it has been argued
that, in order to characterize a quantum system as irreversible and
to establish an arrow of time when the Hamiltonian is time reversal
invariant, a formulation in terms of Lyapunov operators should employ
a Lyapunov operator or functional which is itself time reversal
invariant \cite{Sewell}.
\section{Covariance of time operators. Notation.}\label{notation}
We differentiate between clock time operators and event time
operators. The former, denoted by $\hat{T}$, can be associated with
a quantum clock which measures the progressing parametric time,
while the latter, denoted by $\hat{T}^A$, describes the time of an event,
for example, the instant of time a particle is found to arrive at a particular position. This and the following two sections are mostly devoted
to clock observables, although the formal results are analogous for event times.
In an ordinary clock the dial position is the observable which tells us what time it is. In a quantum clock the dial ``position'' is probabilistic but its
average should follow faithfully the advancement of parametric time.
We would like as well to minimize the variance and estimate the time
as accurately as possible with a finite number of measurements.
We will not investigate specific operational realizations here
(see the review in \cite{clocks}), but rather idealized operators and
their properties.
\subsection{Clock time operators}
For a given state $|\psi \rangle$, let the probability of finding the
measured time in the interval $(-\infty, \tau)$ be given by the
expectation with $|\psi \rangle$ of an operator $\hat{F}_\tau$. Note
that $0\leq \hat{F}_\tau \leq 1$ so that $\hat{F}_\tau$ is
selfadjoint and bounded. [For a momentum
measurement the analogous operator would be $\int_{- \infty}^p
dp^\prime |p^\prime \rangle \langle p^\prime|$ for finding the
momentum in $(- \infty, p)$. Here $\hat{F}_\tau$ can have a more
general form and in general one deals with a positive-operator valued
measure.] Then
\begin{equation}
\label{eq1.1}
\Pi (\tau; \psi) \equiv \frac{d}{d\tau} \langle \psi | \hat{F}_\tau|
\psi \rangle
\end{equation}
is the corresponding temporal probability density, normalized as $\int
d\tau \,\Pi(\tau; \psi)= 1$. We define the probability density
operator $\hat{\Pi}_\tau $ by
\begin{equation}
\label{eq1.2}
\hat{\Pi}_\tau \equiv \frac{d}{d\tau} \hat{F}_\tau,
\end{equation}
normalized as
\begin{equation}
\label{eq1.2a}
\int_{- \infty}^\infty d\tau\, \hat{\Pi}_\tau = \Eins.
\end{equation}
The mean value of observed time can be written as
\beq
\int_{- \infty}^\infty\!\! d\tau\, \tau \,\Pi (\tau; \psi) = \langle \psi| \int_{-
\infty}^\infty\!\! d\tau\, \tau\, \hat{\Pi}_\tau | \psi \rangle
\equiv \langle \psi | \hat{T} | \psi \rangle,
\label{eq1.3}
\eeq
where
\begin{equation}
\label{eq1.4}
\hat{T} \equiv \int_{- \infty}^\infty d\tau\, \tau\, \hat{\Pi}_\tau
\end{equation}
is called the time operator associated with $\hat{\Pi}_\tau$.
The second moment, if it exists, is given by
\begin{equation}
\label{eq1.5}
\int d\tau ~\tau^2\, \Pi (\tau; \psi) = \langle \psi| \int
d\tau\,\tau^2\, \hat{\Pi}_\tau\,
|\psi \rangle
\end{equation}
and similarly for higher moments. It may happen that this is not equal to
$\langle \psi | \hat{T}^2| \psi \rangle$.
A clock time operator is called covariant with respect to ordinary
(parametric) time if for the states $|\psi \rangle \equiv | \psi_0
\rangle$ and $|\psi_{t} \rangle$ the probabilities of finding the
measured time in the respective intervals $(- \infty, \tau)$ and
$(- \infty, \tau + t)$ coincide, i.e. if
\begin{equation}
\label{eq1.6}
\langle \psi_0 | \hat{F}_\tau|\psi_0 \rangle = \langle \psi_{t} |
\hat{F}_{\tau + t}|\psi_{t} \rangle.
\end{equation}
This implies, choosing $t = - \tau$,
\begin{eqnarray}
\label{eq1.7a}
\hat{F}_\tau &=& e^{-i \hat{H} \tau/\hbar} \hat{F}_0\, e^{i \hat{H}\tau/\hbar}\\
\label{eq1.7bc}
\hat{\Pi}_0 &=& \frac{-i}{\hbar}[\hat H, \hat F_0]\\
\label{eq1.7b}
\hat{\Pi}_\tau &=&
e^{-i \hat{H} \tau/\hbar}\,\hat{\Pi}_0\, e^{i \hat{H}\tau/\hbar}.
\end{eqnarray}
Note that $\la\psi|\hat{\Pi}_\tau|\psi\ra$ is non-negative because
$\la\psi|\hat{F}_\tau|\psi\ra$ is non-decreasing.
By a change of variable in Eq. (\ref{eq1.4}) one obtains
\begin{equation}
\label{eq1.7c}
e^{i \hat{H}t/\hbar}\, \hat{T} \,e^{-i \hat{H}t/\hbar} = \hat{T} + t.
\end{equation}
{}From this it follows by differentiation that $\hat H$ and $\hat T$
satisfy the canonical commutation relation
\beq
\label{CCR}
[\hat T,\hat H] = i\hbar
\eeq
when sandwiched between (normalizable) vectors from the domain of $\hat H$.
Note that
$\hat \Pi_0$ and $\hat \Pi_\tau$ are in general not operators on the
Hilbert space but only bilinear forms evaluated
between normalizable vectors from the domain of $\hat H$.
An expression like $\la E| \hat{\Pi}_0|E'\ra$ has to be understood as a
distribution. Since the diagonal $E=E'$ has measure 0 it is no contradiction
that Eq. (\ref{eq1.7bc}) gives 0 on
the diagonal while the following example gives $(2\pi\hbar)^{-1}$.
\noindent
{\bf Example}: For a Hamiltonian $\hat{H}$ with non-degenerate
continuous eigenvalues $E$ and
eigenvectors $|E\rangle$ with $\langle E| E^\prime \rangle = \delta
(E - E^\prime)$ we put
\begin{equation}
\label{eq1.13}
\langle E|\hat{\Pi}_0|E^\prime \rangle \equiv \frac{1}{2\pi\hbar},
\end{equation}
so that in this case
\begin{eqnarray}
\label{eq1.14}
\hat{\Pi}_0&=&\frac{1}{2\pi\hbar} \int dE~dE^\prime |E\rangle \langle E^\prime|,
\\ \label{eq1.15}
\hat{\Pi}_\tau &=& \frac{1}{2\pi\hbar}
\int dE~dE^\prime e^{-i(E-E^\prime)\tau/\hbar}
|E\rangle \langle E^\prime|.
\end{eqnarray}
The normalization condition of Eq. (\ref{eq1.2a}) is easily checked by means of $\int d\tau\, e^{-i(E-E^\prime)\tau/\hbar} = 2\pi\hbar\,\delta(E-E^\prime)$. The
corresponding clock time operator $-i\hbar\partial_E$
which results from Eq.~(\ref{eq1.4}), has been
considered the ``canonical time operator in the energy
representation'' \cite{Holevo,Hall,Hall08},
but note that $|E \rangle$ is unique only up to a phase
\cite{Holevo,Caves,Hall95},
and taking $|E \rangle_\varphi \equiv e^{i \varphi (E)}| E \rangle$
instead of $|E \rangle$ leads, for different $\varphi$,
to multiple ``energy representations'',
even for a system without any degeneracy. In the new basis the
``canonical operator'' will be shifted by
\begin{equation}
\label{eq1.16}
\hbar\int dE ~\varphi^\prime (E) |E\ra \langle E|.
\end{equation}
Moreover, the mean-square deviation $\Delta T^2$ for a given state
depends on $\varphi(E)$ in such a way that there is no choice of
$\varphi(E)$ which would make $\Delta T$ minimal for {\it all} states,
as shown in Appendix \ref{B}. Therefore, in this case a minimality
condition imposed on $\Delta T$ cannot be fulfilled and does not lead
to a unique natural choice of time operator without further additional
restrictions. There must be additional physical criteria to choose,
and in fact several of them may be physically significant. This will be
exemplified below, see Sect. \ref{arrival}.
\subsection{Arrival time operators}
``Time-of-event'', and in particular time-of-arrival operators and
probability densities are similar to clock operators (for reviews of
this concept see \cite{MugaLeavens,toatqm2}).
Physically, we expect that a free particle in one dimension will arrive with certainty at a given detection point (allowing for negative arrival times and ignoring the
case of zero momentum, which is of measure zero for an arbitrary physical wave-packet). Similarly, a free particle in three dimensions will arrive at an
infinitely extended plane.
Also, a particle on a half-line with reflecting
boundary conditions and without additional potential is expected
to arrive once at the boundary and, at least on classical grounds,
twice at any other point. In
the latter case it is meaningful to consider the first arrival at a
given point because this should be in principle observable. In all these
cases the total arrival probability (or first-arrival probability) is 1.
The corresponding arrival time
operators are denoted by $\hat{T}^A$ and $\hat{\Pi}^A_t$,
respectively, and when compared to clock operators their formal properties are
identical up to a change of sign, e.g. in the conjugacy relations or the
formulation of covariance \cite{Werner}.
This means that, in contrast to clock times, if the particle's state is
shifted in time by $t_0$, it
should arrive a time $t_0$ earlier, and the temporal probability density
should be shifted by $t_0$ to earlier times.
These are, in other words, waiting times until an event occurs, which depend
on the time when we set the stopwatch to zero, and decrease if we reset it
at a later instant.
Thus the analog of the cumulative probability operator in
Eq. (\ref{eq1.6}) must now satisfy
\begin{equation}
\begin{split}
\langle \psi_0 | \hat{F}^A_\tau|\psi_0 \rangle &= \langle \psi_{t} |
\hat{F}^A_{\tau - t}|\psi_{t} \rangle\\
\label{eq5.7a}
\hat{F}^A_\tau &= e^{i \hat{H} \tau/\hbar} \hat{F}^A_0\, e^{-i \hat{H}\tau/\hbar}.
\end{split}
\end{equation}
With $\hat{\Pi}^A_t\equiv d\hat{F}^A_t/dt$ and
$\hat{\Pi}^A_0=\frac{i}{\hbar}[\hat{H},\hat{F}^A_0]$
we have
\beqa \label{eq5.8a}
\hat{T}^A &=& \int {dt}\, t \, e^{i \hat{H}t/\hbar}\, \hat{\Pi}^A_0 \,
e^{-i \hat{H}t/\hbar},
\\
\label{eq5.9a}
\hat{\Pi}^A_{t} &=& e^{i \hat Ht/\hbar}\, \hat{\Pi}^A_0 \, e^{-i\hat Ht/\hbar},
\\
\label{eq5.10a}
\langle \psi_{t_0} |\hat{T}^A | \psi_{t_0} \rangle &=& \langle \psi_0
|\hat{T}^A| \psi_0 \rangle - t_0,
\\
\nonumber
\langle \psi_{t_0} |\hat{\Pi}^A_t | \psi_{t_0} \rangle &=& \langle \psi_0
|\hat{\Pi}^A _{t + t_0} |\psi_0 \rangle.
\end{eqnarray}
In addition, the operator should incorporate the location where
the arrivals are observed. For
free particles coming in from one side and arrivals at a plane this
was achieved in Ref. \cite{Kij} by
postulating invariance of the probability density under a combination of
space reflection and time reversal.
It is evident that these properties still do not specify the
operator uniquely. For physical reasons one will also demand for an
optimal arrival-time observable that the arrival-time probability density has
minimal variance, analogous to the postulate in Ref. \cite{Kij} for
free particles in three-dimensional space. This means
that no other arrival-time observable can be measured more precisely.
\section{The general form of covariant time operators}
\label{general}
We begin with covariant clock time operators associated with a given
Hamiltonian $\hat H$. For simplicity, we first consider the case when
$\hat H$ has only non-degenerate continuous eigenvalues $E$, with
generalized eigenvector $|E\rangle$ and normalization
\[\langle E |E^\prime \rangle = \delta (E - E^\prime).\]
We will determine the most general form of
$\hat{\Pi}_0$ which, through Eqs. (\ref{eq1.1} - \ref{eq1.7b}), leads to a
covariant probability density operator and corresponding time
operator.
The simple example in Eq. (\ref{eq1.14}) can be generalized to
\begin{equation*}
\hat{\Pi}_0= \frac{1}{2\pi\hbar}\int dE\, dE^\prime\, b(E)\,|E \rangle \langle E^\prime|\,
\overline{b(E^\prime)}
\end{equation*}
and, more generally, it will be shown that
\begin{eqnarray}\label{eq2.1}
\hat{\Pi}_0& =& \frac{1}{2\pi\hbar}\sum_i \int dE \,dE^\prime
\,b_i(E)\,|E\rangle \langle E^\prime|\,
\overline{b_i(E^\prime)},
\\
\hat{T} &=& \frac{1}{2\pi\hbar}\sum_i \int {dt}\, t \int dE\,dE^\prime~
e^{-i(E-E^\prime)t/\hbar}
\nonumber\\
&\times&b_i(E)\,|E\rangle \langle E^\prime|\,\overline{b_i(E^\prime)}\label{tgen}
\end{eqnarray}
is the most general form of $\hat{\Pi}_0$ and $\hat T$,
where the functions $b_i(E)$ have to satisfy certain properties in
order that the total probability is 1 and that the second moment in
Eq. (\ref{eq1.5}) is finite. Indeed, for given state
$|\psi \rangle$, the total temporal probability is, with $\psi(E)\equiv \langle
E|\psi \rangle$,
\beqa
&&\int_{- \infty}^{+ \infty} dt
\langle \psi|\, e^{-i\hat Ht/\hbar}\,
\hat{\Pi}_0\, e^{i\hat H t/\hbar}\,|\psi \rangle
\\ \nonumber
&&= \sum_i \int \frac{dt}{2 \pi\hbar} \left|\int dE\, e^{-iEt/\hbar}\,
\overline{\psi(E)}\, b_i(E)\right|^2
\\ \nonumber
&&= \sum_i \int dE\, dE^\prime \,
\delta(E-E^\prime)\, \overline{\psi (E)}\, b_i (E)\,
\overline{b_i(E^\prime)}\, \psi(E^\prime)
\\ \nonumber
&&=
\int dE~\overline{\psi (E)}\, \Big(\sum_i b_i(E)\, \overline{b_i(E)}\Big)\,
\psi(E).
\eeqa
This equals 1 for every state $|\psi \rangle$ if and only if
\begin{equation} \label{eq2.3}
\sum_i b_i (E) \,\overline{b_i(E)} = 1.
\end{equation}
Similarly,
\beqa
\!\!\!\!\!\!\langle \psi |\hat{T}| \psi \rangle &=&\int dE\, \bar{\psi} (E)
\,\frac{\hbar}{i}\, \psi^\prime (E)
\nonumber\\
&&+ \int dE\, |\psi (E)|^2\, \frac{\hbar}{i} \sum b_i(E)\,
\overline{b_i^\prime (E)}.
\label{eq2.3a}
\eeqa
Note that $\sum b_i\, \bar{b_i^\prime}$ is purely imaginary, from
Eq. (\ref{eq2.3}), and thus vanishes if $b_i$ is real.
The second
moment is
\beqa\label{eq2.4}
&&\int dt\, t^2
\langle \psi |e^{-i\hat Ht/\hbar}\, \hat{\Pi}_0 \,e^{i\hat Ht/\hbar}|\psi \rangle
\\\nonumber
&&= \hbar\sum_i \int \frac{dt}{2 \pi} \left|\int
dE\, \partial_E\, e^{-iEt/\hbar}\,
\overline{\psi(E)}\, b_i(E)\right|^2
\\\nonumber
&& =\hbar^2\sum_i \int dE \,\partial_E \left(
\overline{\psi (E)}\, \,b_i(E)\,
\right) \partial_E \left( \overline{b_i(E)}\, \psi (E) \right)
\\\nonumber
&&= \hbar^2\int dE \left\{
|\psi^\prime(E)|^2 + \sum_i |b_i^\prime (E)|^2 \, |\psi(E)|^2\right.
\\\nonumber
&&\left.+ 2\,
{\rm Re}\,
\sum_i \overline{b_i(E)}\, b_i^\prime(E)\, \overline{\psi(E)}\, \psi^\prime
(E)\right\}
\eeqa
by Eq. (\ref{eq2.3}). This is finite if and only if the contributions
from the first and second terms are finite, and for the latter to hold for all
infinitely differentiable functions $\psi(E)$ vanishing outside a finite interval (i.e. with compact support in
$E$) one must have
\begin{equation} \label{eq2.6}
\sum_i |b_i^\prime(E)|^2 ~~ \mbox{integrable over any finite interval.}
\end{equation}
Eq. (\ref{eq2.1})
gives the most general form of $\hat{\Pi}_0$ leading to a covariant
time operator
when the functions $b_i$ satisfy Eqs. (\ref{eq2.3}), and the second
moment is finite for states with $\langle E|\psi \rangle$ of compact
support if and only if Eq. (\ref{eq2.6}) holds.
For a given
$\hat{\Pi}_0$ one can construct the functions $b_i$ as follows. One
chooses a maximal set $\{|g_i\rangle\}$ of vectors satisfying
\be \label{eq2.6a}
\langle g_i |\hat{\Pi}_0| g_j \rangle = \delta_{ij}/2\pi\hbar.
\ee
Such a maximal set is easily constructed by the standard Schmidt
orthogonalization procedure. Then a possible set $\{b_i\}$ is given by
\be \label{eq2.6b}
b_i(E) =2\pi\hbar\, \langle E |\hat{\Pi}_0| g_i \rangle.
\ee
Eq. (\ref{eq2.1}) is then a realization of the given $\hat{\Pi}_0$.
Mathematical details, in particular regularity properties, will be
presented elsewhere \cite{Gerhard}. It should be noted
that the functions $b_i$ in the decomposition of $\hat{\Pi}_0$ in
Eq. (\ref{eq2.1}) are not unique.
For the case of degenerate eigenvalues of $\hat{H}$ we first consider
the case where the degeneracy is indexed by a discrete number and such that
\begin{equation} \label{eq2.7}
\langle E, \alpha | E^\prime, \alpha^\prime \rangle = \delta_{\alpha
\alpha^\prime} \delta(E- E^\prime).
\end{equation}
For simplicity we assume the same degeneracy for each $E$. Then
Eqs. (\ref{eq2.1} - \ref{eq2.6}) generalize as
\begin{equation} \label{eq2.8}
\begin{split}
\hat{\Pi}_0 = \frac{1}{2\pi\hbar}\sum_i&\int dE\,dE^\prime \\
&\sum_{\alpha \alpha'}
b_i(E, \alpha)\,|E, \alpha \rangle \langle
E^\prime , \alpha^\prime|\, \overline{b_i (E^\prime, \alpha^\prime)},
\end{split}
\end{equation}
\begin{equation} \label{eq2.9}
\sum_i b_i(E, \alpha)\, \overline{b_i (E, \alpha^\prime)}=
\delta_{\alpha \alpha^\prime},
\end{equation}
\begin{equation} \label{eq2.10}
\mbox{second moment} = \hbar^2\sum_i\int dE\,\Big|\partial_E
\sum_\alpha \overline{b_i (E, \alpha)}\, \psi (E, \alpha) \Big|^2,
\end{equation}
\begin{equation} \label{eq2.11}
\sum_i |b_i^\prime (E, \alpha)|^2~~ \mbox{integrable over any finite
interval}
\end{equation}
for each $\alpha$, where $\psi (E, \alpha)\equiv \langle E ,
\alpha|\psi \rangle$ and
$b_i(E,\alpha) = 2\pi\hbar\langle E,\alpha |\hat{\Pi}_0| g_i \rangle $.
Again Eq. (\ref{eq2.8})
gives the most general form of $\hat{\Pi}_0$ leading to a covariant time
operator through Eqs. (\ref{eq1.1}-\ref{eq1.7b}).
The case of continuous degeneracy parameter can be reduced to the
discrete case.
These results carry over in a corresponding way to arrival times with
normalized probability densities.
\section{Uniqueness of time operator: time reversal, symmetries and minimal variance}\label{uniqueness}
As seen in the previous section, there are many covariant
clock time operators. For uniqueness, additional
physically motivated conditions are needed. Requiring minimal variance by itself does not make $\hat{T}$ unique, not even in the case of a non-degenerate
spectrum of $\hat{H}$, since in general it may not be possible
to fulfill this requirement simultaneously for all states
with finite second moment unless, in addition,
one restricts the set of functions $b_i$ by symmetry
requirements, as we shall now discuss.
The time reversal operator, here denoted by $\hat\Theta$, is an
anti-unitary operator. If the dynamics is time reversal invariant, it
is natural to demand
\begin{equation} \label{eq3.1}
\hat\Theta\, \hat{T}\, \hat\Theta = - \hat{T},
\end{equation}
and similarly for the probability density. By Eq. (\ref{eq1.7bc}) this
implies
\begin{equation} \label{eq3.2}
\hat\Theta \,\hat{\Pi}_0\hat \Theta = \hat{\Pi}_0.
\end{equation}
It will now be shown for the non-degenerate eigenvalue case that time
reversal invariance of the Hamiltonian $\hat H$ and of $\hat\Pi_0$, and minimal
$\Delta T$ together imply uniqueness of $\hat{T}$ and $\hat\Pi_t$. For
each eigenvalue $E$ of $\hat H$ one can choose a $\hat\Theta$ invariant
eigenvector, denoted by $ |E_\Theta \rangle$,
\begin{equation} \label{eq3.3}
\hat\Theta |E_\Theta \rangle = |E_\Theta \rangle.
\end{equation}
This means a specific choice of phase factor and a real function
in position space. Eq. (\ref{eq3.2})
implies $\hat{\Pi}_0 = 1/2 (\hat{\Pi}_0 + \hat\Theta\,
\hat{\Pi}_0\,\hat \Theta)$, and the general
form of $\hat{\Pi}_0$ in Eq. (\ref{eq2.1}) then implies that $b_i(E)$ can be
chosen real. Then, from Eqs. (\ref{eq2.3a}) and (\ref{eq2.4}), one finds
\begin{equation} \label{eq3.3a}
\langle \psi |\hat{T}| \psi \rangle = \int dE\, \overline{\psi(E)}\,
\frac{\hbar}{i}\, \psi^\prime (E),
\end{equation}
\beqa \label{eq3.4}
\nonumber
&\mbox{second moment}
= \hbar^2\int dE\, |\psi^\prime (E)|^2
\\
& +
\hbar^2\sum_i \int dE\, |\psi|^2\, |b_i^\prime(E)|^2.
\eeqa
Hence $\Delta T$ minimal means in this case that the second moment is
minimal, and the latter holds if and only if $b_i^\prime (E) \equiv 0$, i.e.
$$
b_i(E) \equiv c_i,~~~\sum ~c_i^2 = 1,
$$
by Eq. (\ref{eq2.3}). Inserting this into Eq. (\ref{eq2.1}) one sees
that the functions $b_i$
can be replaced by the single function $b(E) \equiv 1$. Thus one
obtains
\begin{equation} \label{eq3.5}
\begin{split}
\hat{\Pi}_0& =\frac{1}{2\pi\hbar}\int dE~dE^\prime \,|E_\Theta \rangle
\langle E_\Theta ^\prime|,
\\
\hat\Pi_t &= \frac{1}{2\pi\hbar}\int dE\,dE'\,
e^{-i(E-E')t/\hbar}|E_\Theta \ra\la E_\Theta '|,
\\
\hat T &= \int dt\, \hat\Pi_t,
\end{split}
\end{equation}
with time reflection invariant $|E_\Theta \rangle$.
The (non-orthogonal) eigenfunctions,
$|\tau\ra$, of $\hat T$ with eigenvalue $\tau$ are given by
\beq
\label{t0}
|\tau\ra=\frac{1}{\sqrt{2\pi\hbar}}\int_0^\infty dE e^{-i E
\tau/\hbar}|E_\Theta\ra,
\eeq
and $\hat T$ can be written as
\beq
\label{ta0}
\hat{T}=\int_{-\infty}^{\infty} d\tau \, \tau|\tau\ra\la \tau|.
\eeq
Therefore uniqueness holds
in the non-degenerate case if time-reversal invariance
and minimal $\Delta T$ are demanded.
In the degenerate eigenvalue case
this is no longer true and one needs additional conditions to obtain
uniqueness, as discussed elsewhere \cite{Gerhard}.
Here we simply state some results. With a reflection
invariant potential in one dimension, the clock time operator becomes
unique and can be explicitly determined if, in addition to covariance
under time reversal and minimal variance, one also demands invariance
under space reflection. With a rotation invariant potential in three
dimension, the time operator becomes unique and can be
explicitly determined if, in addition to covariance under time
reversal and minimal variance, one also demands invariance under
rotations and reflection $x_1 \to -x_1$. Analogous results hold for
arrival time operators. In particular, a
generalization of the result of Ref. \cite{Kij} is obtained \cite{Gerhard}.
\section{Application to arrival times}\label{arrival}
Evidently the techniques of the previous sections can be applied in a
completely analogous way to the study of arrival-time operators.
To illustrate this we consider in the following the motion of a particle on the
half-line $x\geq 0$, without additional potential, and study
its arrival times at the origin and at an arbitrary point.
In the classical case an incoming free particle of energy $E$ is
reflected at
the origin and then travels back to infinity. Hence, for each point
$a\neq 0$, there is a first and second time of arrival which we denote by
$t^a_1$ and $t^a_2$. For the time reversed trajectory the first
arrival at $a$ is at time $t^{ a}_{\theta,1} = -t^a_2$ and the second
arrival at time $t^{ a}_{\theta,2} = -t^a_1$, as is easily
calculated. For the origin, $a=0$, there is only one arrival and
\beq \label{reversed}
t^{0}_{\theta} = -t^0.
\eeq
The corresponding
arrival-time operator for arrivals at the origin is denoted by $\hat
T^{A}_f$. It is natural to demand the analogous relation to
Eq. (\ref{reversed}), i.e.
\beq \label{qreversed}
\hat \Theta \hat T^{A}_f\hat \Theta = -\hat T^{A}_f,
\eeq
and time reversal invariance of
$\hat\Pi^A_{f,0}$, where $\hat\Pi^A_{f,t}$ is the associated
probability density operator.
If $a\neq 0$ a classical free particle on the positive half-line,
coming in from
infinity with velocity $|v|$, arrives first at time $t_1^{\,a}$ at the
point $a$, and then at time $t^{\, 0}$ at the origin,
\beq \label{heuristic1}
t_1^{\,a} = t^{\,0} - a/|v|.
\eeq
If $\hat T^{A}_{1}$ denotes the corresponding time
operator for the first arrival at $a$ we may demand
\beq \label{heuristic2}
\hat T^{A}_{1} = \hat T^{A}_f -a/|\hat v|,
\eeq
where $|\hat v| = \sqrt{2\hat H/m}$ is the velocity operator.
\subsection {Free particle on a half-line}
We first consider arrivals at the origin for free motion on
the half-line $x\geq 0$, with reflecting boundary
conditions at $x=0$. The eigenfunctions can be labeled by the energy
$E=k^2\hbar^2/(2m)$. Real, and thus $\hat \Theta$ invariant,
eigenfunctions for energy $E$ which vanish at the origin are
\beq \label{eq4.1aa}
\la r|E_f\ra
=\frac{i}{\hbar}\sqrt{\frac{m}{2\pi k}}(e^{-ikr}-e^{ikr}),
\eeq
where the
subscript $f$ in $|E_f\ra$ refers to the free Hamiltonian and where we
have written $r$ to indicate $r\equiv x\geq 0$. These
eigenfunctions are normalized as $\la E_f|E_f'\ra=\delta(E-E')$ on the
half-line.
For the probability density operator for arrivals
at the origin invariance under time reversal means
\beq \label{origin}
\hat\Theta \,\hat{\Pi}^{A}_{f,0}\,\hat\Theta = \hat{\Pi}^{A}_{f,0}.
\eeq
By the results of the last section, the operators
$\hat{\Pi}^{A}_{f,t}$ and $\hat T^{A}_f$
become unique if invariance under time reversal holds
and minimal variance is assumed. From Eq. (\ref{eq3.5})
one obtains, with a change $t \to -t$ and replacing $|E_\Theta \ra$ by $|E_f\ra$,
\beqa \label{eq4.1a}
\hat\Pi^{A}_{f,\,t} &=& \frac{1}{2\pi\hbar}\int dE\,dE'\,
e^{i(E-E')t/\hbar}|E_f\ra\la E'_f|,
\\
\hat T^{A}_f &=& \int dt\,t\, \hat\Pi^{A}_{f,\,t}.\nonumber
\eeqa
This arrival time operator is just the negative of the clock
time operator of Eq. (\ref{eq3.5}), with Eqs.~(\ref{t0}) and
(\ref{ta0}) holding correspondingly.
Note that the vanishing of the wave function at $r=0$ is not an obstacle
to defining these operators in a physically meaningful manner.
A similar situation is found for antisymmetric wavefunctions on the
full line. It was shown in \cite{HSMN} that the ideal time-of-arrival distribution follows in a limiting process from an operational measurement model
that considers explicitly a weak and narrow detector.
We now turn to first arrivals at $a\neq 0$. Using Eq. (\ref{CCR}), a simple calculation shows that
\beq
\label{shift}
e^{iam|\hat v|/\hbar}\,\hat{T}^{A}_f\, e^{-iam|\hat v|/\hbar} =
\hat{T}^{A}_f - a/|\hat v|.
\eeq
Since the right-hand side equals $\hat T^A_{1}$, by
Eq. (\ref{heuristic2}), this implies an analogous relation for the
probability density operator, $\hat{\Pi}^{A}_{1,t}$, for $\hat T^A_{1}$,
\beq
\label{shift2}
\hat{\Pi}^{A}_{1,t} = e^{iam|\hat v|/\hbar}\,\hat{\Pi}^{A}_{f,t}\,
e^{-iam|\hat v|/\hbar}.
\eeq
Using Eq. (\ref{eq4.1a}) this can be written as
\beq
\label{shift3}
\hat{\Pi}^{A}_{1,t} = \frac{1}{2\pi\hbar}\int dE\,dE'\,
e^{i(E-E')t/\hbar} e^{i(k - k')a}|E_f\ra\la E'_f|,
\eeq
which explicitly gives the temporal probability density operator for
the first arrival at the point $a$ of a free particle on the positive
half-line. For $a \to 0$ one recovers Eq.~(\ref{eq4.1a}).
\subsection{Asymptotic states and Smith's delay time}
We now apply the free-particle result in Eq. (\ref{eq4.1a}) to the
asymptotic states of a particle in a potential on the
half-line whose Hamiltonian has no bound states and to which
scattering theory applies. Although for fixed $E$ the eigenstate is
unique up to a
phase, there are physically relevant eigenstates $|E_\pm\ra$ which
correspond to an incoming (+) and outgoing (-) plane wave,
respectively, as well as the $\Theta$ invariant state, denoted by
$|E_\Theta\ra$. Their relation and asymptotics are $|E_-\ra
=\hat\Theta |E_+\ra$ and, with the scattering phase shift $\delta=\delta(E)$,
\beqa
\la r|E_+\ra&\sim& \frac{1}{\hbar}\sqrt{\frac{2m}{k\pi}}\frac{i}{2}(e^{-ikr}-e^{2i\delta}e^{ikr}),
\nonumber\\
\la r|E_-\ra&=&\overline{\la r|E_+\ra} = e^{-2i\delta}\la r|E_+\ra,
\nonumber\\
\la r|E_\Theta\ra&=&e^{-i\delta}\la r|E_+\ra.
\label{eq4.1ab}
\eeqa
The M{\o}ller operators $\hat \Omega_\pm$ satisfy
\beqa \label{eq4.3a}
\hat{\Omega}_\pm &\equiv& \lim_{t\to\mp\infty}e^{i\hat{H}t/\hbar}
e^{-i\hat{H}_ft/\hbar}=\int_0^\infty dE\,|E_\pm\ra\la E_f|,
\nonumber\\
|E_\pm\ra &=&\hat \Omega_\pm |E_f\ra.
\eeqa
The freely moving asymptotic states $|\psi_{in}\ra$ and $|\psi_{out}\ra$ are
mapped by $\Omega_\pm$ to the actual state $|\psi\ra$,
\beqa \label{eq4.6a}
|\psi\ra&=&\hat{\Omega}_\pm\,|\psi_{\stackrel{in}{out}} \ra
\\
|\psi_{out}\ra&=&\hat{S}\,|\psi_{in}\ra \nonumber
\eeqa
where $\hat{S}=\hat{\Omega}_-^\dagger\hat{\Omega}_+$ is the $S$
operator. Note that, by Eq.~(\ref{eq4.1ab}),
\beq
\hat{S}=\int_0^\infty dE\, |E_f\ra e^{2i\delta} \la E_f|,
\eeq
so that
$e^{2i\delta}$ is the eigenvalue of $\hat{S}$ for the state $|E_f\ra$.
It is convenient to introduce also the operator
\beq \label{eq4.5a}
\hat \Omega_\Theta \equiv\int_0^\infty dE |E_\Theta\ra\la E_f|
\eeq
and define operators $\hat{T}^{\, A}_{\pm,\Theta}$ by
\beqa \label{eq4.7a}
\hat{T}^{\, A}_{\pm,\Theta} &\equiv&
\hat{\Omega}_{\pm,\Theta}\,\hat{T}^{\, A}_f\,\hat{\Omega}_{\pm,\Theta}^\dagger\\
&=& \frac{1}{2\pi\hbar}\int dt\, t\int dE\,dE'\, e^{i(E-E')t/\hbar}|E_{\pm,\Theta}\ra\la
E'_{\pm,\Theta}|. \nonumber
\eeqa
The last line shows that $-\hat{T}^{\, A}_{\pm,\Theta}$ are possible clock time
operators for the particle in the potential.
Since the states $|E_{\pm,\Theta}\ra$ differ only by a phase, the same
calculation that leads to Eq. (\ref{eq1.16}) gives
\beq \label{eq4.2aa}
\hat T^{\, A}_{\pm} =\hat{T}^{\, A}_{\Theta} \mp \hbar\int dE\,
\frac{\partial\delta}{\partial E}\,|E_\Theta\ra\la E_\Theta|.
\eeq
{}From Eq. (\ref{eq4.6a}) it follows that the expectation values of
$\hat{T}^{\, A}_{+}$, $\hat{T}^{\, A}_{-}$ and $\hat{T}^{\,
A}_{\Theta}$ may be interpreted in terms of the asymptotic
states and the free-motion arrival-time operator $\hat{T}^{\, A}_f$,
\beq
\la\psi|\hat{T}^{\, A}_{+,-,\Theta}|\psi\ra =\la
\psi_{in,out,io}|\hat{T}^{\,
A}_f|\psi_{in,out,io}\ra,
\label{last}
\eeq
where the freely moving state $|\psi_{io}\ra$ is defined by
\beq \label{interpolation}
|\psi_{io}\ra \equiv\hat S^{1/2}|\psi_{in}\ra,
\eeq
and can be considered as an interpolation between
$|\psi_{in}\ra$ and $|\psi_{out}\ra = \hat S |\psi_{in}\ra$.
With Eq. (\ref{eq4.5a})
one can write
\beq \label{eq4.5aa}
|\psi_{io}\ra = \Omega_\Theta^\dagger |\psi\ra.
\eeq
Taking expectation values of Eq. (\ref{eq4.2aa}) with $|\psi\ra$ and
using Eqs. (\ref{last}) and (\ref{eq4.3a}),
together with the fact that $|E_{\pm,\Theta}\ra\la E_{\pm,\Theta}| $
all coincide since the phases drop out, yields
\beq \label{4.9a}
\begin{split}
\la\psi_{\stackrel{in}{out}}|\hat{T}^{\, A}_f|\psi_{\stackrel{in}{out}}\ra =
\la\psi_{io}|
\,\hat{T}^{\, A}_{\, f}\,|\psi_{io}\ra
\mp \hbar\int
dE\,\frac{\partial\delta}{\partial E}\,\big|\la E_f| \psi_{in}\ra\big|^2.
\end{split}
\eeq
One sees from this that the mean arrival time for the interpolating state
$|\psi_{io}\ra$ lies between those of the ingoing and outgoing wave.
From Eq.~(\ref{4.9a}),
\beq \label{4.10a}
\la
\psi_{out}|\hat{T}^{\, A}_f|\psi_{out}\ra -
\la\psi_{in}|\hat{T}^{\, A}_f|\psi_{in}\ra = 2 \hbar\int
dE\,\frac{\partial\delta}{\partial E}\,\big|\la E_f| \psi_{in}\ra\big|^2.
\eeq
The right-hand side of the last equation is the scattering time delay
of Smith \cite{Smith} and it shows that the time for the outgoing wave is
shifted with respect to the time for the ingoing wave by the
scattering time delay. An example is shown in Figs. \ref{f1} and \ref{f2}.
{\it Time-reversal:} The behavior of $T^{\, A}_\pm$ with respect to
time-reversal is determined by acting with the anti-linear operator
$\hat{\Theta}$,
\beq
\hat{\Theta}\, \hat{T}^{\, A}_\pm\,\hat{\Theta} =-\hat{T}^{\, A}_\mp,
\label{mm}
\eeq
whereas $\hat{T}^{\, A}_{\Theta}$ simply changes sign.
The operators $\hat{T}^{\, A}_\pm$ do not simply change sign under
time-reversal
as $\hat{T}^{\, A}_{\, \Theta}$ does, but their behavior in Eq. (\ref{mm}) (changing
the sign and exchanging the operators) is perfectly physical:
the time reversal of a trajectory which moves towards the origin
is a trajectory in the same location but moving away from the origin.
If the original incoming trajectory requires a certain time $\tau$ to arrive
at the origin (with free motion),
the reversed trajectory is outgoing, and departed from the origin
at $-\tau$. In summary, these operators provide information on the
free-motion dynamics of the incoming and outgoing asymptotes of the state,
and on scattering time delays \cite{Leon,toapot}. Thus, although the operator
$\hat{T}^{\, A}_{\, \Theta}$ is unique when one applies the criteria of the
previous section, it does not supersede $\hat{T}^{\, A}_{\pm}$ since
it does not describe
the same physics, and all three operators have their own legitimacy.
\begin{figure}[t]
\vspace{-4cm}\hspace{-1cm}
\includegraphics[height=13.cm,angle=0]{timeopf1.eps}
\vspace{-3cm}
\caption{\label{f1}
(Color online) Probability densities before ($t=0$) and after the collision
($t=190$)
with a delta barrier. Dimensionless units with $m=\hbar=1$.
The initial wave packet is $\psi(k)=N[1-\exp(-\beta k^2)]\exp[-(k-k_0)^2/(4\Delta_k^2)]
\exp(-ikx_0)\theta(k)$, where $N$ is the normalization constant
and $\theta$ (here) the Heaviside step function; initial wavenumber $k_0=-\pi/2$,
$\Delta_k=0.045$, $\beta=1/2$; $V=20\delta(x-20)$; initial center of the wave packet
$x_0=180$. The delta potential is rather opaque, so that the outgoing packet is advanced with respect to the incoming state.}
\end{figure}
\begin{figure}[t]
\vspace*{-3cm}\hspace{-1cm}
\includegraphics[height=11.cm,angle=0]{timeopf2.eps}
\vspace{-3cm}
\caption{\label{f2}
(Color online) Time of arrival distributions for arrivals at $x=0$
corresponding to the previous figure.}
\end{figure}
\section{Application to Lyapunov operators
in Quantum Mechanics}\label{Lyapunov}
In Ref. \cite{Strauss} an operator $\hat L$ was called a
Lyapunov operator if for any
normalized $| \psi \rangle$ and $| \psi_t \rangle \equiv
e^{-i\hat{H}t/\hbar}| \psi \rangle$, the expectation value
$\langle \psi_t |\hat L|\psi_t \rangle$ is monotonically
decreasing to 0 as $t \to \infty$ and goes to
1 for $t \to - \infty$. Ref. \cite{Strauss} considered the case of a
Hamiltonian $\hat H$ with purely
continuous eigenvalues ranging from 0 to infinity and degeneracy
parameter $j$. The particular Lyapunov operator suggested there can be written
as
\be \label{eq6.1}
\hat{L}_S = \frac{i}{2\pi} \sum_j \int_0^\infty dE
\int_0^\infty dE^\prime \frac{|E, j \rangle \langle E^\prime, j|}{E-
E^\prime + i \varepsilon}.
\ee
More generally, one may call a bounded operator $\hat L$ a Lyapunov
operator if $\langle \psi_t |\hat L|\psi_t \rangle$ is just monotonically
decreasing, without specifying limits. However, it will be shown below,
after Eq. (\ref{eq6.2}), that, without loss
of generality, one can always assume the above limiting behavior from
1 to 0 as $t$ goes from $ - \infty$ to $ + \infty$.
The above notion does not quite correspond to Lyapunov functionals used in
Ref. \cite{Sewell} to define irreversibility and an
arrow of time, since time reversal invariance of the functional was
assumed there in order to have neutrality with respect to past and
future. It will be shown further below that there are no time reversal
invariant Lyapunov operators if the Hamiltonian is time reversal
invariant.
It is clear that the above properties do not define $\hat{L}$ in
Eq. (\ref{eq6.1})
uniquely. For example, one can introduce phases and
still get a Lyapunov operator. In this section we are going to
determine the most general form of $\hat{L}$ for a Hamiltonian
$\hat{H}$ with a purely (absolutely) continuous spectrum and give
conditions under which it becomes unique. It will also be seen that
to each $\hat{L}$ there is an associated covariant time
operator $\hat{T}_L$.
To show that one can assume the above limit behavior we put, for a
given general Lyapunov operator $\hat{L}$,
\be \label{eq6.2}
\hat{L}_t \equiv e^{-i \hat{H}t/\hbar} \hat{L} e^{ i \hat{H}t/\hbar}
\ee
so that $\hat{L}_t$ is monotonically increasing, by the
monotonic decrease of $\langle \psi_t |\hat{L} |\psi_t \rangle$. From
the boundedness of $\hat{L}$ and from monotonicity it follows that
$\hat{L}_{\pm \infty}$ exist as operator limits in the weak sense,
i.e. for expectation values. Moreover, $\hat{L}_{\pm \infty}$ commutes
with $e^{-i \hat{H}t/\hbar}$, and therefore $\hat{L}'\equiv
\hat{L}-\hat{L}_{- \infty}$ is also a Lyapunov operator, with
$\hat{L}'_t\geq 0$. Then, assuming $\hat{L}'_{\infty}$ to be invertible, $\hat{L}''\equiv (\hat{L}'_{\infty})^{-1/2}
\hat{L}'(\hat{L}'_{\infty})^{-1/2}$ is a Lyapunov operator, since $\hat{L}'_{\infty}$ commutes with $e^{-i \hat{H}t/\hbar}$, satisfying $\hat{L}''_{- \infty}=0$
and $\hat{L}''_{ \infty}=1$, so that $\langle \psi_t |\hat L''|\psi_t
\rangle$ is monotonically decreasing from 1 to 0, proving the above claim.
To determine the general form of $\hat{L}$ with such a limit behavior
for $t \to \pm \infty $, we note that by monotonicity
\be \label{eq6.3}
\hat{\Pi}^L_t \equiv \frac{d}{dt}{\hat{L}}_t = e^{-i \hat{H}t/\hbar}
\frac{-i}{\hbar} [\hat{H},
\hat{L}]e^{i \hat{H}t/\hbar} \ge 0,
\ee
i.e. expectation values of $\dot{\hat{L}}_t$ are non-negative for
all $t$, in particular
\be \label{eq6.4}
\hat{\Pi}^L_0 = \frac{-i}{\hbar} [\hat{H}, \hat{L}] \ge 0
\ee
where the commutator is again to be understood in the weak sense via
matrix elements and where $\hat{\Pi}^L_0$ is in
general not an operator but only a bilinear form, as in Eq. (\ref{eq1.7bc}).
{}From Eq. (\ref{eq6.3}) and from $\hat{L}_{- \infty} = 0$
one obtains
\be \label{eq6.5}
\hat{L}= \int_{- \infty}^0 {dt}\, e^{-i \hat{H}t/\hbar}\, \hat{\Pi}^L_0\,
e^{i \hat{H}t/\hbar}.
\ee
{}From Eq. (\ref{eq6.3}) one sees that
\be \label{eq6.6}
\Pi_L (t; \psi) \equiv \langle \psi |\hat{\Pi}^L_t |\psi \rangle\, \ge\, 0
\ee
is a non-negative density which integrates to 1 for each normed state,
i.e. it can be regarded as a probability density and
hence $\hat{L}_t$ behaves like the cumulative probability operator
$\hat{F}_\tau$ in Eq. ({\ref{eq1.1}}). Therefore,
\be \label{eq6.7}
\hat{T}_L \equiv \int dt\, t\, e^{-i \hat{H}t/\hbar}\,
\hat{\Pi}^L_0\, e^{i \hat{H}t/\hbar}
\ee
is an analog of the time operator $\hat{T}$ in Eq. (\ref{eq1.4}).
Alternatively, $1- \hat L_{-t}$ behaves as the cumulative arrival
probability operator $\hat F^A_t$ in Eq. (\ref{eq5.7a}).
\\[.3cm]
{\bf Example:} Let $\hat{\Pi}^L_0$ be given by Eq. (\ref{eq1.14}). Then,
by Eq. (\ref{eq6.5}), $\hat{L}$ is given by
\be \label{eq6.8}
\hat{L} = \frac{1}{2\pi\hbar}\int^0_{- \infty} dt \int dE~dE^\prime e^{-i(E-
E^\prime)t/\hbar} |E \rangle \langle E^\prime|,
\ee
which is readily seen to agree with $\hat{L}_S$ in
Eq. (\ref{eq6.1}) in the case of non-degeneracy.
{}For free motion on the half-line, with $|E\ra=|E_f\ra$ from
Eq.~(\ref{eq4.1aa}), the Lyapunov
property of this example simply reflects the
monotonic accumulation of arrivals at the origin since
a change of integration variable gives
\beq
\label{accum}
\la \psi_t|1 - \hat{L}|\psi_t\ra=\int_{-\infty}^t dt'\, \la
\psi|\hat\Pi^0_{f,\,t'}|\psi\ra.
\eeq
With a potential on the half-line and taking $|E\ra =|E_\pm\ra$ of the
previous section one obtains the accumulation of arrivals of the freely
moving packets
$|\psi_{in}\ra$ and $|\psi_{out}\ra$, and for $|E\ra =|E_\Theta\ra$
the corresponding
accumulation of arrivals for $|\psi_{io}\ra$.
The most general form of $\hat{L}$ is obtained from the most general
form of $\hat{\Pi}^L_0$ which is given by Eqs. (\ref{eq2.8}) and
(\ref{eq2.9}). If $\hat{\Pi}^L_0$ is known then $\hat{L}$ is given by
Eq. (\ref{eq6.5}), and in this way one obtains the most general form
of the Lyapunov operator $\hat L$ with the above limit behavior for
$t\to \pm\infty$.
Uniqueness of $\hat{L}$ may be achieved for particular Hamiltonians by
demanding, e.g., time reflection invariance of $\hat T_L$, special symmetries
and minimal variance $\Delta T_L$, as in Sections \ref{uniqueness} and
\ref{arrival}.
We finally show that for a time reversal invariant Hamiltonian there is
no nontrivial time reversal invariant Lyapunov operator. Indeed, if
$\hat\Theta\,\hat H \,\hat\Theta = \hat H$ and $\hat\Theta\,\hat L
\,\hat\Theta =\hat L$ then one obtains, for initial state
$\hat\Theta\,|\psi\ra \equiv |(\hat\Theta\psi)\ra$,
\beq \label{eq6.9}
\la(\hat\Theta\psi)_t|\hat L|(\hat\Theta \psi)_t\ra =
\la\psi_{-t}|\hat L| \psi_{-t}\ra~
\eeq
by the anti-unitarity of $\hat\Theta$. Now, for increasing $t$, the
expression on the left-hand side
decreases while the one on the right-hand side increases. This is
only possible if both sides are constant in $t$. Alternatively, one
can conclude from Eq. (\ref{eq6.4}) that both $\hat{\Pi}^L_0$ and
$\hat\Theta\,\hat{\Pi}^L_0\,\hat\Theta = -\hat{\Pi}^L_0$ are positive
operators, which is only possible if $\hat{\Pi}^L_0=0$. This means
that $\hat L$ commutes with $\hat H$, which also leads to the
constancy of both sides in Eq. (\ref{eq6.9}).
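For the reader's convenience we spell out the sign flip used above; this
is only an unpacking of the anti-linearity of $\hat\Theta$ together with
the assumed invariances $\hat\Theta\,\hat H\,\hat\Theta = \hat H$ and
$\hat\Theta\,\hat L\,\hat\Theta = \hat L$:
\[
\hat\Theta\,\hat{\Pi}^L_0\,\hat\Theta
= \hat\Theta\, \frac{-i}{\hbar}[\hat H, \hat L]\,\hat\Theta
= \frac{i}{\hbar}\,[\hat\Theta\,\hat H\,\hat\Theta,
\hat\Theta\,\hat L\,\hat\Theta]
= \frac{i}{\hbar}[\hat H, \hat L] = -\hat{\Pi}^L_0 ,
\]
while conjugation by the anti-unitary $\hat\Theta$ preserves positivity,
so that both $\hat{\Pi}^L_0$ and $-\hat{\Pi}^L_0$ are non-negative.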
\section{Discussion and outlook}
We have provided the most general form of covariant,
normalized time operators. This is important to set
a flexible framework where physically motivated conditions
on the observable may be imposed. The application examples
include clock time operators, time-of-arrival operators
and Lyapunov operators.
Experimentally, a number of interesting open questions remain for
quantum clocks and arrival-time measurements.
For example, quantum clocks are basically
quantum systems with an observable that evolves linearly with time.
To evaluate the possibility of competing with current atomic clocks
\cite{MLM},
the observable must be realized in a specific system.
We have described an ideal observable (by imposing
antisymmetry with respect to time reversal and minimal variance)
and the analysis of the operational realization is now pending.
A similar analysis for the ideal time-of-arrival distribution
of Kijowski has been carried out in terms of an operational
quantum-optical realization with cold atoms (cf. Ref. \cite{toatqm2}
for a review). Indeed, cold atoms and quantum optics
offer examples of times of events (other than arrivals), such as
jump times, excitation times,
escape times, admitting
a treatment in terms of covariant observables. Modeling and understanding these quantities and their statistics
may improve our ability to manipulate or optimize dynamical processes.
On the theory side, an open question is how to adapt the proposed framework,
possibly in combination with previous investigations
\cite{crossstates,Leon,toapot,HSMN,Galapon06,Galapon08,Galapon}, to
arrival times when a particle moves in a potential.
Finally, we have shown that Lyapunov operators follow naturally from
covariant time observables. Associated to time-of-arrival operators, they
account for the monotonic accumulation of arrivals for freely moving
asymptotic states from the infinite past independently of the state chosen.
Note that the ``infinite past'' here is an idealized construct since
it must be assumed that the wave has been evolving forever, ignoring
the fact that in practice
the state may have been prepared at some specific instant. In other
words the Lyapunov operator does not depend on that preparation
instant, and when applied to the state it takes into account its
idealized (not necessarily actual) past,
whether or not that past has been fully or partially realized.
We have also shown at the end of the last section that in theories
with a time reversal invariant Hamiltonian there are no time reversal
invariant Lyapunov operators. In Ref. \cite{Sewell} it was argued
that in order to characterize a system as irreversible and single out
a direction of time a Lyapunov functional should be time reversal invariant.
Hence, if one accepts this view of Ref. \cite{Sewell}
then, by our result, quantum mechanics for finitely many particles
should indeed not be irreversible and should not exhibit an arrow of
time if the Hamiltonian is time reversal invariant.
\section*{Acknowledgments}
We thank L. S. Schulman and J. M. Hall for discussions.
We also acknowledge the kind hospitality of the Max Planck
Institute for Complex Systems in Dresden, and
funding by the Basque Country University UPV-EHU (GIU07/40), and the
Ministerio de Educaci\'on y Ciencia Spain (FIS2009-12773-C02-01).
\section{Introduction}
In this paper we are concerned with the uniqueness of nontrivial classical solutions to the following class of nonlocal elliptic equations
\begin{equation}\label{P}\tag{P}
\left \{ \begin{array}{ll}
-\left(a(x)+b(x)\int_{\Omega}|\nabla u|^{2}dx\right)\Delta u=h(x) & \mbox{in $\Omega$,}\\
u=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
where $\Omega\subset{\rm I}\hskip -0.85mm{\rm R}^{N}$, $N\geq 2$, is a bounded domain with smooth boundary, $a, b\in C^{0, \gamma}(\overline{\Omega})$, $\gamma\in (0, 1)$, are positive functions with $a(x)\geq a_{0}>0$, $b(x)\geq b_{0}>0$ and $h\in C^{0, \gamma}(\overline{\Omega})$ is given.
\medskip
When the functions $a, b$ are positive constants, problem \eqref{P} is the $N$-dimensional stationary version of a hyperbolic problem proposed in \cite{K} to model small transversal vibrations of an elastic string with fixed ends which is composed of a homogeneous material. Such an equation is a more realistic model than that provided by the classical D'Alembert wave equation because it takes into account the change in the length of the string during the vibrations. The hyperbolic Kirchhoff problem (with $a, b$ constants) began receiving special attention mainly after the author of \cite{L} used a functional-analytic approach to attack it.
To the best of our knowledge, the first work studying uniqueness questions for problem \eqref{P} with $a, b$ constants was \cite{ACM}. It is an immediate consequence of Theorem 1 in \cite{ACM} that if $h$ is a H\"older continuous nonnegative (nonzero) function then problem \eqref{P}, with $a, b$ constants, has a unique positive solution. Sign-changing functions $h$ are not considered in \cite{ACM}. In the case that $a, b$ are not constant, problem \eqref{P} is even more relevant from the point of view of applications because its one-dimensional version models small transversal vibrations of an elastic string composed of non-homogeneous materials (see \cite{FMSS}, Section 2). In \cite{FMSS} (see Theorem 1) the authors proved that for each given $h\in L^{\infty}(\Omega)$ ($h\not\equiv 0$), problem (\ref{P}) admits at least one nontrivial solution. Moreover, the same article tells us that if $h$ has defined sign ($h\leq 0$ or $h\geq 0$) such a solution is unique. Unfortunately, since their approach was based on a monotonicity argument which does not work when $h$ is a sign-changing function, they were not able to say anything about uniqueness in this case. Indeed, to the best of our knowledge, the uniqueness of solution to problem \eqref{P} in the general case is still an open problem.
In this article we obtain sufficient conditions on the quotient $a/b$ to ensure uniqueness of solution when the given function $h$ changes sign. The main results of this paper are as follows.
\begin{theorem}\label{Main1}
If there exists $\theta>0$ such that $a/b=\theta$ in $\Omega$, then for each $h\in C^{0, \gamma}(\overline{\Omega})$ problem \eqref{P} has a unique solution.
\end{theorem}
\begin{theorem}\label{Main2}
Let $a, b\in C^{2, \gamma}(\overline{\Omega})$, let $h\in C^{0, \gamma}(\overline{\Omega})$ be sign changing, and suppose $c=a/b$ is not constant.
\begin{itemize}
\item[$(i)$] If $\Delta c\geq 2|\nabla c|^{2}/c \ \mbox{in} \ \Omega$, then, for each $h\in C^{0, \gamma}(\overline{\Omega})$ given, problem \eqref{P} has a unique nontrivial classic solution.
\item[$(ii)$] If $\Delta c< 2|\nabla c|^{2}/c \ \mbox{in some open} \ \Omega_{0}\subset \Omega$ then, for each $h\in C^{0, \gamma}(\overline{\Omega})$ given, problem \eqref{P} has a unique nontrivial classic solution, provided that
$$
\frac{|\nabla c|_{\infty}c_{M}}{\sqrt{\lambda_{1}}c_{L}^{2}}\leq 3/2,
$$
where $\lambda_{1}$ is the first eigenvalue of the Laplacian operator with Dirichlet boundary condition, $c_{L}=\min_{x\in\overline{\Omega}}c(x)$, $c_{M}=\max_{x\in\overline{\Omega}}c(x)$ and $|\nabla c|_{\infty}=\max_{x\in\overline{\Omega}}|\nabla c (x)|$.
\end{itemize}
\end{theorem}
The first theorem above generalizes Theorem 1 in \cite{ACM} because it holds for functions $h$ whether sign changing or not. The second theorem above complements Theorem 1 in \cite{FMSS}.
\medskip
The paper is organized as follows.
In Section \ref{se:prelim} we present some abstract results, notations and definitions.
In Section \ref{se:eigen} we investigate a nonlocal eigenvalue problem which seems to be closely related to uniqueness questions for problem \eqref{P}.
In Section \ref{se:trivial} we prove Theorems \ref{Main1} and \ref{Main2}. Moreover, an alternative proof of the existence and uniqueness result in \cite{FMSS} is supplied.
\medskip
\section{Preliminaries}\label{se:prelim}
In this section we state some results and fix notation used throughout the paper.
\begin{definition}
We say that a function $h$ is signed in $\Omega$ if $h\geq 0$ in $\Omega$ or $h\leq 0$ in $\Omega$.
\end{definition}
\begin{definition}
An application $\Psi:E\to F$ defined between Banach spaces is locally invertible at $u\in E$ if there are open sets $A\ni u$ in $E$ and $B\ni \Psi(u)$ in $F$ such that $\Psi:A\to B$ is a bijection. If $\Psi$ is locally invertible at every point $u\in E$ it is said that $\Psi:E\to F$ is locally invertible.
\end{definition}
\begin{definition}
Let $M, N$ be metric spaces. We say that a map $\Psi:M\to N$ is proper if $\Psi^{-1}(K)=\{u\in M: \Psi(u)\in K\}$ is compact in $M$ for all compact set $K\subset N$.
\end{definition}
Below we enunciate the classic local and global inverse function theorems, whose proofs can be found, for instance, in \cite{AP}.
\begin{theorem}[Local Inverse Theorem]\label{t1}
Let $E, F$ be two Banach spaces. Suppose $\Psi\in C^{1}(E, F)$ and $\Psi'(u):E\to F$ is an isomorphism. Then $\Psi$ is locally invertible at $u$ and its local inverse, $\Psi^{-1}$, is also a $C^{1}$-function.
\end{theorem}
\begin{theorem}[Global Inverse Theorem]\label{t2}
Let $M, N$ be two metric spaces and $\Psi\in C(M, N)$ a proper and locally invertible function on all of $M$. Suppose that $M$ is arcwise connected and $N$ is simply connected. Then $\Psi$ is a homeomorphism from $M$ onto $N$.
\end{theorem}
Next, we state another classical result which will be used in our arguments and whose proof can be found, for instance, for a more general class of problems, in \cite{DG}.
\begin{proposition}\label{Prodi}
Let $m\in L^{\infty}(\Omega)$ with $m(x)>0$ on a set of positive measure. Then, problem
\begin{equation}
\left \{ \begin{array}{ll}
-div(A(x)\nabla u) =\lambda m(x)u & \mbox{in $\Omega$,}\\
u=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
where $A\in L^{\infty}(\Omega)$ and $A(x)\geq \mathfrak{m}$ for some positive constant $\mathfrak{m}$, has a smallest positive eigenvalue $\lambda_{1}(m)$ which is simple and whose corresponding eigenfunctions do not change sign in $\Omega$.
\end{proposition}
Throughout this paper $X$ is the Banach space
$$
X=\{u\in C^{2, \gamma}(\overline{\Omega}): u=0 \ \mbox{on} \ \partial\Omega\}
$$
with norm
$$
\|u\|_{X}=\|u\|_{C^{2}(\overline{\Omega})}+\max_{|\beta|=2}[D^{\beta}u]_{\gamma},
$$
where $\gamma\in (0, 1)$, $\beta=(\beta_{1}, \ldots, \beta_{N})\in {\rm I}\hskip -0.85mm{\rm N}^{N}$, $|\beta|=\beta_{1}+\ldots+\beta_{N}$,
$$
\|u\|_{C^{2}(\overline{\Omega})}=\sum_{0\leq |\beta|\leq 2}\|D^{\beta}u\|_{C(\overline{\Omega})} \ \mbox{and} \ [D^{\beta}u]_{\gamma}=\sup_{x, y\in \Omega\atop x\neq y}\frac{|D^{\beta}u(x)-D^{\beta}u(y)|}{|x-y|^{\gamma}}.
$$
Moreover $Y$ will denote the Banach space $C^{0, \gamma}(\overline{\Omega})$ with norm
$$
\|f\|_{Y}=\|f\|_{C(\overline{\Omega})}+[f]_{\gamma},
$$
where $\|f\|_{C(\overline{\Omega})}=\max_{x\in \overline{\Omega}}|f(x)|$.
\medskip
Hereafter, the same symbol $C$ denotes possibly different positive constants.
\medskip
\section{A nonlocal eigenvalue problem}\label{se:eigen}
In this section we are interested in studying the following nonlocal eigenvalue problem
\begin{equation}\label{E}\tag{$EP$}
\left \{ \begin{array}{ll}
-div\left(\displaystyle\frac{\nabla u}{c+|\nabla u|_{2}^{2}}\right)=\lambda \left\{ -div\left[\displaystyle\frac{\nabla c}{(c+|\nabla u|_{2}^{2})^{2}}\right]\right\} u & \mbox{in $\Omega$,}\\
u=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
where $\Omega\subset{\rm I}\hskip -0.85mm{\rm R}^{N}$ is a bounded smooth domain, $\lambda$ is a positive parameter and $c\in C^{2}(\overline{\Omega})$ is a positive (not constant) function. As we will see in the next section, problem \eqref{E} arises naturally when one studies uniqueness questions for problem \eqref{P}.
Before stating the main results of this section, we make the following observation.
\begin{lemma}\label{positive}
The set
$$
\mathcal{A}:=\left\{\alpha>0: -div\left[\frac{\nabla c}{(c+\alpha)^{2}}\right]>0 \ \mbox{in some open $\Omega_{0}\subset \Omega$}\right\}
$$
is not empty if, and only if, there is an open $\hat{\Omega}\subset{\rm I}\hskip -0.85mm{\rm R}^{N}$ such that
\begin{equation}\label{C}
\Delta c<2\frac{|\nabla c|^{2}}{c} \ \mbox{in} \ \hat{\Omega}.
\end{equation}
\end{lemma}
\begin{proof}
Differentiating we get
\begin{equation}\label{ineq}
-div\left[\frac{\nabla c}{(c+\alpha)^{2}}\right]=-\frac{1}{(c+\alpha)^{2}}\Delta c+\frac{2}{(c+\alpha)^{3}}|\nabla c|^{2}.
\end{equation}
Now, note that
\begin{equation}\label{div}
-div\left[\frac{\nabla c}{(c+\alpha)^{2}}\right]> 0 \ \mbox{in some open} \ \Omega_{0}
\end{equation}
if, and only if,
\begin{equation}\label{ineq1}
\Delta c< 2\frac{|\nabla c|^{2}}{(c+\alpha)} \ \mbox{in} \ \Omega_{0}.
\end{equation}
It is clear that the existence of a positive number $\alpha$ satisfying \eqref{ineq1} is equivalent to the inequality in \eqref{C}: if \eqref{C} holds in $\hat{\Omega}$ then, by continuity, \eqref{ineq1} holds in a suitable open subset $\Omega_{0}\subset\hat{\Omega}$ for all $\alpha>0$ small enough, while \eqref{ineq1} for some $\alpha>0$ implies \eqref{C} in $\Omega_{0}$ since $c<c+\alpha$.
\end{proof}
\begin{remark}\label{ok}
In the previous lemma we have also shown that $\mathcal{A}=\emptyset$ if, and only if,
\begin{equation}\label{ineq4}
\Delta c\geq 2\frac{|\nabla c|^{2}}{c} \ \mbox{in} \ \Omega.
\end{equation}
Certainly, there are many positive functions $c\in C^{2}(\overline{\Omega})$ verifying \eqref{ineq4}. For instance, setting $c=\delta e+1$, where $0<\delta\leq \min\{1/(4|\nabla e|_{\infty}^{2}), 1/(2|e|_{\infty})\}$ and
\begin{equation}
\left \{ \begin{array}{ll}
\Delta e=1 & \mbox{in $\Omega$,}\\
e=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
we conclude that $c>0$ and satisfies \eqref{ineq4}.
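For the reader's convenience, let us sketch the verification, which is a direct computation using only the two bounds on $\delta$. First, $c=\delta e+1\geq 1-\delta|e|_{\infty}\geq 1/2>0$ since $\delta\leq 1/(2|e|_{\infty})$. Hence
\begin{equation*}
\frac{2|\nabla c|^{2}}{c}=\frac{2\delta^{2}|\nabla e|^{2}}{c}\leq 4\delta^{2}|\nabla e|_{\infty}^{2}\leq \delta=\Delta c \ \mbox{in} \ \Omega,
\end{equation*}
where the last inequality uses $\delta\leq 1/(4|\nabla e|_{\infty}^{2})$, so that \eqref{ineq4} holds.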
\end{remark}
\begin{remark}
An interesting question when \eqref{C} holds concerns the topology of the set $\mathcal{A}$. In this direction, the proof of Lemma \ref{positive} allows us to say that $\mathcal{A}$ always contains an interval of the form $(0, \alpha_{0})$.
\end{remark}
Now we are ready to state the following result.
\begin{theorem}\label{TE}
Suppose that \eqref{C} holds. For each $\alpha\in \mathcal{A}$, problem \eqref{E} has a unique solution $(\lambda_{\alpha}, u_{\alpha})$ such that $\lambda_{\alpha}>0$, $u_{\alpha}>0$ and $|\nabla u_{\alpha}|_{2}^{2}=\alpha$.
\end{theorem}
\begin{proof}
From Lemma \ref{positive}, $\mathcal{A}\neq \emptyset$. Since $c\in C^{2}(\overline{\Omega})$ and $c>0$ in $\overline{\Omega}$, it follows from Proposition \ref{Prodi} that, for each $\alpha\in \mathcal{A}$, the eigenvalue problem
\begin{equation}\label{EP}\tag{$P_{\alpha}$}
\left \{ \begin{array}{ll}
-div\left(\displaystyle\frac{\nabla u}{c+\alpha}\right)=\lambda \left\{ -div\left[\displaystyle\frac{\nabla c}{(c+\alpha)^{2}}\right]\right\} u & \mbox{in $\Omega$,}\\
u=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
has a positive smallest eigenvalue $\lambda_{\alpha}$ whose associated eigenspace $V_{\alpha}$ is unidimensional and its eigenfunctions have defined sign.
Choosing $u\in V_{\alpha}$ such that $u> 0$ and $|\nabla u|_{2}^{2}=\alpha$, the result follows.
\end{proof}
\begin{remark}\label{re}
In particular, if \eqref{C} holds then
\begin{equation}\label{PI}
\displaystyle\int_{\Omega}u_{\alpha}^{2}\left\{-div\left[\displaystyle\frac{\nabla c}{(c+\alpha)^{2}}\right]\right\} dx=\frac{1}{\lambda_{\alpha}}\displaystyle\int_{\Omega}\displaystyle\frac{|\nabla u_{\alpha}|^{2}}{c+\alpha} dx, \ \forall \ \alpha\in \mathcal{A}.
\end{equation}
\end{remark}
\begin{corollary}\label{EE}
Suppose \eqref{C}. For each $\alpha\in \mathcal{A}$, the following inequality holds
$$
\lambda_{\alpha}\geq \displaystyle\frac{\sqrt{\lambda_{1}}(c_{L}+\alpha)^{2}}{2|\nabla c|_{\infty}(c_{M}+\alpha)},
$$
where $\lambda_{1}$ is the first eigenvalue of the Laplacian operator with Dirichlet boundary condition, $c_{L}=\min_{x\in\overline{\Omega}}c(x)$, $c_{M}=\max_{x\in\overline{\Omega}}c(x)$ and $|\nabla c|_{\infty}=\max_{x\in\overline{\Omega}}|\nabla c (x)|$.
\end{corollary}
\begin{proof}
From Remark \ref{re}, we get
\begin{equation}\label{16}
\lambda_{\alpha}=\displaystyle\frac{\displaystyle\int_{\Omega}\displaystyle\frac{|\nabla u_{\alpha}|^{2}}{c+\alpha} dx}{\displaystyle\int_{\Omega}u_{\alpha}^{2}\left\{-div\left[\displaystyle\frac{\nabla c}{(c+\alpha)^{2}}\right]\right\} dx}.
\end{equation}
Observe that
\begin{equation}\label{17}
\displaystyle\int_{\Omega}\displaystyle\frac{|\nabla u_{\alpha}|^{2}}{c+\alpha} dx\geq \displaystyle\frac{\alpha}{c_{M}+\alpha}.
\end{equation}
Moreover, by using the Divergence Theorem,
$$
\displaystyle\int_{\Omega}u_{\alpha}^{2}\left\{-div\left[\displaystyle\frac{\nabla c}{(c+\alpha)^{2}}\right]\right\} dx=2\displaystyle\int_{\Omega}\frac{u_{\alpha}\nabla u_{\alpha}\nabla c}{(c+\alpha)^{2}} dx
\leq \displaystyle\frac{2|\nabla c|_{\infty}\displaystyle\int_{\Omega}u_{\alpha}|\nabla u_{\alpha}| dx}{(c_{L}+\alpha)^{2}}.
$$
From H\"older and Poincar\'e inequalities, we conclude that
\begin{equation}\label{18}
\displaystyle\int_{\Omega}u_{\alpha}^{2}\left\{-div\left[\displaystyle\frac{\nabla c}{(c+\alpha)^{2}}\right]\right\} dx\leq \displaystyle\frac{2|\nabla c|_{\infty}\alpha}{\sqrt{\lambda_{1}}(c_{L}+\alpha)^{2}}.
\end{equation}
From \eqref{16}, \eqref{17} and \eqref{18} we have
$$
\lambda_{\alpha}\geq \displaystyle\frac{\sqrt{\lambda_{1}}(c_{L}+\alpha)^{2}}{2|\nabla c|_{\infty}(c_{M}+\alpha)},
$$
for all $\alpha\in \mathcal{A}$.
\end{proof}
\section{Uniqueness results}\label{se:trivial}
In order to apply Theorem \ref{t2} we define operator $\Psi:X\to Y$ by
$$
\Psi(u)=\left( a(x)+b(x)\int_{\Omega}|\nabla u|^{2}dx\right)\Delta u.
$$
In the sequel, we will write $M\left(x,|\nabla u|_{2}^{2}\right)=a(x)+b(x)\int_{\Omega}|\nabla u|^{2}dx$ for short, where $|\nabla u|_{2}^{2}=\int_{\Omega}|\nabla u|^{2}dx$. The proof of the main results of this paper will be divided into several propositions.
\medskip
\begin{proposition}\label{proper}
Operator $\Psi:X\to Y$ is proper.
\end{proposition}
\begin{proof}
It is sufficient to prove that if $\{h_n\}\subset Y$ is a sequence converging to $h \in Y$ and $\{u_{n}\}\subset X$ is another sequence with $\Psi(u_{n})=-h_{n}$ then $\{u_{n}\}$ has a convergent subsequence in $X$. For this, note that the equality $\Psi(u_{n})=-h_{n}$ is equivalent to
\begin{equation}\label{1}
-\Delta u_{n} = \frac{h_{n}}{M\left(x,|\nabla u_{n}|_{2}^{2}\right)}.
\end{equation}
Observe that $h_{n}/M\left(.,|\nabla u_{n}|_{2}^{2}\right)\in Y$ because $h_{n}\in Y$, $M\left(.,|\nabla u_{n}|_{2}^{2}\right)\in Y$ and $M\left(x,|\nabla u_{n}|_{2}^{2}\right)\geq a_{0}$.
Moreover,
\begin{equation}\label{2}
\left\|\frac{h_n(x)}{M\left(x,|\nabla u_{n}|_{2}^{2}\right)}\right\|_{C(\overline{\Omega})}\leq \|h_n\|_{C(\overline{\Omega})}/a_{0}, \ \forall \ n\in{\rm I}\hskip -0.85mm{\rm N}.
\end{equation}
\medskip
From $\|h_n\|_{C(\overline{\Omega})}\leq \|h_{n}\|_{Y}$, \eqref{2} and the boundedness of $\{h_{n}\}$ in $Y$, it follows that $\left\{h_{n}/M\left(x,|\nabla u_{n}|_{2}^{2}\right)\right\}$ is bounded in $C(\overline{\Omega})$. Thus, the continuous embedding from $C^{1,\gamma}(\overline{\Omega})$ into $C(\overline{\Omega})$ and the equality in (\ref{1}) tell us that $\{u_{n}\}$ is bounded in $C^{1,\gamma}(\overline{\Omega})$ (see Theorem 0.5 in \cite{AP}). Finally, by the compact embedding from $C^{1,\gamma}(\overline{\Omega})$ into $C^{1}(\overline{\Omega})$, we conclude that there exists $u\in C^{1}(\overline{\Omega})$ such that, passing to a subsequence,
\begin{equation}
u_{n}\to u \ \mbox{in} \ C^{1}(\overline{\Omega}).
\end{equation}
The last convergence leads to
\begin{equation}
|\nabla u_{n}(x)|^{2}\to |\nabla u(x)|^{2} \ \mbox{uniformly in $x\in \Omega$}.
\end{equation}
Whence
\begin{equation}\label{3}
|\nabla u_{n}|_{2}^{2}\to |\nabla u|_{2}^{2}.
\end{equation}
\medskip
In what follows, we show that
\begin{equation}\label{4}
\left\|\frac{h_n}{M\left(., |\nabla u_{n}|_{2}^{2}\right)}\right\|_{Y}\leq C,
\end{equation}
for some positive constant $C$. In fact, since $\{h_{n}\}\subset Y$ and $\left\{M\left(.,|\nabla u_{n}|_{2}^{2}\right)\right\}\subset Y$, with $M(x, t)\geq a_{0}>0$ for all $t\geq 0$, a straightforward calculation shows that
$$
\left[ \frac{h_n}{M\left(.,|\nabla u_{n}|_{2}^{2}\right)}\right]_{\gamma}\leq \frac{1}{a_{0}^2}
\left( \|h_{n}\|_{C(\overline{\Omega})} \left[M\left(.,|\nabla u_{n}|_{2}^{2}\right)\right]_{\gamma}+ \left\|M\left(., |\nabla u_{n}|_{2}^{2}\right)\right\|_{C(\overline{\Omega})} [h_n]_{\gamma}\right).
$$
From $\|h_n\|_{C(\overline{\Omega})}, [h_n]_{\gamma} \leq C$,
\begin{equation}
\left[M\left(.,|\nabla u_{n}|_{2}^{2}\right)\right]_{\gamma}\leq [a]_{\gamma}+[b]_{\gamma}|\nabla u_{n}|_{2}^{2} \leq [a]_\gamma + C[b]_\gamma
\end{equation}
and
\begin{equation}
\left\|M\left(.,|\nabla u_{n}|_{2}^{2}\right)\right\|_{C(\overline{\Omega})}\leq \|a\|_{C(\overline{\Omega})}+\|b\|_{C(\overline{\Omega})}|\nabla u_{n}|_{2}^{2} \leq \|a\|_{C(\overline{\Omega})} + C\|b\|_{C(\overline{\Omega})}
\end{equation}
it follows that
\begin{equation*}
\left[ \frac{h_n}{M\left(.,|\nabla u_{n}|_{2}^{2}\right)}\right]_{\gamma}
\leq \frac{C}{a_0^2} \left( [a]_\gamma + C [b]_\gamma + \|a\|_{C(\overline{\Omega})} + C \|b\|_{C(\overline{\Omega})}\right)
= \frac{C}{a_0^2} \|a\|_{Y} + \frac{C^2}{a_0^2} \|b\|_{Y}.
\end{equation*}
Since $\left\{h_{n}/M\left(x,|\nabla u_{n}|_{2}^{2}\right)\right\}$ is bounded in $C(\overline{\Omega})$, the last inequality proves the assertion in \eqref{4}.
By \eqref{1}, \eqref{4} and Theorem 0.5 in \cite{AP}, the sequence $\{u_{n}\}$ is bounded in $X$. By the compact embedding from $X$ into $C^{2}(\overline{\Omega})$, passing to a subsequence, we get
\begin{equation}\label{5}
u_n \to u \ \mbox{in} \ C^2(\overline{\Omega}).
\end{equation}
By \eqref{5}, passing to the limit as $n\to \infty$ in \eqref{1} we have
\begin{equation}
-\Delta u=\frac{h}{M\left(x,|\nabla u|_{2}^{2}\right)}.
\end{equation}
The last equality and Theorem 0.5 in \cite{AP} allow us to conclude that $u\in X$.
Finally, by the linearity of the Laplacian, we have
\begin{equation}\label{6}
-\Delta(u_n - u) =
\frac{h_n}{M\left(x,|\nabla u_{n}|_{2}^{2}\right)}- \frac{h}{M\left(x,|\nabla u|_{2}^{2}\right)}.
\end{equation}
From \eqref{6} and Theorem 0.5 in \cite{AP} we conclude that $u_{n}\to u$ in $X$.
\end{proof}
\medskip
\begin{proposition}\label{Prop}
Let $a, b\in C^{0, \gamma}(\overline{\Omega})$ and $u\in X$. If
\begin{equation}\label{Int}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\neq 1/2
\end{equation}
holds then $\Psi$ is locally invertible in $u$.
\end{proposition}
\begin{proof}
We will apply Theorem \ref{t1} to prove this proposition. It is standard to show that $\Psi\in C^{1}(X, Y)$ and
$$
\Psi'(u)v=2b(x)\Delta u\int_{\Omega}\nabla u\nabla vdx+M(x,|\nabla u|_{2}^{2})\Delta v.
$$
It remains to prove that $\Psi'(u): X\to Y$ is an isomorphism. It is clear that if $u=0$ there is nothing to prove. Now, if $u\neq 0$, observe that $\Psi'(u)$ is an isomorphism if, and only if, for each $g\in Y$ given, there is a unique $v\in X$ such that $\Psi'(u)v=-g$, that is,
\begin{equation}\label{7}
-M(x,|\nabla u|_{2}^{2})\Delta v=g(x)+2b(x)\Delta u\int_{\Omega}\nabla u\nabla vdx.
\end{equation}
From the Divergence Theorem, the equation in (\ref{7}) is equivalent to
\begin{equation}\label{8}
-M(x,|\nabla u|_{2}^{2})\Delta v=g(x)-2b(x)\Delta u\int_{\Omega}u\Delta vdx.
\end{equation}
Consequently, $\Psi'(u)$ is an isomorphism if, and only if, for each $g\in Y$ given, there is a unique $v\in X$ such that
\begin{equation}\label{9}
\Delta v=\frac{2b(x)\Delta u\int_{\Omega}u\Delta vdx}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}.
\end{equation}
\medskip
To study equation (\ref{9}) we define the mapping $T: Y\to Y$ by
\begin{equation}\label{10}
T(w)=\frac{2b(x)\Delta u\int_{\Omega}u w dx}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}
\end{equation}
and we note that, since for each $w\in Y$ problem
\begin{equation}\label{LP}\tag{LP}
\left \{ \begin{array}{ll}
\Delta z=w(x) & \mbox{in $\Omega$,}\\
z=0 & \mbox{on $\partial\Omega$,}
\end{array}\right.
\end{equation}
has a unique solution $z\in X$, looking for solutions of (\ref{9}) is equivalent to finding fixed points of $T$. Denoting $t=\int_{\Omega}u w dx$, it follows that $w$ is a fixed point of $T$ if, and only if,
\begin{equation}\label{11}
w=T(w)=t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}.
\end{equation}
\medskip
\medskip
Therefore $w$ is a fixed point of $T$ if, and only if,
$$
T\left( t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}\right)=t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}.
$$
From (\ref{10}), we get
$$
\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}\int_{\Omega}u\left[ t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}\right]dx=t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}.
$$
Since $b>0$ and $\Delta u\not\equiv 0$ (because $u\neq 0$), $T$ admits a fixed point if, and only if,
$$
2\int_{\Omega}u\left[ t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})}\right]dx=t,
$$
namely,
\begin{equation}\label{13}
t\left[ \int_{\Omega}\frac{2b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx-1\right]=2\int_{\Omega}\frac{g(x)u}{M(x, |\nabla u|_{2}^{2})}dx.
\end{equation}
\medskip
Equality (\ref{13}) tells us that if \eqref{Int} holds then $T$ has a unique fixed point $w$ given by
$$
w=t\frac{2b(x)\Delta u}{M(x,|\nabla u|_{2}^{2})}-\frac{g(x)}{M(x,|\nabla u|_{2}^{2})},
$$
with
$$
t=2\int_{\Omega}\frac{g(x)u}{M(x, |\nabla u|_{2}^{2})}dx/\left[ \int_{\Omega}\frac{2b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx-1\right].
$$
\end{proof}
\medskip
\begin{remark}
Equality \eqref{13} shows us that $\Psi'(u):X\to Y$ is not surjective if
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx= 1/2.
$$
In fact, in this case, functions $g\in Y$ such that
$$
\int_{\Omega}\frac{g(x)u}{M(x, |\nabla u|_{2}^{2})}dx\neq 0
$$
are not in the range of $\Psi'(u)$.
\end{remark}
Actually, it is possible to obtain the same (existence and) uniqueness result provided in \cite{FMSS} for signed functions as a consequence of the Global Inverse Theorem and the previous proposition. This is exactly the content of the next corollary.
\begin{corollary}\label{alt}
For each signed function $h\in Y$ given, problem (\ref{P}) has a unique solution.
\end{corollary}
\begin{proof}
First of all, we define the sets
$$
P_{1}=\{u\in X: \Delta u\geq 0\}\subset X
$$
and
$$
P_{2}=\{h\in Y: h\geq 0\}\subset Y.
$$
Consider $P_{1}\cup (-P_{1})$ and $P_{2}\cup (-P_{2})$ as metric spaces whose metrics are induced from $X$ and $Y$, respectively.
It is clear that $P_{1}\cup (-P_{1})$ is arcwise connected (because $P_{1}$ and $-P_{1}$ are convex sets and $P_{1}\cap(-P_{1})=\{0\}$) and closed in $X$. On the other hand, since $P_{2}\cup (-P_{2})$ is the union of the closed cone of nonnegative functions of $Y$ with the closed cone of nonpositive functions of $Y$, it follows that $P_{2}\cup (-P_{2})$ is simply connected.
From $\Psi(P_{1})\subset P_{2}$ and $\Psi(-P_{1})\subset (-P_{2})$, it follows that $\Psi$ is well defined from $P_{1}\cup (-P_{1})$ to $P_{2}\cup (-P_{2})$.
Moreover, since $\Psi$ is proper from $X$ to $Y$ (see Proposition \ref{proper}) and $P_{1}\cup (-P_{1})$ and $P_{2}\cup (-P_{2})$ are closed subsets of $X$ and $Y$, respectively, it follows that $\Psi$ is proper from $P_{1}\cup (-P_{1})$ to $P_{2}\cup (-P_{2})$.
Note that if $u\in P_{1}$ (resp. $u\in -P_{1}$) then, since $u$ is (the unique) solution to the problem
\begin{equation}
\left \{ \begin{array}{ll}
\Delta u=\Delta u & \mbox{in $\Omega$,}\\
u=0 & \mbox{on $\partial\Omega$.}
\end{array}\right.
\end{equation}
with nonnegative (resp. nonpositive) right-hand side $\Delta u$, it follows from the maximum principle that $u\leq 0$ (resp. $u \geq 0$). In either case $u\Delta u\leq 0$ in $\Omega$. Whence, we have
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\leq 0, \ \forall \ u\in P_{1}\cup (-P_{1}).
$$
Therefore, from Proposition \ref{Prop}, $\Psi:P_{1}\cup (-P_{1})\to P_{2}\cup (-P_{2})$ is locally invertible. The result now follows from the Global Inverse Theorem.
\end{proof}
The next corollary does not ensure uniqueness of solution for problem \eqref{P} when the given function $h$ is sign changing, but it tells us that there is a unique solution with ``little variation'' whenever the given $h\in Y$ (signed or not) has ``little variation''.
\begin{corollary}
There are positive constants $\varepsilon,\delta$ such that for each $h\in Y$ with $\|h\|_{Y}<\varepsilon$, problem \eqref{P} has a unique solution $u$ with $\|u\|_{X}<\delta$.
\end{corollary}
\begin{proof}
It is sufficient to note that when $u=0$ the integral in the previous proposition vanishes, so that, by Proposition \ref{Prop} and Theorem \ref{t1}, $\Psi$ is locally invertible at $u=0$.
\end{proof}
We are now ready to prove Theorem \ref{Main1}.
\medskip
{\sl Proof of Theorem \ref{Main1}.}
\medskip
Since $X$ and $Y$ are Banach spaces, $X$ is arcwise connected and $Y$ is simply connected. Moreover, from Proposition \ref{proper}, the operator $\Psi$ is proper and, by the Divergence Theorem,
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx=\frac{1}{\theta+|\nabla u|_{2}^{2}}\int_{\Omega}u\Delta u dx=-\frac{|\nabla u|_{2}^{2}}{\theta+|\nabla u|_{2}^{2}}<0, \ \forall \ u\in X.
$$
The result follows directly from Proposition \ref{Prop} and the Global Inverse Theorem.
$\square$
\medskip
The next proposition provides a sufficient condition on the functions $a$ and $b$ under which \eqref{Int} holds when $a/b$ is not constant.
\begin{proposition}\label{Eureka}
Let $a, b\in C^{2, \gamma}(\overline{\Omega})$ and $c=a/b$.
\begin{itemize}
\item[$(i)$] If $\Delta c\geq 2|\nabla c|^{2}/c \ \mbox{in} \ \Omega$, then
\begin{equation}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\leq 0, \ \forall \ u\in X.
\end{equation}
\item[$(ii)$] If $\Delta c< 2|\nabla c|^{2}/c \ \mbox{in some open} \ \Omega_{0}\subset \Omega$ then
\begin{equation}\label{20}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx<1/2, \ \forall \ u\in X,
\end{equation}
provided that
$$
\frac{|\nabla c|_{\infty}c_{M}}{\sqrt{\lambda_{1}}c_{L}^{2}}\leq 3/2.
$$
\end{itemize}
\end{proposition}
\begin{proof}
Factoring $b$ out in the integral (\ref{Int}), we get
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx=\int_{\Omega}\frac{u\Delta u}{c+|\nabla u|_{2}^{2}}dx,
$$
where $c=c(x)=a(x)/b(x)$. From Divergence Theorem, we have
$$
\int_{\Omega}\frac{u\Delta u}{c+|\nabla u|_{2}^{2}}dx=-\int_{\Omega}\nabla\left(\frac{u}{c+|\nabla u|_{2}^{2}}\right)\nabla u dx.
$$
Since
$$
\nabla\left(\frac{u}{c+|\nabla u|_{2}^{2}}\right)=\frac{1}{c+|\nabla u|_{2}^{2}}\nabla u- \frac{u}{(c+|\nabla u|_{2}^{2})^{2}}\nabla c,
$$
we conclude that
\begin{eqnarray*}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx&=&-\int_{\Omega}\frac{|\nabla u|^{2}}{c+|\nabla u|_{2}^{2}}dx+
\int_{\Omega}\frac{u\nabla u\nabla c}{(c+|\nabla u|_{2}^{2})^{2}}dx\\
&=& -\int_{\Omega}\frac{|\nabla u|^{2}}{c+|\nabla u|_{2}^{2}}dx+\frac{1}{2}
\int_{\Omega}\frac{\nabla(u^{2})\nabla c}{(c+|\nabla u|_{2}^{2})^{2}}dx.
\end{eqnarray*}
Using again the Divergence Theorem
\begin{equation}\label{14}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx=-\int_{\Omega}\frac{|\nabla u|^{2}}{c+|\nabla u|_{2}^{2}}dx+\frac{1}{2}
\int_{\Omega}u^{2}\left\{-div\left[\frac{\nabla c}{(c+|\nabla u|_{2}^{2})^{2}}\right]\right\}dx.
\end{equation}
(i) In this case, from Lemma \ref{positive} (see also Remark \ref{ok}), $\mathcal{A}=\emptyset$ and, consequently, for each $u\in X$ we have
$$
\int_{\Omega}u^{2}\left\{-div\left[\frac{\nabla c}{(c+|\nabla u|_{2}^{2})^{2}}\right]\right\}dx\leq 0.
$$
Whence, by \eqref{14},
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\leq 0, \ \forall \ u\in X.
$$
(ii) In this case $\mathcal{A}\neq \emptyset$. If $u\in X$ is such that $|\nabla u|_{2}^{2}\not\in\mathcal{A}$, we have already seen that
$$
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\leq 0.
$$
Now, if $u\in X$ is such that $|\nabla u|_{2}^{2}\in\mathcal{A}$ then, from \eqref{14} and Theorem \ref{TE} (see also Remark \ref{re}), we obtain
\begin{equation}\label{15}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx\leq \left(\frac{1}{2\lambda_{\alpha}}-1\right)\int_{\Omega}\frac{|\nabla u|^{2}}{c+\alpha} dx,
\end{equation}
where $\alpha:=|\nabla u|_{2}^{2}$. If $\alpha$ is such that $1/2\leq \lambda_{\alpha}$ then, by \eqref{15}, $\int_{\Omega}b(x)u\Delta u/M(x,|\nabla u|_{2}^{2}) dx\leq 0$. Finally, if $0<\lambda_{\alpha}<1/2$, it follows from $\alpha=|\nabla u|_{2}^{2}$ and from Corollary \ref{EE} that
\begin{equation}\label{19}
\int_{\Omega}\frac{b(x)u\Delta u}{M(x,|\nabla u|_{2}^{2})}dx<\frac{|\nabla c|_{\infty}(c_{M}+\alpha)}{\sqrt{\lambda_{1}}(c_{L}+\alpha)^{2}}-1=:g(\alpha).
\end{equation}
We have that $g(0)=|\nabla c|_{\infty}c_{M}/\sqrt{\lambda_{1}}c_{L}^{2}-1$ and
$$
g'(\alpha)=\frac{|\nabla c|_{\infty}(c_{L}-2c_{M}-\alpha)}{\sqrt{\lambda_{1}}(c_{L}+\alpha)^{3}}<0, \ \forall \alpha>0.
$$
Therefore $g$ is decreasing, so that $g(\alpha)\leq g(0)=\frac{|\nabla c|_{\infty}c_{M}}{\sqrt{\lambda_{1}}c_{L}^{2}}-1$ for all $\alpha>0$ and, from \eqref{19}, we conclude that if
$$
\frac{|\nabla c|_{\infty}c_{M}}{\sqrt{\lambda_{1}}c_{L}^{2}}\leq\frac{3}{2}
$$
then $g(\alpha)\leq 1/2$ and \eqref{20} holds.
\end{proof}
Below we give the proof of our main uniqueness result for problem \eqref{P}, which covers sign-changing functions.
{\sl Proof of Theorem \ref{Main2}.}
It follows directly from Proposition \ref{proper}, Proposition \ref{Prop}, Proposition \ref{Eureka} and the Global Inverse Theorem.
$\square$
Theorems \ref{Main1} and \ref{Main2} seem to indicate that, in the case that $h$ is sign changing, the uniqueness of solution to problem \eqref{P} is, in some way, related to the variation of $a/b$. In any case, the question of what happens with the number of solutions of \eqref{P} remains open when $h$ is sign changing, $\Delta c< 2|\nabla c|^{2}/c \ \mbox{in some open} \ \Omega_{0}\subset \Omega$ and $|\nabla c|_{\infty}c_{M}/\sqrt{\lambda_{1}}c_{L}^{2}$ is large.
\section{Introduction}\label{section1}
The one dimensional Hausdorff operator is defined by
$$ H_{\Phi}(f)(x)=\int\limits_{0}^\infty{\frac{\Phi(y)}{y}f\left(\frac{x}{y}\right)dy},$$
where $\Phi$ is an integrable function on the positive half-line. The Hausdorff operator may be traced back to Hurwitz and Silverman \cite{Hurwitz}, who used it to study summability of number series (see also \cite{Hausdorff}). It is well known that the Hausdorff operator is one of the important operators in harmonic analysis, and it is used to solve certain classical problems in analysis. It is worth pointing out that if the kernel function $\Phi$ is chosen appropriately, then the Hausdorff operator reduces to many classical operators in analysis such as the Hardy operator, the Ces\`{a}ro operator, the Riemann-Liouville fractional integral operator and the Hardy-Littlewood average operator (see, e.g., \cite{Andersen}, \cite{Christ}, \cite{FGLY2015}, \cite{Miyachi} and references therein).
\vskip 5pt
In 2002, Brown and M\'{o}ricz \cite{BM} extended the study of Hausdorff operator to the high dimensional space.
Let $\Phi$ be a locally integrable function on $\mathbb R^n$. The $n$-dimensional Hausdorff operator $H_{\Phi,A}$ associated to the kernel function $\Phi$ is then defined in terms of the integral form as follows
\begin{equation}\label{Hausdorff1}
H_{\Phi, A}(f)(x)=\int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}f(A(t) x)dt},\,x\in\mathbb R^n,
\end{equation}
where $A(t)$ is an $n\times n$ invertible matrix for almost every $t$ in the support of $\Phi$. It should be pointed out that if we take $\Phi(t)=|t|^n\psi(t_1)\chi_{[0,1]^n}(t)$ and $A(t)= t_1.I_n$ ($I_n$ is the identity matrix), for $t=(t_1,t_2,...,t_n)$, where $\psi:[0,1]\to [0,\infty)$ is a measurable function, then $H_{\Phi,A}$ reduces to the weighted Hardy-Littlewood average operator due to Carton-Lebrun and Fosset \cite{Carton-Lebrun}, defined as follows
\begin{equation}\label{Upsi}
U_{\psi}(f)(x)=\int\limits_{0}^{1}{f(tx)\psi(t)dt},\,\,x\in\mathbb R^n.
\end{equation}
Similarly, by taking $\Phi(t)=|t|^n\psi(t_1)\chi_{[0,1]^n}(t)$ and $A(t)= s(t_1).I_n$, with $s:[0,1]\to \mathbb R$ being a measurable function, it is easy to see that $H_{\Phi,A}$ reduces to the weighted Hardy-Ces\`{a}ro operator $U_{\psi,s}$ defined by Chuong and Hung \cite{CH2014} as follows
\begin{equation}\label{Upsi}
U_{\psi,s}(f)(x)=\int\limits_{0}^{1}{f(s(t)x)\psi(t)dt},\,\,x\in\mathbb R^n.
\end{equation}
\vskip 5pt
In recent years, the theory of weighted Hardy-Littlewood average operators, Hardy-Ces\`{a}ro operators and Hausdorff operators has been significantly developed in various contexts (for more details see \cite{BM}, \cite{CH2014}, \cite{CDH2016}, \cite{CHH2016}, \cite{Moricz2005}, \cite{Xiao2001} and references therein). Also, remark that Coifman and Meyer \cite{Coifman1}, \cite{Coifman2} discovered a multilinear point of view in their study of certain singular integral operators. Thus, research on the theory of multilinear operators is driven not only by the pure question of generalizing the theory of linear operators but also by their deep applications in harmonic analysis. In 2015, Hung and Ky \cite{HK2015} introduced the weighted multilinear Hardy-Ces\`{a}ro type operators, which generalize the weighted multilinear Hardy operators in \cite{FGLY2015}, defined as follows:
\begin{equation}\label{MulUpsi}
U_{\psi,\vec {s}}^{m, n}(f_1,...,f_m)(x)=\int\limits_{[0,1]^n}{\Big(\prod\limits_{i=1}^{m}f_i(s_i(t)x)\Big)\psi(t)dt},\,\,x\in\mathbb R^n,
\end{equation}
where $\psi:[0,1]^n\to [0,\infty)$, and $s_1,...,s_m:[0,1]^n\to \mathbb R$ are measurable functions. By the relation between Hausdorff operator and Hardy-Ces\`{a}ro operator as mentioned above, we shall introduce in this paper a more general multilinear operator of Hausdorff type defined as follows.
\begin{definition}
Let $\Phi:\mathbb R^n \to [0,\infty)$ and let $A_i(t)$, for $i=1,...,m$, be $n\times n$ invertible matrices for almost every $t$ in the support of $\Phi$. Given measurable functions $f_1, f_2, ..., f_m:\mathbb R^n\to \mathbb C$, the multilinear Hausdorff operator $H_{\Phi,\vec A}$ is defined by
\begin{equation}\label{mulHausdorff}
{H_{\Phi ,\vec{A} }}(\vec{f})(x) = \int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{f_i}} ({A_i}(t)x)dt,\,x\in\mathbb R^n,
\end{equation}
for $\vec{f}=\left(f_1, ..., f_m\right)$ and $\vec{A}=\left(A_1, ..., A_m\right)$.
\end{definition}
It is obvious that when $\Phi(t)=|t|^n.\psi(t)\chi_{[0,1]^n}(t)$ and $A_i(t)= s_i(t).I_n$, where $\psi:[0,1]^n\to [0,\infty), s_1,...,s_m:[0,1]^n\to \mathbb R$ are measurable functions, then the multilinear Hausdorff operator $H_{\Phi,\vec A}$ reduces to the weighted multilinear Hardy-Ces\`{a}ro operator $U_{\psi,\vec {s}}^{m, n}$ above.
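For the reader's convenience, the reduction is a direct computation: with these choices the kernel factor $|t|^{n}$ cancels and the integral localizes to $[0,1]^n$, namely
\[
{H_{\Phi ,\vec{A} }}(\vec{f})(x) = \int\limits_{[0,1]^n}{\frac{|t|^{n}\psi(t)}{|t|^{n}}\prod\limits_{i=1}^{m}f_i(s_i(t)x)dt}=U_{\psi,\vec {s}}^{m, n}(f_1,...,f_m)(x).
\]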
\vskip 5pt
It is also interesting that the theory of function spaces with variable exponents has attracted much attention because of its role in the study of electrorheological fluids and its important applications to elasticity, fluid dynamics, image restoration, and differential equations (see \cite{Almeida}, \cite{Chuong}, \cite{Chuong2016}, \cite{Diening}, \cite{H2000}, \cite{Jacob}, \cite{CUF2013}). The foundational results and powerful applications of some function spaces with variable exponents in harmonic analysis and partial differential equations are given in the books \cite{CUF2013}, \cite{Diening1} and the references therein. It is well known that the Calder\'{o}n-Zygmund singular operators, the Hardy-type operators and their commutators have been extensively investigated on the Lebesgue, Herz, Morrey, and Morrey-Herz spaces with variable exponent (see, e.g., \cite{Bandaliev2010}, \cite{Capone}, \cite{Cruz-Uribe}, \cite{Guliyev}, \cite{Mamedov}, \cite{Mashiyev}, \cite{LZ2014}, \cite{Rafeiro}, \cite{WZ2016}, and others).
\vskip 5pt
Motivated by the above-mentioned results, the goal of this paper is to establish the necessary and sufficient conditions for the boundedness of multilinear Hausdorff operators on the product of weighted Lebesgue, central Morrey, Herz, and Morrey-Herz spaces with variable exponent. In each case, the estimates for the operator norms are worked out. It should be pointed out that all results in this paper are new even in the case of linear Hausdorff operators.
\vskip 5pt
Our paper is organized as follows. In Section 2, we give necessary preliminaries on weighted Lebesgue spaces, central Morrey spaces, Herz spaces and Morrey-Herz spaces with variable exponent. Our main theorems are given and proved in Section 3.
\section{Preliminaries}\label{section2}
Before stating our results in the next section, let us give some basic facts and notations which will be used throughout this paper. By $\|T\|_{X\to Y}$, we denote the norm of $T$ between two normed vector spaces $X$ and $Y$. The letter $C$ denotes a positive constant which is independent of the main parameters, but may be different from line to line. Given a measurable set $\Omega$, let us denote by $\chi_\Omega$ its characteristic function, by $|\Omega|$ its Lebesgue measure, and by $\omega(\Omega)$ the integral $\int\limits_{\Omega}\omega(x)dx$. For any $a\in\mathbb R^n$ and $r>0$, we denote by $B(a,r)$ the ball centered at $a$ with radius $r$.
\vskip 5pt
Next, we write $a \lesssim b$ to mean that there is a positive constant $C$, independent of the main parameters, such that $a\leq Cb$. The symbol $f\simeq g$ means that $f$ is equivalent to $g$ (i.e.~$C^{-1}f\leq g\leq Cf$).
In what follows, we denote $\chi_k=\chi_{C_k}$, $C_k=B_k\setminus B_{k-1}$ and $B_k = \big\{x\in \mathbb R^n: |x| \leq 2^k\big\}$, for all $k\in\mathbb Z$. Now, we are in a position to give some notations and definitions of Lebesgue, Herz, Morrey and Morrey-Herz spaces with constant parameters. For further information on these spaces as well as their deep applications in analysis, the interested reader may refer to the work \cite{ALP} and to the monograph \cite{Lu}.
In this paper, as usual, we will denote by $\omega(\cdot)$ a non-negative weighted function on $\mathbb R^n$.
\begin{definition} Let $1\leq p<\infty$, we define the weighted Lebesgue space $L^p(\omega)$ of a measurable function $f$ by
\[
\big\|f\big\|_{L^p(\omega)} = \left(\int\limits_{\mathbb R^n} {{{\left| {f(x)} \right|^p\omega(x)}}dx} \right)^{\frac{1}{p}}<\infty,
\]
and for $p=\infty$ by
\[
\big\|f\big\|_{L^\infty(\omega)} = {\rm inf} \big\{M>0: \omega\big(\big\{x\in \mathbb R^n:|f(x)|>M\big\}\big)=0 \big\}<\infty.
\]
\end{definition}
\begin{definition}
Let $\alpha\in\mathbb R$, $0<q<\infty$, and $0<p<\infty$. The weighted homogeneous Herz-type space ${\mathop{K}\limits^.}^{\alpha, p}_q(\omega)$ is defined by
\[
{\mathop{K}\limits^.}^{\alpha, p}_q(\omega)=\left\{f\in L^q_\text {loc}(\mathbb R^n\setminus\{0\},\omega):\|f\|_{{\mathop{K}\limits^.}^{\alpha,p}_q(\omega)} <\infty \right \},
\]
where $\|f\|_{{\mathop{K}\limits^.}^{\alpha,p}_q(\omega)}=\left(\sum\limits_{k=-\infty}^\infty 2^{k\alpha p}\|f\chi_k\|^p_{L^q(\omega)}\right)^{\frac{1}{p}}.$
\end{definition}
\begin{definition}
Let $\lambda \in \mathbb R$ and $1\leq p<\infty$. The weighted central Morrey space ${\mathop B\limits^.}^{p,\lambda}({\omega})$
is defined by the set of all locally $p$-integrable functions $f$ satisfying
\[
{\left\| f \right\|_{{\mathop B\limits^.}^{p,\lambda}({\omega})}} = \mathop {\sup }\limits_{R\, > 0} {\left( {\dfrac{1}{{{\omega}(B(0,R))^{1+\lambda p}}}\int\limits_{B(0,R)} {{{\left| {f(x)} \right|}^p}\omega(x)dx} } \right)^{\frac{1}{p}}} < \infty .
\]
\end{definition}
\begin{definition}
Let $\alpha \in \mathbb R$, $0 < p < \infty, 0 < q <\infty, \lambda \geq 0$. The homogeneous weighted Morrey-Herz type space ${M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q}(\omega)$ is defined by
\[
{M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q}(\omega)=\left\{f\in L^q_\text {loc}(\mathbb R^n\setminus\{0\},\omega):\|f\|_{{M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q}(\omega)} <\infty \right \},
\]
where $\|f\|_{{M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q}(\omega)}=\sup\limits_{k_0\in\mathbb Z}2^{-k_0\lambda}\left(\sum\limits_{k=-\infty}^{k_0} 2^{k\alpha p} \|f\chi_k\|^p_{L^q(\omega)}\right)^{\frac{1}{p}}.$
\end{definition}
\hspace {-15pt} {\bfseries Remark 1.} It is useful to note that ${\mathop{K}\limits^.}^{0,p}_{p}(\mathbb R^n)= L^p(\mathbb R^n)$ for $0<p<\infty$; ${\mathop K\limits^.}^{\alpha/p,p}_{p}(\mathbb R^n)= L^p(|x|^\alpha dx)$ for all $0<p<\infty$ and $\alpha\in\mathbb R$. Since ${M\mathop{K}\limits^.}^{\alpha,0}_{p,q}(\mathbb R^n)={\mathop{K}\limits^.}^{\alpha, p}_{q}(\mathbb R^n)$, it follows that the Herz space is a special case of the Morrey-Herz space. Therefore, the Herz spaces are natural generalisations of the Lebesgue spaces with power weights.
\vskip 5pt
Now, we present the definition of the Lebesgue space with variable exponent. For further readings on its deep applications in harmonic analysis, the interested reader may find in the works \cite{CUF2013}, \cite{Diening} and \cite{Diening1}.
\begin{definition}
Let $\mathcal P(\mathbb R^n)$ be the set of all measurable functions $p(\cdot):\mathbb R^n\to [1,\infty]$. For $p(\cdot)\in \mathcal P(\mathbb R^n)$, the variable exponent Lebesgue space $L^{p(\cdot)}(\mathbb R^n)$ is the set of all complex-valued measurable functions $f$ defined on $\mathbb R^n$ such that there exists a constant $\eta >0$ satisfying
\[
F_{p}(f/\eta):=\int\limits_{\mathbb R^n\setminus\Omega_\infty}\left({\frac{|f(x)|}{\eta}}\right)^{p(x)}dx +\big\|f/\eta\big\|_{L^{\infty}(\Omega_\infty)}<\infty,
\]
where $\Omega_{\infty}=\big\{ x\in\mathbb R^n: p(x)=\infty\big\}$. When $|\Omega_{\infty}|=0$, this reduces to
\[
F_{p}(f/\eta)=\int\limits_{\mathbb R^n}\left({\frac{|f(x)|}{\eta}}\right)^{p(x)}dx<\infty.
\]
\end{definition}
The variable exponent Lebesgue space $L^{p(\cdot)}(\mathbb R^n)$ then becomes a norm space equipped with a norm given by
$$\left\|f\right\|_{L^{p(\cdot)}}= \inf \left\{\eta>0: F_p\left(\frac{f}{\eta}\right)\leq 1\right\}.$$
Let us denote by $\mathcal P_{b}(\mathbb R^n)$ the class of exponents $q(\cdot)\in\mathcal P(\mathbb R^n)$ such that
\[
1 < q_{-}\leq q(x) \leq q_{+}<\infty,\, \text{ for all }\,x\in\mathbb R^n,
\]
where $q_{-}= \text{ess\,inf}_{x\in\mathbb R^n}q(x)$ and $q_{+}= \text{ess\,sup}_{x\in\mathbb R^n}q(x)$.
For $p\in\mathcal P_{b}(\mathbb R^n)$, it is useful to remark that we have the following inequalities, which will be used frequently in the sequel.
\begin{eqnarray}\label{maxminvar}
&&\hskip -30pt [i]\,\, \textit{ \rm If }\, F_{p}(f)\leq C, \textit{ \rm then}\, \big\|f\big\|_{L^{p(\cdot)}} \leq \textit{\rm max}\big\{ {C}^{\frac{1}{p_-}}, {C}^{\frac{1}{p_+}} \big\},\,\textit{\rm for all}\,\, f\in L^{p(\cdot)}(\mathbb R^n),
\nonumber
\\
&&\hskip - 30pt [ii]\textit{ \rm If }\, F_{p}(f)\geq C, \textit{ \rm then}\, \big\|f\big\|_{L^{p(\cdot)}}\geq \textit{\rm min}\big\{ {C}^{\frac{1}{p_-}}, {C}^{\frac{1}{p_+}} \big\},\,\textit{\rm for all}\, f\in L^{p(\cdot)}(\mathbb R^n).
\end{eqnarray}
\\
The space ${\mathcal P}_\infty(\mathbb R^n)$ is defined as the set of all measurable functions $q(\cdot)\in\mathcal P(\mathbb R^n)$ for which the limit
\[
q_{\infty} =\lim\limits_{|x|\to\infty}{q(x)}
\]
exists.
\\
For $p(\cdot)\in\mathcal P(\mathbb R^n)$, the weighted variable exponent Lebesgue space $L^{p(\cdot)}_\omega(\mathbb R^n)$ is the set of all complex-valued measurable functions $f$ such that $f\omega$ belongs to the $L^{p(\cdot)}(\mathbb R^n)$ space, and the norm of $f$ in $L^{p(\cdot)}_\omega(\mathbb R^n)$ is given by
\[
\big\|f\big\|_{L^{p(\cdot)}_\omega}=\big\|f\omega\big\|_{L^{p(\cdot)}}.
\]
Let $\mathbf C_0^{\text{log}}(\mathbb R^n) $ denote the set of all log-H\"{o}lder continuous functions $\alpha (\cdot)$ satisfying at the origin
\[
\left|\alpha(x)-\alpha(0)\right|\leq \dfrac{C_0^\alpha}{\text{log}\left(e+\frac{1}{|x|}\right)},\, \text{ for all }\, x\in\mathbb R^n.
\]
Denote by $\mathbf C_\infty^{\text{log}}(\mathbb R^n) $ the set of all log-H\"{o}lder continuous functions $\alpha (\cdot)$ satisfying at infinity
\[
\left|\alpha(x)-\alpha_{\infty}\right|\leq \dfrac{C_\infty^\alpha}{\text{log}(e+|x|)},\, \text{ for all }\, x\in\mathbb R^n.
\]
\vskip 5pt
Next, we would like to give the definition of variable exponent weighted Herz spaces ${\mathop{K}\limits^.}^{\alpha(\cdot),p}_{q(\cdot),\omega}$ and the definition of variable exponent weighted Morrey-Herz spaces ${M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}$ (see \cite{LZ2014}, \cite{WZ2016} for more details).
\begin{definition}
Let $0<p<\infty, q(\cdot)\in \mathcal P_b(\mathbb R^n)$ and $\alpha(\cdot):\mathbb R^n\to \mathbb R$ with $\alpha(\cdot)\in L^{\infty}(\mathbb R^n)$. The variable exponent weighted Herz space ${\mathop{K}\limits^.}^{\alpha(\cdot),p}_{q(\cdot),\omega}$ is defined
by
\[
{\mathop{K}\limits^.}^{\alpha(\cdot),p}_{q(\cdot),\omega}=\left\{f\in L^{q(\cdot)}_\text {loc}(\mathbb R^n\setminus\{0\}):\|f\|_{{\mathop{K}\limits^.}^{\alpha(\cdot),p}_{q(\cdot),\omega}} <\infty \right \},
\]
where $\|f\|_{{\mathop{K}\limits^.}^{\alpha(\cdot),p}_{q(\cdot),\omega}}=\left(\sum\limits_{k=-\infty}^{\infty} \|2^{k\alpha(\cdot)} f\chi_k\|^p_{L^{q(\cdot)}_\omega}\right)^{\frac{1}{p}}.$
\end{definition}
\begin{definition}
Assume that $0\leq \lambda<\infty, 0<p<\infty, q(\cdot)\in \mathcal P_b(\mathbb R^n)$ and $\alpha(\cdot):\mathbb R^n\to \mathbb R$ with $\alpha(\cdot)\in L^{\infty}(\mathbb R^n)$. The variable exponent weighted Morrey-Herz space ${M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot), \omega}$ is defined
by
\[
{M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}=\left\{f\in L^{q(\cdot)}_\text {loc}(\mathbb R^n\setminus\{0\}):\|f\|_{{M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}} <\infty \right \},
\]
where $\|f\|_{{M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}}=\sup\limits_{k_0\in\mathbb Z}2^{-k_0\lambda}\left(\sum\limits_{k=-\infty}^{k_0} \|2^{k\alpha(\cdot)} f\chi_k\|^p_{L^{q(\cdot)}_\omega}\right)^{\frac{1}{p}}.$
\end{definition}
Note that, when $p(\cdot)$, $q(\cdot)$ and $\alpha(\cdot)$ are constant, it is obvious that
\begin{equation}\label{rel-func}
L^{p}_{{\omega}^{1/p}}= L^{p}(\omega),\,\,{\mathop{K}\limits^.}^{\alpha,p}_{q,{\omega}^{1/p}}={\mathop{K}\limits^.}^{\alpha,p}_{q}(\omega)\,\,\textit{\rm and}\,\,{{M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q,{\omega}^{1/p}}}={{M\mathop{K}\limits^.}^{\alpha,\lambda}_{p,q}(\omega)}.
\end{equation}
By the definition of the weighted Morrey-Herz spaces with variable exponent and Proposition 2.5 in \cite{LZ2014}, we have the following result. The proof is straightforward and may be found in \cite{WZ2016}.
\begin{lemma}\label{lemmaVE}
Let $\alpha(\cdot)\in L^{\infty}(\mathbb R^n)$, $q(\cdot)\in \mathcal P_b(\mathbb R^n)$, $p\in(0, \infty)$ and $\lambda\in [0,\infty)$. If $\alpha(\cdot)$ is log-H\"{o}lder continuous
both at the origin and at infinity, then
\[
\big\|f\chi_j\big\|_{L^{q(\cdot)}_\omega} \leq C. 2^{j(\lambda-\alpha(0))}\big\|f\big\|_{{M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}},\,\,\textit{for all}\,\, j\in\mathbb Z^-,
\]
and
\[
\big\|f\chi_j\big\|_{L^{q(\cdot)}_\omega} \leq C. 2^{j(\lambda-\alpha_\infty)}\big\|f\big\|_{{M\mathop{K}\limits^.}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}},\,\,\textit{for all}\,\, j\in\mathbb N.
\]
\end{lemma}
We also extend to define two-weight $\lambda$-central Morrey spaces with variable-exponent as follows.
\begin{definition}
For $\lambda \in \mathbb R$ and $p\in\mathcal P_\infty(\mathbb R^n)$, we denote by ${\mathop B\limits^.}_{\omega_1,\omega_2}^{p(\cdot),\lambda}$ the class of locally integrable functions $f$ on $\mathbb R^n$ satisfying
\[
{\big\| f \big\|_{{\mathop B\limits^.}_{\omega_1,\omega_2}^{p(\cdot),\lambda }}} = \mathop {\sup }\limits_{R\, > 0} {\frac{1}{{{\omega_1}\big(B(0,R)\big)^{\lambda+\frac{1}{p_{\infty}}}}}} \big\|f\big\|_{L^{p(\cdot)}_{\omega_2}(B(0,R))}< \infty,
\]
where $\big\|f\big\|_{L^{p(\cdot)}_{\omega_2}(B(0,R))}=\big\|f\chi_{B(0,R)}\big\|_{L^{p(\cdot)}_{\omega_2}}$
and $\omega_1$, $\omega_2$ are non-negative and locally integrable functions. Moreover, when $p(\cdot)$ is constant, $\omega_1=\omega$ and $\omega_2=\omega^{\frac{1}{p}}$, it is easy to see that ${\mathop B\limits^.}_{\omega,\omega^{1/p}}^{p(\cdot),\lambda}={\mathop B\limits^.}^{p,\lambda}({\omega})$.
\end{definition}
Next, we state an embedding theorem for the Lebesgue spaces with variable exponent (see, for example, Theorem 2 in \cite{Bandaliev2010}, Theorem 2.45 in \cite{CUF2013}, Lemma 3.3.1 in \cite{Diening1}).
\begin{theorem}\label{theoembed}
Let $p(\cdot), q(\cdot)\in\mathcal P(\mathbb R^n)$ with $q(x)\leq p(x)$ for almost every $x\in \mathbb R^n$, and suppose
\[
\frac{1}{r(\cdot)}:=\frac{1}{q(\cdot)}-\frac{1}{p(\cdot)}\,\,\textit{\rm and}\,\, \big\|1\big\|_{L^{r(\cdot)}}<\infty.
\]
Then there exists a constant K such that
\[
\big\|f\big\|_{L^{q(\cdot)}_\omega}\leq K \big\|1\big\|_{L^{r(\cdot)}}\big\|f\big\|_{L^{p(\cdot)}_\omega}.
\]
\end{theorem}
\section{Main results and their proofs}
Before stating the next main results, we introduce some notations which will be used throughout this section. Let $ \gamma_1,...,\gamma_m\in\mathbb R,\lambda_1,...,\lambda_m \geq 0, p, p_i\in (0,\infty)$, $q_i\in\mathcal P_b(\mathbb R^n)$ for $i=1,...,m$ and $\alpha_1,...,\alpha_m\in L^\infty(\mathbb R^n)\cap \mathbf C_0^{\text{log}}(\mathbb R^n)\cap \mathbf C_{\infty}^{\text{log}}(\mathbb R^n)$. The functions $\alpha(\cdot),q(\cdot)$ and numbers $\gamma,\lambda$ are defined by
$$ {\alpha _1(\cdot)} + \cdots + {\alpha _m(\cdot)} = \alpha(\cdot), $$
$$\frac{1}{{{q_1(\cdot)}}}+ \cdots + \frac{1}{{{q_m(\cdot)}}} = \frac{1}{q(\cdot)}, $$
$$\gamma_1+\cdots+\gamma_m=\gamma,$$
$$ {\lambda _1} + {\lambda _2} + \cdots + {\lambda _m} = \lambda. $$
Thus, it is clear that the function $\alpha$ also belongs to the space $L^\infty(\mathbb R^n)\cap \mathbf C_0^{\text{log}}(\mathbb R^n)\cap \mathbf C_{\infty}^{\text{log}}(\mathbb R^n)$.
\vskip 5pt
For a matrix $A=(a_{ij})_{n\times n}$, we define the norm of $A$ as follows
\begin{equation}\label{normB}
\left\|A\right\|=\left(\sum\limits_{i,j=1}^{n}{{|a_{ij}|}^2}\right)^{1/2}.
\end{equation}
With this norm we conclude that $\left|Ax\right|\leq \left\|A\right\|\left|x\right|$ for any vector $x\in\mathbb R^n$. In particular, if $A$ is invertible, then we find
\begin{equation}\label{detA}
\left\|A\right\|^{-n}\leq \left|\rm det(A^{-1})\right|\leq \left\|A^{-1}\right\|^{n}.
\end{equation}
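Let us briefly indicate why \eqref{detA} holds; this is a standard estimate using only the fact that the norm \eqref{normB} dominates every singular value of $A$, so that $|\det A|\leq \left\|A\right\|^{n}$ for any $n\times n$ matrix $A$. Applying this bound to $A$ and to $A^{-1}$ gives
\[
\left|\det (A^{-1})\right|=\frac{1}{\left|\det A\right|}\geq \left\|A\right\|^{-n}
\quad\text{and}\quad
\left|\det (A^{-1})\right|\leq \left\|A^{-1}\right\|^{n}.
\]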
In this section, we will investigate the boundedness of multilinear Hausdorff operators on variable exponent Herz, central Morrey and Morrey-Herz spaces with power weights in the case of matrices having the following important property: there exists ${\rho _{\vec{A} }} \geq 1$ such that
\begin{equation}\label{DK1}
\left\| {{A_i}(t)} \right\|.\left\| {A_i^{ - 1}(t)} \right\| \leq {\rho _{\vec{A} }}, \,\,\,{\text{\rm for all }}\,i=1,...,m,
\end{equation}
for almost every $t \in \mathbb R^n$. Thus, by the properties of invertible matrices, it is easy to show that
\begin{equation}\label{DK1.1}
\left\| A_i(t) \right\|^\sigma \lesssim \left\| A_i^{-1}(t)\right\|^{-\sigma},\,\textit{\rm for all }\,\sigma \,\in \mathbb R,
\end{equation}
and
\begin{equation}\label{DK1.2}
|A_i(t)x|^\sigma \gtrsim \left\| A_i^{-1}(t)\right\|^{-\sigma}.|x|^{\sigma}, \,\textit{\rm for all }\, \sigma\in\mathbb R,x\in \mathbb R^n.
\end{equation}
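We briefly indicate how (\ref{DK1.1}) and (\ref{DK1.2}) follow from (\ref{DK1}). Since the norm (\ref{normB}) is submultiplicative and $\left\|I_n\right\|=\sqrt n$, we have
\[
\sqrt n \leq \left\| {{A_i}(t)} \right\|.\left\| {A_i^{ - 1}(t)} \right\| \leq \rho_{\vec A},
\]
so that $\left\|A_i(t)\right\|$ and $\left\|A_i^{-1}(t)\right\|^{-1}$ are comparable, with constants depending only on $n$ and $\rho_{\vec A}$; raising this comparison to the power $\sigma$ yields (\ref{DK1.1}), the implicit constant now also depending on $\sigma$. For (\ref{DK1.2}), when $\sigma\geq 0$ we raise $|A_i(t)x|\geq \left\|A_i^{-1}(t)\right\|^{-1}|x|$ to the power $\sigma$, and when $\sigma<0$ we instead raise $|A_i(t)x|\leq \left\|A_i(t)\right\|.|x|\lesssim \left\|A_i^{-1}(t)\right\|^{-1}|x|$, the inequality being reversed by the negative exponent.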
{\bfseries Remark 2.} If $A(t)=(a_{ij}(t))_{n\times n}$ is a real orthogonal matrix for almost every $t$ in $\mathbb R^n$, then $A(t)$ satisfies the property (\ref{DK1}). Indeed, the matrix norm defined in (\ref{normB}) is precisely the Frobenius norm, and we recall the following property of this norm:
\[
\sqrt{\rho(B^*.B)}\leq \big\|B\big\|\leq \sqrt{n}.\sqrt{\rho(B^*.B)},\,\, {\text {for all}}\, \,B\in \, M_n(\mathbb C),
\]
where $B^*$ is the conjugate transpose of $B$ and $\rho(B)$ denotes the spectral radius of $B$, i.e. the largest modulus of the eigenvalues of $B$. Thus, since $A^{-1}(t)$ is also a real orthogonal matrix, we get
$$ \left\|A(t)\right\|\leq \sqrt{n}\,\,\text{\rm and}\,\, \left\|A^{-1}(t)\right\|\leq \sqrt{n},$$
which immediately yields the desired result.
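For instance (a minimal illustration in $\mathbb R^2$), the rotation matrices
\[
A_1(t)=\begin{pmatrix}
\cos\theta(t) & -\sin\theta(t)\\
\sin\theta(t) & \cos\theta(t)
\end{pmatrix},
\]
with $\theta(t)$ any measurable angle, satisfy $\left\|A_1(t)\right\|=\left\|A_1^{-1}(t)\right\|=\sqrt 2$, so that the property (\ref{DK1}) holds with $\rho_{\vec A}=2$. Now, we are ready to state our first main result in this paper.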
\begin{theorem}\label{TheoremVarLebesgue}
Let $\omega_1(x)=|x|^{\gamma_1}, ... , \omega_m(x)=|x|^{\gamma_m}, \omega(x)=|x|^{\gamma}$, $q(\cdot)\in \mathcal P_b(\mathbb R^n)$, $\zeta >0$, and suppose that the following conditions hold:
\begin{equation}\label{DKVarLebesgue}
q_i(A_i^{-1}(t)\cdot)\leq \zeta.q_i(\cdot)\,\textit{\rm and}\,\, \big\|1\big\|_{L^{r_{i}(t,\cdot)}}<\infty, \, \text{a.e. }\,t\in \textit{\rm supp}(\Phi),
\end{equation}
\begin{equation}\label{DKVarLebesgue1}
\mathcal C_1= \int\limits_{\mathbb R^n}\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m} c_{A_i,q_i,\gamma_i}(t).\big\|1\big\|_{L^{r_{i}(t,\cdot)}}dt<\infty,
\end{equation}
where
\[
{c_{A_i,q_i,\gamma_i}(t)} = {{{\rm max }}\big\{ {{{\big\| {{A_i}(t)} \big\|}^{ - {\gamma _i}}},{{\big\| {A_i^{ - 1}(t)} \big\|}^{{\gamma _i}}}} \big\}}{\rm max}\big\{ {\left| {\det A_i^{ - 1}(t)} \right|^{\frac{1}{q_{i+}}}},{\left| {\det A_i^{ - 1}(t)} \right|^{\frac{1}{q_{i-}}}}\big\},
\]
\[
\dfrac{1}{r_i(t,\cdot)}=\dfrac{1}{q_i(A_i^{-1}(t)\cdot)}-\dfrac{1}{\zeta q_i(\cdot)},\,\textit{\rm for all} \,i=1,...,m.
\]
Then, $H_{\Phi,\vec{A}}$ is a bounded operator from $L^{\zeta q_1(\cdot)}_{\omega_1}\times\cdots\times L^{\zeta q_m(\cdot)}_{\omega_m}$ to $L^{ q(\cdot)}_{\omega}$.
\end{theorem}
\begin{proof}
By using the version of the Minkowski inequality for variable Lebesgue spaces from Corollary 2.38 in \cite{CUF2013}, we have
\begin{equation}\label{0HALq}
\big\|H_{\Phi,\vec{A}}(\vec{f})\big\|_{L^{q(\cdot)}_{\omega}}\lesssim\int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\big\|\prod\limits_{i=1}^{m}f_i(A_i(t).)\big\|_{L^{q(\cdot)}_{\omega}}}dt.
\end{equation}
Since $\sum\limits_{i=1}^m{\frac{1}{q_i(\cdot)}}=\frac{1}{q(\cdot)}$, applying the H\"{o}lder inequality for variable Lebesgue spaces (see also Corollary 2.28 in \cite{CUF2013}), we imply that
\[
\big\|\prod\limits_{i=1}^{m}f_i(A_i(t).)\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \prod_{i=1}^{m}\big\|f_i(A_i(t).)\big\|_{L^{q_i(\cdot)}_{\omega_i}}.
\]
Consequently, we obtain
\begin{equation}\label{HALqLeb}
\big\|H_{\Phi,\vec{A}}(\vec{f})\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\prod_{i=1}^{m}\big\|f_i(A_i(t).)\big\|_{L^{q_i(\cdot)}_{\omega_i}}}dt.
\end{equation}
For $\eta>0$, we see that
\begin{eqnarray}
&&\int\limits_{\mathbb R^n} {\left(\dfrac{\big|{{f_i}({A_i}(t).x)}\big|{\omega_i}(x)}{\eta}\right)^{q_i(x)}dx}
\nonumber
\\
&&= \int\limits_{\mathbb R^n} {{\left(\dfrac{\big| {{f_i}(z)}\big|{{\big| {A_i^{ - 1}(t)z} \big|}^{{\gamma_i}}}}{\eta}\right)^{q_i(A_i^{-1}(t).z)}}\big| {\det A_i^{-1}(t)}\big|dz}
\nonumber
\\
&&\leq \big| {\det A_i^{ - 1}(t)} \big|.\int\limits_{\mathbb R^n} {{\left(\dfrac{\max \big\{ {{{\left\| {A_i^{ - 1}(t)} \right\|}^{{\gamma_i}}},{{\left\| {{A_i}(t)} \right\|}^{ - {\gamma_i}}}} \big\}\big| {{f_i}(z)} \big|{\omega_i(z)}}{\eta}\right)^{{q_i(A_i^{-1}(t).z)}}}dz}\nonumber
\\
&&\leq\int\limits_{\mathbb R^n} {{\left(\dfrac{c_{A_i,q_i,\gamma_i}(t).\big| {{f_i}(z)} \big|{\omega_i(z)}}{\eta}\right)^{{q_i(A_i^{-1}(t).z)}}}dz}.\nonumber
\end{eqnarray}
Hence, it follows from the definition of the Lebesgue space with variable exponent that
\begin{equation}\label{MVarfiBLeb}
\big\|f_i(A_i(t).)\big\|_{L^{q_i(\cdot)}_{\omega_i}}\leq c_{A_i,q_i,\gamma_i}(t).\big\|f_i\big\|_{L^{q_i(A_i^{-1}(t)\cdot)}_{\omega_i}}.
\end{equation}
In view of ($\ref{DKVarLebesgue}$) and Theorem \ref{theoembed}, we deduce
\begin{equation}\label{nhung2}
\big\|f\big\|_{L^{q_i(A_i^{-1}(t)\cdot)}_{\omega_i}}\lesssim \big\|1\big\|_{L^{r_i(t,\cdot)}}.\big\|f\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}.
\end{equation}
Therefore, by (\ref{HALqLeb})-(\ref{nhung2}), we obtain
\[
\big\|H_{\Phi,\vec{A}}(\vec{f})\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \mathcal C_1.\prod\limits_{i=1}^m\big\|f_i\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}},
\]
which finishes the proof of this theorem.
\end{proof}
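To illustrate Theorem \ref{TheoremVarLebesgue} in the simplest setting (a sketch, with the data chosen only for this example), take $m=1$, $\zeta=1$, a constant exponent $q_1(\cdot)\equiv q$ and the dilations $A_1(t)=|t| I_n$ for $t\neq 0$. Then $q_1(A_1^{-1}(t)\cdot)= q=\zeta q_1(\cdot)$, $r_1(t,\cdot)=\infty$ and $\big\|1\big\|_{L^{\infty}}=1$, while $\left\|A_1(t)\right\|=\sqrt n\, |t|$, $\left\|A_1^{-1}(t)\right\|=\sqrt n\, |t|^{-1}$ and $\left|\det A_1^{-1}(t)\right|=|t|^{-n}$. Consequently,
\[
c_{A_1,q,\gamma_1}(t)\simeq |t|^{-\gamma_1-\frac{n}{q}}\quad\textit{\rm and}\quad \mathcal C_1\simeq \int\limits_{\mathbb R^n}\frac{\Phi(t)}{|t|^{n}}.|t|^{-\gamma_1-\frac{n}{q}}dt,
\]
so condition (\ref{DKVarLebesgue1}) becomes a Hardy-type integrability condition on $\Phi$.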
In particular, when $\zeta \leq 1$, we have the following important result for the case of matrices having the property (\ref{DK1}) above.
\begin{theorem}\label{TheoremVarLebesgue1}
Assume that the hypotheses of Theorem \ref{TheoremVarLebesgue} hold and that
\begin{equation}\label{DKVarLeb}
q_i(A_i^{-1}(t)\cdot)\leq q_i(\cdot), \big\|1\big\|_{L^{r_{1i}(t,\cdot)}}<\infty,
\end{equation}
where $\dfrac{1}{r_{1i}(t,\cdot)}=\dfrac{1}{q_i(A_i^{-1}(t)\cdot)}-\dfrac{1}{q_i(\cdot)}, \, \text{a.e. }\,t\in \textit{\rm supp}(\Phi)$, for all $i=1,...,m$.
\\
{\rm(a)} If
\[
\mathcal C_2 =\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\max\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|1\big\|_{L^{r_{1i}(t,\cdot)}}dt}<\infty,
\]
then
\[
\big\|H_{\Phi,\vec A}(\vec f)\big\|_{L^{ q(\cdot)}_{\omega}}\lesssim \mathcal C_2\prod\limits_{i=1}^{m}\big\| f_i\big\|_{L^{q_i(\cdot)}_{\omega_i}}.
\]
{\rm(b)} Let
\[
\mathcal C_2^* =\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}}dt.
\]
Assume that $H_{\Phi,\vec{A}}$ is a bounded operator from $L^{q_1(\cdot)}_{\omega_1}\times\cdots\times L^{q_m(\cdot)}_{\omega_m}$ to $L^{ q(\cdot)}_{\omega}$ and the following condition is satisfied:
\begin{equation}\label{dangthucVarLeb}
\dfrac{1}{q_{1-}}+ \dfrac{1}{q_{2-}}+\cdots+\dfrac{1}{q_{m-}}=\dfrac{1}{q_+}.
\end{equation}
Then, $\mathcal C_2^*$ is finite. Moreover,
\[
\big\|H_{\Phi,\vec A}\big\|_{L^{q_1(\cdot)}_{\omega_1}\times\cdots\times L^{q_m(\cdot)}_{\omega_m} \to L^{ q(\cdot)}_{\omega}}\gtrsim \mathcal C_2^*.
\]
\end{theorem}
\begin{proof}
We begin with the proof for the case ${\rm(a)}$. From (\ref{DKVarLeb}), by using Theorem \ref{theoembed} again, we have
\begin{equation}\label{nhungVarLeb2}
\big\|f\big\|_{L^{q_i(A_i^{-1}(t).)}_{\omega_i}}\lesssim \big\|1\big\|_{L^{r_{1i}(t,\cdot)}}.\big\|f\big\|_{L^{q_i(.)}_{\omega_i}},\,\textit{\rm for all}\, f \in L^{q_i(A_i^{-1}(t).)}_{\omega_i}.
\end{equation}
On the other hand, by (\ref{detA}) and (\ref{DK1.1}), for $i = 1, 2, ..., m$, we find
\begin{equation}\label{cAiq+-}
c_{A_i,q_i,\gamma_i}(t)\lesssim \max\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}.
\end{equation}
By the similar arguments as Theorem \ref{TheoremVarLebesgue}, by (\ref{nhungVarLeb2}) and (\ref{cAiq+-}), we also obtain
\[
\big\|H_{\Phi,\vec A}(\vec f)\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \mathcal C_2\prod\limits_{i=1}^m\big\|f_i\big\|_{L^{q_i(\cdot)}_{\omega_i}}.
\]
Now, for the case ${\rm(b)}$, we choose the functions $f_i$, for all $i=1,...,m$, as follows:
\[{f_i}(x) =
\begin{cases}
0,\,\,\;\;\;\;\;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\rm if }\, | x | < \rho_{\vec A}^{ - 1},&\\
{| x |^{ - \frac{{n}}{{{q_i(x)}}}-\gamma_i - \varepsilon }},\,{\rm otherwise.}&
\end{cases}
\]
This immediately implies that $\big\|f_i\big\|_{L^{q_i(\cdot)}_{\omega_i}} >0$. Besides that, we also compute
\begin{eqnarray}
F_{q_i}(f_i\omega_i)&=&\int\limits_{|x|\geq\rho_{\vec A}^{-1}}{|x|^{-n-\varepsilon q_i(x)}}dx=\int\limits_{\rho_{\vec A}^{-1}}^{+\infty}\int\limits_{S^{n-1}}{r^{-\varepsilon q_i(rx')-1}}
d\sigma(x')dr\nonumber
\\
&=& \int\limits_{\rho_{\vec A}^{-1}}^{1}\int\limits_{S^{n-1}}{r^{-\varepsilon q_i(rx')-1}}d\sigma(x')dr
+ \int\limits_{1}^{+\infty}\int\limits_{S^{n-1}}{r^{-\varepsilon q_i(rx')-1}}d\sigma(x')dr.
\nonumber
\end{eqnarray}
Thus, $F_{q_i}(f_i\omega_i)$ is dominated by
\[
\int\limits_{\rho_{\vec A}^{-1}}^{1}\int\limits_{S^{n-1}}{r^{-1-\varepsilon q_{i+}}}d\sigma(x')dr
+ \int\limits_{1}^{+\infty}\int\limits_{S^{n-1}}{r^{-1-\varepsilon q_{i-}}}d\sigma(x')dr
\lesssim \Big((\rho_{\vec A}^{\varepsilon q_{i+}}-1)q_{i-}+q_{i+}\Big){\varepsilon}^{-1}.
\]
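Here we have used the elementary evaluations
\[
\int\limits_{\rho_{\vec A}^{-1}}^{1}{r^{-1-\varepsilon q_{i+}}}dr=\dfrac{\rho_{\vec A}^{\varepsilon q_{i+}}-1}{\varepsilon q_{i+}}\quad\textit{\rm and}\quad \int\limits_{1}^{+\infty}{r^{-1-\varepsilon q_{i-}}}dr=\dfrac{1}{\varepsilon q_{i-}},
\]
together with the finiteness of $\sigma(S^{n-1})$.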
From the above estimation, by (\ref{maxminvar}), we get
\begin{equation}\label{normfiVarLeb}
\big\|f_i\big\|_{L^{q_i(\cdot)}_{\omega_i}} \lesssim \Big((\rho_{\vec A}^{\varepsilon q_{i+}}-1)q_{i-}+q_{i+}\Big)^{\frac{1}{q_{i-}}}{\varepsilon}^{\frac{-1}{q_{i-}}}.
\end{equation}
Next, let us denote two useful sets as follows
\[
{S_x} = \bigcap\limits_{i = 1}^m {\big\{ {t \in {\mathbb R^n}: | {{A_i}(t)}x| \geq {\rho^{-1}_{\vec A}} \big\}}},
\]
and
\[
U = \big\{ {t \in {\mathbb R^n}:\big\|A_i(t)\big\| \geq \varepsilon,\,\text{for all }\, i = 1,...,m} \big\}.\]
Then, we claim that
\begin{equation}\label{USx'Leb}
U \subset S_x,\,\,\text{for all}\,\, x\in\mathbb R^n\setminus B(0,\varepsilon^{-1}).
\end{equation}
Indeed, let $t\in U$. Then $\big\|A_i(t)\big\|.|x|\geq 1$ for all $x\in\mathbb R^n\setminus B(0,\varepsilon^{-1})$. Therefore, it follows from applying the condition (\ref{DK1}) that
\[
|A_i(t)x|\geq \big\|A_i^{-1}(t)\big\|^{-1}|x|\geq\rho_{\vec A}^{-1},
\]
which finishes the proof of the relation (\ref{USx'Leb}).
\\
Now, by letting $x\in \mathbb R^n\setminus B(0,\varepsilon^{-1})$ and using (\ref{USx'Leb}), we see that
\[
{H_{\Phi ,\vec A }}(\vec{f)} (x) \geq \int\limits_{S_{x}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{| {{A_i}(t)x}|}^{- \frac{{n}}{{{q_i(x)}}} - \gamma_i-\varepsilon }}} dt \geq \int\limits_{U} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{| {{A_i}(t)x}|}^{- \frac{{n}}{{{q_i(x)}}} - \gamma_i-\varepsilon }}} dt.
\]
Thus, by (\ref{DK1.2}), we find
\begin{equation}\label{HAf3Leb}
{H_{\Phi ,\vec A }}(\vec{f)} (x) \gtrsim \Big(\int\limits_{U} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{\big\| A_i^{-1}(t)\big\|}^{\frac{{n}}{{{q_i(x)}}}+\gamma_i+\varepsilon }}} dt\Big)|x|^{- \frac{{n}}{{{q(x)}}} - \gamma-m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}(x).
\end{equation}
For convenience, we denote
\[
\Gamma_\varepsilon=\int\limits_{U}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}}\prod\limits_{i=1}^m \big\|A_i^{-1}(t)\big\|^{\varepsilon}\varepsilon^{m\varepsilon}dt.
\]
Hence, by (\ref{HAf3Leb}), we arrive at
\begin{eqnarray}\label{HAf4Leb}
\big\|{H_{\Phi ,\vec A }}(\vec{f)} \big\|_{L^{q(\cdot)}_{\omega}}
&&\gtrsim \varepsilon^{-m\varepsilon}\Gamma_{\varepsilon}.\big\| |\cdot|^{- \frac{{n}}{{{q(\cdot)}}}-\gamma - m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}\big\|_{L^{q(\cdot)}_{\omega}}\nonumber\\
&&=:\varepsilon^{-m\varepsilon}\Gamma_{\varepsilon}.\big\| h\big\|_{L^{q(\cdot)}_{\omega}},
\end{eqnarray}
where $h(x)=|x|^{- \frac{{n}}{{{q(x)}}}-\gamma - m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}(x)$.
Next, we will prove the following result
\begin{equation}\label{chuanfLeb}
\big\| h \big\|_{L^{q(\cdot)}_{\omega}}\gtrsim \varepsilon^{m\varepsilon\frac{q_{+}}{q_{-}}}.\varepsilon^{\frac{-1}{q_+}}.
\end{equation}
Indeed, for $\varepsilon$ sufficiently small such that $\varepsilon^{-1}>1$, we compute
\begin{eqnarray}
F_{q}(h\omega)&=&\int\limits_{|x|\geq \varepsilon^{-1}}{|x|^{-n-m\varepsilon q(x)}}dx= \int\limits_{\varepsilon^{-1}}^{+\infty}\int\limits_{S^{n-1}}{r^{-1-m\varepsilon q(rx')}}
d\sigma(x')dr\nonumber
\\
&\geq& \int\limits_{\varepsilon^{-1}}^{+\infty}\int\limits_{S^{n-1}}{r^{-1-m\varepsilon q_{+}}}
d\sigma(x')dr\gtrsim \varepsilon^{m\varepsilon q_{+}}.\varepsilon^{-1}.\nonumber
\end{eqnarray}
From this, by the inequality (\ref{maxminvar}), we immediately obtain the inequality (\ref{chuanfLeb}).
By writing $\vartheta(\varepsilon)$ as
\[
\vartheta \big(\varepsilon\big) = \dfrac{\varepsilon^{m\varepsilon\frac{q_{+}}{q_{-}}}.\varepsilon^{\frac{-1}{q_+}}}{\prod\limits_{i=1}^{m}\Big((\rho_{\vec A}^{\varepsilon q_{i+}}-1)q_{i-}+q_{i+}\Big)^{\frac{1}{q_{i-}}}{\varepsilon}^{\frac{-1}{q_{i-}}}},
\]
then, by (\ref{HAf4Leb}) and (\ref{chuanfLeb}), we estimate
\begin{equation}\label{HAf5Leb}
\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{L^{q(\cdot)}_{\omega}}\gtrsim \varepsilon^{-m\varepsilon}\vartheta.\Gamma_\varepsilon.\prod\limits_{i=1}^m\big\|f_i\big\|_{L^{q_i(\cdot)}_{\omega_i}}.
\end{equation}
Note that, by letting $\varepsilon$ be sufficiently small and taking $t\in U$, we find
\begin{equation}\label{prodLeb}
\prod\limits_{i=1}^m \big\|A_i^{-1}(t)\big\|^\varepsilon.\varepsilon^{m\varepsilon}\leq \rho_{\vec A}^{m\varepsilon}\lesssim 1.
\end{equation}
By the relation (\ref{dangthucVarLeb}), the function $\varepsilon^{-m\varepsilon}\vartheta(\varepsilon)$ tends to a positive limit as $\varepsilon$ tends to zero. Thus, by (\ref{HAf5Leb}), (\ref{prodLeb}) and the Lebesgue dominated convergence theorem, we also have
\[
\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}}dt<\infty,
\]
which completes the proof of the theorem.
\end{proof}
Next, we discuss the boundedness of the multilinear Hausdorff operators on the product of weighted Morrey-Herz spaces with variable exponent.
\begin{theorem}\label{TheoremVar-MorreyHerz}
Let $\omega_1(x)=|x|^{\gamma_1}$,..., $\omega_m(x)=|x|^{\gamma_m}$, $\omega(x)= |x|^{\gamma}$, $q(\cdot)\in\mathcal P_b(\mathbb R^n)$, $\lambda_1,...,\lambda_m,\zeta >0$, and the hypothesis (\ref{DKVarLebesgue}) in Theorem \ref{TheoremVarLebesgue} hold. Suppose that for all $i =1,...,m $, we have
\begin{equation}\label{Var-MorreyHerzDK3}
\alpha_i(0)-\alpha_{i\infty} \geq 0.
\end{equation}
At the same time, let
\begin{eqnarray}\label{Var-MorreyHerzDK4}
\mathcal C_3&=& \int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_{i}(t,\cdot)}}.{\max\Big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \Big\}}}\times
\nonumber
\\
&&\,\,\,\,\,\,\,\times{\max\Big\{\sum\limits_{r=\Theta_n^*-1}^{0} 2^{r(\lambda_i-\alpha_i(0))},\sum\limits_{r=\Theta_n^*-1}^{0}2^{r(\lambda_i-\alpha_{i\infty})}\Big\}}dt<\infty,
\end{eqnarray}
where $\Theta_n^*=\Theta_n^*(t)$ is the greatest integer satisfying
\[
\mathop{\rm max}\limits_{i=1,...,m}\big\{\|A_i(t)\|.\|A_i^{-1}(t)\|\big\}<2^{-\Theta_n^*},\,\, \text{\rm for a.e. }\, t\in\mathbb R^n.
\]
Then, $H_{\Phi,\vec{A}}$ is a bounded operator from ${M{\mathop K\limits^.}}^{\alpha_1(\cdot),\lambda_1}_{p_1,\zeta q_1(\cdot),\omega_1}\,\times\cdots\times {M{\mathop K\limits^.}}^{\alpha_m(\cdot),\lambda_m}_{p_m,\zeta q_m(\cdot),\omega_m}$ to ${M{\mathop K\limits^.}}^{\alpha(\cdot),\lambda}_{p, q(\cdot),\omega}$.
\end{theorem}
\begin{proof}
By estimating as (\ref{HALqLeb}) above, we have
\begin{equation}\label{HALq}
\big\|H_{\Phi,\vec{A}}(\vec{f})\chi_k\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\prod_{i=1}^{m}\big\|f_i(A_i(t).)\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}}dt.
\end{equation}
Let us now fix $i\in \big\{1,2,...,m\big\}$. Since $\|A_i(t)\|\neq 0$, there exists an integer $\ell_i=\ell_i(t)$ such that $2^{\ell_i-1}<\|A_i(t)\|\leq 2^{\ell_i}$. For simplicity of notation, we write $\rho_{\vec A}^* (t) =\mathop{\rm max}\limits_{i=1,...,m}\big\{\big\|A_i(t)\big\|.\big\|A_i^{-1}(t)\big\|\big\}$. Then, by letting $y=A_i(t).z$ with $z\in C_k$, it follows that
\[
| y | \geq {\left\| {A_i^{ - 1}(t)} \right\|^{ - 1}}\left| z \right|\geq \frac{{{2^{{\ell_i} + k - 2}}}}{{{\rho _{\vec{A}}^*}}} > {2^{k + {\ell_i} - 2 + {\Theta_n^*}}},
\]
and
\[
| y |\leq \left\| {{A_i}(t)} \right\|.\left| z \right| \leq {2^{{\ell_i} + k}}.
\]
These estimations can be used to get
\begin{equation}\label{AiCk}
{A_i}(t).{C_k} \subset \left\{ {z \in {\mathbb R^n}:{2^{k + {\ell_i} - 2 + {\Theta _n^*}}} < \left| z \right| \leq {2^{k + {\ell_i}}}} \right\}.
\end{equation}
Now, we need to prove that
\begin{equation}\label{fAchikLq}
\big\|f_i(A_i(t).)\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}\lesssim c_{A_i,q_i,\gamma_i}(t)\big\|1 \big\|_{L^{r_i(t,\cdot)}}.\sum\limits_{r=\Theta_n^*-1}^{0} \big\|f_i\chi_{k+\ell_i+r}\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}.
\end{equation}
Indeed, for $\eta>0$, by (\ref{AiCk}), we find
\begin{eqnarray}
&&\int\limits_{\mathbb R^n}{\left(\dfrac{\big|f_i(A_i(t)x)\big|\chi_k(x)\omega_i(x)}{\eta}\right)^{q_i(x)}dx}\nonumber
\\
&&=\int\limits_{A_i(t)C_k}{\left(\dfrac{\big|f_i(z)\big|\big|A_i^{-1}(t)z\big|^{\gamma_i}}{\eta}\right)^{q_i(A_i^{-1}(t)z)}\big|
\textit{\rm det} A_i^{-1}(t)\big|dz}\nonumber
\\
&&\leq \big|\textit{\rm det} A_i^{-1}(t)\big|\int\limits_{A_i(t)C_k}\left(\frac{{\rm max}\big\{\big\|A_i^{-1}(t)\big\|^{\gamma_i},\big\|A_i(t)\big\|^{-\gamma_i}\big\}\big| f_i(z)\big|\omega_i(z)}{\eta}\right)^{q_i(A_i^{-1}(t).z)}dz.
\nonumber
\end{eqnarray}
So, we have that
\begin{eqnarray}\label{fAeta}
&&\int\limits_{\mathbb R^n}{\left(\dfrac{\big|f_i(A_i(t)x)\chi_k(x)\big|\omega_i(x)}{\eta}\right)^{q_i(x)}dx}
\nonumber
\\
&&\leq\int\limits_{\mathbb R^n}\left(\frac{c_{A_i,q_i,\gamma_i}(t)\big|\sum\limits_{r=\Theta_n^*-1}^0 f_i(z)\chi_{k+\ell_i+r}(z)\big|\omega_i(z)}{\eta}\right)^{q_i(A_i^{-1}(t).z)}dz.
\nonumber
\end{eqnarray}
Therefore, by the definition of Lebesgue space with variable exponent, it is easy to get that
\[
\big\|f_i(A_i(t).)\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}\leq c_{A_i,q_i,\gamma_i}(t).\sum\limits_{r=\Theta_n^*-1}^{0} \big\|f_i\chi_{k+\ell_i+r}\big\|_{L^{q_i(A_i^{-1}(t)\cdot)}_{\omega_i}},
\]
which, together with (\ref{nhung2}), completes the proof of inequality (\ref{fAchikLq}). Now, it immediately follows from (\ref{HALq}) and (\ref{fAchikLq}) that
\begin{eqnarray}\label{LemmaVeH}
\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|_{L^{q(\cdot)}_{\omega}}&\lesssim & \int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_i(t,\cdot)}}}\times
\nonumber
\\
&&\,\,\,\,\,\,\,\,\times\prod\limits_{i=1}^{m}\Big(\sum\limits_{r=\Theta_n^*-1}^{0} \big\|f_i\chi_{k+\ell_i+r}\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}\Big)dt.
\end{eqnarray}
Consequently, by applying Lemma \ref{lemmaVE} in Section 2, we get
\begin{eqnarray}\label{HlemmaVe}
\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|_{L^{q(\cdot)}_{\omega}}&\lesssim& \int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\mathcal L(t).\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_i(t,\cdot)}}}\times\nonumber
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\prod_{i=1}^{m}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}dt,
\end{eqnarray}
where
\[
\mathcal L(t) =\prod\limits_{i=1}^{m}\Big(2^{(k+\ell_i)(\lambda_i-\alpha_i(0))}\sum\limits_{r=\Theta_n^*-1}^{0} 2^{r(\lambda_i-\alpha_i(0))}+ 2^{(k+\ell_i)(\lambda_i-\alpha_{i\infty})}\sum\limits_{r=\Theta_n^*-1}^{0}2^{r(\lambda_i-\alpha_{i\infty})}\Big).
\]
Since $2^{\ell_i-1}<\big\|A_i(t)\big\|\leq 2^{\ell_i}$ for all $i=1,...,m$, it follows that
\[
2^{\ell_i(\lambda_i-\alpha_i(0))}+2^{\ell_i(\lambda_i-\alpha_{i\infty})}\lesssim \max\big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \big\}.
\]
Thus, we can estimate $\mathcal L$ as follows
\begin{eqnarray}
\mathcal L(t)&\lesssim &\prod\limits_{i=1}^{m} {\max\Big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \Big\}}\times\nonumber
\\
&&\times\Big\{2^{k(\lambda_i-\alpha_i(0))}\sum\limits_{r=\Theta_n^*-1}^{0} 2^{r(\lambda_i-\alpha_i(0))}+ 2^{k(\lambda_i-\alpha_{i\infty})}\sum\limits_{r=\Theta_n^*-1}^{0}2^{r(\lambda_i-\alpha_{i\infty})}\Big\}\nonumber
\\
&\lesssim & \prod\limits_{i=1}^{m} {\max\Big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \Big\}}\times\nonumber
\\
&&\times{\max\Big\{\sum\limits_{r=\Theta_n^*-1}^{0} 2^{r(\lambda_i-\alpha_i(0))},\sum\limits_{r=\Theta_n^*-1}^{0} 2^{r(\lambda_i-\alpha_{i\infty})}\Big\}}\Big\{2^{k(\lambda_i-\alpha_i(0))}+ 2^{k(\lambda_i-\alpha_{i\infty})}\Big\}.
\nonumber
\end{eqnarray}
From this, by (\ref{HlemmaVe}), it is not difficult to show that
\begin{equation}\label{HAchikLq}
\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|_{L^{q(\cdot)}_{\omega}}\lesssim \mathcal C_3\prod_{i=1}^{m}\big(2^{k(\lambda_i -\alpha_i(0))}+2^{k(\lambda_i -\alpha_{i\infty})}\big).\prod_{i=1}^{m}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\end{equation}
On the other hand, using Proposition 2.5 in \cite{LZ2014}, we get
\begin{equation}\label{HAfMK}
\big\| H_{\Phi,\vec A}(\vec f)\big\|_{M{\mathop K\limits^.}^{\alpha(\cdot), \lambda}_{p,q(\cdot),\omega}} \lesssim \max\big\{
\sup\limits_{k_0<0,k_0\in\mathbb Z}E_1, \sup\limits_{k_0\geq 0,k_0\in\mathbb Z}(E_2+E_3)
\big\},
\end{equation}
where
\begin{eqnarray}
&&E_1=2^{-k_0\lambda}\Big(\sum\limits_{k=-\infty}^{k_0}2^{k\alpha(0)p}\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|^p_{L^{q(\cdot)}_{\omega}}\Big)^{\frac{1}{p}},\nonumber
\\
&&E_2 = 2^{-k_0\lambda}\Big(\sum\limits_{k=-\infty}^{-1}2^{k\alpha(0)p}\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|^p_{L^{q(\cdot)}_{\omega}}\Big)^{\frac{1}{p}},\nonumber
\\
&&E_3=2^{-k_0\lambda}\Big(\sum\limits_{k=0}^{k_0}2^{k\alpha_\infty p}\big\|H_{\Phi,\vec A}(\vec f)\chi_k\big\|^p_{L^{q(\cdot)}_{\omega}}\Big)^{\frac{1}{p}}\nonumber.
\end{eqnarray}
In order to complete the proof, it remains to estimate the upper bounds for $E_1, E_2$ and $E_3$. Note that,
using (\ref{HAchikLq}), $E_1$ is dominated by
\[
\mathcal C_3.2^{-k_0\lambda}\Big(\sum\limits_{k=-\infty}^{k_0}2^{k\alpha(0)p}\Big(\prod_{i=1}^{m}\big(2^{k(\lambda_i -\alpha_i(0))}+2^{k(\lambda_i -\alpha_{i\infty})}\big).\prod_{i=1}^{m}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}\Big)^p\Big)^{\frac{1}{p}}.
\]
This implies that
\begin{equation}\label{E1MVar}
E_1\lesssim \mathcal C_3.\mathcal T_0.\prod_{i=1}^{m}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}},
\end{equation}
where $\mathcal T_0= 2^{-k_0\lambda}.\Big(\sum\limits_{k=-\infty}^{k_0}2^{k\alpha(0)p}\prod\limits_{i=1}^{m}\big(2^{k(\lambda_i -\alpha_i(0))p}+2^{k(\lambda_i -\alpha_{i\infty})p}\big)\Big)^{\frac{1}{p}}$. By some simple computations, we obtain
\begin{eqnarray}
\mathcal T_0 &=& 2^{-k_0\lambda}\Big(\sum\limits_{k=-\infty}^{k_0}\prod_{i=1}^{m}\big(2^{k\lambda_i p}+2^{k(\lambda_i -\alpha_{i\infty}+\alpha_i(0))p}\big)\Big)^{\frac{1}{p}}\nonumber
\\
&\lesssim& \Big(\prod_{i=1}^{m} 2^{-k_0\lambda_i p}\Big\{ \sum\limits_{k=-\infty}^{k_0} 2^{k\lambda_i p}+ \sum\limits_{k=-\infty}^{k_0}2^{k(\lambda_i -\alpha_{i\infty}+\alpha_i(0))p}\Big\}\Big)^{\frac{1}{p}}.\nonumber
\end{eqnarray}
Hence, by assuming that $\lambda_i >0$, for all $i=1,...,m$ and (\ref{Var-MorreyHerzDK3}), we see at once that
\begin{eqnarray}
\mathcal T_0 &\lesssim& \Big(\prod_{i=1}^{m}2^{-k_0\lambda_i p}\Big\{\dfrac{2^{k_0\lambda_i p}}{1-2^{-\lambda_ip}}+ \dfrac{2^{k_0(\lambda_i -\alpha_{i\infty}+\alpha_i(0))p}}{1- 2^{-(\lambda_i -\alpha_{i\infty}+\alpha_i(0))p}}\Big\}\Big)^{\frac{1}{p}}\nonumber
\\
&\lesssim &\prod_{i=1}^{m}\Big\{\dfrac{1}{1-2^{-\lambda_ip}}+ \dfrac{2^{k_0(-\alpha_{i\infty}+\alpha_i(0))}}{1- 2^{-(\lambda_i -\alpha_{i\infty}+\alpha_i(0))p}}\Big\}\lesssim \prod\limits_{i=1}^{m}\Big(1+ 2^{{k_0}\big(\alpha_i(0)-\alpha_{i\infty}\big)}\Big).\nonumber
\end{eqnarray}
Then, from (\ref{E1MVar}), we have
\begin{equation}\label{E1}
E_1\lesssim \mathcal C_3\prod\limits_{i=1}^{m}\Big(1+ 2^{{k_0}\big(\alpha_i(0)-\alpha_{i\infty}\big)}\Big).\prod\limits_{i=1}^{m}\big\| f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\end{equation}
By estimating in the same way as $E_1$, we also get
\begin{equation}\label{E2}
E_2\lesssim \mathcal C_3.2^{-k_0\lambda}\prod\limits_{i=1}^{m}\big\| f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\end{equation}
For $i=1,...,m$, we denote
\[
{K_i} =
\begin{cases}
{2^{{k_0}({\alpha _{i\infty} } - \alpha_i(0))}} + {\left| {{2^{\lambda_i p}} - 1} \right|^{ - \frac{1}{p}}} + {2^{ - {k_0}\lambda_i }},\,\textit{\rm if}\,\,\lambda_i + {\alpha_{i\infty}} - \alpha_i(0) \ne 0,&\\
{2^{ - {k_0}\lambda_i }}{({k_0} + 1)^{\frac{1}{p}}} + {\left| {{2^{\lambda_i p}} - 1} \right|^{ - \frac{1}{p}}},\,{\rm otherwise.}&
\end{cases}
\]
Then, we may show that
\begin{equation}\label{E3MVar}
E_3\lesssim \mathcal C_3\Big(\prod\limits_{i=1}^m K_i\Big).\prod\limits_{i=1}^{m}\big\| f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\end{equation}
The proof of inequality (\ref{E3MVar}) is not difficult, but for the convenience of the reader, we briefly give it here. By employing (\ref{HAchikLq}) again, we have
\begin{equation}\label{E3MVar1}
E_3 \lesssim \mathcal C_3.\mathcal T_\infty .\prod_{i=1}^{m}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}},
\end{equation}
where $\mathcal T_\infty= 2^{-k_0\lambda}\Big(\sum\limits_{k=0}^{k_0}2^{k\alpha_\infty p}\prod\limits_{i=1}^{m}\big(2^{k(\lambda_i -\alpha_i(0))p}+2^{k(\lambda_i -\alpha_{i\infty})p}\big)\Big)^{\frac{1}{p}}$. By a similar argument as $\mathcal T_0$, we also get
\begin{eqnarray}
\mathcal T_\infty &\lesssim& \prod_{i=1}^{m} 2^{-k_0\lambda_i}\Big( \sum\limits_{k=0}^{k_0} 2^{k\lambda_i p}+ \sum\limits_{k=0}^{k_0}2^{k(\lambda_i +\alpha_{i\infty}-\alpha_i(0))p}\Big)^{\frac{1}{p}}\equiv \prod\limits_{i=1}^{m} \mathcal T_{i,\infty}.
\nonumber
\end{eqnarray}
In the case $\lambda_i+\alpha_{i\infty}-\alpha_i(0)\neq 0$, we deduce that
\begin{eqnarray}
\mathcal T_{i,\infty}&\leq& 2^{-k_0\lambda_i}\Big( \dfrac{2^{k_0\lambda_i p}-1}{2^{\lambda_i.p}-1}+\dfrac{2^{k_0(\lambda_i +\alpha_{i\infty}-\alpha_i(0))p}-1}{2^{(\lambda_i +\alpha_{i\infty}-\alpha_i(0))p}-1}\Big)^{\frac{1}{p}} \nonumber
\\
&\lesssim & {2^{{k_0}({\alpha _{i\infty} } - \alpha_i(0))}} + {\left| {{2^{\lambda_i p}} - 1} \right|^{ - 1/p}} + {2^{ - {k_0}\lambda_i }}\nonumber.
\end{eqnarray}
Otherwise, we have
\[
\mathcal T_{i,\infty} \leq \Big( \dfrac{2^{k_0\lambda_i p}-1}{2^{\lambda_i.p}-1}+(k_0+1)\Big)^{\frac{1}{p}}\lesssim {2^{ - {k_0}\lambda_i }}{({k_0} + 1)^{\frac{1}{p}}} + {\left| {{2^{\lambda_i p}} - 1} \right|^{ - \frac{1}{p}}}.
\]
This implies that $\mathcal T_\infty\lesssim \prod\limits_{i=1}^m K_i.$ Hence, by (\ref{E3MVar1}), we obtain the inequality (\ref{E3MVar}).
From (\ref{HAfMK}) and (\ref{E1})-(\ref{E3MVar}), we conclude that the proof of Theorem \ref{TheoremVar-MorreyHerz} is finished.
\end{proof}
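We note in passing (a simple check) that if every $A_i(t)$ is real orthogonal, then, as in Remark 2, $\big\|A_i(t)\big\|.\big\|A_i^{-1}(t)\big\|\leq n$ for almost every $t$, so $\Theta_n^*$ may be taken independent of $t$. For example, in $\mathbb R^2$ one has $\big\|A_i(t)\big\|.\big\|A_i^{-1}(t)\big\|=2<2^{2}$, and hence $\Theta_n^*=-2$.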
Next, we discuss the interesting case when $\lambda_1 = \cdots = \lambda_m = 0$. Note that in this special case the variable exponent Morrey-Herz spaces reduce to variable exponent Herz spaces. Hence, we also obtain the boundedness of the multilinear Hausdorff operators on the product of weighted Herz spaces with variable exponent as follows.
\begin{theorem}\label{TheoremVarHerz}
Suppose that the hypotheses of Theorem $\ref{TheoremVar-MorreyHerz}$ hold and $\alpha_i(0)=\alpha_{i\infty}$, for all $i=1,...,m$. Let $1\leq p, p_i <\infty$ such that
\begin{equation}\label{lienhieppi}
\frac{1}{p_1}+\cdots+\frac{1}{p_m}=\frac{1}{p}.
\end{equation}
At the same time, let
\begin{eqnarray}\label{DKMorreyHerz}
\mathcal C_4 &=&\int\limits_{{\mathbb R^n}} {{(2 - {\Theta_n^*})^{m - \frac{1}{p}}}\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m} c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_i(t,\cdot)}}}\times
\nonumber
\\
&&\,\,\,\,\,\,\,\,\,\,\,\times{{{\left\| {{A_i}(t)} \right\|}^{{ - {\alpha_i(0)}}}}}\big(\sum\limits_{r=\Theta_n^*-1}^{0}{2^{-r\alpha_i(0)}}\big)dt<\infty.
\end{eqnarray}
Then, $H_{\Phi,\vec{A}}$ is a bounded operator from ${\mathop K\limits^.}_{\zeta q_1(\cdot),{\omega_1}}^{{\alpha _1(\cdot)}, p_1} \times \cdots\times {\mathop K\limits^.}_{\zeta q_m(\cdot),{\omega_m}}^{{\alpha _m(\cdot)}, p_m} $ to ${\mathop K\limits^.}_{q(\cdot),{\omega}}^{{\alpha(\cdot)}, p}.$
\end{theorem}
\begin{proof}
It follows from Proposition 3.8 in \cite{Almeida2012} that
\begin{eqnarray}
&&{\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}}\lesssim \Big(\sum\limits_{k=-\infty}^{-1}{2^{k\alpha(0)p}\big\|{{H_{\Phi ,\vec A }}\big( {\vec f } \big)}\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p}\Big)^{\frac{1}{p}}\nonumber
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+ \Big(\sum\limits_{k=0}^{\infty}{2^{k\alpha_\infty p}\big\|{{H_{\Phi ,\vec A }}\big( {\vec f } \big)}\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p}\Big)^{\frac{1}{p}}.\nonumber
\end{eqnarray}
From this, by $\alpha(0) =\alpha_\infty$, we conclude that
\begin{equation}\label{ptVarHerz}
{\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}}\lesssim \Big(\sum\limits_{k=-\infty}^{\infty}{2^{k\alpha(0)p}\big\|{{H_{\Phi ,\vec A }}\big( {\vec f } \big)}\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p}\Big)^{\frac{1}{p}}.
\end{equation}
For convenience, let us denote by
\[
\mathcal H=\Big(\sum\limits_{k=-\infty}^{\infty}{2^{k\alpha(0)p}\big\|{{H_{\Phi ,\vec A }}\big( {\vec f } \big)}\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p}\Big)^{\frac{1}{p}}.
\]
Next, we need to estimate the upper bound of $\mathcal H$. By (\ref{LemmaVeH}) and using the Minkowski inequality, we get
\begin{eqnarray}\label{H1L1}
\mathcal H &\leq& {\int\limits_{{\mathbb R^n}} {\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_i(t,\cdot)}}}}\times
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\Big\{ {{{\sum\limits_{k = - \infty }^{{\infty}} {{2^{k\alpha(0) p}}\prod\limits_{i = 1}^m {\Big( {\sum\limits_{r = {\Theta _n^*} - 1}^0 {\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{}} } \Big)} } }^p}} \Big\} ^{\frac{1}{p}}dt.
\nonumber
\end{eqnarray}
By (\ref{lienhieppi}) and the H\"{o}lder inequality, it follows that
\begin{eqnarray}\label{HolderlrH1}
&&{\Big\{ {{{\sum\limits_{k = - \infty }^{{\infty}} {{2^{k\alpha(0) p}}\prod\limits_{i = 1}^m {\Big( {\sum\limits_{r = {\Theta _n^*} - 1}^0 {\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}} } \Big)^p} } }}} \Big\}^{\frac{1}{p}}}
\\
&&\,\,\,\,\,\,\,\,\leq \prod\limits_{i = 1}^m {{{\Big\{ {{{\sum\limits_{k = - \infty }^{{\infty}} {{2^{k{\alpha_i(0)}{p_i}}}\Big( {\sum\limits_{r = {\Theta_n^*} - 1}^0 {\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}} } \Big)^{p_i}} }}} \Big\}}^{\frac{1}{{p_i}}}}}.\nonumber
\end{eqnarray}
On the other hand, by $p_i\geq 1$, we have
\[
{\Big( {\sum\limits_{r = {\Theta _n^*} - 1}^0 {\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{}} } \Big)^{{p_i}}} \leq {\left( {2 - {\Theta _n^*}} \right)^{{p_i} - 1}}\sum\limits_{r = {\Theta _n^*} - 1}^0 {\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{{p_i}}}.
\]
Hence, combining (\ref{H1L1}) and (\ref{HolderlrH1}), we obtain
\begin{equation}\label{HAf1Var}
\mathcal H \leq \int\limits_{{\mathbb R^n}} {{(2 - {\Theta_n^*})^{m - \frac{1}{p}}}\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)\big\|1\big\|_{L^{r_i(t,\cdot)}}.\prod\limits_{i = 1}^m {{\mathcal{H}_{i}}} } dt,
\end{equation}
where $\mathcal{H}_{i} = {\sum\limits_{r = {\Theta _n^*} - 1}^0 {{{\Big( {\sum\limits_{k = - \infty }^{{\infty}} {{2^{k{\alpha _i(0)}{p_i}}}\left\| {{f_i}{\chi _{k + {\ell_i} + r}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{{p_i}}} } \Big)}^{^{\frac{1}{{p_i}}}}}} }$ for all $i=1,2,...,m$.
\\
Then, we find
\begin{eqnarray}\label{Hi}
{\mathcal{H}_{i}} &=& {\sum\limits_{r = {\Theta _n^*} - 1}^0 {{{\Big( {\sum\limits_{s = - \infty }^{\infty} {{2^{(s-\ell_i-r){\alpha _i(0)}{p_i}}}\left\| {{f_i}{\chi _{s}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{{p_i}}} } \Big)}^{^{\frac{1}{{p_i}}}}}} }
\nonumber
\\
&=&{\sum\limits_{r = {\Theta _n^*} - 1}^0 { 2^{-(\ell_i+r)\alpha_i(0)}{{\Big( {\sum\limits_{s = - \infty }^{\infty} {{2^{s{\alpha _i(0)}{p_i}}}\left\| {{f_i}{\chi _{s}}} \right\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}^{{p_i}}} } \Big)}^{^{\frac{1}{{p_i}}}}}} }
\nonumber
\\
&\leq& \big(\sum\limits_{r=\Theta_n^*-1}^{0} {2^{-r\alpha_i(0)}}\big).{2^{^{-\ell_i{\alpha_i(0)}}}}{\left\| {{f_i}} \right\|_{{\mathop K\limits^.}_{\zeta q_i(\cdot),\omega_i}^{{\alpha_i(\cdot)}, p_i}}}.
\end{eqnarray}
Since $2^{\ell_i-1}<\left\|A_i(t)\right\|\leq 2^{\ell_i}$, it follows that ${2^{^{{-\ell_i}{\alpha _i(0)}}}} \lesssim {\left\| {{A_i}(t)} \right\|^{- {\alpha_i(0)}}}$. Thus, by (\ref{HAf1Var}) and (\ref{Hi}), we get
\begin{eqnarray}
{\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}}\lesssim \mathcal C_4.\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop K\limits^.}_{\zeta q_i(\cdot),\omega_i}^{{\alpha_i(\cdot)}, p_i}}}},
\nonumber
\end{eqnarray}
which finishes our desired conclusion.
\end{proof}
\vskip 5pt
\hskip-15pt {\bfseries Remark 3.} We would like to give several comments on Theorem \ref{TheoremVar-MorreyHerz} and Theorem \ref{TheoremVarHerz}. If we suppose that
\[
\mathop {\rm ess\,sup}\limits_{t\in {\rm supp}(\Phi)}\big\|A_i(t)\big\|<\infty,\,\textit{\rm for all} \,i=1,...,m,
\]
then we do not need to assume the conditions $\alpha_i(0)-\alpha_{i\infty}\geq 0$ in Theorem \ref{TheoremVar-MorreyHerz} and $\alpha_i(0)=\alpha_{i\infty}$ in Theorem \ref{TheoremVarHerz}.
Indeed, by putting
\[
\beta= \mathop {\rm ess\,sup}\limits_{t\in {\rm supp}(\Phi)\,\textit{\rm and}\,i=1,...,m}\ell_i(t),
\]
and applying Lemma \ref{lemmaVE} in Section 2, we refine the estimation as follows:
\\
In the case $k<\beta$, we get
\[
\big\| f_i\chi_{k+\ell_i+r}\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}\lesssim 2^{(k+\ell_i+r)(\lambda_i-\alpha_i(0))}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\]
In the case $k\geq {\beta} -\Theta_n^*+1$, we have
\[
\big\| f_i\chi_{k+\ell_i+r}\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}\lesssim 2^{(k+\ell_i+r)(\lambda_i-\alpha_{i\infty})}\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\]
Otherwise, we obtain
\[
\big\| f_i\chi_{k+\ell_i+r}\big\|_{L^{\zeta q_i(\cdot)}_{\omega_i}}\lesssim \big(2^{(k+\ell_i+r)(\lambda_i-\alpha_i(0))}+2^{(k+\ell_i+r)(\lambda_i-\alpha_{i\infty})}\big)\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,\zeta q_i(\cdot),\omega_i}}.
\]
Also, the other estimates can be obtained by arguments similar to those in the two theorems above. We therefore omit the details and leave the proofs to the reader.
\begin{theorem}\label{TheoremVarMorreyHerz1}
Suppose that the hypotheses of Theorem \ref{TheoremVar-MorreyHerz} and the hypothesis (\ref{DKVarLeb}) in Theorem \ref{TheoremVarLebesgue1} hold.
\\
{\rm (a)} If
\begin{eqnarray}
\mathcal C_5 &=&\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\max\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{-\lambda_i}}\times\nonumber
\\
&&{{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\max\Big\{\big\|A_i^{-1}(t)\big\|^{\alpha_i(0)}, \big\|A_i^{-1}(t)\big\|^{\alpha_{i\infty}} \Big\}}}\big\|1\big\|_{L^{r_{1i}(t,\cdot)}}dt<\infty,\nonumber
\end{eqnarray}
then
\[
\big\|H_{\Phi,\vec A}(\vec f)\big\|_{M{\mathop K\limits^.}^{\alpha(\cdot), \lambda}_{p, q(\cdot),\omega}}\lesssim \mathcal C_5\prod\limits_{i=1}^{m}\big\| f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i, q_i(\cdot),\omega_i}}.
\]
{\rm (b)} Denote by
\begin{eqnarray}
\mathcal C_5^* &=&\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{-\lambda_i}}\times
\nonumber
\\
&&{{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\min\Big\{\big\|A_i^{-1}(t)\big\|^{\alpha_i(0)+C_0^{\alpha_i}}, \big\|A_i^{-1}(t)\big\|^{\alpha_i(0)-C_0^{\alpha_i}} \Big\}}}dt.\nonumber
\end{eqnarray}
Suppose that $H_{\Phi,\vec{A}}$ is a bounded operator from ${M{\mathop K\limits^.}}^{\alpha_1(\cdot),\lambda_1}_{p_1, q_1(\cdot),\omega_1}\,\times\cdots\times {M{\mathop K\limits^.}}^{\alpha_m(\cdot),\lambda_m}_{p_m, q_m(\cdot),\omega_m}$ to ${M{\mathop K\limits^.}}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}$ and one of the following conditions holds:
\begin{enumerate}
\item[\rm (b1)] $q_{i+}=q_{i-}$, and both $C_0^{\alpha_i}$ and $C_\infty^{\alpha_i}$ are at most $\alpha_i(0)-\alpha_{i\infty}$, for all $i=1,...,m$;
\item[\rm (b2)]$q_{i+}\neq q_{i-}$, $C_0^{\alpha_i} = C_\infty^{\alpha_i}=0$, $\lambda_i=\alpha_i(0)=\alpha_{i\infty}$, for all $i=1,...,m$;
\item[\rm (b3)] $q_{i+}\neq q_{i-}$, both $C_0^{\alpha_i}$ and $C_\infty^{\alpha_i}$ are less than $ \alpha_i(0)-\alpha_{i\infty}$, $C_\infty^{\alpha_i}+C_0^{\alpha_i}\leq C^{\alpha_i}$, $\lambda_i\in [\eta^0_i, \eta^1_i]\cap [\zeta^0_i,\zeta^1_i]$, for all $i=1,...,m$.
\end{enumerate}
Here $C^{\alpha_i}=\dfrac{q_{i-}(\alpha_i(0)-\alpha_{i\infty})(1+\frac{q_{i+}}{q_{i-}})}{q_{i+}}$ and $\eta^0_i, \eta^1_i, \zeta^0_i,\zeta^1_i$ are defined by
\[
\eta^0_i=\dfrac{C_0^{\alpha_i} \frac{q_{i-}}{q_{i+}}-\alpha_i(0)\frac{q_{i-}}{q_{i+}}+\alpha_{i\infty}}{1-\frac{q_{i-}}{q_{i+}}},\eta^1_i=\dfrac{C_0^{\alpha_i} \frac{q_{i+}}{q_{i-}}-\alpha_i(0)\frac{q_{i+}}{q_{i-}}+\alpha_{i\infty}}{1-\frac{q_{i+}}{q_{i-}}},
\]
\[
\zeta^0_i=\dfrac{C_\infty^{\alpha_i} \frac{q_{i+}}{q_{i-}}-\alpha_i(0)+\alpha_{i\infty}\frac{q_{i+}}{q_{i-}}}{\frac{q_{i+}}{q_{i-}}-1}, \zeta^1_i=\dfrac{C_\infty^{\alpha_i} \frac{q_{i-}}{q_{i+}}-\alpha_i(0)+\alpha_{i\infty}\frac{q_{i-}}{q_{i+}}}{\frac{q_{i-}}{q_{i+}}-1}.
\]
Then, we have that $\mathcal C_5^*$ is finite. Furthermore,
\[
\big\|H_{\Phi,\vec A}\big\|_{{M{\mathop K\limits^.}}^{\alpha_1(\cdot),\lambda_1}_{p_1, q_1(\cdot),\omega_1}\,\times\cdots\times {M{\mathop K\limits^.}}^{\alpha_m(\cdot),\lambda_m}_{p_m, q_m(\cdot),\omega_m}\to {M{\mathop K\limits^.}}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}}\gtrsim \mathcal C_5^*.
\]
\end{theorem}
\begin{proof}
Firstly, we prove case ${\rm (a)}$. From (\ref{DK1}), let $\Theta_n$ be the greatest integer such that $\rho_{\vec A} < 2^{-\Theta_n}$. Now, we replace $\Theta_n^*$ by $\Theta_n$ in the proof of Theorem \ref{TheoremVar-MorreyHerz}; the other estimates are obtained in the same way. Then, by (\ref{nhungVarLeb2}), we get
\begin{eqnarray}\label{MHVar1}
&&\big\|H_{\Phi,\vec A}(\vec f)\big\|_{M{\mathop K\limits^.}^{\alpha(\cdot), \lambda}_{p, q(\cdot),\omega}}\lesssim \Big(\int\limits_{\mathbb R^n}{\frac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}c_{A_i,q_i,\gamma_i}(t)}\big\|1\big\|_{L^{r_{1i}(t,\cdot)}}\times
\\
&&{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\max\Big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \Big\}}dt\Big)\prod\limits_{i=1}^{m}\big\| f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i, q_i(\cdot),\omega_i}}.
\nonumber
\end{eqnarray}
By the inequality (\ref{DK1.1}), we have
\begin{eqnarray}
&&\max\Big\{\big\|A_i(t)\big\|^{\lambda_i-\alpha_i(0)}, \big\|A_i(t)\big\|^{\lambda_i-\alpha_{i\infty}} \Big\}\nonumber
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\lesssim \big\|A_i^{-1}(t)\big\|^{-\lambda_i}\max\Big\{\big\|A_i^{-1}(t)\big\|^{\alpha_i(0)}, \big\|A_i^{-1}(t)\big\|^{\alpha_{i\infty}} \Big\}.
\nonumber
\end{eqnarray}
Thus, by (\ref{cAiq+-}), (\ref{MHVar1}) and $\mathcal C_3 <\infty$, we finish the proof for this case.
\vskip 5pt
Next, we prove case ${\rm (b)}$. By choosing
\[
f_i(x)=|x|^{-\alpha_i(x)-\frac{n}{q_i(x)}-\gamma_i+\lambda_i},
\]
it is evident that $\big\| f_i\big\|_{{M{\mathop K\limits^.}}^{\alpha_i(\cdot),\lambda_i}_{p_i, q_i(\cdot),\omega_i}}>0$, for all $i=1,...,m$. Now, we need to show that
\begin{equation}\label{fiMHVar}
\big\| f_i\big\|_{{M{\mathop K\limits^.}}^{\alpha_i(\cdot),\lambda_i}_{p_i, q_i(\cdot),\omega_i}}<\infty,\,\textit{\rm for all }\, i=1,...,m.
\end{equation}
Indeed, we find
\[
F_{q_i}(f_i\omega_i.\chi_k)=\int\limits_{C_k}{|x|^{(\lambda_i-\alpha_i(x))q_i(x)-n}dx}=\int\limits_{2^{k-1}}^{2^k}\int\limits_{S^{n-1}}{r^{(\lambda_i-\alpha_i(r.x'))q_i(r.x')-1}d\sigma(x')}dr.
\]
\textit{Case 1}: $k\leq 0$. Since $\alpha_i \in \mathbf C_\infty^{\rm log}(\mathbb R^n)$, it follows that
\[
- C_{\infty}^{\alpha_i}+ \alpha_{i\infty}\leq \alpha_i(x)\leq \alpha_{i\infty}+ C_{\infty}^{\alpha_i}.
\]
As a consequence, we get
\begin{eqnarray}
F_{q_i}(f_i\omega_i.\chi_k)&\leq& \int\limits_{2^{k-1}}^{2^k}\int\limits_{S^{n-1}}{r^{(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})q_i(r.x')-1}d\sigma(x')}dr \nonumber
\\
&\lesssim &\textit{\rm max}\big\{ 2^{k(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})q_{i-}}, 2^{k(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})q_{i+}} \big\}\nonumber.
\end{eqnarray}
Thus, by (\ref{maxminvar}), we obtain
\begin{eqnarray}\label{fiXkVar}
\big\|f_i\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}&\lesssim & \textit{\rm max}\big\{ 2^{k(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\frac{q_{i-}}{q_{i+}}}, 2^{k(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\frac{q_{i+}}{q_{i-}}} \big\}
\nonumber
\\
&=& 2^{k(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}},
\end{eqnarray}
where
\[
\beta_{i\infty} = \left\{ \begin{array}{l}
\dfrac{q_{i+}}{q_{i-}}, \, \textit{\rm if}\, \lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i} <0,
\\
\\
\dfrac{q_{i-}}{q_{i+}}, \,\textit{\rm otherwise.}
\end{array} \right.
\]
\textit{Case 2}: $k>0$.
Since $\alpha_i \in \mathbf C_0^{\rm log}(\mathbb R^n)$, we have
\begin{equation}\label{C0alphai}
- C_{0}^{\alpha_i}+ \alpha_i(0)\leq \alpha_i(x)\leq \alpha_i(0)+ C_{0}^{\alpha_i}.
\end{equation}
Denote
\[
\beta_{i0} = \left\{ \begin{array}{l}
\dfrac{q_{i+}}{q_{i-}}, \, \textit{\rm if }\, \lambda_i-\alpha_{i}(0)+C_{0}^{\alpha_i} \geq 0,
\\
\\
\dfrac{q_{i-}}{q_{i+}}, \,\textit{\rm otherwise.}
\end{array} \right.
\]
Using (\ref{C0alphai}) and estimating in the same way as in Case 1, we deduce
\begin{equation}\label{fiXkVar1}
\big\|f_i\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}} \lesssim 2^{k(\lambda_i-\alpha_{i}(0)+C_{0}^{\alpha_i})\beta_{i0}}.
\end{equation}
Next, it follows from Proposition 2.5 in \cite{LZ2014} that
\begin{equation}\label{ptfiMHVar}
\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,q_i(\cdot),\omega_i}} \lesssim \max\big\{
\sup\limits_{k_0<0,k_0\in\mathbb Z}E_{i,1}, \sup\limits_{k_0\geq 0,k_0\in\mathbb Z}(E_{i,2}+E_{i,3})
\big\},
\end{equation}
where
\begin{eqnarray}
&&E_{i,1}=2^{-k_0\lambda_i}\Big(\sum\limits_{k=-\infty}^{k_0}2^{k\alpha_i(0)p_i}\big\|f_i\big\|^{p_i}_{L^{q_i(\cdot)}_{\omega_i}}\Big)^{\frac{1}{p_i}},\nonumber
\\
&&E_{i,2} = 2^{-k_0\lambda_i}\Big(\sum\limits_{k=-\infty}^{-1}2^{k\alpha_i(0)p_i}\big\|f_i\big\|^{p_i}_{L^{q_i(\cdot)}_{\omega_i}}\Big)^{\frac{1}{p_i}},\nonumber
\\
&&E_{i,3}=2^{-k_0\lambda_i}\Big(\sum\limits_{k=0}^{k_0}2^{k\alpha_{i\infty} p_i}\big\|f_i\big\|^{p_i}_{L^{q_i(\cdot)}_{\omega_i}}\Big)^{\frac{1}{p_i}}\nonumber.
\end{eqnarray}
Notice that the quantity $\alpha_i(0)+(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}$ is required to be positive (this is proved later), because $E_{i,1}$ is infinite otherwise. Thus, by (\ref{fiXkVar}), we have that $E_{i,1}$ and $E_{i,2}$ are dominated by
\begin{eqnarray}\label{E12iMHVar}
E_{i,1}&\lesssim& 2^{-k_0\lambda_i}\Big(\sum\limits_{k=-\infty}^{k_0}2^{k\alpha_i(0)p_i}2^{kp_i(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}}\Big)^{\frac{1}{p_i}}
\nonumber
\\
&\lesssim& 2^{k_0(\alpha_i(0)+ (\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}-\lambda_i)},
\nonumber
\\
E_{i,2}&\lesssim& 2^{-k_0\lambda_i}.2^{-\alpha_i(0)-(\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}}\lesssim 2^{-k_0\lambda_i}.
\end{eqnarray}
\\
By (\ref{fiXkVar1}), $E_{i,3}$ is controlled by
\begin{eqnarray}
E_{i,3}&\lesssim & 2^{-k_0\lambda_i}+2^{-k_0\lambda_i}\Big(\sum\limits_{k=1}^{k_0}2^{k\alpha_{i\infty} p_i}\big\|f_i\big\|^{p_i}_{L^{q_i(\cdot)}_{\omega_i}}\Big)^{\frac{1}{p_i}}
\nonumber
\\
&\lesssim & 2^{-k_0\lambda_i}+2^{-k_0\lambda_i}\Big(\sum\limits_{k=1}^{k_0}2^{kp_i(\alpha_{i\infty}+(\lambda_i-\alpha_i(0)+C_0^{\alpha_i})\beta_{i0})}\Big)^{\frac{1}{p_i}}
\nonumber
\\
&\lesssim & \left\{ \begin{array}{l}
2^{-k_0\lambda_i}(k_0^{\frac{1}{p}}+1), \,\textit{\rm if}\,\, \alpha_{i\infty}+(\lambda_i-\alpha_i(0)+C_0^{\alpha_i})\beta_{i0} =0,
\\
\\
2^{-k_0\lambda_i}+2^{-k_0(\lambda_i-\alpha_{i\infty}-(\lambda_i-\alpha_i(0)+C_0^{\alpha_i})\beta_{i0})},\,\textit{\rm otherwise}.
\end{array} \right.
\nonumber
\end{eqnarray}
This implies that
\begin{equation}\label{Ei3MHVar}
E_{i,3} \lesssim 2^{-k_0\lambda_i}(k_0^{\frac{1}{p}}+1)+ 2^{-k_0(\lambda_i-\alpha_{i\infty}-(\lambda_i-\alpha_i(0)+C_0^{\alpha_i})\beta_{i0})}.
\end{equation}
For convenience, we set
\begin{equation}\label{DKfi}
\left\{ \begin{array}{l}
\theta_{i0}= \lambda_i-\alpha_{i\infty}-(\lambda_i-\alpha_i(0)+C_0^{\alpha_i})\beta_{i0},
\\
\theta_{i\infty}= \alpha_i(0)+ (\lambda_i-\alpha_{i\infty}-C_{\infty}^{\alpha_i})\beta_{i\infty}-\lambda_i.
\end{array} \right.
\end{equation}
Combining (\ref{ptfiMHVar})-(\ref{DKfi}), we get that
\[
\big\|f_i\big\|_{M{\mathop K\limits^.}^{\alpha_i(\cdot), \lambda_i}_{p_i,q_i(\cdot),\omega_i}} \lesssim \max\big\{
\sup\limits_{k_0<0,k_0\in\mathbb Z}2^{k_0\theta_{i\infty}}, \sup\limits_{k_0\geq 0,k_0\in\mathbb Z}\big(2^{-k_0\lambda_i}(k_0^{\frac{1}{p}}+1)+2^{-k_0\theta_{i0}}\big)
\big\}.
\]
From the above estimation, we will finish the proof of (\ref{fiMHVar}) if the following result can be proved
\begin{equation}\label{DKfi*}
\theta_{i0}\geq 0\,\textit{\rm and}\,\,\theta_{i\infty}\geq 0.
\end{equation}
In order to do this, let us consider three cases as follows.
\vskip 5pt
\textit{Case b1}. By $q_{i+}=q_{i-}$, we have $\beta_{i0}=\beta_{i\infty}=1$. So, by the assumptions on $C_0^{\alpha_i}$ and $C_\infty^{\alpha_i}$, it is easy to obtain the desired result (\ref{DKfi*}).
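Explicitly, with $\beta_{i0}=\beta_{i\infty}=1$, the quantities in (\ref{DKfi}) reduce to
\[
\theta_{i0}=\alpha_i(0)-\alpha_{i\infty}-C_0^{\alpha_i}\quad\textit{\rm and}\quad \theta_{i\infty}=\alpha_i(0)-\alpha_{i\infty}-C_{\infty}^{\alpha_i},
\]
both of which are non-negative by hypothesis.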
\vskip 5pt
\textit{Case b2}. In this case, we find $\theta_{i0}=\theta_{i\infty}=0$. It follows immediately that the result (\ref{DKfi*}) is true.
\vskip 5pt
\textit{Case b3}. Because both $C_0^{\alpha_i}$ and $C_\infty^{\alpha_i}$ are less than $ \alpha_i(0)-\alpha_{i\infty}$, the intervals
$[\eta^0_i, \eta^1_i]$ and $[\zeta^0_i,\zeta^1_i]$ are nonempty. Also, we obtain
\begin{equation}\label{empty}
\alpha_i(0)-C_{0}^{\alpha_i}\in [\eta^0_i, \eta^1_i]\,\textit{\rm and}\,\,\alpha_{i\infty}+C_\infty^{\alpha_i}\in [\zeta^0_i,\zeta^1_i].
\end{equation}
From $C_\infty^{\alpha_i}+C_0^{\alpha_i}\leq C^{\alpha_i}$, it follows that $\eta^1_i\geq \zeta^0_i\,\textit{\rm and}\,\,\zeta^1_i\geq \eta^0_i.$
Hence, the set $[\eta^0_i, \eta^1_i]\cap [\zeta^0_i,\zeta^1_i]$ is also nonempty. Thus, by (\ref{empty}), we observe that
\begin{eqnarray}
[\eta^0_i, \eta^1_i]\cap [\zeta^0_i,\zeta^1_i]
&=&\Big([\eta^0_i,\alpha_i(0)-C_{0}^{\alpha_i})\cup[\alpha_i(0)-C_{0}^{\alpha_i}, \eta^1_i]\Big)\nonumber
\\
&&\,\,\,\,\cap \Big([\zeta^0_i,\alpha_{i\infty}+C_\infty^{\alpha_i})\cup[\alpha_{i\infty}+C_\infty^{\alpha_i},\zeta^1_i]\Big).
\nonumber
\end{eqnarray}
Using the above decomposition, by a direct calculation and the definitions of $\beta_{i0}$ and $\beta_{i\infty}$, we have that
\[
\lambda_i\in [\eta^0_i, \eta^1_i]\cap [\zeta^0_i,\zeta^1_i]\Leftrightarrow (\ref{DKfi*})\,\,\, \text {\rm holds}.
\]
This proves the desired estimate (\ref{fiMHVar}).
Combining (\ref{DK1.2}) and (\ref{C0alphai}), we obtain
\begin{eqnarray}
H_{\Phi,\vec A}(\vec f)(x)&=&\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}{|A_i(t)x|^{-\alpha_i(x)-\frac{n}{q_i(x)}-\gamma_i+\lambda_i}}dt}\nonumber
\\
&\gtrsim & \Big(\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}{\big\|A_i^{-1}(t)\big\|^{\alpha_i(x)+\frac{n}{q_i(x)}+\gamma_i-\lambda_i}}dt}\Big).|x|^{-\alpha(x)-\frac{n}{q(x)}-\gamma+\lambda}\nonumber
\\
&\gtrsim & \mathcal C_5^*.|x|^{-\alpha(x)-\frac{n}{q(x)}-\gamma+\lambda}\nonumber.
\end{eqnarray}
From this, because of (\ref{fiMHVar}) and assuming that $H_{\Phi,\vec{A}}$ is a bounded operator,
we conclude
\[
\big\|H_{\Phi,\vec{A}}\big\|_{{M{\mathop K\limits^.}}^{\alpha_1(\cdot),\lambda_1}_{p_1, q_1(\cdot),\omega_1}\,\times\cdots\times {M{\mathop K\limits^.}}^{\alpha_m(\cdot),\lambda_m}_{p_m, q_m(\cdot),\omega_m}\to {M{\mathop K\limits^.}}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}}\gtrsim\mathcal C_5^*\frac{\big\||\cdot|^{-\alpha(\cdot)-\frac{n}{q(\cdot)}-\gamma+\lambda}\big\|_{{M{\mathop K\limits^.}}^{\alpha(\cdot),\lambda}_{p,q(\cdot),\omega}}}{\prod\limits_{i=1}^m \big\|f_i\big\|_{{M{\mathop K\limits^.}}^{\alpha_i(\cdot),\lambda_i}_{p_i, q_i(\cdot),\omega_i}}} .
\]
This implies the desired assertion.
\end{proof}
\begin{theorem}\label{TheoremVarHerz1}
Suppose that the assumptions of Theorem \ref{TheoremVarHerz} and the hypothesis (\ref{DKVarLeb}) in Theorem \ref{TheoremVarLebesgue1} hold.
\\
{\rm (a)} If
\[
\mathcal C_6 =\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\max\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{\alpha_{i}(0)}}\big\|1\big\|_{L^{r_{1i}(t,\cdot)}}dt<\infty,
\]
then $H_{\Phi,\vec{A}}$ is a bounded operator from ${\mathop K\limits^.}_{q_1(\cdot),{\omega_1}}^{{\alpha _1(\cdot)}, p_1} \times \cdots\times {\mathop K\limits^.}_{q_m(\cdot),{\omega_m}}^{{\alpha _m(\cdot)}, p_m} $ to ${\mathop K\limits^.}_{q(\cdot),{\omega}}^{{\alpha(\cdot)}, p}.$
\\
\\
{\rm (b)} Denote by
\[
\mathcal C_6^* = \left\{ \begin{array}{l}
\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\big\|A_i^{-1}(t)\big\|^{\alpha_i(0)+\frac{n}{q_{i}}+\gamma_i}}dt,\,\textit{\rm if}\,\,q_{i+}=q_{i-} \,\,\textit{\rm for all}\,\,i=1,...,m,
\\
\\
\int\limits_{\mathbb R^n}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{\|\alpha_i\|_{L^{\infty}}}}dt,\,\textit{\rm otherwise.}
\end{array} \right.\]
Let $H_{\Phi,\vec{A}}$ be a bounded operator from ${\mathop K\limits^.}_{q_1(\cdot),{\omega_1}}^{{\alpha _1(\cdot)}, p_1} \times \cdots\times {\mathop K\limits^.}_{q_m(\cdot),{\omega_m}}^{{\alpha _m(\cdot)}, p_m} $ to ${\mathop K\limits^.}_{q(\cdot),{\omega}}^{{\alpha(\cdot)}, p}$ and one of the following conditions is satisfied:
\begin{enumerate}
\item[\rm (b1)] $q_{i-}=q_{i+},\,\textit{\rm for all}\,\,i=1,...,m$;
\item[\rm (b2)] The case $\rm (b1)$ is not true and $\alpha_i(0) < \|\alpha_i\|_{L^\infty}\dfrac{q_{i-}}{q_{i+}},\,\textit{\rm for all}\,\,i=1,...,m$.
\end{enumerate}
Then, we have that $\mathcal C_6^*$ is finite. Furthermore, there exists $C>0$ such that the operator norm of $H_{\Phi,\vec A}$ is not less than $C.\mathcal C_6^*$.
\end{theorem}
\begin{proof}
In the case $\rm (a)$, by combining Theorem \ref{TheoremVarHerz} and the part $\rm (a)$ of Theorem \ref{TheoremVarMorreyHerz1}, we immediately obtain the desired result.
\vskip 5pt
In the case $\rm (b1)$, we have that $q_1(\cdot), ... , q_m(\cdot),$ and $ q(\cdot)$ are constant. Thus, for all $i=1,..., m$,
we will choose the function $f_i$ as follows:
\[{f_i}(x) =
\begin{cases}
0,\,\,\;\;\;\;\;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\rm if }\, | x | < \rho_{\vec A}^{ - 1},&\\
{| x |^{ - {\alpha _i(0)} - \frac{n}{q_i}-\gamma_i - \varepsilon }},\,\,{\rm otherwise.}&
\end{cases}
\]
It is obvious that when $k$ is an integer satisfying $k\leq\frac{-{\rm log}(\rho_{\vec A})}{{\rm log}(2)}$, then $\big\|f_i\chi_k\big\|_{L^{q_i}_{\omega_i}} =0$. Otherwise, we have
\[
\big\|f_i\chi_k\big\|_{L^{q_i}_{\omega_i}}^{q_i}\lesssim 2^{-kq_i(\alpha_i(0)+\varepsilon)}\dfrac{\big(2^{q_i(\alpha_i(0)+\varepsilon)}-1\big)}{{q_i(\alpha_i(0)+\varepsilon)}}.
\]
Hence, by applying Proposition 3.8 in \cite{Almeida2012} again and $\alpha_i(0)=\alpha_{i\infty}$, we find
\begin{eqnarray}
{\left\| f_i\right\|_{{\mathop K\limits^.}_{q_i, {\omega_i}}^{{\alpha_i(\cdot)}, p_i}}} &\lesssim & {\left\{ {\sum\limits_{k =\rho}^\infty {{2^{k{\alpha _i(0)}{p_i}}}} \left\| {{f_i}{\chi _k}} \right\|_{L^{q_i}_{\omega_i}}^{{p_i}}} \right\}^{\frac{1}{{p_i}}}}\nonumber
\\
&\lesssim& {\left(\dfrac{{{2^{{q_i}({\alpha _i(0)} + \varepsilon )}} - 1}}{q_i(\alpha_i(0)+\varepsilon)} \right)^{\frac{1}{{{q_i}}}}}{\left( {\frac{{{2^{\varepsilon {p_i} - \rho\varepsilon {p_i}}}}}{{{2^{\varepsilon {p_i}}} - 1}}} \right)^{\frac{1}{{{p_i}}}}} < \infty,
\nonumber
\end{eqnarray}
where $\rho$ is the smallest integer such that $\rho>\frac{-{\rm log}(\rho_{\vec A})}{{\rm log}(2)}$.
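(For instance, if $\rho_{\vec A}=2$, then $\frac{-{\rm log}(\rho_{\vec A})}{{\rm log}(2)}=-1$ and hence $\rho=0$.)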
Estimating as (\ref{HAf3Leb}), we have
\begin{equation}\label{HAf3}
{H_{\Phi ,\vec A }}(\vec{f)} (x) \gtrsim \Big(\int\limits_{U} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{\big\| A_i^{-1}(t)\big\|}^{\alpha_i(0)+\frac{n}{q_i}+\gamma_i+ \varepsilon }}} dt\Big)|x|^{-\alpha(0)- \frac{{n }}{{{q}}}-\gamma - m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}(x).
\end{equation}
Let $k_0$ be the smallest integer number such that $2^{k_0-1}\geq \varepsilon^{-1}$. Using Proposition 3.8 in \cite{Almeida2012} again, $\alpha_i(0)=\alpha_{i\infty}$ and (\ref{HAf3}), we obtain
\begin{eqnarray}\label{HAf4}
\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{{\mathop K\limits^.}_{q, {\omega}}^{{\alpha(\cdot)}, p}}^p&\gtrsim& \sum\limits_{k = {k_0}}^\infty {{2^{k\alpha(0) p}}} {\Big( {\int\limits_{{2^{k - 1}} < \left| x \right| \leq {2^k}} {{{\left| x \right|}^{ - \varepsilon mq - \alpha(0)q - n}}} }dx \Big)^{\frac{p}{q}}}\times
\nonumber
\\
&&\,\,\times{\Big( {\int\limits_U {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {{A_i}^{ - 1}(t)} \right\|}^{{\alpha_i(0)} + \frac{{n}}{{{q_i}}}+\gamma_i + \varepsilon }}dt} } } \Big)^p}.
\end{eqnarray}
An elementary calculation leads to
\begin{equation}\label{Vanh}
\sum\limits_{k = {k_0}}^\infty {{2^{k\alpha(0)p}}} {\Big( {\int\limits_{{2^{k - 1}} < \left| x \right| \leq {2^k}} {{{\left| x \right|}^{ - \varepsilon mq - \alpha(0)q - n}}} dx} \Big)^{\frac{p}{q}}}\gtrsim\Big( {\frac{{{2^{ - {k_0}\varepsilon mp}}}}{{1 - {2^{ - \varepsilon mp}}}}} \Big){\Big( {\frac{{{2^{q(\varepsilon m + \alpha(0))}} - 1}}{{q(\varepsilon m + \alpha(0))}}} \Big)^{\frac{p}{q}}}.
\end{equation}
For simplicity of notation, we write
\[
\vartheta^* \big(\varepsilon\big) = \dfrac{{\left( {\frac{{{2^{ - {k_0}\varepsilon mp}}}}{{1 - {2^{ - \varepsilon mp}}}}} \right)^{\frac{1}{p}}{{\left( {\frac{{{2^{q(\varepsilon m + \alpha(0))}} - 1}}{{q(\varepsilon m + \alpha(0))}}} \right)}^{\frac{1}{q}}}}}{{\prod\limits_{i = 1}^m {{{\left( {\frac{{{2^{\varepsilon {p_i} - \rho \varepsilon {p_i}}}}}{{{2^{\varepsilon {p_i}}} - 1}}} \right)}^{\frac{1}{{{p_i}}}}}{{\left( {\frac{{{2^{{q_i}({\alpha _i(0)} + \varepsilon )}} - 1}}{{{q_i}({\alpha _i(0)} + \varepsilon )}}} \right)}^{\frac{1}{{{p_i}}}}}} }}.
\]
Therefore, by (\ref{HAf4}) and (\ref{Vanh}), we estimate
\begin{eqnarray}\label{HAf5}
\big\| {{H_{\Phi ,\vec A }}\big( {\vec f } \big)} \big\|_{{\mathop K\limits^.}_{q,\omega}^{{\alpha(\cdot)}, p}}&\gtrsim& \varepsilon^{-m\varepsilon}\vartheta^*.\prod\limits_{i=1}^m\big\|f_i\big\|_{{\mathop K\limits^.}_{q_i,{\omega_i}}^{{\alpha_i(\cdot)}, p_i}}\times
\\
&&\times{\Big( {\int\limits_U {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {{A_i}^{ - 1}(t)} \right\|}^{{\alpha_i(0)} + \frac{n}{q_i}+\gamma_i}}\prod\limits_{i=1}^m\big\|A_i^{-1}(t)\big\|^\varepsilon\varepsilon^{m\varepsilon} dt} } } \Big)}.
\nonumber
\end{eqnarray}
By (\ref{lienhieppi}), it is easy to show that
\[
\mathop {\lim }\limits_{\varepsilon \to {0^ + }} \varepsilon^{-m\varepsilon}\vartheta^*(\varepsilon) = a > 0.
\]
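To see this, note first that $\varepsilon^{-m\varepsilon}=e^{-m\varepsilon{\rm log}\,\varepsilon}\to 1$ as $\varepsilon\to 0^+$. Moreover, as $\varepsilon\to 0^+$,
\[
1-2^{-\varepsilon mp}\sim \varepsilon mp\,{\rm log}\,2,\qquad 2^{\varepsilon p_i}-1\sim\varepsilon p_i\,{\rm log}\,2,\qquad 2^{-k_0\varepsilon mp}\to 1,
\]
the last limit holding because $k_0\varepsilon\lesssim\varepsilon\,{\rm log}_2(\varepsilon^{-1})\to 0$, while all the remaining factors of $\vartheta^*(\varepsilon)$ have finite positive limits. Hence the singular factors contribute $\varepsilon^{-\frac{1}{p}}$ to the numerator and $\varepsilon^{-\sum_{i=1}^m\frac{1}{p_i}}$ to the denominator of $\vartheta^*(\varepsilon)$, and these cancel precisely under the relation $\frac{1}{p}=\sum_{i=1}^m\frac{1}{p_i}$, which we take to be the content of (\ref{lienhieppi}).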
Thus, by (\ref{HAf5}), (\ref{prodLeb}) and the dominated convergence theorem of Lebesgue, we complete the proof for this case.
\vskip 5pt
Next, let us consider the case $\rm (b2)$. We now choose the functions $f_i$ for all $i=1,...,m$ as follows:
\[{f_i}(x) =
\begin{cases}
0,\,\,\;\;\;\;\;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\rm if }\,\, | x | < \rho_{\vec A}^{ - 1},&\\
{|x|^{ - {\|\alpha _i\|_{L^{\infty}}} - \frac{n}{q_i(x)}-\gamma_i - \varepsilon }},\,\,{\rm otherwise.}&
\end{cases}
\]
Thus, we have
\[
F_{q_i}(f_i\omega_i.\chi_k)= \int\limits_{C^k}{|x|^{-(\|\alpha_i\|_{L^\infty}+\varepsilon)q_i(x)-n}dx}=\int\limits_{2^{k-1}}^{2^k}\int\limits_{S^{n-1}}{r^{-(\|\alpha_i\|_{L^\infty}+\varepsilon)q_i(r.x')-1}d\sigma(x')}dr.
\]
Hence, for $k\leq 0$, $F_{q_i}(f_i\omega_i.\chi_k)$ is controlled as follows
\begin{eqnarray}
\int\limits_{2^{k-1}}^{2^k}\int\limits_{S^{n-1}}{r^{-(\|\alpha_i\|_{L^\infty}+\varepsilon)q_{i+}-1}d\sigma(x')}dr \lesssim 2^{-k(\|\alpha_i\|_{L^\infty}+\varepsilon)q_{i+}}. \dfrac{2^{(\|\alpha_i\|_{L^{\infty}}+\varepsilon)q_{i+}}-1}{q_{i+}(\|\alpha_i\|_{L^\infty}+\varepsilon)}.
\nonumber
\end{eqnarray}
As a consequence of the above estimate, by (\ref{maxminvar}), we get
\begin{equation}\label{fichikVarHerz}
\big\|f_i\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}\lesssim (\eta_{i+})^{\frac{1}{q_{i-}}}2^{-k(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i+}}{q_{i-}}},
\end{equation}
where $\eta_{i+}=\dfrac{2^{(\|\alpha_i\|_{L^{\infty}}+\varepsilon)q_{i+}}-1}{q_{i+}(\|\alpha_i\|_{L^\infty}+\varepsilon)}$. On the other hand, for $k\geq 1$, a similar argument gives
\begin{equation}\label{fichikVarHerz1}
\big\|f_i\chi_k\big\|_{L^{q_i(\cdot)}_{\omega_i}}\lesssim (\eta_{i-})^{\frac{1}{q_{i-}}}2^{-k(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}}},
\end{equation}
where $\eta_{i-}=\dfrac{2^{(\|\alpha_i\|_{L^{\infty}}+\varepsilon)q_{i-}}-1}{q_{i-}(\|\alpha_i\|_{L^\infty}+\varepsilon)}$. From the definition of $\rho$ and the assumption $\alpha_i(0)=\alpha_{i\infty}$, by Proposition 3.8 in \cite{Almeida2012} again, we get
\begin{eqnarray}\label{fiVarHerz}
\big\| f_i\big\|_{{\mathop K\limits^.}_{q_i(\cdot),\omega_i}^{{\alpha _i(\cdot)}, p_i}} &\leq & {\left\{ {\sum\limits_{k =\rho}^0 {{2^{k{\alpha _i(0)}{p_i}}}} \left\| {{f_i}{\chi _k}} \right\|_{L^{q_{i}(\cdot)}_{\omega_i}}^{{p_i}}} \right\}^{\frac{1}{{p_i}}}}
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,+{\left\{ {\sum\limits_{k =1}^\infty {{2^{k{\alpha _i(0)}{p_i}}}} \left\| {{f_i}{\chi _k}} \right\|_{L^{q_{i}(\cdot)}_{\omega_i}}^{{p_i}}} \right\}^{\frac{1}{{p_i}}}}.
\nonumber
\nonumber
\end{eqnarray}
Notice that, from the assumption of this case, we deduce
\[
\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\dfrac{q_{i-}}{q_{i+}}<0,\,\textit{\rm for all}\,\,\varepsilon\in \mathbb R^+.
\]
Thus, by (\ref{fichikVarHerz})-(\ref{fiVarHerz}), $\big\| f_i\big\|_{{\mathop K\limits^.}_{q_i(\cdot),\omega_i}^{{\alpha _i(\cdot)}, p_i}}$ is dominated by
\[
\eta_{i+}^{\frac{1}{q_{i-}}}\Big(\dfrac{2^{(-\rho+1)p_i(-\alpha_i(0)+(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i+}}{q_{i-}})}-1}{2^{p_i(-\alpha_i(0)+(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i+}}{q_{i-}})}-1}\Big)^{\frac{1}{p_i}}+ \eta_{i-}^{\frac{1}{q_{i-}}}\Big(\dfrac{2^{p_i(\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}})}}{1-2^{p_i(\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}})}}\Big)^{\frac{1}{p_i}}.
\]
This implies that
\begin{equation}\label{bdtfiVarHez}
\big\| f_i\big\|_{{\mathop K\limits^.}_{q_i(\cdot),\omega_i}^{{\alpha _i(\cdot)}, p_i}}\lesssim
\dfrac{I_i(\varepsilon)}{\Big(1-2^{p_i(\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}})}\Big)^{\frac{1}{p_i}}},
\end{equation}
where
\begin{eqnarray}
I_i(\varepsilon)&=& \eta_{i+}^{\frac{1}{q_{i-}}}\Big(\dfrac{2^{(-\rho+1)p_i(-\alpha_i(0)+(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i+}}{q_{i-}})}-1}{2^{p_i(-\alpha_i(0)+(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i+}}{q_{i-}})}-1}\Big)^{\frac{1}{p_i}}.\Big(1-2^{p_i(\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}})}\Big)^{\frac{1}{p_i}}
\nonumber
\\
&&+\,\eta_{i-}^{\frac{1}{q_{i-}}}{2^{\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}}}}.
\nonumber
\end{eqnarray}
On the other hand, by a similar estimate to (\ref{HAf3Leb}), we also obtain
\[
H_{\Phi,\vec A}(\vec f)(x)\geq \Big(\int\limits_{U} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{| {{A_i}(t)x}|}^{ - {\|\alpha _i\|_{L^\infty}} - \frac{n}{q_i(x)}-\gamma_i - \varepsilon }}} dt\Big)\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}(x).
\]
For convenience, we put
\[
\Gamma^*_\varepsilon= \int\limits_{U}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{\|\alpha_i\|_{L^{\infty}}+\varepsilon}}dt.
\]
From this, by (\ref{DK1.2}), it is not hard to see that
\begin{equation}
H_{\Phi,\vec A}(\vec f)(x)\geq \Gamma^*_\varepsilon.|x|^{-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty})-\frac{n}{q(x)}-\gamma-m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}=: \Gamma^*_\varepsilon.g(x),
\nonumber
\end{equation}
where we denote $g(x)=|x|^{-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty})-\frac{n}{q(x)}-\gamma-m\varepsilon}\chi_{\mathbb R^n\setminus B(0,\varepsilon^{-1})}$.
\\
Since $\alpha(0)=\alpha_{\infty}$, we deduce that
\begin{equation}\label{HfVarHez}
\big\|H_{\Phi,\vec A}(\vec f)\big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}\geq \Gamma^*_\varepsilon.\Big(\sum\limits_{k=k_0}^{\infty}2^{k\alpha(0)p}\big\|g\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p\Big)^{\frac{1}{p}},
\end{equation}
by using Proposition 3.8 in \cite{Almeida2012} again. Here we recall that $k_0$ is the smallest integer such that $2^{k_0-1}\geq \varepsilon^{-1}$.
Let us now show that
\begin{equation}\label{gvar}
\Big(\sum\limits_{k=k_0}^{\infty}2^{k\alpha(0)p}\big\|g\chi_k\big\|_{L^{q(\cdot)}_{\omega}}^p\Big)^{\frac{1}{p}}\gtrsim \eta_{+}^{\frac{1}{q_+}}\dfrac{2^{k_0(\alpha(0)-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)\frac{q_+}{q_-})}}{\Big(1-2^{\alpha(0)-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)\frac{q_+}{q_-}}\Big)^{\frac{1}{p}}},
\end{equation}
where $\eta_{+}=\dfrac{2^{(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)q_+}-1}{(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)q_+}.$ Indeed, since $k\geq k_0>1$, we get
\begin{eqnarray}
F_{q}(g\omega.\chi_k)&=& \int\limits_{C^k}{|x|^{-(\sum\limits_{i=1}^m\|\alpha_i\|_{L^\infty}+m\varepsilon)q(x)-n}dx}\nonumber
\\
&\geq&\int\limits_{2^{k-1}}^{2^k}\int\limits_{S^{n-1}}{r^{-(\sum\limits_{i=1}^m\|\alpha_i\|_{L^\infty}+m\varepsilon)q_+-1}d\sigma(x')}dr \gtrsim \eta_+.2^{-k(\sum\limits_{i=1}^m\|\alpha_i\|_{L^\infty}+m\varepsilon)q_+}.
\nonumber
\end{eqnarray}
Thus, by (\ref{maxminvar}), we have $\big\|g\chi_k\big\|_{L^{q(\cdot)}_{\omega}}\gtrsim \eta_+^{\frac{1}{q_+}}.2^{-k(\sum\limits_{i=1}^m\|\alpha_i\|_{L^\infty}+m\varepsilon)\frac{q_+}{q_-}}.$ This finishes the proof of the estimate (\ref{gvar}).
Now, we define
\[
\vartheta^{**}(\varepsilon)=\dfrac{\eta_{+}^{\frac{1}{q_+}}2^{k_0(\alpha(0)-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)\frac{q_+}{q_-})}\prod\limits_{i=1}^m \Big(1-2^{p_i(\alpha_i(0)-(\|\alpha_i\|_{L^\infty}+\varepsilon)\frac{q_{i-}}{q_{i+}})}\Big)^{\frac{1}{p_i}}}{\prod\limits_{i=1}^m I_i(\varepsilon).\Big(1-2^{\alpha(0)-(\sum\limits_{i=1}^{m}\|\alpha_i\|_{L^\infty}+\varepsilon m)\frac{q_+}{q_-}}\Big)^{\frac{1}{p}}}.
\]
By (\ref{bdtfiVarHez})-(\ref{gvar}), we estimate
\begin{eqnarray}\label{bdtHfVarHerz}
&&\big\|H_{\Phi,\vec A}(\vec f)\big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}\nonumber
\\
&&\gtrsim\varepsilon^{-m\varepsilon}\vartheta^{**}.\Big(\int\limits_{U}{\dfrac{\Phi(t)}{|t|^n}\prod\limits_{i=1}^{m}\min\Big\{\big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i+}}+\gamma_i}, \big\|A_i^{-1}(t)\big\|^{\frac{n}{q_{i-}}+\gamma_i}\Big\}\big\|A_i^{-1}(t)\big\|^{\|\alpha_i\|_{L^{\infty}}}}\nonumber
\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\prod\limits_{i=1}^m\big\|A_i^{-1}(t)\big\|^{\varepsilon}\varepsilon^{m\varepsilon}dt\Big).\prod\limits_{i=1}^m \big\|f_i\big\|_{{\mathop K\limits^.}_{q_i(\cdot),\omega_i}^{{\alpha_i(\cdot)}, p_i}}.
\end{eqnarray}
By the assumption $\alpha_i(0)<\|\alpha_i\|_{L^\infty}\frac{q_{i-}}{q_{i+}}$, we have $\alpha (0)<\sum\limits_{i=1}^m\big\|\alpha_i\big\|_{L^\infty}.$ From this, the limit of the function $\varepsilon^{-m\varepsilon}\vartheta^{**}(\varepsilon)$ is a positive number as $\varepsilon$ tends to zero.
Therefore, by (\ref{prodLeb}), (\ref{bdtHfVarHerz}) and the dominated convergence theorem of Lebesgue, we obtain
\[
\big\|H_{\Phi,\vec A}(\vec f)\big\|_{{\mathop K\limits^.}_{q(\cdot),\omega}^{{\alpha(\cdot)}, p}}
\gtrsim \mathcal C_6^*.\prod\limits_{i=1}^m \big\|f_i\big\|_{{\mathop K\limits^.}_{q_i(\cdot),\omega_i}^{{\alpha_i(\cdot)}, p_i}},
\]
which ends the proof for this case.
\end{proof}
When all of $\alpha_1(\cdot)$,...,$\,\alpha_m(\cdot)$ and $q_1(\cdot)$,...,$\,q_m(\cdot)$ are constant, we obtain the following useful result, which can be seen as an extension of Theorems 3.1 and 3.2 in \cite{CHH2016} to the case of matrices having property (\ref{DK1}) as mentioned above.
\begin{theorem}\label{TheoremMorreyHerz}
Let $\omega(x)= |x|^\gamma$, $\gamma_1, ..., \gamma_m\in\mathbb R$, $\lambda_1,...,\lambda_m \in\mathbb R^+$, $\alpha_1,...,\alpha_m\in\mathbb R$, $1\leq q_i, q<\infty$, $0< p_i, p<\infty$ and $\omega_i(x)=|x|^{\gamma_i}$ for all $i= 1,...,m$. Simultaneously, let
\begin{equation}\label{gammq}
\frac{\gamma}{q}=\frac{\gamma_1}{q_1}+\cdots+\frac{\gamma_m}{q_m}.
\end{equation}
Then $H_{\Phi,\vec{A}}$ is a bounded operator from ${M\mathop{K}\limits^.}_{{p_1},{q_1}}^{{\alpha _1},{\lambda _1}}({\omega_1}) \times \cdots \times {M\mathop{K}\limits^.}_{{p_m},{q_m}}^{{\alpha _m},{\lambda _m}}({\omega_m})$ to ${M\mathop{K}\limits^.}_{{p},{q}}^{{\alpha},{\lambda}}({\omega})$ if and only if
\[
\mathcal C_7=\int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {A_i^{ - 1}(t)} \right\|}^{ - {\lambda _i} + {\alpha _i} + \frac{{n+{\gamma _i}}}{{{q_i}}}}}} } dt < + \infty.
\]
Moreover,
\[
{\big\| {{H_{\Phi ,\vec A }}} \big\|_{{M\mathop{K}\limits^.}_{{p_1},{q_1}}^{{\alpha _1},{\lambda _1}}({\omega_1}) \times \cdots \times {M\mathop{K}\limits^.}_{{p_m},{q_m}}^{{\alpha _m},{\lambda _m}}({\omega_m})\to {M\mathop{K}\limits^.}_{{p},{q}}^{{\alpha},{\lambda}}({\omega})}} \simeq \mathcal C_7.
\]
\end{theorem}
\begin{proof}
It is easy to see that the results of Theorem \ref{TheoremMorreyHerz} can be viewed as a consequence of Theorem \ref{TheoremVarMorreyHerz1}. Indeed, we put $\gamma^*=\frac{\gamma}{q}$, $\gamma^*_i=\frac{\gamma_i}{q_i}$ for $i=1,...,m$ and $\omega^*=|x|^{\gamma^*}, \omega^*_i=|x|^{\gamma^*_i}$ for $i=1,...,m$. By (\ref{gammq}) and the assumption that $\alpha_1(\cdot)$,...,$\,\alpha_m(\cdot)$ and $q_1(\cdot)$,...,$\,q_m(\cdot)$ are constant, we have
\[
\mathcal C_5= \mathcal C_5^*= \int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {A_i^{ - 1}(t)} \right\|}^{ - {\lambda _i} + {\alpha _i} + \frac{n}{q_i}+\gamma^*_i}}} }dt=\mathcal C_7.
\]
Thus, combining the case $\rm (a)$ and case $\rm (b1)$ of Theorem \ref{TheoremVarMorreyHerz1}, we deduce
\[
{\big\| {{H_{\Phi ,\vec A }}} \big\|_{{M\mathop{K}\limits^.}_{{p_1},{q_1},\omega^*_1}^{{\alpha _1},{\lambda _1}}\times \cdots \times {M\mathop{K}\limits^.}_{{p_m},{q_m},{\omega^*_m}}^{{\alpha _m},{\lambda _m}}\to {M\mathop{K}\limits^.}_{{p},{q},{\omega}^*}^{{\alpha},{\lambda}}}} \simeq \mathcal C_7.
\]
At this point, by relation (\ref{rel-func}), we immediately get the desired result.
\end{proof}
As a consequence of Theorem \ref{TheoremVarHerz1}, we also obtain the analogous result for the
constant parameters case as follows.
\begin{theorem}\label{Herz}
Let $1\leq p, p_1,...,p_m <\infty$, and suppose that the assumptions of Theorem \ref{TheoremMorreyHerz} and the hypothesis (\ref{lienhieppi}) in Theorem \ref{TheoremVarHerz} hold. Then
$H_{\Phi,\vec{A}}$ is a bounded operator from ${\mathop K\limits^.}_{{q_1}}^{ {\alpha _1},{p_1}}({\omega_1}) \times \cdots \times {\mathop K\limits^.}_{{q_m}}^{ {\alpha _m},{p_m}}({\omega_m})$ to ${\mathop K\limits^.}_q^{ \alpha ,p}(\omega)$ if and only if
\[
\mathcal C_8= \int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {A_i^{ - 1}(t)} \right\|}^{{\alpha _i} + \frac{{n+{\gamma _i}}}{{{q_i}}}}}} } dt < + \infty.
\]
Furthermore,
\[
\big\| {{H_{\Phi ,\vec A }}} \big\|_{{\mathop K\limits^.}_{{q_1}}^{ {\alpha _1},{p_1}}({\omega_1}) \times \cdots \times {\mathop K\limits^.}_{{q_m}}^{ {\alpha _m},{p_m}}({\omega_m}) \to {\mathop K\limits^.}_q^{ \alpha ,p}(\omega)}\simeq \mathcal C_8.
\]
\end{theorem}
\begin{proof}
By taking $\gamma^*,\gamma^*_1,...,\gamma^*_m$ and $\omega^*,\omega^*_1,...,\omega^*_m$ as above, it is not hard to see that $\mathcal C_6=\mathcal C_6^* =\mathcal C_8$. Therefore, by using cases $\rm (a)$ and $\rm (b1)$ of Theorem \ref{TheoremVarHerz1} and the relation (\ref{rel-func}), we finish the proof of this theorem.
\end{proof}
Now, let us take measurable functions $s_1(t),..., s_m(t)$ which are nonzero almost everywhere in $\mathbb R^n$. We consider the special case in which the matrices $A_i(t)={\rm diag }[ s_{i1}(t),...,s_{in}(t) ]$ with $|s_{i1}|=\cdots=|s_{in}|=|s_i|$ for almost every $t\in\mathbb R^n$ and all $i=1,...,m$. It is obvious that the matrices $A_i$ satisfy the condition (\ref{DK1}). Therefore, since the Lebesgue space with power weights is a special case of the Herz space, we also obtain the following corollary.
\begin{corollary}\label{HquaHerz}
Let $1\leq p, p_1,...,p_m <\infty$, $\alpha_1,...,\alpha_m\in\mathbb R$, and suppose that the hypothesis (\ref{lienhieppi}) in Theorem \ref{TheoremVarHerz} holds. Then $H_{\Phi,\vec{A}}$ is a bounded operator from $L^{p_1}(|x|^{\alpha_1p_1}dx)\times \cdots \times L^{p_m}(|x|^{\alpha_mp_m}dx)$ to $L^{p}(|x|^{\alpha p}dx)$ if and only if
\[
\mathcal C_9= \int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{|s_i(t)|}^{{-\alpha _i} - \frac{n}{{p_i}}}}} } dt < + \infty.
\]
Furthermore,
\[
\big\| {{H_{\Phi ,\vec A }}} \big\|_{L^{p_1}(|x|^{\alpha_1p_1}dx)\times \cdots \times L^{p_m}(|x|^{\alpha_mp_m}dx) \to L^{p}(|x|^{\alpha p}dx)}= \mathcal C_9.
\]
\end{corollary}
\begin{proof}
By the assumption of the matrices $A_i$'s, it is easy to see that
\[
|A_i(t)x|^{\alpha} = |s_i(t)|^\alpha.|x|^{\alpha},\,\textit{\rm for all}\,\, \alpha\in\mathbb R,\, i=1,...,m.
\]
Hence, we immediately obtain the desired result.
\end{proof}
By the relation between the Hausdorff operators and the Hardy-Ces\`{a}ro operators as mentioned in Section 1, we see that Corollary \ref{HquaHerz} extends and strengthens the results of Theorem 3.1 in \cite{HK2015} with power weights.
\vskip 5pt
Let us now assume that $q(\cdot)$ and $q_i(\cdot)\in\mathcal P_\infty(\mathbb R^n)$, $\lambda,\alpha,\gamma ,\alpha_i, {\lambda _i},{\gamma_i}$ are real numbers such that $\lambda_i\in \big(\frac{-1}{q_{i\infty}},0\big)$, $\gamma_i \in (-n,\infty)$, $i=1,2,...,m$ and
$$ \frac{1}{{{q_1}(\cdot)}} + \frac{1}{{{q_2}(\cdot)}} + \cdots + \frac{1}{{{q_m}(\cdot)}} = \frac{1}{q(\cdot)}, $$
$$ \frac{{\gamma _1}}{{{q_{1\infty}}}} + \frac{\gamma _2}{{{q_{2\infty}}}} + \cdots + \frac{\gamma _m}{{{q_{m\infty}}}} = \frac{\gamma}{q_\infty},$$
$$ \frac{{n + {\gamma _1}}}{{n + \gamma}}{\lambda _1} + \frac{{n + {\gamma_2}}}{{n + \gamma }}{\lambda _2} +\cdots+ \frac{{n + {\gamma _m}}}{{n + \gamma}}{\lambda _m} = \lambda,$$
$$ \alpha_1+\cdots+\alpha_m=\alpha.$$
We are also interested in the multilinear Hausdorff operators on the product of weighted $\lambda$-central Morrey spaces with variable exponent. We have the following interesting result.
\begin{theorem}\label{TheoremMorreyVar}
Let $\omega_1(x)=|x|^{\gamma_1},...,\omega_m(x)=|x|^{\gamma_m}$, $\omega(x)= |x|^\gamma$ and $v_1(x)=|x|^{\alpha_1},...,v_m(x)=|x|^{\alpha_m}$, $v(x)= |x|^\alpha$. In addition, the hypothesis (\ref{DKVarLeb}) in Theorem \ref{TheoremVarLebesgue1}
holds and the following condition is true:
\begin{equation}\label{DK2MVar}
\mathcal C_{10}= \int\limits_{\mathbb R^n} { \frac{{ {\Phi (t)}}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {A_i(t)} \right\|}^{(n + {\gamma _i})\left(\frac{1}{q_{i\infty}}+{\lambda _i}\right)}}}}c_{A_i,q_i,\alpha_i}(t)\big\|1\big\|_{L^{r_{1i}(t,\cdot)}}dt <+ \infty.
\end{equation}
Then $H_{\Phi ,\vec{A}}$ is bounded from ${\mathop B\limits^.}_{\omega_1,v_1}^{q_1(\cdot),\lambda _1}\times \cdots\times {\mathop B\limits^.}_{{\omega_m,v_m}}^{{q_m(\cdot)},{\lambda _m}}$ to ${\mathop B\limits^.}_{\omega,v}^{q(\cdot),\lambda}$.
\end{theorem}
\begin{proof}
For $R>0$, we denote
\[
\Delta_R={\frac{1}{{{\omega}\big(B(0,R)\big)^{\frac{1}{q_\infty}+\lambda}}}} \big\|H_{\Phi ,\vec{A}}(\vec{f})\big\|_{L^{q(\cdot)}_{v}(B(0,R))}.
\]
It follows from using the Minkowski inequality for the variable Lebesgue space that
\begin{equation}\label{MVarDeltaR}
\Delta_R \lesssim \int\limits_{\mathbb R^n}{\frac{1}{{\omega(B(0,R))}^{\frac{1}{q_\infty} +\lambda}}.\frac{\Phi(t)}{|t|^n}\big\|\prod\limits_{i=1}^{m}{f_i(A_i(t).)}\big\|_{L^{q(\cdot)}_{v}(B(0,R))}dt}.
\end{equation}
On the other hand, we apply the H\"{o}lder inequality for the variable Lebesgue space to obtain
\begin{equation}\label{MVarHolder1}
\big\| \prod\limits_{i=1}^{m}f_i(A_i(t).)\big\|_{L^{q(\cdot)}_{v}(B(0,R))}\lesssim \prod\limits_{i=1}^{m}\big\|f_i(A_i(t).)\big\|_{L^{q_i(\cdot)}_{v_i}(B(0,R))}.
\end{equation}
By estimating as (\ref{MVarfiBLeb}) and (\ref{nhungVarLeb2}), we have
\begin{equation}\label{MVarfiB}
\big\|f_i(A_i(t).)\big\|_{L^{q_i(\cdot)}_{v_i}(B(0,R))}\lesssim c_{A_i,q_i,\alpha_i}(t).\big\|1 \big\|_{L^{r_{1i}(t,\cdot)}}.\big\|f_i\big\|_{L^{q_i(\cdot)}_{v_i}(B(0,R||A_i(t)||))}.
\end{equation}
In view of $\frac{{n + {\gamma _1}}}{{n + \gamma}}{\lambda _1} + \frac{{n + {\gamma_2}}}{{n + \gamma }}{\lambda _2} +\cdots+ \frac{{n + {\gamma _m}}}{{n + \gamma}}{\lambda _m} = \lambda$, we estimate
\[
\frac{1}{{\omega( B(0,R))}^{\frac{1}{q_{\infty}} + \lambda}} \lesssim \frac{\big\|A_i(t)\big\|^{(\gamma_i+n)(\frac{1}{q_{i\infty}}+\lambda_i)}}{{\omega_i( B(0,R\|A_i(t)\|))}^{\frac{1}{q_{i\infty}} + \lambda _i}}.
\]
Thus, by (\ref{MVarDeltaR}) and (\ref{MVarfiB}), it follows that
$\Delta_R\lesssim \mathcal C_{10}\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop B\limits^.}_{{\omega_i,v_i}}^{{q_i(\cdot)},{\lambda _i}}}}}.$
\\
Consequently, it is straightforward that ${\left\| {{H_{\Phi ,\vec{A} }}(\vec{f} )} \right\|_{{\mathop B\limits^.}_{\omega,v}^{q(\cdot),\lambda }}}\lesssim \mathcal C_{10}\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop B\limits^.}_{{\omega_i,v_i}}^{{q_i(\cdot)},{\lambda _i}}}}}.$
\end{proof}
As a consequence of Theorem \ref{TheoremMorreyVar}, in view of (\ref{detA}) and (\ref{DK1.1}), we also have the analogous result for the case where $q,q_1,...,q_m$ are constant, as follows.
\begin{corollary}\label{CoroMorreyVar}
Let $\omega_i,v_i,\omega,v$ be as Theorem \ref{TheoremMorreyVar}. In addition, the following condition holds:
\begin{equation}\label{DK4MVarCoro}
\mathcal C_{11}= \int\limits_{\mathbb R^n} { \frac{{ {\Phi (t)}}}{{{{\left| t \right|}^n}}}\prod\limits_{i = 1}^m {{{\left\| {A_i^{ - 1}(t)} \right\|}^{\alpha_i-\frac{\gamma_i}{q_i} - \lambda_i(n + {\gamma _i})}}} } dt<+ \infty.
\end{equation}
Then, we have
\[
{\left\| {{H_{\Phi ,\vec{A} }}(\vec{f} )} \right\|_{{\mathop B\limits^.}_{\omega,v}^{q,\lambda }}}\lesssim \mathcal C_{11}.\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop B\limits^.}_{{\omega_i,v_i}}^{{q_i},{\lambda_i}}}}}.
\]
\end{corollary}
\begin{proof}
By (\ref{detA}) and (\ref{DK1.1}), it is easy to see that
\[
{{\left\| {A_i(t)} \right\|}^{(n + {\gamma _i})\left(\frac{1}{q_{i\infty}}+{\lambda _i}\right)}} c_{A_i,q_i,\alpha_i}(t)\lesssim {{\left\| {A_i^{ - 1}(t)} \right\|}^{\alpha_i-\frac{\gamma_i}{q_i} - \lambda_i(n + {\gamma _i})}}.
\]
Hence, by Theorem \ref{TheoremMorreyVar}, the proof is finished.
\end{proof}
Moreover, we also obtain the above operator norm on the product of weighted $\lambda$-central Morrey spaces as follows.
\begin{theorem}\label{TheoremMorrey}
Let $\omega(x)= |x|^\gamma$ and $\omega_i(x)=|x|^{\gamma_i}$ for $i= 1,...,m$.
Then, we have that $H_{\Phi ,\vec{A}}$ is bounded from ${\mathop B\limits^.}^{q_1,\lambda _1}(\omega_1)\times\cdots \times {\mathop B\limits^.}^{{q_m},{\lambda _m}}(\omega_m)$ to ${\mathop B\limits^.}^{q,\lambda}(\omega)$ if and only if
\[
\mathcal C_{12}= \int\limits_{\mathbb R^n} {\frac{\Phi (t)}{{\left| t \right|}^n}\prod\limits_{i = 1}^m {{{\left\| {A_i^{ - 1}(t)} \right\|}^{ - (n + {\gamma_i}){\lambda _i}}}} }dt < + \infty.
\]
Furthermore, we obtain
\[
{\big\|{H_{\Phi ,\vec{A} }} \big\|_{{\mathop B\limits^.}^{{q_1},{\lambda_1}}({{\omega_1}}) \times \cdots \times {\mathop B\limits^.}^{{q_m},{\lambda _m}}({{\omega_m}}) \to {\mathop B\limits^.}^{q,\lambda }(\omega)}}\simeq \mathcal C_{12}.
\]
\end{theorem}
\begin{proof}
We first note that the sufficient condition of the theorem is derived from Corollary \ref{CoroMorreyVar}. In more detail, by letting $\alpha_i=\frac{\gamma_i}{q_i}$ for $i=1,...,m$, we have $\mathcal C_{11}=\mathcal C_{12}<\infty$. Hence, by Corollary \ref{CoroMorreyVar}, we find
\[
{\left\| {{H_{\Phi ,\vec{A} }}(\vec{f} )} \right\|_{{\mathop B\limits^.}_{\omega,{\omega}^{1/q}}^{q,\lambda }}}\lesssim \mathcal C_{12}.\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop B\limits^.}_{{\omega_i,{\omega_i}^{1/q_i}}}^{{q_i},{\lambda_i}}}}}.
\]
From this, by ${\mathop B\limits^.}_{\omega,{\omega}^{1/q}}^{q,\lambda }={\mathop B\limits^.}^{q,\lambda}(\omega)$ and ${\mathop B\limits^.}_{\omega_i,{\omega_i}^{1/q_i}}^{q_i,\lambda_i }={\mathop B\limits^.}^{q_i,\lambda_i}(\omega_i)$ for $i=1,...,m$, the proof of the sufficient condition of this theorem is complete.
To give the proof for the necessary condition, let us now choose
\[
f_i(x)=\left|x\right|^{(n+\gamma_i)\lambda_i}.
\]
Then, it is not hard to show that
\[
{\left\| {{f_i}} \right\|_{{\mathop B\limits^.}^{{q_i},{\lambda _i}}({{\omega_i}})}} = {\left( {\frac{{n + {\gamma _i}}}{{\left| {{S_{n - 1}}} \right|}}} \right)^{{\lambda _i}}}\frac{1}{{{{\left( {1 + {q_i}{\lambda _i}} \right)}^{\frac{1}{{q_i}}}}}}.
\]
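Indeed, passing to polar coordinates and using $\gamma_i>-n$ together with $1+q_i\lambda_i>0$, we have
\[
\omega_i\big(B(0,R)\big)=\frac{\left| {{S_{n - 1}}} \right| R^{n+\gamma_i}}{n+\gamma_i},
\qquad
\int\limits_{B(0,R)}|x|^{(n+\gamma_i)\lambda_iq_i}\omega_i(x)dx=\frac{\left| {{S_{n - 1}}} \right| R^{(n+\gamma_i)(1+\lambda_iq_i)}}{(n+\gamma_i)(1+\lambda_iq_i)},
\]
so that the quotient $\omega_i(B(0,R))^{-(1+q_i\lambda_i)}\int_{B(0,R)}|f_i(x)|^{q_i}\omega_i(x)dx$ does not depend on $R$; taking the $q_i$-th root gives the displayed value.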
Thus, we have
\begin{equation}\label{Choosefi}
\prod\limits_{i = 1}^m {{{\big\| {{f_i}} \big\|}_{{\mathop B\limits^.}^{{q_i},{\lambda _i}}({\omega_i})}}} \lesssim {\left( {\frac{{\gamma + n}}{{\left| {{S_{n - 1}}} \right|}}} \right)^{\lambda }}{(1 + \lambda q)^{\frac{-1}{q}}}.
\end{equation}
By choosing $f_i$'s, we also have
\begin{eqnarray}
&&{\left\| {{H_{\Phi ,\vec{A} }}\left( {\vec{f} } \right)} \right\|_{{\mathop B\limits^.}^{q,\lambda }(\omega)}}
\nonumber
\\
&&= \mathop {\sup }\limits_{R > 0} {\Big( {\frac{1}{{\omega{{(B(0,R))}^{1 + q\lambda }}}}\int\limits_{B(0,R)} {{{\Big| {\int\limits_{\mathbb R^n} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} \prod\limits_{i = 1}^m {{{\left| {{A_i}(t)x} \right|}^{(n + {\gamma _i}){\lambda _i}}}} dt} \Big|}^q}\omega(x)dx} }\Big)^{\frac{1}{q}}}.
\nonumber
\end{eqnarray}
By (\ref{DK1.2}), we get ${\left| {{A_i}(t)x} \right|^{(n + {\gamma _i}){\lambda _i}}} \gtrsim {\left\| {A_i^{ - 1}(t)} \right\|^{ - (n + {\gamma_i}){\lambda _i}}}.{\left| x \right|^{(n + {\gamma_i}){\lambda _i}}}.$ Therefore, we imply that
\begin{eqnarray}
{\left\| {{H_{\Phi ,\vec{A} }}\big( {\vec{f} } \big)} \right\|_{{\mathop B\limits^.}^{q,\lambda }(\omega)}}&\gtrsim & \Big( {\int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} {\prod\limits_{i=1}^{m}{\left\| {A_i^{ - 1}(t)} \right\|}^{ - (n + {\gamma_i}){\lambda _i}}}dt} \Big)\times\nonumber
\\
&&\,\,\,\times\,\mathop {\sup }\limits_{R > 0} {\Big( {\frac{1}{{\omega{{(B(0,R))}^{1 + q\lambda }}}}\int\limits_{B(0,R)} {\Big(\prod\limits_{i = 1}^m {\left| x \right|}^{(n + {\gamma_i}){\lambda _i}q}\Big)|x|^{\gamma} dx} } \Big)^{\frac{1}{q}}}\nonumber
\\
&=&\Big( {\int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} {\prod\limits_{i=1}^{m}{\left\| {A_i^{- 1}(t)} \right\|}^{ - (n + {\gamma_i})\lambda_i}}dt} \Big){\Big( {\frac{{\gamma + n}}{{\left| {{S_{n - 1}}} \right|}}} \Big)^{\lambda }}{(1 + \lambda q)^{\frac{-1}{q}}}. \nonumber
\end{eqnarray}
Hence, it follows from (\ref{Choosefi}) that
\[
{\left\| {{H_{\Phi ,\vec{A} }}\big( {\vec{f} } \big)} \right\|_{{\mathop B\limits^.}^{q,\lambda }(\omega)}} \gtrsim\Big( {\int\limits_{{\mathbb R^n}} {\frac{{\Phi (t)}}{{{{\left| t \right|}^n}}}} {\prod\limits_{i=1}^{m}{\left\| {A_i^{ - 1}(t)} \right\|}^{ - (n + {\gamma_i}){\lambda _i}}}dt} \Big).\prod\limits_{i = 1}^m {{{\left\| {{f_i}} \right\|}_{{\mathop B\limits^.}^{{q_i},{\lambda _i}}({{\omega_i}})}}}.
\]
Because we assume that $H_{\Phi ,\vec{A}}$ is bounded from ${\mathop B\limits^.}^{q_1,\lambda _1}({\omega_1})\times \cdots \times {\mathop B\limits^.}^{{q_m},{\lambda _m}}({\omega_m})$ to ${\mathop B\limits^.}^{q,\lambda}(\omega)$, we immediately deduce that $\mathcal C_{12}<\infty$, and hence the proof of the theorem is completed.
\end{proof}
{\textbf{Acknowledgments}}. This paper is supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.02-2014.51.
\bibliographystyle{amsplain}
|
2,877,628,088,495 | arxiv | \section{Introduction} \label{S:intro}
Richard Thompson introduced his group $F$ in 1965, and it has since been extensively studied. $F$ is finitely presented, has exponential growth, and its abelianization is $\mathbb{Z} \times \mathbb{Z}$. The question as to whether $F$ is amenable was first posed in 1979. $F$ is, in a sense, ``on the edge of amenability'', as it is not elementary amenable but does not contain a free subgroup on two generators \cite{BS85}. For several years it was hoped to provide the first finitely-presented counterexample to the Von Neumann conjecture, until Ol'shanskii and Sapir provided a different counterexample in 2000 \cite{OS02}.
F{\o}lner provided a geometric criterion for the amenability of a group in 1955 \cite{FO55}, based on the existence of subsets of the Cayley graph with arbitrarily small ``boundary''. This criterion holds for semigroups as well (one may find a proof in \cite{NA64}). It also allows one to extend the definition of ``amenable'' to cover arbitrary graphs. In 1992, Block and Weinberger extended the definition to an even broader class of metric spaces. They defined the {\em uniformly finite homology groups} $H^{uf}_n(M)$ of a metric space $M$, and proved that a space $M$ was amenable if and only if $H^{uf}_0(M) \neq 0$. This paper seeks to apply the results of Block and Weinberger to Thompson's Group $F$.
The results of Block and Weinberger center around the existence of ``Ponzi schemes'', which come from the uniformly finite 1-chains. In this paper we will be looking only at graphs, and we will define a slightly simpler notion of a ``Ponzi flow'' which works for our purposes.
In Section 2, we give a very brief overview of Thompson's Group $F$. Readers interested in a more in-depth introduction are referred to \cite{jB04} or \cite{CF96}. We also will define amenability and F{\o}lner's criterion. In Section 3, we discuss the results of Block and Weinberger and define Ponzi flows, as well as proving certain results about Ponzi flows on subgraphs of Cayley graphs (namely, a subgraph with a Ponzi flow always has measure zero). In Section 4, we state and prove the main result:
\begin{theorem} \label {T:main}
Let $k,l$ be integers with $k \geq 0$ and $l > 0$. Let $\Gamma_k^l$ be the subgraph of the Cayley Graph of $F$ induced by vertices which can be expressed in the form $wv$, where $w$ is a positive word in the infinite generating set $\{x_0, x_1, x_2, ...\}$ of length $\leq k$ and $v$ is a positive word in $\{x_0, ..., x_l\}$ (of any length). Then $\Gamma_k^l$ is not amenable.
\end{theorem}
The case $k=1, l=1$ was proved by D. Savchuk in \cite{DS08}.
A corollary of this theorem is that all finitely-generated submonoids of the positive monoid of $F$ are not amenable, and therefore if $F$ is amenable these sets have measure zero.
\section{Thompson's Group F}\label{S:TGF}
Thompson's group $F$ has been studied for several decades. It can be described as the group of piecewise-linear homeomorphisms of the unit interval, all of whose derivatives are integer powers of 2 and with a finite number of break points which are all dyadic rationals. It can also be described as the group with the following infinite presentation:
$$
<x_0,x_1,x_2,x_3,... | x_ix_j = x_jx_{i-1} \,\,\forall\, i>j+1 >
$$
From this presentation, we may see $x_i = x_0x_{i-1}x_0^{-1}$ for $i > 1$, thus this group is finitely generated by $\{x_0, x_1\}$. It turns out this group is finitely presented as well. However, it is still useful to consider the infinite generating set $\{x_0, x_1, x_2, ...\}$. We have the following definition:
\begin{definition}
The {\em positive monoid} of $F$ is the submonoid of $F$ consisting of elements which can be expressed as words in $\{x_0, x_1, x_2, ...\}$, without using inverses.
\end{definition}
Any element of $F$ can be expressed as an element of the positive monoid times the inverse of such an element. (Elements of $F$ have a normal form which is such a product).
In \cite{jB04}, the group $F$ is studied using {\it two-way forest diagrams} (so called because trees can extend in either direction in the forest, though we will only be studying forests with trees extending to the right). We will make extensive use of these diagrams when studying the positive monoid in section 4. We describe the two-way forest diagrams of the positive monoid here, referring the reader to \cite{jB04} for the proofs.
\begin{definition}
A {\it binary forest} is a sequence of binary trees, such that all but finitely many of the trees are trivial (i.e., have 1 node):
\end{definition}
\vspace{.2in}
\centerline {
\includegraphics[width=4in]{pic1}
}
\begin{definition}
A {\it pointed forest} is a binary forest with a distinguished, or ``pointed'', tree.
\end{definition}
\vspace{.2in}
\centerline {
\includegraphics[width=4in]{pic2}
}
For the remainder of this paper, we will omit the ellipses, and assume a forest diagram or pointed forest diagram has an infinite number of trivial trees continuing to the right.
\smallskip
Each element of the positive monoid of $F$ can be identified with a pointed forest. The identity element is the pointed forest consisting only of trivial trees, with the pointer on the leftmost. Right multiplication by $x_0$ moves the pointer one tree to the right:
\vspace{.1in}
\centerline {
\includegraphics[width=2.9in]{pic3}
}
\centerline {Multiplication by $x_0$}
\vspace{.2in}
Right multiplication by $x_1$ adds a caret between the pointed tree and the tree immediately to its right, making a new tree whose left child is the pointed tree and whose right child is the tree to its right. This new tree becomes the pointed tree:
\vspace{.2in}
\centerline {
\includegraphics[width=3in]{pic4}
}
\centerline {Multiplication by $x_1$}
\vspace{.2in}
Since $x_i = x_0^{i-1}x_1x_0^{-(i-1)}$, we can see that right multiplication by $x_i$ moves the pointer $i-1$ trees to the right, adds a caret, and then moves the pointer $i-1$ trees to the left again. This is equivalent to adding a caret between the trees $i-1$ and $i$ steps away from the pointed tree.
\vspace{.2in}
\centerline {
\includegraphics[width=3in]{pic5}
}
\centerline {Multiplication by $x_3$}
\vspace{.2in}
Multiplication of pointed forests corresponds to ``putting one on top of the other'': If $P$ and $Q$ are pointed forests, then $PQ$ is the forest obtained by using the trees of $P$ as the leaves of $Q$, with the pointed tree in $P$ acting as the leftmost leaf:
\vspace{.2in}
\centerline {
\includegraphics[width=4in]{pic6}
}
\centerline {Multiplying the two forests on the left yields the forest on the right}
\vspace{.2in}
The pointer is then placed above whatever tree was pointed in $Q$.
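To make this calculus concrete, we include a short Python sketch of pointed forests and the generator actions (the encoding and all names are our own choices for illustration, not taken from \cite{jB04}). Trees are nested pairs, with \texttt{None} denoting the trivial tree; a pointed forest is a list of trees together with a pointer index, trailing trivial trees being left implicit.
\begin{verbatim}
# A minimal sketch of pointed forest diagrams for the positive monoid of F.
# None is the trivial tree; (L, R) is a caret with children L and R.

def identity():
    """The identity: all trivial trees, pointer on the leftmost."""
    return ([None], 0)

def _pad(trees, length):
    """Extend the explicit part of the forest with trivial trees."""
    return trees + [None] * max(0, length - len(trees))

def mult_x0(forest):
    """Right multiplication by x_0: move the pointer one tree right."""
    trees, p = forest
    return (_pad(trees, p + 2), p + 1)

def mult_xi(forest, i):
    """Right multiplication by x_i (i >= 1): add a caret joining the
    trees i-1 and i steps to the right of the pointed tree."""
    trees, p = forest
    trees = _pad(trees, p + i + 1)
    caret = (trees[p + i - 1], trees[p + i])
    return (trees[:p + i - 1] + [caret] + trees[p + i + 1:], p)

# The word x_0 x_1 gives a single caret on the second tree, pointer on it.
assert mult_xi(mult_x0(identity()), 1) == ([None, (None, None)], 1)
\end{verbatim}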
It has been a longstanding open question as to whether $F$ is {\em amenable}:
\begin{definition}
A group $G$ is called {\em amenable} if there exists a ``left-invariant measure'' on $G$, that is, a function $\mu : ${\em P}$(G) \rightarrow [0,1]$ such that:
- $\mu(G) = 1$
- $\mu$ is finitely additive: If $A$ and $B$ are disjoint subsets of $G$, $\mu(A) + \mu(B) = \mu(A\cup B)$.
- $\mu$ is G-invariant: For any $g \in G$, $A \subset G$, $\mu(A) = \mu(gA)$.
\end{definition}
A useful result for determining amenability is {\it F{\o}lner's Criterion}, which uses the left Cayley graph of $G$ (the graph obtained by taking a generating set $A$ and using $G$ as the vertex set, connecting two vertices $g$ and $g'$ by an edge if $g' = ag$ for some $a \in A$).
\begin{theorem} (F{\o}lner's Criterion): A group $G$ is amenable if and only if, for any $\epsilon > 0$, there exists a finite subset $A$ of vertices in the Cayley graph of $G$ such that
$$
\frac{\#\partial(A)}{\# A} < \epsilon,
$$
where $\# A$ is the number of vertices in $A$, and $\#\partial(A)$ is the number of edges connecting a vertex in $A$ to a vertex outside $A$.
\end{theorem}
Intuitively, F{\o}lner's criterion says there are finite subsets of the Cayley graph whose boundaries are arbitrarily small when compared to the size of the sets themselves.
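For instance, the group $\mathbb{Z}$ with generating set $\{1\}$ satisfies F{\o}lner's criterion, and hence is amenable: taking $A_n = \{-n, ..., n\}$ gives
$$
\frac{\#\partial(A_n)}{\# A_n} = \frac{2}{2n+1} \rightarrow 0.
$$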
Since the left and right Cayley graphs of a group are isomorphic, F{\o}lner's criterion shows that a left-invariant measure exists on a group if and only if a right-invariant measure exists. For the remainder of this paper, we will be concerned with {\em right}-invariant measures, and so all Cayley graphs we consider from now on will be right Cayley graphs ($g$ and $g'$ are connected by an edge iff $ga = g'$ for some generator $a$).
F{\o}lner's criterion is very useful, in that it can be applied not only to a group but to any graph. In particular, we say an arbitrary graph is amenable if F{\o}lner's criterion holds for that graph. This allows us to state the following proposition:
\begin{proposition}
Let $\Gamma$ be the subgraph of the Cayley graph of Thompson's Group $F$ (using the $x_0, x_1$ generating set) consisting of vertices in the positive monoid and all edges between such vertices. Then $\Gamma$ is amenable if and only if $F$ is.
\end{proposition}
The proof uses the facts that any finite set in $F$ can be translated into the positive monoid, and that all outgoing edges from the positive monoid are of the $x_0^{-1}$ or $x_1^{-1}$ type. (A proof can be found in \cite{DS08}).
\section{Uniformly Finite Homology} \label{S:UFH}
This section describes and obtains a few results from the uniformly finite homology of Block and Weinberger. We will always be considering a graph $\Gamma$ of bounded degree, though many of the results apply to a much broader class of metric spaces.
\begin{definition} \label{D:UF1C}
Let $\Gamma$ be a graph of bounded degree with vertex set $V$. A {\em uniformly finite 1-chain with integer coefficients} on $\Gamma$ is a formal infinite sum $\sum a_{x,y}(x,y)$, where the $(x,y)$ are ordered pairs of vertices of $\Gamma$, $a_{x,y} \in \mathbb{Z}$, such that the following properties are satisfied:
\bigskip
1) There exists $K>0$ such that $ \forall x,y, |a_{x,y}| < K$
2) There exists some $R>0$ so that $a_{x,y} = 0$ if $d(x,y) > R$
\end{definition}
Notice that condition 2) guarantees that for any fixed $x \in V$, the set of pairs $(x,y)$ such that $a_{x,y} \neq 0$ is finite. Similarly, the pairs $(y,x)$ with $a_{y,x} \neq 0$ also form a finite set. This allows us to make the following definition:
\begin{definition} \label{D:PSI}
A uniformly finite 1-chain is a {\em Ponzi scheme} if, for all $x \in \Gamma$, we have $\sum_{v \in \Gamma}a_{v,x} - \sum_{v\in \Gamma}a_{x,v} > 0$.
\end{definition}
We now state the main result of \cite{BW92} which we will use in this paper:
\begin{theorem} \label{T:BW}
Let $\Gamma$ be a graph of bounded degree. A Ponzi scheme exists on $\Gamma$ if and only if $\Gamma$ is not amenable (in the F{\o}lner sense).
\end{theorem}
We will use a rephrased version of this theorem for the case of our graphs:
\begin{definition} \label{D:PonziFlow}
Let $\Gamma$ be a graph of bounded degree with vertex set $V$. A {\it Ponzi flow on $\Gamma$} will mean a function $G: V\times V\rightarrow \mathbb{Z}$ with the following properties:
\bigskip
i) $G(a,b) = 0$ if there is no edge from $a$ to $b$ in $\Gamma$,
\smallskip
ii) $G(a,b) = -G(b,a)$ for all $a, b \in V$,
\smallskip
iii) $G$ is uniformly bounded, i.e., $\exists N \in \mathbb{Z}$ such that $\forall a, b \in V, G(a,b) \leq N $
\smallskip
iv) For each $a\in V$, $\sum_{b\in G} G(b,a) > 0$.
\end{definition}
Note that the sum in condition ($iv$) is guaranteed to be finite by condition ($i$). Intuitively, the function $G$ defines a ``flow'' on the graph $\Gamma$, assigning an integer and a direction to each edge, such that for any vertex the total ``inward'' flow is more than the total ``outward'' flow. This is almost exactly a Ponzi scheme in different language, with the exception that all ``pairs'' must be exactly of distance 1. However, this difference is unimportant:
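As an illustration of the definition, the following Python sketch (ours; the dictionary encoding of graphs and flows is an assumption made for the example) checks the four conditions on a finite graph:
\begin{verbatim}
def is_ponzi_flow(graph, G, bound):
    """graph: dict vertex -> set of neighbours; G: dict (a, b) -> int,
    missing pairs treated as 0; bound: the constant N of condition (iii)."""
    get = lambda a, b: G.get((a, b), 0)
    for (a, b), v in G.items():
        if v != 0 and b not in graph[a]:   # (i): flow only along edges
            return False
        if v != -get(b, a):                # (ii): antisymmetry
            return False
        if abs(v) > bound:                 # (iii): uniform bound
            return False
    return all(sum(get(b, a) for b in graph[a]) > 0   # (iv): net inflow
               for a in graph)

# A cyclic unit flow on a triangle fails (iv): net inflow is 0 everywhere.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
flow = {(0, 1): 1, (1, 0): -1, (1, 2): 1, (2, 1): -1, (2, 0): 1, (0, 2): -1}
assert not is_ponzi_flow(triangle, flow, 1)
\end{verbatim}
In fact no finite graph admits a Ponzi flow: summing condition ($iv$) over all vertices gives a positive total, while antisymmetry ($ii$) forces that total to be zero; this is consistent with the fact that every finite graph is amenable.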
\begin{proposition} \label {P:PSPF}
Let $\Gamma$ be a graph of bounded degree. There exists a Ponzi scheme on $\Gamma$ if and only there exists a Ponzi flow on $\Gamma$.
\end{proposition}
\begin{proof} The ``if'' direction is trivial: Given a Ponzi flow, we may define our formal sum to be $\sum G(x,y)(x,y)$, and this will be a uniformly finite 1-chain with integer coefficients: Condition 1) is implied by ($ii$) together with ($iii$), and condition 2) is implied by ($i$). This 1-chain will be a Ponzi scheme by conditions ($ii$) and ($iv$).
To see the ``only if'' direction, we start with a Ponzi scheme $\sum a_{x,y}(x,y)$, and formally add canceling sums to it to obtain a new scheme $\sum a'_{x,y}(x,y)$ with $a'_{x,y} = 0$ if $d(x,y) > 1$. We do this in the following way: for each $a_{x,y}$ such that $d(x,y) = n > 1$, let $x = v_0, v_1, ..., v_{n-1}, v_n = y$ be a sequence of vertices forming a path from $x$ to $y$, and add $\sum_{i=0}^{n-1} a_{x,y}(v_i, v_{i+1}) - a_{x,y}(x,y)$ to our sum coefficientwise. This will still satisfy the inequality of \ref{D:PSI}, since each $v_i$ appears exactly twice in our added sum, once contributing $a_{x,y}$ and once contributing $-a_{x,y}$, leaving the sum unchanged. The coefficients of $(x,y)$ will cancel, leaving the new coefficient 0. Thus we have cancelled a coefficient $a_{x,y}$ whose vertices are not adjacent while only affecting other coefficients with adjacent vertices.
Each adjacent pair $(x,y)$ has no more than ${d^{R+1}}\choose{2}$ pairs of vertices within distance $R$, where $d$ is the bound on degree and $R$ is the radial bound from condition 2). That means that if we choose paths as above for every pair $(x,y)$ with $d(x,y) > 1$ and $a_{x,y} \neq 0$, each edge will be part of no more than ${d^{R+1}}\choose{2}$ of these paths. Thus, adding a sum as above for each path will yield us a new well-defined formal sum $\sum a'_{x,y}(x,y)$. We will have $a'_{x,y} = 0$ if $d(x,y) > 1$, since these coefficients have been cancelled. Furthermore, we have that each $a'_{x,y}$ is bounded by $K ({{d^{R+1}}\choose{2}} + 1)$, thus condition 1) of \ref{D:UF1C} still holds (with a different bound), so we have a uniformly finite 1-chain with $R=1$. Now we simply define $G(x,y) = a'_{x,y} - a'_{y,x}$, and condition ($i$) is implied by $R=1$, condition ($ii$) is clear, ($iii$) is implied by 1), and ($iv$) is implied by the inequality of \ref{D:PSI}, thus $G$ is a Ponzi flow on $\Gamma$.
\end{proof}
A quantified treatment of Ponzi flows can be found in \cite{LY99}.
If a Ponzi flow exists on a Cayley graph, we then have that there can be no right-invariant measure on the group, since no F{\o}lner sequence exists and this implies the group cannot be amenable. We give here an elementary proof that existence of a Ponzi flow directly implies no right-invariant measure exists, for an appropriate type of graph on which the notion ``right invariant'' can be defined:
\begin{definition} By a {\em labeled directed graph} we shall mean a directed graph of bounded degree, each of whose edges are labeled by elements from a finite set $\{g_1, g_2, ..., g_n\}$, such that each vertex of $\Gamma$ has at most one incoming edge and one outgoing edge with each label. It is not necessary that a vertex have an edge with each label.
\end{definition}
The motivating example is the case where $\Gamma$ is a subgraph of a Cayley graph of a group generated by $\{g_1, g_2, ..., g_n\}$, but the results here hold for all labeled directed graphs.
\begin{definition}
Let $\Gamma$ be a labeled directed graph. Suppose $S$ is a subset of vertices of $\Gamma$. For $1 \leq i \leq n$, we say $S$ is {\em $g_i$-translatable} if each vertex in $S$ has an outgoing edge labeled by $g_i$. In such a case we denote by $Sg_i$ the set of vertices with an incoming edge labeled by $g_i$ whose opposite vertex lies in $S$, i.e., the set of vertices at the other ends of the outgoing $g_i$-labeled edges.
\end{definition}
In the case where $\Gamma$ is a subgraph of a Cayley graph, $Sg_i$ is just the right-translate of the elements of $S$ under the group multiplication, and $S$ being $g_i$-translatable simply means $Sg_i \subseteq \Gamma$. We will abuse notation slightly in the case of one-element sets, so that if $v$ has an outgoing edge labeled by $g_i$, we will call the vertex on the other side of the edge $vg_i$ (which corresponds to standard group multiplication in the case of a Cayley graph). We will also abuse notation in that we will occasionally identify $\Gamma$ with its vertex set.
If $S$ (or a single vertex $v$) is the $g_i$-translate of some other vertex set, we will call that set $Sg_i^{-1}$ (or the single vertex $vg_i^{-1}$). Again, this is the same as the standard multiplication in the case of a Cayley graph.
\begin{definition}
Let $\mu$ be a finitely additive measure on the vertex set of $\Gamma$, such that $\mu(\Gamma) = 1$. Then we say $\mu$ is {\em right-invariant} if for each $g_i$ and each $g_i$-translatable subset $S \subseteq \Gamma$, $\mu(S) = \mu(Sg_i)$.
\end{definition}
If $\Gamma$ is the full Cayley graph of a finitely generated group or semigroup, then this definition of right-invariant measure coincides with the standard one, since every set of vertices is $g_i$-translatable for every $i$. We now have the terminology to state:
\begin{theorem} \label{T:PonziNoMeasure}
Suppose $\Gamma$ is a labeled directed graph, and has a Ponzi flow $G$. Then there is no finitely additive, right-invariant measure on $\Gamma$.
\end{theorem}
\begin{proof}
We begin by associating to each vertex $v$ of $\Gamma$ an ordered list of symbols, taken from the alphabet $\{g_1, g_2, ..., g_n\} \cup \{g_1^{-1}, g_2^{-1}, ..., g_n^{-1}\} \cup \{h\}$. $h$ has no meaning here except as an ``extra'' symbol to be used in the list. We shall call this list $L_v$, and construct it as follows: If $v$ has an outgoing edge labeled by $g_1$, and $G(v, vg_1) > 0$, we begin $L_v$ by repeating the symbol $g_1$ $G(v, vg_1)$ times. If $v$ has an outgoing edge labeled by $g_2$ and $G(v, vg_2) > 0$, we append $G(v,vg_2)$ copies of $g_2$ to the list. We continue in this fashion, appending $G(v, vg_i)$ copies of $g_i$ to the list if $v$ has an outgoing edge labeled $g_i$ and $G(v, vg_i) > 0$. For example, if $v$ has an outgoing $g_1$-edge and $g_3$-edge, but no $g_2$-edge, and $G(v, vg_1) = 2$ and $G(v, vg_3) = 4$, then $L_v$ would begin $(g_1, g_1, g_3, g_3, g_3, g_3, ...)$.
Once this is complete for positive powers, we repeat this process with the inverse symbols using incoming edges, i.e., if $v$ has an incoming edge labeled $g_i$ and $G(v, vg_i^{-1}) > 0$, we append $G(v, vg_i^{-1})$ copies of $g_i^{-1}$ to our list, and repeat this process for each $g_i^{-1}$ in turn. Intuitively, the lists so far are measuring the ``outgoing flow'' at each vertex.
Since $G$ is bounded by some $K$, these lists so far all have length $\leq 2nK$. We append copies of $h$ to each list so that each list has length $2nK$. We now have associated to each $v$ an ordered list $L_v$ of length $2nK$ of symbols from the alphabet $\{g_1, ..., g_n, g_1^{-1}, ..., g_n^{-1}, h\}$.
Using these lists, we will now define a family of vertex subsets of $\Gamma$, which we will call $A_L$, for any ordered list $L$ of length $2nK$ of symbols from the alphabet $\{g_1, ..., g_n, g_1^{-1}, ..., g_n^{-1}, h\}$. $A_L$ will consist of all vertices $v$ for which $L_v = L$. Note that many of the sets $A_L$ may be empty. But since each vertex has a unique list associated to it, we have $\Gamma = \bigcup_L A_L$.
Suppose $\Gamma$ has a right-invariant measure $\mu$. Then since each $A_L$ is disjoint and their union is the entire graph, we have
$$
\sum_{L} \mu(A_L) = 1
$$
Now we define some more subsets of $\Gamma$, which we call $B_g^j$, for $g$ any letter in the alphabet $\{g_1, ..., g_n, g_1^{-1}, ..., g_n^{-1}, h\}$ and $1 \leq j \leq K$. $B_g^j$ will be the set of vertices $v$ such that $L_v$ contains $j$ or more copies of $g$. The $B_g^j$ are certainly not disjoint, however, each $B_g^j$ can be expressed as a disjoint union of some of the $A_L$. Namely, $B_g^j = \bigcup_{L'} A_{L'}$, where the union is taken over all lists $L'$ that contain $j$ or more occurrences of $g$. Since we are looking at lists of a fixed length with symbols taken from a finite alphabet, there are finitely many such lists and thus the union is finite. We therefore have
$$
\sum_{g,j}\mu(B_g^j) = \sum_{g,j} \sum_{L'} \mu(A_{L'})
$$
We claim each list $L$ appears exactly $2nK$ times in the double sum on the right-hand side of the above equation.
To prove this claim, we observe that each entry of $L$ causes $A_L$ to be contained in exactly one of the $B_g^j$. Namely, the $j^{th}$ occurrence of $g$ ensures that $A_L \subset B_g^j$. For example, if $L$ starts as $(g_1, g_3, g_3, ...)$, then the $g_1$ term guarantees $A_L \subset B_{g_1}^1$, the first $g_3$ term guarantees $A_L \subset B_{g_3}^1$, and the second $g_3$ term guarantees $A_L \subset B_{g_3}^2$. These are the only $B_g^j$ that will contain $A_L$, by the definition of the $B_g^j$. Thus, since the lists are of length $2nK$, each $A_L$ appears in an $L'$ sum for exactly $2nK$ of the $B_g^j$, and thus appears a total of $2nK$ times in the above sum, proving the claim.
\bigskip
This allows us to explicitly calculate the sum of the measures of the $B_g^j$:
$$
\sum_{g,j}\mu(B_g^j) = 2nK \sum_{L} \mu(A_L) = 2nK
$$
Now, for $g \neq h$, let $C_g^j = B_g^jg$, i.e., $C_{g_i}^j$ is the set of all vertices with an {\em incoming} edge labeled $g_i$ such that $G(vg_i^{-1}, v) \geq j$, and $C_{g_i^{-1}}^j$ is the set of all vertices $v$ with an {\em outgoing} edge labeled $g_i$ such that $G(vg_i, v) \geq j$. We define $C_h^j = B_h^j$. Since each $B_g^j$ is $g$-translatable and its translate is $C_g^j$ for $g \neq h$, we have
$$
\sum_{g,j}\mu(C_g^j) = \sum_{g,j}\mu(B_g^j) = 2nK
$$
Now we construct lists $L_v'$ analogously to the $L_v$, with two major changes: Firstly, we count ``incoming flow'' instead of ``outgoing flow'': we use incoming edges and $G(vg_i^{-1}, v)$ values with the symbols $g_i$ instead of outgoing edges and $G(v, vg_i)$ values, and similarly we use outgoing edges and $G(vg_i, v)$ values with the $g_i^{-1}$ symbols. Secondly, we append the $h$'s so that each $L_v'$ has length $2nK+1$, not length $2nK$. Now as before we define $A_L'$ as the set of vertices $v$ such that $L_v' = L$. We now have for $g \neq h$, $C_g^j = \bigcup_{L'}A_{L'}'$, where the $L'$ are taken over the set of lists with at least $j$ occurrences of $g$.
\smallskip
However, we cannot say the same about $C_h^j$, since it corresponds to $h$'s in the lists $L_v$, not in the $L_v'$. But since $G$ is a Ponzi flow, the total ``incoming flow'' at any vertex $v$ is greater than the total ``outgoing flow'', which means that there are more non-$h$ terms in $L_v'$ than there are in $L_v$ for each $v$ (See Definition \ref{D:PonziFlow}). This means that $L_v'$ has no more $h$ terms than $L_v$ does. Thus, if we let $C'^j$ be the set of vertices $v$ such that $L_v'$ contains at least $j$ occurrences of $h$, then $C'^j \subset C_h^j$, and so $\mu(C'^j) \leq \mu(C_h^j)$. But $C'^j = \bigcup_{L'}A_{L'}'$, where the $L'$ are taken over lists with at least $j$ occurrences of $h$. Thus we may use the exact same arguments as above to obtain
$$
\sum_{g,j}\mu(C_g^j) \geq \sum_{g,j} \sum_{L'} \mu(A_{L'}'),
$$
where the $L'$ sums are taken over lists of length $2nK + 1$ which have at least $j$ occurrences of $g$. The same argument as the above claim, however, shows that each $A_{L}'$ occurs precisely $2nK+1$ times in the above sum. Since $\Gamma$ is a disjoint union of all the $A_L'$, this gives us
$$
\sum_{g,j}\mu(C_g^j) \geq (2nK+1)\sum_L \mu(A_L') = 2nK+1.
$$
Since we previously concluded that $\sum_{g,j}\mu(C_g^j) = 2nK$, we have a contradiction. Thus, no right-invariant measure $\mu$ can exist on $\Gamma$.
\end{proof}
\begin{corollary} \label{C:MeasureZero}
If $\Gamma$ is amenable but contains a nonamenable subgraph $P$, then for any right-invariant measure $\mu$ on $\Gamma$, $\mu(P) = 0$.
\end{corollary}
\begin{proof}
If $\mu(P) > 0$, then we can define a measure $\mu'$ on $P$ by setting $\mu'(A) = \frac{\mu(A)}{\mu(P)}$ for $A \subset P$. Since $\mu(P)$ is constant, $\mu'$ will inherit the properties of right invariance and finite additivity from $\mu$, and $\mu'(P) = \frac{\mu(P)}{\mu(P)} = 1$. But since $P$ is nonamenable it has a Ponzi flow, and \ref{T:PonziNoMeasure} says no such $\mu'$ can exist, yielding a contradiction.
\end{proof}
\section {Large Nonamenable Subgraphs of $F$} \label{S:LNS}
In this section we will prove the main theorem.
We begin by characterizing the two-way forest diagrams of $\Gamma_k^l$.
Given any binary tree on $n$ nodes, we may remove the top caret, giving us a 2-tree forest on $n$ nodes. If the left of these two trees is nontrivial, we may remove its top caret to obtain a 3-tree forest. Suppose we continue to remove the top caret of the leftmost tree, until the leftmost tree is trivial. Applying this process to any tree gives a function $s$ from the set of binary trees on $n$ nodes to the set of binary forests on $n$ nodes with trivial leftmost tree ($s$ is in fact a bijection). Note that the inverse of $s$ is defined by starting with a forest on $n$ nodes with trivial leftmost tree, adding a caret between the two leftmost trees in the forest, and repeating this process until the entire forest has been combined into a single tree.
\vspace{.18in}
\centerline {
\includegraphics[width=5in]{pic7}
}
\centerline{Applying $s$ to a tree}
\vspace{.2in}
We extend the definition of $s$ to apply to forests as well as single trees, by applying $s$ separately to each nontrivial tree in the forest. We will define the {\em complexity} of a tree or forest on $n$ nodes to be the minimum number of applications of $s$ required to turn it into a forest of $n$ trivial trees.
Note that applying $s$ to a tree $T$ leaves a forest whose rightmost tree is the right child of $T$, and the remainder of the forest is $s$ applied to the left child of $T$. This gives the following:
\begin{proposition} \label{P:calcCpx}
The complexity of a tree is the maximum of the complexity of its left child and one more than the complexity of its right child.
\end{proposition}
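With the same nested-pair encoding used in the sketch of Section 2 (again, an encoding of our own choosing), $s$ and the complexity function can be written in a few lines; the second function implements exactly the recursion of the proposition above.
\begin{verbatim}
def s_tree(t):
    """Apply s to a single tree: repeatedly remove the top caret of the
    leftmost tree until the leftmost tree is trivial."""
    forest = [t]
    while forest[0] is not None:
        left, right = forest[0]
        forest = [left, right] + forest[1:]
    return forest

def complexity(t):
    """Trivial trees have complexity 0; otherwise it is the maximum of the
    complexity of the left child and one more than that of the right."""
    if t is None:
        return 0
    left, right = t
    return max(complexity(left), complexity(right) + 1)

# A right comb on 3 carets has complexity 3; a left comb has complexity 1.
assert complexity((None, (None, (None, None)))) == 3
assert complexity((((None, None), None), None)) == 1
\end{verbatim}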
We record here a basic property of $s$:
\begin{proposition} \label{P:skills}
Let $T$ be a tree on $n$ nodes, and let $R$ be an $n$-tree forest. Denote by $RT$ the tree obtained by attaching the roots of $R$ to the nodes of $T$. If $T$ has complexity $j$, then $s^j(RT)$ consists only of carets in $R$, i.e., every caret from $T$ is removed by $s^j$.
\end{proposition}
\vspace{.2in}
\centerline {
\includegraphics[width=3in]{pic8}
}
\centerline {A 4-node tree $T$ of complexity 2 and a 4-tree forest $R$}
\vspace{.2in}
\centerline {
\includegraphics[width=3in]{pic9}
}
\centerline {$RT$ and $s^2(RT)$}
\bigskip
\begin{proof} This is easy to see, as we can determine whether a caret is removed by $s^j$ by examining its relationship with those above it. Namely, when we examine the unique path from a caret to the root of the tree, it consists of moves to the right and moves to the left. An application of $s$ removes all carets whose path consists only of moves to the right. Further, any caret's path to the root hits the left edge at some point, and consists only of moves to the right afterwards. After $s$ is applied, the path ends the move before reaching the left edge (which was a move to the left). Thus $s$ leaves each new path from a remaining caret to the root of its new tree with one less move to the left. So $s^j$ removes all carets whose paths consist of $j-1$ or fewer moves to the left. Since this property is unchanged in the carets of $T$ whether or not it sits on $R$, the effect of $s^j$ is the same on carets of $T$, whether or not we consider it a top-tree of $R$.
\end{proof}
For a positive integer $j$, we define a function $\phi_j$ from pointed forests to forests in the following way: Apply $s$ $j$ times to the pointed tree and every tree to its left. Apply $s$ $(j-q)$ times to the tree that is $q$ to the right of the pointed tree. That is to say, apply $s$ $(j-1)$ times to the tree to the immediate right of the pointed tree, $j-2$ times to the next tree to the right, etc.
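Continuing the sketch above (the helper names are ours), $\phi_j$ and a caret count, which together make the criterion of Proposition \ref{P:charGam} below directly testable, can be written as:
\begin{verbatim}
def s_forest(forest):
    """Apply s to every nontrivial tree of a forest."""
    out = []
    for t in forest:
        out.extend(s_tree(t))
    return out

def s_iter(forest, times):
    for _ in range(times):
        forest = s_forest(forest)
    return forest

def phi(pointed_forest, j):
    """Apply s j times to the pointed tree and everything to its left,
    and (j - q) times to the tree q steps to its right."""
    trees, p = pointed_forest
    result = s_iter(list(trees[:p + 1]), j)
    for q, t in enumerate(trees[p + 1:], start=1):
        result.extend(s_iter([t], max(j - q, 0)))
    return result  # an (unpointed) forest

def carets(forest):
    """Total number of carets in a forest."""
    return sum(0 if t is None else 1 + carets(list(t)) for t in forest)
\end{verbatim}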
\begin{proposition} \label{P:charGam}
A pointed forest $P$ lies in $\Gamma_k^l$ if and only if $\phi_l(P)$ has $k$ or fewer carets.
\end{proposition}
{\em Proof of $\Rightarrow$}: Let $P \in \Gamma_k^l$. First suppose that $k=0$. In this case $P$ can be expressed as a word $v$ in $\{x_0,...,x_l\}$, and the proposition says it is annihilated by $\phi_l$, i.e., $\phi_l(P)$ consists only of trivial trees. We proceed by induction on the length of $v$. Clearly, $vx_0$ is annihilated by $\phi_l$ if $v$ is, since $\phi_l(vx_0)$ is a subforest of $\phi_l(v)$ (each tree has $s$ applied to it either the same number of times or one more time, since the pointer has simply moved one tree to the right).
For $0 < i \leq l$, multiplying by $x_i$ adds a caret to the right of the tree $i-1$ trees from the pointer. By \ref{P:calcCpx}, this will increase the number of applications of $s$ required to make that tree trivial only if its right child has complexity greater than or equal to that of its left child. In this case, the new tree's complexity will be one greater than that of its right child. Since $i \leq l$, the right child was $i$ trees to the right of the pointer, and by induction the right child had complexity no more than $(l-i)$. Thus the new tree has complexity no more than $(l-i+1)$. Since this new tree is $i-1$ trees to the right of the pointer, it is still annihilated by $\phi_l$. The trees to the left of the new caret are unchanged, and the trees to the right of the caret have each been brought 1 tree closer to the pointer since two intervening trees have been merged. These trees now have complexity $\leq \max(l - d - 1, 0)$, where $d$ is their distance from the pointed tree, since they were distance $d+1$ before the caret was added and were made trivial by the application of $\phi_l$. This means that they will still be annihilated by $\phi_l$, and so the new pointed forest is still turned into a trivial forest by $\phi_l$.
The above argument shows that $\phi_l(v)$ is trivial if $v \in \Gamma_0^l$. But if we take a $w$ of length $\leq k$ as in the theorem, $wv$ uses the trees of $w$ as the leaves for the trees of $v$. Thus all the carets added in each tree of $v$ are still removed by $\phi_l$ by \ref{P:skills}, since $s$ is applied the same number of times to each tree. Thus $\phi_l(wv)$ has at most as many carets as $w$, i.e., $k$ or fewer. \qed
To prove the reverse direction of \ref{P:charGam}, we will use the following proposition:
\begin{proposition} \label{P:buildT}
A pointed forest diagram consisting of a single nontrivial tree $T$ of complexity $j$ in the leftmost position, with the pointer on that tree, can be expressed as a word in $x_1, ..., x_j$.
\end{proposition}
\begin{proof} We will proceed by induction on $j$. It is clear that $s(T)$ is a forest with trivial leftmost tree and the remaining trees $T_1, ... ,T_n$ of complexity $\leq j-1$. By induction, let $u$ be a word in $x_1, ... , x_{j-1}$ that creates the forest diagram with only $T_1$. Now consider $x_0ux_0^{-1}$. Since $x_0$ moves the pointer one to the right and $x_0^{-1}$ moves it one to the left, $x_0ux_0^{-1}$ represents creating $T_1$ one tree to the right of the pointer (leaving the pointed tree trivial). But since $x_i = x_0x_{i-1}x_0^{-1}$, by inserting $x_0^{-1}x_0$ between each letter in $u$ we can rewrite $x_0ux_0^{-1}$ as a word in $x_2,...,x_j$. Applying $x_1$ then creates the caret from the leftmost node to this tree (this would be the last caret removed by the first application of $s$ to $T$). We may now repeat this process for the remaining trees: Multiply by some $x_0ux_0^{-1}$ to create $T_i$ one tree to the right of the pointer, and then multiply by $x_1$ to attach the caret removed by the first application of $s$ (this process is illustrated below). This constructs the entire tree $T$.
\end{proof}
\vspace{.2in}
\centerline {
\includegraphics[width=4.3in]{pic10}
}
{\center \small To construct the tree $T$ on the left, we construct the first nontrivial tree in $s(T)$ to the right of the pointer (top right), then multiply by $x_1$ (middle right), then construct the next tree in $s(T)$ to the right of the pointer (bottom right), then multiply by $x_1$.}
\vspace{.2in}
{\em Proof of Proposition \ref{P:charGam}, $\Leftarrow$:} Suppose that $P$ is a pointed forest such that $\phi_l(P)$ has $k$ or fewer carets. We can then create $w$ to put these carets in place without moving the pointer (the generator $x_i$ creates a caret on the $i^{th}$ tree without moving the pointer).
Consider the element $w^{-1}P$. This is the pointed forest obtained by taking the trees in $P$ which remain after applying $\phi_l$, and replacing them with trivial trees:
\vspace{.2in}
\centerline {
\includegraphics[width=4.3in]{pic11}
}
{\center \small For $l = 2$, if $P$ is the forest in the top left then $w$ is $\phi_2(P)$ with the pointer on the first tree (bottom left), and $w^{-1}P$ is shown on the right.}
\bigskip
The resulting pointed forest is then annihilated by $\phi_l$, and so each tree under or to the left of the pointer has complexity at most $l$. Thus we may construct these trees as words in $x_0,...,x_l$ using \ref{P:buildT} and inserting $x_0$ between each word, which will result in building the first tree, moving the pointer to the right, building the next tree, etc. Further, the tree that is $j$ trees to the right of the pointer has complexity at most $l-j$, and so \ref{P:buildT} says we can construct it as $x_0^jux_0^{-j}$, where $u$ is a word in $x_1,...,x_{l-j}$. As above, we then insert $x_0^{-j}x_0^{j}$ between each letter of $u$, which allows us to rewrite it as a word in $x_{j+1},...,x_l$. Repeating this for each $j$ and appending these words in increasing order constructs all trees to the right of the pointer. This completes the construction of $w^{-1}P$ as a word in $x_0,...,x_l$, which we will call $v$. Thus, $P = ww^{-1}P = wv$, and the proof is complete. \qed
\bigskip
{\em Proof of Theorem \ref{T:main}:} Let $F$ be the underlying forest of a pointed forest in $\Gamma^l_k$. Let $Q$ be the pointed forest with underlying forest $F$ whose pointer is as far to the left as possible while still remaining in $\Gamma^l_k$. Note that applying $\phi_l$ to $Q$ affects at most $l$ trees under or to the right of the pointer. Thus, by \ref{P:charGam} there are at most $k+l$ nontrivial trees to the right of the pointer in $Q$; otherwise, $\phi_l(Q)$ would have more than $k$ nontrivial trees and thus certainly more than $k$ carets.
For any $P \in \Gamma^l_k$, let $c_P$ be the number of nontrivial trees $T$ to the left of the pointed tree such that if the pointer is moved to $T$ the resulting forest remains in $\Gamma^l_k$. By the preceding paragraph $c_P$ is never more than $k+l$. Now define $G: \Gamma^l_k \times \Gamma^l_k \rightarrow \mathbb{Z}$ as follows: For each pointed forest $P$,
\bigskip
-If $Px_0^{-1} \in \Gamma^l_k$, set $G(P, Px_0^{-1}) = c_P$ and $G(Px_0^{-1}, P) = -c_P$.
\smallskip
-If the pointed tree in $P$ is nontrivial, and $Px_1^{-1} \in \Gamma_k^l$, set $G(P, Px_1^{-1}) = 1$ and $G(Px_1^{-1}, P) = -1$.
\smallskip
-For all other pairs $(P, P')$, set $G(P, P') = 0$.
\bigskip
Now it is clear from the definition that $G$ satisfies conditions ($i$) and ($ii$) in Definition \ref{D:PonziFlow}, and since $c_P \leq k+l$ for each $P$, $G$ also satisfies condition ($iii$). It thus remains only to check condition ($iv$). So we shall consider a pointed forest $P \in \Gamma^l_k$.
Note that $G(Px_1, P) + G(Px_1^{-1}, P) \geq 0$. Note also that $G(Px_0, P) + G(Px_0^{-1}, P) \geq 0$, since any tree to the left of the pointed tree in $P$ is also to the left of the pointed tree in $Px_0$, so $c_P \leq c_{Px_0}$. Further, since the only tree counted in $c_{Px_0}$ but not in $c_P$ is the pointed tree in $P$, we have that $G(Px_0, P) + G(Px_0^{-1}, P) > 0$ exactly when the pointed tree in $P$ is nontrivial. Thus condition ($iv$) is satisfied for all $P$ where the pointed tree is nontrivial. If the pointed tree in $P$ is trivial, $G(Px_1, P) = 1$ and $Px_1^{-1}$ is not in $\Gamma^l_k$, so $\sum_{P'\in \Gamma^l_k} G(P',P) = G(Px_0, P) + G(Px_0^{-1}, P) + G(Px_1, P) \geq 1.$ Thus condition ($iv$) holds for all pointed forests where the pointed tree is trivial as well. \qed
We close with some immediate corollaries:
\begin{corollary}
If $F$ is amenable, then for any right-invariant measure $\mu$, $\mu(\Gamma^l_k) = 0$.
\end{corollary}
\begin{proof}
By Theorem \ref{T:main}, $\Gamma^l_k$ has a Ponzi flow, and thus by \ref{C:MeasureZero} it always has measure zero.
\end{proof}
\begin{corollary}
If $F$ is amenable, then for any right-invariant measure $\mu$, and any finitely generated submonoid $P'$ of $P$, $\mu(P') = 0$.
\end{corollary}
\begin{proof}
Letting $p_1, ..., p_n$ be generators of $P'$, express each as a word in the $x_0, x_1, x_2, ...$ generating set. Let $L$ be the maximum index of the $x_i$ used to express the $p_j$; then $P'$ is a subset of the monoid generated by $x_0, x_1, ..., x_L$. But this monoid is exactly $\Gamma_0^L$, which by the previous corollary has measure zero. Thus, $\mu(P') = 0$.
\end{proof}
\section{Introduction, results and outlook}\label{intro}
Holography \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj, Witten:1998zw}
is a correspondence between quantum field theory and gravity. It
provides a powerful tool to analyze strong-coupling theories by using
the dual gravity description.
At finite temperature and in the long-wavelength regime,
holography provides a correspondence between fluids and black holes.
This fluid/gravity correspondence was first studied
in the AdS/CFT correspondence
by \cite{Son:2002sd, Policastro:2002se, Policastro:2002tn, Baier:2007ix} in the linearized approximation and in \cite{Bhattacharyya:2008jc} by studying the fluctuations of the bulk black-hole horizon.
Holography has also been applied to geometries with
Lifshitz or Schr\"odinger symmetry mainly for application to condensed matter systems
\cite{Son:2008ye, Balasubramanian:2008dm, kachru, taylor, Guica:2010sw, cgkkm, gk1, gk2}\footnote{For a recent review see \cite{Marika}.}.
Many condensed matter systems have non-relativistic scale invariance
\cite{Eagles:1969zz, Nozieres:1985zz, O'Hara:2002zz, Regal:2004zza, Bartenstein:2004zza,Patel:2016ymd}, and
some of them have Lifshitz or Schr\"odinger symmetry
\cite{Hornreich:1975zz, Mehen:1999nd, Ardonne:2003wa}.
Moreover, the hydrodynamics of charge and energy in such systems may be
interesting as has been argued recently for the case of cold fermions at unitarity, \cite{fu}, other strongly correlated systems, \cite{zaa} and graphene, \cite{gra}.
Recent experiments in various strongly coupled materials, as well as in graphene, have indicated that the hydrodynamics of electrons is observable and exhibits a non-trivial shear viscosity,
\cite{g1,g2,g3,g4,g5,g6}.
The fluid/gravity correspondence
in Schr\"odinger spacetimes has been studied in \cite{Herzog:2008wg, Rangamani:2008gi,Bra},
where the dual description of non-relativistic fluid mechanics was given.
A proposal for hydrodynamics in Lifshitz invariant theories was considered
in \cite{Hoyos:2013qna, Hoyos:2013cba} from the effective field theory point of view.
In \cite{cgkkm, gk1, gk2}, all quantum critical holographic scaling geometries with a $U(1)$ symmetry respecting translation invariance and spatial rotation invariance were classified in terms of three scaling exponents.
Two of them $(z,\theta)$ appear in the metric while another exponent,
which is referred to as $\zeta$ in \cite{cgkkm, gk1, gk2},
appears in the profile of the $U(1)$ gauge field.\footnote{This charge exponent controls the anomalous scaling of the charge density, even if it is conserved. It has also been introduced independently in \cite{hartong-obers} and was studied in more detail in \cite{G1} and \cite{karch}. The reason for the existence of an anomalous charge exponent despite conservation is the RG running of the bulk coupling for the charged degrees of freedom.}
The exponent $z$ is the Lifshitz (dynamical) scaling exponent, and
$\theta$ is the hyperscaling violation exponent, \cite{gk1,sh}.
Even though such theories have been studied intensively, many of their aspects are still unclear. In particular, hydrodynamics with Lifshitz scaling symmetry is not fully understood.
In non-relativistic quantum field theories, the geometry that captures the symmetry and dynamics is the Newton-Cartan geometry, \cite{Son}.
For Lifshitz space-times, the dual field theory is non-relativistic and
the source terms at the boundary
are related to the torsional Newton-Cartan theory
\cite{Christensen:2013lma, Christensen:2013rfa, BHR,Hartong:2014oma, Hartong:2014pma, Hartong:2015wxa}.
In particular, the role of torsional Newton-Cartan geometry in the boundary structure of bulk non-relativistic solutions as well as on the boundary symmetries was investigated in \cite{Hartong:2014oma, Hartong:2014pma, Hartong:2015wxa}. Alternatively the boundary structure can be analyzed using the Hamilton-Jacobi method, \cite{RS,CP}.
In \cite{km1}, the correspondence between fluids in the torsional Newton-Cartan theory and black holes in Lifshitz space-times has been analyzed.
Lifshitz space-times with unbroken $U(1)$ gauge symmetry
were considered, which are solutions of Einstein-Maxwell-dilaton (EMD) theories with a constant scalar potential.
Although the geometry has Lifshitz scaling symmetry with dynamical exponent $z$, there was a non-trivial gauge field in the bulk that introduced non-trivial scaling exponents in the charged sector.
The black-hole solution of that theory was considered, boosted using Galilean boosts, and then all parameters of the solution, including the velocities, were made $\vec x$-dependent.
Using the technique introduced in \cite{Bhattacharyya:2008jc},
the bulk equations of motion were solved order by order in boundary derivatives and the (fluid) stress-energy tensor was computed and renormalized.
The results found were as follows:
\begin{itemize}
\item The stress-energy tensor was expressed in terms of the fluid variables:
velocity field $v^i$, energy density $\mathcal E$ and pressure $P$,
but it also contained the (particle number) density $n$ and
external source $\mathcal A_i$ associated to the $U(1)$ symmetry current.
It satisfied the scaling condition for Lifshitz invariant theories
$$z\mathcal E = (d-1) P\;.$$
\item It was found that the stress tensor satisfied the conservation equations of the Newton-Cartan theory.
\item The role of the (unbroken) U(1) symmetry in this class of theories is important. It was found that it behaves very much like the U(1) mass conservation symmetry in non-relativistic hydrodynamics.
\item The fluid equations found were non-relativistic in contrast to the
relativistic fluid analysis in \cite{Hoyos:2013qna, Hoyos:2013cba}.
Even though the continuity and energy conservation equations
agree with those of ordinary non-relativistic fluids,
the Navier-Stokes equation was different from the standard one.
\item By redefining variables and allowing a (Milne-invariant) Newton potential in the sources, \cite{Hartong:2014oma}-\cite{Hartong:2015wxa} the fluid equations can be mapped to the standard non-relativistic fluid equations coupled to the torsional Newton-Cartan geometry in the presence of a Newtonian potential.
\item It was found that the form of the fluid equations is independent of the Lifshitz exponent $z$ as well as of the (non-trivial) conduction exponent.
It is only the constitutive relations (equation of state) that depend on these scaling exponents.
\item The entropy satisfies the local thermodynamic relation with the energy density and pressure.
The divergence of the entropy current is non-negative, compatible with the second law.
\end{itemize}
The hydrodynamic analysis of \cite{km1} was generalized beyond the hydrodynamic limit in \cite{GJSV} using numerical techniques.
In this paper, we generalize the studies in \cite{km1} to those
with hyperscaling-violation in the associated holographic geometries.
In \cite{km1}, we have considered only the $\theta=0$ cases
in which hyperscaling-violation comes only from the gauge field.
Here, we introduce a non-trivial coupling between the dilaton and
cosmological constant in order to consider $\theta\neq 0$.
The dual fluids now exhibit hyperscaling violation.
We find the following results:
\begin{itemize}
\item
As in \cite{km1}, the fluid equations and the stress-energy tensor reproduce those in the Newton-Cartan theory if
the holographic gauge field is identified with the one that enters the Newton-Cartan theory.
\item
The fluid equations are similar to those in \cite{km1}, except for additional external forcing terms
which come from the coupling to the dilaton
and its external source term. Note however that in the Lifshitz case with $\theta=0$,
although the dilaton was non-trivial, no such terms appeared in the hydrodynamics. Their appearance is therefore tied to hyperscaling violation in the bulk geometry.
\item With a judicious choice of the pressure, these forcing terms can be redefined away. The hydrodynamic equations are now equivalent to the non-relativistic hydrodynamics equations with a conserved mass current, but with an additional chemical potential for the mass current, that is given in (\ref{FluidVariablesZ}). This chemical potential is not related to external sources like the Newtonian potential but is directly related to the fact that the theory violates hyperscaling.
\item
The bulk viscosity is zero even in the presence of hyperscaling-violation.
\item The hydrodynamic equations respect the full scaling symmetry of Lifshitz invariance with hyperscaling violation as detailed in appendix \ref{app:ScaleDim}.
\end{itemize}
The Lifshitz space-time with hyperscaling-violation can also be obtained
by dimensional reduction from higher-dimensional Lifshitz space-times without hyperscaling violation, \cite{gk1}.
In this case, the model has two dilatons. We also derive the associated hydrodynamics from the black holes. We find the following:
\begin{itemize}
\item
The bulk viscosity is non-zero if we consider the naive dimensional reduction,
i.e., with trivial background in the extra dimensions and fixed volume.
\item
When the internal metric on the extra dimensions, or equivalently,
the volume of the extra dimensions satisfies a specific relation to the U(1) charge, the bulk viscosity becomes zero. This reduction gives the lower dimensional hydrodynamics we derived earlier.
\item We also present the hydrodynamic ansatz which describes a Lifshitz-invariant fluid with hyperscaling violation on a non-trivial (conformally flat) boundary metric.
The hydrodynamic equations in such a metric turn out to be simpler than on flat space.
\end{itemize}
The hydrodynamic equations we derive are general and are given by
(\ref{LifContinuity})-(\ref{LifChargeCons}) and supplemented by (\ref{a}), with independent variables the temperature $T$, the (mass) charge density $n$ and velocities $v^i$.
All other variables like the energy $\mathcal E$, the pressure $P$ and the
chemical potential as well as the transport coefficients are determined in terms of $T,n$ by constitutive relations that are dynamics/theory specific.
\footnote{
Here, $n$ plays the role of the density of the non-relativistic fluid. We use this (and not the associated chemical potential) as a fluid variable because of the important role it plays in the realization of the non-relativistic momenta. It is the same reason that the mass density is usually used
in non-relativistic fluid mechanics.
}
Finally there is the Lifshitz invariance Ward identity, namely (\ref{WardHV}).
The $z,\theta$ dependence does not appear explicitly in the hydrodynamic equations. To leading order, it appears only through the constitutive relations that express the energy, pressure and (mass) chemical potential as functions of the temperature and mass density. At the next order the transport coefficients may also depend on $z$ and $\theta$. For this reason we expect that, although the hydrodynamic equations we derived strictly apply to the duals of the EMD theories we considered, their validity is universal.
This analysis completes the derivation of fluid dynamics for non-relativistic scale-invariant and U(1)-invariant theories with Lifshitz scaling and a violation of hyperscaling. The case of such theories without a U(1) symmetry (i.e., with a broken U(1) symmetry) remains open. The case of a perfect fluid was studied recently in \cite{HO}. It is highly probable that the hydrodynamics in this case will turn out to be non-relativistic hydrodynamics in the absence of a conserved ``mass'' current.
A further interesting question concerns the appropriate hydrodynamics for QFTs that are described by RG flows that interpolate between relativistic and non-relativistic theories.
To motivate the answer to this question, we consider first non-Lorentz invariant (but rotationally invariant) flows between Lorentz invariant fixed points
\footnote{The fact that the speed of light can vary on branes was first pointed out in \cite{kk}.}
\cite{tw,gn}, but where the velocity of light in the IR is different from that in the UV. In such a case, the hydrodynamics of this theory is quasi-relativistic, but with a speed of light that is {\em temperature dependent}. This context resembles more the proposal of \cite{Hoyos:2013qna, Hoyos:2013cba}.
The example above suggests that in a (Lorentz-violating) RG flow from a CFT (with an unbroken U(1) symmetry that is used to drive the breaking of Lorentz invariance) to an IR non-relativistic scaling (rotationally invariant) geometry at an arbitrary temperature, the hydrodynamics will again be of the relativistic form (but with a general equation of state) and with a speed of light $c(T)$ that is again temperature dependent. In the IR, $c(T\to 0)=\infty$ and the hydrodynamics reduces to the one found here, with the U(1) symmetry becoming the mass-related symmetry. This is the standard non-relativistic limit of relativistic hydrodynamics, while all thermodynamic functions and transport coefficients are smooth functions of $T$ (if no phase transition exists at finite $T$). Otherwise they follow the standard behavior at phase transitions.
More general breaking of Lorentz invariance during an RG flow must involve higher-form fields or tensors in the bulk, and the details of the RG flow become complicated. It is plausible that an RG-covariant definition of velocities and other state functions is necessary in order to define hydrodynamics globally in the full energy landscape.\footnote{A proposal was put forward in \cite{mukho} but we think the appropriate setup is a bit different.}
This paper is organized as follows.
In Section~\ref{sec:Model}, we introduce the model and
its Lifshitz solution with hyperscaling-violation.
In Section~\ref{sec:HydroAnsatz}, we introduce the hydrodynamic ansatz.
We consider the derivative expansion and solve the equations of motion to first order.
We calculate the stress-energy tensor in Section~\ref{sec:StressTensor},
and discuss the relation to non-relativistic fluids in Section~\ref{sec:FluidEquation}.
In Section~\ref{sec:DimRed}, we consider the relation to the dimensional reduction from
higher-dimensional Lifshitz space-times without hyperscaling-violation.
In Section~\ref{sec:NaiveAnsatz}, we show that a simpler hydrodynamic ansatz
gives a fluid moving in a non-trivial but conformally flat metric.
In Appendix~\ref{app:NewtonCartan}, we review
the Newton-Cartan theory and its application to fluids.
In Appendix~\ref{app:ScaleDim}, we discuss the scaling dimensions
of the fluid variables and other relevant coefficients.
\section{Hyperscaling-violating solutions and black holes}\label{sec:Model}
We consider a holographic model with Lifshitz scaling symmetry and hyperscaling-violation.
The gravity dual is $(d+1)$-dimensional Einstein gravity with a Maxwell field $A_\mu$ and a dilaton $\phi$.
The action is given by
\begin{equation}
S
= \frac{1}{16\pi G} \int d^{d+1} x \sqrt{-g}
\left(R - 2 \Lambda e^{- \nu \phi} - \frac{1}{2} (\partial\phi)^2 - \frac{1}{4} e^{\lambda\phi}F^2\right) \ ,
\label{BulkAction}
\end{equation}
where $F = dA$ is the field strength of the gauge field,
$\Lambda$ is a negative cosmological constant with a coupling to the dilaton,
and $\lambda$ and $\nu$ are dimensionless coupling constants.
The equations of motion are given by
\begin{align}
R_{\mu\nu}
&=
\frac{2 \Lambda}{d-1} e^{-\nu\phi} g_{\mu\nu}
+ \frac{1}{2} (\partial_\mu\phi)(\partial_\nu\phi)
+ \frac{1}{4} e^{\lambda\phi}
\left(
2 F_{\mu\rho} {F_\nu}^\rho - \frac{1}{d-1} F^2 g_{\mu\nu}
\right)
\label{EinsteinEq}
\\
0
&=
\nabla_\mu (e^{\lambda\phi} F^{\mu\nu}) ,
\label{MaxwellEq}
\\
\Box \phi
&=
\frac{1}{4} \lambda e^{\lambda\phi} F^2 - 2 \nu \Lambda e^{-\nu\phi}\ .
\label{DilatonEOM}
\end{align}
This model has the Lifshitz geometry with hyperscaling-violation as a solution;
\begin{align}
ds^2 &= e^{2\chi} d\tilde s^2 \ , \label{hvLifMetric}
\\
d\tilde s^2 &= - r^{2z} dt^2 + \frac{dr^2}{r^2} + \sum_i r^2 (dx^i)^2 , \label{LifMetric}
\\
e^\chi &= e^{\chi_0} r^{-\theta/(d-1)}
\end{align}
The solution for the gauge field and dilaton are given by
\begin{align}
A_t &= a \sqrt{\mu}\, r^{z+d-1-\theta} \ , &
e^{\lambda\phi} &= e^{\lambda\phi_0} r^{-2 d_1} \ .
\label{PureA}
\end{align}
where
\begin{equation}
d_1 = (d-1) - \frac{d-2}{d-1} \theta \ .
\end{equation}
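A quick way to see how the exponent $z+d-1-\theta$ in \eqref{PureA} arises is from the Maxwell equation \eqref{MaxwellEq}: for a purely electric, radial ansatz it reduces to
\begin{equation}
\partial_r \left(\sqrt{-g}\, e^{\lambda\phi} F^{rt}\right) = 0 \ ,
\qquad
\sqrt{-g}\, e^{\lambda\phi} F^{rt} \propto r^{\,2-z-d+\theta}\, \partial_r A_t \ ,
\end{equation}
where the power of $r$ follows from the metric \eqref{hvLifMetric} and the dilaton profile in \eqref{PureA}. Hence $\partial_r A_t \propto r^{z+d-2-\theta}$, i.e., $A_t \propto r^{z+d-1-\theta}$.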
Here, the cosmological constant is related to a length scale of the solution, which we fixed to be 1.
$\Lambda$ is then given by
\begin{equation}
\Lambda = - \frac{(z+d-1-\theta)(z+d-2-\theta)}{2} \ .
\end{equation}
The exponents $z$, $\theta$ and constants $\phi_0$, $\chi_0$, $\mu$
are determined by the coupling constants in the action $\lambda$, $\nu$,
and the free parameter $a$ of the solution by the following relations;
\begin{align}
\lambda^2 &= 2 \frac{(d-1) d_1^2}{[(z-1)(d-1)-\theta][(d-1)-\theta]} \ ,\label{la} \\
\nu &= \frac{\theta \lambda}{(d-1) d_1} \ , \label{nu}\\
\phi_0 &= -\frac{2}{\lambda}\left(1+\frac{\theta}{(d-1)(d-1-\theta)}\right)\log a \ , \label{Phi0}\\
\chi_0 &= -\frac{\theta}{(d-1)(d-1-\theta)}\log a \ , \label{x0} \\
\mu &= \frac{2(z-1)}{z+d-1-\theta} \ . \label{Constmua}
\end{align}
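As an illustration, consider $d=4$, $z=2$ and $\theta=1$, for which $d_1 = 7/3$. The relations above then give
\begin{align}
\lambda^2 &= \frac{49}{6} \ , &
\nu &= \frac{1}{\sqrt{6}} \ , &
\mu &= \frac{1}{2} \ , &
\Lambda &= -6 \ ,
\end{align}
so the couplings of the action are completely fixed by the choice of the exponents $(z,\theta)$, while $a$ remains a free parameter of the solution.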
To simplify further calculations, we rescale the gauge field such that the solution becomes
\begin{equation}
A_t = a r^{z+d-1-\theta} \ .
\end{equation}
Then the original action (\ref{BulkAction}) becomes
\begin{equation}
S
= \frac{1}{16\pi G} \int d^{d+1} x \sqrt{-g}
\left(R - 2 \Lambda e^{- \nu \phi} - \frac{1}{2} (\partial\phi)^2 - \frac{\mu}{4} e^{\lambda\phi}F^2\right) \ .
\end{equation}
The following black hole geometry is also a solution of this theory;
\begin{align}
ds^2 &= e^{2\chi} d\tilde s^2
\\
d\tilde s^2 &= - r^{2z} f(r) dt^2 + \frac{dr^2}{f(r) r^2} + \sum_i r^2 (dx^i)^2 ,
\\
e^\chi &= e^{\chi_0} r^{-\theta/(d-1)}
\end{align}
where
\begin{equation}
f = 1 - \frac{r_0^{z+d-1-\theta}}{r^{z+d-1-\theta}} \ .
\label{WarpF}
\end{equation}
The radius of the horizon is given by $r_0$ and the Hawking temperature of the black hole is
\begin{equation}
T = \frac{z+d-1-\theta}{4\pi} r_0^z \ .
\label{HawkingT}
\end{equation}
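This follows from the usual near-horizon regularity argument: the conformal factor $e^{2\chi}$ is finite and non-vanishing at $r=r_0$ and therefore drops out, so that
\begin{equation}
T = \frac{r_0^{z+1} f'(r_0)}{4\pi} = \frac{z+d-1-\theta}{4\pi}\, r_0^z \ ,
\end{equation}
where we used \eqref{WarpF} in the last step.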
The gauge field and dilaton take almost the same form as in the zero-temperature solution
\begin{align}
A_t &= a (r^{z+d-1-\theta} - r_0^{z+d-1-\theta}) , &
e^{\lambda\phi} &= e^{\lambda\phi_0} r^{-2d_1} .
\end{align}
but the constant mode of $A_t$ is chosen such that
$A_t$ vanishes at the horizon for regularity.
\section{Solving the hydrodynamics ansatz}\label{sec:HydroAnsatz}
In this paper, we focus on the case of $d=4$.
In some parts, however, we give results in arbitrary dimension $d$.
Extensions to other dimensions (namely $d=3$ and $d>4$) are expected to be straightforward.
We introduce the hydrodynamic ansatz, which describes the fluid mechanics
on the field theory side.
For regularity at the future horizon,
we transform the black-hole solution to Eddington-Finkelstein coordinates;
\begin{equation}
dt_+ = dt + \frac{dr}{r^{z+1}f} \ .
\end{equation}
Hereafter, we will always work in Eddington-Finkelstein coordinates
and $t$ will henceforth stand for the null coordinate $t_+$.
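One can check directly that this transformation removes the coordinate singularity at the horizon: substituting $dt = dt_+ - dr/(r^{z+1}f)$ into the black-hole metric, the $dr^2$ terms cancel,
\begin{equation}
- r^{2z} f \left(dt_+ - \frac{dr}{r^{z+1} f}\right)^2 + \frac{dr^2}{r^2 f}
= - r^{2z} f \, dt_+^2 + 2\, r^{z-1} dt_+\, dr \ ,
\end{equation}
which is manifestly regular at $f=0$.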
In these coordinates,
the black hole solution for the metric and gauge field are expressed as
\begin{align}
ds^2
&=
e^{2\chi} d\tilde s^2
\\
d\tilde s^2
&=
- r^{2z} f dt^2 + 2 r^{z-1} dt\,dr + r^2 (dx^i)^2 \ ,
\\
A
&=
a\left( r^{z+3-\theta} - r_0^{z+3-\theta}\right) dt
- a r^{2-\theta} dr \ ,
\end{align}
where we have taken the gauge such that $A_r$ vanishes in the original Fefferman-Graham coordinates.
Then, we perform the Galilean boost
\begin{align}
t &\to t \ ,
&
x^i & \to x^i - v^i t \ ,
\end{align}
where $v^i$ is a set of constant parameters of the Galilean boost (that we call velocity).
Note that this is a general coordinate transformation and for constant $v^i$ it provides a new class of solutions parametrized by $\vec v$. Moreover, we assume in this paper\footnote{The hydrodynamics of $z=1$ with hyperscaling violation
was considered in \cite{Kanitscheider:2009as} obtaining such solutions by dimensional reduction from higher dimensional AdS geometries.} that $z\not= 1$ and therefore such a coordinate transformation keeps the boundary source
\footnote{
They are defined as the most divergent parts of the metric and gauge field near the boundary.
}
invariant, as it should. It therefore provides a different state (more precisely ensemble) of the same boundary theory.
The black hole geometry becomes
\begin{align}
ds^2 &= e^{2\chi} d\tilde s^2 \ , \label{BoostedMetric}
\\
d\tilde s^2 &= - (r^{2z} f - v^2 r^2) dt^2 + 2 r^{z-1} dt dr - 2 r^2 v^i dt\, dx^i + r^2 (dx^i)^2 \ ,
\\
e^\chi &= e^{\chi_0} r^{-\theta/3}
\end{align}
and the solution for the gauge field and dilaton remains invariant;
\begin{align}
A
&=
a\left( r^{z+3-\theta} - r_0^{z+3-\theta}\right) dt
- a r^{2-\theta} dr \ ,
&
e^{\lambda\phi}
&=
e^{\lambda\phi_0} r^{-2d_1} \ . \label{BoostedGaugeDilaton}
\end{align}
In order to describe the dynamics of the fluid,
we replace the free parameters $r_0$, $v^i$ and $a$ by slowly varying functions of the boundary coordinates ($\vec x,t$).
However, this procedure generates a non-trivial background metric at the boundary since
$a$ appears as an overall factor of the metric.
In order to obtain a flat space background at the boundary,
we absorb this overall factor by introducing the additional coordinate transformation
\begin{equation}
x^\mu \to e^{-\chi_0} x^\mu \, \label{CoordRedef}
\end{equation}
with $\chi_0$ given in (\ref{x0}),
before replacing the parameters by functions.
Then, the metric becomes
\begin{align}
ds^2 &= r^{-2\theta/3}
\left[
- (r^{2z} f - v^2 r^2) dt^2 + 2 e^{\chi_0} r^{z-1} dt\, dr - 2 r^2 v^i dt\, dx^i + r^2 (dx^i)^2
\right] \ ,
\label{BoostedBH}
\end{align}
and the gauge field is also rescaled as
\begin{equation}
A
=
a^{1+\frac{\theta}{3(3-\theta)}}\left( r^{z+3-\theta} - r_0^{z+3-\theta}\right) dt
- a r^{2-\theta} dr \ .
\end{equation}
Since we have rescaled the time coordinate,
the Hawking temperature is also rescaled as
\begin{equation}
T = \frac{z+d-1-\theta}{4\pi} \, e^{-\chi_0} r_0^z
= \frac{z+d-1-\theta}{4\pi} \, a^{\frac{\theta}{(d-1)(d-1-\theta)}} r_0^z \ ,
\label{HawkingTres}
\end{equation}
where the second equality follows from \eqref{x0}.
Now, we replace the parameters $r_0$, $v^i$ and $a$
by slowly varying functions of the boundary coordinates $x^\mu$.
Moreover, we promote the constant part of $A_i$, which is usually gauged away,
to an $x^\mu$-dependent function, as was also done in \cite{km1}.
Then, the metric, gauge field and dilaton become
\begin{align}
ds^2 &= r^{-2\theta/3}
\left[
- r^{2z} f dt^2 + 2 e^{\chi_0(x)} r^{z-1} dt\, dr + r^2 (dx^i-v^i(x) dt)^2
\right]
\label{BGmetric}
\\
f &= 1 - \frac{r_0^{z+3-\theta}(x)}{r^{z+3-\theta}} \ ,
\\
A
&=
a^{1+\frac{\theta}{3(3-\theta)}}(x)\left( r^{z+3-\theta} - r_0^{z+3-\theta}(x) \right) dt
- a(x) r^{2-\theta} dr + \mathcal A_\mu(x) dx^\mu \ ,\label{gauge}
\\
e^{\lambda\phi}
&=
e^{\lambda\phi_0(x)} r^{-2d_1} \ , \label{BGdilaton}
\end{align}
where $x^\mu$-dependence of $\phi_0(x)$ and $\chi_0(x)$ comes from that of $a(x)$.
We have also introduced $\mathcal A_t(x)$ and $\mathcal A_i(x)$, which
originate in the constant parts of $A_t$ and $A_i$, respectively, and which have now been replaced by functions of the boundary coordinates.
The relations (\ref{Phi0})-(\ref{Constmua}) are however preserved.
The above is no longer a solution of the equations of motion,
and we must introduce additional correction terms.
We consider the derivative expansion in $t,\vec x$ and calculate
the first order solution for the hydrodynamic ansatz.
We first expand \eqref{BGmetric}-\eqref{BGdilaton} at a point,
which we can take to be $x^\mu = 0$ without loss of generality.
Then, we assume that the derivatives are small since
$x^\mu$ dependence appears only through the slowly varying functions
$r_0(x)$, $v^i(x)$, $a(x)$, etc., and expand with respect to the derivatives $\partial_\mu$;
\begin{align}
g_{\mu\nu}
&=
g_{\mu\nu}^{(0)} + g_{\mu\nu}^{(1)} + \cdots \ ,
\\
A_\mu
&=
A_\mu^{(0)} + A_\mu^{(1)} + \cdots \ ,
\\
\phi
&=
\phi^{(0)} + \phi^{(1)} + \cdots \ ,
\end{align}
where $g_{\mu\nu}^{(n)}$, etc.\ stands for $n$-th order terms in the derivative expansion.
At the leading order of the derivative expansion, the equations of motion
contain only the leading order terms $g_{\mu\nu}^{(0)}$, $A_\mu^{(0)}$ and $\phi^{(0)}$
which are equivalent to the solutions \eqref{BoostedMetric}-\eqref{BoostedGaugeDilaton}
before replacing the parameters by slowly varying functions, and hence, are satisfied.
At the next-to-leading order, only the linear order terms of the derivatives $\partial_\mu$ appear,
and the equations of motion are not satisfied due to these linear order terms.
Now, we introduce the correction terms to \eqref{BGmetric}-\eqref{BGdilaton} as
\begin{align}
g_{\mu\nu}
&=
g_{\mu\nu}^{(0)} + g_{\mu\nu}^{(1)} + h^{(1)}_{\mu\nu} + \mathcal O(\partial^2) \ ,
\label{Gcorr}
\\
A_\mu
&=
A_\mu^{(0)} + A_\mu^{(1)} + a_\mu^{(1)} + \mathcal O(\partial^2) \ ,
\label{Acorr}
\\
\phi
&=
\phi^{(0)} + \phi^{(1)} + \varphi^{(1)} + \mathcal O(\partial^2) \ ,
\label{Pcorr}
\end{align}
where $h_{\mu\nu}^{(n)}$, $a_\mu^{(n)}$ and $\varphi^{(n)}$ are the correction terms
at the $n$-th order of the derivative expansion, which start from $n=1$ since
the equations of motion are satisfied without the correction terms at leading order.
At the next-to-leading order, or equivalently, at the linear order of the derivative expansion,
the equations of motion give the inhomogeneous linear differential equations for
the correction terms, $h_{\mu\nu}^{(1)}$, $a_\mu^{(1)}$ and $\varphi^{(1)}$.
The inhomogeneity comes from the first order terms of \eqref{BGmetric}-\eqref{BGdilaton},
namely, $g_{\mu\nu}^{(1)}$, $A_\mu^{(1)}$ and $\phi^{(1)}$.
By solving the differential equations for the correction terms,
we obtain the following first order solution of the derivative expansion;
\begin{align}
ds^2 &= r^{-2\theta/3}
\biggl[
- r^{2z} f dt^2 + 2 a^{-\frac{\theta}{3(3-\theta)}} r^{z-1} dt dr
+ r^2 (dx^i - v^i dt)^2
\notag\\&\qquad\qquad
+ \frac{2}{3-\theta} a^{-\frac{\theta}{3(3-\theta)}} r^z \partial_i v^i dt^2
- r^2 F_1(r) \sigma_{ij} (dx^i - v^i dt) (dx^j - v^j dt)
\notag\\&\qquad\qquad
+ 2 \left(F_3(r) \partial_i r_0 + F_5(r) \partial_i a\right)dt (dx^i - v^i dt)
\biggr]
\label{SolGeom}
\end{align}
where the first line is the original solution and the rest are the correction terms. Here $\sigma_{ij}$ is the shear tensor
\begin{equation}
\sigma_{ij} = \left(\partial_i v^j + \partial_j v^i\right)
- \frac{2}{3} \partial_k v^k \delta_{ij} \ ,
\end{equation}
and the functions $F_i(r)$ are given by
\begin{align}
F_1(r)
&=
a^{-\frac{\theta}{3(3-\theta)}}
\int_\infty^r dr' \frac{r^{\prime\,3-\theta}-r_0^{3-\theta}}
{r'(r^{\prime\,z+3-\theta}-r_0^{z+3-\theta})} \ ,
\\
F_2(r)
&=
\left( 2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}\right) \int_\infty^r dr'\, \widehat F_1(r'),
\\
F_3(r)
&=
- 2(z-1) a^{-1-\frac{\theta}{3(3-\theta)}} \int_\infty^r \frac{dr'}{r^{\prime\,6-z+\theta}} F_2(r')
\\
F_4(r)
&=
\left( 2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}\right) \int_\infty^r dr'\, \widehat F_2 (r') ,
\\
F_5(r)
&=
\int_\infty^r \frac{dr'}{r^{\prime\,6-z+\theta}}
\left(- 2(z-1) a^{-1-\frac{\theta}{3(3-\theta)}} F_4(r')
+ \frac{\theta}{3(3-\theta)} a^{-1-\frac{\theta}{3(3-\theta)}} r^{\prime\,3-\theta}
\right)
\\
\widehat F_i(r) &= \frac{r^{7-2\theta}\widetilde F_i}{(r^{z+3-\theta}-r_0^{z+3-\theta})[2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}]^2}
\\
\widetilde F_1 (r) &= \frac{z+3-\theta}{2(z-1)} a \frac{r_0^{z-\theta}}{r^{5-\theta}}
\Bigl(2 (z-1) (5-\theta)r^{z+3-\theta} r_0^2 - z(z+3-\theta) r^{5-\theta} r_0^z
\notag\\&\qquad\qquad\qquad\qquad\qquad
+ (z-5+\theta)(z-2) r_0^{z+3-\theta} \Bigr)
\\
\widetilde F_2(r)
&= - \frac{\theta}{6 (3-\theta) (z-1)} r^{-\theta -5} r_0^{-2 \theta }
\Bigl(-4 (z-1)^2 r_0^{2 \theta } r^{2 z+6}-r^{2 \theta } r_0^{2 z+6} (z-5+\theta)^2
\notag\\&\qquad\qquad\qquad
+r^{\theta +5} (z+3-\theta)^2 r_0^{\theta +2 z+1}
+4 (z-1) (z-5+\theta) r_0^{\theta +z+3} r^{\theta+z+3}\Bigr)
\ .
\end{align}
The first order solution for the gauge field is
\begin{align}
A &= a(x) \left[ a^{\frac{\theta}{3(3-\theta)}} \left(r^{z+3-\theta} - r_0^{z+3-\theta}(x)\right)
- \frac{1}{3-\theta}r^{3-\theta} \partial_i v^i(x)\right]dt
\notag\\&\quad
- a(x) r^{2-\theta} dr + \mathcal A_\mu(x) dx^\mu
+ \left(F_2(r) \partial_i r_0 + F_4(r) \partial_i a \right) (dx^i - v^i dt) \ ,
\label{SolA}
\end{align}
and the dilaton has no correction term,
\begin{equation}
e^{\lambda\phi} = e^{\lambda\phi_0(x)} r^{-2d_1} \ . \label{SolPhi}
\end{equation}
We find that, in order for \eqref{SolGeom}, \eqref{SolA} and \eqref{SolPhi} to be a solution of the equations of motion,
the functions $r_0(x)$, $v^i(x)$, $a(x)$ and $\mathcal A_\mu(x)$ must satisfy the following constraints;
\begin{align}
0 &= \partial_t a + v^i \partial_i a - a \partial_i v^i ,
\label{ConstA}
\\
0 &= \partial_t r_0 + v^i \partial_i r_0 + \frac{1}{3-\theta} r_0 \partial_i v^i ,
\label{ConstT}
\\
0 &= \mathcal F_{ti} + v^j \mathcal F_{ji}
+ \frac{z+3-\theta}{2 (z-1)} a^{1+\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta}
\left(z \frac{\partial_i r_0}{r_0} + \frac{\theta}{3(3-\theta)} \frac{\partial_i a}{a} \right)
\label{ConstS}
\end{align}
where $\mathcal F = d\mathcal A$.
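The first two constraints are the leading-order continuity equations of the dual fluid. Anticipating the thermodynamic identifications of Section~\ref{sec:FluidEquation} (cf.\ \eqref{FluidVariablesZ}), where the charge density and entropy density scale as $n \propto a^{-1}$ and $s \propto r_0^{3-\theta}$, one finds
\begin{align}
\partial_t n + \partial_i (n v^i)
&= - \frac{n}{a} \left(\partial_t a + v^i \partial_i a - a\, \partial_i v^i \right) = 0 \ ,
\\
\partial_t s + \partial_i (s v^i)
&= (3-\theta)\, \frac{s}{r_0} \left(\partial_t r_0 + v^i \partial_i r_0 + \frac{1}{3-\theta}\, r_0\, \partial_i v^i \right) = 0 \ ,
\end{align}
by \eqref{ConstA} and \eqref{ConstT}, respectively, while \eqref{ConstS} can be viewed as the leading-order momentum equation, with the external field strength $\mathcal F$ supplying the forcing.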
\section{Calculation of the stress tensor}\label{sec:StressTensor}
We will calculate now the boundary stress-energy tensor.
The asymptotically Lifshitz space-times have anisotropic behavior
in temporal and spatial directions, and hence
it is convenient to introduce the vielbeins in order to consider
their general asymptotic behavior.
The metric is expressed in terms of the vielbein $E^A$ as
\begin{equation}
ds^2 = - (E^0)^2 + \delta_{ab} E^a E^b + (E^r)^2 \ .
\end{equation}
For the asymptotic Lifshitz space-time in Fefferman-Graham coordinates,
the vielbein can be expressed as \cite{Ross:2011gu}
\begin{align}
E^r &= e^\chi \frac{dr}{r} \ , &
E^0_\mu &= e^\chi r^z \tau_\mu(r,x^\mu) \ , &
E^a_\mu &= e^\chi r \hat e^a_\mu(r,x^\mu) \ , \label{Edef}
\end{align}
where the one forms $\tau(r,x^\mu)$ and $\hat e^a(r,x^\mu)$ have
a finite and non-degenerate limit near the boundary, $r\to\infty$,
and provide the characteristic quantities of Newton-Cartan geometry.
Eq.\ \eqref{Edef} does not, however, fix the $r\to\infty$ limit of $\tau(r,x^\mu)$ and $\hat e^a(r,x^\mu)$ uniquely.
For our solution \eqref{SolGeom}, we take the vielbein $E^A$ such that
$\tau(r,x^\mu)$ and $\hat e^a(r,x^\mu)$ behave in $r\to\infty$ as
\begin{align}
\tau_\mu dx^\mu &= dt \ ,
&
\hat e^a_\mu dx^\mu &= dx^a - v^a dt \ .
\label{vl}
\end{align}
Then, the induced metric on transverse surfaces, $dr=0$, is expressed for large $r$ as
\begin{align}
\gamma_{\mu\nu}
&=
r^{-2\theta/3} \left(- r^{2z} \tau_\mu \tau_\nu + r^2 \delta_{ab} \hat e^a_\mu \hat e^b_\nu\right) \ ,
\label{InducedMetric}
\\
\gamma^{\mu\nu}
&=
r^{2\theta/3} \left(- r^{-2z} \hat v^\mu \hat v^\nu + r^{-2} \delta^{ab} e_a^\mu e_b^\nu\right) \ ,
\label{InducedMetric2}\end{align}
where $\hat v^\mu$ and $e_a^\mu$ are the inverse vielbeins, which take the following form\footnote{Note that our notation is somewhat different from \cite{Hartong:2014oma}-\cite{Hartong:2015wxa}. The detailed notation and the variables used are summarized in appendix \ref{app:Notation}.}
as $r\to\infty$ for \eqref{SolGeom};
\begin{align}
\hat v^\mu \nabla_\mu &= \nabla_t + v^i \nabla_i \ ,
&
e_a^\mu \nabla_\mu &= \nabla_a \ .
\label{hat}\end{align}
Relations (\ref{InducedMetric}) and (\ref{InducedMetric2}) do not completely fix the $r$-dependent vielbeins $\tau_{\m}$, $\hat e^a_{\m}$ and $\hat v^{\m}$.
For example, the one form $\tau_\mu$ may also have subleading terms of order $\mathcal O(r^{2-2z})$,
which appear at the same order as the leading terms of $\hat e^a_\mu$. This would change $\hat e^a_\mu$ and $\hat v^{\m}$ to leading order.
This ambiguity is equivalent to a Milne boost at the boundary.
We fix this ambiguity in the sequel in order to proceed. We will discuss the action of Milne boosts on the boundary data later.
To summarize, for our solution we take
\be
\tau=(1,\vec 0)\sp \hat v=(1,\vec v)\sp \hat e^a_{\m}=\left(\begin{matrix} -v^1&1&0&0\\
-v^2&0&1&0\\
-v^3&0&0&1\end{matrix}\right) \sp e_a^{\m}=\left(\begin{matrix} 0&0&0\\ 1&0&0\\
0&1&0\\
0&0&1\end{matrix}\right)
\label{holo}\ee
that describe the asymptotic Newton-Cartan geometry in what we called in \cite{km1} ``the holographic frame''.
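With this choice, the induced metric \eqref{InducedMetric} indeed reproduces the leading large-$r$ behavior of the solution \eqref{SolGeom},
\begin{equation}
\gamma_{\mu\nu}\, dx^\mu dx^\nu
= r^{-2\theta/3} \left[ - r^{2z} dt^2 + r^2 \left(dx^i - v^i dt\right)^2 \right] + \ldots \ ,
\end{equation}
since $f \to 1$ and the first-order correction functions are subleading near the boundary.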
For the gauge field with boundary Lorentz indices we define
\begin{align}
\hat A_0 &\equiv \hat v^\mu A_\mu=A_t+\vec v\cdot \vec A \ , &
\hat A_a &\equiv e_a^\mu A_\mu \ ,
\label{Ahat}
\end{align}
The variation of the action with respect to these boundary variables is given by
\begin{equation}
\delta S_r
= \int d^4x \sqrt{-\gamma}\left(
- \hat S^0_\mu \delta \hat v^\mu + \hat S^a_\mu \delta e_a^\mu
+ \hat J^0 \delta\hat A_0 + \hat J^a \delta\hat A_a
+ \mathcal O_\phi \delta \phi
\right) \ ,
\label{VariationOfAction}
\end{equation}
where $S_r$ is the renormalized action with appropriate (boundary) counter terms.
Since the volume form behaves as
\begin{equation}
\sqrt{-\gamma} \sim r^{z+3-4\theta/3} \ ,
\end{equation}
the terms at $\mathcal O(r^{-z-3+4\theta/3})$ in the expectation values
give the regular contributions.
\footnote{
Even though $\hat A_0$ starts from $\mathcal O(r^{z+3-\theta})$,
the leading contribution to the effective action is
$\hat J^0 \mathcal A_0$ and hence,
the regular term in $\hat J^0$ originates at the same order as the others, namely
$\mathcal O(r^{-z-3+4\theta/3})$.
}
The definition above does not give the ordinary stress-energy tensor but
the one with contributions from the gauge field and current.
It is related to the ordinary Brown-York tensor as
\begin{equation}
\hat S^0_\nu \hat v^\mu - \hat S^a_\nu e^\mu_a
= T_\text{(BY)}{}^\mu{}_\nu + J^\mu A_\nu + T_\text{(ct)}{}^\mu{}_\nu
\end{equation}
where $T_\text{(BY)}^{\mu\nu}$ is the Brown-York tensor which is defined
in terms of the extrinsic curvature $K_{\mu\nu}$ as
\begin{equation}
T_\text{(BY)}^{\mu\nu} = \frac{1}{8\pi G} \left(\gamma^{\mu\nu} K - K^{\mu\nu} \right) \ ,
\label{BYdef}
\end{equation}
and $T_\text{(ct)}^{\mu\nu}$ denotes the contribution from the counter terms.
By using appropriate counter terms, the stress-energy tensor becomes finite at the boundary $r\to\infty$
and we define
\begin{align}
\widetilde T^\mu{}_\nu
\equiv \lim_{r\to\infty} r^{z+3-4\theta/3} \left(\hat S^0_\nu \hat v^\mu - \hat S^a_\nu e^\mu_a\right) \ ,
\label{TtildeDef}
\end{align}
and the current is given by
\begin{align}
J^\mu \equiv \lim_{r\to\infty} r^{z+3-4\theta/3}
\left(\hat J^0 \hat v^\mu + \hat J^a e^\mu_a\right) \ .
\end{align}
As we will see later, the stress tensor in (\ref{TtildeDef}) is also different from the ordinary stress-energy tensor
on the boundary as it contains the contributions from the external gauge field.
Now, we calculate the renormalized stress-energy tensor from the first order solution \eqref{SolGeom}.
In order to regularize the stress-energy tensor and other expectation values,
we introduce the following counter terms;
\begin{equation}
S_\text{ct} =
\frac{1}{16\pi G} \int d^4 x \sqrt{-\gamma}
\left[
- (5+z - 2 \theta) e^{-\frac{1}{2}\nu\phi}+ \frac{z+3-\theta}{2} \, e^{(\lambda - \frac{1}{2}\nu)\phi} \gamma^{\mu\nu} A_\mu A_\nu
\right] \ .
\label{CT}
\end{equation}
Although the last counterterm is apparently not gauge invariant,
the effective action is still invariant under boundary gauge transformations (up to total-derivative terms on the boundary).
To see this, note that the second term in \eqref{CT} is expanded near the boundary as
\begin{equation}
\sqrt{-\gamma}\, e^{(\lambda-\frac{1}{2}\nu)\phi} \gamma^{\mu\nu} A_\mu A_\nu
\propto
a^\frac{\theta}{3(3-\theta)} r^{z+3-4\theta/3}
- a^\frac{\theta}{3(3-\theta)} r_0^{z+3-\theta}
+ \frac{2}{a} \left(\mathcal A_t + v^i \mathcal A_i\right) + \cdots \ .
\end{equation}
Near the boundary, $r\to\infty$,
the first term is leading and will cancel with the divergent part in the original action. The second and third terms
give finite contributions to the effective action. The ellipsis denotes terms that are vanishing as we take the cutoff surface to the boundary.
We therefore observe that indeed the finite terms are not gauge invariant but are linear in the gauge field.
Under the boundary gauge transformation $\delta\mathcal A = d \Lambda$,
the finite term in \eqref{CT} transforms as
\begin{equation}
\delta_\Lambda S_\text{ct}
\propto \int d^4 x \left[\partial_t \frac{\Lambda}{a} + \partial_i \frac{\Lambda v^i}{a}\right] \ ,
\label{cu2}\end{equation}
where we used \eqref{ConstA}.
Therefore the transformed terms are surface terms and should vanish at infinity on the boundary. The coefficient of these terms is indeed the conserved current (see equation (\ref{cu})), and (\ref{cu2}) can be written as
\begin{equation}
\delta_\Lambda S_\text{ct}
\propto \int d^4 x\, J^{\m}\pa_{\m}\Lambda=-\int d^4 x\, (\pa_{\m}J^{\m})\Lambda=0
\end{equation}
Therefore if no charge is leaking to infinity on the boundary the effective action is gauge invariant.
By using the counter terms \eqref{CT}, the renormalized stress-energy tensor is calculated as
\begin{align}
\widetilde T^0{}_0
&=
\frac{1}{8\pi G}
\left(- \frac{3-\theta}{2} a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta}
- \frac{z-1}{a} v^i \mathcal A_i \right)
\ ,
\\
\widetilde T^i{}_0
&=
\frac{1}{8\pi G}
\biggl[-\frac{z+3-\theta}{2} a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} v^i
+ \frac{z-1}{a} v^i \mathcal A_t
+ \frac{1}{2} r_0^{3-\theta} \sigma_{ij} v^j
\notag\\&\quad\qquad\qquad\qquad
+ \frac{z(z+3-\theta)}{4(z-1)} r_0^{2z-\theta} \left( \partial_i r_0
+ \frac{\theta}{3 (3-\theta) z} \frac{r_0}{a} \partial_i a \right)
\biggr]
\ ,
\\
\widetilde T^0{}_i
&=
\frac{1}{8\pi G}\frac{z-1}{a} \mathcal A_i
\ ,
\\
\widetilde T^i{}_j
&=
\frac{1}{8\pi G}
\left[\frac{z}{2} a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \delta_{ij}
- \frac{1}{2} r_0^{3-\theta} \sigma_{ij}
+ \frac{z-1}{a} v^i \mathcal A_j
- \frac{z-1}{a} \left(\mathcal A_t + v^k \mathcal A_k \right) \delta_{ij}\right]
\ .
\end{align}
The current $J^\mu$ is obtained as
\be
J^0 = \frac{z-1}{16\pi G a} \sp
J^i = \frac{z-1}{16\pi G a} v^i \ .
\label{cu}\ee
The expectation value of the operator dual to the dilaton $\phi$
is also calculated as
\begin{align}
\langle {\mathcal O}_\phi\rangle
&= - \frac{1}{16 \pi G}
\frac{9(z-1)+(3-\theta)\theta}{\sqrt{6(3-\theta)[3(z-1)-\theta]}}
a^{\frac{\theta}{3(3-\theta)}} {r_0^{z+3-\theta}}
\notag\\&\quad
+ \frac{1}{16 \pi G}
\frac{12(z-1)(6-\theta)}{\sqrt{6(3-\theta)[3(z-1)-\theta]}}
\frac{1}{a} \left(\mathcal A_t + v^i\mathcal A_i \right)
\label{svev} \ .
\end{align}
We also calculate the entropy current, which is given by \cite{Bhattacharyya:2008xc}
\begin{equation}
J_S^\mu = \frac{\sqrt{h}}{4G} \frac{n^\mu}{n^0} \ , \label{EntropyCurrent}
\end{equation}
where $\sqrt{h}$ is the volume form on the time-slice at the horizon and
$n^\mu$ is the normal vector to the horizon.
By using the first order solution \eqref{SolGeom}, the entropy current becomes
\begin{align}
J_S^0 &= \frac{1}{4G} r_0^{3-\theta} \ ,
\\
J_S^i &= \frac{1}{4G} r_0^{3-\theta} v^i
- \frac{1}{8(z-1)G} a^{-\frac{\theta}{3(3-\theta)}} r_0^{z-\theta}
\left(z \partial_i r_0 + \frac{\theta}{3(3-\theta)} \frac{r_0}{a} \partial_i a\right) \ .
\end{align}
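Using the temperature \eqref{HawkingTres}, the gradient term in $J_S^i$ can be recognized as the standard heat-flux contribution to the entropy current: with the heat conductivity $\kappa$ given below in \eqref{Transport} and $s = r_0^{3-\theta}/4G$, one can check that
\begin{equation}
J_S^0 = s \ ,
\qquad
J_S^i = s\, v^i - \frac{\kappa}{T}\, \partial_i T \ ,
\end{equation}
to this order, so that entropy production only starts at second order in the derivative expansion.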
\section{Hyperscaling-violating Lifshitz hydrodynamics}\label{sec:FluidEquation}
\subsection{Thermodynamics}
In order to consider the relation between the form of the stress-energy tensor \eqref{TtildeDef} and that for fluids,
we first calculate the thermodynamic functions.
The energy $E$, entropy $S$ and charge $N$ in volume $V$ (in the $x^i$ directions)
of the Lifshitz black hole geometry are given by\footnote{In this subsection we return temporarily to arbitrary dimension, $d$.}
\begin{align}
E
&=
\mathcal E V = \frac{d-1-\theta}{16\pi G} a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} V \ ,
\\
S
&=
s V = \frac{1}{4G} r_0^{d-1-\theta} V \ ,
\\
N
&=
n V = \frac{z-1}{16\pi G a} V \ ,
\end{align}
where we have also defined $\mathcal E$, $s$ and $n$,
which are the densities of energy, entropy and charge.
Here we have taken into account the effect of the coordinate redefinition \eqref{CoordRedef},
which is why the energy carries an additional factor of $a^\frac{\theta}{(d-1)(d-1-\theta)}$.
Now, we consider the first law of thermodynamics;
\begin{equation}
d E
=
T dS - P dV
+ \mu dN \ ,
\end{equation}
where the variations with respect to entropy $S$, volume $V$ and charge $N$
give the temperature $T$, pressure $P$ and chemical potential $\mu$,
which are calculated as
\begin{align}
T
&=
\left(\frac{\partial E}{\partial S}\right)_{V,N}
= \frac{z+d-1-\theta}{4\pi} a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z} \ ,
\\
P &=
- \left(\frac{\partial E}{\partial V}\right)_{S,N}
= \frac{1}{16\pi G} \left(z-\frac{\theta}{d-1}\right)
a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} \ ,
\label{NewPressure}
\\
\mu &= \left(\frac{\partial E}{\partial N}\right)_{S,V}
= - \frac{\theta}{(d-1)(z-1)}
a^{1 + \frac{\theta}{(d-1)(d-1-\theta)}} r_0^{z+d-1-\theta} \ .
\label{ChemicalPotential}
\end{align}
For $d=4$, the energy density, pressure, entropy density, charge density and
chemical potential are given by
\begin{align}
\mathcal E &= \frac{3-\theta}{16\pi G} a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
P &= \frac{1}{16\pi G} \left(z-\frac{\theta}{3}\right)
a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ ,
\notag\\
s &= \frac{1}{4G} r_0^{3-\theta} \ , &
n &= \frac{z-1}{16\pi G} a^{-1}\ ,
\notag\\
\mu &= - \frac{\theta}{3(z-1)} a^{1 + \frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ .
\label{FluidVariablesZ}
\end{align}
Note that at $\theta=0$, the chemical potential $\mu$ vanishes.
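As a consistency check, the quantities in \eqref{FluidVariablesZ} satisfy the local Euler relation
\begin{equation}
\mathcal E + P = T s + \mu n \ ,
\end{equation}
both sides being equal to $\frac{1}{16\pi G}\left(z+3-\theta-\frac{\theta}{3}\right) a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta}$. They also obey
\begin{equation}
z\, \mathcal E = (3-\theta) \left(P - \mu n\right) \ ,
\end{equation}
which reduces to the Lifshitz relation $z \mathcal E = (d-1) P$ of \cite{km1} at $\theta=0$, and which should be compared with the scaling Ward identity mentioned in the introduction.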
\subsection{Relation to hydrodynamics in Newton-Cartan theory}
In this section we will rewrite the boundary stress tensor in terms of the Newton-Cartan geometry data along the lines of \cite{Hartong:2014oma}-\cite{Hartong:2015wxa}, following the detailed formalism of \cite{km1}.
In the Newton-Cartan theory, (that is explained in more detail in appendix \ref{app:NewtonCartan}),
the spacelike vielbein and inverse timelike vielbein $\bar e^a_\mu$ and $\bar v^\mu$,
transform under the Milne boost.
In this paper, $\bar v^\mu$ and $\bar e^a_\mu$ indicate
the vielbeins in an arbitrary (Milne) frame.
There is a special frame, the ``holographic frame,'' introduced in \cite{km1}, where
the vielbeins are given by
\be
\bar v^\mu = \hat v^\mu\sp \bar e^a_\mu = \hat e^a_\mu\;.
\ee
where $\hat v^{\mu}$ and $\hat e^a_{\mu}$ are defined on the gravity side, in (\ref{hat}),
and $\hat v^\mu$ always satisfies in any frame
\be
\hat v^\mu = u^\mu\equiv (1,\vec v)\;,
\ee
where $u^\mu$ is the fluid velocity.
The timelike vielbein and inverse spacelike vielbein do not transform
under the Milne boost and hence are simply referred to as $\tau_\mu$ and $e^\mu_a$, respectively.
The renormalized boundary stress-energy tensor, which we have calculated in the previous section,
can be expressed in the following form;
\begin{align}
\widetilde T^\mu{}_\nu
&=
- \mathcal E \hat v^\mu \tau_\nu + (P - n\mu) \hat h^\mu{}_\nu
- \kappa \tau_\nu h^{\mu\rho} \partial_\rho T
- \eta \sigma_{ab} e_a^\mu \hat e^b_\nu
+ n \hat v^\mu \mathcal A_\nu - n \hat v^\rho \mathcal A_\rho \delta^\mu{}_\nu \ ,
\label{Ttilde1}
\end{align}
where the spatial metric is defined in terms of the vielbeins $e_a^\mu$ and $\hat e^a_\mu$;
\begin{align}
h^{\mu\nu} &= e_a^\mu e_a^\nu \ ,
&
\hat h^\mu{}_\nu &= e_a^\mu \hat e^a_\nu \ .
\end{align}
The energy density $\mathcal E$, pressure $P$, particle number density $n$
and chemical potential $\mu$ are given by \eqref{FluidVariablesZ}.
The transport coefficients like heat conductivity $\kappa$, shear viscosity $\eta$ and
bulk viscosity $\zeta$ are also read off (in $d=4$) as
\begin{align}
\kappa &= \frac{1}{8(z-1)G} a^{-\frac{\theta}{3(3-\theta)}} r_0^{z+1-\theta} \ ,
&
\eta &= \frac{1}{16\pi G} r_0^{3-\theta} \ ,
&
\zeta &= 0 \ , \label{Transport}
\end{align}
where the bulk viscosity $\zeta$ is the coefficient of the expansion $\partial_i v^i$
in the stress-energy tensor \eqref{Ttilde1}.
Hence,
$\zeta=0$ can be deduced from the absence of $\partial_i v^i$
in \eqref{Ttilde1}.
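Note also that the shear viscosity and the entropy density combine into the universal ratio
\begin{equation}
\frac{\eta}{s} = \frac{1}{4\pi} \ ,
\end{equation}
independently of $z$ and $\theta$, as expected for a two-derivative Einstein-gravity dual.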
The stress-energy tensor can also be expressed in terms of
the energy flow $\widetilde{\mathcal E}^\mu$, momentum density $\widetilde{\mathcal P}_\mu$,
stress tensor $\widetilde{\mathcal T}^\mu{}_\nu$ and current $J^\mu$, which are defined by
\begin{align}
\widetilde{\mathcal E}^\mu
&=
- \widetilde T_0^\mu{}_\nu \hat v^\nu
\\
\widetilde{\mathcal P}_\nu
&=
\widetilde T_0^\mu{}_\rho \tau_\mu \hat h^\rho{}_\nu
\\
\widetilde{\mathcal T}^\mu{}_\nu
&=
\widetilde T_0^\rho{}_\sigma \hat h^\mu{}_\rho \hat h^\sigma{}_\nu
\end{align}
where
\begin{equation}
\widetilde T_0^\mu{}_\nu = \widetilde T^\mu{}_\nu
+ n\mu \hat h^\mu{}_\nu
- J^\mu \mathcal A_\nu + J^\rho \mathcal A_\rho \delta^\mu{}_\nu \ .
\end{equation}
Then, $\widetilde T^\mu{}_\nu$ is written as
\begin{align}
\widetilde T^\mu{}_\nu
&=
- \widetilde{\mathcal E}^\mu \tau_\nu + \hat v^\mu \widetilde{\mathcal P}_\nu
+ \widetilde{\mathcal T}^\mu{}_\nu - n\mu \hat h^\mu{}_\nu
+ J^\mu \mathcal A_\nu - J^\rho \mathcal A_\rho \delta^\mu{}_\nu
\ .
\label{Ttilde2}
\end{align}
Comparing \eqref{Ttilde2} with \eqref{Ttilde1}, we obtain
\begin{align}
\widetilde{\mathcal E}^\mu
&=
\mathcal E \hat v^\mu - \kappa h^{\mu\rho} \partial_\rho T
\\
\widetilde{\mathcal P}_\nu
&=
0
\\
\widetilde{\mathcal T}^\mu{}_\nu
&=
P \hat h^\mu{}_\nu - \eta \sigma_{ab} e_a^\mu \hat e^b_\nu
\\
J^\mu
&=
n \hat v^\mu\label{current}
\end{align}
The above stress-energy tensor can be identified with that in the Newton-Cartan theory\footnote{There can be ambiguities in the definition of the stress tensor. In relativistic CFTs they are known to affect higher-order transport coefficients, \cite{nakayama}. In non-relativistic scaling theories the situation is even more sensitive as they affect even the ideal hydrodynamics equations as we will see further. Moreover, as we discuss in section 6, they also affect the bulk viscosity.}.
In the Newton-Cartan theory, the geometry is described by
the timelike vielbein $\tau_\mu$, spatial (inverse) metric $h^{\mu\nu}$,
timelike inverse vielbein $\bar v^\mu$ and gauge field $B_\mu$.
Here, the spacelike vielbein, its inverse and spatial metric with lower indices are denoted as
$\bar e^a_\mu$, $e_a^\mu$ and $\bar h_{\mu\nu}$.
The energy current $\mathcal E^\mu$, stress tensor $\mathcal T_{\mu\nu}$,
momentum density $\mathcal P_\mu$ and mass current $\mathcal J^\mu$ are
the conserved quantities associated to $\tau_\mu$, $h^{\mu\nu}$,
$\bar v^\mu$ and $B_\mu$, respectively.
For a generic fluid in Eckart frame, they are given by \cite{Jensen:2014ama}
\begin{align}
\mathcal E^\mu
&=
\mathcal E u^\mu + \frac{1}{2} \rho u^2 u^\mu - \kappa h^{\mu\nu} \partial_\nu T
+ h^{\mu\rho} u^\sigma \mathcal T_{\rho \sigma} \ ,
\label{NewtonE}
\\
\mathcal P_\mu &= \rho u_\mu \ ,
\label{NewtonP}
\\
\mathcal T_{\mu\nu}
&=
P \bar h_{\mu\nu} + \rho u_\mu u_\nu
- \eta \sigma_{\rho \sigma} \bar P^\rho{}_\mu \bar P^\sigma{}_\nu
- \zeta \vartheta \bar h_{\mu\nu} \ ,
\label{NewtonT}
\\
\mathcal J^\mu
&=
\rho u^\mu \ ,
\label{Mcurrent}
\end{align}
where
\be
\bar P^\mu{}_\nu \equiv e_a^\mu \bar e^a_\nu \sp \vartheta\equiv \pa_i v^i
\ee
$\bar P^\mu{}_\nu$ is the projector to the spatial directions and $\vartheta$ is the expansion of the fluid.
The fluid velocity $u^\mu$ satisfies the normalization condition $\tau_\mu u^\mu = 1$
and is given by $u^\mu = (1,u^i)$ for $\tau = dt$.
In the Newton-Cartan theory,
the generic stress-energy tensor (not only for the fluid above)
is defined from the energy current $\mathcal E^\mu$, stress tensor $\mathcal T_{\mu\nu}$,
and momentum density $\mathcal P_\mu$
as\footnote{We always denote stress-energy tensors
by the letter $T$ and their spatial projection (the stress tensor) by the letter $\mathcal T$.}
\begin{equation}
\bar T^\mu{}_\nu
= - \mathcal E^\mu \tau_\nu + \bar v^\mu \mathcal P_\nu
+ h^{\mu\rho} \mathcal T_{\rho\nu} \ .
\label{Tbar}
\end{equation}
The Newton-Cartan theory has a symmetry which is known as Milne boost
\footnote{When the geometry is torsion-free, gauge invariance and Milne boost invariance can be simultaneously present. However, in more general cases they are incompatible, \cite{Hartong:2014oma}-\cite{Hartong:2015wxa}. In the case described in this paper the geometry is torsion-free.}
\begin{align}
\bar v^\mu &\to \bar v'{}^\mu = \bar v^\mu + h^{\mu\nu} V_\nu \ , \\
{B} &\to { B}' =
{ B} + \bar P_\mu^\nu V_\nu dx^\mu
- \frac{1}{2} h^{\mu\nu} V_\mu V_\nu \tau_\rho dx^\rho \ .
\end{align}
and we can introduce the Milne boost invariant combination for the gauge field 1-form as \cite{Jensen:2014ama,Hartong:2015wxa}
\begin{align}
\widehat B = B + u_\mu dx^\mu
- \frac{1}{2} u^2 \tau_\rho dx^\rho \ , \label{HtoNewton}
\end{align}
where
\begin{align}
u_\mu &= \bar h_{\mu\nu} u^\nu \ ,
&
u^2 &= \bar h_{\mu\nu} u^\mu u^\nu \ .
\end{align}
We also define a Milne boost invariant stress-energy tensor as
\begin{equation}
T^\mu{}_\nu = \bar T^\mu{}_\nu + \mathcal J^\mu B_\nu - \mathcal J^\mu \widehat B_\nu
= \bar T^\mu{}_\nu - \mathcal J^\mu \left(u_\nu - \frac{1}{2}\tau_\nu u^2\right) .
\label{TandTbar}
\end{equation}
We choose $\bar v^\mu= u^\mu$ and we have
\begin{align}
u_\mu &= \bar h_{\mu\nu} u^\nu = \bar h_{\mu\nu} \bar v^\nu = 0 \ ,
\\
u^2 &= \bar h_{\mu\nu} u^\mu u^\nu = 0 \ ,
\end{align}
and then, \eqref{HtoNewton} and \eqref{TandTbar} give
\begin{align}
B &= \widehat B \ ,
&
T^\mu{}_\nu &= \bar T^\mu{}_\nu \ .
\end{align}
Therefore, the Milne boost invariants $\widehat B$ and $T^\mu{}_\nu$ are the same as
$B$ and $\bar T^\mu{}_\nu$ in the $\bar v^\mu = u^\mu$ frame.
We further define the Milne boost invariants for the energy flow
$\widehat{\mathcal E}^\mu$, momentum density $\widehat{\mathcal P}_\mu$,
stress tensor $\widehat{\mathcal T}^\mu{}_\nu$ from the Milne boost-invariant stress-energy tensor $T^\mu{}_\nu$ as
\begin{align}
\widehat{\mathcal E}^\mu
&=
- T^\mu{}_\nu u^\nu \ ,
\\
\widehat{\mathcal P}_\nu
&=
T^\mu{}_\rho \tau_\mu \bar P^\rho_\nu \ ,
\\
\widehat{\mathcal T}^\mu{}_\nu
&=
T^\rho{}_\sigma \bar P^\mu_\rho \bar P^\sigma_\nu \ ,
\end{align}
and then, $T^\mu{}_\nu$ is written as
\begin{align}
T^\mu{}_\nu
&\equiv
- \widehat{\mathcal E}^\mu \tau_\nu + u^\mu \widehat{\mathcal P}_\nu
+ \widehat{\mathcal T}^\mu{}_\nu \ .
\label{DefMinv}
\end{align}
{}From \eqref{NewtonE}-\eqref{NewtonT}, \eqref{Tbar}, \eqref{TandTbar} and \eqref{DefMinv},
the Milne boost invariants are expressed as
\begin{align}
\widehat{\mathcal E}^\mu
&=
\mathcal E u^\mu - \kappa h^{\mu\nu} \partial_\nu T \ ,
\label{Ehat}
\\
\widehat{\mathcal P}_\mu &= 0 \ ,\label{Phat}
\\
\widehat{\mathcal T}_{\mu\nu}
&=
P \bar h_{\mu\nu}
- \eta \sigma_{\rho \sigma} \bar P^\rho{}_\mu\bar P^\sigma{}_\nu
- \zeta \vartheta \bar h_{\mu\nu} \ ,
\label{cThat}
\end{align}
where $\bar h^{\mu\nu}$ and $\bar P^\mu{}_\nu$ are those of the $\bar v^\mu = u^\mu$ frame.
Then, \eqref{NewtonE}-\eqref{NewtonT} take the same form as \eqref{Ehat}-\eqref{cThat}, respectively,
for $v^i = u^i$ and $\zeta=0$.
Therefore, we identify the energy flow $\widetilde{\mathcal E}^\mu$,
momentum density $\widetilde{\mathcal P}_\mu$ and stress tensor $\widetilde{\mathcal T}^\mu{}_\nu$,
which are calculated from the black hole geometry in the previous section,
with the Milne boost-invariants
$\widehat{\mathcal E}^\mu$, $\widehat{\mathcal P}_\mu$ and $\widehat{\mathcal T}^\mu{}_\nu$, respectively.
We also identify the gauge field $\mathcal A_\mu$,
which originates in the constant mode of the bulk gauge field $A_\mu$,
with the Milne boost-invariant combination $\widehat B_\mu$ as
\begin{equation}
\mathcal A_\mu = m \widehat B_\mu - \mu \tau_\mu \ , \label{AtoB}
\end{equation}
where $m$ is the coupling constant for the gauge field $B_\mu$,
which in non-relativistic language is the mass per particle.
Then, the mass current $\mathcal J^\mu$ is related to the particle number current $J^\mu$ as
\begin{equation}
\mathcal J^\mu = m J^\mu \ .
\end{equation}
It is then straightforward to verify that the stress-energy tensor $\widetilde T^\mu{}_\nu$,
which we calculated from the black hole geometry,
is related to the Milne boost-invariant stress-energy tensor $T^\mu{}_\nu$ in the Newton-Cartan theory as
\begin{equation}
\widetilde T^\mu{}_\nu =
T^\mu{}_\nu + \mathcal J^\mu \widehat B_\nu - \mathcal J^\rho \widehat B_\rho \delta^\mu{}_\nu \ .
\label{TandTtilde}
\end{equation}
The conservation law for these Milne boost-invariants is given by \cite{Jensen:2014aia}
\begin{align}
D_\mu \widehat{\mathcal E}^\mu
&=
\hat v^\mu \widehat H_{\mu\nu} \mathcal J^\nu
- \frac{1}{2}\left(h^{\mu\rho}D_\rho \hat v^\nu
+ h^{\nu\rho} D_\rho \hat v^\mu\right) \widehat{\mathcal T}_{\mu\nu} \ , \label{ConsInvE}
\\
h^{\rho\mu} h^{\sigma\nu} D_\rho \widehat{\mathcal T}_{\mu\nu}
&=
h^{\sigma\nu}
\left[
\hat v^\mu D_\nu \widehat{\mathcal P}_\mu - D_\mu \left(\hat v^\mu \widehat{\mathcal P}_\nu\right)
+ \widehat H_{\mu\nu} \mathcal J^\mu
\right] \ ,
\label{ConsInvT}
\end{align}
where $\widehat H = d\widehat B$.
The covariant derivative $D_{\m}$ is defined as usual in the Newton-Cartan geometry (see appendix \ref{app:NewtonCartan}).
In terms of the Milne boost invariant stress-energy tensor,
the conservation law can be expressed as
\begin{equation}
D_\mu T^\mu{}_\nu = J^\mu \mathcal F_{\mu\nu} = \mathcal J^\mu \widehat H_{\mu\nu} \ ,
\end{equation}
since $\widehat{\mathcal P}_\mu=0$.
More details are given in Appendix~\ref{app:NewtonCartan}.
The renormalized stress-energy tensor $\widetilde T^\mu{}_\nu$ satisfies
the conservation equations which are equivalent to \eqref{ConsInvE} and \eqref{ConsInvT}.
These equations come from the following components
of the bulk equations of motion;
\begin{align}
n^\mu\gamma^{\nu\rho} R_{\mu\nu}
&=
8\pi G n^\mu\gamma^{\nu\rho} T_{\mu\nu}^\text{(bulk)} \label{ConstEin} \ ,
\\
n_\nu \nabla_\mu (e^{\lambda\phi} F^{\mu\nu})
&= 0 , \label{ConstMax}
\end{align}
where $n_\mu$ and $\gamma_{\mu\nu}$ are the normal vector and
the induced metric on the boundary, respectively.
In order to derive the conservation equations for the first-order fluid,
we have to calculate the constraint equations (\ref{ConstEin}), (\ref{ConstMax}) to second order in the derivative expansion.
The correction terms at each order do not contribute to
the constraint equations at that same order, and hence
we do not need to solve the differential equations for the second-order correction terms.
The constraint equations (\ref{ConstEin}), (\ref{ConstMax}) can be expressed in terms of
the fluid variables $\mathcal E$, $P$, $n$ and $v^i$ defined in (\ref{NewtonE})-(\ref{Mcurrent}) as
\begin{align}
0 &= \partial_t \mathcal E + v^i \partial_i \mathcal E + (\mathcal E + P) \partial_i v^i
\notag\\&\qquad
- \frac{1}{2} \eta \sigma_{ij}\sigma_{ij}
- \partial_i( \kappa \partial_i T) \ ,
\label{LifContinuity}
\\
0 &= \partial_i P - n \partial_i \mu + n \mathcal F_{ti} + n v^j \mathcal F_{ji}
- \partial_j \left(\eta\sigma_{ij} \right) \ ,
\label{LifNavierStokes}
\\
0 &= \partial_t n + \partial_j (n v^j) \ .
\label{LifChargeCons}
\end{align}
Note that the main difference between these equations in the hyperscaling-violating case studied here and in the $\theta=0$ case studied in \cite{km1} is the appearance of the term $n\pa_i\mu$ in the Navier-Stokes-like equation (\ref{LifNavierStokes}). Here we have a chemical potential in the absence of an external Newtonian potential, and this is due to hyperscaling violation.
The fluid equations \eqref{LifContinuity}-\eqref{LifChargeCons} are equations for
energy density $\mathcal E$, pressure $P$, velocity field $v^i$,
particle number density $n$ and temperature $T$.
There are also constitutive relations among these variables.
In particular, the temperature and the energy density are related to each other, and hence are not independent.
For the fluid dual to the Lifshitz geometry,
the energy density is constrained first by the Lifshitz Ward identity with hyperscaling violation. This is given by
\begin{equation}
\left(z-\frac{\theta}{d-1}\right) \mathcal E = (d-1-\theta) P \ .
\label{WardHV}
\end{equation}
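As a simple cross-check, this identity is satisfied by the explicit equilibrium expressions $\mathcal E = \frac{d-1-\theta}{16\pi G}\, a^{\alpha} r_0^{z+d-1-\theta}$ and $P = \frac{1}{16\pi G}\left(z-\frac{\theta}{d-1}\right) a^{\alpha} r_0^{z+d-1-\theta}$ with $\alpha = \frac{\theta}{(d-1)(d-1-\theta)}$ (written here for general $d$; cf.\ \eqref{FluidVariablesZ} for $d=4$). A minimal \texttt{sympy} sketch of this check (not part of the holographic computation itself) is:
\begin{verbatim}
import sympy as sp

z, theta, d = sp.symbols('z theta d', real=True)
r0, a, G = sp.symbols('r_0 a G', positive=True)

alpha = theta/((d - 1)*(d - 1 - theta))
prefac = a**alpha * r0**(z + d - 1 - theta)/(16*sp.pi*G)
E = (d - 1 - theta)*prefac               # equilibrium energy density
P = (z - theta/(d - 1))*prefac           # equilibrium pressure

# Lifshitz Ward identity with hyperscaling violation
assert sp.simplify((z - theta/(d - 1))*E - (d - 1 - theta)*P) == 0
\end{verbatim}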
The transport coefficients $\eta$ and $\kappa$ are not constant
but depend on the temperature, and also on the particle number density in this case, as in (\ref{Transport}).
By using \eqref{HtoNewton} and \eqref{AtoB},
the gauge field $\mathcal A$ is rewritten in terms of the velocity field $v^i$
and the external gauge field $B$;
\begin{align}
\mathcal A &= m \left(B + u_\mu dx^\mu
- \frac{1}{2} u^2 \tau_\rho dx^\rho \right) - \mu \tau
\notag\\
&= m \left(B + v^idx^i
- \frac{1}{2} \vec v^2 dt \right) - \mu dt .
\label{a}
\end{align}
Therefore, we have 5 equations for 5 independent variables $\mathcal E$, $v^i$ and $n$.
We also have external sources in the hydrodynamic equations:
one is the spatial metric, which is flat here (this will change in subsequent sections).
The other is the external gauge field $B_{\mu}$, which couples to the $U(1)$ current.
In ordinary non-relativistic hydrodynamics, $B_t$ would be the ordinary Newtonian potential
that couples to mass \cite{Hartong:2014oma,Hartong:2014pma,Hartong:2015wxa,km1}.
\subsection{The entropy current}
The entropy current \eqref{EntropyCurrent} can be expressed as
\begin{equation}
J_S^\mu = s \hat v^\mu - \frac{\kappa}{T} h^{\mu\rho} \partial_\rho T \ ,
\end{equation}
where the entropy density $s$ is given by
\begin{equation}
s = \frac{1}{4G} r_0^{3-\theta} \ .
\end{equation}
The entropy current satisfies the following relation with
the (internal) energy density $\widehat{\mathcal E}$ and pressure $P$;
\begin{align}
T J_S^\mu = \widehat{\mathcal{E}}^\mu + P \hat v^\mu - \mu J^\mu
= - T^\mu{}_\nu \hat v^\nu + (P - \mu n) \hat v^\mu \ .
\label{ThermoRel}
\end{align}
It also satisfies the second law: using \eqref{LifContinuity}, the divergence of $J_S^\mu$ can be expressed as
\begin{equation}
\partial_\mu J_S^\mu
= \frac{1}{2} \frac{\eta}{T} \sigma_{ij} \sigma_{ij} + \frac{\kappa}{T^2} \left(\partial_i T\right)^2 \ ,
\end{equation}
which is manifestly non-negative.
The entropy density $s = J_S^0$ is such that
the KSS bound \cite{Kovtun:2004de} is saturated;
\begin{equation}
\frac{\eta}{s} = \frac{1}{4\pi} \ .
\end{equation}
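As a quick sanity check (a minimal \texttt{sympy} sketch using the expression for $\eta$ in \eqref{Transport} and the entropy density above):
\begin{verbatim}
import sympy as sp

r0, G = sp.symbols('r_0 G', positive=True)
theta = sp.symbols('theta', real=True)

eta = r0**(3 - theta)/(16*sp.pi*G)   # shear viscosity
s   = r0**(3 - theta)/(4*G)          # entropy density
assert sp.simplify(eta/s - 1/(4*sp.pi)) == 0
\end{verbatim}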
\section{The relation to dimensional reduction}\label{sec:DimRed}
In the previous section, we studied the fluids in the field theory dual of
the Lifshitz space-time with hyperscaling-violation.
We found that the fluid has zero bulk viscosity $\zeta=0$.
It is known that the Lifshitz space-time with hyperscaling-violation
can be obtained by dimensional reduction from
a higher-dimensional Lifshitz space-time without hyperscaling-violation \cite{gk1}.
As discussed in \cite{km1}, the fluid for the Lifshitz space-time
without hyperscaling-violation also has zero bulk viscosity.
Upon compactification, this Lifshitz geometry with flat internal space and constant radius becomes the lower-dimensional geometry with hyperscaling violation.
In such cases it was shown in \cite{Kanitscheider:2009as,gs} that
the bulk viscosity becomes non-zero after dimensional reduction.
In this section, we will first derive the general fluid equations in the higher-dimensional theory by allowing also the internal volume to be an additional thermodynamic variable.
Then we will reduce to the lower dimension. We will show that the appropriate reduction that corresponds to the thermodynamic ansatz in the lower dimension is compatible with our four-dimensional results of the previous section.
We first consider the following theory, which is
Einstein gravity with a Maxwell field and a single scalar field;
\begin{equation}
S
= \frac{1}{16\pi G} \int d^{D+1} x \sqrt{-g_D}
\left(R - 2 \Lambda - \frac{1}{2} (\partial\phi_1)^2 - \frac{1}{4} e^{\tilde\lambda\phi_1}F^2\right) \ ,
\label{ActionHigherDim}
\end{equation}
where $D=d-\theta$, and for the moment we assume that $\theta$ is a negative integer.
This model has the Lifshitz space-time without hyperscaling-violation as a solution;
\begin{equation}
ds^2_D = - r^{2z} dt^2 + \frac{dr^2}{r^2} + \sum_{i=1}^{d-\theta-1} r^2 (dx^i)^2 ,
\label{MetricHigherDim}
\end{equation}
with the following gauge field and dilaton;
\begin{align}
A_t &= a \sqrt{\mu} \, r^{z+d-\theta-1} \ , &
e^{\tilde\lambda\phi_1} &= a^{-2} r^{-2(d-\theta-1)} \ ,
\end{align}
where the parameters $z$, $a$ and $\mu$ are related to the parameters of the action (coupling constants) as
\begin{align}
\tilde\lambda^2 &= 2 \frac{d-\theta-1}{z-1} \ , \\
\Lambda &= - \frac{(z+d-\theta-1)(z+d-\theta-2)}{2} \ , \\
\mu &= \frac{2(z-1)}{z+d-\theta-1} \ .
\end{align}
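For orientation, an illustrative numerical instance (our own choice of parameters): for $d=4$, $\theta=-1$ and $z=2$, these relations give $\tilde\lambda^2 = 8$, $\Lambda = -15$ and $\mu = 1/3$.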
We compactify the $(-\theta)$-dimensional extra dimensions and
consider the dimensional reduction to $(d+1)$-dimensional spacetime.
The metric is decomposed as
\begin{align}
ds^2_D &= e^{-\tilde\nu\phi_2}ds^2 + \sum_{i=d}^{d-\theta-1} e^{-(d-1)\tilde\nu\phi_2/\theta} (dx^i)^2 \ ,
\label{MetricHD}
\end{align}
where $ds^2$ is the metric after the dimensional reduction and
we compactified $x^i$-directions with $i=d,\cdots,d-\theta-1$
to circles with period $x^i \sim x^i + 1$.
The constant $\tilde\nu$ is given by
\begin{equation}
\tilde\nu^2 = - \frac{2\theta}{(d-1)(d-\theta-1)} \ .
\end{equation}
Then, after the dimensional reduction, the action becomes
\begin{equation}
S
= \frac{1}{16\pi G} \int d^{d+1} x \sqrt{-g}
\left(R - 2 \Lambda e^{- \tilde\nu \phi_2} - \frac{1}{2} (\partial\phi_1)^2
- \frac{1}{2} (\partial\phi_2)^2
- \frac{1}{4} e^{\tilde\lambda\phi_1 + \tilde\nu \phi_2}F^2\right) \ .
\label{BulkActionDimRed}
\end{equation}
Now, the geometry has hyperscaling-violation because of
the redefinition of the metric, and is given by
\begin{align}
ds^2 &= r^{-2\theta/(d-1)} d\tilde s^2 \ ,
\\
d\tilde s^2 &= - r^{2z} dt^2 + \frac{dr^2}{r^2} + \sum_{i=1}^{d-1} r^2 (dx^i)^2 \ ,
\end{align}
while the additional dilaton field $\phi_2$ is
\begin{equation}
e^{\tilde\nu\phi_2} = r^{-2\theta/(d-1)} \ .
\end{equation}
More generally, $\phi_2$ also has a constant mode, as does $\phi_1$,
but it is independent of the gauge field.
Then, the solution becomes
\begin{align}
ds^2 &= e^{2\chi} d\tilde s^2 \ ,
\\
d\tilde s^2 &= - r^{2z} dt^2 + \frac{dr^2}{r^2} + \sum_{i=1}^{d-1} r^2 (dx^i)^2 , \label{gtilde}
\\
e^{\tilde\nu\phi_2} &= e^{2\chi} = e^{2\chi_0} r^{-2\theta/(d-1)} \ ,
\end{align}
where $\chi_0$ is an arbitrary constant. We define a new parameter $b$ as
\begin{align}
\chi_0 &= - \frac{\theta}{(d-1)(d-1-\theta)}\log b \ .
\end{align}
In a sense, $b$ parametrizes the internal volume.
The model now has two dilatons instead of one, and the solution has the additional parameter $b$,
which is the constant mode of the additional dilaton $\phi_2$.
The solution becomes equivalent to that in the previous sections
if the dilatons satisfy the following condition;
\begin{equation}
\lambda\phi
= \left(1 + \frac{\theta}{(d-1)(d-\theta-1)}\right) \tilde \lambda\phi_1
= \left(1 + \frac{(d-1)(d-\theta-1)}{\theta} \right) \tilde\nu\phi_2 \ .
\end{equation}
In fact the above solution satisfies this condition
except for the constant mode $b$,
and hence, it is equivalent to the solution in the previous sections if
\begin{equation}
b=a \ . \label{b=a}
\end{equation}
As already noted, the value of $b$ controls the volume of the internal dimensions.
For the higher-dimensional Lifshitz geometry \eqref{MetricHigherDim},
we have
\begin{equation}
b=1 \ ,
\end{equation}
but an arbitrary $b$ can be introduced by a redefinition of
the coordinates in the extra dimensions.
The solution which describes the hydrodynamics
can be calculated in a similar fashion to the previous sections.
We now specialize to $d=4$.
The first order solution of the derivative expansion
is obtained as
\begin{align}
ds^2_D
&= d\tilde s^2 + \sum_{i=3}^{3-\theta} e^{-(d-1)\tilde\nu\phi_2/\theta} (dx^i)^2 \ ,
\label{SolBeforeDR}
\\
ds^2 &= r^{-2\theta/(d-1)} e^{\tilde\nu\varphi_2} d\tilde s^2 \ ,
\label{SolAfterDR}
\\
d\tilde s^2 &= - r^{2z} f dt^2 + 2 b^{-\frac{\theta}{3(3-\theta)}} r^{z-1} dt\,dr
+ r^2 (dx^i - v^i dt)^2
\notag\\&\quad
+ \frac{2}{3-\theta} b^{-\frac{\theta}{3(3-\theta)}} r^z \partial_i v^i dt^2
- r^2 F_1(r) \sigma_{ij} (dx^i - v^i dt) (dx^j - v^j dt)
\notag\\&\quad
- \frac{2\theta}{3(3-\theta)} F_1(r)
\left[\partial_i v^i - b^{-1}\left(\partial_t b + v^i \partial_i b\right)\right]
\left( - r^{2z} f dt^2 + 2 b^{-\frac{\theta}{3(3-\theta)}} r^{z-1} dt dr \right)
\notag\\&\quad
+ 2 \left(F_3(r) \partial_i r_0 + F_5(r) \partial_i b\right)dt (dx^i - v^i dt) \ ,
\\
\varphi_2 &= \sqrt{-\frac{2\theta}{3(3-\theta)}} F_1(r)
\left[\partial_i v^i - b^{-1}\left(\partial_t b + v^i \partial_i b\right)\right] \ ,
\end{align}
where $\theta$ must be a negative integer for the metric $ds_D^2$ before the dimensional reduction to be well defined.
The non-zero components of $v^i$ and $\mathcal A_i$ are introduced only for $i=1,2,3$ and
the parameters $v^i$, $r_0$, $a$, $b$ and $\mathcal A_\mu$ are replaced by functions
which depend on $x^\mu$ with $\mu = 0,\cdots,3$.
We have redefined the coordinates $x^\mu$ as \eqref{CoordRedef} in the previous section
and hence the Hawking temperature is also rescaled as
\begin{equation}
T = \frac{z+d-1-\theta}{4\pi} \, e^{-\chi_0} r_0^z
= \frac{z+d-1-\theta}{4\pi} \, b^{\frac{\theta}{(d-1)(d-1-\theta)}} r_0^z \ ,
\label{HawkingTdr}
\end{equation}
where we have written this relation for arbitrary dimension $d$.
The functions $F_i(r)$ are given by
\begin{align}
F_1(r)
&=
b^{-\frac{\theta}{3(3-\theta)}}
\int dr \frac{r^{3-\theta}-r_0^{3-\theta}}{r(r^{z+3-\theta}-r_0^{z+3-\theta})} \ ,
\\
F_2(r)
&=
\left( 2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}\right) \int dr\, \widehat F_1 (r)
\\
F_3(r)
&=
- 2(z-1) a^{-1}b^{-\frac{\theta}{3(3-\theta)}} \int \frac{dr}{r^{6-z+\theta}} F_2(r)
\\
F_4(r)
&=
\left( 2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}\right) \int dr\, \widehat F_2 (r)
\\
F_5(r)
&=
\int \frac{dr}{r^{6-z+\theta}}
\left(- 2(z-1) a^{-1} b^{-\frac{\theta}{3(3-\theta)}} F_4(r)
+ \frac{\theta}{3(3-\theta)} b^{-1-\frac{\theta}{3(3-\theta)}} r^{3-\theta}
\right)
\\
\widehat F_i(r)
&=
\frac{r^{7-2\theta} \widetilde F_i(r)}{(r^{z+3-\theta}-r_0^{z+3-\theta})[2(z-1) r^{z+3-\theta} - (z-5+\theta) r_0^{z+3-\theta}]^2}
\\
\widetilde F_1(r)
&=
\frac{z+3-\theta}{2(z-1)} a \frac{r_0^{z-\theta}}{r^{5-\theta}}
\Bigl(2 (z-1) (5-\theta)r^{z+3-\theta} r_0^2 - z(z+3-\theta) r^{5-\theta} r_0^z
\notag\\&\qquad\qquad\qquad\qquad\qquad
+ (z-5+\theta)(z-2) r_0^{z+3-\theta} \Bigr)
\\
\widetilde F_2(r)
&=
- \frac{\theta}{6 (3-\theta) (z-1)} a b^{-1} r^{-\theta -5} r_0^{-2 \theta }
\Bigl(-4 (z-1)^2 r_0^{2 \theta } r^{2 z+6}-r^{2 \theta } r_0^{2 z+6} (z-5+\theta)^2
\notag\\&\qquad\qquad\qquad
+r^{\theta +5} (z+3-\theta)^2 r_0^{\theta +2 z+1}
+4 (z-1) (z-5+\theta) r_0^{\theta +z+3} r^{\theta+z+3}\Bigr) \ .
\end{align}
The first order solution of the gauge field is
\begin{align}
\frac{1}{\sqrt\mu} A &= a(x) \left[b^{\frac{\theta}{3(3-\theta)}}\left(r^{z+3-\theta} - r_0^{z+3-\theta}(x)\right)
- \frac{1}{3-\theta}r^{3-\theta} \partial_i v^i(x)\right]dt
\notag\\&\quad
- a(x) r^{2-\theta} dr + \mathcal A_\mu(x) dx^\mu
+ \left(F_2(r) \partial_i r_0 + F_4(r) \partial_i b \right) (dx^i - v^i dt) \ ,
\end{align}
and $\phi_1$ has no correction term,
while $\phi_2$ receives the correction term $\varphi_2$;
\begin{align}
\phi_2 &= - \frac{\theta}{(d-1)\tilde\nu} \log r
+ \frac{2}{\tilde\nu} \chi_0 + \varphi_2 \ .
\end{align}
The solution above must satisfy the following constraints;
\begin{align}
0 &= \partial_t a + v^i \partial_i a - a \partial_i v^i ,
\label{ConstADimRed}
\\
0 &= \partial_t r_0 + v^i \partial_i r_0 + \frac{1}{3-\theta} r_0 \partial_i v^i ,
\label{ConstTDimRed}
\\
0 &= \mathcal F_{ti} + v^j \mathcal F_{ji}
+ \frac{z+3-\theta}{2 (z-1)} a b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta}
\left(z \frac{\partial_i r_0}{r_0} + \frac{\theta}{3(3-\theta)} \frac{\partial_i b}{b} \right) \ .
\label{ConstSDimRed}
\end{align}
The hydrodynamic solution from the dimensional reduction also
reproduces the results of the previous sections if $b = a$.
\subsection{Higher-dimensional thermodynamics and hydrodynamics}
We first consider thermodynamics before the dimensional reduction.
For $D$-dimensional space-time with arbitrary $D = d-\theta$,
the energy, entropy and charge are given by
\begin{align}
E_D
&=
\mathcal E_D V_{D-1} = \frac{D-1}{16\pi G} r_0^{z+D-1} V_{D-1}
\\
S_D
&=
s V_{D-1} = \frac{1}{4G} r_0^{D-1} V_{D-1} \ ,
\\
N_D &= n V_{D-1} = \frac{z-1}{16 \pi G a} V_{D-1} \ ,
\end{align}
where $V_{D-1}$ is the volume of the $(D-1)$-dimensional space.
The first law of thermodynamics is expressed as
\begin{equation}
dE_D = T_D dS_D - P_D dV_{D-1} + \mu_D dN_D \ ,
\end{equation}
and the temperature, pressure and chemical potential are calculated as
\begin{align}
T_D &= \left(\frac{\partial E_D}{\partial S_D}\right)_{V_{D-1}, N_D}
= \frac{z+D-1}{4\pi} r_0^z \ ,
\\
P_D &= - \left(\frac{\partial E_D}{\partial V_{D-1}}\right)_{S_D, N_D}
= \frac{z}{16\pi G} r_0^{z+D-1} \ ,
\\
\mu_D &= \left(\frac{\partial E_D}{\partial N_D}\right)_{S_D, V_{D-1}}
= 0 \ .
\end{align}
Here, the temperature $T_D$ agrees with the temperature measured by a local observer,
\begin{equation}
T_D = \frac{T}{\sqrt{g_{tt}}} \ ,
\end{equation}
where $T$ is the Hawking temperature of the black hole.
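These thermodynamic derivatives can be verified symbolically. The following minimal \texttt{sympy} sketch (with the shorthand $\delta \equiv D-1 > 0$, which keeps all symbolic power bases positive) inverts $S_D(r_0, V_{D-1})$ and reproduces $T_D$ and $P_D$; since $E_D$ is independent of $N_D$, the vanishing of $\mu_D$ is immediate:
\begin{verbatim}
import sympy as sp

dD, z, S, V, G = sp.symbols('delta z S V G', positive=True)  # delta = D-1

r0 = (4*G*S/V)**(1/dD)                 # invert S_D = r0**(D-1) V/(4G)
E  = dD/(16*sp.pi*G)*r0**(z + dD)*V    # E_D as a function of (S, V)

T = sp.diff(E, S)                      # temperature at fixed V, N
P = -sp.diff(E, V)                     # pressure at fixed S, N

chk = lambda e: sp.simplify(sp.powsimp(e, force=True)) == 0
assert chk(T - (z + dD)/(4*sp.pi)*r0**z)
assert chk(P - z/(16*sp.pi*G)*r0**(z + dD))
\end{verbatim}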
Next, we consider the fluid in $D$-dimensional space-time with $D-1=3-\theta$.
The vielbein behaves near the boundary $r\to\infty$ as
\begin{align}
E^0_{(D)} &= r^z \tau_D \ , &
E^i_{(D)} &= r \hat e^i_D \ ,
\end{align}
where
\begin{align}
\tau_D &= e^{-\chi_0} dt \ ,
\\
\hat e^i_D &= e^{-\chi_0} \left(dx^i - v^i dt\right) \ , &
(i &= 1,2,3)
\\
\hat e^i_D &= e^{-\frac{6}{\theta}\chi_0} dx^i \ . &
(i &= 4,\cdots,3-\theta)
\end{align}
This implies that the Newton-Cartan data on the boundary is given by
\begin{align}
\tau_D &= e^{-\chi_0} dt \ ,
\\
\hat v_D^\mu &= e^{\chi_0} (1,v^i,0) \ ,
\\
h_D^{\mu\nu} &= \mathrm{diag}
(0,
e^{2\chi_0} , \cdots , e^{2\chi_0} ,
e^{\frac{6}{\theta}\chi_0} , \cdots , e^{\frac{6}{\theta}\chi_0} ) \ .
\end{align}
The stress-energy tensor on the boundary is calculated from
the first order solution for the metric before the dimensional reduction \eqref{SolBeforeDR},
in a similar fashion to Section~\ref{sec:StressTensor} as
\begin{align}
\widetilde T_D{}^0{}_0
&=
\frac{1}{8\pi G}
\left(- \frac{3-\theta}{2} r_0^{z+3-\theta}
- \frac{z-1}{a} b^{-\frac{\theta}{3(3-\theta)}} v^i \mathcal A_i \right)
\ ,
\\
\widetilde T_D{}^i{}_0
&=
\frac{1}{8\pi G}
\biggl[-\frac{z+3-\theta}{2} r_0^{z+3-\theta} v^i
+ \frac{z-1}{a} b^{-\frac{\theta}{3(3-\theta)}} v^i \mathcal A_t
+ \frac{1}{2} r_0^{3-\theta} \sigma_D{}^i{}_{j} v^j
\notag\\&\quad\qquad\qquad
+ \frac{z(z+3-\theta)}{4(z-1)} b^{-\frac{\theta}{3(3-\theta)}} r_0^{2z-\theta} \left( \partial_i r_0
- \frac{\theta}{3 (3-\theta) z} \frac{r_0}{b} \partial_i b \right)
\biggr]
\ ,
\\
\widetilde T_D{}^0{}_i
&=
\frac{1}{8\pi G}\frac{z-1}{a} b^{-\frac{\theta}{3(3-\theta)}}
\mathcal A_i
\ ,
\\
\widetilde T_D{}^i{}_j
&=
\frac{1}{8\pi G}
\biggl\{\frac{z}{2} r_0^{z+3-\theta} \delta_{ij}
- \frac{1}{2} r_0^{3-\theta} \sigma_D{}^i{}_{j}
+ \frac{z-1}{a} b^{-\frac{\theta}{3(3-\theta)}} \left[v^i \mathcal A_j
- \delta_{ij} \left(\mathcal A_t + v^k \mathcal A_k\right)\right]
\biggr\} \ ,
\end{align}
where the shear tensor is given by
\begin{equation}
\sigma_D{}^i{}_j
= b^{-\frac{\theta}{3(3-\theta)}}
\left(\partial_i v^j + \partial_j v^i - \frac{2}{3-\theta} \delta_{ij} \partial_k v^k \right)
+ \frac{2\theta}{3(3-\theta)}
b^{-\frac{\theta}{3(3-\theta)}-1} \left(\partial_t b + v^k \partial_k b\right) \delta_{ij} \ ,
\end{equation}
for $i,j=1,\cdots,d-1$.
This stress-energy tensor can be expressed in the context of the Newton-Cartan theory as
\begin{align}
\widetilde T_D{}^\mu{}_\nu
&=
\mathcal E_D \hat v_D^\mu \tau_D{}_\nu + P_D \hat h_D{}^\mu{}_\nu
- \kappa_D \tau_D{}_\nu h_D^{\mu\rho} \left(\partial_\rho - \mathcal G_\rho\right) T_D
\notag\\&\quad
- \eta_D \sigma_D{}_{\rho\sigma} h_D^{\rho\mu} \hat h_D{}^\sigma{}_\nu
+ n \hat v_D^\mu \mathcal A_\nu - n \hat v_D^\rho \mathcal A_\rho \delta^\mu{}_\nu \ ,
\end{align}
where we have defined
\begin{equation}
\mathcal G_\mu = \left(\partial_\nu \tau_D{}_\mu- \partial_\mu \tau_D{}_\nu \right) \hat v_D^\nu \ .
\end{equation}
The shear tensor can be expressed in terms of the Newton-Cartan geometry as
\begin{align}
\sigma_D{}_{\mu\nu}
&=
\hat h_D{}_{\rho\nu} \widehat D_\mu \hat v_D^\rho
+ \hat h_D{}_{\rho\mu} \widehat D_\nu \hat v_D^\rho
- \frac{2}{3-\theta} \hat h_D{}_{\mu\nu} \widehat D_\rho \hat v_D^\rho
\notag\\
&= \pounds_{\hat v_D} \hat h_D{}_{\mu\nu}
- \frac{2}{3-\theta} \hat h_D{}_{\mu\nu} \widehat D_\rho \hat v_D^\rho \ ,
\end{align}
where $\widehat D_\mu$ is the covariant derivative in the Newton-Cartan geometry
in the holographic frame $\bar v^\mu = \hat v^\mu_D$,
(whose Christoffel symbol is given by \eqref{NCC} with $\bar v^\mu = \hat v_D^\mu$,
see Appendix~\ref{app:NewtonCartan} for more details),
and $\pounds$ is the Lie derivative.
The energy density, pressure, particle number density and the temperature are given by
\begin{align}
\mathcal E_D &= \frac{3-\theta}{16\pi G} r_0^{z+3-\theta} \ , &
P_D &= \frac{z}{16\pi G} r_0^{z+3-\theta} \ , &
n &= \frac{z-1}{16\pi G} a^{-1}\ , &
T_D &= \frac{z+3-\theta}{4\pi} r_0^z \ .
\end{align}
The transport coefficients are read off as
\begin{align}
\eta_D &= \frac{1}{16\pi G} r_0^{3-\theta} \ , &
\zeta_D &= 0 \ , &
\kappa_D &= \frac{1}{8(z-1)G} r_0^{z+1-\theta} \ .
\end{align}
The stress-energy tensor can also be expressed in terms of the energy current,
momentum density, stress tensor and particle number current as
\begin{align}
\widetilde T_D{}^\mu{}_\nu
=
- \widetilde{\mathcal E}_D^\mu \tau_D{}_\nu + \hat v_D^\mu \widetilde{\mathcal P}_D{}_\nu
+ \widetilde{\mathcal T}_D{}^\mu{}_\nu
+ J_D^\mu \mathcal A_\nu - J_D^\rho \mathcal A_\rho \delta^\mu{}_\nu
\ ,
\end{align}
where
\begin{align}
\widetilde{\mathcal E}_D^\mu
&=
\mathcal E_D \hat v_D^\mu
- \kappa_D h_D^{\mu\rho} \left(\partial_\rho - \mathcal G_\rho\right) T_D \ ,
\\
\widetilde{\mathcal P}_D{}_\nu
&=
0 \ ,
\\
\widetilde{\mathcal T}_D{}^\mu{}_\nu
&=
P_D \hat h_D{}^\mu{}_\nu
- \eta_D \sigma_D{}_{\rho\sigma} h_D{}^{\rho\mu} \hat h_D{}^\sigma_\nu \ ,
\\
J_D^\mu
&=
n \hat v_D^\mu \ .
\end{align}
The conservation equations in the Newton-Cartan theory are given by
\begin{align}
(\widehat D_\mu - \mathcal G_\mu) \widetilde{\mathcal E}_D^\mu
&=
\hat v_D^\mu (F^\tau_{\mu\nu} \widetilde{\mathcal E}_D^\nu - \mathcal F_{\mu\nu} J_D^\nu)
- (\widehat D_\mu \hat v_D{}^\nu) \widetilde{\mathcal T}_D{}^\mu{}_\nu \ ,
\\
h_D^{\rho\mu} (\widehat D_\nu - \mathcal G_\nu) \widetilde{\mathcal T}_D{}^\nu{}_\mu
&=
h_D^{\rho\mu} \left[\hat v_D^\nu \widehat D_\mu \widetilde{\mathcal P}_D{}_\nu
- \widehat D_\nu (\hat v_D^\nu \widetilde{\mathcal P}_D{}_\mu)
+ \mathcal F_{\mu\nu} J_D^\nu - F^\tau_{\mu\nu} \widetilde{\mathcal E}_D^\nu \right] \ ,
\\
0 &= \left(\widehat D_\mu - \mathcal G_\mu\right) J_D^\mu \ ,
\end{align}
where
\begin{align}
F^\tau_{\mu\nu} &= \partial_\mu \tau_D{}_\nu - \partial_\nu \tau_D{}_\mu \ .
\end{align}
Then, they can be expressed in terms of the fluid variables as
\begin{align}
0 &= \hat v_D^\mu \partial_\mu \mathcal E_D
+ (\mathcal E_D + P_D) \widehat D_\mu \hat v_D^\mu
- \frac{1}{2} \eta_D \sigma_D{}^\mu{}_\nu \sigma_D{}^\nu{}_\mu
- (\widehat D_\mu - 2 \mathcal G_\mu)
\left[\kappa_D h_D^{\mu\rho} \left(\partial_\rho - \mathcal G_\rho\right) T_D\right]
\ ,
\\
0 &= h_D^{\rho\nu} \partial_\nu P_D - h_D^{\rho\nu} \mathcal G_\nu (\mathcal E_D + P_D)
- h_D^{\rho\mu} \mathcal F_{\mu\nu} J_D^\nu
- h_D^{\mu\rho} h_D^{\nu\sigma}(\widehat D_\sigma - \mathcal G_\sigma)
\left(\eta_D \sigma_D{}_{\mu\nu}\right)
\ ,
\\
0 &= \widehat D_\mu \left(n \hat v_D^\mu\right) \ .
\end{align}
It is straightforward to verify that these equations are equivalent to
the constraint equations in the bulk equations of motion.
\subsection{Thermodynamics after the dimensional reduction}\label{sec:ThermoDR}
In this subsection we take the lower dimensional boundary theory to have space dimension $d-1$. For general $b$, we consider in the following the first law, where the thermodynamic variables are the entropy $S$, the $(d-1)$-dimensional volume\footnote{Measured in the $d$-dimensional metric in the Einstein frame.} $V$ and the charge $N$, which is related to the variable $a$.
We have already found that (in general $d$)
\begin{align}
E
&=
\mathcal E V = \frac{d-1-\theta}{16\pi G} b^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} V
\\
S
&=
s V = \frac{1}{4G} r_0^{d-1-\theta} V \ ,
\\
N &= nV = \frac{z-1}{16 \pi G a} V \ .
\end{align}
In general, $b$ is not related to the particle number density $n$
and hence the energy does not depend on the charge.
Then the first law can be written as,
\begin{equation}
dE = TdS - PdV + \mu dN \ ,
\end{equation}
where
\begin{align}
T
&= \left(\frac{\partial E}{\partial S}\right)_{V,N}
= \frac{z+d-1-\theta}{4\pi} b^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z} \ ,
\\
P
&= - \left(\frac{\partial E}{\partial V}\right)_{S,N}
= \frac{z}{16\pi G} b^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} \ ,
\\
\mu
&= \left(\frac{\partial E}{\partial N}\right)_{S,V}
= 0 \ .
\end{align}
The black hole solution also has another variable $b$,
which is related to the scalar source as
\begin{equation}
\tilde \phi_2 = \tilde\nu \log b \ .
\end{equation}
Then, by taking into account this scalar source as an additional thermodynamic variable,
the first law of thermodynamics may be expressed as
\begin{equation}
dE = TdS - PdV + \mu dN + \langle\widetilde{\mathcal O}_2\rangle d\tilde\phi_2 \ ,
\end{equation}
where $\langle\widetilde{\mathcal O}_2\rangle$ is the vev of the dual operator to the scalar $\phi_2$,
which is expressed as
\begin{equation}
\langle\widetilde{\mathcal O}_2\rangle
=
\frac{\tilde\nu}{2} b^\frac{\theta}{(d-1)(d-1-\theta)} \left(\sum_{\mu=0}^{d-1} T_D{}^\mu{}_\mu
+\frac{d-1}{\theta}\sum_{i=d}^{d-1-\theta} T_D{}^i{}_i\right)
=
- \frac{\tilde\nu}{2}\mathcal E - \tilde\nu \eta \bar\vartheta \ .
\end{equation}
We will discuss this operator later.
In the previous section, we obtained the thermodynamics in the lower dimension, with one less variable.
To recover it from the one here we must take a codimension one section of the thermodynamic variables.
As argued already, the correct section involves taking $b=a$ above.
With this constraint, we obtain
\begin{align}
E
&=
\mathcal E V = \frac{d-1-\theta}{16\pi G} a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} V
\\
S
&=
s V = \frac{1}{4G} r_0^{d-1-\theta} V \ ,
\\
N &= nV = \frac{z-1}{16 \pi G a} V \ ,
\end{align}
and the first law takes the form
\begin{align}
d E
=
T dS - \widetilde P dV
+ \tilde\mu dN \ .
\end{align}
The temperature, pressure and chemical potential are now given by
\begin{align}
T
&=
\left(\frac{\partial E}{\partial S}\right)_{V,N}
= \frac{z+d-1-\theta}{4\pi} a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z} \ ,
\\
\widetilde P &=
- \left(\frac{\partial E}{\partial V}\right)_{S,N}
= \frac{1}{16\pi G} \left(z-\frac{\theta}{d-1}\right)
a^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} \ , \label{Ptilde}
\\
\tilde\mu &= \left(\frac{\partial E}{\partial N}\right)_{S,V}
= - \frac{\theta}{(d-1)(z-1)}
a^{1 + \frac{\theta}{(d-1)(d-1-\theta)}} r_0^{z+d-1-\theta} \ . \label{muTilde}
\end{align}
Note that these are exactly the thermodynamic quantities and the first law we obtained in the previous section (for $d=4$).
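As a cross-check, these derivatives can also be verified symbolically. A minimal \texttt{sympy} sketch for $d=4$, assuming $z>1$ and $\theta<0$ (parametrized as $z = 1+w$, $\theta = -\theta_0$ with $w, \theta_0 > 0$, so that all symbolic power bases are positive):
\begin{verbatim}
import sympy as sp

w, th0, G, S, V, N = sp.symbols('w theta_0 G S V N', positive=True)
z, theta, d = 1 + w, -th0, 4             # assumes z > 1, theta < 0

dt = d - 1 - theta                       # = 3 - theta
alpha = theta/((d - 1)*dt)
r0 = (4*G*S/V)**(1/dt)                   # from S = r0**(3-theta) V/(4G)
a  = (z - 1)*V/(16*sp.pi*G*N)            # from N = (z-1) V/(16 pi G a)
E  = dt/(16*sp.pi*G)*a**alpha*r0**(z + dt)*V

T, Pt, mut = sp.diff(E, S), -sp.diff(E, V), sp.diff(E, N)

chk = lambda e: sp.simplify(sp.powsimp(e, force=True)) == 0
assert chk(T - (z + dt)/(4*sp.pi)*a**alpha*r0**z)
assert chk(Pt - (z - theta/(d - 1))/(16*sp.pi*G)*a**alpha*r0**(z + dt))
assert chk(mut + theta/((d - 1)*(z - 1))*a**(1 + alpha)*r0**(z + dt))
\end{verbatim}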
The Ward identity for the scaling symmetry is simply expressed as
\begin{equation}
\left(z-\frac{\theta}{d-1}\right) \mathcal E = (d-1-\theta) P \ .
\end{equation}
\subsection{Hydrodynamics after the dimensional reduction}
We now set $d=4$ again.
The stress-energy tensor and the fluid equations in the lower-dimensional theory can be calculated straightforwardly.
The stress-energy tensor is obtained as
\begin{align}
\widetilde T^0{}_0
&=
\frac{1}{8\pi G}
\left(- \frac{3-\theta}{2} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta}
- \frac{z-1}{a} v^i \mathcal A_i \right)
\ ,
\\
\widetilde T^i{}_0
&=
\frac{1}{8\pi G}
\biggl[-\frac{z+3-\theta}{2} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} v^i
+ \frac{z-1}{a} v^i \mathcal A_t
+ \frac{1}{2} r_0^{3-\theta} \sigma_{ij} v^j
\notag\\&\quad\qquad\qquad
- \frac{\theta}{3(3-\theta)} r_0^{3-\theta}
\left[\partial_j v^j - b^{-1}\left(\partial_t b + v^j \partial_j b\right)\right] v^i
\notag\\&\quad\qquad\qquad
+ \frac{z(z+3-\theta)}{4(z-1)} r_0^{2z-\theta} \left( \partial_i r_0
- \frac{\theta}{3 (3-\theta) z} \frac{r_0}{b} \partial_i b \right)
\biggr]
\ ,
\\
\widetilde T^0{}_i
&=
\frac{1}{8\pi G}\frac{z-1}{a}
\mathcal A_i
\ ,
\\
\widetilde T^i{}_j
&=
\frac{1}{8\pi G}
\biggl\{\frac{z}{2} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \delta_{ij}
- \frac{1}{2} r_0^{3-\theta} \sigma_{ij}
+ \frac{z-1}{a} \left[v^i \mathcal A_j
- \delta_{ij} \left(\mathcal A_t + v^k \mathcal A_k\right)\right]
\notag\\&\quad\qquad\qquad
+ \frac{\theta}{3(3-\theta)} r_0^{3-\theta}
\left[\partial_k v^k - b^{-1}\left(\partial_t b + v^k \partial_k b\right)\right] \delta_{ij}
\biggr\} \ ,
\end{align}
and the expectation values of the dual operators of the dilatons $\phi_1$ and $\phi_2$
are calculated as
\begin{align}
\langle{\mathcal O}_1\rangle
&= - \frac{\sqrt{(z-1)(3-\theta)}}{16 \pi G}
\left[ \frac{1}{2} b^\frac{\theta}{3(3-\theta)} r_0^{z+3-\theta}
- \frac{2\sqrt{2}}{a} \left( \mathcal A_t + v^i \mathcal A_i \right)
\right]
\ ,
\\
\langle{\mathcal O}_2\rangle
&= - \frac{1}{16 \pi G}\biggl\{
\sqrt{\frac{-\theta}{6(3-\theta)}}\left[ (z+2-\theta) b^\frac{\theta}{3(3-\theta)} r_0^{z+3-\theta}
- \frac{2(z-1)}{a} \left( \mathcal A_t + v^i \mathcal A_i \right)
\right]
\notag\\&\quad\qquad
+ \sqrt{-\frac{2\theta}{3(3-\theta)}}\,
r_0^{3-\theta}\left[\partial_i v^i - b^{-1} (\partial_t b + v^i \partial_i b) \right]
\biggr\}
\ .
\end{align}
For $b=a$, the results above agree with those in Section~\ref{sec:StressTensor}.
The stress-energy tensor after the dimensional reduction $\widetilde T^\mu{}_\nu$
can be expressed in terms of the fluid variables as
\begin{align}
\widetilde T^\mu{}_\nu
&=
\mathcal E \hat v^\mu \tau_\nu + P \hat h^\mu{}_\nu
- \kappa \tau_\nu h^{\mu\rho} \partial_\rho T
- \eta \sigma_{ab} e_a^\mu \hat e^b_\nu
\notag\\&\quad
- \zeta \bar\vartheta ~e_a^\mu \hat e^a_\nu
+ n \hat v^\mu \mathcal A_\nu - n \hat v^\rho \mathcal A_\rho \delta^\mu{}_\nu \ ,
\label{StressEnergyDimRed}
\end{align}
where $\bar\vartheta$ is defined by
\begin{equation}
\bar\vartheta = \partial_i v^i - b^{-1}\left(\partial_t b + v^i \partial_i b\right) \ .
\end{equation}
If $b=1$, this term gives the expansion term $\partial_i v^i$.
The energy density $\mathcal E$, pressure $P$ and particle number density $n$
are the same as those in Section~\ref{sec:ThermoDR};
\begin{align}
\mathcal E &= \frac{3-\theta}{16\pi G} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
P &= \frac{z}{16\pi G} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
n &= \frac{z-1}{16\pi G} a^{-1}\ .
\label{FluidVariablesDimRed}
\end{align}
The transport coefficients, the heat conductivity $\kappa$ and the shear viscosity $\eta$, are also read off as
\begin{align}
\kappa &= \frac{1}{8(z-1)G} b^{-\frac{\theta}{3(3-\theta)}} r_0^{z+1-\theta} \ ,
&
\eta &= \frac{1}{16\pi G} r_0^{3-\theta} \ .
\end{align}
For $b=a$, $\bar\vartheta$ vanishes, but this does not mean that the fluid is incompressible,
since $\bar\vartheta$ is not just the expansion but contains extra terms.
The expansion itself is non-zero but cancels against the extra terms; hence
the bulk viscosity effectively vanishes in the lower dimension.
For the dimensional reduction with $b=1$ as in \eqref{MetricHigherDim},
$\bar\vartheta$ simply gives the expansion and hence the bulk viscosity $\zeta$ in that case is
\begin{equation}
\zeta = - \frac{1}{8\pi G}\frac{\theta}{3(3-\theta)} r_0^{3-\theta} \ .
\end{equation}
It should be noted that $\theta$ is negative
for the dimensional reduction and hence $\zeta$ is positive.
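Equivalently, in terms of the shear viscosity $\eta = \frac{1}{16\pi G} r_0^{3-\theta}$ found above, the bulk viscosity can be written as
\begin{equation}
\zeta = - \frac{2\theta}{3(3-\theta)}\, \eta \ ,
\end{equation}
which makes the sign of $\zeta$ manifest for $\theta<0$.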
The constraint equations can be written in terms of the fluid variables as
\begin{align}
0 &= \partial_t \mathcal E + v^i \partial_i \mathcal E + (\mathcal E + P) \partial_i v^i
\notag\\&\qquad
- \frac{1}{2} \eta \sigma_{ij}\sigma_{ij} - \zeta \bar\vartheta \partial_i v^i
- \partial_i( \kappa \partial_i T)
- \tilde\nu\,\langle\widetilde{\mathcal O}_2\rangle
b^{-1} \left(\partial_t b + v^i \partial_i b \right)
\ , \label{LifContinuityDimRed}
\\
0 &= \partial_i P + J^\mu \mathcal F_{\mu i}
- \partial_j \left(\eta\sigma_{ij} \right)
- \partial_i \left(\zeta\bar\vartheta\right)
+ \tilde\nu \,\langle\widetilde{\mathcal O}_2\rangle b^{-1} \partial_i b
\ , \label{LifNavierStokesDimRed}
\\
0 &= \partial_t n + \partial_j (n v^j) \ , \label{LifChargeConsDimRed}
\end{align}
where $\widetilde{\mathcal O}_2$ is the dual of the dilaton $\phi_2$ but without the contribution from
the counterterm $A^2$;
\begin{equation}
\langle\widetilde{\mathcal O}_2\rangle = - \frac{1}{16 \pi G}
\sqrt{\frac{-2\theta}{3(3-\theta)}}
\left\{ \frac{3-\theta}{2} b^\frac{\theta}{3(3-\theta)} r_0^{z+3-\theta}
+ r_0^{3-\theta}\left[\partial_i v^i - b^{-1} (\partial_t b + v^i \partial_i b) \right] \right\}\ .
\end{equation}
This expression is consistent with the dimensional reduction of
the higher-dimensional fluid;
\begin{align}
\langle\widetilde{\mathcal O}_2\rangle
&=
\frac{\tilde\nu}{2} b^\frac{\theta}{3(3-\theta)} \left(\sum_{\mu=0}^3 T_D{}^\mu{}_\mu
+\frac{3}{\theta}\sum_{i=4}^{3-\theta} T_D{}^i{}_i\right)
=
- \frac{\tilde\nu}{2}\mathcal E - \tilde\nu \eta \bar\vartheta \ ,
\end{align}
where $T_D{}^\mu{}_\nu$ is the stress-energy tensor of the fluid before the dimensional reduction.
Thus, $\langle\widetilde{\mathcal O}_2\rangle$ is related to
the other fluid variables if the fluid is obtained by dimensional reduction.
It should be noted that the constant mode of $\phi_2$ is given by $\tilde\nu\log b$,
and hence, the contribution from $\widetilde{\mathcal O}_2$ is interpreted
as the coupling to the external source of $\phi_2$.
In contrast to the single-dilaton case of the previous sections,
the variable $b$ is independent of the fluid variables and
is interpreted as an external field.
The transport coefficients depend on the temperature and on the external field $b$,
but are independent of the particle number density.
The energy $\mathcal E$, stress tensor $\widehat{\mathcal T}^i{}_j$ and
scalar operator $\widetilde{\mathcal O}_2$ satisfy
the following condition;
\begin{equation}
0 =
- \left(z - \frac{\theta}{d-1}\right) \mathcal E
+ \left(1 - \frac{\theta}{d-1}\right) \widehat{\mathcal T}^i{}_i
- (d-1-\theta) \tilde\nu \langle\widetilde{\mathcal O}_2\rangle \ ,
\end{equation}
where the trace of the Milne invariant stress tensor is given by
\begin{equation}
\widehat{\mathcal T}^i{}_i = (d-1) \left(P - \zeta \bar\vartheta\right) \ .
\end{equation}
The above condition is nothing but the Ward identity of
the Lifshitz scaling symmetry with the hyperscaling-violation.
The coefficients of $\mathcal E$, $P$ and $\tilde\nu\langle\widetilde{\mathcal O}_2\rangle$
are equal to the scaling dimensions of $t$, $x^i$ and $b$, with appropriate signs, respectively.
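This identity can be checked symbolically using the relations implied by the explicit expressions above, namely $P = \frac{z}{3-\theta}\,\mathcal E$, $\zeta = -\frac{2\theta}{3(3-\theta)}\,\eta$, $\tilde\nu\langle\widetilde{\mathcal O}_2\rangle = \tilde\nu^2\left(-\frac{1}{2}\mathcal E - \eta\bar\vartheta\right)$ and $\tilde\nu^2 = -\frac{2\theta}{3(3-\theta)}$. A minimal \texttt{sympy} sketch for $d=4$:
\begin{verbatim}
import sympy as sp

z, theta, E, eta, vth = sp.symbols('z theta E eta bartheta', real=True)

nu2  = -2*theta/(3*(3 - theta))       # tilde-nu squared (d = 4)
P    = z*E/(3 - theta)
zeta = -2*theta/(3*(3 - theta))*eta
trT  = 3*(P - zeta*vth)               # trace of the Milne-invariant stress
nuO2 = nu2*(-E/2 - eta*vth)           # tilde-nu times <O_2~>

ward = -(z - theta/3)*E + (1 - theta/3)*trT - (3 - theta)*nuO2
assert sp.simplify(ward) == 0
\end{verbatim}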
If the fluid satisfies the condition $\bar\vartheta=0$,
the fluid equations \eqref{LifContinuityDimRed} and \eqref{LifNavierStokesDimRed}
can be rewritten as
\begin{align}
0 &= \partial_t \mathcal E + v^i \partial_i \mathcal E + (\mathcal E + \widetilde P) \partial_i v^i
- \frac{1}{2} \eta \sigma_{ij}\sigma_{ij} - \partial_i( \kappa \partial_i T)
\ ,
\\
0 &= \partial_i \widetilde P - \tilde n \partial_i \tilde\mu + J^\mu \mathcal F_{\mu i}
- \partial_j \left(\eta\sigma_{ij} \right)
\ ,
\end{align}
where
\begin{align}
\widetilde P &= P + \tilde \mu \tilde n
= \frac{1}{16\pi G} \left(z-\frac{\theta}{d-1}\right)
b^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} \ ,
\label{EfPressure}
\\
\tilde n &= \frac{(z-1)}{16\pi G b} \ ,
\\
\tilde \mu &= - \tilde n^{-1} \tilde\nu\langle\widetilde{\mathcal O}_2\rangle
= - \frac{\theta}{(d-1)(z-1)}
b^{1 + \frac{\theta}{(d-1)(d-1-\theta)}} r_0^{z+d-1-\theta}
\ .
\label{2ndChemical}
\end{align}
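For completeness, the combination works out as
\begin{equation}
\tilde\mu \tilde n = - \frac{\theta}{16\pi G (d-1)}\,
b^\frac{\theta}{(d-1)(d-1-\theta)} r_0^{z+d-1-\theta} \ ,
\end{equation}
so that adding this to $P$ indeed reproduces the effective pressure \eqref{EfPressure}.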
The Ward identity of the Lifshitz scaling symmetry can also be expressed
in terms of $\widetilde P$ as
\begin{equation}
\left(z-\frac{\theta}{d-1}\right) \mathcal E = (d-1-\theta) \widetilde P \ .
\end{equation}
For $b=a$, these equations agree with those in the previous section.
The above effective pressure \eqref{EfPressure} and chemical potential \eqref{2ndChemical}
agree with those in the thermodynamic relation \eqref{Ptilde} and \eqref{muTilde}, respectively.
Then, the thermodynamic relations, fluid equations and Ward identity,
as well as the stress-energy tensor reproduce the result in the previous section.
The result in this section is a generalization
of \cite{Kanitscheider:2009as} to non-relativistic and $z\neq 1$ cases.
The contributions from the gauge field and $\phi_1$
vanish in the $z=1$ limit, in which the Lifshitz black hole geometry becomes Schwarzschild-AdS.
For $z=1$ the hydrodynamic ansatz should be constructed using Lorentz boosts,
and the fluid is then relativistic.
The results in this section are not well defined in the $z=1$ limit,
since the ansatz is obtained using Galilean boosts.
The non-relativistic fluid which is obtained in this section
agrees with the non-relativistic limit of \cite{Kanitscheider:2009as} for $z=1$.
\section{Lifshitz hydrodynamics on a conformally flat background}\label{sec:NaiveAnsatz}
We introduced a redefinition of the boundary coordinates before
replacing the parameters by slowly varying functions.
This coordinate redefinition was introduced to obtain
a flat background on the boundary.
Here, we show that the naive hydrodynamic ansatz without
such a coordinate redefinition gives
fluids on a non-trivial but conformally flat background.
It can be calculated straightforwardly in a similar fashion to
Section~\ref{sec:HydroAnsatz} but without introducing
the coordinate redefinition \eqref{CoordRedef}.
Then, the first order constraint equations, which are
equivalent to the fluid equations in the perfect fluid limit,
are obtained as
\begin{align}
0 &= \partial_t a + v^i \partial_i a - a \left(1-\frac{\theta}{3}\right) \partial_i v^i ,
\label{ConstAConf}
\\
0 &= \partial_t r_0 + v^i \partial_i r_0 + \frac13 r_0 \partial_i v^i ,
\label{ConstTConf}
\\
0 &= \mathcal F_{ti} + v^j\mathcal F_{ji} + \frac{z (z+3-\theta)}{2 (z-1)} a r_0^{z+2-\theta} \partial_i r_0 \ .
\label{ConstSConf}
\end{align}
These equations are different from \eqref{ConstA}-\eqref{ConstS},
and as we will explain below, the difference can be interpreted as
the effect of the non-trivial background geometry at the boundary.
It is natural to expect that fluid variables such as the energy density, pressure and particle number density
are not affected by the background geometry.
In fact, we can calculate the stress-energy tensor straightforwardly and
they are read off as
\begin{align}
\mathcal E &= \frac{3-\theta}{16\pi G} a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
P &= \frac{1}{16\pi G} \left(z-\frac{\theta}{d-1}\right)
a^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ ,
\notag\\
n &= \frac{z-1}{16\pi G} a^{-1}\ , &
\mu &= - \frac{\theta}{3(z-1)} a^{1 + \frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ ,
\end{align}
which are the same as in \eqref{FluidVariablesZ}.
In order to consider the fluid mechanics in the non-trivial background,
we first introduce the fluid velocity field $u^\mu$ which is normalized as
\begin{equation}
1 = \tau_\mu u^\mu \ ,
\end{equation}
where the timelike vielbein is given by
\begin{align}
\tau &= e^{\chi_0} dt \ ,
\\
e^{\chi_0} &= a^{-\frac{\theta}{3(3-\theta)}} \ ,
\end{align}
and hence the normalized velocity field $u^\mu$ is
\begin{align}
u^t &= e^{-\chi_0} \ , &
u^i &= e^{-\chi_0} v^i \ .
\end{align}
The constraint equations \eqref{ConstAConf}-\eqref{ConstSConf} can be expressed as
\begin{align}
0 &= u^\mu \partial_\mu \mathcal E + (\mathcal E + P) D_\mu u^\mu
\ , \label{PFContConf}
\\
0 &= \partial_i P - n \partial_i \mu + (\mathcal E + P) \partial_i \chi_0 + J^\mu \mathcal F_{\mu i}
\ ,
\\
0 &= D_\mu J^\mu \ , \label{PFChargeConf}
\end{align}
where the particle number current $J^\mu$ is defined by
\begin{align}
J^\mu &= n u^\mu \ .
\end{align}
Eqs.~\eqref{PFContConf}-\eqref{PFChargeConf} are nothing but
the fluid equations in the perfect fluid limit in the Newton-Cartan theory,
and the generalization to the first-order fluid is straightforward.
The first order stress-energy tensor is obtained in the following form;
\begin{align}
\widehat T^\mu{}_\nu
&=
\mathcal E u^\mu \tau_\nu + \left(P-n\mu\right) \hat h^\mu{}_\nu
- \tilde\kappa \tau_\nu h^{\mu\rho} \partial_\rho T
- \tilde\eta \sigma_{ab} e_a^\mu \hat e^b_\nu
+ n u^\mu \mathcal A_\nu - n u^\rho \mathcal A_\rho \delta^\mu{}_\nu \ ,
\end{align}
where the transport coefficients are given by
\begin{align}
\tilde\kappa &= \frac{1}{8(z-1)G} r_0^{z+1-\theta} \ ,
&
\tilde\eta &= \frac{1}{16\pi G} a^{\frac{\theta}{3(3-\theta)}} r_0^{3-\theta} \ ,
&
\zeta &= 0 \ .
\end{align}
The bulk viscosity is zero, as for the flat background \eqref{Transport}.
The difference in the heat conductivity comes from the difference in the temperature.
In this case, the Hawking temperature is simply given by \eqref{HawkingT},
in contrast to Section~\ref{sec:HydroAnsatz}, where
the temperature is rescaled due to the coordinate transformation.
In the curved background, it should be expressed in terms of
the local temperature $T_T = e^{-\chi_0} T$ as
\begin{equation}
\tilde\kappa\partial_\mu T = \kappa \left(\partial_\mu - \mathcal G_\mu\right) T_T \ ,
\end{equation}
and then the heat conductivity is the same as in \eqref{Transport};
\begin{equation}
\kappa = \frac{1}{8(z-1)G} a^{-\frac{\theta}{3(3-\theta)}} r_0^{z+1-\theta} \ .
\end{equation}
The difference in the shear viscosity implies that
the shear tensor must be written in terms of
the normalized velocity field $u^\mu$;
\begin{equation}
\hat\sigma_{\mu\nu}
= \hat h_{\rho\nu} D_\mu u^\rho + \hat h_{\rho\mu} D_\nu u^\rho
- \frac{2}{3} \hat h_{\mu\nu} D_\rho u^\rho \ .
\end{equation}
Then, the shear can be written as
\begin{equation}
\tilde\eta \sigma_{ab} e_a^\mu \hat e^b_\nu = \eta \hat\sigma_{\rho\sigma} h^{\mu\rho} h^{\sigma}{}_\nu \ .
\end{equation}
Then, the shear viscosity $\eta$ equals that in \eqref{Transport};
\begin{equation}
\eta = \frac{1}{16\pi G} r_0^{3-\theta} \ .
\end{equation}
\subsection{Dimensional reduction for conformally flat background}
We can also consider the naive hydrodynamic ansatz for
the dimensional reduction from the higher dimensional Lifshitz geometry.
In this case, we can see the effects of the conformal factor in the metric more explicitly.
The perfect fluid limit of the fluid equations is obtained as
\begin{align}
0 &= a^{-1}\left(\partial_t a + v^i \partial_i a \right)
+ \frac{\theta}{3-\theta} b^{-1}\left(\partial_t b + v^i \partial_i b\right) - \partial_i v^i ,
\\
0 &= r_0^{-1}\left(\partial_t r_0 + v^i \partial_i r_0\right)
+ \frac{\theta}{(3-\theta)^2} b^{-1}\left(\partial_t b + v^i \partial_i b\right)
+ \frac{1}{3-\theta} \partial_i v^i ,
\\
0 &= \mathcal F_{ti} + v^j \mathcal F_{ji}
+ \frac{z(z+3-\theta)}{2 (z-1)} a r_0^{z+2-\theta} \partial_i r_0 \ ,
\end{align}
where $b$ comes from the effects of the non-trivial background;
\begin{equation}
e^{\chi_0} \equiv b^{-\frac{\theta}{3(3-\theta)}} \ .
\end{equation}
The fluid variables are the same as in \eqref{FluidVariablesDimRed};
\begin{align}
\mathcal E &= \frac{3-\theta}{16\pi G} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
P &= \frac{z}{16\pi G} b^{\frac{\theta}{3(3-\theta)}} r_0^{z+3-\theta} \ , &
n &= \frac{z-1}{16\pi G} a^{-1}\ ,
\end{align}
and the fluid equations can be written in terms of the normalized velocity field $u^\mu$,
but now $\chi_0$ is independent of the particle number density $n\sim 1/a$;
\begin{align}
0 &= u^\mu \partial_\mu \mathcal E + (\mathcal E + P) D_\mu u^\mu
+ \tilde\nu\langle\widetilde{\mathcal O}_2\rangle b^{-1} u^\mu \partial_\mu b
\ ,
\\
0 &= \partial_i P + (\mathcal E + P) \partial_i \chi_0 + J^\mu \mathcal F_{\mu i}
- \tilde\nu\langle\widetilde{\mathcal O}_2\rangle b^{-1}\partial_i b
\ ,
\\
0 &= D_\mu J^\mu \ .
\end{align}
The first order stress-energy tensor is given by
\begin{align}
\widehat T^\mu{}_\nu
&=
\mathcal E u^\mu \tau_\nu + P \hat h^\mu{}_\nu
- \tilde\kappa \tau_\nu h^{\mu\rho} \partial_\rho T
- \eta \hat\sigma_{\rho\sigma} h^{\mu\rho} h^{\sigma}{}_\nu
\notag\\&\quad
- \zeta \left[\partial_i v^i - \frac{3}{3-\theta} b^{-1}\left(\partial_t b + v^i \partial_i b\right)\right] e_a^\mu \hat e^a_\nu
+ n u^\mu \mathcal A_\nu - n u^\rho \mathcal A_\rho \delta^\mu{}_\nu \ .
\end{align}
In this case, the expansion appears in the combination
\begin{equation}
\partial_i v^i - \frac{3}{3-\theta} b^{-1}\left(\partial_t b + v^i \partial_i b\right) \ ,
\end{equation}
and it vanishes for $b=a$ by substituting the constraint equation.
For $b=1$, the background becomes flat and hence
this agrees with \eqref{StressEnergyDimRed}.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
We would like to thank J. de Boer and N. Obers for discussions. We would like to thank especially J. Hartong for enlightening discussions and for critical comments on the manuscript.
This work was supported in part by European Union's Seventh Framework Programme under grant agreements (FP7-REGPOT-2012-2013-1) no 316165 and the Advanced ERC grant SM-grav, No 669288.
The work is also supported in part by the Ministry of Science and Technology,
R.O.C. (project no. 104-2112-M-002 -003 -MY3) and by National Taiwan
University (project no. 105R8700-2).
\newpage
\section{Introduction}\label{sec_introduction}
\noindent A real matrix is called {\itshape totally positive} if all of its minors (i.e.\ determinants of square submatrices) are positive. Totally positive matrices first appeared in the 1930's in work of Schoenberg \cite{schoenberg30} and Gantmakher and Krein \cite{gantmakher_krein37}. In the 1990's, Lusztig \cite{lusztig94} generalized the theory of totally positive matrices to arbitrary semisimple algebraic groups $G$ and their partial flag varieties $G/P$. These spaces have since been widely studied, with connections to representation theory \cite{lusztig94}, combinatorics \cite{postnikov07}, cluster algebras \cite{fomin_williams_zelevinsky}, high-energy physics \cite{arkani-hamed_bourjaily_cachazo_goncharov_postnikov_trnka16, arkani-hamed_bai_lam17}, mirror symmetry \cite{rietsch_williams19}, topology \cite{galashin_karp_lam22}, and many other topics. The purpose of this paper is to examine the definition of the totally positive and totally nonnegative parts of a partial flag variety $G/P$ in type $A$.
\subsection{Lusztig's total positivity vs.\ Pl\"{u}cker positivity}
Given a subset $K = \{k_1 < \cdots < k_l\} \subseteq \{1, \dots, n-1\}$, let $\PFl{K}{n}(\mathbb{R})$ denote the partial flag variety of all tuples of subspaces $(V_{k_i})_{i=1}^l$ of $\mathbb{R}^n$, where $V_{k_1} \subset \cdots \subset V_{k_l}$ and each $V_k$ has dimension $k$. Equivalently, we may view $\PFl{K}{n}(\mathbb{R})$ as a parabolic quotient of $\GL_n(\mathbb{R})$, where a matrix $g\in\GL_n(\mathbb{R})$ represents the flag $(V_{k_i})_{i=1}^l$, where $V_{k_i}$ is the span of the first $k_i$ columns of $g$. Particular cases of interest are when $K = \{1, \dots, n-1\}$, in which case $\PFl{K}{n}(\mathbb{R})$ is the {\itshape complete flag variety} $\Fl_n(\mathbb{R})$, and when $K = \{k\}$, in which case $\PFl{K}{n}(\mathbb{R})$ is the {\itshape Grassmannian} $\Gr_{k,n}(\mathbb{R})$ of all $k$-dimensional subspaces of $\mathbb{R}^n$.
Lusztig \cite{lusztig94,lusztig98} defined the {\itshape totally positive part} $\PFl{K}{n}^{>0}$ of $\PFl{K}{n}(\mathbb{R})$ to be the subset of flags which can be represented by a totally positive matrix in $\GL_n(\mathbb{R})$ (see \cref{lusztig_definition} for further discussion). He also defined the {\itshape totally nonnegative part} $\PFl{K}{n}^{\ge 0}$ to be the closure of $\PFl{K}{n}^{>0}$. There is an alternative notion of positivity which arises from the {\itshape Pl\"{u}cker embedding} of $\PFl{K}{n}(\mathbb{R})$. Namely, we say that $(V_{k_i})_{i=1}^l$ is {\itshape Pl\"{u}cker positive} if all its Pl\"{u}cker coordinates are positive, or equivalently, if it can be represented by an element of $\GL_n(\mathbb{R})$ whose left-justified (i.e.\ initial) minors of orders $k_1, \dots, k_l$ are all positive. We similarly say that $(V_{k_i})_{i=1}^l$ is {\itshape Pl\"{u}cker nonnegative} if all its Pl\"{u}cker coordinates are nonnegative. We denote the Pl\"{u}cker-positive and Pl\"{u}cker-nonnegative parts of $\PFl{K}{n}(\mathbb{R})$ by $\PFl{K}{n}^{\Delta >0}$ and $\PFl{K}{n}^{\Delta\ge 0}$, respectively. The Pl\"{u}cker-nonnegative part $\Gr_{k,n}^{\Delta\ge 0}$ of the Grassmannian was studied by Postnikov \cite{postnikov07}, and the space $\PFl{K}{n}^{\Delta\ge 0}$ was introduced by Arkani-Hamed, Bai, and Lam \cite[Section 6.3]{arkani-hamed_bai_lam17}, who called it the {\itshape naive nonnegative part}.
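To make the definitions concrete, here is a small numerical sketch (our own illustration, in Python): the matrix $(e^{ij})_{i,j=1}^{4}$ is totally positive, since each of its minors is a generalized Vandermonde determinant, and consequently every flag it represents is Pl\"{u}cker positive.
\begin{verbatim}
import itertools
import numpy as np

n = 4
M = np.array([[np.exp(i*j) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

# total positivity: every square minor is positive
for k in range(1, n + 1):
    for rows in itertools.combinations(range(n), k):
        for cols in itertools.combinations(range(n), k):
            assert np.linalg.det(M[np.ix_(rows, cols)]) > 0

# Pluecker positivity of the represented flags: all maximal minors
# of the first k columns are positive, for each k
for k in range(1, n):
    for rows in itertools.combinations(range(n), k):
        assert np.linalg.det(M[np.ix_(rows, range(k))]) > 0
\end{verbatim}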
We wish to emphasize that both total positivity and Pl\"{u}cker positivity are natural notions. Lusztig's total positivity is compatible with his theory of canonical bases \cite{lusztig90} and with the combinatorics of Coxeter groups \cite{lusztig94}. The space $\PFl{K}{n}^{\ge 0}$ can be decomposed into cells \cite{lusztig94,rietsch99}, each of which admits an explicit parametrization \cite{marsh_rietsch04}, and the cell decomposition forms a regular CW complex \cite{galashin_karp_lam22}. On the other hand, Pl\"{u}cker positivity is more concrete, and leads to connections with matroid theory \cite{ardila_rincon_williams17} and tropical geometry \cite{speyer_williams05}. It also arises in the definition of loop amplituhedra \cite{arkani-hamed_trnka14,bai_he_lam16}, which are spaces appearing in the physics of scattering amplitudes; for example, a {\itshape $1$-loop amplituhedron} is a certain projection of the space $\PFl{k,k+2}{n}^{\Delta \ge 0}$. The construction of loop amplituhedra is incompatible with Lusztig's total positivity, due to the presence of a cyclic symmetry among the particles in the scattering amplitude, as we explain in \cref{sec_cyclic_intro}.
It follows from the definitions that $\PFl{K}{n}^{>0}\subseteq\PFl{K}{n}^{\Delta >0}$ and $\PFl{K}{n}^{\ge 0}\subseteq\PFl{K}{n}^{\Delta\ge 0}$, and it is natural to ask when equality holds. This question has been explored previously in special cases, as we discuss in \cref{sec_history}. We answer this question in general:
\begin{thm}\label{converse}
Let $n\in\mathbb{N}$ and $K\subseteq \{1, \dots, n-1\}$. Then the following are equivalent:
\begin{enumerate}[label=(\roman*), leftmargin=*, itemsep=2pt]
\item\label{converse_tp} $\PFl{K}{n}^{>0} = \PFl{K}{n}^{\Delta > 0}$;
\item\label{converse_tnn} $\PFl{K}{n}^{\ge 0} = \PFl{K}{n}^{\Delta \ge 0}$; and
\item\label{converse_consecutive} the set $K$ consists of consecutive integers.
\end{enumerate}
\end{thm}
We give an elementary proof of \cref{converse}, using classical results in linear algebra and the theory of total positivity. In particular, our proof does not rely on any previously established special cases or on the cell decomposition of $\PFl{K}{n}^{\ge 0}$.
\subsection{Cell decomposition vs.\ matroid decomposition}
Lusztig \cite[Remark 8.15]{lusztig94} introduced a decomposition of $\PFl{K}{n}^{\ge 0}$, which Rietsch \cite{rietsch99} showed is a cell decomposition (i.e.\ each stratum is homeomorphic to an open ball). It is the intersection of the {\itshape projected Richardson stratification} \cite{kazhdan_lusztig79} with $\PFl{K}{n}^{\ge 0}$. There is another natural stratification of $\PFl{K}{n}(\mathbb{R})$, introduced by Gelfand and Serganova \cite{gel'fand_serganova87}, which is the common refinement of the vanishing and nonvanishing sets of all Pl\"{u}cker coordinates. We call this the {\itshape matroid decomposition}, since each stratum is labeled by a tuple of matroids on the ground set $\{1, \dots, n\}$ (or alternatively, a Coxeter matroid).
Postnikov \cite{postnikov07} studied the matroid decomposition of the Pl\"{u}cker-nonnegative part of $\Gr_{k,n}(\mathbb{R})$, which he called the {\itshape positroid decomposition}. He showed that it forms a cell decomposition \cite[Theorem 3.5]{postnikov07}, and that it coincides with the decomposition of Lusztig and Rietsch \cite[Theorem 3.8]{postnikov07} (see \cref{sec_history} for further discussion). The topology of this decomposition presents a stark contrast to the matroid decomposition of all of $\Gr_{k,n}(\mathbb{R})$, which exhibits a phenomenon known as {\itshape Mn\"{e}v universality} \cite{mnev88}. In general, Tsukerman and Williams \cite[Section 7]{tsukerman_williams15} showed that the cell decomposition of $\PFl{K}{n}^{\ge 0}$ is a refinement of the matroid decomposition. They also showed that the two decompositions coincide for complete flag varieties, but differ for $\PFl{1,3}{4}^{\ge 0}$. We determine in general when the two decompositions are equal:
\begin{thm}\label{decompositions}
Let $n\in\mathbb{N}$ and $K\subseteq \{1, \dots, n-1\}$. Then the cell decomposition of $\PFl{K}{n}^{\ge 0}$ coincides with the matroid decomposition if and only if $K$ consists of consecutive integers.
\end{thm}
The forward direction of \cref{decompositions} follows from \cref{converse}, using a topological argument. However, we do not know how to deduce the reverse direction directly from \cref{converse}. Instead, we use a result of Tsukerman and Williams \cite[Theorem 7.1]{tsukerman_williams15}, which in turn builds on unpublished work of Marsh and Rietsch. It states that each cell of $\PFl{K}{n}^{\ge 0}$ is contained in a single matroid stratum, which is uniquely determined by the $0$-dimensional cells in its closure. These $0$-dimensional cells have an explicit description in terms of the Bruhat order on the symmetric group $\mathfrak{S}_n$, due to Rietsch \cite{rietsch06b}. To prove the reverse direction of \cref{decompositions}, we use the combinatorics of $\mathfrak{S}_n$ to reduce the statement to the Grassmannian case, which was proved by Postnikov as described above. Our techniques also have implications in the study of the {\itshape Bruhat interval polytopes} of Tsukerman and Williams \cite{tsukerman_williams15}; see \cref{decompositions_remark} and \cref{minkowski_sum}.
\subsection{Cyclic symmetry}\label{sec_cyclic_intro}
For $\epsilon\in\mathbb{Z}/2\mathbb{Z}$, define the {\itshape (signed) left cyclic shift map} $\sigma_\epsilon\in\GL_n(\mathbb{R})$ by
$$
\sigma_\epsilon(v_1, \dots, v_n) := (v_2, \dots, v_n, (-1)^{\epsilon-1}v_1) \quad \text{ for all } v\in\mathbb{R}^n.
$$
Then $\sigma_\epsilon$ also acts on $\PFl{K}{n}(\mathbb{R})$. If all elements of $K$ have the same parity $\epsilon$, then by the alternating property of the determinant, $\sigma_\epsilon$ acts on Pl\"{u}cker coordinates by rotating the index set $\{1, \dots, n\}$. Therefore $\sigma_\epsilon$ preserves both $\PFl{K}{n}^{\Delta >0}$ and $\PFl{K}{n}^{\Delta\ge 0}$. It is natural to wonder whether $\sigma_\epsilon$ also preserves $\PFl{K}{n}^{>0}$ and $\PFl{K}{n}^{\ge 0}$. One motivation is that the cyclic symmetry for $\PFl{k,k+2}{n}^{\Delta \ge 0}$ is important in the definition of loop amplituhedra mentioned above, coming from a cyclic ordering on the $n$ external particles.
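To make this concrete, the following numerical sketch (our own illustration) verifies the rotation property for $n=4$, $k=2$ and $\epsilon = 0$: the Pl\"{u}cker coordinates of the shifted point are those of the original point with the index set rotated, so Pl\"{u}cker positivity is preserved.
\begin{verbatim}
import itertools
import numpy as np

n, k = 4, 2
A = np.array([[np.exp(i*j) for j in range(1, k + 1)]
              for i in range(1, n + 1)])      # a Pluecker-positive point

def plucker(B):
    return {I: np.linalg.det(B[list(I), :])
            for I in itertools.combinations(range(n), k)}

S = np.zeros((n, n))                          # signed left cyclic shift
for i in range(n - 1):
    S[i, i + 1] = 1.0
S[n - 1, 0] = -1.0                            # (-1)^{eps-1} = -1 for eps = 0

p, q = plucker(A), plucker(S @ A)
for I, val in q.items():
    J = tuple(sorted((i + 1) % n for i in I))  # rotated index set
    assert np.isclose(val, p[J])
assert all(v > 0 for v in q.values())          # positivity is preserved
\end{verbatim}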
If $K = \{k\}$ and $k$ has parity $\epsilon$, then $\sigma_\epsilon$ preserves both $\Gr_{k,n}^{>0}$ and $\Gr_{k,n}^{\ge 0}$, using \cref{converse} for $\Gr_{k,n}(\mathbb{R})$. We show, however, that in all other cases, $\sigma_\epsilon$ does not preserve either $\PFl{K}{n}^{>0}$ or $\PFl{K}{n}^{\ge 0}$. In particular, one cannot substitute the notion of total positivity for Pl\"{u}cker positivity in the definition of loop amplituhedra.
\begin{thm}\label{cyclic}
Let $K\subseteq \{1, \dots, n-1\}$ such that $|K| \ge 2$, and let $\epsilon\in\mathbb{Z}/2\mathbb{Z}$. Then there exists $V\in\PFl{K}{n}^{>0}$ such that $\sigma_\epsilon(V)\notin\PFl{K}{n}^{\ge 0}$. In particular, $\sigma_\epsilon$ does not preserve $\PFl{K}{n}^{>0}$ or $\PFl{K}{n}^{\ge 0}$.
\end{thm}
While \cref{cyclic} does not follow directly from \cref{converse}, we use similar techniques to prove both results.
One of our initial motivations for this work was to understand which gradient flows on $\PFl{K}{n}(\mathbb{R})$ are compatible with total positivity, which we discuss in a separate paper \cite{bloch_karp1}. We discovered that in certain cases, the classification of such flows differs depending on whether one works with Lusztig's total positivity or with Pl\"{u}cker positivity; see Theorem 5.14, Theorem 5.18, and Remark 5.20 of \cite{bloch_karp1}. \cref{cyclic} above may be regarded as an infinitesimal analogue of this phenomenon.
\subsection{History}\label{sec_history}
We discuss previous work related to \cref{converse} and \cref{decompositions}. Much of this work has focused on the case when $\PFl{K}{n}(\mathbb{R})$ is a Grassmannian $\Gr_{k,n}(\mathbb{R})$, due to Postnikov's study of the matroid decomposition of $\Gr_{k,n}^{\Delta\ge 0}$ \cite{postnikov07}.
Rietsch \cite[Section 4.5]{rietsch98} (also announced in \cite[Section 3.12]{lusztig98}) stated that Lusztig's notion of total positivity coincides with Pl\"{u}cker positivity for all partial flag varieties $G/P$, but her proof contained an error. She later proved \cref{converse} for Grassmannians $\Gr_{k,n}(\mathbb{R})$ in unpublished notes \cite{rietsch09}. Subsequent proofs were given by Talaska and Williams \cite[Corollary 1.2]{talaska_williams13}, Lam \cite[Remark 3.8]{lam16}, and Lusztig \cite{lusztig}. \cref{decompositions} for the complete flag variety $\Fl_n(\mathbb{R})$ was stated in \cite[(9.15)]{galashin_karp_lam22}, and proofs were given by Lusztig \cite[p.\ 4]{lusztig19} and Boretsky \cite[Theorem 3.14]{boretsky}. Conversely, Chevalier \cite[Example 10.1]{chevalier11} gave an example showing that $\PFl{1,3}{4}^{>0} \neq \PFl{1,3}{4}^{\Delta >0}$; see \cite[Remark 5.17]{knutson_lam_speyer13} for a related discussion. In a different direction, Geiss, Leclerc, and Schr\"{o}er \cite[Conjecture 19.2]{geiss_leclerc_schroer08} have conjectured an algebraic description of the totally positive part of any partial flag variety $G/P$.
\cref{decompositions} was proved in the case of Grassmannians $\Gr_{k,n}^{\ge 0}$ by Postnikov \cite[Corollary 3.8]{postnikov07}. We point out that his proof implicitly uses the fact that $\Gr_{k,n}^{\ge 0} = \Gr_{k,n}^{\Delta\ge 0}$, which was only later proved by Rietsch \cite{rietsch09}, as described above. A subsequent proof was given by Talaska and Williams \cite[Corollary 1.2]{talaska_williams13}, and the result can also be proved using work of Tsukerman and Williams \cite[Section 7]{tsukerman_williams15}. \cref{decompositions} in the cases of $\Fl_n^{\ge 0}$ and $\PFl{1,3}{4}^{\ge 0}$ was proved by Tsukerman and Williams \cite[Theorem 7.1 and Remark 7.3]{tsukerman_williams15}.
\subsection{Further directions}
Lusztig \cite{lusztig94,lusztig98} introduced the totally positive and totally nonnegative part of an arbitrary partial flag variety $G/P$, where $G$ is a semisimple algebraic group over $\mathbb{R}$ and $P$ is a parabolic subgroup. It would be interesting to study the problems considered in this paper for such $G/P$. We mention that Fomin and Zelevinsky \cite{fomin_zelevinsky00b} have considered analogous problems for the group $G$. In particular, they showed that an element of $G$ is totally positive (respectively, totally nonnegative) in the sense of Lusztig \cite{lusztig94} if and only if all its generalized minors are positive (respectively, nonnegative). We consider only the type $A$ case here both for the sake of concreteness, and because our proofs of \cref{converse} and \cref{cyclic} are elementary. We point out that many of our arguments in \cref{sec_decompositions} used to prove \cref{decompositions} can be applied to general Coxeter groups; however, a key tool in our proof is Postnikov's \cref{decompositions_grassmannian}, which is only known in type $A$.
Lusztig \cite[Theorem 3.4]{lusztig98} showed that for any partial flag variety $G/P$, there exists a positive weight $\lambda$ such that a partial flag is totally positive (respectively, totally nonnegative) if and only if its coordinates with respect to the canonical basis of the irreducible $G$-module with highest weight $\lambda$ are all positive (respectively, nonnegative). He also posed the problem of finding the minimal such $\lambda$. Our \cref{converse} gives a partial answer for type $A$ partial flag varieties $G/P = \PFl{K}{n}(\mathbb{R})$: it implies that the sum of fundamental weights $\sum_{k\in K}\omega_k$ is a valid choice of $\lambda$ (and hence minimal) if and only if $K$ consists of consecutive integers.
As we described above, the combinatorics and topology of the cell decomposition of $\PFl{K}{n}^{\ge 0}$ have been extensively studied. For example, each cell has an explicit parametrization \cite[Section 11]{marsh_rietsch04}, the closure poset is shellable \cite[Theorem 1.2]{williams07}, and the cell decomposition forms a regular CW complex \cite[Theorem 1.1]{galashin_karp_lam22}. In light of \cref{converse} and \cref{decompositions}, it would be interesting to further study the matroid decomposition of $\PFl{K}{n}^{\Delta\ge 0}$. Bai, He, and Lam \cite{bai_he_lam16} studied the case $\PFl{1,3}{n}^{\Delta \ge 0}$. In \cite[Corollary 6.16]{bloch_karp1}, we show that $\PFl{K}{n}^{\Delta >0}$ is homeomorphic to an open ball and $\PFl{K}{n}^{\Delta\ge 0}$ is homeomorphic to a closed ball.
\subsection{Outline}
In \cref{sec_background}, we give some background on partial flag varieties and total positivity. In \cref{sec_converse}, we prove \cref{converse}. In \cref{sec_cyclic}, we prove \cref{cyclic}. In \cref{sec_decompositions}, we prove \cref{decompositions}.
\subsection*{Acknowledgments}
We thank George Lusztig and Lauren Williams for helpful comments.
\section{Background}\label{sec_background}
\noindent In this section, we define partial flag varieties and their totally positive and totally nonnegative parts, and recall some classical results in the theory of total positivity. For further details on total positivity, we refer to \cite{gantmaher_krein50, karlin68, lusztig94, fomin_zelevinsky00a, pinkus10, fallat_johnson11}.
\subsection{Notation}\label{sec_notation}
Let $\mathbb{N} := \{0, 1, 2, \dots\}$. For $n\in\mathbb{N}$, we let $[n]$ denote $\{1, 2, \dots, n\}$, and for $i,j\in\mathbb{Z}$, we let $[i,j]$ denote the interval of integers $\{i, i+1, \dots, j\}$. Given a set $S$ and $k\in\mathbb{N}$, we let $\binom{S}{k}$ denote the set of $k$-element subsets of $S$.
We let $e_1, \dots, e_n$ denote the unit vectors of $\mathbb{R}^n$. We let $\mathbb{P}^n(\mathbb{R})$ denote $n$-dimensional real projective space, defined to be $\mathbb{R}^{n+1}\setminus\{0\}$ modulo multiplication by $\mathbb{R}^\times$. For $\lambda_1, \dots, \lambda_n\in\mathbb{R}$, we let $\Diag{\lambda_1, \dots, \lambda_n}$ denote the $n\times n$ diagonal matrix with diagonal entries $\lambda_1, \dots, \lambda_n$. We let $\GL_n(\mathbb{R})$ denote the group of invertible real $n\times n$ matrices.
Given an $m\times n$ matrix $A$, we let $\transpose{A}$ denote the transpose of $A$. For $1 \le k \le m,n$ and subsets $I\in\binom{[m]}{k}$ and $J\in\binom{[n]}{k}$, we let $\Delta_{I,J}(A)$ denote the determinant of the submatrix of $A$ in rows $I$ and columns $J$, called a {\itshape minor} of $A$ of {\itshape order} $k$. If $J = [k]$, we call $\Delta_{I,J}(A)$ a {\itshape left-justified minor} of $A$. We also let $\sumof{I}$ denote the sum of the elements in $I$.
We will make repeated use of the {\itshape Cauchy--Binet identity} (see e.g.\ \cite[I.(14)]{gantmacher59}): if $A$ is an $m\times n$ matrix, $B$ is an $n\times p$ matrix, and $1 \le k \le m,p$, then
\begin{align}\label{cauchy-binet}
\Delta_{I,J}(AB) = \sum_{K\in\binom{[n]}{k}}\Delta_{I,K}(A)\Delta_{K,J}(B) \quad \text{ for all } I\in\textstyle\binom{[m]}{k} \text{ and } J\in\binom{[p]}{k}.
\end{align}
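As a quick illustration of \eqref{cauchy-binet} (a sanity check only; the two integer matrices below are arbitrary), the identity can be verified directly:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

A = [[1, 2, 3], [0, 1, 4]]            # 2 x 3
B = [[2, 0], [1, 1], [0, 3]]          # 3 x 2
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
k, I, J = 2, (0, 1), (0, 1)
lhs = minor(AB, I, J)                 # here lhs = rhs = 41
rhs = sum(minor(A, I, K) * minor(B, K, J) for K in combinations(range(3), k))
assert lhs == rhs
\end{verbatim}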
We now introduce partial flag varieties.
\begin{defn}\label{defn_Fl}
Let $n\in\mathbb{N}$, and let $K = \{k_1 < \cdots < k_l\} \subseteq [n-1]$. Let $\P{K}{n}(\mathbb{R})$ denote the parabolic subgroup of $\GL_n(\mathbb{R})$ of block upper-triangular matrices with diagonal blocks of sizes $k_1, k_2 - k_1, \dots, k_l - k_{l-1}, n - k_l$. We define the {\itshape partial flag variety}
$$
\PFl{K}{n}(\mathbb{R}) := \GL_n(\mathbb{R})/\P{K}{n}(\mathbb{R}).
$$
We identify $\PFl{K}{n}(\mathbb{R})$ with the variety of partial flags of subspaces in $\mathbb{R}^n$
$$
\{V = (V_{k_1}, \dots, V_{k_l}) : 0 \subset V_{k_1} \subset \cdots \subset V_{k_l} \subset \mathbb{R}^n \text{ and } \dim(V_{k_i}) = k_i \text{ for } 1 \le i \le l\}.
$$
The identification sends $g\in\GL_n(\mathbb{R})/\P{K}{n}(\mathbb{R})$ to the tuple $(V_k)_{k\in K}$, where $V_k$ is the span of the first $k$ columns of $g$ for all $k\in K$. More generally, if $A$ is any real matrix with $n$ rows and at least $k_l$ columns such that $V_k$ is the span of the first $k$ columns of $A$ for all $k\in K$, we say that $A$ {\itshape represents $V$}. Note that $\GL_n(\mathbb{R})$ acts on $\PFl{K}{n}(\mathbb{R})$ on the left.
We have the {\itshape Pl\"{u}cker embedding}
\begin{align}\label{plucker_embedding}
\begin{gathered}
\PFl{K}{n}(\mathbb{R}) \hookrightarrow \mathbb{P}^{\left(\hspace*{-1pt}\binom{n}{k_1}-1\right)} \times \cdots \times \mathbb{P}^{\left(\hspace*{-1pt}\binom{n}{k_l}-1\right)}, \\
V \mapsto \Big((\Delta_I(A))_{I\in\binom{[n]}{k_1}}, \dots, (\Delta_I(A))_{I\in\binom{[n]}{k_l}}\Big),
\end{gathered}
\end{align}
where $A$ denotes any matrix representative of $V$. (We can check that the definition does not depend on the choice of $A$.) We call the left-justified minors $\Delta_I(A)$ appearing above the {\itshape Pl\"{u}cker coordinates} of $V\in\PFl{K}{n}(\mathbb{R})$ (also known as {\itshape flag minors}), which we denote by $\Delta_I(V)$. We point out that our Pl\"{u}cker coordinates differ from the {\itshape generalized Pl\"{u}cker coordinates} of Gelfand and Serganova \cite{gel'fand_serganova87}, though each encodes essentially the same data; see \cref{generalized_plucker_remark}.
For any $K'\subseteq K$, we have a projection map
\begin{align}\label{defn_Fl_surjection}
\PFl{K}{n}(\mathbb{R}) \twoheadrightarrow \PFl{K'}{n}(\mathbb{R}), \quad (V_k)_{k\in K} \mapsto (V_k)_{k\in K'}.
\end{align}
The map \eqref{defn_Fl_surjection} retains only the subspaces of a partial flag whose dimensions lie in $K'$.
We mention two instances of $\PFl{K}{n}(\mathbb{R})$ which are of particular interest. If $K = [n-1]$, then $\PFl{K}{n}(\mathbb{R})$ is the {\itshape complete flag variety} of $\mathbb{R}^n$, which we denote by $\Fl_n(\mathbb{R})$. If $K$ is the singleton $\{k\}$, then $\PFl{K}{n}(\mathbb{R})$ is the {\itshape Grassmannian} of all $k$-dimensional subspaces of $\mathbb{R}^n$, which we denote by $\Gr_{k,n}(\mathbb{R})$. We also extend the definition of $\Gr_{k,n}(\mathbb{R})$ to $k=0$ and $k=n$.
\end{defn}
\begin{eg}\label{eg_Fl}
Let $n := 4$ and $K := \{1, 3\}$. Then
$$
\P{1,3}{4}(\mathbb{R}) = \left\{\begin{bmatrix}
\ast & \ast & \ast & \ast \\
0 & \ast & \ast & \ast \\
0 & \ast & \ast & \ast \\
0 & 0 & 0 & \ast
\end{bmatrix}\right\}\subseteq\GL_4(\mathbb{R}) \quad \text{ and } \quad \PFl{1,3}{4}(\mathbb{R}) = \GL_4(\mathbb{R})/\P{1,3}{4}(\mathbb{R}).
$$
We identify $\PFl{1,3}{4}(\mathbb{R})$ with the variety of partial flags $V = (V_1, V_3)$ of subspaces of $\mathbb{R}^4$, where $\dim(V_1) = 1$, $\dim(V_3) = 3$, and $V_1 \subset V_3$.
We can represent a generic partial flag $V\in\PFl{1,3}{4}(\mathbb{R})$ by a matrix of the form
$$
A = \begin{bmatrix}
1 & 0 & 0 \\
a & 1 & 0 \\
b & 0 & 1 \\
c & d & e
\end{bmatrix}, \quad \text{ where } a,b,c,d,e\in\mathbb{R}.
$$
That is, $V_1$ is spanned by the first column of $A$, and $V_3$ is spanned by all three columns of $A$. Then the Pl\"{u}cker embedding \eqref{plucker_embedding} takes $V$ to
\begin{multline*}
\big((\Delta_1(V) : \Delta_2(V) : \Delta_3(V) : \Delta_4(V)), (\Delta_{123}(V) : \Delta_{124}(V) : \Delta_{134}(V) : \Delta_{234}(V))\big) \\
= \big((1 : a : b : c), (1 : e : -d : -ad+c-be)\big) \in \mathbb{P}^3(\mathbb{R})\times\mathbb{P}^3(\mathbb{R}).\qedhere
\end{multline*}
\end{eg}
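This computation can be reproduced mechanically. The following Python sketch (with arbitrary integer values substituted for $a, \dots, e$) recomputes the Pl\"{u}cker coordinates of $V$ from the matrix $A$ and compares them with the displayed formulas:
\begin{verbatim}
def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

a, b, c, d, e = 2, 3, 5, 7, 11        # arbitrary values
A = [[1, 0, 0], [a, 1, 0], [b, 0, 1], [c, d, e]]
D1 = [A[i][0] for i in range(4)]      # order-1 Plucker coordinates
D3 = [det([[A[i][j] for j in range(3)] for i in I])
      for I in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]]
assert D1 == [1, a, b, c]
assert D3 == [1, e, -d, -a * d + c - b * e]
\end{verbatim}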
\begin{defn}\label{defn_totally_positive_matrix}
We say that a real matrix is {\itshape totally positive} if all its minors are positive. For $n\in\mathbb{N}$, we let $\GL_n^{>0}$ denote the subset of $\GL_n(\mathbb{R})$ of totally positive matrices.
\end{defn}
For example, we have $\GL_2^{>0} = \Big\{\scalebox{0.8}{$\begin{bmatrix}a & b \\ c & d\end{bmatrix}$} : a,b,c,d,ad-bc > 0\Big\}$. We now introduce Lusztig's total positivity and Pl\"{u}cker positivity for partial flag varieties.
\begin{defn}\label{defn_totally_positive}
Let $n\in\mathbb{N}$ and $K\subseteq [n-1]$. Following \cite[Section 8]{lusztig94} and \cite[Section 1.5]{lusztig98}, we define the {\itshape totally positive part} of $\PFl{K}{n}(\mathbb{R})$, denoted by $\PFl{K}{n}^{>0}$, as the image of $\GL_n^{>0}$ inside $\PFl{K}{n}(\mathbb{R}) = \GL_n(\mathbb{R})/\P{K}{n}(\mathbb{R})$. Equivalently, $\PFl{K}{n}^{>0}$ consists of all partial flags which can be represented by a totally positive $n\times n$ matrix. We define the {\itshape totally nonnegative part} of $\PFl{K}{n}(\mathbb{R})$, denoted by $\PFl{K}{n}^{\ge 0}$, as the closure of $\PFl{K}{n}^{>0}$ in the Euclidean topology. Note that for any $K'\subseteq K$, the projection map \eqref{defn_Fl_surjection} restricts to surjections
\begin{align}\label{defn_tnn_Fl_surjections}
\PFl{K}{n}^{>0} \twoheadrightarrow \PFl{K'}{n}^{>0} \quad \text{ and } \quad \PFl{K}{n}^{\ge 0} \twoheadrightarrow \PFl{K'}{n}^{\ge 0}.
\end{align}
We also define the {\itshape Pl\"{u}cker-positive part} of $\PFl{K}{n}(\mathbb{R})$, denoted by $\PFl{K}{n}^{\Delta >0}$, as the subset of partial flags whose Pl\"{u}cker coordinates are all positive (up to rescaling). That is, $\PFl{K}{n}^{\Delta >0}$ consists of all partial flags which can be represented by a matrix $A$ such that all left-justified $k\times k$ minors of $A$ are positive for all $k\in K$. We similarly define the {\itshape Pl\"{u}cker-nonnegative part} $\PFl{K}{n}^{\Delta \ge 0}$ by replacing ``positive'' with ``nonnegative'' above.
\end{defn}
Note that by definition, we have $\PFl{K}{n}^{>0} \subseteq \PFl{K}{n}^{\Delta >0}$ and $\PFl{K}{n}^{\ge 0} \subseteq \PFl{K}{n}^{\Delta \ge 0}$. We also have that $\PFl{K}{n}^{\Delta \ge 0}$ is the closure of $\PFl{K}{n}^{\Delta >0}$; see \cref{topology}\ref{topology_plucker}.
\begin{eg}\label{eg_totally_positive}
We have
\begin{gather*}
\Fl_3^{\Delta >0} = \left\{\begin{bmatrix}1 & 0 & 0 \\ a+c & 1 & 0 \\ bc & b & 1\end{bmatrix} : a,b,c > 0\right\} \quad \text{ and } \quad \Gr_{2,4}^{\Delta >0} = \left\{\begin{bmatrix}1 & 0 \\ a & b \\ 0 & 1 \\ -c & d\end{bmatrix} : a,b,c,d > 0\right\}.\qedhere
\end{gather*}
\end{eg}
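Both descriptions are straightforward to check. In the Python sketch below (illustrative only, with arbitrary positive parameter values), all flag minors of the first matrix and all $2\times 2$ left-justified minors of the second are verified to be positive:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

a, b, c, d = 2, 3, 5, 7
F = [[1, 0, 0], [a + c, 1, 0], [b * c, b, 1]]       # candidate for Fl_3
G = [[1, 0], [a, b], [0, 1], [-c, d]]               # candidate for Gr_{2,4}
for k in (1, 2, 3):                                 # flag minors of F
    assert all(det([[F[i][j] for j in range(k)] for i in I]) > 0
               for I in combinations(range(3), k))
assert all(det([[G[i][j] for j in range(2)] for i in I]) > 0
           for I in combinations(range(4), 2))
\end{verbatim}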
\begin{rmk}\label{lusztig_definition}
Lusztig's original definition of $\PFl{K}{n}^{>0}$ is slightly different than the one we give in \cref{defn_totally_positive}, but is equivalent. Namely, let $\operatorname{N}_n(\mathbb{R})$ be the subset of $\GL_n(\mathbb{R})$ of all upper-triangular matrices with $1$'s on the diagonal. We define $\operatorname{N}_n^{>0}$ to be the subset of $\operatorname{N}_n(\mathbb{R})$ of matrices whose minors are all positive, except for those which are zero due to upper triangularity. We similarly define $(\operatorname{N}_n^-)^{>0}$ to be the transpose of $\operatorname{N}_n^{>0}$. Also, let $\H_n^{>0}$ denote the subset of $\GL_n(\mathbb{R})$ of diagonal matrices with positive diagonal entries. Then Lusztig \cite[Section 8]{lusztig94} defines $\PFl{K}{n}^{>0}$ to be the image of $(\operatorname{N}_n^-)^{>0}$ inside $\PFl{K}{n}(\mathbb{R})$. This is equal to the image of $\GL_n^{>0}$, because
\begin{align}\label{LDU_decomposition}
\GL_n^{>0} = (\operatorname{N}_n^-)^{>0} \cdot \H_n^{>0} \cdot \operatorname{N}_n^{>0},
\end{align}
and $\H_n^{>0}\cdot\operatorname{N}_n^{>0} \subseteq \P{K}{n}(\mathbb{R})$. (In fact, Lusztig takes \eqref{LDU_decomposition} to hold by definition; see \cite[Section 2.12]{lusztig94}.) The decomposition \eqref{LDU_decomposition} is a result of Cryer \cite[Theorem 1.1]{cryer73}; we refer to \cite[Chapter 2]{fallat_johnson11} for further discussion and references.
\end{rmk}
\begin{rmk}\label{proj_tnn_GL_to_Fl}
A real matrix is called {\itshape totally nonnegative} if all its minors are nonnegative. Every totally nonnegative matrix in $\GL_n(\mathbb{R})$ represents a totally nonnegative flag in $\PFl{K}{n}(\mathbb{R})$. However, not every element of $\PFl{K}{n}^{\ge 0}$ is represented by a totally nonnegative matrix, unless $K = \emptyset$. For example, the element $\scalebox{0.8}{$\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$}\in\Fl_2^{\ge 0}$ cannot be represented by a totally nonnegative matrix.
\end{rmk}
We now recall two classical results from the theory of total positivity.
\begin{lem}\label{identity_perturbation}
There exists a continuous function $f : \mathbb{R}_{\ge 0} \to \GL_n(\mathbb{R})$ such that $f(0) = I_n$ and $f(t)\in\GL_n^{>0}$ for all $t > 0$.
\end{lem}
\begin{proof}
By \cite[II.3.(24)]{gantmaher_krein50} or \cite[(2)]{whitney52} (cf.\ \cite[Problem V.76]{polya_szego25}), we may take $f(t) := (t^{(i-j)^2})_{1 \le i,j \le n}$. Alternatively, by \cite[Theorem 3.3.4]{karlin68}, we may take $f(t) := \exp(tA)$, where $A$ is any tridiagonal $n\times n$ matrix whose entries immediately above and below the diagonal are positive.
\end{proof}
\begin{thm}[{Fekete \cite{fekete_polya12}; see \cite[Lemma 2.1]{pinkus10}}]\label{fekete}
Let $A$ be an $m\times n$ matrix, where $m\ge n$. Suppose that all left-justified $(n-1)\times (n-1)$ minors of $A$ are positive, and that all $n\times n$ minors of $A$ using consecutive rows are positive. Then all $n\times n$ minors of $A$ are positive.
\end{thm}
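To illustrate \cref{fekete} (a demonstration on one instance, not a proof), the following Python sketch checks both hypotheses and the conclusion for a $5\times 3$ Vandermonde matrix, whose minors are classically known to be positive for increasing positive nodes:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def minor(A, rows, cols):
    return det([[A[i][j] for j in cols] for i in rows])

m, n = 5, 3
A = [[(i + 1) ** j for j in range(n)] for i in range(m)]      # Vandermonde
hyp1 = all(minor(A, I, range(n - 1)) > 0
           for I in combinations(range(m), n - 1))            # left-justified
hyp2 = all(minor(A, range(i, i + n), range(n)) > 0
           for i in range(m - n + 1))                         # consecutive rows
conc = all(minor(A, I, range(n)) > 0 for I in combinations(range(m), n))
assert hyp1 and hyp2 and conc
\end{verbatim}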
The following two results are duality and restriction statements for totally nonnegative Grassmannians. See \cite[Section 3]{ardila_rincon_williams16} for closely related results. We note that \cref{perpendicular_pluckers} follows from Jacobi's formula for the matrix inverse; we refer to \cite{karp17} for further discussion and references.
\begin{lem}[{\cite[Lemma 1.11(ii)]{karp17}}]\label{perpendicular_pluckers}
Define the bilinear pairing $\langle\cdot,\cdot\rangle$ on $\mathbb{R}^n$ by
$$
\langle v,w\rangle := v_1w_1 - v_2w_2 + v_3w_3 - \cdots + (-1)^{n-1}v_nw_n.
$$
Given $V\in\Gr_{k,n}(\mathbb{R})$, let $V^\perp := \{w\in\mathbb{R}^n : \langle v,w\rangle = 0 \text{ for all } v\in V\} \in \Gr_{n-k,n}(\mathbb{R})$. Then
$$
\Delta_I(V) = \Delta_{[n]\setminus I}(V^\perp) \quad \text{ for all } I\in\textstyle\binom{[n]}{k}.
$$
In particular, $\cdot^\perp$ defines bijections $\Gr_{k,n}^{\Delta >0}\leftrightarrow\Gr_{n-k,n}^{\Delta >0}$ and $\Gr_{k,n}^{\Delta \ge 0}\leftrightarrow\Gr_{n-k,n}^{\Delta \ge 0}$.
\end{lem}
\begin{eg}\label{eg_perpendicular_pluckers}
Let $V\in\Gr_{2,4}(\mathbb{R})$ be represented by the matrix
$$
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
a & b \\
c & d
\end{bmatrix}.
$$
Then $V^\perp\in\Gr_{2,4}(\mathbb{R})$ is represented by the matrix
\begin{gather*}
\begin{bmatrix}
-a & c \\
b & -d \\
1 & 0 \\
0 & 1
\end{bmatrix}.\qedhere
\end{gather*}
\end{eg}
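Both assertions of \cref{perpendicular_pluckers} can be confirmed on this example. The Python sketch below (arbitrary integer values for $a,b,c,d$) checks that the pairing $\langle\cdot,\cdot\rangle$ vanishes between the columns of the two matrices, and that complementary Pl\"{u}cker coordinates agree:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

a, b, c, d = 2, 3, 5, 7                       # arbitrary values
V = [[1, 0], [0, 1], [a, b], [c, d]]          # columns span V
W = [[-a, c], [b, -d], [1, 0], [0, 1]]        # columns span V^perp
sign = [1, -1, 1, -1]                         # signs in the pairing <v,w>
for u in range(2):
    for w in range(2):                        # orthogonality of all columns
        assert sum(sign[i] * V[i][u] * W[i][w] for i in range(4)) == 0
for I in combinations(range(4), 2):           # Delta_I(V) = Delta_{[n]-I}(V^perp)
    Ic = tuple(i for i in range(4) if i not in I)
    dV = det([[V[i][j] for j in range(2)] for i in I])
    dW = det([[W[i][j] for j in range(2)] for i in Ic])
    assert dV == dW
\end{verbatim}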
\begin{lem}\label{tnn_restriction}
Let $V\in\Gr_{k,n}^{\Delta\ge 0}$, and let $m \le n$. Define $W := V \cap \spn(e_1, \dots, e_m)$, and let $d := \dim(W)$. Then $W\in\Gr_{d,m}^{\Delta\ge 0}$.
\end{lem}
\begin{proof}
Take an $m\times d$ matrix $B$ representing $W$. Then there exists an $n\times k$ matrix $A$ representing $V$ of the form
$$
A = \begin{bmatrix}
B & \ast \\
0 & C
\end{bmatrix},
$$
where $C$ is an $(n-m)\times (k-d)$ matrix of rank $k-d$. Since $V\in\Gr_{k,n}^{\Delta\ge 0}$, all nonzero $k\times k$ minors of $A$ have the same sign. We then see that all nonzero $d\times d$ minors of $B$ have the same sign, so $W\in\Gr_{d,m}^{\Delta\ge 0}$.
\end{proof}
\section{Lusztig's total positivity vs.\ Pl\"{u}cker positivity}\label{sec_converse}
\noindent In this section we prove \cref{converse}. Our argument will be based on the preliminary results \cref{topology}, \cref{tp_extension}, and \cref{tnn_counterexample}.
\begin{lem}\label{converse_complete}
For $n\in\mathbb{N}$, we have $\Fl_n^{>0} = \Fl_n^{\Delta>0}$.
\end{lem}
\begin{proof}
We know that $\Fl_n^{>0} \subseteq \Fl_n^{\Delta >0}$. Conversely, let $V\in\Fl_n^{\Delta > 0}$. Then there exists an $n\times n$ matrix $A$ representing $V$ whose left-justified minors are all positive. After performing left-to-right column operations, we may assume that $A$ is lower-triangular. For $t > 0$, define
$$
g := A\Diag{t^{n-1},\dots,t,1}\transpose{A}\in\GL_n(\mathbb{R}),
$$
which also represents $V$. We claim that $g\in\GL_n^{>0}$ for all $t$ sufficiently large, whence $V\in\Fl_n^{>0}$, completing the proof. (In fact, $g\in\GL_n^{>0}$ for all $t > 0$, by \eqref{LDU_decomposition}.) To see this, note that for $1 \le k \le n$ and $I,J\in\binom{[n]}{k}$, by \eqref{cauchy-binet} we have that $\Delta_{I,J}(g)$ equals
\begin{multline*}
\sum_{K\in\binom{[n]}{k}}\Delta_{I,K}(A)\Delta_{J,K}(A)t^{kn-\sumof{K}} \\
= \Delta_{I,[k]}(A)\Delta_{J,[k]}(A)t^{kn - \binom{k+1}{2}} + \text{lower order terms}
\end{multline*}
as $t\to\infty$.
\end{proof}
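The claim in this proof can be illustrated numerically. In the Python sketch below (an illustration only; the lower-triangular matrix $A$ is an arbitrary choice whose left-justified minors are positive), every minor of $g = A\Diag{t^2,t,1}\transpose{A}$ is verified to be positive for several values of $t$, consistent with the parenthetical remark above:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return det([[M[i][j] for j in cols] for i in rows])

A = [[1, 0, 0], [1, 1, 0], [1, 2, 1]]         # positive left-justified minors
for t in (1, 2, 10):
    D = [[t ** 2, 0, 0], [0, t, 0], [0, 0, 1]]
    AD = [[sum(A[i][k] * D[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    g = [[sum(AD[i][k] * A[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]                   # g = A D A^T
    assert all(minor(g, I, J) > 0
               for k in (1, 2, 3)
               for I in combinations(range(3), k)
               for J in combinations(range(3), k))
\end{verbatim}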
\begin{lem}\label{matrix_action}
Let $K\subseteq [n-1]$, and let $g\in\GL_n^{>0}$.
\begin{enumerate}[label=(\roman*), leftmargin=*, itemsep=2pt]
\item\label{matrix_action_lusztig} For all $V\in\PFl{K}{n}^{\ge 0}$, we have $g\cdot V\in\PFl{K}{n}^{>0}$.
\item\label{matrix_action_plucker} For all $V\in\PFl{K}{n}^{\Delta\ge 0}$, we have $g\cdot V\in\PFl{K}{n}^{\Delta >0}$.
\end{enumerate}
\end{lem}
\begin{proof}
Part \ref{matrix_action_plucker} follows from the Cauchy--Binet identity \eqref{cauchy-binet}. For part \ref{matrix_action_lusztig}, by \eqref{defn_tnn_Fl_surjections}, it suffices to prove the result for the complete flag variety (when $K = [n-1]$). Since $\Fl_n^{\ge 0} \subseteq \Fl_n^{\Delta \ge 0}$, this case follows from part \ref{matrix_action_plucker} and \cref{converse_complete}.
\end{proof}
\begin{prop}\label{topology}
Let $K\subseteq [n-1]$.
\begin{enumerate}[label=(\roman*), leftmargin=*, itemsep=2pt]
\item\label{topology_lusztig} $\PFl{K}{n}^{\ge 0}$ is the closure of $\PFl{K}{n}^{>0}$, and $\PFl{K}{n}^{>0}$ is the interior of $\PFl{K}{n}^{\ge 0}$.
\item\label{topology_plucker} $\PFl{K}{n}^{\Delta \ge 0}$ is the closure of $\PFl{K}{n}^{\Delta >0}$, and $\PFl{K}{n}^{\Delta >0}$ is the interior of $\PFl{K}{n}^{\Delta \ge 0}$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $f : \mathbb{R}_{\ge 0} \to \GL_n(\mathbb{R})$ be as in \cref{identity_perturbation}.
\ref{topology_lusztig} By definition, $\PFl{K}{n}^{\ge 0}$ is the closure of $\PFl{K}{n}^{>0}$. Also, $\PFl{K}{n}^{>0}$ is open, so it is contained in the interior of $\PFl{K}{n}^{\ge 0}$. It remains to show that the interior of $\PFl{K}{n}^{\ge 0}$ is contained in $\PFl{K}{n}^{>0}$. To see this, let $V\in\PFl{K}{n}^{\ge 0}\setminus\PFl{K}{n}^{>0}$. We claim that $f(t)^{-1}\cdot V$ is not in $\PFl{K}{n}^{\ge 0}$ for all $t > 0$, whence $V$ is not in the interior of $\PFl{K}{n}^{\ge 0}$. Indeed, if $f(t)^{-1}\cdot V\in\PFl{K}{n}^{\ge 0}$ with $t > 0$, then by \cref{matrix_action}\ref{matrix_action_lusztig} we obtain
$$
V = f(t)\cdot (f(t)^{-1}\cdot V) \in \PFl{K}{n}^{>0},
$$
a contradiction.
\ref{topology_plucker} Note that the closure of $\PFl{K}{n}^{\Delta >0}$ is contained in $\PFl{K}{n}^{\Delta \ge 0}$. Conversely, given $V\in\PFl{K}{n}^{\Delta\ge 0}$, we have $V = \lim_{t\to 0,\hspace*{1pt} t>0}f(t)\cdot V$, and $f(t)\cdot V\in\PFl{K}{n}^{\Delta >0}$ for $t > 0$ by \cref{matrix_action}\ref{matrix_action_plucker}. Therefore $\PFl{K}{n}^{\Delta \ge 0}$ is the closure of $\PFl{K}{n}^{\Delta >0}$. The fact that $\PFl{K}{n}^{\Delta >0}$ is the interior of $\PFl{K}{n}^{\Delta \ge 0}$ follows from a similar argument as in the proof of part \ref{topology_lusztig}.
\end{proof}
\begin{lem}\label{tp_extension}
Let $V\in\Gr_{k,n}^{\Delta >0}$, where $1 \le k \le n-1$.
\begin{enumerate}[label=(\roman*), leftmargin=*, itemsep=2pt]
\item\label{tp_extension_plus} There exists $W\in\Gr_{k+1,n}^{\Delta >0}$ such that $V\subseteq W$.
\item\label{tp_extension_minus} There exists $W\in\Gr_{k-1,n}^{\Delta >0}$ such that $W\subseteq V$.
\end{enumerate}
\end{lem}
\begin{proof}
\ref{tp_extension_plus} Take an $n\times k$ matrix $A$ representing $V$ whose $k\times k$ minors are all positive. Let $B := \begin{bmatrix}A \hspace*{2pt}|\hspace*{2pt} w\end{bmatrix}$ denote the $n\times (k+1)$ matrix formed by concatenating $A$ and the vector $w\in\mathbb{R}^n$, where we define $w$ as follows. We set $w_1, \dots, w_k := 0$, and for $i = k+1, \dots, n$, we take $w_i > 0$ to be sufficiently large that the minor $\Delta_{[i-k,i],[k+1]}(B)$ is positive. By \cref{fekete}, all $(k+1)\times (k+1)$ minors of $B$ are positive. Therefore we may define $W$ to be the column span of $B$.
\ref{tp_extension_minus} This follows by applying part \ref{tp_extension_plus} to $V^\perp$, using \cref{perpendicular_pluckers}.
\end{proof}
\begin{lem}\label{tnn_counterexample}
Let $V\in\Gr_{k,n}^{\Delta\ge 0}$ and $W\in\Gr_{k+1,n}^{\Delta\ge 0}$ such that $V\subseteq W$. If $e_1 + ce_n\in V$ for some $c\in\mathbb{R}$, then $e_1 \in W$.
\end{lem}
\begin{proof}
If $e_1\in V$, then $e_1\in W$. Now suppose that $e_1\notin V$ and $e_1 + ce_n\in V$ for some $c\in\mathbb{R}$, so that $c\neq 0$ and $e_n\notin V$. Let $A$ denote the $n\times k$ matrix representing $V$ in reduced column echelon form. Let $I\in\binom{[n]}{k}$ index the rows containing the pivot $1$'s of $A$; equivalently, $I$ is lexicographically minimal such that $\Delta_I(V)\neq 0$. Since $e_1 + ce_n\in V$ and $e_n\notin V$, we have $1\in I$ and $n\notin I$, and the first column of $A$ is $e_1 + ce_n$. Therefore $\Delta_I(A) = 1$ and $\Delta_{(I\setminus\{1\})\cup\{n\}}(A) = (-1)^{k-1}c$. Since $V$ is Pl\"{u}cker nonnegative, we get that $(-1)^{k-1}c > 0$.
Now take $w\in W\setminus V$ such that $w_i = 0$ for all $i\in I$. Let $B := \begin{bmatrix}A \hspace*{2pt}|\hspace*{2pt} w\end{bmatrix}$ denote the $n\times (k+1)$ matrix representing $W$, formed by concatenating $A$ and $w$. Then for $i\in [n-1]\setminus I$, there exists $\epsilon\in\{1,-1\}$ such that $\Delta_{I\cup\{i\}}(B) = \epsilon w_i$ and $\Delta_{(I\setminus\{1\})\cup\{i,n\}}(B) = \epsilon(-1)^kcw_i$. Since $W$ is Pl\"{u}cker nonnegative and $(-1)^{k-1}c > 0$, we get that $w_i = 0$. Therefore $w$ is a nonzero scalar multiple of $e_n$. Since $e_1 + ce_n\in W$, we obtain $e_1\in W$.
\end{proof}
\begin{proof}[Proof of \cref{converse}]
\ref{converse_tp} $\Leftrightarrow$ \ref{converse_tnn}: This follows from \cref{topology}.
\ref{converse_consecutive} $\Rightarrow$ \ref{converse_tp}: Suppose that $K = [k,l]$. Recall that $\PFl{K}{n}^{>0}\subseteq\PFl{K}{n}^{\Delta >0}$. Conversely, we must show that given $V = (V_k, \dots, V_l)\in\PFl{K}{n}^{\Delta >0}$, we have $V\in\PFl{K}{n}^{>0}$. By repeatedly applying \cref{tp_extension}, there exist $V_i\in\Gr_{i,n}^{\Delta >0}$ for $i = l+1, \dots, n$ and $i = k-1, \dots, 1$ such that $V_1 \subset \cdots \subset V_{n-1}$. Let $W := (V_1, \dots, V_{n-1})\in\Fl_n(\mathbb{R})$. Then $W\in\Fl_n^{\Delta >0}$, so $W\in\Fl_n^{>0}$ by \cref{converse_complete}. Then by \eqref{defn_tnn_Fl_surjections}, we get $V\in\PFl{K}{n}^{>0}$.
\ref{converse_tnn} $\Rightarrow$ \ref{converse_consecutive}: We prove the contrapositive. Suppose that $K$ does not consist of consecutive integers, so that there exist consecutive elements $k<l$ of $K$ with $l-k \ge 2$. Define the element $V = (V_i)_{i\in K}$ of $\PFl{K}{n}(\mathbb{R})$ as follows:
$$
V_i :=
\begin{cases}
\spn(e_5, e_6, \dots, e_{i+4}), & \text{ if $i < k$}; \\
\spn(e_1 + e_4, e_5, e_6, \dots, e_{k+3}), & \text{ if $i = k$}; \\
\spn(e_1 + e_4, e_2, e_3, e_5, e_6, \dots, e_{i+1}), & \text{ if $i \ge l$}.
\end{cases}
$$
That is, $V$ is represented by the $n\times (n-1)$ matrix
$$
A := \begin{bmatrix}
0 & (-1)^{k-1}B & 0 \\
I_{k-1} & 0 & 0 \\
0 & 0 & I_{n-k-3}
\end{bmatrix}, \quad \text{ where } \quad B := \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{bmatrix}.
$$
Note that all left-justified minors of $A$ are nonnegative, except for a certain minor of order $k+1$. Since $k+1\notin K$, we get that $V\in\PFl{K}{n}^{\Delta\ge 0}$.
We claim that $V\notin\PFl{K}{n}^{\ge 0}$, which implies that $\PFl{K}{n}^{\ge 0} \neq \PFl{K}{n}^{\Delta \ge 0}$. Indeed, suppose otherwise that $V\in\PFl{K}{n}^{\ge 0}$. Then by \eqref{defn_tnn_Fl_surjections}, we can extend $V$ to a complete flag $(V_1, \dots, V_{n-1})\in\Fl_n^{\ge 0}$. For $1 \le i \le n-1$, define $W_i := V_i\cap\spn(e_1, e_2, e_3, e_4)$. Let $d_i := \dim(W_i)$, so that $W_i\in\Gr_{d_i,4}^{\Delta\ge 0}$ by \cref{tnn_restriction}. Note that $W_k = \spn(e_1 + e_4)$ and $W_l = \spn(e_1 + e_4, e_2, e_3)$. Since the sequence $d_k, d_{k+1}, \dots, d_l$ increases by $0$ or $1$ at each step, and $d_k = 1$ and $d_l = 3$, there exists $j\in [k,l]$ such that $d_j = 2$. Applying \cref{tnn_counterexample} to $W_k$ and $W_j$, we get $e_1\in W_j$. Since $W_j\subset W_l$, this implies $e_1\in W_l$, a contradiction.
\end{proof}
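For the smallest instance of this construction, namely $n = 4$ and $K = \{1,3\}$ (so $k = 1$ and $l = 3$, and $A$ reduces to the $4\times 3$ matrix $B$), the claimed sign pattern of the left-justified minors can be checked directly, as in the following illustrative Python sketch:
\begin{verbatim}
from itertools import combinations

def det(M):
    if len(M) == 1: return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]    # k = 1, l = 3, n = 4
def lj_minors(k):
    return [det([[A[i][j] for j in range(k)] for i in I])
            for I in combinations(range(4), k)]
assert all(x >= 0 for x in lj_minors(1))      # order 1: nonnegative
assert all(x >= 0 for x in lj_minors(3))      # order 3: nonnegative
assert sum(x < 0 for x in lj_minors(2)) == 1  # one order-2 minor is negative,
                                              # but 2 is not in K
\end{verbatim}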
\section{Cyclic symmetry}\label{sec_cyclic}
\noindent In this section, we prove \cref{cyclic}.
\begin{proof}[Proof of \cref{cyclic}]
We claim that it suffices to construct $W\in\PFl{K}{n}^{\ge 0}$ such that $\sigma_\epsilon(W)\notin\PFl{K}{n}^{\ge 0}$. Indeed, let $f : \mathbb{R}_{\ge 0} \to \GL_n(\mathbb{R})$ be as in \cref{identity_perturbation}, so that by \cref{matrix_action}\ref{matrix_action_lusztig}, we have $f(t)\cdot W\in\PFl{K}{n}^{>0}$ for all $t > 0$. If $\sigma_\epsilon(f(t)\cdot W)\in\PFl{K}{n}^{\ge 0}$ for all $t > 0$, then taking $t \to 0$ we obtain $\sigma_\epsilon(W)\in\PFl{K}{n}^{\ge 0}$, a contradiction. Therefore there exists $t > 0$ such that $\sigma_\epsilon(f(t)\cdot W)\notin\PFl{K}{n}^{\ge 0}$, whence we may take $V := f(t)\cdot W$.
Now we construct such a $W = (W_i)_{i\in K} \in \PFl{K}{n}(\mathbb{R})$. Fix any two elements $k < l$ of $K$. We set
$$
W_i :=
\begin{cases}
\spn(e_3, e_4, \dots, e_{i+2}), & \text{ if $i < k$}; \\
\spn(e_1 + e_2, e_3, e_4, \dots, e_{i+1}), & \text{ if $i \ge k$}.
\end{cases}
$$
That is, $W$ is represented by the $n\times (n-1)$ matrix
$$
A := \begin{bmatrix}
0 & (-1)^{k-1} & 0 \\
0 & (-1)^{k-1} & 0 \\
I_{k-1} & 0 & 0 \\
0 & 0 & I_{n-k-1}
\end{bmatrix}.
$$
Note that all left-justified minors of $A$ are nonnegative, so $A$ represents an element of $\Fl_n^{\Delta\ge 0}$. By \cref{converse} we have $\Fl_n^{\Delta\ge 0} = \Fl_n^{\ge 0}$, so by \eqref{defn_tnn_Fl_surjections}, we get $W\in\PFl{K}{n}^{\ge 0}$.
Let $X = (X_i)_{i\in K}$ denote the left cyclic shift $\sigma_\epsilon(W)$. Note that
$$
X_k = \spn((-1)^{\epsilon-1}e_n + e_1, e_2, \dots, e_k) \quad \text{ and } \quad X_l = \spn((-1)^{\epsilon-1}e_n + e_1, e_2, \dots, e_l).
$$
Now proceed by contradiction and suppose that $X\in\PFl{K}{n}^{\ge 0}$. Then by \eqref{defn_tnn_Fl_surjections}, we can extend $X$ to a complete flag $(X_1, \dots, X_{n-1})\in\Fl_n^{\ge 0}$. Applying \cref{tnn_counterexample} to $X_k$ and $X_{k+1}$, we get $e_1\in X_{k+1}$. Since $X_{k+1}\subseteq X_l$, this implies $e_1\in X_l$, a contradiction.
\end{proof}
\section{Cell decomposition vs.\ matroid decomposition}\label{sec_decompositions}
\noindent In this section, we prove \cref{decompositions}. We begin by recalling some background in \cref{sec_decompositions_background}. We then give two proofs of the forward direction of \cref{decompositions} in \cref{sec_decompositions_forward}, and prove the reverse direction in \cref{sec_decompositions_reverse}. Throughout this section, we fix $n\in\mathbb{N}$, and let $W$ denote the symmetric group $\mathfrak{S}_n$ of all permutations of $[n]$. Also, $J$ and $K$ will denote complementary subsets of $[n-1]$.
\subsection{Background on Coxeter combinatorics}\label{sec_decompositions_background}
We recall some background on the combinatorics of the Coxeter group $W = \mathfrak{S}_n$; we refer to \cite{bjorner_brenti05} for further details.
\begin{defn}[{\cite[Chapter 2]{bjorner_brenti05}}]\label{defn_bruhat_order}
For $1 \le i \le n-1$, let $s_i := (i \hspace*{8pt} i+1) \in W$ be the simple transposition which exchanges $i$ and $i+1$, and let $e\in W$ denote the identity permutation. Given $w\in W$, a {\itshape reduced word} $\mathbf{w}$ for $w$ is a word in $s_1, \dots, s_{n-1}$ of minimal length whose product is $w$. Each reduced word for $w$ has the same number of letters, called the {\itshape length $\ell(w)$} of $w$, which is equal to the number of inversions of $w$. Any two reduced words for $w$ are related by a sequence of moves of the following form:
\begin{enumerate}[label={(M\arabic*)}, leftmargin=36pt, itemsep=2pt]
\item\label{move_commutation} $s_is_j = s_js_i$ for $1 \le i,j \le n-1$ with $|i-j| \ge 2$; and
\item\label{move_braid} $s_is_{i+1}s_i = s_{i+1}s_is_{i+1}$ for $1 \le i \le n-2$.
\end{enumerate}
In particular, if $s_i$ appears in some reduced word for $w$, then it appears in every reduced word for $w$.
The {\itshape (strong) Bruhat order $\le$} on $W$ is defined as follows: $v \le w$ if and only if for some (or equivalently, for every) reduced word $\mathbf{w}$ for $w$, there exists a reduced word $\mathbf{v}$ for $v$ which is a subword of $\mathbf{w}$. The Bruhat order is graded with rank function $\ell$. The Bruhat order on $W = \mathfrak{S}_3$ is shown in \cref{figure_S3}.
\end{defn}
\begin{figure}[ht]
\begin{center}
$$
\begin{tikzpicture}[baseline=(current bounding box.center),scale=1.0]
\pgfmathsetmacro{\s}{1.0};
\pgfmathsetmacro{\hd}{1.20};
\pgfmathsetmacro{\vd}{0.96};
\pgfmathsetmacro{\is}{1.68};
\node[inner sep=\is](123)at(0,0){\scalebox{\s}{$123$}};
\node[inner sep=\is](213)at($(123)+(-\hd,\vd)$){\scalebox{\s}{$213$}};
\node[inner sep=\is](132)at($(123)+(\hd,\vd)$){\scalebox{\s}{$132$}};
\node[inner sep=\is](312)at($(213)+(0,\vd)$){\scalebox{\s}{$312$}};
\node[inner sep=\is](231)at($(132)+(0,\vd)$){\scalebox{\s}{$231$}};
\node[inner sep=\is](321)at($(312)+(\hd,\vd)$){\scalebox{\s}{$321$}};
\path[semithick](123)edge(213) edge(132) (213)edge(312) edge(231) (132)edge(312) edge(231) (312)edge(321) (231)edge(321);
\end{tikzpicture}
$$
\caption{The Hasse diagram of the Bruhat order on $W = \mathfrak{S}_3$.}
\label{figure_S3}
\end{center}
\end{figure}
\begin{eg}\label{eg_bruhat_order}
Let $w := 5214763 \in W = \mathfrak{S}_7$. Then $\ell(w) = 9$, and a reduced word for $w$ is $\mathbf{w} = s_1s_3s_4s_3s_2s_1s_5s_6s_5$.
\end{eg}
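These data are easy to recompute. In the Python sketch below, right multiplication by $s_i$ swaps the entries in positions $i$ and $i+1$ of the one-line notation, and the length is computed as the number of inversions:
\begin{verbatim}
def rmult(w, i):                 # right multiplication by s_i (1-indexed)
    w = list(w); w[i - 1], w[i] = w[i], w[i - 1]; return tuple(w)

def length(w):                   # number of inversions
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

w = tuple(range(1, 8))           # start from the identity in S_7
for i in [1, 3, 4, 3, 2, 1, 5, 6, 5]:
    w = rmult(w, i)
assert w == (5, 2, 1, 4, 7, 6, 3) and length(w) == 9
\end{verbatim}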
We will need the following property of reduced words:
\begin{lem}[{\cite[Corollary 1.4.6(ii)]{bjorner_brenti05}}]\label{word_ending}
Let $w\in W$ and $1 \le i \le n-1$. If $\ell(ws_i) < \ell(w)$, then $w$ has a reduced word which ends in $s_i$.
\end{lem}
We define parabolic subgroups and quotients of $W$.
\begin{defn}[{\cite[Section 2.4]{bjorner_brenti05}}]\label{defn_parabolic}
Given $J\subseteq [n-1]$, let $W_J := \langle s_j : j\in J\rangle$ be the subgroup of $W$ generated by the simple transpositions indexed by $J$, called a {\itshape parabolic subgroup}. Equivalently, $W_J$ consists of the elements of $W$ which setwise fix the intervals $[1,k_1], [k_1 + 1, k_2], \dots, [k_l + 1,n]$, where $[n-1]\setminus J = \{k_1 < \cdots < k_l\}$.
Let $W^J$ denote the set of minimal-length coset representatives of the parabolic quotient $W/W_J$. Explicitly, we have
$$
W^J = \{w\in\mathfrak{S}_n : w(j) < w(j+1) \text{ for all } j\in J\}.
$$
Each $w\in W$ has a unique factorization $w = w^Jw_J$ such that $w^J\in W^J$ and $w_J\in W_J$; this factorization is length-additive. In particular, $w^J$ is the minimal-length coset representative of $w$ modulo $W_J$.
\end{defn}
\begin{eg}\label{eg_parabolic_quotient}
Let $w := 5214763 \in W = \mathfrak{S}_7$, as in \cref{eg_bruhat_order}, and let $J := \{1,2,4,6\}$. Then $w^J = 1254736 = s_3s_4s_3s_6s_5$ and $w_J = 3214576 = s_1s_2s_1s_6$.
\end{eg}
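The factorization can be verified mechanically. The following Python sketch (permutations in one-line notation, composed via $(uv)(i) = u(v(i))$) checks $w = w^Jw_J$, the length-additivity, and the defining property of $W^J$:
\begin{verbatim}
def length(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def compose(u, v):               # (uv)(i) = u(v(i))
    return tuple(u[v[i] - 1] for i in range(len(v)))

w   = (5, 2, 1, 4, 7, 6, 3)
wJ  = (3, 2, 1, 4, 5, 7, 6)      # w_J = 3214576
wuJ = (1, 2, 5, 4, 7, 3, 6)      # w^J = 1254736
assert compose(wuJ, wJ) == w
assert length(wuJ) + length(wJ) == length(w)           # length-additive
assert all(wuJ[j - 1] < wuJ[j] for j in [1, 2, 4, 6])  # w^J lies in W^J
\end{verbatim}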
We will need the following property of the parabolic factorization:
\begin{lem}[{\cite[Proposition 2.5.1]{bjorner_brenti05}}]\label{parabolic_order}
Let $J\subseteq [n-1]$, and let $v\le w$ in $W$. Then $v^J \le w^J$.
\end{lem}
We now recall the {\itshape Demazure product} and {\itshape downwards Demazure product}, appearing in work of He \cite[Lemma 3.3]{he07} and He and Lu \cite[Appendix A]{he_lu11}. We refer to \cite[Section 2.1]{he_lam15} for further discussion and references.
\begin{defn}[{\cite[Section 1.3]{he09}}]\label{defn_demazure}
There exist binary operations $\ast$ and $\triangleleft$ on $W$ defined by
$$
v\ast w := \max\{vx : x \le w\} \quad \text{ and } \quad v\triangleleft w := \min\{vx : x \le w\}
$$
for all $v,w\in W$. Equivalently,
$$
v \ast (s_{i_1} \cdots s_{i_l}) = (\cdots (v \ast s_{i_1}) \ast \cdots ) \ast s_{i_l} \quad \text{ and } \quad v \triangleleft (s_{i_1} \cdots s_{i_l}) = (\cdots (v \triangleleft s_{i_1}) \triangleleft \cdots ) \triangleleft s_{i_l}
$$
for all $v\in W$ and reduced words $s_{i_1} \cdots s_{i_l}\in W$, where
$$
v\ast s_i = \begin{cases}
vs_i, & \text{ if $\ell(vs_i) > \ell(v)$}; \\
v, & \text{ if $\ell(vs_i) < \ell(v)$}
\end{cases} \quad \text{ and } \quad
v\triangleleft s_i = \begin{cases}
v, & \text{ if $\ell(vs_i) > \ell(v)$}; \\
vs_i, & \text{ if $\ell(vs_i) < \ell(v)$}
\end{cases}
$$
for all $1 \le i \le n-1$. We call $\ast$ the {\itshape Demazure product} and $\triangleleft$ the {\itshape downwards Demazure product}.\footnote{Our operation $\triangleleft$ is the `mirror image' of He's $\triangleright$. We also caution that the symbol $\triangleleft$ is used in \cite{bjorner_brenti05} with a different meaning, namely, to denote a cover relation in the Bruhat order.}
\end{defn}
\begin{eg}\label{eg_demazure}
We have $s_1s_2s_3 \ast s_2s_3s_2 = s_1s_2s_3s_2$ and $s_1s_2s_3 \triangleleft s_2s_3s_2 = s_1$.
\end{eg}
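The folds above translate directly into code. The following Python sketch (same one-line conventions, with right multiplication by $s_i$ as a position swap) recomputes this example:
\begin{verbatim}
def rmult(w, i):
    w = list(w); w[i - 1], w[i] = w[i], w[i - 1]; return tuple(w)

def length(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def star(v, word):               # Demazure product v * w, via a reduced word
    for i in word:
        u = rmult(v, i)
        if length(u) > length(v): v = u
    return v

def tri(v, word):                # downwards Demazure product
    for i in word:
        u = rmult(v, i)
        if length(u) < length(v): v = u
    return v

v = (2, 3, 4, 1)                             # s1 s2 s3 in S_4
assert star(v, [2, 3, 2]) == (2, 4, 3, 1)    # s1 s2 s3 s2
assert tri(v, [2, 3, 2]) == (2, 1, 3, 4)     # s1
\end{verbatim}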
We will need the following property of the Demazure and downwards Demazure products:
\begin{lem}[{\cite[Corollary 1 and Lemma 2]{he09}}]\label{demazure_properites}
Let $v\le w$ in $W$. Then for all $x\in W$, we have $v\ast x \le w\ast x$ and $v\triangleleft x \le w\triangleleft x$.
\end{lem}
\subsection{Background on the cell and matroid decompositions}\label{sec_decompositions_algebra}
We recall the cell decomposition and matroid decomposition of $\PFl{K}{n}^{\ge 0}$, though we will mainly work with \cref{decompositions_equality} and \cref{decompositions_grassmannian}, rather than the definitions. We refer to \cite[Sections 6--7]{tsukerman_williams15} for further details.
\begin{defn}\label{defn_cell_decomposition}
Let $n\in\mathbb{N}$, and let $\operatorname{B}_n(\mathbb{R})$ and $\operatorname{B}^{-}_n(\mathbb{R})$ denote the subgroups of $\GL_n(\mathbb{R})$ of upper-triangular and lower-triangular matrices, respectively. For $w\in W$, let $\mathring{w}\in\GL_n(\mathbb{R})$ be any signed permutation matrix corresponding to $w$, i.e., $\mathring{w}_{i,j} = \pm\delta_{i,w(j)}$ for $1 \le i,j \le n$. Given $v,w\in W$ such that $v\le w$, we define the {\itshape (totally nonnegative) Richardson cell}
$$
\cell{v}{w} := (\operatorname{B}^{-}_n(\mathbb{R})\cdot\mathring{v}) \cap (\operatorname{B}_n(\mathbb{R})\cdot\mathring{w}) \cap \Fl_n^{\ge 0},
$$
which is the intersection inside $\Fl_n^{\ge 0}$ of the opposite Schubert cell indexed by $v$ and the Schubert cell indexed by $w$.
Now let $J$ and $K$ be complementary subsets of $[n-1]$. Given $v\in W$ and $w\in W^J$ such that $v \le w$, we define the {\itshape (totally nonnegative) projected Richardson cell} $\cell[K]{v}{w} \subseteq \PFl{K}{n}^{\ge 0}$ to be the image of $\cell{v}{w} \subseteq \Fl_n^{\ge 0}$ under the projection map \eqref{defn_tnn_Fl_surjections}. Rietsch \cite{rietsch99,rietsch06b} showed that $\cell[K]{v}{w}$ is homeomorphic to an open ball of dimension $\ell(w) - \ell(v)$. We have the cell decomposition
$$
\PFl{K}{n}^{\ge 0} = \bigsqcup_{\substack{v\in W,\hspace*{2pt} w\in W^J,\\ v\le w}}\cell[K]{v}{w},
$$
where $\PFl{K}{n}^{>0}$ is the unique cell of maximum dimension.
\end{defn}
\begin{rmk}\label{cell_decomposition_remark}
Our definition of the cell decomposition of $\PFl{K}{n}^{\ge 0}$ is different from, but equivalent to, the definition of Rietsch \cite[Section 6]{rietsch06b}. We refer to \cite[Appendix]{he_lam15} and \cite[Remark 4.9]{galashin_karp_lam22} for further discussion.
\end{rmk}
\begin{defn}\label{defn_matroid_decomposition}
Let $K\subseteq [n-1]$. Given a tuple $M = (M_k)_{k\in K}$, where $M_k\subseteq\binom{[n]}{k}$ for $k\in K$, we define
$$
S_M := \{V\in\PFl{K}{n}^{\ge 0} : \text{for all $k\in K$ and $I\in\textstyle\binom{[n]}{k}$, we have } \Delta_I(V) \neq 0 \Leftrightarrow I\in M_k\}.
$$
If $S_M$ is nonempty, we call it a {\itshape (totally nonnegative) matroid stratum}. The {\itshape matroid decomposition} (or {\itshape Gelfand--Serganova decomposition}) of $\PFl{K}{n}^{\ge 0}$ is its decomposition into matroid strata; equivalently, it is the common refinement of the decompositions
$$
\PFl{K}{n}^{\ge 0} = \{V\in\PFl{K}{n}^{\ge 0} : \Delta_I(V)\neq 0\} \sqcup \{V\in\PFl{K}{n}^{\ge 0} : \Delta_I(V)=0\}
$$
for all Pl\"{u}cker coordinates $\Delta_I$.
\end{defn}
\begin{rmk}\label{generalized_plucker_remark}
There is a different, but equivalent, way to define Pl\"{u}cker positivity and the matroid decomposition for partial flag varieties $\PFl{K}{n}(\mathbb{R})$, using the {\itshape generalized Pl\"{u}cker coordinates} of Gelfand and Serganova \cite{gel'fand_serganova87}, rather than the Pl\"{u}cker coordinates of \cref{defn_Fl}. Namely, let $K = \{k_1 < \cdots < k_l\}\subseteq [n-1]$. Given a tuple $I = (I_{k_1}, \dots, I_{k_l})$ such that $I_{k_1} \subset \cdots \subset I_{k_l}$ and $I_k\in\binom{[n]}{k}$ for $k\in K$, define the {\itshape generalized Pl\"{u}cker coordinate}
$$
\Delta_I := \Delta_{I_{k_1}}\Delta_{I_{k_2}} \cdots \Delta_{I_{k_l}}.
$$
Then $V\in\PFl{K}{n}(\mathbb{R})$ is totally positive (respectively, totally nonnegative) if and only if all its generalized Pl\"{u}cker coordinates are positive (respectively, nonnegative). Also, the matroid decomposition of $\PFl{K}{n}^{\ge 0}$ is the common refinement of the decompositions
$$
\PFl{K}{n}^{\ge 0} = \{V\in\PFl{K}{n}^{\ge 0} : \Delta_I(V)\neq 0\} \sqcup \{V\in\PFl{K}{n}^{\ge 0} : \Delta_I(V)=0\}
$$
for all generalized Pl\"{u}cker coordinates $\Delta_I$. These results follow from \cite[Section 9.1]{gel'fand_serganova87} (cf.\ \cite[Chapter 1]{borovik_gelfand_white03}).
\end{rmk}
\begin{rmk}\label{matroid_decomposition_remark}
We use the name {\itshape matroid decomposition} because if $S_M$ is a matroid stratum of $\PFl{K}{n}^{\ge 0}$, then each $M_k$ is a {\itshape (representable) matroid} of rank $k$ on the ground set $[n]$ (in fact, $M_k$ is a {\itshape positroid} \cite{postnikov07,postnikov_speyer_williams09}). Moreover, $M$ itself is a {\itshape (representable) Coxeter matroid}; see \cite[Section 9.1]{gel'fand_serganova87} and \cite[Section 1.7]{borovik_gelfand_white03}.
\end{rmk}
We also recall two results which will be key to our arguments; we refer to \cref{sec_introduction} for further discussion.
\begin{thm}[{Tsukerman and Williams \cite[Theorem 7.1]{tsukerman_williams15}}]\label{decompositions_equality}
Let $J$ and $K$ be complementary subsets of $[n-1]$, and let $v\le w$, where $v\in W$ and $w\in W^J$. Then the cell $\cell[K]{v}{w}$ of $\PFl{K}{n}^{\ge 0}$ is contained in a single matroid stratum, which is uniquely determined by the interval $[v,w]$ modulo $W_J$.
\end{thm}
Note that \cref{decompositions_equality} implies that the cell decomposition of $\PFl{K}{n}^{\ge 0}$ is a refinement of its matroid decomposition.
\begin{thm}[{Postnikov \cite[Theorem 3.8]{postnikov07}, \cite{rietsch09}}]\label{decompositions_grassmannian}
For $0 \le k \le n$, the cell decomposition of $\Gr_{k,n}^{\ge 0}$ coincides with the matroid decomposition.
\end{thm}
\begin{rmk}\label{decompositions_remark}
While it will be sufficient for our purposes to work with the combinatorial statement of \cref{decompositions_equality}, we mention that it has the following geometric interpretation; see \cite[Sections 6--7]{tsukerman_williams15} for further details. The moment polytope of $\PFl{K}{n}(\mathbb{C})$ is a convex polytope in $\mathbb{R}^n$ whose vertices are indexed by $W^J$, or equivalently, by generalized Pl\"{u}cker coordinates of $\PFl{K}{n}(\mathbb{C})$ (see \cref{generalized_plucker_remark}). The moment polytope of $V\in\PFl{K}{n}(\mathbb{C})$ is contained in the moment polytope of $\PFl{K}{n}(\mathbb{C})$, and its vertices correspond precisely to the generalized Pl\"{u}cker coordinates which are nonzero at $V$ \cite[Proposition 5.1]{gel'fand_serganova87}. On the other hand, the set $W^J$ also indexes the zero-dimensional cells of $\PFl{K}{n}^{\ge 0}$, i.e., the cells $\cell[K]{x}{x}$ for $x\in W^J$. If $V\in \cell[K]{v}{w}$, then the zero-dimensional cells in the closure of $\cell[K]{v}{w}$ are precisely $\cell[K]{x^J}{x^J}$ for $x\in [v,w]$ \cite[Theorem 6.1]{rietsch06b}. \cref{decompositions_equality} can be rephrased as saying that the vertices of the moment polytope of $V\in \cell[K]{v}{w}$ are indexed precisely by the zero-dimensional cells in the closure of $\cell[K]{v}{w}$. Implicit in this statement is the fact that the moment polytope of $V$ is equal to the moment polytope of $\cell[K]{v}{w}$, even though the torus orbit of $V$ may have dimension much less than that of $\cell[K]{v}{w}$. This moment polytope is called a {\itshape Bruhat interval polytope}, denoted\footnote{We caution that \cite{tsukerman_williams15} uses the superscript $J$, rather than $K$.} by $\bip[K]{v}{w}$. We will make a further comment about $\bip[K]{v}{w}$ in \cref{minkowski_sum}.
\end{rmk}
\subsection{Proof of the forward direction}\label{sec_decompositions_forward}
In this subsection, we give two proofs of the forward direction of \cref{decompositions}. The first proof uses \cref{converse}, while the second proof uses \cref{decompositions_equality}.
For the first proof, we will need the following result of Rietsch \cite{rietsch98}; see \cite[Corollary 6.16]{bloch_karp1} for a stronger result.
\begin{lem}[{Rietsch \cite[Lemma 5.2]{rietsch98}}]\label{positive_connected}
Let $K\subseteq [n-1]$. Then $\PFl{K}{n}^{\Delta >0}$ is connected.
\end{lem}
\begin{proof}[Proof of the forward direction of \cref{decompositions}]
We prove the contrapositive. Suppose that $K$ does not consist of consecutive integers, so that by the implication \ref{converse_tp} $\Rightarrow$ \ref{converse_consecutive} of \cref{converse}, $\PFl{K}{n}^{>0}$ is strictly contained in $\PFl{K}{n}^{\Delta >0}$. By \cref{positive_connected}, $\PFl{K}{n}^{>0}$ is not closed in $\PFl{K}{n}^{\Delta >0}$. Hence there exists a point $V\in (\PFl{K}{n}^{\ge 0}\setminus\PFl{K}{n}^{>0})\cap\PFl{K}{n}^{\Delta >0}$. Then $V$ and the cell $\PFl{K}{n}^{>0}$ of $\PFl{K}{n}^{\ge 0}$ are contained in the same matroid stratum, namely, the one where all Pl\"{u}cker coordinates are nonzero.
\end{proof}
We now proceed to the second proof of the forward direction of \cref{decompositions}. It is based on the following lemma, which generalizes an example of Tsukerman and Williams \cite[Remark 7.3]{tsukerman_williams15}.
\begin{lem}\label{equal_cells}
Let $J := [2,n-2]$ and $K := \{1,n-1\}$, and let $w := (1 \hspace*{8pt} n)\in W^J$. Then for all $j\in J$, the intervals $[e,w]$ and $[s_j,w]$ are equal modulo $W_J$.
\end{lem}
\begin{proof}
Consider the reduced word
$$
\mathbf{w} := s_1s_2 \cdots s_{n-2}s_{n-1}s_{n-2} \cdots s_2s_1
$$
for $w$. In particular, we see that for $j\in J$, we indeed have $s_j \le w$. Note that $[s_j,w]\subseteq [e,w]$. Conversely, we must show that given $x\in [e,w]$, there exists $y\in [s_j,w]$ such that $x$ and $y$ are equal modulo $W_J$. Take a subword $\mathbf{x}$ of $\mathbf{w}$ which is a reduced word for $x$. We will construct a reduced subword $\mathbf{y}$ of $\mathbf{w}$ which contains $s_j$, and such that the associated permutation $y$ is equal to $x$ modulo $W_J$.
If $\mathbf{x}$ contains $s_j$, we set $\mathbf{y} := \mathbf{x}$. Now suppose that $\mathbf{x}$ does not contain $s_j$. Note that $\mathbf{w}$ contains two occurrences of $s_{j-1}$. Since $\mathbf{x}$ does not contain $s_j$, it does not contain both occurrences of $s_{j-1}$, since otherwise we could use moves \ref{move_commutation} to obtain $s_{j-1}^2$, contradicting the fact that $\mathbf{x}$ is reduced. Similarly, if $\mathbf{x}$ contains the second occurrence of $s_{j-1}$ in $\mathbf{w}$, we may replace it with the first occurrence of $s_{j-1}$. Now let $\mathbf{y}$ be obtained from $\mathbf{x}$ by including the second occurrence of $s_j$ in $\mathbf{w}$. Since $\mathbf{y}$ does not contain the second occurrence of $s_{j-1}$ in $\mathbf{w}$, we can use \ref{move_commutation} to move $s_j$ to the end of $\mathbf{y}$. That is, $y = xs_j$. This implies that $\mathbf{y}$ is reduced; otherwise, by \cref{word_ending}, $x$ would have a reduced word ending in $s_j$, whereas $\mathbf{x}$ (and hence every reduced word for $x$) does not contain $s_j$. Since $s_j\in W_J$, we see that $y$ equals $x$ modulo $W_J$.
\end{proof}
\begin{proof}[Proof of the forward direction of \cref{decompositions}]
We prove the contrapositive. Suppose that $K$ does not consist of consecutive integers, so that there exist consecutive elements $k<l$ of $K$ with $l-k \ge 2$. Let $w := (k \hspace*{8pt} l)\in W^J$. Then by \cref{equal_cells}, for all $j\in [k+1,l-1]$, the intervals $[e,w]$ and $[s_j,w]$ are equal modulo $W_J$. Hence by \cref{decompositions_equality}, the cells $\cell[K]{e}{w}$ and $\cell[K]{s_j}{w}$ of $\PFl{K}{n}^{\ge 0}$ are contained in the same matroid stratum.
\end{proof}
\subsection{Proof of the reverse direction}\label{sec_decompositions_reverse}
In this subsection, we prove the reverse direction of \cref{decompositions}. We first establish two preliminary results, which will allow us to reduce the proof to \cref{decompositions_grassmannian}.
\begin{lem}\label{demazure_reduction}
Let $v\le w$ in $W$, and let $J\subseteq [n-1]$. Set $v' := v\triangleleft w_J^{-1}\in W$ and $w' := w^J\in W^J$. Then $v' \le w'$, and the intervals $[v,w]$ and $[v',w']$ are equal modulo $W_J$.
\end{lem}
\begin{proof}
Note that since the factorization $w = w^Jw_J$ is length-additive, we have $w = w'\ast w_J$ and $w' = w\triangleleft w_J^{-1}$. In particular, $v' \le w'$ by \cref{demazure_properites}.
First we show that given $x\in [v,w]$, there exists $x'\in [v',w']$ such that $x$ and $x'$ are equal modulo $W_J$. We set $x' := x\triangleleft w_J^{-1}$. Since $w_J^{-1}\in W_J$, we see that $x'$ equals $x$ modulo $W_J$. Also, $x'\in [v',w']$ by \cref{demazure_properites}.
Conversely, we show that given $x'\in [v',w']$, there exists $x\in [v,w]$ such that $x$ and $x'$ are equal modulo $W_J$. We set $x := x' \ast w_J$. Since $w_J\in W_J$, we see that $x$ equals $x'$ modulo $W_J$. Also, by \cref{demazure_properites}, we have
$$
v \le v'\ast w_J \le x'\ast w_J \le w'\ast w_J = w,
$$
so $x\in [v,w]$.
\end{proof}
\begin{eg}\label{eg_demazure_reduction}
As in \cref{eg_parabolic_quotient}, we let $w := s_1s_3s_4s_3s_2s_1s_5s_6s_5$ and $J := \{1,2,4,6\}$. Take $v := s_1s_4s_3s_2s_1s_5$, so that $v \le w$. We set
$$
v' := v\triangleleft w_J^{-1} = s_1s_4s_3s_2s_1s_5 \triangleleft s_6s_1s_2s_1 = s_4s_3s_5
$$
and $w' := w^J = s_3s_4s_3s_6s_5$. Then \cref{demazure_reduction} asserts that the intervals $[v,w]$ and $[v',w']$ are equal modulo $W_J$. Indeed, we can verify that both intervals modulo $W_J$ are equal to
$$
\{s_4s_3s_5, s_3s_4s_3s_5, s_4s_3s_6s_5, s_3s_4s_3s_6s_5\},
$$
where above, we represent equivalence classes by elements of $W^J$.
\end{eg}
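This example can be checked by brute force over $\mathfrak{S}_7$. In the Python sketch below (illustrative; Bruhat comparisons use the standard rank criterion, namely $v \le w$ if and only if $\#\{a \le i : v(a) \le j\} \ge \#\{a \le i : w(a) \le j\}$ for all $i,j$, and cosets modulo $W_J$ are represented by sorting the one-line notation within the blocks $[1,3]$, $[4,5]$, $[6,7]$), both intervals are reduced modulo $W_J$ and compared with the displayed set:
\begin{verbatim}
from itertools import permutations

def leq(v, w):                   # Bruhat order via the rank criterion
    n = len(v)
    return all(sum(v[a] <= j for a in range(i)) >=
               sum(w[a] <= j for a in range(i))
               for i in range(1, n + 1) for j in range(1, n + 1))

def rep(x):                      # sort within the blocks [1,3], [4,5], [6,7]
    return tuple(sorted(x[0:3])) + tuple(sorted(x[3:5])) + tuple(sorted(x[5:7]))

v,  w  = (5, 2, 1, 3, 6, 4, 7), (5, 2, 1, 4, 7, 6, 3)    # v and w
vp, wp = (1, 2, 5, 3, 6, 4, 7), (1, 2, 5, 4, 7, 3, 6)    # v' and w'
S7 = list(permutations(range(1, 8)))
I1 = {rep(x) for x in S7 if leq(v, x) and leq(x, w)}
I2 = {rep(x) for x in S7 if leq(vp, x) and leq(x, wp)}
assert I1 == I2 == {(1, 2, 5, 3, 6, 4, 7), (1, 2, 5, 4, 6, 3, 7),
                    (1, 2, 5, 3, 7, 4, 6), (1, 2, 5, 4, 7, 3, 6)}
\end{verbatim}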
\begin{lem}\label{reduction_to_grassmannian}
Let $J$ and $K$ be complementary subsets of $[n-1]$, and let $v_1, v_2\le w$, where $v_1, v_2\in W$ and $w\in W^J$. Suppose that the intervals $[v_1,w]$ and $[v_2,w]$ are equal modulo $W_J$. Then $v_1(i) = v_2(i)$ for all $i \le \min(K)$ and all $i \ge \max(K) + 1$.
\end{lem}
\begin{proof}
We prove the statement for $i \le \min(K)$; the statement for $i \ge \max(K) + 1$ follows by symmetry. Set $k := \min(K)$ and $J' := [n-1]\setminus\{k\}$, and note that $J \subseteq J'$. As in \cref{demazure_reduction}, we define $v_1' := v_1\triangleleft w_{J'}^{-1}$, $v_2' := v_2\triangleleft w_{J'}^{-1}$, and $w' := w^{J'}$, so that the intervals $[v_1,w]$, $[v_2,w]$, $[v_1',w']$, and $[v_2',w']$ are all equal modulo $W_{J'}$. By \cref{decompositions_grassmannian} (using \cref{decompositions_equality}), we obtain $v_1' = v_2'$. Now since $w\in W^J$, we have $w(1) < \cdots < w(k)$, so $w_{J'}$ is contained in the parabolic subgroup $W_{[k+1,n-1]}$. Hence $v_1(i) = v_1'(i) = v_2'(i) = v_2(i)$ for all $i \le k$.
\end{proof}
\begin{proof}[Proof of the reverse direction of \cref{decompositions}]
Suppose that $K = [k,l]$. Let $v_1 \le w_1$ and $v_2 \le w_2$, where $v_1,v_2\in W$ and $w_1,w_2\in W^J$. By \cref{decompositions_equality}, it suffices to show that if the intervals $[v_1,w_1]$ and $[v_2,w_2]$ are equal modulo $W_J$, then $v_1=v_2$ and $w_1=w_2$. To this end, note that by \cref{parabolic_order}, we have $v_1^J = v_2^J$ and $w_1 = w_2$. In particular, $v_1(i) = v_2(i)$ for all $i\in [k+1,l]$. Therefore it remains to show that $v_1(i) = v_2(i)$ for all $i \le k$ and all $i \ge l+1$. This follows from \cref{reduction_to_grassmannian}.
\end{proof}
\begin{eg}\label{eg_decompositions_reverse}
We show how the argument above can fail when $K$ is {\itshape not} an interval of integers. Take $n := 4$, $J := \{2\}$, $K := \{1,3\}$, and
$$
v_1 := 1234 = e, \quad v_2 := 1324 = s_2, \quad w := 4231 = s_1s_2s_3s_2s_1.
$$
By \cref{equal_cells} and \cref{decompositions_equality}, $\cell[K]{v_1}{w}$ and $\cell[K]{v_2}{w}$ are contained in the same matroid stratum. In agreement with \cref{reduction_to_grassmannian}, we have $v_1(i) = v_2(i)$ for all $i \le 1$ and all $i\ge 4$. Also, we have $v_1^J = v_2^J = e$, but this does not imply that $v_1 = v_2$.
\end{eg}
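The same brute-force verification applies here. The following Python sketch (coset representatives modulo $W_{\{2\}}$ obtained by sorting positions $2$ and $3$ of the one-line notation) confirms that $[v_1,w]$ and $[v_2,w]$ agree modulo $W_J$:
\begin{verbatim}
from itertools import permutations

def leq(v, w):                   # Bruhat order via the rank criterion
    n = len(v)
    return all(sum(v[a] <= j for a in range(i)) >=
               sum(w[a] <= j for a in range(i))
               for i in range(1, n + 1) for j in range(1, n + 1))

def rep(x):                      # coset representative modulo W_{{2}}
    return (x[0],) + tuple(sorted(x[1:3])) + (x[3],)

w = (4, 2, 3, 1)
S4 = list(permutations(range(1, 5)))
I1 = {rep(x) for x in S4 if leq(x, w)}                    # [e, w] mod W_J
I2 = {rep(x) for x in S4 if leq((1, 3, 2, 4), x) and leq(x, w)}
assert I1 == I2
\end{verbatim}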
\begin{rmk}\label{minkowski_sum}
Recall the Bruhat interval polytopes $\bip[K]{v}{w}$ discussed in \cref{decompositions_remark}. It follows from the theory of Coxeter matroids (namely, \cite[Corollary 1.13.5]{borovik_gelfand_white03}), along with a result of Tsukerman and Williams \cite[Corollary 7.14]{tsukerman_williams15} (cf.\ \cite[Preface]{borovik_gelfand_white03}, \cite[Theorem 6.3]{caselli_d'adderio_marietti21}), that every Bruhat interval polytope for $\PFl{K}{n}(\mathbb{R})$ can be expressed as the Minkowski sum over $k\in K$ of a Bruhat interval polytope for $\Gr_{k,n}(\mathbb{R})$. \cref{demazure_reduction} allows us to write this Minkowski sum explicitly. Namely, for $v\le w$ with $v\in W$ and $w\in W^J$, we have
\begin{align}\label{minkowski_sum_equation}
\bip[K]{v}{w} = \sum_{k\in K}\bip[\{k\}]{v\hspace*{1pt}\triangleleft\hspace*{1pt} w_{[n-1]\setminus \{k\}}^{-1}}{\hspace*{1pt}w^{[n-1]\setminus \{k\}}}.
\end{align}
We point out that the Bruhat interval polytopes for $\Gr_{k,n}(\mathbb{R})$ are known as {\itshape positroid polytopes} \cite{ardila_rincon_williams16}; see \cite[Proposition 2.8]{tsukerman_williams15} for how to formulate \eqref{minkowski_sum_equation} in terms of positroids.
As an illustration of \eqref{minkowski_sum_equation}, we adopt the setup of \cref{eg_decompositions_reverse}, with $v = v_1$. Then \eqref{minkowski_sum_equation} gives
$$
\bip[\{1,3\}]{e}{s_1s_2s_3s_2s_1} = \bip[\{1\}]{e\hspace*{1pt}\triangleleft\hspace*{1pt} (s_2s_3)^{-1}}{s_3s_2s_1} + \bip[\{3\}]{e\hspace*{1pt}\triangleleft\hspace*{1pt} (s_2s_1)^{-1}}{s_1s_2s_3},
$$
or equivalently,
$$
\bip[\{1,3\}]{1234}{4231} = \bip[\{1\}]{1234}{4123} + \bip[\{3\}]{1234}{2341}.
$$
Note that if we had instead taken $v = v_2$, we would have obtained the same Minkowski sum decomposition for $\bip[\{1,3\}]{1324}{4231}$. Indeed, \cref{equal_cells} implies that $\bip[\{1,3\}]{1234}{4231} = \bip[\{1,3\}]{1324}{4231}$.
\end{rmk}
\bibliographystyle{alpha}
\section{Introduction}
Deciding reachability between a pair of vertices in a graph is an important computational problem from the perspective of space bounded computations. It is well known that reachability in directed graphs characterizes the complexity class nondeterministic logspace (\NL). For undirected graphs the problem was known to be hard for the class deterministic logspace (\L), and in a breakthrough result Reingold showed that it is contained in {\L} as well \cite{Reingold08}. Several other restrictions of the reachability problem are known to characterize other variants of space bounded complexity classes \cite{Etessami97,Barrington89,BarringtonEtAl98}.
Unambiguous computations are a restriction of general nondeterministic computations where the Turing machine has at most one accepting computation path on every input. In the space bounded domain, unambiguous logspace (in short {\UL}) is the class of languages for which there is a nondeterministic logspace bounded machine that has a unique accepting path for every input in the language and no accepting paths otherwise. {\UL} was first formally defined and studied in \cite{BJLR91, AJ93}. In 2000 Reinhardt and Allender showed that the class {\NL} is contained in a non-uniform version of {\UL} \cite{RA00}. In a subsequent work it was shown that, under the hardness assumption that deterministic linear space has functions that cannot be computed by circuits of size $2^{\epsilon n}$, $\NL=\UL$ \cite{ARZ99}. Although it is widely believed that {\NL} and {\UL} are the same unconditionally and in a uniform setting, the question still remains open.
Savitch's Theorem states that reachability in directed graphs is in $\DSPACE (\log^2 n)$; however, the algorithm requires quasipolynomial time \cite{Sav70}. On the other hand, standard graph traversal algorithms such as DFS and BFS can decide reachability in polynomial time (in fact linear time) but require linear space. Wigderson asked whether reachability can be solved in $\mathcal{O} (n^{1-\epsilon})$ space and polynomial time simultaneously, for some $\epsilon >0$ \cite{Wig92}. Barnes et al.\ gave a partial answer to this question by giving a $\mathcal{O} (n/2^{\sqrt{\log n}})$ space and polynomial time algorithm for the problem \cite{BBRS92}. Although this bound has been improved for several subclasses such as planar graphs \cite{INPVW13}, layered planar graphs \cite{CT15b}, minor-free and bounded genus graphs \cite{CPTVY14}, for general directed graphs (and hence for the class {\NL}) we still do not have a better deterministic space upper bound simultaneously with polynomial time.
\subsection{Main Result}
In this paper we show that directed graph reachability can be decided by an unambiguous $\mathcal{O} (\log^2 n)$ space algorithm that simultaneously requires only polynomial time. Thus we get an improvement in the time required by Savitch's algorithm by sacrificing determinism. Formally, we show the following theorem.
\begin{theorem}
\label{thm:main}
$\NL \subseteq \pUSP(\log^2 n)$.
\end{theorem}
For the remainder of this paper all graphs that we consider are directed graphs unless stated otherwise.
\subsection{Min-uniqueness of Graphs}
An important ingredient of our proof is the {\em min-uniqueness} property of graphs. A graph $G$ is said to be min-unique with respect to an edge weight function $W$ if the minimum weight path between every pair of vertices in $G$ is unique with respect to $W$. This turns out to be an important property and has been studied in earlier papers \cite{Wig94, GW96, RA00}. In fact, the fundamental component of Reinhardt and Allender's paper is a {\UL} algorithm for testing whether a graph is min-unique and then deciding reachability in min-unique graphs in {\UL} \cite{RA00}. They achieve this by proposing a {\em double inductive counting} technique which is a clever adaptation of the inductive counting technique of Immerman and Szelepcs\'{e}nyi \cite{Imm88,Sze88}. As a result of Reinhardt and Allender's algorithm, in order to show that reachability in a class of graphs can be decided in {\UL}, one only needs to design an efficient algorithm which takes as input a graph from this class and outputs an $ \mathcal{O} (\log n) $ bit weight function with respect to which the graph is min-unique. This technique was successfully used to show a {\UL} upper bound on the reachability problem in several natural subclasses of general graphs such as planar graphs \cite{BTV07}, graphs with polynomially many paths from the start vertex to every other vertex \cite{PTV12}, bounded genus graphs \cite{DKTV11} and minor-free graphs \cite{AGGT16}. For the latter two classes of graphs reachability was shown to be in {\UL} earlier as well by giving reductions to planar graphs \cite{KV10, TW09}. Note that Reinhardt and Allender define min-uniqueness for unweighted graphs, where the minimum length path is unique, whereas we define it for weighted graphs, where the minimum weight path is unique. However, it can easily be seen that these two notions are equivalent.
\subsection{Overview of the Proof}
We prove Theorem \ref{thm:main} in two parts. We first show how to construct an $\mathcal{O}(\log^2 n)$ bit weight function $W$ with respect to which the input graph $G$ becomes min-unique. Our construction of the weight function $W$ uses an iterative process to assign weights to the edges of $G$. We start by considering a subgraph of $G$ having a fixed radius and construct an $\mathcal{O} (\log n)$ bit weight function with respect to which this subgraph becomes min-unique. For this we first observe that there are polynomially many paths in such a subgraph and then use the prime based hashing scheme of Fredman, Koml\'{o}s and Szemer\'{e}di \cite{FKS84} to give distinct weights to all such paths. Thereafter, in each successive round of the algorithm, we construct a new weight function with respect to which a subgraph of double the radius of the previous round becomes min-unique and the new weight function has an additional $\mathcal{O} (\log n)$ bits. Hence in $\mathcal{O} (\log n)$ many rounds we get a weight function which has $\mathcal{O} (\log^2 n)$ bits and with respect to which $G$ is min-unique. We show that this can be done by an unambiguous, polynomial time algorithm using $\mathcal{O} (\log^2 n)$ space. This technique is similar to the isolating weight construction in \cite{FGT16}, but their construction is in {\qNC}.
We then show that given a graph $G$ and an $\mathcal{O} (\log^2 n)$ bit weight function with respect to which $G$ is min-unique, reachability in $G$ can be decided by an unambiguous, polynomial time algorithm using $\mathcal{O} (\log^2 n)$ space. Note that a straightforward application of Reinhardt and Allender's algorithm will not give the desired bound. This is because ``unfolding'' a graph with $\mathcal{O} (\log^2 n)$ bit weights will result in a quasipolynomially large graph. As a result we will not achieve a polynomial time bound. We tackle this problem by first observing that although there are $2^{\mathcal{O} (\log ^2 n)}$ many different weight values, the weight of a shortest path can take only polynomially many distinct values. Using this observation we give a modified version of Reinhardt and Allender's algorithm that iterates over the ``good'' weight values and ignores the rest. This allows us to give a polynomial time bound.
The rest of the paper is organized as follows. In Section \ref{sec:prelim} we define the various notations and terminologies used in this paper. We also state prior results that we use in this paper. In Section \ref{sec:result} we give the proof of Theorem \ref{thm:main}.
\section{Preliminaries}
\label{sec:prelim}
For a positive integer $n$, let $[n] = \{1,2, \ldots , n\}$. Let $G = (V, E)$ be a directed graph on $n$ vertices and let $E=\{e_{1}, e_{2}, \ldots, e_{m}\}$ be the set of edges in $G$. Let $s$ and $t$ be two fixed vertices in $G$. We wish to decide whether there exists a path from $s$ to $t$ in $G$. The {\em length} of a path $P$ is the number of edges in $P$ and is denoted as $\mathrm{len} (P)$. The {\em center} of a path $P$ is a vertex $x$ in $P$ such that the length of the path from either end point of $P$ to $x$ is at most $\lceil \mathrm{len}(P)/2 \rceil$ and $ x $ is no farther from the tail of $ P $ than from the head of $ P $.
A {\em weight function} $w : E \rightarrow \mathbb{N}$ is a function which assigns a positive integer to every edge in $G$. The weight function $w$ is said to be {\em polynomially bounded} if there exists a constant $k$ such that $w(e) \leq \mathcal{O} (n^k)$ for every edge $e$ in $G$. We use $ G_{w} $ to denote the weighted graph $G$ with respect to a weight function $ w $. For a graph $ G_{w} $, the {\em weight of a path} $ P $ denoted by $ w(P) $ is defined as the sum of weights of the edges in the path. A {\em shortest path} from $ u $ to $ v $ in $ G_{w} $ is a path from $ u $ to $ v $ with minimum weight. Let $ \calp{w}{i}{u}{v} $ denote the set of shortest paths from $u$ to $v$ of length at most $i$ in $G_w$. Thus in particular, the set of shortest paths from $ u $ to $ v $ in $ G_{w} $, $ \calp{w}{}{u}{v} = \calp{w}{n}{u}{v} $.
We define the {\em distance} function with respect to a weight function and a nonnegative integer $i$ as
\[ \dist{w}{i}{u}{v} = \left\{ \begin{array}{ll}
w(P) & \textrm{ for }P \in \calp{w}{i}{u}{v}\\
\infty & \textrm{ if } \calp{w}{i}{u}{v} = \emptyset \end{array} \right. \]
Correspondingly we define the function $l$ which represents the minimum length of such paths as
\[ \lrad{w}{i}{u}{v} = \left\{ \begin{array}{ll}
\min_{P \in \calp{w}{i}{u}{v}} \{ \mathrm{len} (P)\} & \textrm{ if } \calp{w}{i}{u}{v} \ne \emptyset\\
\infty & \textrm{ otherwise} \end{array} \right. \]
A graph $ G_{w} $ is said to be {\em min-unique} for paths of length at most $ i $, if for any pair of vertices $u$ and $v$, the shortest path from $ u $ to $ v $ with length at most $i$, is unique. $ G_{w} $ is said to be min-unique if $ G_{w} $ is min-unique for paths of arbitrary length. Define the weight function
\[w_{0}(e_{i}) := 2^{i -1}, \textrm{ where } i \in [m].\]
It is straightforward to see that for any graph $ G $, $w_{0}$ is an $ m $ bit weight function and $ G_{w_{0}} $ is min-unique. Wherever it is clear from the context that there is only one weight function $w$, we will drop the subscript $ w $ in our notations.
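The min-uniqueness of $G_{w_0}$ is simply binary representation: the $w_0$-weight of a path is $\sum_{e_i \in P} 2^{i-1}$, which encodes the edge set of $P$, so distinct paths receive distinct weights. A minimal illustration in Python (the number of edges below is a made-up value):
\begin{verbatim}
# w0 assigns edge e_i the weight 2^(i-1), so the weight of a path is the
# binary encoding of its edge set; distinct edge sets get distinct weights.
from itertools import combinations

m = 6                                # hypothetical number of edges
w0 = {i: 1 << (i - 1) for i in range(1, m + 1)}
weights = {sum(w0[i] for i in S)
           for r in range(m + 1)
           for S in combinations(range(1, m + 1), r)}
assert len(weights) == 2 ** m        # all 2^m subsets are distinguished
\end{verbatim}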
For a graph $ G_{w} $, vertex $ u $ in $G$, length $ i $ and weight value $ k $, we define the quantities $ c_{k}^{i}(u)$ and $D_{k}^{i}(u)$ as the number of vertices at distance at most $k$ from $u$ using paths of length at most $i$, and the sum of the distances to all such vertices, respectively. Formally,
\begin{align*}
c_{k}^{i}(u) &= | \{v \mid \dist{w}{i}{u}{v} \leq k\} | \\
D_{k}^{i}(u) &= \sum_{v \mid \dist{w}{i}{u}{v} \leq k}^{} \dist{w}{i}{u}{v}.
\end{align*}
An {\em unambiguous Turing machine} is a nondeterministic Turing machine that has at most one accepting computation path on every input \cite{VAL76}. We shall consider unambiguous computations in the context of space bounded computations. $\USPACE(s(n))$ denotes the class of languages decided by an unambiguous machine using $ \mathcal{O}(s(n)) $ space. In particular, $\UL = \USPACE(\log n)$. $\TIUSP(t(n), s(n))$ denotes the class of languages decided by an unambiguous machine using $ \mathcal{O}(s(n)) $ space and $ \mathcal{O}(t(n)) $ time simultaneously. In particular, when $t(n)$ is a polynomial, we define
\[\pUSP(s(n))=\bigcup_{k \geq 0} \TIUSP(n^k,s(n)).\]
For graphs having polynomially many paths, we use the well-known hashing technique due to Fredman, Koml\'{o}s and Szemer\'{e}di \cite{FKS84} to compute a weight function that assigns distinct weights to all such paths. We state the result below in a form that will be useful for our purpose.
\begin{theorem} \label{thm:hashing}
\cite{FKS84, PTV12} For every constant $ c $ there is a constant $ c' $ so that for every set $ S $ of
$ n $ bit integers with $ |S| \leq n^{c} $ there is a $ c' \log n$ bit prime number $ p $ so that for all $ x \neq y \in S, \ x \not\equiv y \bmod{p} $.
\end{theorem}
Henceforth we will refer to Theorem \ref{thm:hashing} as the FKS hashing lemma.
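To make the lemma concrete, the following Python sketch (illustrative only) finds an isolating prime for a small, made-up set of integers by scanning primes in increasing order; the lemma guarantees that an $\mathcal{O}(\log n)$ bit prime suffices, so for polynomially large $S$ the scan terminates after polynomially many primes.
\begin{verbatim}
# Toy search for an FKS-style isolating prime.
from sympy import nextprime

def fks_prime(S):
    p = 2
    while len({x % p for x in S}) < len(S):
        p = nextprime(p)
    return p

S = [12, 77, 205, 1030, 4096]        # hypothetical "path weights"
p = fks_prime(S)
assert len({x % p for x in S}) == len(S)
print("isolating prime:", p)
\end{verbatim}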
\section{Min-unique Weight Assignment}
\label{sec:result}
Reinhardt and Allender \cite{RA00} showed that for every $n$ there is a sequence of $n^2$ $\mathcal{O} (\log n)$ bit weight functions such that every graph $G$ on $n$ vertices is min-unique with respect to at least one of them. For each weight function they construct an unweighted graph (say $G_w$) by replacing every edge with a path of length equal to the weight of that edge. Since the weights are $\mathcal{O} (\log n)$ bit values, $G_w$ is polynomially large in $n$. Next they show that using the double inductive counting technique one can check unambiguously using a logspace algorithm if $G_w$ is min-unique, and if so then check if there is a path from $s$ to $t$ as well. They iterate over all weight functions until they obtain one with respect to which $G_w$ is min-unique and use the corresponding graph $G_w$ to check reachability. Since we use an $\mathcal{O} (\log^2 n)$ bit weight function with respect to which the input graph is min-unique, we cannot construct an unweighted graph by replacing every edge with a directed path of length equal to the corresponding edge weight.
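To make this obstruction quantitative: replacing an edge of weight $w(e)$ by a path of $w(e)$ unit edges turns a graph with $m$ edges and $c\log^2 n$ bit weights into a graph with as many as
\begin{equation*}
m \cdot 2^{c\log^2 n} \,=\, m \cdot n^{c \log n}
\end{equation*}
edges, which is quasipolynomial rather than polynomial in $n$.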
In Section \ref{sec:wt} we give an algorithm that computes an $\mathcal{O} (\log^2 n)$ bit, min-unique weight function and decides reachability in directed graphs. Although we use $\omega (\log n)$ bit weight functions, the algorithm still runs in polynomial time. In Section \ref{sec:check} we show how to check whether a graph is min-unique. In Section \ref{sec:dist} we show how to compute the $\dist{w}{i}{u}{v}$ function unambiguously.
\subsection{Construction of the weight function}
\label{sec:wt}
Theorem \ref{thm:wt} shows how to construct the desired weight function.
\begin{theorem}
\label{thm:wt}
There is a nondeterministic algorithm that takes as input a directed graph $G$ and outputs along a unique computation path, an $\mathcal{O} (\log^{2} n) $ bit weight function $ W $ such that $G_W$ is min-unique, while all other computation paths halt and reject. For any two vertices $ s $ and $ t $ the algorithm also checks whether there is a path from $ s $ to $ t $ in G. The algorithm uses $\mathcal{O}(\log^2 n)$ space and runs in polynomial time.
\end{theorem}
Since directed graph reachability is complete for {\NL}, Theorem \ref{thm:main} follows from Theorem \ref{thm:wt}.
\begin{algorithm}[h]
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwFor{For}{for}{do}{endfor}
\SetKwFor{ForEach}{for each}{do}{endfor}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\KwIn{($G, s, t)$ }
\KwOut{weight function $W:=W_q$, $\mathsf{true}$ if there is a path from $s$ to $t$ and $\mathsf{false}$ otherwise}
\Begin{
$ q := \log n $; $ W_{0} := 0 $ \;
\For{$j \leftarrow 1$ \KwTo $ q $}{
$ i := 2^{j} $; $ p := 2 $ \;
\Repeat{$ (G, W_{j}, i) $ is min-unique}
{
\tcc{ By the FKS hashing lemma $p$ is bounded by a polynomial in $n$, say $n^{c'}$. We define $B := n^{c'+2}$. }
$W_{j} := B \cdot W_{j-1} + (w_{0} \bmod p) $ \;
Check whether $ (G, W_{j}, i) $ is min-unique using Algorithm \ref{algo:minunique} \;
$ p := $ next prime \;
}
}
\lIf{$ \dist{W_{q}}{n}{s}{t} \leq B^{q}$}{\Return ($W_q, \mathsf{true}$)}
\lElse{\Return ($W_q$, $\mathsf{false}$)}
}
\caption{Computes a min-unique weight function and checks for an $s-t$ path in $G$}
\label{algo:final}
\end{algorithm}
\begin{proof}[Proof of Theorem \ref{thm:wt}]
To prove Theorem \ref{thm:wt} we design an algorithm that outputs the desired weight function. The formal description of the construction is given in Algorithm \ref{algo:final}. The algorithm works in an iterative manner for $\log n$ rounds. Initially we consider all paths in $G$ of length at most $l$, where $l=2^1$. The number of such paths is at most $(l+1) \cdot n^{l+1}$, which is polynomial in $n$, and therefore by the FKS hashing lemma there exists a $ c' \log n $ bit prime $ p_{1} $ such that with respect to the weight function $ W_{1} := w_{0} \bmod p_{1} $, $ G_{W_{1}} $ is min-unique for paths of length at most $ l $. To find the right prime $ p_{1} $ we iterate over all $c' \log n$ bit primes and use Lemma \ref{lem:minunique} to check whether $ G_{W_{1}} $ is min-unique for paths of length at most $ l $.
The correctness of the construction follows by induction on the round number $j$. Assume that $G_{W_{j-1}}$ is min-unique for paths of length at most $2^{j-1}$. In the $j$-th round, the algorithm considers all paths of length at most $2^j$. By applying Lemma \ref{lem:induction} we get a weight function $W_j$ from $W_{j-1}$ which uses $\mathcal{O} (j \cdot \log n)$ bits and such that $G_{W_j}$ is min-unique for paths of length at most $2^j$. Hence in $\log n$ many rounds we get a weight function $W:= W_{\log n}$ such that $G_W$ is min-unique. Note that the inner {\bf repeat-until} loop runs for at most $n^{c'}$ iterations due to the FKS hashing lemma.
Let $p_j$ be the prime used in the $j$-th round of Algorithm \ref{algo:final}. Define $p' := \max \{p_j \mid j \in [\log n]\}$. By the FKS hashing lemma $p'$ is bounded by a polynomial in $n$, say $n^{c'}$. We set $B := n^{c'+2}$. This implies that for any weight function of the form $w = w_0 \bmod p_j$ and any path $P$ in $G$, $w(P) < B$. Observe that with respect to the final weight function $W$, for any path $P$ in $G$, $W(P) < B^q$.
Once we compute an $ \mathcal{O} (\log^{2} n) $ bit weight function $ W $ such that $ G_{W} $ is min-unique, there exists a path from $ s $ to $ t $ if and only if $ \dist{W}{n}{s}{t} \leq B^{q} $. This can be checked using Algorithm \ref{algo:dist} in $ \mathcal{O}(\log^{2} n) $ space and polynomial time. Also Algorithm \ref{algo:dist} is a nondeterministic algorithm which returns $ \mathsf{true} $ or $ \mathsf{false} $ along a unique computation path while all other computation paths halt and reject.
In each round the size of $ W_{j} $ increases by $ \mathcal{O}(\log n) $ bits and after $ \log n $ rounds $ W_{\log n} $ is an $\mathcal{O}(\log^{2} n) $ bit weight function. By Lemma \ref{lem:minunique} checking whether a graph is min-unique with respect to an $\mathcal{O}(\log ^{2} n) $ bit weight function requires $ \mathcal{O}(\log ^{2} n) $ space. Thus the total space complexity of Algorithm \ref{algo:final} is $ \mathcal{O}(\log ^{2} n) $.
The FKS hashing lemma guarantees that in each round only a polynomial number of primes need to be tested to find a weight function which is min-unique for paths of length at most $ 2^{j} $. By Lemma \ref{lem:minunique} checking whether a graph is min-unique for paths of length at most $ 2^{j} $ can be done in polynomial time. Thus each round runs in polynomial time. There are only $ \log n $ many rounds and hence Algorithm \ref{algo:final} runs in polynomial time.
By Lemma \ref{lem:minunique}, Algorithm \ref{algo:minunique} is a nondeterministic algorithm which outputs its answer along a unique computation path, while all other computation paths halt and reject. All other steps in Algorithm \ref{algo:final} are deterministic. This shows the unambiguity requirement of the theorem.
\end{proof}
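For intuition, and not as part of the formal proof, the round structure of Algorithm \ref{algo:final} can be simulated deterministically on small instances by replacing the unambiguous min-uniqueness test of Lemma \ref{lem:minunique} with brute-force path enumeration. The Python sketch below is illustrative only: it is exponential time in general, the graph is a made-up example, and the constant $B$ is an arbitrary stand-in for the paper's $n^{c'+2}$.
\begin{verbatim}
# Deterministic toy simulation of the weight construction in Algorithm 1.
from math import ceil, log2
from sympy import nextprime

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (1, 4)]   # e_1 .. e_m
n = 5
adj = {v: [] for v in range(n)}
for idx, (x, y) in enumerate(edges, start=1):
    adj[x].append((y, idx))

def simple_paths(u, max_len):
    """All simple paths from u, as tuples of edge indices, <= max_len edges."""
    out = []
    def dfs(v, seen, path):
        out.append(tuple(path))
        if len(path) < max_len:
            for (nxt, idx) in adj[v]:
                if nxt not in seen:
                    seen.add(nxt); path.append(idx)
                    dfs(nxt, seen, path)
                    path.pop(); seen.remove(nxt)
    dfs(u, {u}, [])
    return out

def min_unique(W, max_len):
    """Is the min-W-weight path of length <= max_len unique for every pair?"""
    for u in range(n):
        best = {}                       # endpoint -> (weight, count)
        for P in simple_paths(u, max_len):
            if not P:
                continue
            v = edges[P[-1] - 1][1]
            wt = sum(W[i] for i in P)
            d, cnt = best.get(v, (None, 0))
            if d is None or wt < d:
                best[v] = (wt, 1)
            elif wt == d:
                best[v] = (wt, cnt + 1)
        if any(cnt > 1 for (_, cnt) in best.values()):
            return False
    return True

B = 10 ** 6                             # stand-in for the paper's n^(c'+2)
W = {i: 0 for i in range(1, len(edges) + 1)}
for j in range(1, ceil(log2(n)) + 1):
    p = 3                               # odd primes keep edge weights positive
    while True:
        Wj = {i: B * W[i] + pow(2, i - 1, p) for i in W}
        if min_unique(Wj, 2 ** j):
            break
        p = nextprime(p)
    W = Wj
    print("round", j, ": prime", p, "isolates paths of length <=", 2 ** j)
\end{verbatim}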
\begin{lemma}
\label{lem:induction}
There is a nondeterministic algorithm $\mathcal{A}$ that takes as input $ (G, w) $, where $G$ is a graph on $n$ vertices and $w$ is a $k$ bit weight function such that $G_{w}$ is min-unique for paths of length at most $l$. $\mathcal{A}$ outputs a $(k+ \mathcal{O} (\log n))$ bit weight function $w'$ such that $G_{w'}$ is min-unique for paths of length at most $2l$, along a unique computation path while all other computation paths halt and reject. $\mathcal{A}$ uses $\mathcal{O}(k + \log n)$ space and runs in polynomial time.
\end{lemma}
The encoding of the output weight function $ w' $ is the concatenation of the $ k $ bit representation of the input weight function $ w $ and an $ \mathcal{O}(\log n) $ bit prime number $ p $. The output weight function $ w' $ is calculated as $ w' := B \cdot w + w_{0} \bmod p $, where $ B $ is the number defined in Algorithm $ \ref{algo:final} $. Multiplication by $ B $ just left-shifts $ w $ to make room for the new function $w_{0} \bmod p$.
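The precedence of $w$ over $\widehat{w}$ in $w'$ is just base-$B$ arithmetic: as long as $B$ strictly exceeds every possible $\widehat{w}$-weight of a path, comparing $w'$-values is the same as comparing the pairs $(w(P), \widehat{w}(P))$ lexicographically. A minimal check with made-up numbers:
\begin{verbatim}
# w' = B*w + w_hat orders paths lexicographically by (w, w_hat),
# provided B exceeds every possible w_hat-weight of a path.
B = 10 ** 4
pairs = [(3, 17), (3, 18), (4, 2), (2, 9999)]    # hypothetical (w, w_hat)
wprime = [B * w + wh for (w, wh) in pairs]
assert sorted(range(4), key=lambda i: wprime[i]) \
       == sorted(range(4), key=lambda i: pairs[i])
\end{verbatim}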
Lemma \ref{lem:induction} proves the correctness of each iteration of the outer {\bf for} loop of Algorithm \ref{algo:final}. Before proving the lemma, we will show that if $G_w$ is min-unique for paths of length at most $l$, then the number of minimum weight paths with respect to $w$ of length at most $2l$ is bounded by a polynomial independent of $l$. Hence it allows us to use the FKS hashing lemma to isolate such paths.
\begin{lemma} \label{lem:bound}
Let $G$ be a graph with $n$ vertices and $w$ be a weight function such that the graph $ G_{w} $ is min-unique for paths of length at most $l$. Then for any pair of vertices $ u $ and $ v $, $ \left | \calp{w}{2l}{u}{v} \right | $ is at most $n$.
\end{lemma}
\begin{proof}
Let $P$ be a shortest path from $u$ to $v$ in $ G_{w} $ of length at most $2l$, with center vertex $x$; that is, $ P \in \calp{w}{2l}{u}{v} $. Let $P_{1}$ and $P_{2}$ be the subpaths from $u$ to $x$ and from $x$ to $v$, respectively. Since $x$ is the center of $P$, $P_{1}$ has length at most $l$. Note that $P_{1}$ is the unique shortest path of length at most $l$ from $u$ to $x$ in $ G_{w} $. This is because if there exists another path of length at most $l$ with a smaller weight than $ P_{1} $ from $u$ to $x$ then replacing $ P_{1} $ with this path in $P$ will result in a path of length at most $2l$ from $u$ to $v$ with a lower weight than $P$. But this cannot happen since $P$ is a shortest path from $u$ to $v$. Uniqueness then follows since $G_{w}$ is min-unique for paths of length at most $l$.
\begin{claim}
There is only one shortest path of length at most $2l$ from $u$ to $v$ with $x$ as its center.
\end{claim}
\begin{proof}
Assume there is another shortest path $P'$ of length at most $ 2l $ from $u$ to $v$ with $x$ as its center. Let $P_1'$ be the subpath of $P'$ from $u$ to $x$. Since $x$ is the center of $P'$, $ P'_{1} $ is of length at most $l$. Similar to $ P_{1} $, $P'_{1}$ is a shortest path of length at most $l$ from $u$ to $x$. This means there are two shortest paths of length at most $l$ from $u$ to $x$. This is a contradiction since $G_{w}$ is min-unique for paths of length at most $l$.
\end{proof}
Therefore each vertex can be the center of at most one path of length at most $ 2l $ from $ u $ to $ v $. Thus the total number of shortest paths of length at most $ 2l $ from $ u $ to $ v $ in $ G_{w} $ is at most $n$. Hence $ \left | \calp{w}{2l}{u}{v} \right | \leq n$. This completes the proof of Lemma \ref{lem:bound}.
\end{proof}
When we sum over all possible pairs of $ u $ and $ v $, the total number of shortest paths of length at most $2l$ in $G_{w}$ is at most $n^{3} $.
\begin{proof}[Proof of Lemma \ref{lem:induction}]
$G_{w}$ is min-unique for paths of length at most $l$. Therefore by Lemma \ref{lem:bound} the number of shortest paths between all pairs of vertices with at most $2l$ edges in $G$ is at most $ n^{3} $. Let $\mathcal{S}$ be the set of these at most $n^{3}$ shortest paths. With respect to the weight function $w_{0}$ (see Section \ref{sec:prelim}) each element of $\mathcal{S}$ gets a distinct weight. So by using the FKS hashing lemma we get a constant $c'$ and a $ c' \log n $ bit prime number $ p $ such that with respect to the weight function $ \widehat{w} := w_{0} \bmod p $, each element of $\mathcal{S}$ gets a distinct weight. Moreover, between any pair of vertices of $G$, the minimum $\widehat{w}$-weight path in $\mathcal{S}$ is unique.
Let $B$ be the number as defined in Algorithm \ref{algo:final}. Now consider the weight function $w' := B \cdot w + \widehat{w}$. Since $w$ is a $k$ bit weight function and $\widehat{w}$ is an $\mathcal{O} (\log n)$ bit weight function, $w'$ is a $(k+ \mathcal{O} (\log n))$ bit weight function. Clearly $ w $ has higher precedence than $ \widehat{w} $ in $ w' $. So for any two paths $ P_{1} $ and $ P_{2} $ in $ G $, we have if $ w'(P_{1}) < w'(P_{2})$ then either $ w(P_{1}) < w(P_{2})$ or both the predicates $w(P_{1}) = w(P_{2})$ and $\widehat{w}(P_{1}) < \widehat{w}(P_{2})$ are true. Additionally if $ w'(P_{1}) = w'(P_{2})$ then $w(P_{1}) = w(P_{2})$ and $\widehat{w}(P_{1}) = \widehat{w}(P_{2})$.
All the unique shortest paths of length at most $ 2l $ in $ G_{w} $ remain unique shortest paths of length at most $ 2l $ in $ G_{w'} $. If there are multiple shortest paths of length at most $ 2l $ from $ u $ to $ v $ in $ G_{w} $, $\widehat{w}$ gives a unique weight to each of these paths. So $ G_{w'} $ is min-unique for paths of length at most $ 2l $.
We can check whether a graph $ G_{w'} $ is min-unique for paths of length at most $ 2l $ using Lemma \ref{lem:minunique}. Since $ p $ is a $ c' \log n $ bit prime number, we can iterate over all the $ c' \log n $ bit primes and find $p$.
\end{proof}
\subsection{Checking for min-uniqueness}
\label{sec:check}
The next lemma shows how to check whether $G_w$ is min-unique for paths of length at most $l$ in an unambiguous manner.
\begin{algorithm}[h]
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwFor{For}{for}{do}{endfor}
\SetKwFor{ForEach}{for each}{do}{endfor}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\KwIn{($G, w, i $)}
\KwOut{$\mathsf{true}$ if $G_w$ is not min-unique for paths of length at most $i$ and $\mathsf{false}$ otherwise}
\Begin{
$\mathsf{BAD.WEIGHT} := \mathsf{false}$ \;
\tcc{$\mathsf{BAD.WEIGHT}$ is set to $\mathsf{true}$ whenever the weight function does not make the graph min-unique. Otherwise it remains $\mathsf{false}$. It is a boolean variable shared between Algorithms \ref{algo:induct} and \ref{algo:minunique} }
\ForEach{vertex $v$} {
$c_{0}^{i}(v) := 1;$ $D_{0}^{i}(v) := 0;$ $k' := 0$ \;
\Repeat { $\mathsf{BAD.WEIGHT} = \mathsf{true}$ } {
$k := k';$ $c_{k}^{i}(v) := c_{k'}^{i}(v);$ $D_{k}^{i}(v) := D_{k'}^{i}(v)$ \;
Find next $k'$ from ($G, w, v, i, k, c_{k}^{i}(v), D_{k}^{i}(v)$) using Algorithm \ref{algo:nextk} \;
\lIf{$ k' = \infty $} {
break
}
Compute ($c_{k'}^{i}(v), D_{k'}^{i}(v)$) from ($G, w, v, i, k, c_{k}^{i}(v), D_{k}^{i}(v), k'$) using Algorithm \ref{algo:induct} \;
}
\lIf {$\mathsf{BAD.WEIGHT} = \mathsf{true}$} {
break
}
}
\Return $\mathsf{BAD.WEIGHT}$ \;
}
\caption{Check whether $G$ is min-unique for paths of length at most $i$}
\label{algo:minunique}
\end{algorithm}
\begin{lemma} \label{lem:minunique}
There is a nondeterministic algorithm that takes as input a directed graph $G$, a $ k $ bit weight function $ w $ and a length $ i $ and outputs along a unique computation path whether or not the graph $ G_{w} $ is min-unique for paths of length at most $ i $, while all other computation paths halt and reject. The algorithm uses $\mathcal{O}(k + \log n)$ space and runs in polynomial time.
\end{lemma}
For every vertex $v$ in $G_{w}$ we check whether there are two minimum weight paths of length at most $i$ from $v$ to some other vertex of $G$. Algorithm \ref{algo:minunique} gives a formal description of this process. The algorithm iterates over all shortest path weight values that can be achieved by some path of length at most $i$.
In the $k$-th stage of the algorithm it considers a ball of radius $k$ consisting of vertices which have a shortest path of weight at most $k$ from $v$ and length at most $i$. $c_{k}^{i}(v)$ denotes the number of vertices in this ball and $D_{k}^{i}(v)$ denotes the sum of the weights of the shortest paths to all such vertices. Initially $k = 0$, $c_{0}^{i}(v) =1$ (consisting of only the vertex $v$) and $D_{0}^{i}(v)=0$.
A direct implementation of the double inductive counting technique of Reinhardt and Allender \cite{RA00} does not work since this would imply that we cycle over all possible weight values, which we cannot afford. We bypass this hurdle by considering only the relevant weight values. We compute the immediate next shortest path weight value $k'$, and use $k'$ as the weight value for the next stage of the algorithm. This computation is implemented in Algorithm \ref{algo:nextk}. Lemma \ref{lem:nextk} proves the correctness of this process. Note that the number of shortest path weight values from a fixed vertex is bounded by the number of vertices in the graph. This ensures that the number of iterations of the inner {\bf repeat-until} loop of Algorithm \ref{algo:minunique} is bounded by $n$.
\begin{algorithm}[h]
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwFor{For}{for}{do}{endfor}
\SetKwFor{ForEach}{for each}{do}{endfor}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\KwIn{($G, w, u, i, k, c_{k}^{i}(u), D_{k}^{i}(u)$)}
\KwOut{ $ k' := \min \{ \dist{w}{i}{u}{v} \mid \dist{w}{i}{u}{v} > k, \ v \in V \} $ }
\Begin{
$k' := \infty$ \;
\ForEach{vertex $v$} {
\If {$\neg (\dist{w}{i}{u}{v} \leq k)$} {
$\mindist{w}{i}{u}{v} := \infty$ \;
\ForEach {$x$ such that $(x, v)$ is an edge} {
\If {$\dist{w}{i}{u}{x} \leq k$ and $ \lrad{w}{i}{u}{x} + 1 \leq i $} {
\If {$\mindist{w}{i}{u}{v} > \dist{w}{i}{u}{x} + w(x,v)$} {
$\mindist{w}{i}{u}{v} := \dist{w}{i}{u}{x} + w(x,v)$ \;
}
}
}
\lIf {$k' > \mindist{w}{i}{u}{v}$} {
$k' := \mindist{w}{i}{u}{v}$
}
}
}
\Return $k'$ \;
}
\caption{Find the next smallest weight value $k' > k$ among all paths of length at most $ i $ from $ u $}
\label{algo:nextk}
\end{algorithm}
\begin{lemma}
\label{lem:nextk}
Given $(G, w, u, i, k, c_{k}^{i}(u), D_{k}^{i}(u))$, Algorithm \ref{algo:nextk} correctly computes the value $\min \{ \dist{w}{i}{u}{v} \mid \dist{w}{i}{u}{v} > k, \ v \in V \} $.
\end{lemma}
To see the correctness of Lemma \ref{lem:nextk} observe that for every vertex $v$ such that $\dist{w}{i}{u}{v} > k$, the algorithm cycles through all vertices $x$ such that there is an edge from $x$ to $v$ and the length of the path from $u$ to $x$ is at most $i-1$. It computes the minimum weight of such a path and stores it in the variable $\mindist{w}{i}{u}{v}$. It then computes the minimum value of $\mindist{w}{i}{u}{v}$ over all possible vertices $v$ and outputs it as $k'$, as required.
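A deterministic reference implementation may clarify what value Algorithm \ref{algo:nextk} computes. The Python sketch below (illustrative only; the graph and weights are made up) obtains the same $k'$ directly from a full length-bounded distance table, computed by a Bellman--Ford style recurrence, rather than by the single-step relaxation that the space-bounded algorithm is forced to use.
\begin{verbatim}
# Reference computation of k' = min{ d^i(u,v) : d^i(u,v) > k }.
INF = float("inf")

def dist_upto(adj, u, i):
    """d[v] = min weight of a walk u -> v with at most i edges."""
    d = {v: INF for v in adj}
    d[u] = 0
    for _ in range(i):
        nd = dict(d)
        for x in adj:
            for (y, w) in adj[x]:
                if d[x] + w < nd[y]:
                    nd[y] = d[x] + w
        d = nd
    return d

def next_k(adj, u, i, k):
    d = dist_upto(adj, u, i)
    bigger = [d[v] for v in adj if k < d[v] < INF]
    return min(bigger) if bigger else INF

adj = {0: [(1, 5), (2, 2)], 1: [(3, 1)], 2: [(3, 7)], 3: []}
k = 0
while True:
    k = next_k(adj, 0, len(adj) - 1, k)
    if k == INF:
        break
    print("next shortest-path weight value:", k)   # prints 2, 5, 6
\end{verbatim}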
After we get the appropriate weight value $k'$, we then compute the values of $c_{k'}^{i}(v)$ and $D_{k'}^{i}(v)$ by using a technique similar to that of Reinhardt and Allender (implemented in Algorithm \ref{algo:induct}). Additionally we also maintain a shared flag value $\mathsf{BAD.WEIGHT}$ between Algorithms \ref{algo:minunique} and \ref{algo:induct}, which is set to $\mathsf{true}$ if $G_w$ is not min-unique for paths of length at most $i$, else it is $\mathsf{false}$.
\begin{algorithm}[h]
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwFor{For}{for}{do}{endfor}
\SetKwFor{ForEach}{for each}{do}{endfor}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\KwIn{($G, w, u, i, k, c_{k}^{i}(u), D_{k}^{i}(u), k'$)}
\KwOut{($c_{k'}^{i}(u), D_{k'}^{i}(u)$) and also flag $\mathsf{BAD.WEIGHT}$ }
\Begin{
$c_{k'}^{i}(u) := c_{k}^{i}(u)$; $D_{k'}^{i}(u) := D_{k}^{i}(u)$ \;
\ForEach{vertex $v$} {
\If {$\neg (\dist{w}{i}{u}{v} \leq k)$} {
\ForEach {$x$ such that $(x, v)$ is an edge} {
\If {$\dist{w}{i}{u}{x} \leq k$ and $\dist{w}{i}{u}{x} + w(x,v) = k'$ and $ \lrad{w}{i}{u}{x} + 1 \leq i $} {
$c_{k'}^{i}(u):= c_{k'}^{i}(u) + 1$; $D_{k'}^{i}(u) := D_{k'}^{i}(u)+k'$ \;
\ForEach {$x' \neq x$ such that ($x', v$) is an edge} {
\If { $\dist{w}{i}{u}{x'} \leq k$ and $\dist{w}{i}{u}{x'} + w(x',v) = k'$ and $ \lrad{w}{i}{u}{x'} + 1 \leq i $} {
$\mathsf{BAD.WEIGHT} := \mathsf{true}$ \;
}
}
}
}
}
}
\Return ($c_{k'}^{i}(u), D_{k'}^{i}(u)$)
}
\caption{Compute $c_{k'}^{i}(u)$ and $D_{k'}^{i}(u)$ and check whether $ G_{w} $ is min-unique for paths with length at most $ i $ and weight at most $ k' $ from $ u $}
\label{algo:induct}
\end{algorithm}
\subsection{Computing the $\dist{w}{i}{u}{v}$ function}
\label{sec:dist}
In Algorithms \ref{algo:nextk} and \ref{algo:induct}, an important step is to check whether $\dist{w}{i}{u}{v} \leq k$ and, if so, to get the values of $\dist{w}{i}{u}{v}$ and $\lrad{w}{i}{u}{v}$. These values are obtained from Algorithm \ref{algo:dist}. Algorithm \ref{algo:dist} describes a nondeterministic procedure that takes as input a weighted graph $G_w$, which is min-unique for paths of length at most $i$ and weight at most $ k $ from a source vertex $ u $, and the values $c_k^i (u)$ and $D_k^i (u)$. For any vertex $ v $, if $\dist{w}{i}{u}{v} \leq k$ then it outputs $\mathsf{true}$ and the values of $\dist{w}{i}{u}{v}$ and $\lrad{w}{i}{u}{v}$ along a unique computation path. Otherwise it outputs $\mathsf{false}$ along a unique computation path with $\infty$ as the values of $\dist{w}{i}{u}{v}$ and $\lrad{w}{i}{u}{v}$. All other computation paths halt and reject. As a result we can compute the predicate $\neg (\dist{w}{i}{u}{v} \leq k)$ along a unique path as well.
\begin{algorithm}[h]
\SetAlgoNoLine
\DontPrintSemicolon
\SetKwFor{For}{for}{do}{endfor}
\SetKwFor{ForEach}{for each}{do}{endfor}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\KwIn{($G, w, u, i, k, c_{k}^{i}(u), D_{k}^{i}(u), v$)}
\KwOut{ ($\mathsf{true}$ or $\mathsf{false}$), $\dist{w}{i}{u}{v}$, $ \lrad{w}{i}{u}{v} $ }
\Begin{
$count := 0$; $sum := 0$; $path.to.v := \mathsf{false}$ \;
$\dist{w}{i}{u}{v} := \infty$; $ \lrad{w}{i}{u}{v} := \infty $\;
\ForEach{$x \in V$}{
Guess nondeterministically if $\dist{w}{i}{u}{x} \leq k$ in $ G_{w} $\; \label{alg_line:guess}
\If {the guess is $\dist{w}{i}{u}{x} \leq k$ } {
Guess a path of weight $d \leq k$ and length $l \leq i$ from $u$ to $x$ \; \label{alg_line:guesspath}
(If this fails then halt and reject) \;
$count := count + 1$; $sum := sum + d$ \;
\If { $x = v$} {
$path.to.v := \mathsf{true}$ \;
$ \dist{w}{i}{u}{v} := d $ \;
$ \lrad{w}{i}{u}{v} := l $ \;
}
}
}\label{alg_line:for}
\eIf{$count = c_{k}^{i}(u)$ and $sum = D_{k}^{i}(u)$} {\label{alg_line:if}
\Return ($path.to.v$, $\dist{w}{i}{u}{v}$, $ \lrad{w}{i}{u}{v} $) \;
}{
halt and reject \;
}
}
\caption{An unambiguous routine to determine if $\dist{w}{i}{u}{v} \leq k$ and find $\dist{w}{i}{u}{v}$ and $ \lrad{w}{i}{u}{v} $}
\label{algo:dist}
\end{algorithm}
Note that Algorithm \ref{algo:dist} is the only algorithm where we use non-determinism. The algorithm is similar to the unambiguous subroutine of Reinhardt and Allender \cite{RA00}, with the only difference being that here we consider the weight of a path instead of its length. The algorithm assumes that the subgraph induced by all the paths of length at most $ i $ and weight at most $ k $ from $ u $ is min-unique.
In Line \ref{alg_line:guess} of Algorithm \ref{algo:dist}, for each vertex $ x $ the routine nondeterministically guesses whether $ \dist{w}{i}{u}{x} \leq k $ and if the guess is `$ \mathsf{true} $', it then guesses a path of weight at most $ k $ and length at most $ i $ from $ u $ to $ x $. If the algorithm incorrectly guesses for some vertex $ x $ that $ \dist{w}{i}{u}{x} > k $, then the variable $ count $ will never reach $ c_{k}^{i}(u) $ and the routine will reject. If it guesses incorrectly that $ \dist{w}{i}{u}{x} \leq k $ it will fail to guess a correct path in Line \ref{alg_line:guesspath} and again reject that computation. Thus the only computation paths that exit the {\bf for} loop in Line \ref{alg_line:for} and satisfy the first condition of the {\bf if} statement in Line \ref{alg_line:if} are the ones that correctly guess exactly the set $ \{x \mid \dist{w}{i}{u}{x} \leq k\} $. If the algorithm ever guesses incorrectly the weight $ d $ of the shortest path to $ x $, then if $ \dist{w}{i}{u}{x} > d $ no path of weight $ d $ will be found, and if $ \dist{w}{i}{u}{x} < d $ then the variable $ sum $ will be incremented by a value greater than $ \dist{w}{i}{u}{x} $. In the latter case, at the end of the algorithm, $ sum $ will be greater than $ D_{k}^{i}(u) $, and the routine will reject.
Since $ G_{w} $ is min-unique for paths of length at most $ i $ and weight at most $ k $ from $ u $, the variables $ count $ and $ sum $ match $ c_{k}^{i}(u) $ and $ D_{k}^{i}(u) $ on exactly one computation path. So except for the one computation path that made all guesses correctly, all other paths halt and reject. If $ \dist{w}{i}{u}{v} \leq k $ then even though the algorithm uses nondeterministic choices, it outputs `$ \mathsf{true} $' along a single computation path while all other paths halt and reject. Also if $ \dist{w}{i}{u}{v} > k $, the algorithm outputs `$ \mathsf{false} $' along a single computation path while all other paths halt and reject. The space complexity of the algorithm is bounded by the size of the weight function $w$.
As a corollary of Theorem \ref{thm:main} we get the following result.
\begin{corollary}
For $s(n) \geq \log n$, $\NSPACE(s(n)) \subseteq \TIUSP(2^{\mathcal{O} (s(n))}, s^{2}(n))$.
\end{corollary}
\section{Introduction}
Milne-like spacetimes are a class of $k = -1$ FLRW spacetimes which admit continuous spacetime extensions through the big bang. This extension was observed in \cite{GalLing_con},\footnote{These extensions have been noted previously in the physics literature, see e.g. \cite{Coleman}.} and further properties of these spacetimes were explored in \cite{Ling_coord_sing}. Similar to how investigating the geometrical properties of the $r = 2m$ event horizon in the Schwarzschild spacetime led to a better understanding of black holes, we believe that investigating the geometrical properties of the big bang extension for Milne-like spacetimes may lead to a better understanding of cosmology.
In \cite[Thm. 4.2]{Ling_coord_sing}, it was shown that, under suitable hypotheses on the scale factor for a Milne-like spacetime, the equation of state for the energy density $\rho$ and pressure $p$ at the big bang is the same as that of a cosmological constant, namely, $\rho(0) = -p(0)$. We referred to this property as ``the cosmological constant appearing as an initial condition for Milne-like spacetimes." In this paper we generalize this statement to spacetimes which share similar geometrical properties with Milne-like spacetimes but without any homogeneous or isotropic assumptions. (Recall that Milne-like spacetimes are a subclass of FLRW spacetimes and hence are spatially isotropic.)
\newpage
De Sitter space is characterized by an equation of state $\rho = - p$ \cite[p. 124]{HE}. In this way, the initial equation of state $\rho(0) = -p(0)$ for Milne-like spacetimes can be described as yielding a ``quasi de Sitter" expansion for cosmic times $\tau$ close to the big bang $\tau = 0$, i.e. if $\rho(0) = -p(0)$, then $\rho(\tau) \approx -p(\tau)$ for $\tau$ near $\tau = 0$. This has applications to inflationary theory since $\rho(\tau) \approx -p(\tau)$ yields an inflationary era, $a''(\tau) > 0$, provided $\rho(0) > 0$.
This paper is organized as follows. In section \ref{Milne-like ext sec}, we review the definition of Milne-like spacetimes and their continuous spacetime extensions through the big bang. In section \ref{Milne-like cosmo const sec}, we review how the cosmological constant appears as an initial condition for Milne-like spacetimes. In section \ref{main result}, we prove our main results which generalize the results in section \ref{Milne-like cosmo const sec} to spacetimes without homogeneous or isotropic assumptions. Lastly, in section \ref{inflationary section}, we show how the ``quasi de Sitter" nature of these spacetimes can imply inflationary scenarios.
Our main result, Theorem \ref{main}, says that, under certain assumptions on a spacetime $(M,g)$, the cosmological constant appears as an initial condition. The point of the assumptions in the theorem is to remove the spatial isotropy enjoyed by Milne-like spacetimes. However, we have not been able to construct examples of such spacetimes that are not Milne-like. This leads to the following open question: are there spacetimes $(M,g)$ which satisfy the hypotheses of Theorem \ref{main} and are not Milne-like? To simplify matters, it's worth asking the same question but replacing assumption (a) in Theorem \ref{main} with ``$(M,g)$ solves the vacuum Einstein equations with a cosmological constant, i.e. $\text{Ric} = \Lambda g$."
Milne-like spacetimes were found by investigating low regularity aspects of Lorentzian geometry. This is a growing field with many tantalizing problems to solve. For low regularity causal theory, generalizations, and various results, see \cite{ChrusGrant, Leonardo, Ling_causal_theory, Minguzzi_cone, future_not_open, Clemens_GH, Lesourd_Minguzzi}. For low regularity spacetime inextendibility results, see \cite{SbierskiSchwarz1, SbierskiSchwarz2, SbierskiHol, GalLing_con, GLS, GrafLing, ChrusKlinger}. For the singularity theorems in low regularity, see \cite{Hawking_Penrose_C11, Hawking_sing_low_reg, Penrose_sing_low_reg, Graf_sing_thm, Schin_Stein}. For results on geodesics and maximizing causal curves in low regularity, see \cite{Clemens_Steinbauer, Lorentz_meets_Lipschitz, Schin_Stein}. For results on Lorentzian length spaces, see \cite{Lorentzian_length_spaces, cones_as_length_spaces, length_spaces_causal_hierarchy, time_fun_on_length_spaces, Lorentzian_analogue}. Lastly, for results related to the null distance function and other notions of distance defined on a spacetime, see \cite{Null_distance, Spacetime_distances_exploration, prop_null_dist, null_distance_lorentzian_length_spaces}.
\medskip
\subsection{Milne-like spacetimes and their continuous spacetime extensions through the big bang}\label{Milne-like ext sec}
In this section, we review the definition of Milne-like spacetimes and their continuous spacetime extensions through the big bang.
\emph{Milne-like spacetimes} are $k = -1$ FLRW spacetimes satisfying the following limiting condition on the scale factor: $a(\tau) = \tau + o(\tau^{1+\e})$ as $\tau \to 0$ for some $\e > 0$. Specifically, the manifold and metric are given by
\begin{equation}
M \,=\, (0, \tau_{\rm max}) \times \mathbb{R}^3\:\:\:\: \text{ and } \:\:\:\: g \,=\, -d\tau^2 + a^2(\tau) h
\end{equation}
where $(\mathbb{R}^3, h)$ is hyperbolic space with constant sectional curvature $k = -1$. We assume $a(\tau)$ is smooth so that $(M,g)$ is a smooth spacetime. The \emph{Milne universe} corresponds to the scale factor $a(\tau) = \tau$ which isometrically embeds into Minkowski space.
Since the assumption on the scale factor is a limiting condition, Milne-like spacetimes can include an inflationary era, a radiation-dominated era, a matter-dominated era, and a dark energy-dominated era. Hence they can model the dynamics of our universe. Figure \ref{milne universe and milne-like scale factor figure} depicts a Milne-like spacetime modeling an inflationary era.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale = .725]
\draw [<->,thick] (-12,-2.5) -- (-12,2.5);
\draw [<->,thick] (-13.5,-1) -- (-7.5,-1);
\draw [very thick] (-12,-1) -- (-9.5,1.5);
\draw (-7.15, -1) node {\small{$\tau$}};
\draw (-8, 2) node {\small{$a(\tau) \,=\, \tau$}};
\draw (-9.5,-3) node {\small{The Milne universe}};
\draw [<->,thick] (-2,-2.5) -- (-2,2.5);
\draw [<->,thick] (-3.5,-1) -- (2.5,-1);
\draw (2.85, -1) node {\small{$\tau$}};
\draw [very thick] (-2,-1) -- (-1.55,-0.55);
\draw [densely dashed, thick] (-1.5, -0.5) .. controls (-1,.25).. (-.75,1.6);
\draw [densely dashed, thick] (-.75, 1.6) .. controls (-.5,2.4).. (1.9,2.75);
\draw (4.35, 2.9) node {\small{$a(\tau) \,=\, \tau + o(\tau^{1 +\e})$}};
\draw [->] [thick] (0.5,0) arc [start angle=-90, end angle=-120, radius=68pt];
\draw (2.15,0) node [scale = .85]{\small{Inflationary era}};
\draw (1.0,-3) node {\small{A Milne-like spacetime}};
\end{tikzpicture}
\end{center}
\captionsetup{format=hang}
\caption{\small{Left: The scale factor for the Milne universe. Right: The scale factor for a Milne-like spacetime modeling an inflationary era.}}\label{milne universe and milne-like scale factor figure}
\end{figure}
Introducing coordinates $(R, \theta, \phi)$ for the hyperbolic metric $h$, we can write the spacetime metric as
\begin{equation}
g \,=\, -d\tau^2 + a^2(\tau)\big[dR^2 + \sinh^2(R)(d\theta^2 + \sin^2\theta d\phi^2) \big].
\end{equation}
We introduce new coordinates $(t,r,\theta, \phi)$ via
\begin{equation}\label{t and r def}
t \,=\, b(\tau)\cosh(R) \quad \text{ and } \quad r\,=\, b(\tau)\sinh(R),
\end{equation}
where $b$ is given by $b(\tau) = \exp(\int_{\tau_0}^\tau \frac{1}{a(s)}ds)$ for some $\tau_0 > 0$. (Note that for the Milne universe, $a(\tau) = \tau$, we obtain $b(\tau) = \tau$ when $\tau_0 = 1$.) Hence $b$ satisfies $b' = b/a$. Putting $\Omega = 1/b' = a/b$, the metric in these new coordinates is
\begin{align}\label{conformal metric intro eq}
g \,&=\, \Omega^2(\tau)\big[-dt^2 + dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2) \big] \nonumber
\\
&=\, \Omega^2(\tau)[-dt^2 + dx^2 + dy^2 + dz^2] \nonumber
\\
&=\, \Omega^2(\tau)\eta.
\end{align}
Thus Milne-like spacetimes are conformal to (a subset of) Minkowski space. In eq. (\ref{conformal metric intro eq}), $\tau$ is implicitly a function of $t$ and $r$. Specifically, $\tau$ is related to $t$ and $r$ via
\begin{equation}\label{tau t r eq}
b^2(\tau) \,=\, t^2 - r^2.
\end{equation}
Therefore the spacetime manifold $M$ lies within the set of points $t^2 - r^2 > 0$. Since $t > 0$ by eq. (\ref{t and r def}), it follows that $M$ lies within the set of points $t > r$. See figure \ref{milne universe and milne-like figure}.
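For completeness, the conformal form (\ref{conformal metric intro eq}) can be verified directly from (\ref{t and r def}). Differentiating $t = b(\tau)\cosh(R)$ and $r = b(\tau)\sinh(R)$, the cross terms cancel and
\begin{equation*}
-dt^2 + dr^2 \,=\, -b'(\tau)^2 d\tau^2 + b^2(\tau)\, dR^2, \qquad r^2 \,=\, b^2(\tau)\sinh^2(R),
\end{equation*}
so, writing $d\omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2$ and using $\Omega = 1/b' = a/b$,
\begin{equation*}
\Omega^2(\tau)\big[-dt^2 + dr^2 + r^2 d\omega^2\big] \,=\, -d\tau^2 + a^2(\tau)\big[dR^2 + \sinh^2(R)\, d\omega^2\big] \,=\, g.
\end{equation*}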
The proof of \cite[Thm. 3.4]{Ling_coord_sing} shows that $b(0) = 0$ where $b(0) = \lim_{\tau \to 0}b(\tau)$. Therefore, by eq. (\ref{tau t r eq}), $\tau = 0$ corresponds to the set of points $t = r$ on the lightcone at the origin $\mathcal{O}$. Lastly, the proof also shows that $\Omega(0) = \tau_0$. Since $\tau_0 > 0$, eq. (\ref{conformal metric intro eq}) implies that there is no degeneracy at $\tau = 0$ in these coordinates (i.e. the big bang is a coordinate singularity for Milne-like spacetimes). Therefore Milne-like spacetimes admit continuous spacetime extensions through the big bang by defining the extended metric $g_\text{{\rm ext}}$ via $g_\text{{\rm ext}} = \Omega^2(0)\eta$ for points $t \leq r$ and $g_\text{{\rm ext}} = g$ for points $t > r$. ``Continuity" here refers to the fact that the metric $g_\text{{\rm ext}}$ is merely continuous.\footnote{Using similar arguments as in \cite[Appendix B]{Greg_Graf_Ling_AdSxS2}, one can show that Milne-like spacetimes actually admit \emph{Lipschitz} spacetime extensions through the big bang. This should be compared with the results in \cite{SbierskiHol}.}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale = .7]
\shadedraw [white](-4,2) -- (0,-2) -- (4,2);
\shadedraw [dashed, thick, blue](0,-2) -- (4,2);
\shadedraw [dashed, thick, blue](0,-2) -- (-4,2);
\draw [<->,thick] (0,-3.5) -- (0,2.25);
\draw [<->,thick] (-4.5,-2) -- (4.5,-2);
\draw (-.35,2.5) node [scale = .85] {$t$};
\draw (4.75, -2.25) node [scale = .85] {$x^i$};
\draw (-.25,-2.25) node [scale = .85] {$\mathcal{O}$};
\draw [->] [thick] (1.5,2.8) arc [start angle=140, end angle=180, radius=60pt];
\draw (2.0,3.25) node [scale = .85]{\small{The Milne universe}};
\draw [->] [thick] (-2.4,-1.75) arc [start angle=-90, end angle=-30, radius=40pt];
\draw (-3.4,-1.7) node [scale = .85] {\small lightcone};
\draw [thick, red] (-3.84,2) .. controls (0,-2) .. (3.84,2);
\draw [thick, red] (-3.5,2) .. controls (0, -1.3).. (3.5,2);
\draw [->] [thick] (1,-2.3) arc [start angle=-120, end angle=-180, radius=40pt];
\draw (2.3,-2.5) node [scale = .85] {\small{$\tau =$ constant }};
\draw (0,-4.5) node [scale = 1] {\small{$g \,=\, -dt^2 + dx^2 + dy^2 + dz^2$}};
\shadedraw [dashed, thick, white](9,2) -- (13,-2) -- (17,2);
\shadedraw [dashed, thick, blue](13,-2) -- (17,2);
\shadedraw [dashed, thick, blue](13,-2) -- (9,2);
\draw [<->,thick] (13,-3.5) -- (13,2.25);
\draw [<->,thick] (8.5,-2) -- (17.5,-2);
\draw (12.65,2.5) node [scale = .85] {$t$};
\draw (17.75, -2.25) node [scale = .85] {$x^i$};
\draw (12.75,-2.25) node [scale = .85] {$\mathcal{O}$};
\draw [->] [thick] (14.5,2.8) arc [start angle=140, end angle=180, radius=60pt];
\draw (15.0,3.25) node [scale = .85]{\small{A Milne-like spacetime}};
\draw [->] [thick] (10.6,-1.75) arc [start angle=-90, end angle=-30, radius=40pt];
\draw (9.6,-1.7) node [scale = .85] {\small lightcone};
\draw [thick, red] (9.16,2) .. controls (13,-2) .. (16.84,2);
\draw [thick, red] (9.5,2) .. controls (13, -1.3).. (16.5,2);
\draw [->] [thick] (14,-2.3) arc [start angle=-120, end angle=-180, radius=40pt];
\draw (15.3,-2.5) node [scale = .85] {\small{$\tau =$ constant }};
\draw (13,-4.5) node [scale = 1] {\small{$g \,=\,\Omega^2(\tau)[ -dt^2 + dx^2 + dy^2 + dz^2]$}};
\end{tikzpicture}
\end{center}
\captionsetup{format=hang}
\caption{\small{Left: the Milne universe sits inside the future lightcone at the origin $\mathcal{O}$ of Minkowski space. Right: a Milne-like spacetime sits inside the future lightcone at the origin $\mathcal{O}$ of a spacetime conformal to Minkowski space. In both cases the spacetime is foliated by the hyperboloids of constant $\t$ and extends continuously through the lightcone at $\mathcal{O}$.}}\label{milne universe and milne-like figure}
\end{figure}
It's interesting to understand the behavior of the comoving observers within the extended spacetime. Recall that the \emph{comoving observers} are the integral curves of $u = \partial_\tau$ and hence are given by the curves $\tau \mapsto (\tau, R_0, \theta_0, \phi_0)$ for various points $(R_0, \theta_0, \phi_0)$ on the hyperboloid. Physically, the comoving observers in an FLRW spacetime model the trajectories of the material particles which make up the galaxies, dust, etc. within the universe. In the $(t,r,\theta, \phi)$ coordinates, a comoving observer is given by $\tau \mapsto \big(t(\tau), r(\tau), \theta_0, \phi_0\big)$. By eq. (\ref{t and r def}), we have $t(\tau) = \coth(R_0)r(\tau)$. Thus, in the $(t,r,\theta, \phi)$ coordinates, the comoving observers are straight lines emanating from the origin $\mathcal{O}$. See figure \ref{comoving figure in intro}. This behavior can also be seen by noticing that the comoving observers have to be orthogonal to the hypersurfaces of constant $\tau$ which are the hyperboloids shown in figure \ref{milne universe and milne-like figure}.
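This can also be checked symbolically. The following sympy sketch (illustrative only) verifies eq. (\ref{tau t r eq}) along a comoving observer, together with the linear relation $t = \coth(R_0)\, r$:
\begin{verbatim}
# Symbolic check that comoving observers are straight lines through O.
import sympy as sp

b, R0 = sp.symbols('b R_0', positive=True)
t = b * sp.cosh(R0)
r = b * sp.sinh(R0)
assert sp.simplify((t**2 - r**2 - b**2).rewrite(sp.exp)) == 0
assert sp.simplify((t - sp.coth(R0) * r).rewrite(sp.exp)) == 0
\end{verbatim}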
\medskip
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale = 0.7]
\shadedraw [white] (-4.1,2.1) -- (0,-2) -- (4.1,2.1);
\draw [dashed, thick, blue] (0,-2) -- (4.1,2.1);
\draw [dashed, thick, blue] (0,-2) -- (-4.1,2.1);
\draw [<-,thick] (0,-3.5) -- (0,2.0);
\draw [<->,thick] (-4.5,-2) -- (4.5,-2);
\draw (4.75, -2.25) node [scale = .85] {$x^i$};
\draw (-.25,-2.25) node [scale = .85] {$\mathcal{O}$};
\draw [thick, purple] (0,-2) -- (2,2.1);
\draw [thick, purple] (0,-2) -- (3,2.1);
\draw [thick, purple] (0,-2) -- (1,2.1);
\draw [thick, purple] (0,-2) -- (-1,2.1);
\draw [thick, purple] (0,-2) -- (-2,2.1);
\draw [thick, purple] (0,-2) -- (-3,2.1);
\draw [thick, purple] (0,-2) -- (0,2.1);
\end{tikzpicture}
\end{center}
\captionsetup{format=hang}
\caption{\small{The comoving observers in a Milne-like spacetime. They all emanate from the origin $\mathcal{O}$.}}\label{comoving figure in intro}
\end{figure}
Lastly, we note that the behavior illustrated in figure \ref{comoving figure in intro} is closely related to the notion of a \emph{Janus point}, see \cite{janus_point, janus_point_book}. For Milne-like spacetimes, the ``two-futures-one-past" scenario associated with a Janus point can be seen in \cite[figures 6 and 18]{Ling_coord_sing}.
\medskip
\subsection{The cosmological constant appears as an initial condition for Milne-like spacetimes}\label{Milne-like cosmo const sec}
As shown in \cite[Thm. 12.11]{ON}, FLRW spacetimes satisfy the Einstein equations with a perfect fluid $(u, \rho, p)$,
\begin{equation}
\text{Ric} - \frac{1}{2} Rg\,=\, 8\pi T \,=\, 8\pi\big[(\rho + p)u_* \otimes u_* + pg\big],
\end{equation}
where $u_* = g(u,\cdot)$ is the one-form metrically equivalent to the vector field $u = \partial_\tau$.
We emphasize that for FLRW spacetimes, the energy density $\rho$ and pressure $p$ are purely geometrical quantities given by $\rho = \frac{1}{8\pi} G(u,u)$ and $p = \frac{1}{8\pi}G(e,e)$ where $e$ is any unit spacelike vector orthogonal to $u$ (its choice does not matter by isotropy). Here $G = \text{Ric} - \frac{1}{2} Rg$ is the Einstein tensor which is related to $T$ via $G = 8\pi T$. To incorporate a cosmological constant $\Lambda$, we define $T_{\rm normal} = T + \frac{\Lambda}{8\pi}g$ so that the Einstein equations become
\begin{equation}
\text{Ric} - \frac{1}{2} R g + \Lambda g \,=\, 8\pi T_{\rm normal}.
\end{equation}
Setting $\rho_{\rm normal} = T_{\rm normal}(u,u)$ and $p_{\rm normal} = T_{\rm normal}(e,e)$, we have
\begin{equation}\rho_{\rm normal} \,=\, \rho - \rho_{\Lambda} \:\:\:\: \text{ and } \:\:\:\: p_{\rm normal} \,=\, p - p_{\Lambda},
\end{equation}
where $\rho_{\Lambda} = \frac{\Lambda}{8\pi}$ and $p_{\Lambda} = -\frac{\Lambda}{8\pi}$. Note that
\begin{equation}\label{coso const eq st}
\rho_{\Lambda} \,=\, - p_{\Lambda}.
\end{equation}
Eq. (\ref{coso const eq st}) is the \emph{equation of state} for a cosmological constant.
For a $k = -1$ FLRW spacetime, the Friedmann equations \cite[Thm. 12.11]{ON} are given by
\begin{equation}\label{Friedmann eqs}
\frac{8\pi}{3}\rho(\tau) \,=\, \frac{a'(\tau)^2 - 1}{a(\tau)^2} \:\:\:\: \text{ and } \:\:\:\: -8\pi p(\tau) \,=\, \frac{2a''(\tau)a(\tau) + a'(\tau)^2 -1}{a(\tau)^2}.
\end{equation}
Now assume $(M,g)$ is Milne-like. For simplicity, assume that the scale factor is analytic at zero: $a(\tau) = \tau + \sum_2^\infty c_n\tau^n$. Taking the limit $\tau \to 0$ in (\ref{Friedmann eqs}), we find:
\begin{equation}\label{rho and p for Milne}
c_2 \,=\, 0 \:\:\:\: \Longrightarrow \:\:\:\:\rho(0) \,=\, -p(0) \,=\, \frac{3}{8\pi}(6c_3).
\end{equation}
Given eq. (\ref{coso const eq st}), the statement in (\ref{rho and p for Milne}) is what we mean by \emph{the cosmological constant appears as an initial condition for Milne-like spacetimes.} To obtain the same result under more relaxed assumptions on the scale factor, see \cite[Thm. 4.2]{Ling_coord_sing}. We generalize statement (\ref{rho and p for Milne}) in Theorems \ref{main} and \ref{main2} in the next section. Statement (\ref{rho and p for Milne}) won't hold for FLRW models which begin with a radiation-dominated era (i.e. no inflation) since $\rho$ and $p$ diverge as $\tau \to 0$ \cite[exercise 12.14]{ON}.
Lastly, the scalar curvature for $(M,g)$ is given by
\begin{equation}\label{scsalar curv eq}
R(\tau) \,=\, 6\frac{a''(\tau)a(\tau) + a'(\tau)^2 -1}{a(\tau)^2}.
\end{equation}
Taking the limit $\tau \to 0$ in (\ref{scsalar curv eq}), we have
\begin{equation}\label{scalar curv = rho}
c_2 \,=\, 0 \:\:\:\: \Longrightarrow \:\:\:\: R(0) \,=\, 12(6c_3) \,=\, 32\pi\rho(0).
\end{equation}
We generalize statement (\ref{scalar curv = rho}) in Corollary \ref{cor 1} in the next section.
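These limits are easy to verify symbolically. The following sympy sketch (illustrative only) checks statements (\ref{rho and p for Milne}) and (\ref{scalar curv = rho}) for the hypothetical truncated scale factor $a(\tau) = \tau + c_3\tau^3$:
\begin{verbatim}
# Verify rho(0) = -p(0) = (3/8pi)(6 c3) and R(0) = 12(6 c3) = 32 pi rho(0).
import sympy as sp

tau, c3 = sp.symbols('tau c_3', positive=True)
a = tau + c3 * tau**3
ap, app = sp.diff(a, tau), sp.diff(a, tau, 2)

rho = sp.Rational(3, 8) / sp.pi * (ap**2 - 1) / a**2     # Friedmann equations
p   = -(2 * app * a + ap**2 - 1) / (8 * sp.pi * a**2)
R   = 6 * (app * a + ap**2 - 1) / a**2                   # scalar curvature

rho0 = sp.limit(rho, tau, 0, '+')
p0   = sp.limit(p, tau, 0, '+')
R0   = sp.limit(R, tau, 0, '+')
assert sp.simplify(rho0 - sp.Rational(3, 8) / sp.pi * 6 * c3) == 0
assert sp.simplify(rho0 + p0) == 0
assert sp.simplify(R0 - 32 * sp.pi * rho0) == 0
\end{verbatim}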
\medskip
\section{Main result}\label{main result}
\smallskip
In this section, we generalize the results of the previous section to spacetimes that share similar geometrical properties with Milne-like spacetimes but without any homogeneous or isotropic assumptions. Specifically, Theorems \ref{main} and \ref{main2} generalize statement (\ref{rho and p for Milne}) and Corollary \ref{cor 1} generalizes statement (\ref{scalar curv = rho}). We also deduce a statement about the Ricci curvature in Corollary \ref{cor 2}.
Our definition of a spacetime $(M,g)$ will follow \cite{Ling_causal_theory}. In particular, the manifold $M$ is always assumed to be smooth. A \emph{smooth} spacetime is one where the metric $g$ is smooth, that is, its components $g_{\mu\nu} = g(\partial_\mu, \partial_\nu)$ are smooth functions with respect to any coordinates $(x^0, \dotsc, x^n)$. A \emph{continuous} spacetime is one where the metric is continuous, that is, its components are continuous functions with respect to any coordinates.
Let $(M,g)$ be a continuous spacetime. Our definition of timelike curves and the timelike future and past, $I^\pm$, will also follow \cite{Ling_causal_theory}. In particular, a \emph{future directed timelike curve} $\g \colon [a,b] \to M$ is a Lipschitz curve that's future directed timelike almost everywhere and satisfies $g(\g', \g') < -\e$ almost everywhere for some $\e > 0$. This class of timelike curves contains the class of piecewise $C^1$ timelike curves \cite[Prop. 2.4]{Ling_causal_theory}.
Let $(M,g)$ be a smooth spacetime. A continuous spacetime $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ is said to be a \emph{continuous extension} of $(M,g)$ provided $M$ and $M_\text{{\rm ext}}$ have the same dimension, and there is an isometric embedding
\[
(M,g) \,\hookrightarrow\, (M_\text{{\rm ext}}, g_\text{{\rm ext}})
\]
preserving time orientations such that $M \subset M_\text{{\rm ext}}$ is a proper subset. Note that we are identifying $M$ with its image under the embedding.
Let $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ be a continuous extension of a smooth spacetime $(M,g)$. The topological boundary of $M$ within $M_\text{{\rm ext}}$ is denoted by $\partial M = \overline{M} \setminus M$. A future directed timelike curve $\g \colon [a,b] \to M_\text{{\rm ext}}$ is called a \emph{future terminating timelike curve} for a point $p \in \partial M$ provided $\g(b) = p$ and $\g\big([a,b)\big) \subset M$. \emph{Past terminating} timelike curves are defined time-dually. The \emph{future} and \emph{past boundaries} of $M$ within $M_\text{{\rm ext}}$ are defined as
\begin{align*}
\partial^+M \,&=\, \{p \in \partial M \mid \text{there is a future terminating timelike curve for $p$}\}\\
\partial^-M \,&=\, \{p \in \partial M \mid \text{there is a past terminating timelike curve for $p$}\}.
\end{align*}
For example, $\partial^-M$ for a Milne-like spacetime coincides with the lightcone in figure \ref{milne universe and milne-like figure}.
A set in a spacetime is \emph{achronal} if no two points in the set can be joined by a future directed timelike curve. An important result we will use is the following lemma.\footnote{See \cite[Thm. 2.6]{GalLing_con} for a proof. The proof generalizes to the class of timelike curves considered in this paper since it only uses the openness of $I^\pm$ which follows from \cite[Thm. 2.12]{Ling_causal_theory}. Moreover, the ``topological hypersurface" part of the conclusion follows from \cite[Thm. A.6]{Ling_causal_theory}.}
\medskip
\begin{lem}\label{future and past boundary lem}
If $\partial^+M = \emptyset$, then $\partial^-M$ is an achronal topological hypersurface.
\end{lem}
\medskip
For a Milne-like spacetime $(M,g)$, statement (\ref{rho and p for Milne}) implies that $\rho(\tau)$ and $p(\tau)$ extend continuously to $\tau = 0$ along each integral curve of $u$. We will use a slightly stronger version of these ``continuous extensions" which we make precise next.
Suppose $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ is a continuous extension of a smooth spacetime $(M,g)$ such that $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ for some point $\mathcal{O} \in \partial^-M$. Let $f$ be a smooth function on $M$. We say $f$ \emph{extends continuously} to $M \cup \{\mathcal{O}\}$ provided there is a continuous function $\wt{f} \colon M \cup \{\mathcal{O}\} \to \mathbb{R}$ such that $\wt{f}|_M = f$. In this case, we call $\wt{f}$ the \emph{continuous extension} of $f$. The topology on $M \cup \{\mathcal{O}\}$ is the subspace topology inherited from $M_\text{{\rm ext}}$. In other words, $\wt{f}$ is continuous at $\mathcal{O}$ means that given any $\e > 0$, there is a neighborhood $U \subset M_\text{{\rm ext}}$ of $\mathcal{O}$ such that $|\wt{f}(\mathcal{O}) - \wt{f}(x)| < \e$ for all $x \in U \cap (M \cup \{\mathcal{O}\})$.
Likewise, a smooth vector field $X$ on $M$ \emph{extends continuously} to $M \cup \{\mathcal{O}\}$ provided there is a coordinate neighborhood $U$ of $\mathcal{O}$ with coordinates $(x^0, \dotsc, x^n)$ such that each of the components $X^\mu$ in $X = X^\mu \partial_\mu$ extends continuously to $(U \cap M) \cup \{\mathcal{O}\}$. A similar definition applies to smooth tensors on $M$ by requiring each of its components to extend continuously. (This definition does not depend on the choice of coordinate system by the usual transformation law for tensor components.) For example, the metric tensor $g$ on $M$ extends continuously to $M \cup \{\mathcal{O}\}$ since $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ is a continuous extension of $(M,g)$. For another example, suppose $T$ is a smooth tensor on $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$, then obviously the restriction, $T|_M$, extends continuously to $M \cup \{\mathcal{O}\}$ since it extends smoothly.
We are now ready to state our main result.
\medskip
\begin{thm}\label{main}
Let $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ be a continuous extension of a smooth spacetime $(M,g)$ such that $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ for some point $\mathcal{O} \in \partial^-M$. We make the following assumptions.
\begin{itemize}
\item[\emph{(a)}] $(M,g)$ solves the Einstein equations with a perfect fluid $(u, \rho, p)$.
\item[\emph{(b)}] All the integral curves of $u$ have past endpoint $\mathcal{O}$ within $M_\text{{\rm ext}}$. Technical assumption: each of these extended curves is future directed timelike on any compact domain.
\item[\emph{(c)}] The Ricci tensor ${\rm Ric}$ of $(M,g)$, $\rho$, and $p$ extend continuously to $M \cup \{\mathcal{O}\}$.
\item[\emph{(d)}] $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ is strongly causal at $\mathcal{O}$.
\end{itemize}
Then the continuous extensions of $\rho$ and $p$ satisfy $\wt{\rho} = -\wt{p}$ at $\mathcal{O}$.
\end{thm}
\medskip
\noindent\emph{Remarks.}
\begin{itemize}
\item[-] Recall that $\rho = -p$ is the equation of state for a cosmological constant. The conclusion of Theorem \ref{main} is that $\wt{\rho}(\mathcal{O}) = -\wt{p}(\mathcal{O})$; this is what we mean by the cosmological constant \emph{appears as an initial condition.} By continuity, we have $\rho \approx -p$ for points in $M$ near $\mathcal{O}$; this yields a ``quasi de Sitter" expansion which we elaborate more on in the next section.
\item[-] Note that $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ holds for Milne-like spacetimes; see figure \ref{milne universe and milne-like figure}. Assumption (b) mimics what happens in figure \ref{comoving figure in intro}. Hence the hypotheses in Theorem \ref{main} generalize what happens in a Milne-like spacetime but without the spatially isotropic assumption. The technical assumption in (b) says that if $\g \colon [0, b] \to M \cup \{\mathcal{O}\}$ is an integral curve of $u$ on $(0,b]$ with past endpoint $\g(0) = \mathcal{O}$, then $\g$ is a future directed timelike curve within $M_\text{{\rm ext}}$. Clearly $\g|_{[\e, b]}$ is a future directed timelike curve for any $\e > 0$. Since $g(\g', \g') = -1$ almost everywhere, requiring $\g$ to be future directed timelike amounts to $\g$ satisfying a Lipschitz condition. This would be satisfied, for example, if $\g$ was continuously differentiable at $\tau = 0$ (which holds for Milne-like spacetimes).
\item[-] Regarding assumption (c), let $(M,g)$ be a Milne-like spacetime with a scale factor that's analytic at zero: $a(\tau) = \tau + \sum_{n=2}^\infty c_n\tau^n$. If $c_2 \neq 0$, then it's easy to see from eq. (\ref{Friedmann eqs}) that $\rho$ and $p$ diverge as $\tau \to 0$. So our assumption that $\rho$ and $p$ extend continuously to $M \cup \{\mathcal{O}\}$ is similar to setting $c_2 = 0$ in statement (\ref{rho and p for Milne}). Moreover, if $c_2 = 0$ and $c_4 = 0$, then the Ricci tensor, $\text{Ric}$, of $(M,g)$ extends continuously to $M \cup \{\mathcal{O}\}$. (In fact $\text{Ric}$ extends continuously to $M \cup \partial^-M$). This follows from \cite[Lem. 3.5]{Ling_coord_sing} since $\text{Ric}$ can be written as a sum of products of the metric, its inverse, and their first and second derivatives along with the fact that the inverse metric is as regular as the metric.
\item[-] Regarding assumption (d), recall that $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ is \emph{strongly causal} at $\mathcal{O}$ means that for any neighborhood $U$ of $\mathcal{O}$ there is a neighborhood $V \subset U$ of $\mathcal{O}$ such that $\g(a),\g(b) \in V$ implies $\g\big([a,b]\big) \subset U$ whenever $\g \colon [a,b] \to M_\text{{\rm ext}}$ is a future directed causal curve. This assumption holds for the usual continuous extensions of Milne-like spacetimes constructed in section \ref{Milne-like ext sec} since these constructions are conformal to (subsets of) Minkowski space which are strongly causal at every point.
\end{itemize}
\medskip
\noindent\underline{\emph{Proof of Theorem \emph{\ref{main}}}}.
\medskip
Before providing the details, we briefly outline the proof: If $\wt{\rho}(\mathcal{O}) \neq -\wt{p}(\mathcal{O})$, then assumptions (a) and (c) imply that the vector field $u$ extends continuously to $M \cup \{\mathcal{O}\}$. However, assumptions (b) and (d) along with $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ imply that $M \cup \partial^-M$ locally looks like figure \ref{comoving figure in intro} near $\mathcal{O}$. It's evident from figure \ref{comoving figure in intro} that $u$ does not extend continuously to $M \cup \{\mathcal{O}\}$, yielding a contradiction.
Seeking a contradiction, suppose $\wt{\rho} \neq - \wt{p}$ at $\mathcal{O}$. We show that this implies that $u$ extends continuously to $M \cup \{\mathcal{O}\}$. Since $(M,g)$ solves the Einstein equations with a perfect fluid, we have
\[
\text{Ric} - \frac{1}{2} Rg\,=\, 8\pi T \,=\, 8\pi\big[(\rho + p)u_* \otimes u_* + pg\big]
\]
within $M$. Here $u_* = g(u, \cdot)$ is the one-form metrically equivalent to the vector field $u$. Since $\text{Ric}$ extends continuously to $M \cup \{\mathcal{O}\}$, so does the scalar curvature $R$ and hence so does $T$.
Since $\wt{\rho}(\mathcal{O}) \neq -\wt{p}(\mathcal{O})$, there is a coordinate neighborhood $U \subset M_\text{{\rm ext}}$ of $\mathcal{O}$ such that $\wt{\rho} + \wt{p} \neq 0$ in $U \cap (M \cup \{\mathcal{O}\})$. Then, within $U \cap M$, we have
\[
u_* \otimes u_* \,=\, \frac{1}{\rho + p}(T - pg).
\]
The right-hand side of the above equality extends continuously to $(U \cap M) \cup \{\mathcal{O}\}$, hence so does the left-hand side. Let $S$ denote the continuous extension of $u_* \otimes u_*$ to $M \cup \{\mathcal{O}\}$. Let $(x^0, \dotsc, x^n)$ denote the coordinates on $U$. Let $S_{\mu\nu} = S(\partial_\mu, \partial_\nu)$. Then $S_{\mu\nu} = u_\mu u_\nu$ within $U \cap M$ where $u_\mu$ are the components of $u_*$. Define $\wt{u}_*$ on $M \cup \{\mathcal{O}\}$ via $\wt{u}_*|_M = u_*$ and the extension
\[
\wt{u}_\mu(\mathcal{O}) \,=\, \left\{
\begin{array}{ll}
+\sqrt{S_{\mu\mu}(\mathcal{O})} & \text{ if } S_{\mu\mu}(\mathcal{O}) \neq 0 \text{ and } u_\mu > 0 \text{ near } \mathcal{O} \\
-\sqrt{S_{\mu\mu}(\mathcal{O})} & \text{ if } S_{\mu\mu}(\mathcal{O}) \neq 0 \text{ and } u_\mu < 0 \text{ near } \mathcal{O}
\\
0 & \text{ if } S_{\mu\mu}(\mathcal{O}) = 0
\end{array}
\right.
\]
Then $\wt{u}_*$ is a continuous extension of $u_*$ to $M \cup \{\mathcal{O}\}$. Let $\wt{u}$ denote the vector field metrically equivalent to $\wt{u}_*$ (i.e. its components are given by $\wt{u}^\mu = g_\text{{\rm ext}}^{\mu\nu} \wt{u}_\nu$). Then $\wt{u}$ is a continuous extension of $u$ to $M \cup \{\mathcal{O}\}$.
Since $g(u, u) = -1$ (by definition of a perfect fluid), continuity implies $g_\text{{\rm ext}}(\wt{u}, \wt{u}) = -1$ at $\mathcal{O}$. Using \cite[Lem. 2.9]{Ling_causal_theory} and applying the Gram-Schmidt orthogonalization process appropriately, for any $0 < \e <1$, we can assume that the coordinates $(x^0, \dotsc, x^n)$ on $U$ satisfy assumptions (1) - (6) below.
\begin{itemize}
\item[(1)] $\partial_0|_{\mathcal{O}} = \wt{u}(\mathcal{O})$,
\item[(2)] $x^0$ is a time function on $U$,
\item[(3)] $\wt{g}_{\mu\nu}(\mathcal{O}) = \eta_{\mu\nu}$ and $|\wt{g}_{\mu\nu}(x) - \eta_{\mu\nu}| < \e$ for all $x \in U$ where $\wt{g}_{\mu\nu} = g_\text{{\rm ext}}(\partial_\mu, \partial_\nu)$.
\end{itemize}
Here $\eta_{\mu\nu}$ are the usual components of the Minkowski metric with respect to the coordinates $(x^0, \dotsc, x^n)$. That is,
\[
\eta \,=\, \eta_{\mu\nu}dx^\mu dx^\nu \,=\, -(dx^0)^2 + \delta_{ij}dx^idx^j.
\]
By choosing $U$ even smaller, we can also assume that
\begin{itemize}
\item[(4)] $\eta^\e(X,X) \leq 0 \,\Longrightarrow\, g_\text{{\rm ext}}(X,X) < 0$ for all nonzero $X \in T_pM_{\text{{\rm ext}}}$ whenever $p \in U$,
\end{itemize}
where $\eta^\e$ is the narrow Minkowskian metric on $U$ given by
\[
\eta^\e \,=\, -\frac{1-\e}{1+\e}(dx^0)^2 + \delta_{ij}dx^i dx^j \,=\, \eta + \frac{2\e}{1+\e}(dx^0)^2.
\]
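Note that the $\eta^\e$-null directions satisfy $|dx^0| = \sqrt{\tfrac{1+\e}{1-\e}}\,|d\vec{x}\,|$, where $|d\vec{x}\,|^2 = \delta_{ij}dx^idx^j$. In particular, for $\e = \frac{3}{5}$ this ratio equals $2$, which is the `slope' referred to below.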
Moreover, since $\wt{u}$ is a continuous extension of $u$ to $M \cup \{\mathcal{O}\}$, we can also assume that
\begin{itemize}
\item[(5)] $|\wt{u}^\mu (x) - \wt{u}^\mu(\mathcal{O})| < \frac{\e}{2}$ for all $x \in U \cap (M \cup \{\mathcal{O}\})$.
\end{itemize}
Lastly, if $\phi \colon U \to \mathbb{R}^{n+1}$ denotes the coordinate map (i.e. $\phi = (x^0, \dotsc, x^n)$), then, by restricting the domain of $\phi$, we can assume that
\begin{itemize}
\item[(6)] $\phi(U) = B_{2r}$ where $B_{2r} \subset \mathbb{R}^{n+1}$ is an open ball with radius $2r > 0$ (as measured by the Euclidean metric $\delta = \delta_{\mu\nu}dx^\mu dx^\nu$ on $U$) centered at the origin: $\phi(\mathcal{O}) = (0, \dotsc, 0)$.
\end{itemize}
Choose $\e = \frac{3}{5}$. Then $\eta^\e$ has lightcones with `slope' $2$. Define the curve $c \colon [0, r] \to B_{2r}$ by $c(t) = (t, \frac{t}{2}, 0, \dotsc, 0)$. By (4), the curve $\phi^{-1}\circ c(t)$ is future directed timelike. Let $q = \phi^{-1} \circ c(r)$. Since $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$, it follows that $q \in M$.
Let $\g\colon [0,b] \to M \cup \{\mathcal{O}\}$ denote the integral curve of $u$, i.e. $\g'(\tau) = u\circ \g(\tau)$ on $(0,b]$, with future endpoint $\g(b) = q$ and past endpoint $\g(0) = \mathcal{O}$. Note that $\tau$ is the proper time of $\g$.
\medskip
\medskip
\noindent{\bf Claim.} We can assume $\g\big([0,b]\big) \subset U$.
\medskip
\medskip
The claim follows by strong causality of $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ at $\mathcal{O}$. To see this, note that strong causality implies that there is a neighborhood $V \subset U$ of $\mathcal{O}$ such that if $\g$ has endpoints in $V$, then the image of $\g$ is contained in $U$. Let $V' \subset V$ denote a neighborhood of $\mathcal{O}$ satisfying assumption (6) above. Then we work in $V'$ to construct the curve $\g$ in exactly the same way as in the paragraph above the claim. Then strong causality implies that the image of $\g$ is contained in $U$. This proves the claim.
By the claim and (2), we can reparameterize $\g$ by $x^0$. Let $\bar{\gamma} \colon [0,r] \to M \cup \{\mathcal{O}\}$ be the reparameterization of $\g$ by $x^0$. Then
\[
\bar{\g}(t) \,=\, \g \circ (x^0 \circ \g)^{-1}(t) \:\:\:\: \text{ where } \:\:\:\: x^0 \circ \g(\tau) \,=\, \int_0^\tau \frac{d(x^0 \circ \g)}{d\tau'}d\tau'.
\]
Note that $\bar{\gamma}(0) = \mathcal{O}$ and $\bar{\gamma}(r) = q$. Since $\phi(q) = (r, \frac{r}{2}, 0, \dotsc, 0)$, the mean value theorem implies that there exists a $t_* \in (0,r)$ such that $(x^1 \circ \bar{\gamma})'(t_*) = \frac{1}{2}$. Set $\gamma^\mu = x^\mu \circ \gamma$ and $\bar{\gamma}^\mu = x^\mu \circ \bar{\gamma}$. Using the fact that $\tau$ and $t = x^0 \circ \g$ are inverses of each other, the chain rule gives
\[
\frac{1}{2} \,=\, \frac{d \bar{\gamma}^{1}}{dt}(t_*) \,=\, \frac{d\g^1}{d\tau}\big(\tau(t_*)\big)\frac{d\tau}{dt}(t_*)\,=\, \frac{d\gamma^1/d\tau}{d\g^0 /d\tau}\big(\tau(t_*)\big) \,=\, \frac{u^1}{u^0}\big(\bar{\g}(t_*)\big).
\]
However, by (1) and (5), we have
\[
\sup_{x \in U}\,\frac{u^1}{u^0}(x) \,\leq\, \frac{0 + \e/2}{1 - \e/2} \,=\, \frac{3}{7} \,<\, \frac{1}{2},
\]
which is a contradiction.
\hfill $\Box$ \medskip
\medskip
\medskip
A careful inspection of the proof of Theorem \ref{main} reveals that assumption (d) is only used to prove the claim in the proof. The next theorem shows that one can replace assumption (d) with (d$'$). Essentially, (d$'$) says that $\partial^-M$ looks like figure \ref{milne universe and milne-like figure} at least locally near $\mathcal{O}$.
\medskip
\medskip
\begin{thm}\label{main2}
Let $(M_\text{{\rm ext}}, g_\text{{\rm ext}})$ be a continuous extension of a smooth spacetime $(M,g)$ such that $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ for some $\mathcal{O} \in \partial^-M$. Assume \emph{(a) - (c)} from Theorem \emph{\ref{main}} but replace assumption \emph{(d)} with
\begin{itemize}
\item[\emph{(d$'$)}] For any neighborhood $U$ of $\mathcal{O}$, there is a neighborhood $V \subset U$ of $\mathcal{O}$ such that the past boundary of $M$ satisfies $\partial^-M \cap V \subset J^+(\mathcal{O}, V)$.
\end{itemize}
Then the continuous extensions of $\rho$ and $p$ satisfy $\wt{\rho} = -\wt{p}$ at $\mathcal{O}$.
\end{thm}
\noindent \emph{Proof:\ }
From the discussion above Theorem \ref{main2}, it suffices to show that the claim in the proof of Theorem \ref{main} holds. That is, we want to show that we could have chosen our neighborhood $U$ such that $\g\big([0,b]\big) \subset U$.
Let $U'$ be a neighborhood of $\mathcal{O}$ satisfying (1)-(5) in the proof of Theorem \ref{main}. By assumption (d$'$), there is a neighborhood $V \subset U'$ of $\mathcal{O}$ such that $\partial^-M \cap V \subset J^+(\mathcal{O}, V)$. Let $U \subset U' \cap V$ be a neighborhood of $\mathcal{O}$ satisfying
\begin{itemize}
\item[(6$'$)] $\phi(U) = (-2r, 2r) \times (-10r, 10r)^n$ for some $r > 0$ where $n + 1$ is the dimension of the spacetime. Assume that $U$ is still centered at the origin: $\phi(\mathcal{O}) = (0, \dotsc, 0)$.
\end{itemize}
Again, choose $\e = 3/5$ and define the curve $c \colon [0,r] \to \phi(U)$ by $c(t) = (t, \frac{t}{2}, 0, \dotsc, 0)$. Let $q = \phi^{-1}\circ c(r)$. Again, we have $q \in M$, so let $\g \colon [0, b] \to M \cup \{\mathcal{O}\}$ denote the integral curve of $u$ with future endpoint $q$ and past endpoint $\mathcal{O}$. As remarked above, it suffices to show $\g \big([0,b] \big) \subset U$.
Seeking a contradiction, suppose this is not the case. Define
\[\tau_0 \,=\, \inf \{\tau \in [0,b] \mid \g\big((\tau,b]\big) \subset U\}.
\]
Then $\tau_0 > 0$ by assumption and $\g(\tau_0) \in \partial U$. Since $\e = 3/5$ (and hence lightcones are contained within wider Minkowski lightcones with slope 1/2), applying \cite[Lem. 2.9 and 2.11]{Ling_causal_theory} shows that
\begin{itemize}
\item[(i)] $\lim_{\tau \to \tau_0} x^0 \circ \g(\tau) \,=\, -2r.$
\end{itemize}
Since $\partial^-M \cap U \subset \partial^-M \cap V \subset J^+(\mathcal{O}, V) \subset J^+(\mathcal{O}, U')$, another application of \cite[Lem. 2.9 and 2.11]{Ling_causal_theory} gives
\begin{itemize}
\item[(ii)] $x^0\big(\partial^-M \cap U\big) \subset [0, 2r)$.
\end{itemize}
Since $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$, we have $\partial^+M = \emptyset$. Therefore, by Lemma \ref{future and past boundary lem}, $\partial^-M$ is an achronal topological hypersurface. Since it's a topological hypersurface, we can assume that it separates $U$ by shrinking $U$ if necessary. The separation is given by the following disjoint union
\[
U \,=\, I^+(\partial^-M, U) \sqcup (\partial^-M \cap U) \sqcup \big(U \setminus \overline{I^+(\partial^-M, U)}\big).
\]
We have $q \in I^+(\partial^-M, U)$. By (i) and (ii), it follows that there must be some $\tau_* \in (\tau_0,b)$ such that $\g(\tau_*) \in \partial^-M$. However, this contradicts the achronality of $\partial^-M$ since $\g|_{[0,\tau_*]}$ is a future directed timelike curve with endpoints on $\partial^-M$.
\hfill $\Box$ \medskip
\medskip
\medskip
\noindent\emph{Remark.} The assumption $M = I^+(\mathcal{O}, M_\text{{\rm ext}})$ in Theorem \ref{main} can be replaced with the weaker assumption $I^+(\mathcal{O}, M_\text{{\rm ext}}) \subset M$; the proof remains unchanged. This is not true for Theorem \ref{main2} since $M \subset I^+(\mathcal{O}, M_\text{{\rm ext}})$ was used to ensure that $\partial^+M = \emptyset$. However, we can make the weaker assumption $I^+(\mathcal{O}, M_\text{{\rm ext}}) \subset M$ in Theorem \ref{main2} so long as we also assume conditions on $(M,g)$ that ensure $\partial^+M = \emptyset$. For example, if $(M,g)$ is future one connected and future divergent, then $\partial^+M = \emptyset$ \cite{SbierskiSchwarz1, GalLing_con}, or if $(M,g)$ is future timelike geodesically complete and globally hyperbolic, then $\partial^+M = \emptyset$ \cite{GLS}. In fact, future timelike geodesic completeness was shown to be sufficient \cite{Minguzzi_Suhr}.
\medskip
\medskip
\begin{cor}\label{cor 1}
Assume the hypotheses of either Theorem \emph{\ref{main}} or Theorem \emph{\ref{main2}}. Then the continuous extension of the scalar curvature satisfies
\[
\wt{R}(\mathcal{O}) \,=\, 16\pi \frac{n+1}{n-1} \wt{\rho}(\mathcal{O}).
\]
When the spacetime dimension is $n + 1 = 4$, we recover equation \emph{(\ref{scalar curv = rho})}.
\end{cor}
\noindent \emph{Proof:\ }
This follows from tracing the Einstein equations and using $\wt{\rho} = -\wt{p}$ at $\mathcal{O}$.
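In more detail: taking the $g$-trace of the Einstein equations $\text{Ric} - \frac{1}{2}Rg = 8\pi T$ in dimension $n+1$ gives
\[
\frac{1-n}{2}R \,=\, 8\pi\big[(\rho + p)g(u,u) + (n+1)p\big] \,=\, 8\pi(np - \rho), \quad \text{ i.e. } \quad R \,=\, \frac{16\pi(\rho - np)}{n-1}.
\]
Passing to the continuous extensions at $\mathcal{O}$ and substituting $\wt{p}(\mathcal{O}) = -\wt{\rho}(\mathcal{O})$ gives the stated formula.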
\hfill $\Box$ \medskip
\medskip
\medskip
Corollary \ref{cor 2} shows that we can also get a statement about the Ricci curvature at $\mathcal{O}$. In some sense, it says that the spacetime ``begins Einstein." In fact Corollary \ref{cor 2} implies Corollary \ref{cor 1}; however, we have to work a bit harder to prove Corollary \ref{cor 2} since it relies on the technical assumption appearing in part (b) of Theorem \ref{main}.
\medskip
\begin{cor}\label{cor 2}
Assume the hypotheses of either Theorem \emph{\ref{main}} or Theorem \emph{\ref{main2}}. Then the continuous extension of ${\rm Ric}$ to $M \cup \{\mathcal{O}\}$ satisfies
\[
\wt{{\rm Ric}} \,=\, \frac{16\pi \wt{\rho}}{n-1} \, g_\text{{\rm ext}}
\]
at $\mathcal{O}$.
\end{cor}
\noindent \emph{Proof:\ }
Rewriting the Einstein equations, we have
\[
\text{Ric} \,=\, 8\pi \big[(\rho + p)u_* \otimes u_* + pg\big] + \frac{8\pi}{n-1} \big[(\rho + p) - (n+1)p\big]g
\]
within $M$.
Let $U$ be a coordinate neighborhood of $\mathcal{O}$ with coordinates $(x^0, \dotsc, x^n)$. Write $R_{\mu\nu} = \text{Ric}(\partial_\mu, \partial_\nu)$, and let $\wt{R}_{\mu\nu}$ denote their continuous extensions to $M \cup \{\mathcal{O}\}$. Consider a future directed timelike curve $\g\colon [0,b] \to M \cup \{\mathcal{O}\}$ where $\g$ is an integral curve of $u$ on $(0,b]$ with past endpoint $\g(0) = \mathcal{O}$. Setting $\g^\mu = x^\mu \circ \g$, we have
\[
R^{\mu\nu}\circ \g \,=\, 8\pi \left[(\rho + p)\frac{d\gamma^\mu}{d\tau}\frac{d\gamma^\nu}{d\tau} + p g^{\mu\nu} \right] + \frac{8\pi}{n-1}\big[(\rho + p) - (n+1)p\big]g^{\mu\nu}.
\]
Since $\g$ is future directed timelike, it satisfies a Lipschitz condition by definition. Therefore there is a constant $C$ such that
\[
\left|\frac{d\gamma^\mu}{d\tau} \right| \,\leq\, C
\]
almost everywhere \cite[Prop. 2.2]{Ling_causal_theory}.
Since $(\rho + p) \to 0$ along $\g(\tau)$ as $\tau \to 0$, the bound above implies that $(\rho + p)\frac{d\g^\mu}{d\tau}\frac{d\g^\nu}{d\tau} \to 0$ as $\tau \to 0$. Therefore
\[
\wt{R}^{\mu\nu}(\mathcal{O}) = -\frac{16\pi \wt{p}(\mathcal{O})}{n-1}\wt{g}^{\mu\nu}(\mathcal{O})
\]
where $\wt{g}^{\mu\nu}$ are the components of the inverse metric to $g_\text{{\rm ext}}$. The result follows.
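Explicitly, the surviving terms combine as
\[
8\pi\wt{p}(\mathcal{O}) + \frac{8\pi}{n-1}\big(-(n+1)\wt{p}(\mathcal{O})\big) \,=\, 8\pi\wt{p}(\mathcal{O})\Big(1 - \frac{n+1}{n-1}\Big) \,=\, -\frac{16\pi\wt{p}(\mathcal{O})}{n-1},
\]
and lowering the indices with $g_\text{{\rm ext}}$ and substituting $\wt{p}(\mathcal{O}) = -\wt{\rho}(\mathcal{O})$ from Theorems \ref{main} and \ref{main2} yields the formula for $\wt{{\rm Ric}}$ at $\mathcal{O}$.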
\hfill $\Box$ \medskip
\medskip
\medskip
In this section, we have always assumed that $\text{Ric}$ extends continuously to $M \cup \{\mathcal{O}\}$. Finding sufficient conditions on the perfect fluid $(u,\rho,p)$ for when this happens is perhaps an interesting question, but this will not be explored here.
\section{Some remarks on inflationary scenarios}\label{inflationary section}
In this section, we show how the results from the previous section can be used to imply inflationary scenarios for spacetimes without the homogeneous and isotropic assumptions associated with FLRW spacetimes. The idea is this: the cosmological constant appearing as an initial condition yields a ``quasi de Sitter" expansion in the early universe. For FLRW spacetimes, this is seen via Friedmann's second equation. (We show this in statement (\ref{2nd friedmann eq}) below.) For the nonhomogeneous spacetimes considered in the previous section, we will use the Raychaudhuri equation, which is a generalization of Friedmann's second equation, to obtain an inflationary expansion.
\newpage
An inflationary era is characterized by an accelerated expansion, $a''(\tau) > 0$, right after the big bang but before the radiation dominated era. It's speculated to occur since it solves certain problems in cosmology (e.g. the horizon and flatness problems) and predicts that the spectrum of density perturbations is scale-invariant. For a nice introduction on inflationary theory, see \cite{Liddle}; for a more thorough account, see \cite{WeinbergCos}. The significance of $a''(\tau) > 0$ is that it violates the \emph{strong energy condition} which holds for all known physical matter models, e.g. dust and radiation. (It's also the energy condition appearing in Hawking's cosmological singularity theorems.) Therefore, if the energy-momentum tensor was dominated by radiation in the early universe immediately after the big bang, then an inflationary era cannot occur. Some other matter model, which violates the strong energy condition, must be used to generate an inflationary era.
To account for an inflationary era, one normally introduces an ``inflaton" scalar field $\phi$ in a slow-roll potential. If the energy-momentum tensor was dominated by the scalar field, then the slow-roll potential implies $a''(\tau) > 0$. This is
\emph{not} our approach. Instead, we obtain an inflationary era from the \emph{geometry} of the spacetimes considered in the previous section. The geometry of these spacetimes, encoded in the assumptions of Theorems \ref{main}/\ref{main2}, implies that the cosmological constant appears as an initial condition, which in turn yields a ``quasi de Sitter" expansion. We show this next.
From eq. (\ref{Friedmann eqs}), we obtain Friedmann's second equation
\begin{equation}\label{2nd friedmann eq}
\frac{a''(\tau)}{a(\tau)} \,=\, -\frac{4\pi}{3}\big(\rho(\tau) + 3p(\tau)\big).
\end{equation}
Consider a Milne-like spacetime with $a(\tau) = \tau + \sum_{n=2}^\infty c_n\tau^n$ near $\tau = 0$. By statement (\ref{rho and p for Milne}) and eq. (\ref{2nd friedmann eq}), we see that
\begin{equation}\label{inflationary scenario eq}
c_2 \,=\, 0 \quad \Longrightarrow \quad \rho(0) \,=\, -p(0) \quad \Longrightarrow \quad a''(\tau) \,>\, 0
\end{equation}
for $\tau$ near $\tau = 0$ provided $\rho(0) > 0$. Hence we see that the assumptions $c_2 = 0$ and $\rho(0) > 0$ yield an inflationary era.
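Indeed, by continuity, $\rho(\tau) + 3p(\tau) \to \rho(0) + 3p(0) = -2\rho(0) < 0$ as $\tau \to 0$, so the right-hand side of eq. (\ref{2nd friedmann eq}) is positive for all sufficiently small $\tau > 0$.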
Next we generalize statement (\ref{inflationary scenario eq}) to the nonhomogeneous spacetimes considered in the previous section. First, we identify a geometric quantity for $a''$. Let $\{e_0, e_1, e_2, e_3\}$ be an orthonormal frame for an FLRW spacetime with $e_0 = u = \partial_\tau$. Using $\langle \cdot, \cdot \rangle$ to denote the metric $g(\cdot, \cdot)$, we have
\[
\text{div}(u) \,=\, -\langle \nabla_{e_0} u, e_0 \rangle + \sum_{i = 1}^3 \langle \nabla_{e_i} u, e_i \rangle \,=\, \sum_{i = 1}^3 \langle \nabla_{e_i} u, e_i \rangle \,=\, 3\frac{a'}{a}.
\]
Set $H = \frac{1}{3}\text{div}(u) = a'/a$. Then the equation $a''/a = (a'/a)' + (a'/a)^2$ becomes
\begin{equation}\label{a'' eq}
a''/a \,=\, H' + H^2.
\end{equation}
The right hand side of eq. (\ref{a'' eq}) will be our geometrical substitute for $a''$. For FLRW spacetimes, $u$ is hypersurface orthogonal and so $H$ coincides with the mean curvature, $\frac{1}{3}\text{tr}(K)$, of the constant $\tau$-slices where $K$ is the second fundamental form of the slice given by $K(X,Y) = \langle \nabla_X u, Y\rangle$.\footnote{Our convention for the mean curvature $H$, which includes the 1/3 factor in front of $\text{tr}(K)$, coincides with the Hubble parameter, $a'/a$, which is also denoted by $H$ in the physics literature.}
Now let $(M,g)$ be any smooth spacetime, and let $u$ be a smooth future directed timelike vector field on $M$ normalized to $\langle u,u\rangle = -1$. For simplicity, assume $\text{dim}(M) = 4$. Define $H = \frac{1}{3}\text{div}(u)$. Letting $\tau$ denote the proper time of the flow lines of $u$, the Raychaudhuri equation \cite[eq. (4.26)]{HE} gives
\begin{equation}\label{Ray eq}
3\left(\frac{d H}{d\tau} + H^2\right)\,=\, -\text{Ric}(u,u) + 2\omega^2 - 2\sigma^2 + \text{div}(\nabla_u u),
\end{equation}
where $2\omega^2 = \omega_{ij}\omega^{ij} \geq 0$ and $2\s^2 =\s_{ij}\s^{ij}\geq 0$. Here $\omega$ and $\sigma$ are the \emph{vorticity} and \emph{shear} scalars, which are completely determined by vectors spanning the orthogonal complement $u^\perp$, see \cite[ch. 7 and 12]{Frankel_grav}. When $u$ is hypersurface orthogonal, the vorticity scalar vanishes and $H$ coincides with the mean curvature of the hypersurfaces.
Following \cite{Ellis, Ellis_Elst}, we define an \emph{average length scale} $\mathfrak{a}(\tau)$ along the flow lines of $u$ via $\mathfrak{a}'/\mathfrak{a} = H$ where the prime $'$ denotes a derivative with respect to the proper time $\tau$ of the flow lines. With this definition, we have $\mathfrak{a}''/\mathfrak{a} = H' + H^2$ which generalizes eq. (\ref{a'' eq}). For FLRW spacetimes, the average length scale $\mathfrak{a}(\tau)$ coincides with the scale factor $a(\tau)$.
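Explicitly, the defining relation integrates along each flow line to
\[
\mathfrak{a}(\tau) \,=\, \mathfrak{a}(\tau_0)\exp\left(\int_{\tau_0}^{\tau} H(\tau')\, d\tau'\right),
\]
so $\mathfrak{a}$ is determined up to the (irrelevant) multiplicative constant $\mathfrak{a}(\tau_0) > 0$.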
Consider the setting of the previous section. Assume the hypotheses of either Theorem \ref{main} or Theorem \ref{main2}. By Corollary \ref{cor 2}, for points near $\mathcal{O}$, eq. (\ref{Ray eq}) gives
\begin{equation}\label{a'' gen eq}
3\frac{\mathfrak{a}''}{\mathfrak{a}} \,\approx\,\, 8\pi \wt{\rho}(\mathcal{O}) + 2\omega^2 -2\s^2 + \text{div}(\nabla_u u).
\end{equation}
Eq. (\ref{a'' gen eq}) generalizes eq. (\ref{2nd friedmann eq}). If $2\s^2 - \text{div}(\nabla_u u)$ is sufficiently less than $8\pi \wt{\rho}(\mathcal{O})$ for points close to $\mathcal{O}$, then eq. (\ref{a'' gen eq}) shows that $\mathfrak{a}'' > 0$. Since $\mathfrak{a}''/\mathfrak{a} = H' + H^2$, eq. (\ref{a'' eq}) shows that we can interpret $\mathfrak{a}'' > 0$ as an analogue for an inflationary era in this nonhomogeneous setting.
If $u$ is a geodesic vector field (which is the case for FLRW spacetimes), then $\nabla_u u = 0$ and so we only require that $2\s^2$ is sufficiently less than $8\pi \wt{\rho}(\mathcal{O})$ to obtain $\mathfrak{a}'' > 0$. Recall that $2\s^2$ measures the rate of shear of the flow; it's zero for FLRW spacetimes and, in fact, zero for any fluid flow with uniform expansion. In this sense, assuming $2\s^2$ is sufficiently small can be thought of as a substitute for the spatial isotropy associated with FLRW spacetimes.
\medskip
\section*{Acknowledgments}
The author gratefully acknowledges being supported by the Harold H. Martin Postdoctoral Fellowship at Rutgers University. He thanks Greg Galloway for many helpful comments and pointing out references \cite{Ellis, Ellis_Elst}. He thanks two anonymous reviewers who greatly improved the quality of the paper. Lastly, he thanks the organizers of \emph{Singularity theorems, causality, and all that; a tribute to Roger Penrose} for putting together a stimulating conference.
\medskip
\medskip
\medskip
\noindent {\bf Data availability statement}
\medskip
\noindent This manuscript has no associated data.
\bibliographystyle{amsplain}
|
2,877,628,088,500 | arxiv | \section{Introduction}
It is well-known in statistical physics that N-body correlations have to be carefully described in order to characterize statistical properties of complex systems.
For instance, in the case of the Liouville equation for Hamiltonian dynamics, this problem is at the heart of the derivation of the reduced BBGKY hierarchy, thereby leading to the Boltzmann and Enskog theories for fluids \cite{balescu}.
In this line of thought, it is essential to discriminate N-body correlations that are due to intrinsic N-body interactions from those that merely develop from lower-order interactions. This issue is directly related to a well-known problem in complex network theory, i.e. the "projection" of bipartite networks onto simplified structures. As a paradigm for such systems, people usually consider co-authorship networks \cite{newman}, namely networks composed of two kinds of nodes, e.g. the scientists and the articles, with links running
between scientists and the papers they wrote.
In that case, the usual
projection method \cite{newman2} consists in focusing e.g. on the scientist nodes, and in drawing a link between them if they co-authored a common paper (see Fig.\ref{projection}). As a result, the projected system is a unipartite network of scientists, that characterizes the community structure of science collaborations. Such studies have been very active recently, due to their complex social structure \cite{newman3}, to the ubiquity of such bipartite networks in complex systems \cite{bara} \cite{ramasco}, and to the large databases available.
\begin{figure}
\hspace{1.8cm}
\includegraphics[width=3.00in]{projection.ps}
\caption{\label{projection} Usual projection method of the bipartite graph on a unipartite scientists graph.}
\end{figure}
A standard quantity of interest in order to characterize the structure of the projected network is the clustering coefficient \cite{watts}, which measures network "transitivity", namely the probability that two scientist's co-authors have themselves coauthored a paper. In topological terms, it is a measure of the density of triangles in a network, a triangle being formed every time two of one's collaborators collaborate with each other. This coefficient is usually very high in systems where sociological cliques develop \cite{eurovision}. However,
part of the clustering in co-authorship networks is due to papers with three or more coauthors. Such papers introduce trivial triangles of collaborating authors, thereby increasing the clustering coefficient.
This problem, which was raised by Newman et al. \cite{newman2}, was circumvented by studying the bipartite network directly in order to infer the authors' community structure. Newman et al. showed on some examples that these high-order interactions may account for one half of the clustering coefficient.
One should note, however, that while this approach offers a well-defined theoretical framework for bipartite networks, it suffers from a lack of transparency compared to the original projection method, i.e. it does not allow a clear visualisation of the unipartite structure.
In this article, we propose an alternative approach that is based on a more refined unipartite projection, and follows the usual expansion methods of Statistical Mechanics. To do so, we focus on a small dataset, retrieved from the arXiv database and composed of articles dedicated to complex network theory. This choice is motivated by their relatively small number of co-authors per article, a property typical of theoretical physics papers \cite{grossman}.
Our method consists in discriminating the different kinds of scientific collaborations, based upon the number of co-authors per article. This discrimination leads to a diagram representation \cite{feynman, mayer} of co-authorship (see also \cite{berg} for the applicability of Feynman diagrams in complex networks). The resulting N-body projection reconciles the visual features of the usual projection with the exact description of Newman's theoretical approach. Empirical results confirm the importance of high-order collaborations in the network structure. Therefore, we introduce in the last section a simple network model, which is based on random triangular connections between the nodes, and we numerically study percolation in this model.
\section{N-body projection method}
\begin{figure}
\includegraphics[angle=-90,width=3.50in]{dist.ps}
\caption{\label{dist} Histogram of the number of scientists/articles, $n$. The dashed line corresponds to the fit $e^{-\frac{n}{1.5}}$.}
\end{figure}
The data set contains all articles from arXiv in the time interval $[1995:2005]$ which contain the word {\em "network"} in their abstract and are classified as {\em "cond-mat"}. In order to discriminate the authors and avoid spurious data, we checked the names and the first names of the authors. Moreover, in order to avoid multiple ways for an author to cosign a paper, we also took into account the initial notation of the first names. For instance, {\em Marcel Ausloos} and {\em M. Ausloos} are the same person, while {\em Marcel Ausloos} and {\em Mike Ausloos} are considered to be different. Let us stress that this method may lead to ambiguities if an initial refers to two different first names, e.g. M. Ausloos might be Marcel or Mike Ausloos. Nonetheless, we have verified that this case occurs only once in the data set (Hawoong, Hyeong-Chai and H. Jeong), so that its effects are negligible. In that sole case, we attributed the papers of H. Jeong to the most prolific author (Hawoong Jeong in the dataset). Given this identification method, we find $n_P=2533$ persons and $n_A=1611$ articles. The distribution of the number of co-authors per article (Fig.\ref{dist}) clearly shows a rapid exponential decrease, associated with a clear predominance of small collaborations, as expected.
Formally, the bipartite structure authors-papers may be mapped exactly on the vector of matrices $\mathcal{M} $ defined by:
\begin{equation}
\label{one}
\mathcal{M} = [{\bf M}^{(1)} , {\bf M}^{(2)}, ... , {\bf M}^{(j)} ,...., {\bf M}^{(n_P)}]
\end{equation}
where ${\bf M}^{(j)}$ is a square $n_P^j$ matrix that accounts for all articles co-authored by $j$ scientists. By definition, the elements $M^{(j)}_{a_1 ... a_j}$ are equal to the number of collaborations between the $j$ authors $a_1... a_j$.
In the following, we assume that co-authorship is not a directed relation, thereby neglecting the position of the authors in the collaboration, e.g. whether or not the author is the first author.
This implies that the matrices are symmetric under permutations of indices. Moreover, as people can not collaborate with themselves, the diagonal elements $M^{(j)}_{a a ... a}$ vanish by construction. For example, $M^{(1)}_{a_1}$ and $M^{(2)}_{a_1 a_2}$ represent respectively the total number of papers written by $a_1$ alone, and the total number of papers written by the pair ($a_1$, $a_2$).
A way to visualize $\mathcal{M}$ consists in a network whose nodes are the scientists, and whose links are discriminated by their shape. The intrinsic co-authorship interactions form loops (order 1), lines (order 2), triangles (order 3) (see Fig.\ref{fff})...
To represent the intensity of the multiplet interaction, the width of the lines is taken to be proportional to the number of collaborations of this multiplet.
Altogether, these rules lead to a graphical representation of $\mathcal{M}$ that is much more refined than the usual projection method (Fig.\ref{example}).
\begin{figure}
\hspace{0.4cm}
\includegraphics[width=3.30in]{fff.ps}
\caption{\label{fff} Graphical representation of the 4 most basic author interactions, namely 1, 2, 3, 4 co-authorships. }
\end{figure}
\begin{figure}
\hspace{0.9cm}
\includegraphics[width=3.50in]{example.ps}
\caption{\label{example} Graphical representation of the co-authorship network. This small sub-network accounts for 1 two-author collaboration, {\em (Timme, Ashwin)}; 4 three-author collaborations, 3 times {\em (Timme, Wolf, Geisel)} and once {\em (Geisel, Hufnagel, Brockmann)}; 1 four-author collaboration, {\em (Timme, Wolf, Geisel, Zumdieck)}. Because the triplet {\em (Timme, Wolf, Geisel)} collaborates three times, its links are three times wider than the other links.}
\end{figure}
\begin{figure}
\hspace{2cm}
\includegraphics[width=3.50in]{basic.ps}
\hspace{1cm}
\includegraphics[width=3.50in]{projected.ps}
\caption{\label{basic} 3-body projection of the bipartite network. For the sake of clarity, we focus on a small sub-cluster, centered around the collaborations of M. Newman. The upper figure is the usual projection method \cite{newman2}. The lower figure is the triangular projection (\ref{three}) of the same bipartite network.}
\end{figure}
It is important to point out that the vector of matrices $\mathcal{M}$ describes the bipartite network without approximation, and that it is reminiscent of the Liouville distribution in the phase space of a Hamiltonian system. Accordingly, a relevant macroscopic description of the system relies on a coarse-grained reduction of its internal variables.
The simplest reduced matrix is the one-scientist matrix ${\bf R}^{(1)}$ that is obtained by summing over the N-body connections, $N\geq 2$:
\begin{eqnarray}
R^{(1)}_{a_1} = M^{(1)}_{a1} + \sum_{a_2} M^{(2)}_{a_1 a_2} + \sum_{a_2}\sum_{a_3<a_2} M^{(3)}_{a_1 a_2 a_3}+ .... \cr + \sum_{a_2}.... \sum_{a_j<a_{j-1}} M^{(j)}_{a_1 ... a_j}+...
\end{eqnarray}
It is straightforward to show that the elements $R^{(1)}_{a_j}$ denote the total number of articles written by the scientist $a_j$.
The second-order matrix reads:
\begin{eqnarray}
R^{(2)}_{a_1 a_2} = M^{(2)}_{a_1 a_2} + \sum_{a_3} M^{(3)}_{a_1 ... a_3}+.... \cr + \sum_{a_3}.... \sum_{a_j<a_{j-1}} M^{(j)}_{a_1 ... a_j}+...
\end{eqnarray}
Its elements represent the total number of articles written by the pair of scientists ($a_1$, $a_2$).
Remarkably, this matrix reproduces the usual projection method (Fig. \ref{projection}), and obviously simplifies the bipartite structure by hiding the effect of higher-order connections.
The three-scientist matrix reads similarly:
\begin{eqnarray}
\label{three}
R^{(3)}_{a_1 a_2 a_3} = M^{(3)}_{a_1 a_2 a_3} + \sum_{a_4} M^{(4)}_{a_1 ... a_4}+.... \cr + \sum_{a_4}.... \sum_{a_j<a_{j-1}} M^{(j)}_{a_1 ... a_j}+...
\end{eqnarray}
This new matrix counts the number of papers co-written by the triplet ($a_1$, $a_2$, $ a_3$), and may be represented by a network whose links are triangles relating three authors. The generalization to higher order matrices ${\bf R}^{(j)}$ is straightforward, but, as in the case of the BBGKY hierarchy, a truncation of the vector $\mathcal{M}$ must be fixed at some level in order to describe the system usefully and compactly.
It is therefore important to point out that the knowledge of ${\bf M}^{(2)}$ together with ${\bf R}^{(3)}$ is completely sufficient in order to characterize the triangular structure of $\mathcal{M}$. Consequently, in this paper, we stop the reduction procedure at the 3-body level, and define the triangular projection of $\mathcal{M}$ by the mapping:
\begin{eqnarray}
[M^{(1)}_{a1} , M^{(2)}_{a_1 a_2} , M^{(3)}_{a_1 a_2 a_3} ,...., M^{(n_P)}_{a_1... a_{n_P}}] \cr \rightarrow [M^{(1)}_{a1} , M^{(2)}_{a_1 a_2} , R^{(3)}_{a_1 a_2 a_3} ]
\end{eqnarray}
The triangular projection is depicted in Fig. \ref{basic}, and compared to the usual projection method.
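As an illustration, these definitions can be evaluated by hand on the small sub-network of Fig.\ref{example}. Reading the collaborations off the caption, one finds
\begin{eqnarray}
R^{(1)}_{\rm Timme} = 1 + 3 + 1 = 5, \cr
R^{(2)}_{\rm Timme \, Wolf} = 0 + 3 + 1 = 4, \cr
R^{(3)}_{\rm Timme \, Wolf \, Geisel} = 3 + 1 = 4,
\end{eqnarray}
where, for instance, the three terms in $R^{(1)}_{\rm Timme}$ count the paper written with {\em Ashwin}, the three papers of the triplet {\em (Timme, Wolf, Geisel)}, and the paper of the quadruplet {\em (Timme, Wolf, Geisel, Zumdieck)}.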
In order to test the relevance of this description, we have measured in the data set the total number of triangles formed by the links. We discriminate two kinds of triangles: those which arise from {\bf one} 3-body interaction of ${\bf R}^{(3)}$, and those which arise {\bf only} from an interplay of different interactions. There are respectively 5550 and 30 such triangles, i.e. $99.5 \%$ of the triangles are of the first kind. This observation by itself therefore justifies the detailed projection method introduced in this section, and shows the importance of the geometry of co-authorship links in the characterization of network structures, precisely the clustering coefficient in the present case.
\section{Triangular Erd\"os-Renyi networks}
\begin{figure}
\includegraphics[width=2.50in]{perco1.ps}
\includegraphics[width=2.50in]{perco2.ps}
\caption{\label{perco} Percolation transition in the $\mbox{ERN}^3$ model with 50 nodes, from a dilute phase with small disconnected islands (8 triangles) to a percolated phase with one giant cluster (20 triangles).}
\end{figure}
The empirical results of the previous section have shown the significance of N-body connections in social networks. A more complete framework for networks is therefore required in order to correctly describe the complexity of such systems.
In this article, we focus on the simplest generalization, namely a network whose links relate triplets of nodes. To do so, we base our modeling on the Erd\"os-Renyi uncorrelated random graph \cite{renyi}, i.e. the usual prototype to be compared with more complex
random graphs.
The usual Erd\"os-Renyi network (ERN) is composed by $N_n$ labeled nodes
connected by $N_e^{(2)}$ edges, which are chosen randomly from
the $N_n (N_n-1) /2$ possible edges. In this paper, we define the triangular ER network ($\mbox{ERN}^3$) to be
composed of $N_n$ labeled nodes,
connected by $N_e^{(3)}$ triangles, which are chosen randomly from
the $N_n (N_n-1) (N_n-2)/6$ possible triangles. As a result, connections in the system relate triplets of nodes $(a_1, a_2, a_3)$, and the matrix vector $\mathcal{M}$ reduces to the matrix ${\bf M}^{(3)}$.
Before going further, let us point that the clustering coefficient of triangular ER networks is very high by construction, but, contrary to intuition, it is different from 1 in general. For instance, for the two triplets $(a_1, a_2, a_3)$ and $(a_1, a_4, a_5)$, the local clustering coefficient of $a_1$ is equal to $\frac{1}{3}$.
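More generally, a one-line counting argument gives the local clustering in such configurations: if a node belongs to $k$ triangles having only that node in common, it has $2k$ neighbors and $k$ connected pairs among them, so that its local clustering coefficient is
\begin{equation}
C \,=\, \frac{k}{\binom{2k}{2}} \,=\, \frac{1}{2k-1},
\end{equation}
which equals $1$ for $k=1$, reproduces the value $\frac{1}{3}$ of the example above for $k=2$, and slowly decreases to zero for highly connected nodes.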
\begin{figure}
\includegraphics[angle=-90,width=3.50in]{transition.ps}
\caption{\label{transition} Proportion of nodes in the main island, as a function of the number of links/node, in the ERN and the $\mbox{ERN}^3$ model.}
\end{figure}
In this paper, we focus numerically on the percolation transition \cite{vicsek} in $\mbox{ERN}^3$, i.e. on the appearance of a giant component when the number of triangles in the system is increased (Fig.\ref{perco}). This transition is usually associated with dramatic changes in the topological structure, which are crucial to ensure communicability between network nodes, e.g. the spreading of scientific knowledge in the case under study. In the following, we work at a fixed number of nodes, and focus on the proportion of nodes in the main cluster as a function of the number of binary links in the system.
Moreover, in order to compare results with the usual ERN, we do not double-count redundant links, i.e. pairs of authors who interact in different triplets. For instance, the triplet $(a_1, a_2, a_3)$ accounts for 3 binary links, but $(a_1, a_2, a_3)$ and $(a_1, a_2, a_4)$ account together for 5 links, so that $N_e^{(2)} \neq 3 N_e^{(3)}$ in general. In any case, this detailed counting has only a small effect on the location of the percolation transition. Numerical results are depicted in figure \ref{transition}, where we consider networks with $N_n=1000$; the sampling procedure is sketched below. Obviously, the triangular structure of interactions displaces the bifurcation point by requiring more links in order to observe the percolation transition. This feature comes from the triangular structure of connections, which restricts the exploration of the network as compared to random structures. Indeed, 3 links relate only 3 nodes in $\mbox{ERN}^3$, while 3 links typically relate 4 nodes in ERN. Finally, let us stress that the same mechanism takes place in systems with high clustering coefficients \cite{clustering, preparation}.
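For completeness, the numerical procedure can be summarized by a short script of the following kind (a minimal sketch, not the actual code used for the figures; the function name and the union-find bookkeeping are ours, and rare repetitions of the same triangle are not excluded):
\begin{verbatim}
import random
from collections import Counter

def ern3_links_and_giant(n_nodes, n_triangles, seed=0):
    # Sample an ERN^3 graph and return (links per node,
    # fraction of nodes in the largest cluster).
    rng = random.Random(seed)
    parent = list(range(n_nodes))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = set()
    for _ in range(n_triangles):
        a, b, c = rng.sample(range(n_nodes), 3)  # random triangle
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))    # distinct links only
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    sizes = Counter(find(x) for x in range(n_nodes))
    return len(edges) / n_nodes, max(sizes.values()) / n_nodes

# scan the number of triangles at fixed N_n = 1000 nodes
for n_tri in range(50, 501, 50):
    print(ern3_links_and_giant(1000, n_tri))
\end{verbatim}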
\section{Conclusion}
In this paper, we show the importance of N-body interactions in co-authorship networks. By focusing on data sets extracted from the arXiv database, we introduce a way to project bipartite networks onto unipartite networks. This approach generalizes usual projection methods by accounting for the complex geometrical figures connecting authors. To do so, we present a simple theoretical framework, and define N-body reduced and projected networks. The graphical representation of these simplified networks rests on a "shape-based" discrimination of the different co-authorship interactions (for a "color-based" version, see the author's website \cite{website}), and allows a clear visualization of the different mechanisms occurring in the system. Finally, we apply the method to a subset of the arXiv data, thereby showing the importance of such "high-order corrections" in order to characterize the community structure of scientists.
The empirical results therefore motivate a closer study of networks with complex weighted geometrical links. In the last section, we focus on the simplest case by introducing a triangular random model, $\mbox{ERN}^3$.
Moreover, we restrict the scope by analyzing the effect of the 3-body connection on percolation. A complete study of the topological properties of $\mbox{ERN}^3$ as well as its generalization to higher-order connections is left for a forthcoming work.
{\bf Acknowledgements}
Figures \ref{fff}, \ref{example}, \ref{basic} and \ref{perco} were plotted thanks to the {\em visone} graphical tools.
This work
has been supported by European Commission Project
CREEN FP6-2003-NEST-Path-012864.
|
2,877,628,088,501 | arxiv | \section{Introduction}
Relative pose estimation from two views of a camera or a multi-camera system is regarded as a fundamental problem in computer vision~\cite{HartleyZisserman-472,scaramuzza2011visual,kazik2012real,schoenberger2016sfm,guan2018visual}, which plays an important role in simultaneous localization and mapping (SLAM), visual odometry (VO) and structure-from-motion (SfM). Thus, improving the accuracy, efficiency and robustness of relative pose estimation algorithms is always an important research topic~\cite{hee2014relative,ventura2015efficient,sweeney2015computing,Agarwal2017,barath2018five,Silveira_2019_CVPR}. Motivated by the fact that multi-camera systems are already available in self-driving cars, micro aerial vehicles or augmented reality headsets, this paper investigates the problem of estimating the relative pose of multi-camera systems from affine correspondences, see Fig.~\ref{fig:AffineTransformation}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\linewidth]{figure/AffineTransformation_CrossFeature.png}
\end{center}
\caption{An affine correspondence in camera $C_i$ between consecutive frames $k$ and $k+1$. The local affine transformation $\mathbf{A}$ relates the infinitesimal patches around point correspondence (${\mathbf{x}}_{ij}$, ${\mathbf{x}}'_{ij}$).}
\label{fig:AffineTransformation}
\end{figure}
Since a multi-camera system contains multiple individual cameras connected by being fixed to a single rigid body, it has the advantages of a large field-of-view and high accuracy. The main difference between a multi-camera system and a standard pinhole camera is the absence of a single projection center. A multi-camera system is modeled by the generalized camera model. The light rays that pass through a multi-camera system are expressed as Pl\"{u}cker lines and the epipolar constraint of the Pl\"{u}cker lines is described by the generalized essential matrix~\cite{pless2003using}.
Most of the state-of-the-art SLAM and SfM pipelines using a multi-camera system~\cite{hane20173d,heng2019project} follow the same procedure consisting of three major steps~\cite{scaramuzza2011visual}: first, a feature matching algorithm is applied to establish image point correspondences between two frames. Then a robust estimation framework, \emph{e.g.} the Random Sample Consensus (RANSAC)~\cite{fischler1981random}, is applied to find the pose parameters and remove outlier matches. Finally, the relative pose between the two frames is estimated using all RANSAC inliers. The reliability and robustness of such a scheme are heavily dependent on the outlier removal step. In addition, the outlier removal process has to be efficient, which directly affects the real-time performance of SLAM and SfM. The computational complexity and, thus, the processing time of the RANSAC procedure depend exponentially on the number of points required for the estimation. Therefore, exploring minimal solutions for the relative pose estimation of multi-camera systems is of significant importance and has received sustained attention~\cite{henrikstewenius2005solutions,li2008linear,hee2014relative,sweeney2014solving,ventura2015efficient,sweeney2015computing,kneip2016generalized,liu2017robust}.
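To make this dependence explicit, recall the standard estimate for the number of RANSAC iterations $N$ needed to draw, with confidence $p$, at least one all-inlier minimal sample of size $s$ when the inlier ratio is $w$:
\begin{equation*}
N \,\geq\, \frac{\log\left(1-p\right)}{\log\left(1-w^{s}\right)},
\end{equation*}
which grows rapidly as $s$ increases. This is why reducing the size of the minimal sample directly improves the runtime of robust estimation.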
The idea of deriving minimal solutions for the relative pose estimation of multi-camera systems dates back to the work of Stew{\'e}nius \emph{et al.} with the 6-point method~\cite{henrikstewenius2005solutions}. Other classical works have subsequently been proposed, such as the 17-point linear method~\cite{li2008linear} and techniques based on iterative optimization~\cite{kneip2014efficient}. Moreover, the minimal number of necessary points can be further reduced by taking additional motion constraints into account or using other sensors, like an inertial measurement unit (IMU). For example, two point correspondences are sufficient for the ego-motion estimation of a multi-camera system by exploiting the Ackermann motion model constraints of wheeled vehicles~\cite{hee2013motion}. For vehicles equipped with a multi-camera system and an IMU, the relative motion can be estimated from four point correspondences by exploiting the known vertical direction from the IMU measurements, \emph{i.e.}, roll and pitch angles~\cite{hee2014relative,liu2017robust}.
All of the previously mentioned relative pose solvers estimate the pose parameters from a set of point correspondences, \emph{e.g.}, coming from SIFT~\cite{Lowe2004Distinctive} or SURF~\cite{Bay2008346} detectors. However, as it has been clearly shown in several recently published papers~\cite{bentolila2014conic,raposo2016theory,barath2018efficient,eichhardt2018affine}, using more informative features, \emph{e.g.} affine correspondences, improves the estimation procedure both in terms of accuracy and efficiency. An affine correspondence is composed of a point correspondence and a 2$\times$2 affine transformation. Since they contain more information about the underlying surface geometry than point correspondences, affine correspondences make it possible to estimate the relative pose from fewer correspondences. In this paper, we focus on the relative pose estimation of a multi-camera system from affine correspondences, instead of point correspondences. Four novel solutions are proposed:
\begin{itemize}
\item A new minimal solver is proposed which requires two affine correspondences to estimate the general motion of a multi-camera system which has 6 degrees of freedom (6DOF). In contrast, state-of-the-art solvers use six point correspondences~\cite{henrikstewenius2005solutions,kneip2014efficient,ventura2015efficient}.
\item When the motion is planar (\emph{i.e.}, the body to which the cameras are fixed moves on a plane; 3DOF), a single affine correspondence is sufficient to recover the planar motion of a multi-camera system. In order to deal with the degenerate case of the 1AC solver, we also propose a new method to estimate the relative pose from two affine correspondences. The point-based solution requires two point pairs, but only for the Ackermann motion model~\cite{hee2013motion}.
\item A fourth solver is proposed for the case when the vertical direction is known (4DOF), \emph{e.g.}, from an IMU attached to the multi-camera system. We show that two affine correspondences are required to recover the relative pose. In contrast, the point-based solver requires four correspondences~\cite{hee2014relative,sweeney2014solving,liu2017robust}.
\end{itemize}
\section{\label{sec:relatedwork}Related Work}
There has been much interest in using multi-camera systems in both academic and industrial communities. The most common case is that a set of cameras, particularly with non-overlapping views, are mounted rigidly on self-driving vehicles, unmanned aerial vehicles (UAV) or AR headsets.
Due to the absence of a single center of projection, the camera model of multi-camera systems is different from the standard pinhole camera. Pless proposed to express the light rays as Pl\"{u}cker lines and derived the generalized camera model, which has become a standard representation for multi-camera systems~\cite{pless2003using}. Stew{\'e}nius~\emph{et al.} proposed the first minimal solution to estimate the relative pose of a multi-camera system from 6 point correspondences, which produces up to 64 solutions~\cite{henrikstewenius2005solutions}. Kim~\emph{et al.} later proposed several approaches for motion estimation using second-order cone programming~\cite{kim2007visual} or branch-and-bound techniques~\cite{kim2009motion}. Lim~\emph{et al.} presented the antipodal epipolar constraint and estimated the relative motion by using antipodal points~\cite{lim2010estimating}. Li~\emph{et al.} provided several linear solvers to compute the relative pose, among which the most commonly used one requires 17 point correspondences~\cite{li2008linear}. Kneip and Li proposed an iterative approach for the relative pose estimation based on eigenvalue minimization~\cite{kneip2014efficient}. Ventura~\emph{et al.} used a first-order approximation of the relative
rotation to simplify the problem and estimated the relative pose from 6 point correspondences~\cite{ventura2015efficient}.
By considering additional motion constraints or using additional information provided by an IMU, the number of required point correspondences can be further reduced. Lee~\emph{et al.} presented a minimal solution with two point correspondences for the ego-motion estimation of a multi-camera system, which constrains the relative motion by the Ackermann motion model~\cite{hee2013motion}. In addition, a variety of algorithms have been proposed when a common direction of the multi-camera system is known,~\emph{i.e.}, an IMU provides the roll and pitch angles of the multi-camera system. The relative pose estimation with known vertical direction requires a minimum of 4 point correspondences~\cite{hee2014relative,sweeney2014solving,liu2017robust}.
Exploiting the additional affine parameters besides the image coordinates has been recently proposed for the relative pose estimation of monocular cameras, which reduces the number of required points significantly. Bentolila and Francos estimated the fundamental matrix from three ACs~\cite{bentolila2014conic}. Raposo and Barreto computed the homography and the essential matrix using two ACs~\cite{raposo2016theory}. Barath and Hajder derived the constraints between the local affine transformation and the essential matrix and recovered the essential matrix from two ACs~\cite{barath2018efficient}. Eichhardt and Chetverikov~\cite{eichhardt2018affine} also estimated the relative pose from two ACs, which is applicable to arbitrary central-projection models. Hajder and Barath~\cite{hajder2019relative} and Guan~\emph{et al.}~\cite{Guan2020CVPR} proposed several minimal solutions for relative pose from a single AC under the planar motion assumption or with knowledge of a vertical direction. The above-mentioned works are only suitable for a monocular perspective camera, rather than for multiple perspective cameras rigidly fixed to a single body. In this paper, we focus on the minimal number of ACs needed to estimate the relative pose of a multi-camera system.
\section{\label{sec:6DOFmotion}Relative Pose Estimation under General Motion}
A multi-camera system is made up of individual cameras denoted by $C_i$, as shown in Fig.~\ref{fig:AffineTransformation}. Its extrinsic parameters expressed in a multi-camera reference frame are represented as $(\mathbf{R}_i,\mathbf{t}_i)$. For general motion, there is a 3DOF rotation and a 3DOF translation between two reference frames at time $k$ and $k+1$. Rotation $\mathbf{R}$ using Cayley parameterization and translation $\mathbf{t}$ can be written as:
\begin{equation}
\begin{aligned}
&\mathbf{R} = \frac{1}{1+q_x^2+q_y^2+q_z^2} \ . \\ &\begin{bmatrix}{1+q_x^2-q_y^2-q_z^2}&{2{q_x}{q_y}-2{q_z}}&{2{q_y}+2{q_x}{q_z}}\\
{2{q_x}{q_y}+2{q_z}}&{1-q_x^2+q_y^2-q_z^2}&{2{q_y}{q_z}-2{q_x}}\\
{2{q_x}{q_z}-2{q_y}}&{2{q_x}+2{q_y}{q_z}}&{1-q_x^2-q_y^2+q_z^2}
\end{bmatrix},\\
\end{aligned}
\label{eq:R6dof1}
\end{equation}
\begin{equation}
\mathbf{t} = \begin{bmatrix}
{t_x}& \
{t_y}& \
{t_z}
\end{bmatrix}^T,
\label{eq:T6dof1}
\end{equation}
where $[1,q_x,q_y,q_z]^T$ is a homogeneous quaternion vector. Note that rotations of 180 degrees cannot be represented by the Cayley parameterization, but such rotations rarely occur between consecutive frames.
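For reference, Eq.~\eqref{eq:R6dof1} can be transcribed directly; the following NumPy sketch (an illustration of ours, not part of any released implementation) builds the rotation and numerically verifies that it is a proper rotation:
\begin{verbatim}
import numpy as np

def cayley_rotation(qx, qy, qz):
    # Rotation matrix from Cayley parameters, cf. the equation above.
    s = 1.0 + qx**2 + qy**2 + qz**2
    return np.array([
        [1 + qx**2 - qy**2 - qz**2, 2*qx*qy - 2*qz, 2*qy + 2*qx*qz],
        [2*qx*qy + 2*qz, 1 - qx**2 + qy**2 - qz**2, 2*qy*qz - 2*qx],
        [2*qx*qz - 2*qy, 2*qx + 2*qy*qz, 1 - qx**2 - qy**2 + qz**2]]) / s

R = cayley_rotation(0.1, -0.2, 0.05)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
\end{verbatim}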
\subsection{Generalized camera model}
We give a brief description of the generalized camera model (GCM)~\cite{pless2003using}. Let us denote an affine correspondence in camera $C_i$ between consecutive frames $k$ and $k+1$ as $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, where ${\mathbf{x}}_{ij}$ and ${\mathbf{x}}'_{ij}$ are the normalized homogeneous image coordinates of feature point $j$, and indices $i$ and $j$ are the camera and point index, respectively. The local affine transformation $\mathbf{A}$ is a 2$\times$2 linear transformation which relates the infinitesimal patches around ${\mathbf{x}}_{ij}$ and ${\mathbf{x}}'_{ij}$~\cite{barath2018five}.
The normalized homogeneous image coordinates $({\mathbf{p}}_{ij}, {\mathbf{p}}'_{ij})$ expressed in the multi-camera reference frame are given as
\begin{equation}
{\mathbf{p}}_{ij} = {\mathbf{R}_i}{\mathbf{x}}_{ij},\qquad
{\mathbf{p}}'_{ij} = {\mathbf{R}_i}{\mathbf{x}}'_{ij}.
\label{eq:imagecoord6dof}
\end{equation}
The unit directions of the rays $({\mathbf{u}}_{ij}, {\mathbf{u}}'_{ij})$ expressed in the multi-camera reference frame are given as ${\mathbf{u}}_{ij} = {\mathbf{p}}_{ij}/{{\|}{{\mathbf{p}}_{ij}}{\|}}$ and ${\mathbf{u}}'_{ij} = {\mathbf{p}}'_{ij}/{{\|}{{\mathbf{p}}'_{ij}}{\|}}$. The 6-dimensional Pl\"{u}cker line vectors corresponding to the rays are denoted as ${\mathbf{l}}_{ij} = [{\mathbf{u}}_{ij}^T, \ ({\mathbf{t}}_i\times {\mathbf{u}}_{ij})^T]^T$ and ${\mathbf{l}}'_{ij} = [{{\mathbf{u}}'_{ij}}^T, \ ({\mathbf{t}}_i\times {\mathbf{u}}'_{ij})^T]^T$. The generalized epipolar constraint is written as~\cite{pless2003using}
\begin{equation}
{{\mathbf{l}}'^T_{ij}}
\begin{bmatrix} {{{\left[ {\mathbf{t}} \right]}_ \times }{\mathbf{R}}} & {\mathbf{R}} \\ {\mathbf{R}} & {\mathbf{0}} \end{bmatrix}
{{\mathbf{l}}_{ij}} = 0,
\label{GECS6dof}
\end{equation}
where ${{\mathbf{l}}_{ij}}$ and ${{\mathbf{l}}'_{ij}}$ are the Pl\"{u}cker lines observed in the two consecutive frames at times $k$ and $k+1$, respectively.
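The construction of the Pl\"{u}cker lines and the evaluation of the generalized epipolar constraint are straightforward; the following NumPy sketch (helper names are ours, purely illustrative) follows the definitions above:
\begin{verbatim}
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def pluecker_line(R_i, t_i, x):
    # Pluecker coordinates of the ray through bearing x of camera C_i.
    u = R_i @ x
    u = u / np.linalg.norm(u)
    return np.hstack([u, np.cross(t_i, u)])

def gec_residual(l, l_prime, R, t):
    # Left-hand side of the generalized epipolar constraint above;
    # it vanishes for a noise-free correspondence and the true (R, t).
    E_gen = np.block([[skew(t) @ R, R],
                      [R, np.zeros((3, 3))]])
    return l_prime @ E_gen @ l
\end{verbatim}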
\subsection{Affine transformation constraint}
We denote the transition matrix of camera coordinate system $C_i$ between consecutive frames $k$ and $k+1$ as $(\mathbf{R}_{Ci},\mathbf{t}_{Ci})$, which is represented as:
{ \begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&{\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}^{-1}\begin{bmatrix}{\mathbf{R}}&{\mathbf{t}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} \\
& \qquad \ \ \ =\begin{bmatrix}{{\mathbf{R}_{i}^T}{\mathbf{R}}{\mathbf{R}_{i}}}& \ {{\mathbf{R}_{i}^T}{\mathbf{R}}{\mathbf{t}_{i}}+{\mathbf{R}_{i}^T}{\mathbf{t}}-{\mathbf{R}_{i}^T}{\mathbf{t}_{i}}}\\
{{\mathbf{0}}}& \ {1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:transformationmatrix6dof}
\end{equation}}\\
The essential matrix $\mathbf{E}$ between two frames of camera $C_i$ is given as:
\begin{equation}
\begin{aligned}
\mathbf{E} = [\mathbf{t}_{Ci}]_{\times}\mathbf{R}_{Ci}
= {\mathbf{R}_{i}^T}[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}}{\mathbf{R}_{i}}},
\end{aligned}
\label{eq:E6dof}
\end{equation}
where $\left[{\mathbf{R}_{i}}\mathbf{t}_{Ci}\right]_{\times}={\mathbf{R}}[{\mathbf{t}_{i}}]_{\times}{{\mathbf{R}}^T} + [{\mathbf{t}}]_{\times} - [{\mathbf{t}_{i}}]_{\times}$. The relationship of essential matrix $\mathbf{E}$ and local affine transformation $\mathbf{A}$ is formulated as follows~\cite{barath2018efficient}:
\begin{equation}
(\mathbf{E}^{T}{\mathbf{x}}'_{ij})_{(1:2)} = -(\hat{\mathbf{A}}^{T}\mathbf{E}{\mathbf{x}}_{ij})_{(1:2)},
\label{eq:E6dof_Ac1}
\end{equation}
where $\mathbf{n}_{ij}\triangleq{\mathbf{E}^{T}{\mathbf{x}}'_{ij}}$ and $\mathbf{n}'_{ij}\triangleq{\mathbf{E}{\mathbf{x}}_{ij}}$ denote the epipolar lines in their implicit form in the frames of camera $C_i$ at times $k$ and $k+1$. The subscript $(1:2)$ denotes the first two components of a vector, i.e., the first and second equations of the system. $\hat{\mathbf{A}}$ is the $3\times3$ matrix $\hat{\mathbf{A}} = [\mathbf{A} \ \mathbf{0}; \mathbf{0} \ 0]$. By substituting Eq.~\eqref{eq:E6dof} into Eq.~\eqref{eq:E6dof_Ac1}, we obtain:
\begin{eqnarray}
\begin{aligned}
({\mathbf{R}_{i}^T}{\mathbf{R}^T}&{[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}^T}{\mathbf{R}_{i}}{\mathbf{x}}'_{ij})_{(1:2)} \\
&= -(\hat{\mathbf{A}}^{T}{\mathbf{R}_{i}^T}[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}}{\mathbf{R}_{i}}}{\mathbf{x}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:E6dof_Ac2}
\end{eqnarray}
Based on Eq.~\eqref{eq:imagecoord6dof}, the above equation is reformulated and expanded as follows:
\begin{equation}
\begin{aligned}
({\mathbf{R}_{i}^T}&([{\mathbf{t}_{i}}]_{\times}{\mathbf{R}}^T + {\mathbf{R}^T}[{\mathbf{t}}]_{\times} - {\mathbf{R}^T}[{\mathbf{t}_{i}}]_{\times}){\mathbf{p}}'_{ij})_{(1:2)} = \\
&(\hat{\mathbf{A}}^{T}{\mathbf{R}_{i}^T}({\mathbf{R}}[{\mathbf{t}_{i}}]_{\times} + [{\mathbf{t}}]_{\times}{\mathbf{R}} - [{\mathbf{t}_{i}}]_{\times}{\mathbf{R}}){\mathbf{p}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:E6dof_Ac6}
\end{equation}
Equation~\eqref{eq:E6dof_Ac6} expresses the constraints that a local affine transformation imposes on the relative pose, as observed by the $i$-th camera of a multi-camera system between two consecutive frames $k$ and $k+1$.
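For reference, the two affine constraints of Eq.~\eqref{eq:E6dof_Ac6} can be evaluated directly from the quantities defined above; the following NumPy sketch (an illustration of ours) returns the 2-vector of residuals that vanishes for noise-free data and the true pose:
\begin{verbatim}
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def affine_residuals(R_i, t_i, R, t, x, x_prime, A):
    # Difference of the first two components of both sides of the
    # affine constraint above (x, x_prime: normalized homogeneous
    # image points; A: 2x2 local affine transformation).
    A_hat = np.zeros((3, 3))
    A_hat[:2, :2] = A
    p, p_prime = R_i @ x, R_i @ x_prime
    lhs = R_i.T @ (skew(t_i) @ R.T + R.T @ skew(t)
                   - R.T @ skew(t_i)) @ p_prime
    rhs = A_hat.T @ R_i.T @ (R @ skew(t_i) + skew(t) @ R
                             - skew(t_i) @ R) @ p
    return lhs[:2] - rhs[:2]
\end{verbatim}
Together with the generalized epipolar constraint, these are the three constraints provided by one affine correspondence.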
\subsection{Solution using Gr\"{o}bner basis method}
For an affine correspondence $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6} yield three polynomials in the six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$. Thus, two affine correspondences are enough to recover the relative pose of a multi-camera system under 6DOF general motion. The hidden variable resultant method~\cite{cox2013ideals} is used to solve for the unknowns, see the supplementary material for details. The obtained solver is, however, too large and therefore slow and numerically unstable; consequently, no further experiments or comparisons with it are presented in the paper.
We furthermore investigate two special cases of multi-camera motion, \emph{i.e.},
planar motion and motion with known vertical direction, see Fig.~\ref{fig:Specialcases}. We will show that both special cases can be solved efficiently with affine correspondences.
\section{\label{sec:planarmotion}Relative Pose Estimation Under Planar Motion}
\begin{figure}[ht]
\begin{center}
\subfigure[Planar motion]
{
\includegraphics[width=0.31\linewidth]{figure/PlanarMotion.png}
}
\hspace{0.2in}
\subfigure[Motion with known vertical direction]
{
\includegraphics[width=0.55\linewidth]{figure/KnownVerticalDirection.png}
}
\end{center}
\caption{Special cases of multi-camera motion: (a) Planar motion between two multi-camera reference frames in top-view. There are three unknowns: yaw angle $\theta$, translation direction $\phi$ and translation distance $\rho$. (b) Motion with known vertical direction. There are four unknowns: a Y-axis rotation $\mathbf{R}_{y}$ and 3D translation $\tilde{\mathbf{t}} =[{\tilde{t}_x}, {\tilde{t}_y}, {\tilde{t}_z}]^T$.}
\label{fig:Specialcases}
\end{figure}
When assuming that the body, to which the camera system is rigidly fixed, moves on a planar surface (as visualized in Fig.~\ref{fig:Specialcases}(a)), there are only a Y-axis rotation and a 2D translation between the reference frames $k$ and $k+1$. Similar to Eqs.~\eqref{eq:R6dof1} and~\eqref{eq:T6dof1}, the rotation $\mathbf{R}=\mathbf{R}_{y}$ and the translation $\mathbf{t}$ from frame $k$ to $k+1$ are written as:
\begin{equation}
\begin{aligned}
\mathbf{R}_{y} & = \frac{1}{1+{q_y^2}}\begin{bmatrix}{1-{q_y^2}}&0&{-2{q_y}}\\
0&1+{q_y^2}&0\\
{2{q_y}}&0&{1-{q_y^2}}
\end{bmatrix}, \\
\mathbf{t} & = \begin{bmatrix}
{t_x}& \
{0}& \
{t_z}
\end{bmatrix}^T,
\end{aligned}
\label{eq:Ryt1}
\end{equation}
where ${q_y}=\tan(\frac{\theta}{2})$, $t_x={\rho\sin{(\phi)}}$, $t_z={-\rho\cos{(\phi)}}$, and $\rho$ is the distance between the two multi-camera reference frames.
\subsection{Solution by reduction to a single polynomial}
By substituting Eq.~\eqref{eq:Ryt1} into Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6}, we get an equation system of three polynomials in the 3 unknowns $q_y$, $t_x$ and $t_z$.
Since an AC generally provides 3 independent constraints on the relative pose, a single affine correspondence is sufficient to recover the planar motion of a multi-camera system. The three independent constraints of one affine correspondence are stacked into 3 equations in 3 unknowns:
\begin{equation}
\frac{1}{1+{q_y^2}}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}\\
{M_{21}}& {M_{22}}& {M_{23}}\\
{M_{31}}& {M_{32}}& {M_{33}}
\end{bmatrix}}_{{\mathbf{M}}\left( {{q_y}} \right)}
\begin{bmatrix}
{{{t}_x}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_q1}
\end{equation}
where the elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,3)$ of the coefficient matrix ${\mathbf{M}(q_y)}$ are formed by the polynomial coefficients and the unknown variable $q_y$, see the supplementary material for details. Since ${\mathbf{M}(q_y)}/(1+{q_y^2})$ is a square matrix, Eq.~\eqref{eq:euq_q1} has a non-trivial solution only if the determinant of ${\mathbf{M}(q_y)/(1+{q_y^2})}$ is zero. The expansion of $\det({\mathbf{M}(q_y)}/(1+{q_y^2}))=0$ gives a 4-degree univariate polynomial:
\begin{eqnarray}
\begin{aligned}
\quot(\textstyle \sum_{i=0}^6 w_i q_y^i, {q_y^2}+1) = 0,
\end{aligned}
\label{eq:euq_q2}
\end{eqnarray}
where $\quot(a, b)$ denotes the quotient of $a$ divided by $b$, and $w_{0},\ldots,w_{6}$ are formed by a Pl\"{u}cker line correspondence and an affine transformation between the corresponding feature points. This univariate polynomial leads to an explicit analytic solution with a maximum of 4 real roots. Once the solutions for $q_y$ are found, the remaining unknowns $t_x$ and $t_z$ are obtained by substituting $q_y$ into ${\mathbf{M}(q_y)}$ and computing the null vector of the resulting linear system. Finally, the rotation matrix $\mathbf{R}_{y}$ is recovered from Eq.~\eqref{eq:Ryt1}.
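The whole solution chain, i.e., expanding $\det({\mathbf{M}(q_y)})$, removing the $(1+q_y^2)$ factor as in Eq.~\eqref{eq:euq_q2}, rooting the quartic and back-substituting into Eq.~\eqref{eq:euq_q1}, can be sketched in a few lines of NumPy. In the listing below (an illustration of ours, not our optimized C++ solver), the degree-2 coefficient arrays of the entries $M_{ij}$ are assumed to be given, cf. the supplementary material:
\begin{verbatim}
import numpy as np

def solve_planar(M_poly):
    # M_poly[i][j]: degree-2 coefficient array (highest power first)
    # of the entry M_ij(q_y).  Returns candidate (q_y, t_x, t_z).
    def det3(M):  # cyclic Leibniz expansion via polynomial arithmetic
        d = np.zeros(1)
        for j in range(3):
            minor = np.polysub(
                np.polymul(M[1][(j + 1) % 3], M[2][(j + 2) % 3]),
                np.polymul(M[1][(j + 2) % 3], M[2][(j + 1) % 3]))
            d = np.polyadd(d, np.polymul(M[0][j], minor))
        return d
    quartic, _ = np.polydiv(det3(M_poly), [1.0, 0.0, 1.0])  # / (q_y^2+1)
    sols = []
    for r in np.roots(quartic):
        if abs(r.imag) < 1e-9:
            Mq = np.array([[np.polyval(M_poly[i][j], r.real)
                            for j in range(3)] for i in range(3)])
            n = np.linalg.svd(Mq)[2][-1]      # null vector of M(q_y)
            sols.append((r.real, n[0] / n[2], n[1] / n[2]))
    return sols
\end{verbatim}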
However, we prove that the solver relying on a single AC has a degenerate case, \emph{i.e.}, when the distances between the motion plane and the optical centers of the individual cameras are equal, see the supplementary material for details. This degenerate case often occurs in the self-driving scenario. To overcome this issue, two affine correspondences are used to estimate the relative pose: for example, the first and second constraints of the first affine correspondence and the first constraint of the second affine correspondence are stacked into 3 equations in 3 unknowns, just as in Eq.~\eqref{eq:euq_q1}. The solution procedure remains the same, except that the code constructing the coefficient matrix ${\mathbf{M}(q_y)}$ is replaced.
An interesting fact in this case is that only three of the six equations provided by two affine correspondences are used. Although two affine correspondences must be sampled for this solver in the RANSAC loop, this makes a consistency check possible on the sample itself: for an outlier-free planar motion hypothesis, the three remaining equations of the two affine correspondences also have to be fulfilled, and solutions which do not fulfill them are preemptively rejected. This gives a significant computational advantage over regular 2-point methods, such as the solver based on the Ackermann motion assumption~\cite{hee2013motion}, because inconsistent samples can be detected directly without testing against all the other affine correspondences.
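The test itself is a simple thresholded residual check; a minimal sketch (the residuals are the three unused equations evaluated at the candidate pose; the tolerance is illustrative, not a tuned value):
\begin{verbatim}
def preemptive_check(residuals, tol=1e-6):
    # Accept a hypothesis only if the three equations left out of the
    # 3x3 system are also (near-)satisfied by the candidate pose.
    return all(abs(r) < tol for r in residuals)
\end{verbatim}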
\section{\label{sec:knownverticaldirection}Relative Pose Estimation with Known Vertical Direction}
In this section, a minimal solution using two affine correspondences is proposed for the relative motion estimation of a multi-camera system with known vertical direction, see Fig.~\ref{fig:Specialcases}(b). In this case, an IMU is coupled with the multi-camera system and the relative rotation between the IMU and the reference frame is known, so the IMU provides the roll and pitch angles of the reference frame. The reference frame can thus be aligned with the measured gravity direction, such that the X-Z-plane of the aligned reference frame is parallel to the ground plane and the Y-axis is parallel to the gravity direction. The rotation $\mathbf{R}_{\text{imu}}$ aligning the reference frame with the gravity direction is written as:
\begin{equation}
\begin{aligned}
&\mathbf{R}_{\text{imu}} = \mathbf{R}_{p}\mathbf{R}_{r} \\
&= \begin{bmatrix}1&0&0\\
0&\cos(\theta_p)&{\sin(\theta_p)}\\
0&{-\sin(\theta_p)}&{\cos(\theta_p)}
\end{bmatrix}\begin{bmatrix}
{\cos(\theta_r)}&{\sin(\theta_r)}&0\\
{ -\sin(\theta_r)}&{\cos(\theta_r)}&0\\
0&0&1
\end{bmatrix}, \nonumber
\end{aligned}
\label{eq:RxRz}
\end{equation}
where $\theta_r$ and $\theta_p$ are the roll and pitch angles provided by the coupled IMU, respectively. Thus, only a Y-axis rotation $\mathbf{R}=\mathbf{R}_{y}$ and a 3D translation $\tilde{\mathbf{t}}= {{{\mathbf{R}}}'_{\text{imu}}}{\mathbf{t}} =[{\tilde{t}_x}, {\tilde{t}_y}, {\tilde{t}_z}]^T$ have to be estimated between the aligned multi-camera reference frames at times $k$ and $k+1$.
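A NumPy sketch of this alignment rotation (angles in radians; the function name is ours) is:
\begin{verbatim}
import numpy as np

def align_rotation(roll, pitch):
    # R_imu = R_p R_r, following the convention of the rotation above.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_p = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])
    R_r = np.array([[cr, sr, 0], [-sr, cr, 0], [0, 0, 1]])
    return R_p @ R_r
\end{verbatim}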
\subsection{Generalized camera model}
Let us denote the rotation matrices built from the roll and pitch angles of the two corresponding multi-camera reference frames
at times $k$ and $k+1$ as $\mathbf{R}_{\text{imu}}$ and $\mathbf{R}'_{\text{imu}}$. The relative rotation between the two multi-camera reference frames can now be given as:
\begin{equation}
{\mathbf{R}} = {(\mathbf{R}'_{\text{imu}})^T}{\mathbf{R}_{y}}{\mathbf{R}_{\text{imu}}}.
\label{eq:Rv}
\end{equation}
Substituting Eq.~\eqref{eq:Rv} into Eq.~\eqref{GECS6dof} yields:
{{\begin{equation}
\begin{aligned}
{\underbrace {\left(\begin{bmatrix}
{{\mathbf{R}}'_{\text{imu}}}& {\mathbf{0}}\\
{{\mathbf{0}}}& {{\mathbf{R}}'_{\text{imu}}}\\
\end{bmatrix}{\mathbf{l}'_{ij}} \right)^T}_{\tilde{\mathbf{l}}'_{ij}}}
&\begin{bmatrix}{{{\left[ {{\tilde{\mathbf{t}}}} \right]}_ \times } {{\mathbf{R}}_y}}&{{{\mathbf{R}}_y}}\\
{{{\mathbf{R}}_y}}&{\mathbf{0}}
\end{bmatrix}.\\
&{\underbrace {\left(\begin{bmatrix}
{{\mathbf{R}}_{\text{imu}}}& {\mathbf{0}}\\
{{\mathbf{0}}}& {{\mathbf{R}}_{\text{imu}}}\\
\end{bmatrix}{\mathbf{l}_{ij}} \right)}_{\tilde{\mathbf{l}}_{ij}}}= 0,
\end{aligned}
\label{eq:GECSIMU}
\end{equation}}}\\
where ${\tilde{\mathbf{l}}_{ij}} \leftrightarrow {\tilde{\mathbf{l}}'_{ij}}$ are the corresponding Pl\"{u}cker lines expressed in the aligned multi-camera reference frame.
\subsection{Affine transformation constraint}
In this case, the transition matrix of the camera coordinate system $C_i$ between consecutive frames $k$ and $k+1$ is represented as
{\begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&\ {\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&\ {1}\\
\end{bmatrix}
= \left(\begin{bmatrix}{\mathbf{R}'_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\right)^{-1}.\\
& \qquad \qquad \quad \ \ \begin{bmatrix}{\mathbf{R}_{y}}&{\tilde{\mathbf{t}}}\\
{{\mathbf{0}}}&{1}
\end{bmatrix}
\left(\begin{bmatrix}{\mathbf{R}_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\right),
\end{aligned}
\label{eq:transformationmatrix_Ev}
\end{equation}}\\
where we denote
{\begin{eqnarray}
\begin{aligned}
&\begin{bmatrix}
{\tilde{\mathbf{R}}_{\text{imu}}}&{\tilde{\mathbf{t}}_{\text{imu}}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix},\\
&\begin{bmatrix}
{\tilde{\mathbf{R}}'_{\text{imu}}}&{\tilde{\mathbf{t}}'_{\text{imu}}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}'_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:R_imuNew}
\end{eqnarray}}\\
By substituting Eq.~\eqref{eq:R_imuNew} into Eq.~\eqref{eq:transformationmatrix_Ev}, we obtain
{\begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&{\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\\
&=\begin{bmatrix}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}}& {{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}({\mathbf{R}_{y}}{\tilde{\mathbf{t}}_{\text{imu}}}+{\tilde{\mathbf{t}}}-{\tilde{\mathbf{t}}'_{\text{imu}}})}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:transformationmatrix_Ev2}
\end{equation}}\\
The essential matrix $\mathbf{E}$ between two frames of camera $C_i$ is given as
\begin{equation}
\begin{aligned}
\mathbf{E} = [\mathbf{t}_{Ci}]_{\times}\mathbf{R}_{Ci} = {({\tilde{\mathbf{R}}'_{\text{imu}}})^T}[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}},
\end{aligned}
\label{eq:Ev}
\end{equation}
where $[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}={{\mathbf{R}_{y}}[\tilde{\mathbf{t}}_{\text{imu}}]_{\times}{\mathbf{R}_{y}^T}} + [\tilde{\mathbf{t}}]_{\times} - [\tilde{\mathbf{t}}'_{\text{imu}}]_{\times}$. By substituting Eq.~\eqref{eq:Ev} into Eq.~\eqref{eq:E6dof_Ac1}, we obtain
{\begin{eqnarray}
\begin{aligned}
({\tilde{\mathbf{R}}_{\text{imu}}^T}&{\mathbf{R}_{y}^T}{[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}^T}{{\tilde{\mathbf{R}}'_{\text{imu}}}}{\mathbf{x}}'_{ij})_{(1:2)} = \\ &-(\hat{\mathbf{A}}^{T}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}}{\mathbf{x}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:Ev_Ac}
\end{eqnarray}}
We denote the normalized homogeneous image coordinates expressed in the aligned multi-camera reference frame as $(\tilde{{\mathbf{p}}}_{ij}, {\tilde{\mathbf{p}}}'_{ij})$, which are given as
\begin{equation}
\tilde{{\mathbf{p}}}_{ij} = {\tilde{\mathbf{R}}_{\text{imu}}}{\mathbf{x}}_{ij},\qquad
\tilde{{\mathbf{p}}}'_{ij} = {{\tilde{\mathbf{R}}'_{\text{imu}}}}{\mathbf{x}}'_{ij}.
\label{eq:Ev_alignedimage}
\end{equation}
Based on the above equation, Eq.~\eqref{eq:Ev_Ac} is rewritten and expanded as follows:
\begin{equation}
\begin{aligned}
&({\tilde{\mathbf{R}}_{\text{imu}}^T}([{\tilde{\mathbf{t}}_{\text{imu}}}]_{\times}{\mathbf{R}_{y}^T} + {\mathbf{R}_{y}^T}[{\tilde{\mathbf{t}}}]_{\times} - {\mathbf{R}_{y}^T}[{\tilde{\mathbf{t}}'_{\text{imu}}}]_{\times}){\tilde{{\mathbf{p}}}'_{ij}})_{(1:2)} = \\
&(\hat{\mathbf{A}}^{T}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}({\mathbf{R}_{y}}[{\tilde{\mathbf{t}}_{\text{imu}}}]_{\times} + [{\tilde{\mathbf{t}}}]_{\times}{\mathbf{R}_{y}} - [{\tilde{\mathbf{t}}'_{\text{imu}}}]_{\times}{\mathbf{R}_{y}}){{\tilde{{\mathbf{p}}}}_{ij}})_{(1:2)}.
\end{aligned}
\label{eq:Ev_Ac2}
\end{equation}
\subsection{Solution by reduction to a single polynomial}
Based on Eqs.~\eqref{eq:GECSIMU} and \eqref{eq:Ev_Ac2}, we get an equation system of three polynomials in the 4 unknowns $q_y$, $\tilde{t}_x$, $\tilde{t}_y$ and $\tilde{t}_z$. Recall that one AC provides three independent constraints. Thus, one more equation is required, which can be taken from a second affine correspondence. In principle, an arbitrary equation can be chosen from Eqs.~\eqref{eq:GECSIMU} and \eqref{eq:Ev_Ac2}; for example, the three constraints of the first affine correspondence and the first constraint of the second affine correspondence are stacked into 4 equations in 4 unknowns:
\begin{equation}
\frac{1}{1+{q_y^2}}\underbrace{\begin{bmatrix}
{\tilde{M}_{11}}&{\tilde{M}_{12}}&{\tilde{M}_{13}}&{\tilde{M}_{14}}\\
{\tilde{M}_{21}}&{\tilde{M}_{22}}&{\tilde{M}_{23}}&{\tilde{M}_{24}}\\
{\tilde{M}_{31}}&{\tilde{M}_{32}}&{\tilde{M}_{33}}&{\tilde{M}_{34}}\\
{\tilde{M}_{41}}&{\tilde{M}_{42}}&{\tilde{M}_{43}}&{\tilde{M}_{44}}
\end{bmatrix} }_{\tilde{\mathbf{M}}\left( {q_y} \right)}
\begin{bmatrix}
{{\tilde{t}_x}}\\
{{\tilde{t}_y}}\\
{{\tilde{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_Ev1}
\end{equation}
where the elements $\tilde{M}_{ij}$ $(i=1,\ldots,4; j=1,\ldots,4)$ of the coefficient matrix $\tilde{\mathbf{M}}({q_y})$ are formed by the polynomial coefficients and the unknown variable $q_y$, see the supplementary material for details. Since $\tilde{\mathbf{M}}({q_y})/(1+{q_y^2})$ is a square matrix, Eq.~\eqref{eq:euq_Ev1} has a non-trivial solution only if the determinant of $\tilde{\mathbf{M}}({q_y})/(1+{q_y^2})$ is zero. The expansion of $\det({\tilde{\mathbf{M}}({q_y})}/(1+{q_y^2}))=0$ gives a 6-degree univariate polynomial:
{\begin{equation}
\begin{aligned}
\quot(\textstyle \sum_{i=0}^8 \tilde{w}_i q_y^i, {q_y^2}+1) = 0,
\end{aligned}
\label{eq:euq_Evq}
\end{equation}}\\
where $\tilde{w}_{0},\ldots,\tilde{w}_{8}$ are formed by two Pl\"{u}cker line correspondences and two affine transformations between the corresponding feature points.
This univariate polynomial has a maximum of 6 real roots and can be solved efficiently by the companion matrix method~\cite{cox2013ideals} or by Sturm bracketing~\cite{nister2004efficient}. Once $q_y$ has been obtained, the rotation matrix $\mathbf{R}_{y}$ is recovered from Eq.~\eqref{eq:Ryt1}.
For the relative pose between the two multi-camera reference frames at times $k$ and $k+1$, the rotation matrix $\mathbf{R}$ is recovered from Eq.~\eqref{eq:Rv} and the translation is computed as $\mathbf{t} = ({{{\mathbf{R}}}'_{\text{imu}}})^T \mathbf{\tilde{t}}$. Note that the two remaining equations of the second affine correspondence can also be used in preemptive hypothesis tests, which detect and reject inconsistent samples directly.
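For illustration, the companion-matrix step for the degree-6 quotient polynomial of Eq.~\eqref{eq:euq_Evq} can be sketched as follows (ascending coefficient order as input; a sketch of ours, not the optimized implementation):
\begin{verbatim}
import numpy as np

def real_roots_companion(w):
    # Real roots of sum_i w[i] * x**i via the eigenvalues of the
    # companion matrix; w is the sextic coefficient list, w[-1] != 0.
    w = np.asarray(w, dtype=float) / w[-1]   # make the polynomial monic
    n = len(w) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)               # sub-diagonal of ones
    C[:, -1] = -w[:-1]                       # last column: -a_0..-a_{n-1}
    r = np.linalg.eigvals(C)
    return r[np.abs(r.imag) < 1e-9].real
\end{verbatim}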
\section{\label{sec:experiments}Experiments}
In this section, we conduct extensive experiments on both synthetic and real-world data to evaluate the performance of the proposed methods. Our solvers are compared with state-of-the-art methods.
For relative pose estimation under planar motion, the solvers using 1 AC and 2 ACs proposed in Section~\ref{sec:planarmotion} are referred to as the \texttt{1AC~plane} and \texttt{2AC~plane} methods, respectively. Their accuracy is compared with that of \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient} and \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, which are provided in
the OpenGV library~\cite{kneip2014opengv}. Since the Ackermann motion model is restrictive in practice and usually requires a post-relaxation~\cite{hee2013motion,liu2017robust}, methods using the Ackermann motion model are not compared in this paper.
For relative pose estimation with known vertical direction, the solver proposed in Section~\ref{sec:knownverticaldirection} is referred to as the \texttt{2AC method}. We compare its accuracy with that of \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.
The proposed methods \texttt{1AC~plane}, \texttt{2AC~plane} and \texttt{2AC method} take about 3.6, 3.6 and 17.8~$\mu s$, respectively, in C++. Due to space limitations, the efficiency comparison and stability study are provided in the supplementary material. In the experiments, all the solvers are implemented within RANSAC to reject outliers, and the relative pose which produces the highest number of inliers is chosen. The confidence of RANSAC is set to 0.99 and the inlier threshold angle is set to $0.1^\circ$, following the definition in OpenGV~\cite{kneip2014opengv}. We also show the feasibility of our methods on the \texttt{KITTI} dataset~\cite{geiger2013vision}; this experiment demonstrates that our methods are well suited for visual odometry in road driving scenarios.
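The RANSAC loop used with all solvers follows the standard hypothesize-and-verify scheme with an adaptive number of iterations; a simplified sketch (the minimal solver and the angular residual are passed as callbacks, and details such as candidate filtering are omitted):
\begin{verbatim}
import numpy as np

def ransac_pose(acs, solve_minimal, residual, thr_deg=0.1,
                conf=0.99, max_iters=10000, k=2):
    rng = np.random.default_rng(0)
    best, best_inl, n_iters, it = None, [], max_iters, 0
    thr = np.deg2rad(thr_deg)
    while it < n_iters:
        sample = [acs[i] for i in
                  rng.choice(len(acs), size=k, replace=False)]
        for pose in solve_minimal(sample):   # may yield several candidates
            inl = [i for i, ac in enumerate(acs)
                   if residual(pose, ac) < thr]
            if len(inl) > len(best_inl):
                best, best_inl = pose, inl
                w = min((len(inl) / len(acs)) ** k, 1.0 - 1e-12)
                n_iters = min(max_iters,
                              int(np.log(1.0 - conf)
                                  / np.log(1.0 - w)) + 1)
        it += 1
    return best, best_inl
\end{verbatim}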
\subsection{Experiments on synthetic data}
We simulate a 2-camera rig by following the KITTI autonomous driving platform. The baseline length between the two simulated cameras is set to 1 meter and the cameras are installed at different heights. The multi-camera reference frame is placed at the middle of the camera rig and the translation between the two multi-camera reference frames is 3 meters. The resolution of the cameras is 640 $\times$ 480 pixels, the focal lengths are 400 pixels, and the principal points are set to the image center (320, 240).
The synthetic scene is composed of a ground plane and 50 random planes. All 3D planes are randomly generated within the range of $-5$ to 5 meters (X-axis direction), $-5$ to 5 meters (Y-axis direction), and 10 to 20 meters (Z-axis direction), expressed in the respective axes of the multi-camera reference frame. We randomly choose 50 ACs from the ground plane and one AC from each random plane; thus, 100 ACs are generated randomly in the synthetic data. For each AC, a random 3D point from a plane is reprojected onto the two cameras to obtain the image point pair. The corresponding affine transformation is obtained by the following procedure: first, the implicit homography of each plane is calculated from four additional random, non-collinear 3D points on the same plane, projecting them to the cameras and adding Gaussian noise to their image coordinates with the same standard deviation as used for the image point pair; the homography is then estimated from these noisy projections. The affine parameters are obtained as the first-order approximation of the noisy homography matrix at the image point pair. The 3D points initializing both the image point pair and the homography are selected randomly, considering both the image size and the range of the synthetic scene. Note that the homography could be calculated directly from the plane normal and distance; however, using four projected additional random 3D points enables an indirect but geometrically interpretable way of adding noise to the affine transformation~\cite{barath2019homography}.
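The first-order approximation of a homography at an image point has a closed form; the following NumPy sketch (our own helper, computing the Jacobian of the homography mapping) returns the affine transformation implied by a (noisy) homography $\mathbf{H}$:
\begin{verbatim}
import numpy as np

def affine_from_homography(H, x):
    # A = d x'/d x: Jacobian of the homography mapping at x = (u, v).
    u, v = x
    s = H[2, 0] * u + H[2, 1] * v + H[2, 2]
    u1 = (H[0, 0] * u + H[0, 1] * v + H[0, 2]) / s
    v1 = (H[1, 0] * u + H[1, 1] * v + H[1, 2]) / s
    A = np.array([[H[0, 0] - H[2, 0] * u1, H[0, 1] - H[2, 1] * u1],
                  [H[1, 0] - H[2, 0] * v1, H[1, 1] - H[2, 1] * v1]]) / s
    return A, np.array([u1, v1])
\end{verbatim}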
A total of 1000 trials are carried out in the synthetic experiment. In each trial, 100 ACs are generated randomly. The ACs for the methods are selected randomly and the error is measured on the relative pose which produces the most inliers within the RANSAC scheme; this also allows us to select the best candidate from multiple solutions. The median error is used to assess the rotation and translation accuracy. The rotation error is computed as the angular difference between the ground truth rotation and the estimated rotation: ${\varepsilon _{\bf{R}}} = \arccos ((\trace({\mathbf{R}_{gt}}{{\mathbf{R}^T}}) - 1)/2)$, where $\mathbf{R}_{gt}$ and ${\mathbf{R}}$ are the ground truth and estimated rotation matrices. Following the definition in~\cite{quan1999linear,hee2014relative}, the translation error is defined as ${\varepsilon _{\bf{t}}} = 2\left\| ({{\mathbf{t}_{gt}}}-{\mathbf{t}})\right\|/(\left\| {\mathbf{t}_{gt}} \right\| + \left\| {{\mathbf{t}}} \right\|)$, where $\mathbf{t}_{gt}$ and ${\mathbf{t}}$ are the ground truth and estimated translations.
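For reproducibility, both error metrics can be transcribed directly (a NumPy sketch of ours):
\begin{verbatim}
import numpy as np

def rotation_error(R_gt, R):
    # Angular difference in radians; the clip guards against round-off.
    c = np.clip((np.trace(R_gt @ R.T) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

def translation_error(t_gt, t):
    return 2.0 * np.linalg.norm(t_gt - t) / (np.linalg.norm(t_gt)
                                             + np.linalg.norm(t))
\end{verbatim}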
\subsubsection{Planar motion estimation}
In this scenario, the planar motion of the multi-camera system is described by ($\theta$, $\phi$), see Fig.~\ref{fig:Specialcases}(a). The magnitudes of both angles range from $-10^\circ$ to $10^\circ$. The image noise is Gaussian with a standard deviation ranging from $0$ to $2$ pixels. Figures~\ref{fig:RT_planar}(a) $\sim$ (c) show the performance of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods against image noise. The \texttt{2AC~plane} method performs better than the comparative methods under perfect planar motion. In comparison with the \texttt{2AC~plane} method, the \texttt{1AC~plane} method has similar performance in rotation estimation, but performs slightly worse in translation estimation. As shown in Figs.~\ref{fig:RT_planar}(c) and (f), we plot the translation direction error as an additional evaluation. It is interesting to see that the \texttt{1AC~plane} method also performs better than the comparative methods in translation direction estimation.
\begin{figure}[tbp]
\begin{center}
\subfigure[\scriptsize{${\varepsilon_{\bf{R}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_R_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{t}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_T_add1AC.pdf}
}
\subfigure[\scriptsize{Translation direction error with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_T_degree_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{R}}}$ with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_R_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{t}}}$ with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_T_add1AC.pdf}
}
\subfigure[\scriptsize{Translation direction error with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_T_degree_add1AC.pdf}
}
\end{center}
\caption{Rotation and translation error under planar motion. (a) $\sim$ (c): vary image noise under perfect planar motion. (d) $\sim$ (f): vary non-planar motion noise and fix the standard deviation of image noise at $1.0$ pixel.}
\label{fig:RT_planar}
\end{figure}
We also evaluate the accuracy of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods for increasing non-planar motion noise. The non-planar components of a 6DOF relative pose, including the X-axis rotation, the Z-axis rotation and the direction of the YZ-plane translation~\cite{choi2018fast}, are randomly generated and added to the motion of the multi-camera system. The magnitude of the non-planar motion noise ranges from $0^\circ$ to $1^\circ$ and the standard deviation of the image noise is set to $1.0$ pixel. Figures~\ref{fig:RT_planar}(d) $\sim$ (f) show the performance of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods against non-planar motion noise. The methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} handle the 6DOF motion case and are thus not affected by violations of the planarity assumption. It can be seen that the rotation accuracy of the \texttt{2AC~plane} method is better than that of the comparative methods when the non-planar motion noise is less than $0.3^\circ$. Since the translation direction estimated by the \texttt{2AC~plane} method remains satisfactory in Fig.~\ref{fig:RT_planar}(f), the main reason for the poor performance of the translation estimation is that the metric scale estimation is sensitive to non-planar motion noise. In comparison with the \texttt{2AC~plane} method, the \texttt{1AC~plane} method has similar performance in rotation estimation, but performs poorly in translation estimation; its translation accuracy decreases significantly when the non-planar motion noise exceeds $0.2^\circ$.
Both the \texttt{1AC~plane} method and the \texttt{2AC~plane} method have a significant computational advantage over the comparative methods, because the efficient solver of the 4-degree polynomial equation takes only about 3.6~$\mu s$. A further advantage of the \texttt{2AC~plane} method is the speed-up gained by the preemptive hypothesis tests, which detect and reject inconsistent samples directly. Compared with testing on all the other affine correspondences, the preemptive hypothesis tests sped up the procedure by more than three times while leading to the same accuracy of relative pose estimation.
\subsubsection{Motion with known vertical direction}
In this set of experiments, the translation direction between two multi-camera reference frames is chosen to produce either forward, sideways or random motions. In addition, the second reference frame is rotated around three axes in order and the rotation angles range from $-10^\circ$ to $10^\circ$. With the assumption that the roll and pitch angles are known, the multi-camera reference frame is aligned with the gravity direction. Due to space limitations, we only show the results for random motion. The results for forward and sideways motions are shown in the supplementary material. Figure~\ref{fig:RT_1AC}(a) and (d) show the performance of the \texttt{2AC~method} against image noise with perfect IMU data in the random motion case. It can be seen that the proposed method is robust to image noise and performs better than the comparative methods.
\begin{figure}[tbp]
\begin{center}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdPix_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with pitch angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAx_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with roll angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAz_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdPix_T.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with pitch angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAx_T.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with roll angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAz_T.pdf}
}
\end{center}
\caption{Rotation and translation error under random motion with known vertical direction. The upper row: rotation error, the bottom row: translation error. (a)(d): vary image noise. (b)(e) and (c)(f): vary IMU angle noise and fix the standard deviation of image noise at $1.0$ pixel.}
\label{fig:RT_1AC}
\end{figure}
Figure~\ref{fig:RT_1AC}(b)(e) and (c)(f) show the performance of the proposed \texttt{2AC~method} against IMU noise in the random motion case, while the standard deviation of the image noise is fixed at $1.0$ pixel. Note that the methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} are not influenced by IMU noise, because they do not use the known vertical direction as a prior. It is interesting to see that our method outperforms \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} in the random motion case even when the IMU noise is around $0.8^\circ$. In addition, the proposed \texttt{2AC~method} also performs better than \texttt{4pt-Lee}, \texttt{4pt-Sweeney} and \texttt{4pt-Liu}, which likewise use the known vertical direction as a prior. The results under forward and sideways motion also demonstrate that the \texttt{2AC~method} generally performs better than all comparative methods against image noise and provides comparable accuracy for increasing IMU noise. It is worth mentioning that, with the help of the preemptive hypothesis tests, the relative pose estimation with the proposed \texttt{2AC~method} was sped up by more than three times while leading to similarly accurate relative poses.
\subsection{Experiments on real data}
We test the performance of our methods on the \texttt{KITTI} dataset~\cite{geiger2013vision}, which consists of successive video frames from a forward-facing stereo camera. We ignore the overlap in their fields of view and treat the rig as a general multi-camera system. The sequences labeled 0 to 10, which have ground truth, are used for the evaluation; the methods were thus tested on a total of 23000 image pairs. The affine correspondences between consecutive frames of each camera are established by ASIFT~\cite{morel2009asift}; they could also be obtained by MSER~\cite{matas2004robust}, which is slightly less accurate but much faster~\cite{barath2016accurate}. The affine correspondences across the two cameras are not matched and the metric scale is not estimated, as the movement between consecutive frames is small; besides, integrating the acceleration of an IMU over time is more suitable for recovering the metric scale~\cite{NutziWeiss-411}. All the solvers have been integrated into a RANSAC scheme.
\begin{table*}[htbp]
\caption{Rotation and translation error on \texttt{KITTI} sequences (unit: degree).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{1.0}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\footnotesize{Seq.}} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-}{\footnotesize{St.}}~\footnotesize{\cite{henrikstewenius2005solutions}} & \footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}}& \footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\cline{2-9}
& ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$\\
\hline
00& 0.139 \ 2.412 & 0.130 \ 2.400& 0.229 \ 4.007 & 0.065 \ 2.469 & 0.050 \ 2.190 & 0.066 \ 2.519 & 0.280 \ 2.243 &\textbf{0.031} \ \textbf{1.738} \\
\rowcolor{gray!10}01& 0.158 \ 5.231 & 0.171 \ 4.102& 0.762 \ 41.19 & 0.137 \ 4.782 & 0.125 \ 11.91 & 0.105 \ 3.781 & 0.168 \ 2.486 &\textbf{0.025} \ \textbf{1.428} \\
02& 0.123 \ 1.740 & 0.126 \ 1.739& 0.186 \ 2.508 & 0.057 \ 1.825 & 0.044 \ 1.579 & 0.057 \ 1.821 & 0.213 \ 1.975 &\textbf{0.030} \ \textbf{1.558} \\
\rowcolor{gray!10}03& 0.115 \ 2.744 & 0.108 \ 2.805& 0.265 \ 6.191 & 0.064 \ 3.116 & 0.069 \ 3.712 & 0.062 \ 3.258 & 0.238 \ \textbf{1.849} &\textbf{0.037} \ 1.888 \\
04& 0.099 \ 1.560 & 0.116 \ 1.746& 0.202 \ 3.619 & 0.050 \ 1.564 & 0.051 \ 1.708 & 0.045 \ 1.635 & 0.116 \ 1.768 &\textbf{0.020} \ \textbf{1.228} \\
\rowcolor{gray!10}05& 0.119 \ 2.289 & 0.112 \ 2.281& 0.199 \ 4.155 & 0.054 \ 2.337 & 0.052 \ 2.544 & 0.056 \ 2.406 & 0.185 \ 2.354 &\textbf{0.022} \ \textbf{1.532} \\
06& 0.116 \ 2.071 & 0.118 \ 1.862& 0.168 \ 2.739 & 0.053 \ 1.757 & 0.092 \ 2.721 & 0.056 \ 1.760 & 0.137 \ 2.247 &\textbf{0.023} \ \textbf{1.303} \\
\rowcolor{gray!10}07& 0.119 \ 3.002 & 0.112 \ 3.029& 0.245 \ 6.397 & 0.058 \ 2.810 & 0.065 \ 4.554 & 0.054 \ 3.048 & 0.173 \ 2.902 &\textbf{0.023} \ \textbf{1.820} \\
08& 0.116 \ 2.386 & 0.111 \ 2.349& 0.196 \ 3.909 & 0.051 \ 2.433 & 0.046 \ 2.422 & 0.053 \ 2.457 & 0.203 \ 2.569 &\textbf{0.024} \ \textbf{1.911} \\
\rowcolor{gray!10}09& 0.133 \ 1.977 & 0.125 \ 1.806& 0.179 \ 2.592 & 0.056 \ 1.838 & 0.046 \ 1.656 & 0.058 \ 1.793 & 0.189 \ 1.997 &\textbf{0.027} \ \textbf{1.440} \\
10& 0.127 \ 1.889 & 0.115 \ 1.893& 0.201 \ 2.781 & 0.052 \ 1.932 & 0.040 \ 1.658 & 0.058 \ 1.888 & 0.223 \ 2.296 &\textbf{0.025} \ \textbf{1.586} \\
\hline
\end{tabular}}}
\end{center}
\label{VerticalRTErrror}
\end{table*}
\begin{table*}[htbp]
\caption{Runtime of RANSAC averaged over \texttt{KITTI} sequences combined with different solvers (unit:~$s$).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{0.95}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
\small{Methods} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-}{\footnotesize{St.}}~\footnotesize{\cite{henrikstewenius2005solutions}} &\footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}} & \footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\hline
\small{Mean time }& 52.82 & 10.36 & 79.76& 0.85& 0.63& 0.45& \textbf{0.07} & 0.09\\
\hline
\small{Standard deviation}& 2.62 & 1.59 & 4.52& 0.093& 0.057& 0.058& \textbf{0.0071} & 0.0086\\
\hline
\end{tabular}}}
\end{center}
\label{RANSACTime}
\end{table*}
\begin{figure*}[!h]
\begin{center}
\subfigure[\texttt{8pt-Kneip}]
{
\includegraphics[width=0.285\linewidth]{figure/8pt_Kneip.pdf}
}
\subfigure[\texttt{4pt-Sweeney}]
{
\includegraphics[width=0.285\linewidth]{figure/4pt_Sweeney.pdf}
}
\subfigure[\texttt{2AC~method}]
{
\includegraphics[width=0.34\linewidth]{figure/Ev_2AC.pdf}
}
\end{center}
\caption{Estimated trajectories without any post-refinement. The relative pose measurements between consecutive frames are directly concatenated. Colored curves are the trajectories estimated by \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and the \texttt{2AC~method}. Black curves with stars are the ground truth trajectories. Best viewed in color.}
\label{fig:trajectory}
\end{figure*}
The proposed \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}. Since the \texttt{KITTI} dataset is captured by a stereo rig whose cameras are mounted at the same height, which is a degenerate case for the \texttt{1AC~plane} method, this method is not evaluated in this experiment. For the \texttt{2AC~plane} method, the estimation results are compared with the 6DOF ground truth of the relative pose, even though this method only estimates two angles ($\theta$, $\phi$) under the planar motion assumption. For the \texttt{2AC method}, the roll and pitch angles obtained from the ground truth data are used to simulate IMU measurements, which align the multi-camera reference frame with the gravity direction. To ensure the fairness of the experiment, the roll and pitch angles are also provided to the methods \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}. The results of the rotation and translation estimation are shown in Table~\ref{VerticalRTErrror}. The runtime of RANSAC averaged over the \texttt{KITTI} sequences combined with different solvers is shown in Table~\ref{RANSACTime}.
The \texttt{2AC method} offers the best overall performance among all the methods. The \texttt{6pt-Stew{\'e}nius} method performs poorly on sequence 01, because this sequence is a highway with few trackable close objects, and the method always fails to select the best candidate from multiple solutions under forward motion in the RANSAC scheme. Besides, it is interesting to see that the translation accuracy of the \texttt{2AC~plane} method generally outperforms the \texttt{6pt-Stew{\'e}nius} method, even though the planar motion assumption does not fit the \texttt{KITTI} dataset well. Due to their computational efficiency, both the \texttt{2AC~plane} method and the \texttt{2AC method} are well suited for finding a correct inlier set, which is then used for accurate motion estimation in visual odometry.
To visualize the comparison results, the estimated trajectory for sequence 00 is plotted in Fig.~\ref{fig:trajectory}. The frame-to-frame relative pose measurements are directly concatenated without any post-refinement. The trajectory of the \texttt{2AC~method} is compared with the two best performing comparative methods on sequence 00 according to Table~\ref{VerticalRTErrror}: the \texttt{8pt-Kneip} method in the 6DOF motion case and the \texttt{4pt-Sweeney} method in the 4DOF motion case. Since none of the methods is able to estimate the scale reliably, in particular on the many straight parts of the trajectory, the ground truth scale is used to plot the trajectories. The trajectories are then aligned with the ground truth, and the color along each trajectory encodes the absolute trajectory error (ATE)~\cite{sturm2012benchmark}. Even though all trajectories show a significant accumulation of drift, the proposed \texttt{2AC~method} has the smallest ATE among the compared methods.
\section{\label{sec:conclusion}Conclusion}
By exploiting the affine parameters, we have proposed four solutions for the relative pose estimation of a multi-camera system. A minimum of two affine correspondences is needed to estimate the 6DOF relative pose of a multi-camera system.
Under the planar motion assumption, we presented two solvers to recover the planar motion of a multi-camera system: a minimal solver using a single affine correspondence and a solver using two affine correspondences. In addition, a minimal solution with two affine correspondences was proposed to solve for the relative pose of a multi-camera system with known vertical direction. The assumptions made in these solutions are commonly met in road driving scenes. We evaluated the latter two solutions on synthetic data and on real image sequences. The experimental results clearly show that the proposed methods provide better efficiency and accuracy for relative pose estimation than state-of-the-art methods.
\bibliographystyle{ieee_fullname}
\section{\label{sec:6DOFmotion_supp}Relative Pose Estimation under General Motion}
\subsection{Solution using Gr\"{o}bner basis method}
For an affine correspondence $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, we get three polynomials in the six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$ from Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6}. After separating $q_x$, $q_y$, $q_z$ from $t_x$, $t_y$, $t_z$, we arrive at the equation system
{\begin{equation}
\frac{1}{1+q_x^2+q_y^2+q_z^2}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}
\end{bmatrix}}_{{\mathbf{M}}\left( {{q_x,q_y,q_z}} \right)}
\begin{bmatrix}
{{{t}_x}}\\
{{{t}_y}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_qxqyqz1}
\end{equation}}\\
where the elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,4)$ of the coefficient matrix ${\mathbf{M}(q_x,q_y,q_z)}$ are formed by the polynomial coefficients and three unknown variables $q_x,q_y,q_z$:
\begin{equation}
{\mathbf{M}(q_x,q_y,q_z)} = \begin{bmatrix}
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qxqyqz2}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variables $q_x,q_y,q_z$.
For the general case, Eq.~\eqref{eq:euq_qxqyqz1} imposes three independent constraints on the six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$. Thus, two affine correspondences are enough to recover the relative pose of a multi-camera system under 6DOF general motion. Hence, we get an equation system of 6 independent constraints from 2 affine correspondences, each in the form of Eq.~\eqref{eq:euq_qxqyqz1}. These constraints are stacked into six equations in six unknowns:
{\begin{equation}
\frac{1}{1+q_x^2+q_y^2+q_z^2}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{41}}& {M_{42}}& {M_{43}}& {M_{44}}\\
{M_{51}}& {M_{52}}& {M_{53}}& {M_{54}}\\
{M_{61}}& {M_{62}}& {M_{63}}& {M_{64}}
\end{bmatrix}}_{{\mathbf{M}}_{6\times4}} \begin{bmatrix}
{{{t}_x}}\\
{{{t}_y}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_qxqyqz1_sp}
\end{equation}}
Since ${{\mathbf{M}}_{6\times4}}/({1+q_x^2+q_y^2+q_z^2})$ has a non-trivial null vector, its rank must be at most three. Thus, all $4\times4$ sub-determinants of ${{\mathbf{M}}_{6\times4}}/({1+q_x^2+q_y^2+q_z^2})$ must be zero. In this paper, three sub-matrices which give three equations in the three unknowns $q_x,q_y,q_z$ are chosen as follows:
{\begin{align}
\begin{cases}
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{41}}& {M_{42}}& {M_{43}}& {M_{44}}
\end{bmatrix}}) = 0 \\
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{51}}& {M_{52}}& {M_{53}}& {M_{54}}
\end{bmatrix}}) = 0 \\
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{61}}& {M_{62}}& {M_{63}}& {M_{64}}
\end{bmatrix}}) = 0 \\
\end{cases}
\label{eq:DetM3}
\end{align}}
The hidden variable resultant method~\cite{cox2013ideals} is used to solve for the unknowns of Eq.~\eqref{eq:DetM3}. By grouping the unknowns $q_x$, $q_y$, $q_z$ with the known coefficients, we obtain an equation system with 84 monomials in $q_x$, $q_y$, $q_z$ and a maximum polynomial degree of 6. Note that the coefficients are divided by ${1+q_x^2+q_y^2+q_z^2}$, which reduces the polynomial degree and improves the efficiency of the solution. The final solver was obtained by the automatic solver generator of~\cite{larsson2017efficient}.
\section{\label{sec:planarmotion_Supp}Relative Pose Estimation Under Planar Motion}
\subsection{Details about the coefficient matrix ${\mathbf{M}(q_y)}$}
Referring to Eq. (12) in the paper, the three constraints obtained from two affine correspondences are stacked into 3 equations in 3 unknowns. The elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,3)$ of the coefficient matrix ${\mathbf{M}(q_y)}$ are formed by the polynomial coefficients and the unknown variable $q_y$, which can be described as:
\begin{equation}
{\mathbf{M}(q_y)} = \begin{bmatrix}
[2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qy3}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variable $q_y$.
\subsection{Degenerate Case}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{figure/proof.pdf}
\end{center}
\vspace{-0.1in}
\caption{Relative pose estimation for a multi-camera system.}
\label{fig:degenerate}
\end{figure}
\begin{proposition}
\label{theorem:nister}
Consider a multi-camera system which is under planar motion. Assume the following three conditions are satisfied: (1) the rotation axis is the $y$-axis and the translation lies in the $xz$-plane; (2) there is one affine correspondence across camera $C_i$ in frame $k$ and camera $C_j$ in frame $k+1$ ($C_i$ and $C_j$ may or may not be the same camera); (3) the optical centers of cameras $C_i$ and $C_j$ have the same $y$-coordinate. Then this case is degenerate. Specifically, the rotation can be correctly recovered, while the translation cannot.
\end{proposition}
\begin{proof}
Figure~\ref{fig:degenerate} illustrates the case described in the proposition. Our proof is based on the following observation: whether a case is degenerate is independent of the particular pose solver. Based on this, we construct a new minimal solver which is different from the one proposed in the main text.
(i) Since the multi-camera system is rotated by $y$-axis, the camera $C_i$ in frame $k$ and camera $C_j$ in frame $k+1$ are under motion with known rotation axis. Thus we can use the \texttt{1AC-method} for perspective cameras~\cite{Guan2020CVPR} to estimate the relative pose between $C_i$ and $C_j$. This is a minimal solver since one AC provides 3 independent constraints and there are three unknowns (1 for rotation, 2 for translation by excluding scale-ambiguity). Denote the recovered rotation and translation between $C_i$ and $C_j$ as $(\mathbf{R}', \mathbf{t}')$, where $\mathbf{t}'$ is a unit vector. The scale of the translation vector cannot be recovered at this moment. Denote the unknown translation scale as $\lambda$.
(ii) From Fig.~\ref{fig:degenerate}, we have
{ \begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}} & \mathbf{t}\\
{\mathbf{0}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{j}}&{\mathbf{t}_{j}}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}
\begin{bmatrix}{\mathbf{R}'}&{ \lambda \mathbf{t}'}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}
\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}^{-1} \\
& \qquad \ \ =\begin{bmatrix}{{\mathbf{R}_{j}}{\mathbf{R}'}{\mathbf{R}_{i}^T}}& \ \lambda \mathbf{R}_j \mathbf{t}' + \mathbf{t}_j - \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T \mathbf{t}_i\\
{\mathbf{0}}& \ {1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:trans_general}
\end{equation}}\\
%
From Eq.~\eqref{eq:trans_general}, we have
\begin{align}
&\mathbf{R} = \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T, \label{eq:r_equ} \\
&\mathbf{t} = \lambda \mathbf{R}_j \mathbf{t}' + \mathbf{t}_j - \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T \mathbf{t}_i.
\label{eq:t_equ}
\end{align}
%
From Eq.~\eqref{eq:r_equ}, the rotation $\mathbf{R}$ between frame $k$ and frame $k+1$ for the multi-camera system can be recovered.
%
From Eq.~\eqref{eq:t_equ}, we have
\begin{align}
\lambda (\mathbf{R}_j \mathbf{t}') - \mathbf{t} + (\mathbf{t}_j - \mathbf{R} \mathbf{t}_i) = \mathbf{0}.
\label{eq:tran_linear}
\end{align}
In Eq.~\eqref{eq:tran_linear}, note that $\mathbf{t} = [t_x, 0, t_z]^T$ due to the planar motion. Thus, this linear equation system has $3$ unknowns $\{\lambda, t_x, t_z\}$ and $3$ equations, and the unknowns can usually be determined uniquely by solving it. However, if the second entry of $\mathbf{R}_j \mathbf{t}'$ is zero, it can be verified that $\lambda$ becomes a free parameter. In other words, the scale cannot be determined and this is a degenerate case.
(iii) Finally, we give the geometric meaning of the degenerate case, i.e., of the second entry of $\mathbf{R}_j \mathbf{t}'$ being zero. Denote the normalized vector from $C_i$ to $C_j$ as $\mathbf{v}$. Since $\mathbf{v}$ represents the normalized translation vector between $C_i$ and $C_j$, its coordinates in the reference frame of camera $C_j$ are $\mathbf{t}'$, and its coordinates in frame $k+1$ are $\mathbf{R}_j \mathbf{t}'$. A zero second entry of $\mathbf{R}_j \mathbf{t}'$ means that the endpoints of $\mathbf{v}$ have the same $y$-coordinate in frame $k+1$, which is condition~(3) in the proposition.
\end{proof}
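The rank deficiency in step (ii) can also be verified numerically; the following NumPy sketch (helper name is ours) assembles the linear system of Eq.~\eqref{eq:tran_linear} in the unknowns $[\lambda, t_x, t_z]$, whose second row vanishes in the degenerate case:
\begin{verbatim}
import numpy as np

def scale_system(R_j, t_prime, R, t_i, t_j):
    # A [lambda, t_x, t_z]^T = b; the second row of A is all zero
    # whenever the second entry of R_j t' vanishes (degenerate case),
    # so lambda (the translation scale) becomes unobservable.
    A = np.column_stack([R_j @ t_prime,
                         [-1.0, 0.0, 0.0],
                         [0.0, 0.0, -1.0]])
    b = -(t_j - R @ t_i)
    return A, b
\end{verbatim}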
\section{\label{sec:knownverticaldirection_Supp}Relative Pose Estimation with Known Vertical Direction}
Referring to Eq. (23) in the paper, the four constraints obtained from two affine correspondences are stacked into 4 equations in 4 unknowns. The elements $\tilde{M}_{ij}$ $(i=1,\ldots,4; j=1,\ldots,4)$ of the coefficient matrix $\tilde{\mathbf{M}}({q_y})$ are formed by the polynomial coefficients and the unknown variable $q_y$, which can be described as:
\begin{equation}
{\tilde{\mathbf{M}}({q_y})} = \begin{bmatrix}
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qy4}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variable $q_y$.
\section{\label{sec:experiments_supp}Experiments}
\subsection{Efficiency comparison}
The runtimes of our solvers and of the comparative solvers are evaluated on an Intel(R) Core(TM) i7-7800X 3.50GHz. All algorithms are implemented in C++. The methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} are provided by the OpenGV library; we implemented the solver \texttt{4pt-Lee}, and for the methods \texttt{4pt-Sweeney} and \texttt{4pt-Liu} we used their publicly available implementations from GitHub. The processing times of the solvers, averaged over 10 000 runs, are shown in Table~\ref{SolverTime}. The runtimes of the methods \texttt{1AC~plane}, \texttt{2AC~plane} and \texttt{4pt-Liu} are the lowest, because these methods solve a 4-degree polynomial equation. The \texttt{2AC~method}, which solves a 6-degree polynomial equation, also requires little computation time.
\begin{table*}[htbp]
\caption{Run-time comparison of motion estimation algorithms (unit:~$\mu s$).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{1.0}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
\small{Methods} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-St.~\cite{henrikstewenius2005solutions}} &\footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}}& \footnotesize{\textbf{1AC~plane}}&\footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\hline
\small{Timings}& 43.3 & 102.0& 3275.4& 26.5& 22.2& 3.7& \textbf{3.6}& \textbf{3.6} & 17.8\\
\hline
\end{tabular}}}
\end{center}
\label{SolverTime}
\end{table*}
\subsection{Numerical stability}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/NumericalStability_R_1AC3equations_add1ACMethod.eps}
}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/NumericalStability_T_1AC3equations_add1ACMethod.eps}
}
\end{center}
\caption{Probability density functions over estimation errors in the noise-free case (10,000 runs). The horizontal axis represents the log$_{10}$ errors and the vertical axis represents the density. (a) reports the rotation error. (b) reports the translation error. The proposed \texttt{1AC~plane} method, \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.}
\label{fig:Numerical}
\end{figure}
Figure~\ref{fig:Numerical} reports the numerical stability of the solvers in the noise-free case. The procedure is repeated 10,000 times. The empirical probability density functions (vertical axis) are plotted as a function of the log$_{10}$ estimation errors (horizontal axis). Methods \texttt{1AC~plane}, \texttt{2AC~plane}, \texttt{2AC method}, \texttt{17pt-Li}~\cite{li2008linear}, \texttt{4pt-Lee}~\cite{hee2014relative} and \texttt{4pt-Sweeney}~\cite{sweeney2014solving} are numerically stable. It can also be seen that the \texttt{4pt-Sweeney} method has a small peak around $10^{-2}$ in both the rotation and translation error curves. The \texttt{8pt-Kneip} method, which is based on iterative optimization, is susceptible to falling into local minima. Due to its use of a first-order approximation of the relative rotation, the \texttt{4pt-Liu} method inevitably exhibits nonzero error even in the noise-free case.
\subsection{Motion with known vertical direction}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdPix_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAx_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAz_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdPix_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAx_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAz_T.eps}
}
\end{center}
\caption{Rotation and translation error under forward motion. Top row: rotation error; bottom row: translation error. (a)(d): varying image noise. (b)(e) and (c)(f): varying IMU angle noise with the standard deviation of image noise fixed at $1.0$ pixel.}
\label{fig:RTForwardMotion_1AC}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdPix_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAx_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAz_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdPix_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAx_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAz_T.eps}
}
\end{center}
\caption{Rotation and translation error under sideways motion. Top row: rotation error; bottom row: translation error. (a)(d): varying image noise. (b)(e) and (c)(f): varying IMU angle noise with the standard deviation of image noise fixed at $1.0$ pixel.}
\label{fig:RTSidewaysMotion_1AC}
\end{figure}
In this section we evaluate the proposed \texttt{2AC~method} under forward and sideways motion. Figures~\ref{fig:RTForwardMotion_1AC} and~\ref{fig:RTSidewaysMotion_1AC} show its performance under forward and sideways motion, respectively. The results demonstrate that the \texttt{2AC~method} performs better than all compared methods with respect to image noise and provides comparable accuracy as the IMU noise increases.
\subsection{Cumulative errors distributions}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/CDF_Rotation.eps}
}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/CDF_Translation.eps}
}
\end{center}
\caption{Empirical cumulative error distributions for KITTI sequence 00. (a) reports the rotation error. (b) reports the translation error. The proposed \texttt{1AC~plane} method, \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.}
\label{fig:RTCDF}
\end{figure}
We also show the empirical cumulative error distributions for KITTI sequence 00. These distributions are computed from the same error values that were used to create Table~1 in the paper. Figure~\ref{fig:RTCDF} shows that the proposed \texttt{2AC~method} offers the best overall performance in comparison to the state-of-the-art methods.
\end{document}
\section{Introduction}
\noindent Blowup algebras, in particular the Rees algebra $\mathcal{R}(I) = R[It]$
($t$ a variable) and the associated graded ring $\mathcal{G}(I) =
\mathcal{R}(I)/I\mathcal{R}(I)$ of an ideal $I$ in a Noetherian ring
$R$, play a crucial role in the birational study of algebraic
varieties. The scheme ${\rm Proj}(\mathcal{R}(I))$ is the blowup of
${\rm Spec}(R)$ along $V(I)$, with ${\rm Proj}(\mathcal{G}(I))$ being
the exceptional fiber. Although blowing up is a
fundamental operation, an explicit understanding of this process
remains an open problem. For example, Francia's conjecture, stated in
O'Carroll-Valla (1997), says: if $R$ is a regular local ring and $I$
is a one-dimensional prime ideal in $R$ such that ${\rm Proj}(\mathcal{R}(I))$
is a smooth projective scheme, then $I$ is a complete
intersection. A negative answer to this conjecture was given by
Johnson-Morey (2001; 1.1.5), for $R = \mathbb{Q}[x,y,z]$. It is
still unknown whether the conjecture is true or not for polynomial
rings $R$ over an algebraically closed field. It is evident that a
good understanding of the defining equations of the Rees algebra is
necessary to answer such queries, and an explicit computation of
these equations is often extremely difficult. In this context,
the Cohen-Macaulay and normality properties of blowup algebras
have attracted the attention of several authors because they help
in describing these algebras qualitatively.
\medskip
Our aim in this article is to use the Elimination theorem to explicitly
compute the defining equations of the Rees algebra for certain one-dimensional
prime ideals $\wp$, namely those which arise as the defining ideal of the affine
monomial curve given by the parametrization $X_{0} = T^{m_{0}}, X_{1} = T^{m_{1}},
X_{2} = T^{m_{2}}, X_{3} = T^{m_{3}}$, where
$m_{0} < m_{1} < m_{2} < m_{3}$ is a sequence of positive
integers with gcd $1$, which form an arithmetic progression (see Section 4 for
complete technical details). The explicit form of these
equations will be used in Section 6, in conjunction with the Jacobian Criterion for smoothness
over a perfect field, to prove that ${\rm Proj}R[\wp t]$ is not smooth. It
is known from the work of Maloo-Sengupta (2003) that $\wp$ is not a
complete intersection. Hence Francia's conjecture is true for $\wp$
over any perfect field $K$.
\section{Equations defining the Rees Algebra}
\noindent In order to compute the equations defining the Rees
algebra $\mathcal{R}(I)$, we view $\mathcal{R}(I)$ as a quotient of a
polynomial algebra. For a Rees algebra $\mathcal{R}(I)$, this amounts to
the study of the natural homomorphism associated to the
generators $(a_{1},\ldots, a_{m})$ of $I$,
$$\widehat{R}=R[T_{1},\ldots,T_{m}]\stackrel{\varphi}\longrightarrow R[It], \quad \varphi(T_{i}) = a_{i}t\,;$$
and particularly of how to find $E = \ker(\varphi)$ and analyze its
properties. $E$ will be referred to as the {\it equations of}
$\mathcal{R}(I)$ or {\it the defining ideal} of $\mathcal{R}(I)$.
One approach to get at these equations goes as follows. Let
$$ R^{r}\stackrel{\varphi}\longrightarrow
R^{m}\longrightarrow I \longrightarrow 0\,,$$ be a presentation of
the ideal $I$, and let $E_{1}$ be the ideal generated by the $1$-forms
$$[f_{1},\ldots,f_{r}] = [T_{1},\ldots,T_{m}] \cdot \varphi = \mathbf{T}\cdot\varphi.$$
The ring $R[T_{1},\ldots,T_{m}] / (E_{1})$ is the symmetric algebra
of the ideal $I$, and we write $\mathcal{A} = E / (E_{1})$ for the
kernel of the canonical surjection
$$0 \longrightarrow \mathcal{A} \longrightarrow S(I) \longrightarrow \mathcal{R}(I) \longrightarrow
0.$$ If $R$ is an integral domain, $\mathcal{A}$ is the $R$-torsion
submodule of $S(I)$. The ideal $I$ is said to be an {\it ideal of
linear type} if $\mathcal{A} = 0$, i.e., $E = E_{1}$, or
equivalently the symmetric algebra and the Rees algebra are
isomorphic. We will come across a natural class of such ideals in
Section 6. An ideal reference for more on blowup algebras and
related topics is Vasconcelos (1994).
\section{Computational Methods}
In this section, we assume basic knowledge of Gr\"{o}bner bases and recall the
Elimination Theorem, mostly following the book Cox-Little-O'Shea (1996).
\subsection{The Elimination Theorem}
Let $K[t_{1}, \ldots, t_{r}, Y_{1}, \ldots, Y_{s}]$ be a polynomial
ring over a field $K$. Let $\mathfrak{a}$ be an ideal in $K[t_{1},
\ldots, t_{r}, Y_{1}, \ldots, Y_{s}]$. The $r$-th elimination ideal
is $\mathfrak{a}(r) = \mathfrak{a} \cap K[Y_{1}, \ldots, Y_{s}]$.
We can actually compute a Gr\"{o}bner basis for $\mathfrak{a}(r)$,
if we know that of $\mathfrak{a}$ and if we choose a monomial order
suitably on $K[t_{1}, \ldots, t_{r}, Y_{1}, \ldots, Y_{s}]$.
Let $>_{\mathcal{E}}$ be a monomial order on $K[t_{1},
\ldots, t_{r}, Y_{1}, \ldots, Y_{s}]$, such that
$$t_{1} >_{\mathcal{E}} \cdots >_{\mathcal{E}} t_{r}
>_{\mathcal{E}} Y_{1} >_{\mathcal{E}}\cdots >_{\mathcal{E}} Y_{s}$$
and monomials involving at least one of the $t_{1}, \ldots, t_{r}$
are greater than all monomials involving only the remaining
variables $Y_{1}, \ldots, Y_{s}$. We then call $>_{\mathcal{E}}$ an
{\it elimination order} with respect to the variables $t_{1},
\ldots, t_{r}$.
\noindent One of the main tools for computing the equations of the
Rees algebras is the {\it Elimination Theorem}, which is the
following:
\begin{theorem}{\it
Let $G$ be a Gr\"{o}bner basis for the ideal $\mathfrak{a}$ in $K[t_{1}, \ldots, t_{r},
Y_{1}, \ldots, Y_{s}]$, where the order is an
elimination order $>_{\mathcal{E}}$ with respect to the variables $t_{1}, \ldots,
t_{r}$. Then $G_{r} = G \,\cap\, K[Y_{1}, \ldots, Y_{s}]$ is a
Gr\"{o}bner basis of the $r$-th elimination ideal
$\mathfrak{a}(r)$, with respect to $>_{\mathcal{E}}$. }
\end{theorem}
\proof See Cox-Little-O'Shea (1996; Chapter 3).\qed
\medskip
\noindent Let $I = (a_{1}, \ldots, a_{m})$ be an ideal in the
polynomial ring $R := K[Z_{1},\ldots, Z_{n}]$, over a field $K$. The
presentation of the Rees algebra $R[It]$ is obtained as:
\begin{proposition}{\it
In the ring $R[z_{1}, \ldots, z_{m}, t]$, consider the ideal
$\mathfrak{a}$ generated by the polynomials $z_{j} - ta_{j}$, $j =
1, \ldots, m$. Then $R[It] = R[z_{1}, \ldots, z_{m}]/E$, where $E =
\mathfrak{a} \cap R[z_{1}, \ldots, z_{m}]$. }
\end{proposition}
\proof It is clear that $E \supset \mathfrak{a} \cap R[z_{1},
\ldots, z_{m}]$. Conversely, if $f(z_{1}, \ldots, z_{m})$ is an
element of $E$, we write
$$f(z_{1}, \ldots, z_{m}) = f\left(ta_{1} + (z_{1} - ta_{1}), \ldots, ta_{m} + (z_{m} - ta_{m})\right)$$
and we can use Taylor expansion to show that $f\in
\mathfrak{a}$.\qed
\begin{proposition}{\it Let $R$, $\mathfrak{a}$ and $E$ be as defined in Proposition 3.2.
Let $>_{\mathcal E}$ be an elimination order with respect to the
variable $t$ on $R[z_{1}, \ldots, z_{m}, t]$, with $t
>_{\mathcal{E}} \,Z_{i}, \,z_{j}$. If $\mathcal{G}$ is a Gr\"{o}bner
basis for $\mathfrak{a}$ with respect to $>_{\mathcal{E}}$, then
$\mathcal{G}\cap R[z_{1}, \ldots, z_{m}] $ is a Gr\"{o}bner basis
for $E$. }
\end{proposition}
\proof Follows from Theorem 3.1 and Proposition 3.2. \qed
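To see Propositions 3.2 and 3.3 at work on a small example (the computation below is our illustration, carried out in \texttt{SymPy}; it is not taken from the paper), consider the ideal $I=(x^{2},xy,y^{2})$. The pure lexicographic order with $t$ largest is an elimination order with respect to $t$:
\begin{verbatim}
from sympy import groebner, symbols

t, x, y, z1, z2, z3 = symbols('t x y z1 z2 z3')
# the ideal a of Proposition 3.2, for I = (x^2, x*y, y^2)
a = [z1 - t*x**2, z2 - t*x*y, z3 - t*y**2]

# lex with t > x > y > z1 > z2 > z3 eliminates t
G = groebner(a, t, x, y, z1, z2, z3, order='lex')

# the defining ideal E: basis elements not involving t
E = [g for g in G.exprs if t not in g.free_symbols]
print(E)   # includes y*z1 - x*z2, y*z2 - x*z3 and z2**2 - z1*z3
           # (up to sign and ordering, possibly among further elements)
\end{verbatim}
The two linear forms span the symmetric algebra relations $E_{1}$, while $z_{2}^{2}-z_{1}z_{3}$ lies in $E\setminus(E_{1})$; in particular this $I$ is not of linear type.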
\noindent We end this section with the statement of the Jacobian
Criterion for smoothness, which will be used for verifying
smoothness of the blowup; see Kunz (1985; page 171) for a proof.
\begin{theorem}{\it
Let $R=K[Z_{1},\ldots, Z_{n}]$ be a polynomial ring over a perfect field
$K$. Let $J=(f_{1},\ldots, f_{m})$ be an ideal in $R$ and set $S=R/J$. Let
$\mathfrak{p}$ be a prime ideal of $R$ containing $J$ and write
$\kappa(\mathfrak{p})=K(R/\mathfrak{p})$ for the residue field at
$\mathfrak{p}$. Let $c$ be the codimension of $J_{\mathfrak{p}}$ in
$R_{\mathfrak{p}}$.
\begin{enumerate}
\item The Jacobian matrix
$$\mathcal{J} := (\partial f_{i}/\partial Z_{j}),$$
taken modulo $\mathfrak{p}$, has rank at most $c$.
\item $S_{\mathfrak{p}}$ is a regular local ring if and only if the matrix
$\mathcal{J}$, taken modulo $\mathfrak{p}$, has rank $c$.
\end{enumerate}
}
\end{theorem}
\section{Monomial Curves}
\noindent Let $\mathbb{N}$ \, and $\mathbb{Z}$ denote the set of
nonnegative integers and the set of integers respectively.
Assume that $0<m_{0} < m_{1} < \ldots < m_{p} $ form an
arithmetic sequence of integers, with $p\geq 2$ and $\gcd(m_{0},
\ldots ,m_{p}) = 1$. We further assume that $m_{i}=m_{0}+id$, where
$d$ is the common difference of the arithmetic sequence, and that $m_{0},
m_{1}, \ldots , m_{p}$ generate the numerical semigroup $\Gamma :=
\sum_{i=0}^{p}\mathbb{N}m_{i}$ minimally. Write $m_{0}=ap+b$, where
$a$ and $b$ are the unique integers such that $a \geq 1$ (otherwise $m_{0},
m_{1}, \ldots , m_{p}$ cannot generate the numerical semigroup $\Gamma$
minimally) and $1\leq b \leq p$. Let $\wp$ denote the kernel of the map $\eta : R:= K[X_{0}, X_{1}, \ldots,
X_{p}] \to K[T]$, given by $\eta(X_{i}) = T^{m_{i}} $. The
prime ideal $\wp$ is a one-dimensional perfect ideal, and it is the
defining ideal of the affine monomial curve given by the
parametrization $X_{0} = T^{m_{0}}, \ldots, X_{p} = T^{m_{p}}$.
A minimal binomial generating set $\mathcal{G}$ for $\wp$
was constructed by Patil (1993). It was proved by Sengupta (2003) that
it is a Gr\"{o}bner basis with respect to the graded reverse lexocographic
monomial order. It was noted in Maloo-Sengupta (2003) that the set
$\mathcal{G}$ depends intrinsically on the integer $b$. We therefore
write $\mathcal{G}_{b}$ instead of $\mathcal{G}$, which is
$\mathcal{G}_{b}:=\{\phi(i,j) \mid i,j \in [1,p-1]\} \cup \{\,\psi(b, \,j) \mid j\in [0,p-b]\}$, such that\footnote{Our notations differ slightly from those introduced by Patil (1993) in the following manner: The embedding dimension in our case is $p+1$ and not $e$; the indeterminates $X_{0}, \ldots, X_{p}, Y$ have been replaced by $X_{0}, \ldots, X_{p}$; the binomials $\xi_{ij}$ occur in our list of binomials $\phi(i,j)$; the binomial $\theta$ is $\psi(b,p-b)$ in our list.}:
\begin{enumerate}
\item[(i)] $\phi(i,j):=\begin{cases} X_{i}X_{j}-X_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}\,, & {\rm if} \quad i,j \in [1,p-1];\\
0\,, & {\rm otherwise}\,;
\end{cases}$
\medskip
\item[(ii)] $\psi(b,j):=\begin{cases} X_{b+j}X_{p}^{a}-X_{j}X_{0}^{a+d}\,, & {\rm if} \quad j \in [0,p-b];\\
0\,, & {\rm otherwise}\,;
\end{cases}$
\end{enumerate}
\medskip
with
\medskip
\begin{enumerate}
\item[(iii)] $\epsilon(i\,,\,j) :=\quad
\begin{cases}
i+j & {\rm if} \quad i+j \,<\, p\\
p & {\rm if} \quad i+j \,\geq\, p\\
\end{cases}$;
\medskip
\item[(iv)] $[a\,,\,b]=\{i\in\mathbb{Z} \,\mid \,a\leq i\leq b\}$.
\end{enumerate}
\medskip
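As a quick sanity check (ours, using \texttt{SymPy}), one can verify that the binomials above indeed lie in $\wp$, for instance for the arithmetic sequence $m=(4,5,6,7)$, where $p=3$, $d=1$, $a=1$ and $b=1$:
\begin{verbatim}
from sympy import symbols, expand

T = symbols('T')
m = [4, 5, 6, 7]                  # arithmetic with gcd 1; p=3, d=1, a=1, b=1
p, a, b, d = 3, 1, 1, 1
X = [T**mi for mi in m]           # the substitution X_i -> T^{m_i}

eps = lambda i, j: i + j if i + j < p else p
phi = [X[i]*X[j] - X[eps(i, j)]*X[i + j - eps(i, j)]
       for i in range(1, p) for j in range(i, p)]
psi = [X[b + j]*X[p]**a - X[j]*X[0]**(a + d) for j in range(p - b + 1)]

assert all(expand(f) == 0 for f in phi + psi)  # every binomial is in ker(eta)
\end{verbatim}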
We now restrict our attention to $p=3$, since we will be
dealing only with monomial curves in affine
$4$-space, parametrized by four integers $m_{0}, \ldots , m_{3}$ in
arithmetic progression.
Let us write $R_{b} = K[\mathbb{X}, \Psi_{b},
\Phi]$, where $\Psi_{b}= \{\Psi(b,0),
\Psi(b,1),\ldots,\Psi(b,3-b)\}$, $\Phi=
\{\Phi(2,2),\Phi(1,2),\Phi(1,1)\}$ and
$\mathbb{X}=\{X_{1}, X_{2}, X_{3}, X_{0}\}$ are indeterminates. The indeterminate $X_{0}$ in the
set $\mathbb{X}$ has deliberately been listed at the end, keeping in mind the monomial order to be
defined in the next section.
\medskip
Let \,$t$\, be an
indeterminate. We define the homomorphism $\varphi_{b}:
R_{b}\longrightarrow R[\wp t]$ as $\varphi_{b}(X_{i}) = X_{i}$, $\varphi_{b}(\Phi(i,j)) = \phi(i,j)t$,
$\varphi_{b}(\Psi(b,j))=\psi(b,j)t$. Let $E_{b}$ denote the kernel of
$\varphi_{b}$. Our aim is to construct a minimal Gr\"{o}bner basis
for the ideal $E_{b}$. Write $S = R_{b}[t]$ and define the ring
homomorphism \,$\overline{\varphi_{b}} : S \longrightarrow R[\wp t]$ as $\overline{\varphi_{b}}(t)=t \quad {\rm and} \quad
\overline{\varphi_{b}}=\varphi_{b} \quad {\rm on} \quad
R_{b}$. We follow the method of elimination described in
Propositions 3.2 and 3.3 and consider the ideal $\mathfrak{a}_{b}\subseteq S$ such that $\mathfrak{a}_{b} \cap R_{b} = E_{b}$.
We shall compute a Gr\"{o}bner basis $\widehat{\mathfrak{a}_{b}}$ for
$\mathfrak{a}_{b}$, with respect to an elimination order
$>_{\mathcal{E}}$ (with respect to $t$) on $S$. Then,
$\widehat{\mathfrak{a}_{b}}\cap R_{b}$ is a Gr\"{o}bner basis
for $E_{b}$, that is, those elements of $\widehat{\mathfrak{a}_{b}}$
that do not involve the variable $t$. These generators of $E_{b}$ will be used
to decide the non-smoothness of the blowup in section 6. We now define the desired
elimination order on $S$.
\section{Elimination order on $S = R_{b}[t]$}
\noindent A monomial in $\displaystyle{S =
R_{b}[t]=R[\Psi(b,0)
,\ldots,\Psi(b,3-b),\Phi(2,2),\Phi(1,2),\Phi(1,1),t]}$ is given by
$$\displaystyle{t^{d}\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma}
=
t^{d}\left(X_{1}^{\alpha_{1}}X_{2}^{\alpha_{2}}X_{3}^{\alpha_{3}}X_{0}^{\alpha_{0}}\right)
\left(\prod_{i=0}^{3-b}\Psi(b,i)^{\beta_{i}}\right)
\left(\Phi(2,2)^{\gamma_{1}}\Phi(1,2)^{\gamma_{2}}\Phi(1,1)^{\gamma_{3}}\right)
}$$ which is identified with the ordered tuple
$\displaystyle{(d , \alpha, \beta, \gamma)\in \mathbb{N}^{12 -b}}$,
such that
$$\displaystyle{\alpha := (\,\alpha_{1}, \alpha_{2},
\alpha_{3}, \alpha_{0}\,)}, \displaystyle{\quad \beta := (\,\beta_{0},\ldots, \beta_{3-b}\,)},
\displaystyle{\quad \gamma
:=(\,\gamma_{1},\gamma_{2},\gamma_{3}\,)}.$$
\noindent Let us define the weight function
$\displaystyle{\widehat{\omega}}$ on the non-zero monomials of
$\displaystyle{S}$ to be the function with the property $\displaystyle{\widehat{\omega}(fg)=\widehat{\omega}(f)+\widehat{\omega}(g)},$
for any two non-zero monomials $f$ and $g$ in $S$, and that
$$\displaystyle{\widehat{\omega}(t)\,=\,1}, \quad \displaystyle{\widehat{\omega}(X_{i})\,=\,m_{i}}, \quad \displaystyle{\widehat{\omega}(\Phi(i,j))\,=\,\widehat{\omega}(X_{i}X_{j})}, \quad \displaystyle{\widehat{\omega}(\Psi(b,j))\,=\,\widehat{\omega}(X_{3}^{a}X_{b+j})}.$$
\noindent We say that $\quad\displaystyle{t^{d}\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma} \,
>_{\mathcal{E}} \,
t^{d'}\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'} }, \quad$ if one of the following holds:
\begin{enumerate}
\item[(i)] $\displaystyle{d>d'}$;
\medskip
\item[(ii)] $\displaystyle{d=d'}$ \,and\, $\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,>\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$;
\medskip
\item[(iii)] $d=d'$, \,$\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,=\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$ \,and\, $\displaystyle{\sum\beta_{i}\,>\,\sum\beta'_{i}}$;
\medskip
\item[(iv)] $\displaystyle{d=d'}$, \,$\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,=\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$, \,$\displaystyle{\sum\beta_{i}\,=\,\sum\beta'_{i}}$
\,and in the difference \,$\displaystyle{( \beta-\beta')}$, the rightmost non-zero entry is negative;
\medskip
\item[(v)] $d=d'$, \,$\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,=\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$, \,$\displaystyle{ \beta=\beta'}$ \,and\, $\displaystyle{\sum\gamma_{i}\,>\,\sum\gamma'_{i}}$;
\medskip
\item[(vi)] $d=d'$, \,$\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,=\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$, \,$\displaystyle{ \beta=\beta'}$, \,$\displaystyle{\sum\gamma_{i}\,=\,\sum\gamma'_{i}}$ \,and in the difference\, $\displaystyle{( \gamma-\gamma')}$, the rightmost
non-zero entry is negative;
\medskip
\item[(vii)] $d=d'$, \,$\displaystyle{\widehat{\omega}(\mathbb{X}^{\alpha}\Psi^{\beta}\Phi^{\gamma})
\,=\,\widehat{\omega}(\mathbb{X}^{\alpha'}\Psi^{\beta'}\Phi^{\gamma'})}$, $\displaystyle{\beta=\beta'}$,
\,$\displaystyle{\gamma=\gamma'}$ \, and in the difference \,$\displaystyle{(\alpha - \alpha')}$,
\,the rightmost non-zero entry is negative.
\end{enumerate}
\noindent Then $>_{\mathcal{E}}$ is the desired elimination order on $S$, with respect to the variable $t$.
\medskip
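For orientation, here is how the weights come out in the running example $m=(4,5,6,7)$, $a=b=d=1$ (our illustration); note that $\widehat{\omega}$ gives both monomials of each binomial $\phi(i,j)$, respectively $\psi(b,j)$, the same weight:
\begin{verbatim}
m = [4, 5, 6, 7]                        # example weights w(X_i) = m_i
a = 1

w_X   = lambda i: m[i]
w_Phi = lambda i, j: m[i] + m[j]        # w(Phi(i,j)) = w(X_i X_j)
w_Psi = lambda b, j: a*m[3] + m[b + j]  # w(Psi(b,j)) = w(X_3^a X_{b+j})

# phi(1,2) = X1*X2 - X3*X0: both monomials have weight 11 = w_Phi(1,2)
print(w_Phi(1, 2), w_X(3) + w_X(0))     # 11 11
# psi(1,0) = X1*X3 - X0^3:  both monomials have weight 12 = w_Psi(1,0)
print(w_Psi(1, 0), 3 * w_X(0))          # 12 12
\end{verbatim}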
\begin{theorem}
{\it Given $b\in \{1, 2,3\}$, let $\mathfrak{a}_{b}$ be the ideal in
$S$, generated by
\begin{itemize}
\item $P(i,j)=\begin{cases}\underline{tX_{i}X_{j}}
-tX_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}
-\Phi(i,j)\quad , \quad i,j \,\in \,[1,2]\,,\\[2mm]
0\hspace*{2.50in},\hspace*{0.20in} {\rm otherwise}\,;
\end{cases}$\\[3mm]
\medskip
\item $P(\Psi(b,l))=\begin{cases}\underline{tX_{b+l}X_{3}^{a}}-tX_{l}X_{0}^{a+d}-\Psi(b,l)
\quad \quad , \quad l\in
[0,3-b]\,,\\[2mm]
0\hspace*{2.30in},\hspace*{0.20in} {\rm otherwise}\,;
\end{cases}$
\end{itemize}
\noindent A Gr\"{o}bner Basis for the ideal $\mathfrak{a}_{b}$ is the set
$$\widehat{\mathfrak{a}_{b}}=\{P(i,j), \,P(\Psi(b,j)), \,M(b,j), \,L(i), \,B(i,j), \,A(i;b, j), \,D, \,Q(b,i)\}$$
such that,
\begin{itemize}
\item
$D=(\underline{X_{1}^{2}}-X_{2}X_{0})\Phi(1,2)
-(X_{1}X_{2}-X_{3}X_{0})\Phi(1,1)$\,;
\medskip
\item $B(i,j)=\begin{cases}
(\underline{X_{i}X_{j}}\, -
X_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)})\Psi_{b,3-b}
-(X_{3}^{a}X_{b}-X_{0}^{a+d+1})\Phi(i,j)\\
\hspace*{1.50in} {\rm if} \quad i,j \in [1,2] \,,\\[2mm]
0 \hspace*{1in} ; \hspace*{0.40in} {\rm otherwise}\,;
\end{cases}$
\medskip
\item $A(i;b, j)=\begin{cases}
\underline{X_{i}\Psi(b,j)}-X_{b+i+j-\epsilon(i,b+j)}\Psi(b,\epsilon(i,b+j)-b)
-X_{3}^{a}\Phi(i,b+j)\\
+X_{0}^{a+d}[\Phi(i,j)-\Phi(b+i+j-3,3-b)]\\
\hspace*{1in} {\rm if} \quad i \in [1,3], \,j \in [0,2-b] \quad {\rm and} \quad b\neq 3\,,\\[2mm]
0 \hspace*{1in} ; \hspace*{0.40in} {\rm otherwise}\,;
\end{cases}$
\medskip
\item $ \,L(i)=\begin{cases}
\underline{X_{i}\Phi(2\,,\,2)}\, - \, X_{i+1}\Phi(1\,,\,2)
\,+\,X_{i+2}\Phi(1\,,\,1) \,\, \quad ; \,\, \quad
{\rm if} \quad i, \in [0,1] \,,\\[2mm]
0 \hspace*{2.90in} ; \hspace*{0.20in} {\rm otherwise}\,;
\end{cases}$
\medskip
\item $Q(b,i)=\begin{cases}
\underline{\Psi(1,0)\Phi(2\,,\,2)}+\Psi(1,2)\Phi(1\,,\,1)-
\Psi(1,1)\Phi(1\,,\,2) \quad {\rm if} \quad b=1 \quad {\rm and}
\quad i=1\,,\\
\underline{\Psi(1,1)^{2}}-\Psi(1,2)\Psi(1,0)-X_{3}^{a-1}\Psi(1,2)\Phi(2\,,\,2)
+X_{0}^{a+d-1}\Psi(1,0)\Phi(1\,,\,1)\\[2mm]
\hspace*{0.35in}
-X_{3}^{a-1}X_{0}^{a+d-1}(\Phi(1\,,\,2)^{2}-\Phi(2\,,\,2)\Phi(1\,,\,1))
\quad {\rm if} \quad b=1 \quad {\rm and} \quad i=2\,,\\
\underline{\Psi(2,0)^{2}\Phi(2\,,\,2)}-X_{3}^{a-1}\Psi(2,1)\Phi(2\,,\,2)^{2}
-\Psi(2,1)\Psi(2,0)\Phi(1,2)-X_{3}^{a-1}X_{0}^{a+d-1}\Phi(1\,,\,2)^{3}\\[2mm]
+\Psi(2,1)^{2}\Phi(1\,,\,1)+X_{3}^{a-1}X_{0}^{a+d-1}\Phi(2,2)\Phi(1\,,\,2)\Phi(1\,,\,1)
+X_{0}^{a+d-1}\Psi(2,0)\Phi(1\,,\,1)^{2}\\
\quad \quad \quad {\rm if} \quad b=2
\quad {\rm and} \quad i=1\,,\\
0 \quad \quad \quad \quad \quad \quad \quad {\rm otherwise}\,;\\
\end{cases}$
\medskip
\item $M(b,i)=\begin{cases}
\underline{tX_{0}^{a+d+1}\Psi(b,i)}+\Psi(b,0)\Psi(b,i)
-tX_{3}^{a-1}X_{1+b+i}X_{b-1}\Psi(b,3-b)\\-tX_{3}^{2a}\Phi(b,b+i)
+(-1)^{i+1}tX_{3}^{a-1}X_{0}^{a+d}X_{3b+3i-3}\Phi(3-b-i,3-b-i)\\
\hspace*{0.60in}{\rm if} \quad i \in [0,2-b] \quad {\rm and} \quad b\neq 3\,,\\
0 \hspace*{0.40in} , \hspace*{0.30in}{\rm otherwise}. \\
\end{cases}
$\\
\end{itemize}
}
\end{theorem}
\noindent For our convenience let us set the following:
\begin{enumerate}
\item $V(i,j;q)=\begin{cases}B(i,j) & {\rm if} \quad q=1\,,\\
P(i,j) & {\rm if} \quad q=2\,;\\
\end{cases}$
\item $U(q)=\begin{cases}\Psi(b,3-b) & {\rm if} \quad q=1\,,\\
t & {\rm if} \quad q=2\,;\\
\end{cases}$
\item $u(q)=\begin{cases}\psi(b,3-b) & {\rm if} \quad q=1\,,\\
1 & {\rm if} \quad q=2.\\
\end{cases}$
\item $X_{i}=0 \quad {\rm if} \quad i \notin [0,3]$;
\item $\Phi(i,j)=\Phi(j,i)$;
\item $\Phi(i,j)=0 \quad {\rm if} \quad i,j \notin
[1,2]$;
\item $\Psi(b,j)=0 \quad {\rm
if} \quad j \notin [0,3-b]$;
\item $\phi(i,j)=\phi(j,i)$ \, and \, $V(i,j;q)=V(j,i;q)$.
\end{enumerate}
\noindent The following Lemma will be used for proving Theorem 5.1.
\begin{lemma}
{\it Given $b\in \{1, 2 , 3\}$, let $\mathfrak{Q}_{b}$ be the ideal
in $S$, generated by $\{P(i,j),P(\Psi(b,3-b))\}$\,. A
Gr\"{o}bner Basis for $\mathfrak{Q}_{b}$ is the set
$\widehat{\mathfrak{Q}_{b}}=\{P(i,j), P(\Psi(b,3-b)),L(i),B(i,j),D\}$.}
\end{lemma}
\proof We apply Buchberger's criterion and show that all the
$S$-polynomials reduce to zero modulo $\widehat{\mathfrak{Q}_{b}}$.
If $\gcd(\,\lm(f), \,\lm(g)\,) = 1$, then the corresponding $S$-polynomial reduces
to $0$ modulo $\widehat{\mathfrak{Q}_{b}}$. Let us
consider the other cases, that is, when the gcd is not one.
\begin{enumerate}
\item $S(V(1,i;q),L(1))=\Phi(2,2)V(1,i;q)-X_{i}\Psi(b,0)L(1)\,, \quad
{\rm where} \quad i\,\in\,[1,2]\\[2mm]
= -U(q)[\underline{X_{1+i}X_{0}\Phi(2,2)}
-X_{i}X_{2}\Phi(1,2)+X_{i}X_{3}\Phi(1,1)]
-u(q)\Phi(2,2)\Phi(1,i)\\[2mm]
=-X_{1+i}U(q)L(0)+\Phi(1,i)V(2,2;q)$\\[2mm]
\item $S(V(1,i;q),D)=
X_{1}^{i-1}\Phi(1,2)V(1,i;q)- X_{2}^{i-1}U(q)D\,, \quad
{\rm where} \quad i\,\in\,[1,2]\\[2mm]
=-X_{0}U(q)\Phi(1,2)[X_{1}^{i-1}X_{1+i}-X_{2}^{i-1}X_{2}] -
u(q)X_{1}^{i-1}\Phi(1,2)\Phi(1,i)+\phi(1,2)X_{2}^{i-1}U(q)\Phi(1,1)\\[2mm]
=X_{0}^{i-1}\Phi(1,i)V(i,2;q)+X_{2}\Phi(1,i-1)V(1,2;q)+
u(q)\Phi(1,2)L(i-2)$\\[2mm]
${\rm Note\,\, that\, the\, LT\,\, is}\,\,
X_{2}^{i-1}U(q)X_{1}X_{2}\Phi(1,1)\,\, {\rm if}\,\, i=1\,, \, {\rm
and\, the\, LT\,\, is}\,\, X_{2}^{i-1}U(q)X_{2}X_{0}\Phi(1,2)\,\, {\rm
if}\,\, i=2$.\\[2mm]
\item $S(V(1,1;q),V(2,2;q))= X_{2}^{2}V(1,1;q) -
X_{1}^{2}V(2,2;q)\\[2mm]
=-U(q)[X_{2}^{3}X_{0} - \underline{X_{1}^{3}X_{3}}]
+u(q)[X_{1}^{2}\Phi(2,2)-X_{2}^{2}\Phi(1,1)]\\[2mm]
=X_{1} X_{3}V(1,1;q)-X_{2}X_{0}V(2,2;q)+u(q)[X_{1}L(1) - X_{2}L(0)]$\\[2mm]
\item $S(V(1,2;q),V(i,i;q))=X_{i}V(1,2;q)-X_{j}V(i,i;q)\,,
\quad {\rm where} \quad i\in [1,2]\quad {\rm and} \quad j\, \in\, \{1,2\} \setminus \{i\}\\
=-U(q)[X_{i}X_{3}X_{0}-
\underline{X_{j}X_{\epsilon(i,i)}X_{2i-\epsilon(i,i)}}]
-u(q)[X_{i}\Phi(1,2)-X_{j}\Phi(i,i)] \\[2mm]
=X_{3i-3}V(j,j;q) +u(q)L(2-j)$\\[2mm]
\item $S(L(0),L(1))\,=X_{1}L(0)-X_{0}L(1)=-[
\underline{X_{1}^{2}}-X_{2}X_{0}]\Phi(1,2)+\phi(1,2)\Phi(1,1) =-D$\\[2mm]
\item $S(L(1),D)=
X_{1}\Phi(1,2)L(1)-\Phi(2,2)D\\[2mm]
=-\Phi(1,2)[X_{1}X_{2}\Phi(1,2)-X_{1}X_{3}\Phi(1,1)
-\underline{X_{2}X_{0}\Phi(2,2)}]
+\phi(1,2)\Phi(2,2)\Phi(1,1)\\[2mm]
=X_{2}\Phi(1,2)L(0)-\Phi(1,1)[X_{3}L(0)-X_{2}L(1)]$\\[2mm]
\item $S(P(i,j),B(i,j))=\Psi(b,3-b)P(i,j)-tB(i,j)\,,
\quad {\rm where} \quad i,j \in [1,2]\\[2mm]
= -\Psi(b,3-b)\Phi(i,j)+\underline{t\psi(b,3-b)\Phi(i,j)}
=\Phi(i,j)P(\Psi(b,3-b))$\\[2mm]
\item $S(P(i,j),B(l,j))=X_{l}\Psi(b,3-b)P(i,j)-tX_{i}B(l,j)\,,
\quad {\rm where} \quad i\,,\,j\,,\,l \,\in \,[1,2] \, \quad {\rm with} \quad l\neq i\\[2mm]
=-\Psi(b,3-b)[tX_{l}X_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}+X_{l}\Phi(i,j)
-tX_{i}X_{\epsilon(l,j)}X_{l+j-\epsilon(l,j)}]+tX_{i}\psi(b,3-b)\Phi(l,j)\\[2mm]
=
+X_{i}\Phi(l,j)P(\Psi(b,3-b))+(-1)^{i+j+1}\Psi(b,3-b)[X_{3j-3}P(3-j,3-j)+L(j-1)]$\\[2mm]
${\rm Note\,\, that\, the\, LT\,\, is}\quad
-tX_{l}X_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}\Psi(b,3-b)\quad {\rm
if}\quad l\neq j$\\[2mm]
${\rm and\,the \, LT\,\, is}\quad
tX_{i}X_{\epsilon(l,j)}X_{l+j-\epsilon(l,j)}\Psi(b,3-b)\quad {\rm
if}\quad l=j$\\[2mm]
\item $S(P(i,j),P(\Psi(b,3-b)))=X_{3}^{a+1}P(i,j)-X_{i}X_{j}P(\Psi(b,3-b)\,,
\quad {\rm where} \quad i\,,\,j\,\in\,[1,2]\\[2mm]
=-X_{3}^{a+1}[\underline{tX_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}}+\Phi(i,j)]
+X_{i}X_{j}[tX_{3-b}X_{0}^{a+d}+\Psi(b,3-b)]\\[2mm]
=-X_{\epsilon(i,j)}X_{i+j-\epsilon(i,j)}P(\Psi(b,3-b))+B(i,j)
+X_{3-b}X_{0}^{a+d}P(i,j)$\\[2mm]
\end{enumerate}
\noindent Hence the proof.\qed
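The reductions in this proof can also be checked mechanically. The sketch below (ours; it uses \texttt{SymPy}'s \texttt{reduced} with the graded reverse lexicographic order as a stand-in for the weighted order of this section, which happens to pick out the same underlined leading terms for this particular $S$-pair) verifies item 5, i.e., that $S(L(0),L(1))$ reduces to zero:
\begin{verbatim}
from sympy import symbols, LM, LT, lcm, expand, reduced

x1, x2, x3, x0, F22, F12, F11 = symbols('x1 x2 x3 x0 F22 F12 F11')
gens, ordn = (x1, x2, x3, x0, F22, F12, F11), 'grevlex'

L0 = x0*F22 - x1*F12 + x2*F11      # L(0), writing F for Phi
L1 = x1*F22 - x2*F12 + x3*F11      # L(1)
D  = (x1**2 - x2*x0)*F12 - (x1*x2 - x3*x0)*F11

def spoly(f, g):
    mlcm = lcm(LM(f, *gens, order=ordn), LM(g, *gens, order=ordn))
    return expand(mlcm/LT(f, *gens, order=ordn)*f
                  - mlcm/LT(g, *gens, order=ordn)*g)

q, r = reduced(spoly(L0, L1), [L0, L1, D], *gens, order=ordn)
print(r)                            # 0, since S(L(0),L(1)) = -D
\end{verbatim}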
\begin{lemma}{\it A minimal Gr\"{o}bner basis for the ideal
$\mathfrak{q}_{b}=\mathfrak{Q}_{b}\cap R_{b}$ is the set $\widehat{\mathfrak{q}_{b}} = \{L(i), B(i,j)\}$.}
\end{lemma}
\proof By the Elimination theorem, a Gr\"{o}bner basis for the ideal $\mathfrak{q}_{b}$ is the
set $\{L(i), B(i,j), D\}$, which contains only those elements of $\widehat{\mathfrak{Q}_{b}}$,
which do not involve the variable $t$. Now, $D=X_{0}L(1)-X_{1}L(0)$ and the leading monomials
of $L(i)$ or $B(i,j)$ do not divide each other. Therefore, by removing $D$ from the above list
we obtain a minimal Gr\"{o}bner basis $\widehat{\mathfrak{q}_{b}} = \{L(i), B(i,j)\}$ for the ideal
$\mathfrak{q}_{b}$.\qed
\begin{corollary}{\it A minimal Gr\"{o}bner basis for the ideal $E_{3}$ is the set
$\widehat{E_3} = \{L(i), B(i,j)\}$.}
\end{corollary}
\proof Note that $\mathfrak{q}_{3}=E_{3}$. Hence, the proof follows
from Lemma 5.3\,.\qed
\begin{remark}
Note that, for $b=3$, the ideal $\wp$ is a prime ideal with
$\mu({\wp})=4=1+{\rm ht}(\wp)$, and therefore an ideal of linear
type by Huneke (1981) and Valla (1980, 1980/81). It is interesting to note that
$\mu({\mathfrak{q}_{b}})=4=1+{\rm ht}(\mathfrak{q}_{b})$, and what we have proved above
shows that $\mathfrak{q}_{b}$ is an ideal of linear type for $b\in
[1,3]$\,, but $\mathfrak{q}_{b}$ is not a prime
ideal if $b\neq 3$. This produces a class of non-prime ideals of linear type which
have the property that $\mu(-)= 1+{\rm ht}(-)$.
\end{remark}
\noindent \textbf{Proof of Theorem 5.1.}
\proof We apply Buchberger's criterion and show that all the
$S$-polynomials reduce to zero modulo $\widehat{\mathfrak{a}_{b}}$.
If $\gcd(\,\lm(f), \,\lm(g)\,) = 1$, then the corresponding $S$-polynomial reduces
to $0$ modulo $\widehat{\mathfrak{a}_{b}}$. Let us
consider the other cases, that is, when the gcd is not one.
\medskip
Note that, by Lemma 5.2, every non-zero polynomial $\textbf{H} \in K[t,X_{i},\Psi(b,3-b),\Phi(i,j)]
\subseteq R_{b}[t]$, with $\overline{\varphi_{b}}(\textbf{H})=0$, can be expressed as $\textbf{H}=\sum_{i}
c_{i}H_{i}$, with $c_{i} \in R$, $H_{i} \in \widehat{\mathfrak{Q}_{b}}$ and ${\rm Lm}(\textbf{H}) \geq
{\rm Lm}(c_{i}H_{i})$, whenever $c_{i}\neq 0$. Henceforth, the symbols $\textbf{G}$ and $\textbf{H}$ will only
denote polynomials in $K[t,X_{i},\Psi(b,3-b),\Phi(i,j)] \subseteq R_{b}[t]$, such that
$\overline{\varphi_{b}}(\textbf{H})=0$\,. We use this observation below to prove that the $S$-polynomials reduce
to zero. We only indicate the proof for the $S$-polynomial $S(A(i;b,j),A(l;b,j))$; for all other cases the proof is similar.
\begin{enumerate}
\item $S(A(i;b,j),A(l;b,j)) = X_{l}A(i;b,j)-X_{i}A(l;b,j)\,,\quad
{\rm with }\quad i<l \quad {\rm and}\quad i,l \in [1,3]\\[2mm]
= -X_{l}[X_{b+i+j-\epsilon(i,b+j)}\Psi(b,\epsilon(i,b+j)-b)
+X_{3}^{a}\Phi(i,b+j)
-X_{0}^{a+d}\{\Phi(i,j)-\Phi(b+i+j-3,3-b)\}]\\[2mm]
+X_{i}[X_{b+l+j-\epsilon(l,b+j)}\Psi(b,\epsilon(l,b+j)-b)
+X_{3}^{a}\Phi(l,b+j)
-X_{0}^{a+d}\{\Phi(l,j)-\Phi(b+l+j-3,3-b)\}]$\\[2mm]
= $-\underline{X_{l}X_{b+j+i-\epsilon(i,b+j)}\Psi(b\,,\,\epsilon(i,b+j)-b)}
+X_{i}X_{b+j+l-\epsilon(l,b+j)}\Psi(b\,,\,\epsilon(l,b+j)-b) + \textbf{G}$\\[2mm]
where \,$\textbf{G}$\, is an element of \,$R[t,X_{1},X_{2},X_{3},X_{0},\Psi(b,3-b),\Phi(i,j)]$.
Note that the only monomial of \,$X_{l}A(i;b,j)-X_{i}A(l;b,j)$, which does not belong to
\,$R[t,X_{1},X_{2},X_{3},X_{0},\Psi(b,3-b),\Phi(i,j)]$\, is $-X_{l}X_{b+i+j-\epsilon(i,b+j)}\Psi(b,\epsilon(i,b+j)-b)$ \, if
\, $(i;b,j)=(1;1,0)$. Therefore, every monomial of
\begin{eqnarray*}
\textbf{H} & = & S(A(i;b,j),A(l;b,j))+X_{b+j+i-\epsilon(i,b+j)}A(l;b,\epsilon(i,b+j)-b)\\
& = & X_{l}A(i;b,j)-X_{i}A(l;b,j)+X_{b+j+i-\epsilon(i,b+j)}A(l;b,\epsilon(i,b+j)-b)
\end{eqnarray*}
belongs to \,$R[t,X_{1},X_{2},X_{3},X_{0},\Psi(b,3-b),\Phi(i,j)]$\,, since every monomial of
\,$A(l;b,\epsilon(i,b+j)-b)$ belongs to \,$R[t,X_{1},X_{2},X_{3},X_{0},\Psi(b,3-b)]$, except
\,$X_{l}\Psi(b,\epsilon(i,b+j)-b)$\, with $(i;b,j)=(1;1,0)$.
\medskip
Moreover, $X_{l}A(i;b,j)$, \,$X_{i}A(l;b,j)$ and \,$X_{b+j+i-\epsilon(i,b+j)}A(l;b,\epsilon(i,b+j)-b)$
belong to ${\rm ker}(\overline{\varphi_{b}})$.
Hence, $\overline{\varphi_{b}}(\textbf{H})=0$. Therefore, we can write
\medskip
$S(A(i;b,j),A(l;b,j)) = X_{l}A(i;b,j)-X_{i}A(l;b,j)\,, {\rm with} \,i<l\, {\rm and} \,i,l \in [1,3];\\[2mm]
= -\underline{X_{l}X_{b+j+i-\epsilon(i,b+j)}\Psi(b\,,\,\epsilon(i,b+j)-b)}
+X_{i}X_{b+j+l-\epsilon(l,b+j)}\Psi(b\,,\,\epsilon(l,b+j)-b) + \textbf{G}\\[2mm]
= -X_{b+j+i-\epsilon(i,b+j)}A(l;b,\epsilon(i,b+j)-b)+\textbf{H}.$
\medskip
\noindent Now one can apply Lemma 5.3 to conclude that there exist \,$c_{i} \in R$ and $H_{i} \in \widehat{\mathfrak{q}_{b}}$, such that
\,$\textbf{H}=\sum_{i}c_{i}H_{i}$.\\[2mm]
\item $S(D,A(1;b,j))=\Psi(b,j)D-X_{1}\Phi(1,2)A(1;b,j)\\[2mm]
=-\underline{X_{2}X_{0}\Psi(b,j)\Phi(1,2)}-\phi(1,2)\Psi(b,j)\Phi(1,1)
+X_{1}X_{0}\Psi(b,1+j)\Phi(1,2)+\textbf{G}\\[2mm]
=-[X_{0}\Phi(1,2)+X_{1}\Phi(1,1)]A(2;b,j)+X_{0}\Phi(1,1)A(3;b,j)
+X_{0}\Phi(1,2)A(1;b,1+j)+\textbf{H}$\\[2mm]
\item $S(L(1),A(1;b,j))=\Psi(b,j)L(1)-\Phi(2,2)A(1;b,j)\\[2mm]
=-\underline{X_{2}\Psi(b,j)\Phi(1,2)}+X_{3}\Psi(b,j)\Phi(1,1)
+X_{0}\Psi(b,1+j)\Phi(2,2)+\textbf{G}\\[2mm]
=-\Phi(1,2)[A(2;b,j)-A(1;b,1+j)]+\Phi(1,1)[A(3;b,j)-A(2;b,1+j)]
+\Psi(b,1+j)L(0) +\textbf{H}$\\[2mm]
\item $S(P(i,l),P(\Psi(b,j))=X_{3}^{a}X_{b+j}P(i,l)-X_{i}X_{l}P(\Psi(b,j))
\quad {\rm where} \quad b+j \notin \{i,l\}\quad {\rm and} \quad i\leq l\\[2mm]
=-\underline{tX_{b+j}X_{\epsilon(i,l)}X_{i+l-\epsilon(i,l)}X_{3}^{a}}-X_{b+j}X_{3}^{a}\Phi(i,l)
+tX_{i}X_{l}X_{j}X_{0}^{a+d}+X_{i}X_{l}\Psi(b,j)\\[2mm]
=-X_{\epsilon(i,l)}X_{i+l-\epsilon(i,l)}P(\Psi(b,j))
-X_{i+l-\epsilon(i,l)}A(\epsilon(i,l);b,j)
+X_{j}X_{0}^{a+d}P(i,l)+X_{i}A(l;b,j)
+\textbf{H}$\\[2mm]
\item $\displaystyle{S(P(\Psi(b,j)),P(b+j,l))
=X_{l}P(\Psi(b,j))-X_{3}^{a}P(b+j,l) }$ $\displaystyle{=
\underline{tX_{\epsilon(b+j,l)}X_{b+j+l-\epsilon(b+j,l)}X_{3}^{a}}
-X_{l}\Psi(b,j)+\textbf{G}}$\\[2mm]
$\displaystyle{=-A(l;b,j)-X_{0}P(\Psi(b,3b+5j+3l-5)) +\textbf{H} }$\\[2mm]
\item $\displaystyle{S(P(\Psi(b,j)),B(b+j,l))
=X_{l}\Psi(b,3-b)P(\Psi(b,j))-tX_{3}^{a}B(b+j,l) }$\\[2mm]
$\displaystyle{=
\underline{tX_{\epsilon(b+j,l)}X_{b+j+l-\epsilon(b+j,l)}X_{3}^{a}\Psi(b,3-b)}
-X_{l}\Psi(b,j)\Psi(b,3-b)+\textbf{G}}$\\[2mm]
$\displaystyle{=-\Psi(b,3-b)A(l;b,j)-X_{0}\Psi(b,2)P(\Psi(b,3b+5j+3l-5)) +\textbf{H} }$\\[2mm]
\item $S(\,P(\Psi(b,i)\,,\,P(\Psi(b,j))\,)=X_{b+j}P(\Psi(b,i)\,-\,X_{b+i}P(\Psi(b,j))
\quad {\rm assume}\quad i\,<\,j \\[2mm]
=-tX_{i}X_{b+j}X_{0}^{a+d}+\underline{tX_{j}X_{b+i}X_{0}^{a+d}}-X_{b+j}\Psi(b,i)+X_{b+i}\Psi(b,j)\\[2mm]
=-X_{0}^{a+d}[P(i,b+j)-P(j,b +i)]-A(b+j;b,i)+A(b+i;b,j) +\textbf{H}$\\[2mm]
\item $\displaystyle{S(A(i;b,j),P(i,l))=tX_{l}A(i;b,j)-\Psi(b,j)P(i,l)}$\\[2mm]
$\displaystyle{=-tX_{l}X_{b+j+i-\epsilon(i,b+j)}\Psi(b,\epsilon(i,b+j)-b)
+\underline{tX_{\epsilon(i,l)}X_{i+l-\epsilon(i,l)}\Psi(b,j)}
+\Psi(b,j)\Phi(i,l)+\textbf{G}}$\\[2mm]
$\displaystyle{=-tX_{0}A(l;b,3b+5j+3i-5)
+tX_{i+l-\epsilon(i,l)}A(\epsilon(i,l);b,j) +\Phi(i,l)P(\Psi(b,j))
+\textbf{H} }$\\[2mm]
\item $S(A(i;b,j),B(i,l))=X_{l}\Psi(b,3-b)A(i;b,j)-\Psi(b,j)B(i,l)\\[2mm]
=-X_{l}X_{b+j+i-\epsilon(i,b+j)}\Psi(b,\epsilon(i,b+j)-b)\Psi(b,3-b)
+\underline{X_{\epsilon(i,l)}X_{i+l-\epsilon(i,l)}\Psi(b,j)\Psi(b,3-b)}\\[2mm]
+\psi(b,3-b)\Psi(b,j)\Phi(i,l)+\textbf{G}\\[2mm]
=-X_{0}\Psi(b,2)A(l;b,3b+5j+3i-5)
+X_{i+l-\epsilon(i,l)}\Psi(b,3-b)A(\epsilon(i,l);b,j)\\[2mm]
+\Phi(i,l)\Big{[}X_{3}^{a}A(3;b,j)-X_{0}^{a+d}A(3-b;b,j)\Big{]}
+\textbf{H}$\\[2mm]
\item $\displaystyle{S(M(b,j),P(\Psi(b,i)))=X_{3}^{a}X_{b+i}M(b,j)
-X_{0}^{a+d+1}\Psi(b,j)P(\Psi(b,i))}$\\[2mm]
$\displaystyle{=X_{0}^{a+d+1}\Psi(b,j)\Big{[}
\underline{tX_{i}X_{0}^{a+d}}+\Psi(b,i)\Big{]}+X_{3}^{a}X_{b+i}\Psi(b,0)\Psi(b,j) +\textbf{G}}$\\[2mm]
$\displaystyle{=X_{0}^{a+d+1}M(b,i+j)+tX_{0}^{2a+2d+1}A(i;b,j)}$
$\displaystyle{-\Psi(b,3-b)P(\Psi(b,j))\Big{\{}X_{i-2}^{a+d+1}+X_{b-2}^{a+d+1}\Big{\}}}$\\[2mm]
$\displaystyle{+X_{4i-4}^{a+d+1}\Big{[}Q(b,2j)
-\Psi(b,2)P(\Psi(b,2j-2))-X_{0}^{a+d+1}\Phi(2b-1,2b-1)P(\Psi(b,2j-2))\Big{]}}$\\[2mm]
$\displaystyle{+X_{3}^{a-1}X_{b+i}\Psi(b,j)A(3;b,0)
-X_{3}^{a-1}X_{b}\Big{[}X_{b}\Psi(b,3-b)+X_{0}^{a+d}\Phi(b,3-b)\Big{]}P(\Psi(b,j))
+\textbf{H}}$\\[2mm]
\item $S(M(b,j),P(i,l))=X_{i}X_{l}M(b,j)
-X_{0}^{a+d+1}\Psi(b,j)P(i,l)\quad {\rm assume\,\,that}\,\,i \leq l\\[2mm]
=X_{i}X_{l}\Psi(b,0)\Psi(b,j)+X_{0}^{a+d+1}[\underline{tX_{\epsilon(i,l)}X_{i+l-\epsilon(i,l)}}
+\Phi(i,l)]\Psi(b,j)+\textbf{G}\\[2mm]
=X_{i}\Psi(b,0)A(l;b,j)
-X_{i}[X_{b+l+j-\epsilon(l,b+j)}\Psi(b,\epsilon(l,b+j)-b)
+X_{3}^{a}\Phi(l,b+j)]P(\Psi(b,0))\\[2mm]
+X_{i}X_{0}^{a+d}\{\Phi(l,j)-\Phi(b+l+j-3,3-b)\}P(\Psi(b,0))\\[2mm]
+X_{0}^{a+d+1}[
tX_{i+l-\epsilon(i,l)}A(\epsilon(i,l);b,j)-\Phi(i,l)P(\Psi(b,j))]\\[2mm]
+tX_{0}\psi(b,0)A(3j+1;b,7-2b-2j-2l)+\textbf{H}$\\[2mm]
\item $S(L(0),M(b,j))=tX_{0}^{a+d}\Psi(b,j)L(0)-\Phi(2,2)M(b,j)\\[2mm]
=-tX_{0}^{a+d}[\underline{X_{1}\Phi(1,2)}-X_{2}\Phi(1,1)]\Psi(b,j)-\Psi(b,0)\Psi(b,j)\Phi(2,2)
+\textbf{G}\\[2mm]
=-tX_{0}^{a+d}[\Phi(1,2)A(1;b,j)-\Phi(1,1)A(2;b,j)]-
\Psi(b,j)Q(b,b)-Q(b,b-1)\\[2mm]
-\Phi(1,2)Q(b,2j)-M(b,j+1)\Phi(1,2)
+(-1)^{b}\Psi(b,3-b)\Phi(1,b)P(\Psi(b,j))\\[2mm]
+\Psi(1,2)\Phi(1,2)P(\Psi(b,j-1))
-X_{0}^{a+d-1}\Phi(1,j+1)\Phi(1,1)P(\Psi(b,b+j-2))+\textbf{H}$\\[2mm]
\item $S(P(\Psi(b,j)),A(b+j;b,l))=\Psi(b,l)P(\Psi(b,j))-tX_{3}^{a}A(b+j;b,l)
\quad {\rm when} \quad b+j\neq 3\\[2mm]
=-\Psi(b,l)[\underline{tX_{j}X_{0}^{a+d}}+\Psi(b,j)]
+tX_{3}^{a}X_{2b+j+l-\epsilon(b+j,b+l)}\Psi(b,\epsilon(b+j,b+l)-b)+\textbf{G}\\[2mm]
=-tX_{0}^{a+d}A(j;b,l)-M(b,j+l)-Q(b,4l+2j-4)+tX_{3}^{a-1}X_{0}A(j+3;b,b+l)\\[2mm]
+\Psi(b,2)P(\Psi(b,l+j-2))+\Psi(b,1)P(\Psi(b,4b+j-9))
-X_{0}^{a+d-1}\Phi(1,1)P(\Psi(b,4l+3j-7))+\textbf{H}$\\[2mm]
\item $S(P(\Psi(b,j)),A(3;b,l))=\Psi(b,l)P(\Psi(b,j))-tX_{b+j}X_{3}^{a-1}A(3;b,l)
\\[2mm]
=-\Psi(b,l)[\underline{tX_{j}X_{0}^{a+d}}+\Psi(b,j)]
+tX_{b+j}X_{b+l}X_{3}^{a-1}\Psi(b,3-b)+\textbf{G}\\[2mm]
=-tX_{0}^{a+d}A(j;b,l)-M(b,j+l)-Q(b,4l+2j-4)\\[2mm]
+\Psi(b,2)P(\Psi(b,l+j-2))+\Psi(b,1)P(\Psi(b,4b+j-9))
-X_{0}^{a+d-1}\Phi(1,1)P(\Psi(b,4l+3j-7))+\textbf{H}$\\[2mm]
\item $S(M(b,j),A(i;b,j))=X_{i}M(b,j)-tX_{0}^{a+d+1}A(i;b,j)\\[2mm]
=X_{i}[\Psi(b,0)\Psi(b,j)-tX_{b-1}X_{b+j+1}X_{3}^{a-1}\Psi(b,3-b)]
+tX_{b+i+j-\epsilon(i,b+j)}X_{0}^{a+d+1}\Psi(b,\epsilon(i,b+j)-b)+\textbf{G}\\[2mm]
=\Psi(b,0)A(i;b,j)+X_{0}M(b,b+i+j-1)\\[2mm]
-X_{0}\Psi(b,3-b) P(\Psi(b,b+i+j-3) +\Psi(b,3-b)
A(b+i+j-3;b,0)\\[2mm]
-[X_{3}^{a}\Phi(i,b+j)-X_{0}^{a+d}
\{\Phi(i,j)-\Phi(b+i+j-3,3-b)\}]P(\Psi(b,0))+\textbf{H}$\\[2mm]
${\rm
Note\,\,that\, the \,LT\,\,is}\,\,-tX_{i}X_{b-1}X_{b+j+1}X_{3}^{a-1}\Psi(b,3-b)\,\,{\rm
if}\,\,(i;b,j)\neq (1;1,0),\\[2mm]
{\rm and\, the\, LT\,\, is}\,\,
tX_{b+i+j-\epsilon(i,b+j)}X_{0}^{a+d+1}\Psi(b,\epsilon(i,b+j)-b)\,\,{\rm if}\,\,(i;b,j)=(1;1,0)$.\\[2mm]
\end{enumerate}
\noindent The rest of the $S$-polynomial computations and their
reductions modulo $\widehat{\mathfrak{a}_{b}}$ are divided into two cases,
depending on whether $b=1$ or $b=2$\,.
\noindent \textbf{\underline{Case (i): $b=1$}}
\begin{enumerate}
\item $S(A(i;1,0),A(i;1,1))=\Psi(1,1)A(i;1,0)-\Psi(1,0)A(i;1,1)\\[2mm]
=-X_{1+i-\epsilon(1,i)}\Psi(1,\epsilon(1,i)-1)\Psi(1,1)+X_{i-1}\Psi(1,0)\Psi(1,2)\\[2mm]
-X_{3}^{a}[\Psi(1,1)\Phi(i,1)-\Psi(1,0)\Phi(i,2)]
-X_{0}^{a+d}[\Psi(1,1)\Phi(i-2,2)+\Psi(1,0)\{\Phi(i,1)-\Phi(i-1,2)\}]\\[2mm]
=-X_{3}^{a-1}[\Phi(i,1)A(3;1,1)-\Phi(i,2)A(3;1,0)]
-\Psi(1,2)[A(1+i-\epsilon(1,i);1,1)-A(i-1;1,0)]\\[2mm]
+X_{0}^{a+d}Q(1,i-2) -X_{0}Q(1,i+1)+\textbf{H}$\\[2mm]
${\rm Note \,\, that \, the \,LT \,\, is}\quad
-X_{1+i-\epsilon(i,1)}\Psi(1,1)\Psi(1,\epsilon(i,1)-1)\quad {\rm
if}\,\,
i=1$,\\[2mm]
${\rm and\, the\, LT\,\, is}\quad X_{i-1}\Psi(1,0)\Psi(1,2)
\quad {\rm if} \,\, i\neq 1$.\\[2mm]
\item $S(P(\Psi(1,0)),L(1))=\Phi(2,2)P(\Psi(1,0))-tX_{3}^{a}L(1)\\
=-\underline{tX_{0}^{a+d+1}\Phi(2,2)}-\Phi(2,2)\Psi(1,0)+\textbf{G}
=-Q(1,1)+\Phi(1,2)P(\Psi(1,1))+\textbf{H}$\\[2mm]
\item $S(P(\Psi(1,0)),D)=X_{1}\Phi(1,2)P(\Psi(1,0))-tX_{3}^{a}D\\[2mm]
=-\underline{tX_{2}X_{3}^{a}X_{0}\Phi(1,2)}-X_{1}\Psi(1,0)\Phi(1,2)+\textbf{G}
=-\Phi(1,2)A(1;1,0)+X_{0}\Phi(1,2)P(\Psi(1,1))
+\textbf{H}$\\[2mm]
\item $S(Q(1,1),L(i))=X_{i}Q(1,1)-\Psi(1,0)L(i)\\[2mm]
=-\Phi(1,2)[X_{i}\Psi(1,1)-\underline{X_{i+1}\Psi(1,0)}]
-X_{2+i}\Psi(1,0)\Phi(1,1)+\textbf{G}\\[2mm]
=-\Phi(1,2)[A(i;1,1)-A(i+1;1,0)]-\Phi(1,1)A(i+2;1,0)+\textbf{H}$\\[2mm]
\item $S(Q(1,1),A(i;1,0))=X_{i}Q(1,1)-\Phi(2,2)A(i;1,0)\\[2mm]
=-X_{i}\Psi(1,1)\Phi(1,2)+X_{1+i-\epsilon(i,1)}\Psi(1,\epsilon(i,1)-1)\Phi(2,2)+\textbf{G}\\[2mm]
=-\Phi(1,2)[A(i;1,1)-A(1;1,i)]-\Phi(1,1)A(2;1,i)+\Psi(1,i)L(0)+\textbf{H}$\\[2mm]
${\rm Note \,\, that \, the\, LT \,\, is}\quad -X_{i}\Psi(1,1)\Phi(1,2)\quad {\rm if} \quad i \neq 1$,\\[2mm]
${\rm and \, the\, LT \,\, is}\quad
X_{1+i-\epsilon(i,1)}\Psi(1,\epsilon(i,1)-1)\Phi(2,2)
\quad {\rm if} \quad i = 1$.\\[2mm]
\item $S(Q(1,1),M(1,0))=tX_{0}^{a+d+1}Q(1,1)-\Phi(2,2)M(1,0)\\[2mm]
=-\underline{tX_{0}^{a+d+1}\Psi(1,1)\Phi(1,2)}-\Psi^{2}(1,0)\Phi(2,2)+\textbf{G}\\[2mm]
=-\Phi(1,2)M(1,1)-\Psi(1,0)Q(1,1)-\Psi(1,2)\Phi(1,1)P(\Psi(1,0))+\textbf{H}$\\[2mm]
\item $S(Q(1,2),A(i;1,1))=X_{i}Q(1,2)-\Psi(1,1)A(i;1,1)\\[2mm]
=-\Psi(1,2)[\underline{X_{i}\Psi(1,0)}-X_{i-1}\Psi(1,1)]
+X_{3}^{a}\Psi(1,1)\Phi(i,2)
\\[2mm]
+X_{0}^{a+d-1}[X_{i}\Psi(1,0)\Phi(1,1)-X_{0}\{\Phi(i,1)-\Phi(i-1,2)\}\Psi(1,1)]+\textbf{G}\\[2mm]
=-\Psi(1,2)[A(i;1,0)-A(i-1;1,1)]+X_{3}^{a-1}\Phi(i,2)A(3;1,1)\\[2mm]
+X_{0}^{a+d-1}[\Psi(1,1)L(i-3)+\Phi(1,2)A(i-2;1,1)-\Phi(1,1)A(8-2i;1,1)+\textbf{H}$\\[2mm]
\item $S(Q(1,2),M(1,1))=tX_{0}^{a+d+1}Q(1,2)-\Psi(1,1)M(1,1)\\[2mm]
=-tX_{0}^{a+d+1}[\underline{\Psi(1,0)\Psi(1,2)}-X_{0}^{a+d-1}\Psi(1,0)\Phi(1,1)]-\Psi(1,0)\Psi^{2}(1,1)\\[2mm]
+tX_{3}^{a}\Psi(1,1)[X_{0}\Psi(1,2)-X_{0}^{a+d}\Phi(1,1)+X_{3}^{a}\Phi(1,2)]+\textbf{G}\\[2mm]
=-[\Psi(1,2)-X_{0}^{a+d-1}\Phi(1\,,\,1)]M(1,0)
+X_{0}^{a+d-1}X_{3}^{a-1}[ \Phi^{2}(1,2)-\Phi(2,2)\Phi(1,1)]P(\Psi(1,0))\\[2mm]
+X_{3}^{a-1}\Psi(1,2)\Phi(2,2)P(\Psi(1,0)) -\Psi(1,0)Q(1,2)
+tX_{3}^{a-1}[X_{0}\Psi(1,2)+X_{0}^{a+d}\Phi(1,1)+X_{3}^{a}\Phi(1,2)]A(3;1,1)\\[2mm]
-tX_{3}^{2a-1}[\Psi(1,2)L(1)-X_{0}^{a+d-1}\{
\Phi(1,2)L(0)+\Phi(1,1)L(1) \} ]+\textbf{H}$\\[2mm]
\item $S(M(1,0),M(1,1))=\Psi(1,1)M(1,0)-\Psi(1,0)M(1,0)\\[2mm]
=-tX_{3}^{a-1}\Psi(1,1)[ X_{2}X_{0}\Psi(1,2)+X_{3}^{a+1}\Phi(1,1)
+X_{0}^{a+d}X_{0}\Phi(2,2)]\\[2mm]
+tX_{3}^{a-1}\Psi(1,0)[
\underline{X_{3}X_{0}\Psi(1,2)}+X_{3}^{a+1}\Phi(1,2)
-X_{0}^{a+d}X_{3}\Phi(1,1)]+\textbf{G}\\[2mm]
=-tX_{3}^{a-1}X_{0}\Psi(1,2)[A(2;1,1)-A(3;1,0)]
-tX_{3}^{2a-1}[\Phi(1,1)A(3;1,1)-\Phi(1,2)A(3;1,0)]\\[2mm]
-tX_{3}^{a-1}X_{0}^{a+d}[\Psi(1,1)L(0)
+\Phi(1,2)A(1;1,1)-\Phi(1,1)A(2;1,1)+\Phi(1,1)A(3;1,0)]+\textbf{H}$\\[2mm]
\end{enumerate}
\noindent \textbf{\underline{Case(ii): $b=2$}}
\begin{enumerate}
\item $S(M(2,0),Q(2,1))=\Psi(2,0)\Phi(2,2)M(2,0)
-tX_{0}^{a+d+1}Q(2,1)\\[2mm]
=\Psi(2,0)\Phi(2,2)[\Psi(2,0)\Psi(2,0)
-\underline{tX_{3}^{a}X_{1}\Psi(2,1)}-tX_{3}^{2a}\Phi(2,2)
-tX_{3}^{a-1}X_{0}^{a+d}X_{3}\Phi(1,1)]\\[2mm]
+tX_{0}^{a+d+1}[ X_{3}^{a-1}\Psi(2,1)\Phi^{2}(2\,,\,2)
+\Psi(2,0)\Psi(2,1)\Phi(1,2)]
+\textbf{G}\\[2mm]
=\Psi(2,0)Q(2,1)+\Psi(2,1)\Phi(1,2)M(2,0)
-X_{0}^{a+d-1}\Phi^{2}(1,1)M(2,0)-X_{3}^{a-1}\Psi(2,0)\Phi^{2}(2,2)P(\Psi(2,1))\\[2mm]
+\Psi^{2}(2,1)\Phi(1,1)P(\Psi(2,0))
-X_{3}^{a-1}X_{0}^{a+d-1}[\Phi^{3}(1,2)
-\Phi(2,2)\Phi(1\,,\,2)\Phi(1\,,\,1)]P(\Psi(2,0))\\[2mm]
-tX_{3}^{a-1}\Phi(2,2)[X_{1}\Psi(2,1)A(3;2,0)
+X_{0}^{a+d}\{\Phi(1\,,\,1)A(3;2,0) +\Phi(2,2)A(1;2,0)\}]
+\textbf{H}$\\[2mm]
\item $S(Q(2,1),A(i;2,0))=X_{i}Q(2,1)-\Psi(2,0)\Phi(2,2)A(i;2,0)\\[2mm]
=-X_{i}\Psi(2,0)[\Psi(2,1)\Phi(1,2)-X_{0}^{a+d-1}\Phi(1\,,\,1)^{2}]
\\[2mm]
+\Psi(2,0)\Phi(2,2)[\underline{X_{i-1}\Psi(2,1)} +X_{3}^{a}\Phi(i,2)
+X_{0}^{a+d}\Phi(i-1,1)]+\textbf{G}\\[2mm]
=-[\Psi(2,1)\Phi(1,2)-X_{0}^{a+d-1}\Phi^{2}(1,1)]A(i;2,0)
+\Psi(2,1)\Phi(2,2)A(i-1;2,0)\\[2mm]
+X_{3}^{a-1}\Phi(i,2)\Phi(2,2)A(3;2,0)
+X_{0}^{a+d-1}\Phi(i-1,1)\Psi(2,0)L(0)\\[2mm]
+X_{0}^{a+d-1}\Phi(i-1,1)[\Phi(1,2)A(1;2,0)
-\Phi(1,1)A(2;2,0)]+\textbf{H}$\\[2mm]
\item $S(Q(2,1),L(i))=X_{i}Q(2,1)-\Psi(2,0)^{2}L(i)\\[2mm]
=-X_{i} \Psi(2,0)[\Psi(2,1)\Phi(1,2)-
X_{0}^{a+d-1}\Phi(1\,,\,1)^{2}]
+\Psi^{2}(2,0)[\underline{X_{1+i}\Phi(1,2)}-X_{2+i}\Phi(1,1)]+\textbf{G}\\[2mm]
=\Psi(2,0)[\Phi(1,2)A(i+1;2,0)-\Phi(1,1)A(i+2;2,0)]\\[2mm]
-\Psi(2,1)[\Phi(1,1)A(i+1;2,0)-\Phi(1,2)A(i;2,0)]\\[2mm]
+X_{3}^{a-1}[\Phi(2,1+i)\Phi(1,2)-\Phi(2,2+i)\Phi(1,1)]A(3;2,0)\\[2mm]
+X_{0}^{a+d-1}\Phi(1,1)^{2}A(i;2,0)+\textbf{H}$\\[2mm]
\end{enumerate}
Hence the proof.\qed
\begin{theorem}
{\it Given $b\in \{1, 2\}$, a Gr\"{o}bner Basis for the ideal $E_{b}$
is the set
$$\widehat{E_{b}}=\{A(i;b,j),B(i,j),D,L(i),Q(b,i)\}.$$}
\end{theorem}
\proof Note that, $\widehat{E_{b}}=\widehat{\mathfrak{a}_{b}}\cap R_{b}$\,.\qed
\bigskip
Furthermore, if $b\in \{1, 2\}$, \,$b+l=2$ and \,$i,j \in [1,2]$, we have
$$B(i,j)=X_{i+1}A(j;b,l)-X_{j}A(i+1;b,l)-X_{3}^{a}L(i+j-1)
-X_{0}^{a+d}L(2i+2j-5b)+X_{0}^{a+d}L(7-b-i-j).$$
Therefore, a smaller set $\widehat{\widehat{E_{b}}} =
\{A(i;b,j),L(i),Q(b,i),D\}$ generates the ideal $E_{b}$.
\section{Smoothness of Blowups}
\noindent Let $E$ and $\mathfrak{P}=\langle Y_{1},\ldots,Y_{n-1} \rangle$ be
prime ideals of a ring $N=K[Y_{1},\ldots,Y_{n}]$ with $E \subseteq
\mathfrak{P}$\,. Let $\mathcal{J}_{\mathfrak{P}}$ denote the
Jacobian matrix of the ideal $E$, taken modulo $\mathfrak{P}$.
Given an indeterminate $\zeta \in \{Y_{1},\ldots,Y_{n}\}$, let
$C_{\zeta}$ denote the column in the matrix
$\mathcal{J}_{\mathfrak{P}}$ corresponding to the indeterminate
$\zeta$. Then, it is obvious from the construction of
$\mathfrak{P}$ that the column $C_{\zeta}$ is non-zero if and only
if there exists a polynomial $F \in E$ such that $F$ has at least
one term of the form $k\zeta Y_{n}^{l}$, for some $k\in K$ and $l
\in \mathbb{N}$\,.
\medskip
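In concrete terms, taking the Jacobian matrix modulo $\mathfrak{P}$ simply means setting $Y_{1}=\cdots=Y_{n-1}=0$ in each partial derivative. A generic \texttt{SymPy} illustration (ours, with hypothetical generators rather than the ideals $E_{b}$):
\begin{verbatim}
from sympy import symbols, Matrix

y1, y2, y3 = symbols('y1 y2 y3')         # n = 3, so P = <y1, y2>
F = Matrix([y1*y3 - y2**2, y2*y3**2])    # hypothetical generators of E

J = F.jacobian(Matrix([y1, y2, y3]))
J_mod_P = J.subs({y1: 0, y2: 0})         # the Jacobian taken modulo P
print(J_mod_P)         # Matrix([[y3, 0, 0], [0, y3**2, 0]])
print(J_mod_P.rank())  # 2
\end{verbatim}
Here $C_{y_{1}}$ and $C_{y_{2}}$ are non-zero precisely because of the terms $y_{1}y_{3}$ and $y_{2}y_{3}^{2}$, which have the form $k\zeta Y_{n}^{l}$, while $C_{y_{3}}$ vanishes.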
Before we prove our last theorem let us record the following
observations:
\begin{enumerate}
\item
$F\, \in \,\widehat{E_1}$ implies that no term of $F$ is an element
of the set
$$\{X_{2}\Phi(2,2)\,,\,X_{3}\Phi(2,2)\,,\,\Psi(1,1)\Phi(2,2)\,,\,
\Psi(1,2)\Phi(2,2)\,,\,\Phi(1,2)\Phi(2,2)\,,\,\Phi(1,1)\Phi(2,2)\,,\,\Phi(2,2)^{l}\}.$$
\item
$F\,\in\,\widehat{E_2}$ implies that no term of $F$ is an element
of the set
$$\{X_{2}\Phi(2,2)\,,\,X_{3}\Phi(2,2)\,,\,\Psi(2,1)\Phi(2,2)\,,\,
\Psi(2,0)\Phi(2,2)\,,\,\Phi(1,2)\Phi(2,2)\,,\,\Phi(1,1)\Phi(2,2)\,,\,\Phi(2,2)^{l}\}.$$
\item
$F\,\in\,\widehat{E_3}$ implies that no term of $F$ is an element
of the set $$\{X_{1}\Psi(3,0)
,X_{2}\Psi(3,0),X_{3}\Psi(3,0),X_{0}\Psi(3,0)\,,\,
\Phi(2,2)\Psi(3,0)\,,\,\Phi(1,2)\Psi(3,0)\,,\,\Phi(1,1)\Psi(3,0)\,,\,\Psi(3,0)^{l}\}.$$
\end{enumerate}
\begin{theorem}{\it
${\rm Proj}\,\mathcal{R}(\wp)$ is not smooth.}
\end{theorem}
\proof Let us write
\medskip
$\mathfrak{P}_{b} =\begin{cases} \langle X_{1}, X_{2}, X_{3} ,
X_{0}, \Psi(1,0), \Psi(1,1),\Psi(1,2), \Phi(1,2), \Phi(1,1)
\rangle\,, \quad {\rm if} \quad b=1;\\
\langle X_{1}, X_{2}, X_{3} , X_{0}, \Psi(2,0), \Psi(2,1),
\Phi(1,2), \Phi(1,1) \rangle\,, \quad {\rm if} \quad b=2;\\
\langle X_{1}, X_{2}, X_{3} , X_{0}, \Phi(2,2), \Phi(1,2), \Phi(1,1)
\rangle\,, \quad {\rm if} \quad b=3.\\
\end{cases}$
\medskip
\noindent It is clear that $\mathfrak{P}_{b}$ is a homogeneous prime
ideal of $R_{b}$, containing $E_{b}$. Let
$\mathcal{J}_{\mathfrak{P}_{b}}$ denote the Jacobian matrix, taken
modulo $\mathfrak{P}_{b}$. Now we use the preceding observations to conclude that
\medskip
$\bullet$\noindent $C_{\zeta}$ \,is non-zero if and only if \,$\zeta
\, \in \,
\{X_{0},X_{1},\Psi(1,0)\}$, for $b=1$.
\smallskip
$\bullet$\noindent $C_{\zeta}$ \,is non-zero if and only if \,$\zeta
\, \in \,
\{X_{0},X_{1}\}$, for $b=2$.
\smallskip
$\bullet$\noindent $C_{\zeta}$ \,is zero if \,$\zeta \, \in \,
\{X_{1}, X_{2}, X_{3} , X_{0}, \Phi(2,2) , \Phi(1,2), \Phi(1,1)\}$, for $b=3$.
\smallskip
\noindent Hence, the rank of the matrix
$\mathcal{J}_{\mathfrak{P}_{b}}$ is $\begin{cases} 3 \quad {\rm when} \quad b=1;\\
2 \quad {\rm when} \quad b=2;\\
0 \quad {\rm when} \quad b=3;\\
\end{cases}$\\
\noindent and the height of the ideal
$(E_{b})_{(\mathfrak{P}_{b})}$ in the localized ring
$(R_{b})_{(\mathfrak{P}_{b})}$ is $6-b$. Therefore,
$\left(\mathcal{R}(\wp)\right)_{(\mathfrak{P}_{b})}=\left(R_{b}/E_{b}\right)_{(\mathfrak{P}_{b})}$
is not regular by Theorem 3.4. Hence, \,${\rm Proj}
\,\mathcal{R}(\wp)$ \,is not smooth\,.\qed
\medskip
\section{Introduction}\label{Sec:Introduction}
Let $(\Om,\cF,\dbP)$ be a complete probability space on which a standard $d$-dimensional Brownian motion $W=\{W(t);0\les t<\i\}$ is defined, and let $\dbF=\{\cF_t\}_{t\ges0}$ be the natural filtration of $W(\cd)$ augmented by all the $\dbP$-null sets in $\cF$. Let $T>0$. We denote
\bel{D}\ba{ll}
\ns\ds\sX_t\equiv L_{\cF_{t}}^2(\Om;\dbR^n)=\big\{\xi:\Om\to\dbR^n~| ~\xi \hbox{ is $\cF_{t}$-measurable, } \dbE|\xi|^2<\i\big\},\q t\in[0,T]; \\
\ns\ds\sD=\big\{(t,\xi)~|~t\in[0,T),\,\xi\in\sX_t\big\};\q\D[0,T]\deq\big\{(t,s)\bigm|0\les t\les s\les T\big\}.\ea\ee
For any {\it initial pair} $(t,\xi)\in\sD$, consider the following controlled (forward) stochastic differential equation (SDE, or FSDE, for short) on the finite horizon $[t,T]$, in its integral form:
\bel{state}X(s)=\xi+\int_t^sb(r,X(r),u(r))dr+\int_t^s\si(r,X(r),u(r))dW(r),\q s\in[t,T],\ee
where $b:[0,T]\times\dbR^n\times U\to\dbR^n$ and $\si:[0,T]\times\dbR^n\times U\to\dbR^{n\times d}$ are given (deterministic) maps with $U\subseteq\dbR^m$ being a nonempty set (which could be bounded or unbounded). In the above, $u(\cd)$ is called a {\it control process}, which belongs to the following set of {\it admissible controls}:
$$\sU[t,T]=\Big\{u:[t,T]\times\Om\to U\bigm|u(\cd)\hb{ is $\dbF$-progressively measurable, and~}\dbE\int^T_t|u(s)|^2ds<\i\Big\}.$$
According to the standard results of SDEs, under appropriate conditions, for any $(t,\xi)\in\sD$ and any $u(\cd)\in\sU[t,T]$, equation \rf{state} admits a unique (strong) solution $X(\cd)\equiv X(\cd\,;t,\xi,u(\cd))$ which is referred to as the {\it state process} corresponding to $(t,\xi)$ and $u(\cd)$. To measure the performance of the control $u(\cd)$, one could introduce the following cost functional:
\bel{cost-e}
J^0(t,\xi;u(\cd))=\dbE_t\[e^{-\l(T-t)}h^0(X(T))+\int_t^T e^{-\l(r-t)}g^0(r,X(r),u(r))dr\],\ee
for some suitable maps $h^0:\dbR^n\to\dbR$, $g^0:[0,T]\times\dbR^n\times U\to\dbR$, and some parameter $\l\ges0$, where $\dbE_t[\,\cd\,]=\dbE[\,\cd\,|\,\cF_t]$ is the conditional expectation operator. The two terms on the right-hand side of the above are called the {\it terminal cost} and the {\it running cost}, respectively, and $\l$ is called the {\it discount rate}. Note that the terms $e^{-\l(T-t)}$ in the terminal cost and $e^{-\l(r-t)}$ in the running cost are exponential functions with the same parameter $\l$. We therefore call \rf{cost-e} a cost functional with an {\it exponential discounting}. With the state equation \rf{state} and the cost functional \rf{cost-e}, one could pose the following classical stochastic optimal control problem.
\ms
{\bf Problem (C)}. For any $(t,\xi)\in\sD$, find a control $\bar u(\cd)\in\sU[t,T]$ such that
\bel{inf1}J^0(t,\xi;\bar u(\cd))=\essinf_{u(\cd)\in\sU[t,T]}J^0(t,\xi;u(\cd))=V^0(t,\xi).\ee
Any $\bar u(\cd)\in\sU[t,T]$ satisfying \rf{inf1} is called an ({\it open-loop}) {\it optimal control} of Problem (C) for the initial pair $(t,\xi)$; the corresponding state process $\bar X(\cd)\equiv X(\cd\,;t,\xi,\bar u(\cd))$ is called an ({\it open-loop}) {\it optimal state process}; $(\bar X(\cd),\bar u(\cd))$ is called an ({\it open-loop}) {\it optimal pair}; and $V^0(\cd\,,\cd):\sD\to\dbR$ is called the {\it value function} of Problem (C).
\ms
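To make Problem (C) concrete, here is a minimal Monte-Carlo sketch (ours, with hypothetical one-dimensional coefficients $b,\si,g^0,h^0$ and a constant control; nothing in the sequel depends on it) that estimates $J^0(t,\xi;u(\cd))$ by an Euler--Maruyama discretization of \rf{state}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, t, T, xi, u = 0.1, 0.0, 1.0, 1.0, 0.5
b  = lambda r, x, u: -x + u            # hypothetical drift
si = lambda r, x, u: 0.2               # hypothetical diffusion
g0 = lambda r, x, u: x**2 + u**2       # hypothetical running cost
h0 = lambda x: x**2                    # hypothetical terminal cost

N, M = 200, 10_000                     # time steps, sample paths
dt = (T - t) / N
X = np.full(M, xi)
J = np.zeros(M)
for k in range(N):
    r = t + k*dt
    J += np.exp(-lam*(r - t)) * g0(r, X, u) * dt
    X += b(r, X, u)*dt + si(r, X, u)*np.sqrt(dt)*rng.standard_normal(M)
J += np.exp(-lam*(T - t)) * h0(X)
print(J.mean())                        # estimate of J^0(t, xi; u)
\end{verbatim}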
It is known that (see, for example, \cite{Yong 2012}) if $\bar u(\cd)$ is an open-loop optimal control of Problem (C) for the initial pair $(t,\xi)$ with the corresponding optimal state process
$\bar X(\cd)\equiv\bar X(\cd\,;t,\xi,\bar u(\cd))$, then for any $\t\in(t,T]$,
$$J^0(\t,\bar X(\t);\bar u(\cd)|_{[\t,T]})=\essinf_{u(\cd)\in\sU[\t,T]}J^0(\t,\bar X(\t);u(\cd)).$$
This means that the restriction $\bar u(\cd)|_{[\t,T]}$ of $\bar u(\cd)$ on $[\t,T]$ is an open-loop optimal control for the initial pair $(\t,\bar X(\t))$. Such a property is referred to as the {\it time-consistency} of the optimal control $\bar u(\cd)$,
or the {\it time-consistency} of Problem (C).
\ms
In 1992, Duffie and Epstein introduced {\it stochastic differential utility process} (\cite{Duffie-Epstein 1992,Duffie--Epstein 1992-1}, see also \cite{El Karoui-Peng-Quenez 1997}). We prefer to call it a {\it recursive utility/disutility process}. The main feature of such a process $Y(\cd)$ is that the current value $Y(t)$ depends on the future values $Y(s)$, $t<s\les T$ of the process. Economically, this can be explained as follows: Due to people's optimism/pessimism, the predicted future utility/disutility affects the current utility/disutility. For a classical optimal control problem (with exponential discounting), the recursive utility/disutility process $Y(\cd)$ of the state-control pair $(X(\cd),u(\cd))$ can usually be described by the following equation:
\bel{BSDE1}Y(s)=\dbE_s\[e^{-\l(T-s)}h(X(T))+\int_s^Te^{-\l(r-s)}
g(r,X(r),u(r),Y(r))dr\],\q s\in[t,T].\ee
We refer the readers to \cite{Duffie-Epstein 1992,El Karoui-Peng-Quenez 1997,Wei-Yong-Yu 2017} for more details. It is easy to see that $Y(\cd)$ solves \rf{BSDE1} if and only if for some $Z(\cd)$, the pair $(Y(\cd),Z(\cd))$ is an {\it adapted solution} to the following {\it backward stochastic differential equation} (BSDE, for short):
\bel{BSDE2}Y(s)=h(X(T))+\int_s^T\big[\l Y(r)+g(r,X(r),u(r),Y(r))\big]dr-\int_s^TZ(r)dW(r),\q s\in[t,T].\ee
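Before generalizing, a quick sanity check on this recursion (our sketch, for the degenerate deterministic case $\si\equiv0$, in which $Z\equiv0$ and \rf{BSDE2} reduces to a backward ODE that can be solved by an explicit backward Euler scheme; all coefficient functions below are hypothetical):
\begin{verbatim}
import numpy as np

lam, T, N = 0.1, 1.0, 1000
dt = T / N
u  = 0.5
b  = lambda r, x, u: -x + u                         # hypothetical drift
g  = lambda r, x, u, y: x**2 + u**2 + 0.5*np.sin(y) # hypothetical generator
h  = lambda x: x**2                                 # hypothetical free term

X = np.empty(N + 1); X[0] = 1.0        # forward pass: deterministic state
for k in range(N):
    X[k + 1] = X[k] + b(k*dt, X[k], u)*dt

Y = h(X[N])                            # backward pass for Y
for k in reversed(range(N)):
    Y = Y + (lam*Y + g(k*dt, X[k], u, Y))*dt
print(Y)                               # the recursive cost Y(0)
\end{verbatim}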
Inspired by this, we could introduce more general controlled decoupled {\it forward-backward stochastic differential equation} (FBSDE, for short) whose integral form is as follows:
\bel{FBSDE}\left\{\2n\ba{ll}
\ds X(s)=\xi+\int_t^sb(r,X(r),u(r))dr+\int_t^s\si(r,X(r),u(r))dW(r),\\
\ns\ds Y(s)=h(X(T))+\int_s^Tg(r,X(r),u(r),Y(r),Z(r))dr-\int_s^TZ(r) dW(r),\ea\right.\qq s\in[t,T],\ee
with the cost functional, called a {\it recursive cost functional}, given by the following:
\bel{recursive}J^R(t,\xi;u(\cd))=Y(t).\ee
Then we may pose the following classical {\it optimal recursive control problem}.
\ms
{\bf Problem (R).} \rm For each $(t,\xi)\in\sD$, find a $\bar u(\cd)\in\sU[t,T]$ such that
\bel{inf JR}J^R(t,\xi;\bar u(\cd))=\essinf_{u(\cd)\in\sU[t,T]}J^R(t,\xi;u(\cd))\equiv V^R(t,\xi).\ee
Similar to Problem (C), any $\bar u(\cd)\in\sU[t,T]$ satisfying \rf{inf JR} is called an open-loop optimal control, and so on. It is interesting that Problem (R) is also time-consistent (see \cite{Wei-Yong-Yu 2017}).
\ms
The advantage of the time-consistency is that for any initial pair $(t,\xi)$, once an optimal control $\bar u(\cd)$ is constructed, it will stay optimal thereafter (for the later initial pairs $(\t,\bar X(\t))$ along the optimal state process). This is a little too ideal. In the real world, time-consistency rarely exists. Instead, most problems people encounter, if not all, are not time-consistent. In other words, an optimal control/policy found for the current initial pair $(t,\xi)$ will hardly stay optimal as time goes by. Such a situation is referred to as {\it time-inconsistency}. An important reason leading to the time-inconsistency is people's subjective {\it time-preferences}. In fact, people usually over-discount the utility of the outcome of immediate future events. Mathematically, such a situation can be described by the so-called {\it nonexponential discounting}, meaning that the discounting terms $e^{-\l(T-t)}$ and $e^{-\l(s-t)}$ appearing in, say, $J^0(t,\xi;u(\cd))$ are replaced by some more general functions $\mu(T,t)$ and $\nu(s,t)$, respectively, so that the cost functional $J^0(t,\xi;u(\cd))$ becomes
\bel{wt J0}\wt J^0(t,\xi;u(\cd))=\dbE_t\[\m(T,t)h^0(X(T))+\int_t^T\n(r,t)g^0(r,X(r),u(r))dr\].\ee
Let us list some possible nonexponential discounting functions as follows:
\begin{enumerate}[(i)]
\item {\it Hyperbolic discounting}:
$\mu(T,t)={1\over 1+\l_1(T-t)}$, $\nu(r,t)={1\over1+\l_2(r-t)}$ with $\l_1,\l_2>0$ (which could be equal or different);
\item {\it Heterogeneous discounting}: $\mu(T,t)=e^{-\l_1(T-t)}$, $\nu(r,t)=e^{-\l_2(r-t)}$ with $\l_1,\l_2>0$, $\l_1\ne\l_2$;
\item {\it Convex combination of two exponential discountings}: $\mu(T,t)=\a e^{-\l_1(T-t)}+(1-\a)e^{-\l_2(T-t)}$,
$\nu(r,t)=\a e^{-\l_1(r-t)}+(1-\a)e^{-\l_2(r-t)}$, with $\a\in(0,1)$, $\l_1,\l_2>0$, $\l_1\ne\l_2$;
\item {\it Quasi-exponential discounting}: $\mu(T,t)=\big(1+\a(T-t)\big)e^{-\l(T-t)}$,
$\nu(r,t)=\big(1+\a(r-t)\big)e^{-\l(r-t)}$, with $\a,\l>0$.
\end{enumerate}
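The time-inconsistency induced by these discountings can already be seen at the level of the discount factors themselves: the exponential discounting $\nu(r,t)=e^{-\l(r-t)}$ satisfies the semigroup identity $\nu(r,t)=\nu(s,t)\nu(r,s)$ for $t\les s\les r$, so that re-evaluating the future running cost at a later time changes the discount weights only by a multiplicative constant, which does not affect the minimization; the discountings (i)--(iv) all fail this identity. For instance, for the hyperbolic discounting with $\l_2=1$, one has $\nu(2,0)={1\over3}$, whereas $\nu(1,0)\nu(2,1)={1\over2}\cdot{1\over2}={1\over4}$.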
We refer the reader to \cite{Ekeland 2006,Ekeland 2008,Ekeland 2010,Marin-Solano 2010,Marin-Solano 2011} for some relevant results. Inspired by \cite{Yong 2012,Yong 2014,Yong 2017,Wei-Yong-Yu 2017}, instead of \rf{wt J0}, one may consider the following more general cost functional
\bel{cost-none}
\h J(t,\xi;u(\cd))=\dbE_t\[h(t,X(T))+\int_t^T g(t,r,X(r),u(r))dr\],
\ee
which not only includes the cases with the discounting functions (i)--(iv) listed above, but also, of course, includes the case of exponential discounting. It turns out that the optimal control problem with the state equation \rf{state} and the above cost functional $\h J(t,\xi;u(\cd))$ is time-inconsistent, in general. If we let
\bel{cost-none-Y}
\h Y(s)=\dbE_s\[h(s,X(T))+\int_s^Tg(s,r,X(r),u(r))dr\],\qq s\in[t,T],\ee
then for some $\h Z(\cd\,,\cd)$, the pair $(\h Y(\cd),\h Z(\cd\,,\cd))$ is the adapted solution to the following {\it backward stochastic Volterra integral equation} (BSVIE, for short):
\bel{BSVIE-noyz}
\h Y(s)=h(s,X(T))+\int_s^T g(s,r,X(r),u(r))dr-\int_s^T\h Z(s,r)dW(r),\q s\in[t,T],\ee
and
$$\h J(t,\xi;u(\cd))=\h Y(t).$$
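\ms

Indeed, for each fixed $s\in[t,T]$, the process $v\mapsto\dbE_v\[h(s,X(T))+\int_s^Tg(s,r,X(r),u(r))dr\]$, $v\in[s,T]$, is an $\dbF$-martingale, so the martingale representation theorem provides a process $\h Z(s,\cd)$ for which \rf{BSVIE-noyz} holds. Since the parameter $s$ enters both the free term and the generator, the resulting family of equations is a BSVIE rather than a single BSDE.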
Motivated by the above nonexponential discounting, together with the recursive utility/disutility, it is then natural to consider the following BSVIE, which determines a recursive cost functional with general (nonexponential) discounting:
\bel{BSVIE}Y(s)=h(s,X(T))+\int_s^T g(s,r,X(r),u(r),Y(r),Z(s,r))dr-\int_s^T Z(s,r)dW(r),\q s\in[t,T],\ee
where $g:\D[0,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times d}\to\dbR$ and $h:[0,T]\times\dbR^n\to\dbR$ are suitable deterministic maps, recalling (see \rf{D})
$$\D[0,T]=\{(s,r)\in[0,T]^2\bigm|0\les s\les r\les T\}.$$
In \rf{BSVIE}, $g(\cd)$ and $h(\cd)$ are called the {\it generator} and the {\it free term}, respectively, of the BSVIE. By standard results on BSVIEs (see \cite{Yong 2008,Shi-Wang-Yong 2015}), under some mild conditions, for any initial pair $(t,\xi)\in\sD$, any control $u(\cd)\in\sU[t,T]$, and the corresponding state process $X(\cd)$, equation \rf{BSVIE} admits a unique {\it adapted solution} $(Y(\cd),Z(\cd\,,\cd))\equiv(Y(\,\cd\,;t,\xi,u(\cd)),Z(\cd\,,\cd\,
;t,\xi,u(\cd)))$, by which we mean an $(\dbR\times\dbR^{1\times d})$-valued random field $(Y,Z)=\{(Y(s),Z(s,r)):(s,r)\in\D[t,T]\}$ such that
$Y(\cd)$ is $\dbF$-progressively measurable on $[t,T]$; for each fixed $s\in[t,T]$, $Z(s,\cd)$ is $\dbF$-progressively measurable on $[s,T]$; equation \rf{BSVIE} is satisfied in the usual It\^{o} sense. Now, with the adapted solution $(Y(\cd),Z(\cd\,,\cd))$ of BSVIE \rf{BSVIE}, depending on $(t,\xi,u(\cd))$, we introduce the following cost functional:
\bel{cost-functional}J(t,\xi;u(\cd))=Y(t).\ee
Thus, we are considering state equation \rf{state} with the recursive cost functional \rf{cost-functional} determined through BSVIE \rf{BSVIE}. Then the corresponding optimal control problem can be stated as:
\ms
{\bf Problem (N).} For each $(t,\xi)\in\sD$, find a $\bar u(\cd)\in\sU[t,T]$ such that
\bel{Problem-C0}J(t,\xi;\bar u(\cd))=\essinf_{u(\cd)\in\sU[t,T]}J(t,\xi;u(\cd))=V(t,\xi).\ee
Similar to before, any $\bar u(\cd)\in\sU[t,T]$ satisfying \eqref{Problem-C0} is called an ({\it open-loop}) {\it optimal control} of Problem (N) for the initial pair $(t,\xi)$;
the corresponding state process $\bar X(\cd)\equiv X(\cd\,;t,\xi,\bar u(\cd))$ is called an ({\it open-loop}) {\it optimal state process}; $(\bar X(\cd),\bar u(\cd))$ is called an ({\it open-loop}) {\it optimal pair}; and $V(\cd,\cd)$ is called the {\it value function} of Problem (N).
\ms
We point out that Problem (N) is time-inconsistent, in general. Readers might notice that in \cite{Yong 2012,Wei-Yong-Yu 2017}, a similar problem was studied, where the (recursive) cost functional was described by a family of BSDEs. In the last section, we will briefly present an argument to show that using BSVIEs seems to be more natural than using BSDEs. Because of the time-inconsistency of the above Problem (N), people also refer to the above optimal control $\bar u(\cd)$ as the {\it pre-commitment optimal control}.
\ms
Let us take this opportunity to briefly recall the history of BSVIEs. As an extension of BSDEs, a BSVIE of form
\bel{BSVIE1}Y(s)=\xi+\int_s^Tg(s,r,Y(r),Z(s,r))dr-\int_s^TZ(s,r) dW(r),\qq s\in[0,T],\ee
(where $Y(\cd)$ could be higher dimensional) was first studied by Lin \cite{Lin 2002}, followed by several other researchers: Aman and N'Zi \cite{Aman-N'Zi 2005}, Wang and Zhang \cite{Wang-Zhang 2007}, Djordjevi\'{c} and Jankovi\'{c} \cite{Djordjevic-Jankovic 2013,Djordjevic-Jankovic 2015}, Hu and {\O}ksendal \cite{Hu 2018}, etc. Inspired by the study of optimal control problems for forward stochastic Volterra integral equations (FSVIEs, for short), Yong \cite{Yong 2008} introduced more general BSVIEs, together with the notion of {\it adapted M-solution}.
There are quite a few follow-up works. Let us mention some: Anh--Grecksch--Yong \cite{Anh-Grecksch-Yong 2011} investigated BSVIEs in Hilbert spaces; Shi--Wang--Yong \cite{Shi-Wang-Yong 2013} studied the well-posedness of BSVIEs containing expectations (of the unknowns); Ren \cite{Ren 2010} discussed BSVIEs with jumps; Wang--Sun--Yong \cite{Wang-Sun-Yong 2018} studied BSVIEs with quadratic growth (in $Z$); Wang--Yong \cite{Wang-Yong 2019} obtained a representation of the adapted (M-)solution for a class of BSVIEs via the so-called representation partial differential equations; Overbeck and R\"oder \cite{Overbeck-Roder p} even developed a theory of path-dependent BSVIEs; numerical aspects were considered by Bender--Pokalyuk \cite{Bender-Pokalyuk 2013}; relevant optimal control problems were studied by Shi--Wang--Yong \cite{Shi-Wang-Yong 2015}, Agram--{\O}ksendal \cite{Agram-Oksendal 2015}, Wang--Zhang \cite{Wang-Zhang 2017}, and Wang \cite{Wang 2018};
Wang--Yong \cite{Wang-Yong 2015} established various comparison theorems for both adapted solutions and adapted M-solutions to BSVIEs in multi-dimensional Euclidean spaces.
\ms
For the state equation \rf{state} together with the recursive cost functional $J(t,\xi;u(\cd))$ defined by \rf{cost-functional} through the BSVIE \rf{BSVIE}, Problem (N) is expected to be time-inconsistent. Therefore, finding an optimal control for any given initial pair $(t,\xi)$ is not very useful. Instead, one should find an equilibrium strategy which is time-consistent and possesses a certain kind of local optimality. To find such a strategy, we adopt the method of multi-person differential games. The idea can be traced back at least to the work of Pollak \cite{Pollak 1968} in 1968. Later, the approach was adopted and further developed by Ekeland--Lazrak \cite{Ekeland 2006,Ekeland 2010}; Yong \cite{Yong 2012,Yong 2014,Yong 2017};
Bj\"{o}rk--Murgoci \cite{Bjork-Murgoci 2014}; Bj\"{o}rk--Murgoci--Zhou \cite{Bjork-Murgoci-Zhou 2014}; Bj\"{o}rk--Khapko--Murgoci \cite{Bjork-Khapko-Murgoci 2017}; Wei--Yong--Yu \cite{Wei-Yong-Yu 2017}, Mei--Yong \cite{Mei-Yong 2019}, and Yan--Yong \cite{Yan-Yong 2019} for various kinds of problems.
\ms
Let us now recall the approach of \cite{Yong 2012}, which leads to our current approach.
For any $\t\in[0,T)$, we first divide the time interval $[\t,T]$ into $N$ subintervals: $[t_0,t_1),[t_1,t_2),...,[t_{N-1},t_N]$ with $t_0=\t$, $t_N=T$,
and introduce an $N$-person differential game, where the players are labeled from $1$ to $N$. Player $k$ takes over the system at $t=t_{k-1}$ from Player $(k-1)$, controls the system on $[t_{k-1},t_k)$, and then hands it over to Player $(k+1)$. The initial pair $(t_{k-1},X(t_{k-1}))$ of Player $k$ is the terminal pair of Player $(k-1)$. All the players know that each player tries to find an optimal control (in some sense) for his/her own problem and that each player will discount the future cost in his/her own way, even though this player is not controlling the system later on. For Player $k$, using the representation of the adapted solution to the BSVIE in terms of the forward state process, through a representation partial differential equation, the future cost on $[t_k,T]$ is transformed into a terminal cost at $t=t_k$; and with a suitable modification of the BSVIE on $[t_{k-1},t_k]$, a suitable recursive cost functional on $[t_{k-1},t_k]$ for Player $k$ can be constructed. Then Player $k$ faces a time-consistent optimal control problem. Under proper conditions, an optimal control on $[t_{k-1},t_k]$ can be determined. This leads to an {\it approximate equilibrium strategy} and the corresponding {\it approximate equilibrium value function} of the game. Letting the mesh size $\|\Pi\|$ of the partition tend to zero, we obtain the limits, called the {\it equilibrium strategy} and the {\it equilibrium value function} of the original problem, respectively. At the same time, a so-called {\it equilibrium Hamilton--Jacobi--Bellman equation} (equilibrium HJB equation, for short) is also derived, which can be used to identify the time-consistent equilibrium value function. Under certain conditions, the equilibrium strategy is locally optimal in a proper sense. When $\si(\cd)$ is independent of the control process $u(\cd)$, under some mild conditions, we will show that the equilibrium HJB equation admits a unique classical solution. Furthermore, inspired by the idea of decoupling FBSDEs and the nonlinear Feynman--Kac formula, the (classical) solution of the equilibrium HJB equation can be represented by the solution of a new type of BSVIE in which the term $Z(s,s)$ appears. Under certain conditions, a well-posedness result for such an equation is established. As a consequence, the equilibrium strategy can be expressed in terms of the solution to such a BSVIE.
\ms
The rest of this paper is organized as follows. In \autoref{Preliminaries}, we present preliminary results which will be useful in the sequel. In \autoref{sec:equlibrium-strategy}, by using the idea of multi-person differential games, a family of approximate equilibrium strategies is constructed. By letting the mesh size tend to zero, the equilibrium strategy and equilibrium value function are determined by the equilibrium HJB equation. A verification theorem as well as a well-posedness result for the equilibrium HJB equation (in a special case) is obtained in \autoref{Verification Theorem}. In \autoref{BSVIEs}, a new kind of BSVIE (containing the diagonal values $Z(s,s)$ of $Z(\cd\,,\cd)$) is introduced, which is motivated by finding a Feynman--Kac type formula for the equilibrium HJB equation. Some concluding remarks are collected in \autoref{remarks}, including a formal argument showing that BSVIEs are a more suitable way to represent a recursive cost functional with nonexponential discounting, a comparison of the equilibrium HJB equations resulting from the two approaches, and some open questions concerning Problem (N).
\ms
\section{Preliminaries}\label{Preliminaries}
Throughout this paper, $M^\top$ stands for the transpose of a matrix $M$, $\tr(M)$ the trace of $M$, and $\dbR^{n\times d}$ the Euclidean space consisting of $(n\times d)$ real matrices, endowed with the Frobenius inner product $\lan M,N\ran=\tr[M^\top N]$. Let $U\subseteq \dbR^m$ be a nonempty set, which could be bounded or unbounded, and let $\dbS^n$ be the subspace of $\dbR^{n\times n}$ consisting of symmetric matrices. We will use $K>0$ to represent a generic constant which could be different from line to line. Let $T>0$ be a fixed time horizon and $\dbH$ (also $\dbH_1$, $\dbH_2$) be a Euclidean space (which could be $\dbR^n$, $\dbR^{n\times d}$, $\dbS^n$, etc.). Recall $\D[0,T]=\big\{(t,s)\in[0,T]^2~|~0\les t\les s\les T\big\}$ as before. In the sequel, we will need various spaces of functions and processes, which we collect here first for convenience:
$$\ba{ll}
\ns\ds L_{\cF_t}^2(\Om;\dbH)=\big\{\xi:\Om\to\dbH\bigm|\xi \hbox{ is $\cF_{t}$-measurable, } \dbE|\xi|^2<\i\big\},\q t\in[0,T],\\
\ns\ds L_\dbF^2(\Om;C([0,T];\dbH))
=\Big\{\f:[0,T]\times\Om\to\dbH\bigm|\f(\cd)~\hb{is $\dbF$-adapted, } t\mapsto\f(t,\om)~\hb{is continuous, }\as\,\om\in\Om,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\dbE\big[\ds\sup_{0\les s\les T}|\f(s)|^2\big]<\i \Big\},\\
\ns\ds C_\dbF([0,T];L^2(\Om;\dbH))\1n
=\1n\Big\{\f:[0,T]\1n\to\1n L^2_{\cF_T}(\Om;\dbH)\bigm|\f(\cd)~\hb{is $\dbF$-adapted, continuous, }\ds\sup_{0\les s\les T}\dbE\big[|\f(s)|^2\big]\1n<\1n\i \Big\},\\
\ns\ds L_\dbF^2(\D[0,T];\dbH)
=\Big\{\f:\1n\D[0,T]\1n\times\1n\Om\1n\to\1n\dbH\bigm|\f(t,\cd)~\hb{is $\dbF$-progressively measurable on $[t,T]$, }\ae~t\1n\in\1n[0,T],\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\sup_{t\in[0,T]}\dbE\int_t^T|\f(t,s)|^2ds<\i \Big\},\\
\ns\ds C^k(\dbH_1;\dbH_2)=\Big\{\f:\1n\dbH_1\to\1n\dbH_2\bigm|\f(\cd)~\hb{is $j$ times continuously differentiable for all $0\les j\les k$}\,\Big\},\\
\ns\ds C_b^k(\dbH_1;\dbH_2)=\Big\{\f:\1n\dbH_1\to\1n\dbH_2\bigm|\f(\cd)\in C^k(\dbH_1;\dbH_2),~\hb{the $j$-th derivatives are bounded, $0\les j\les k$}\,\Big\}.\ea$$
For convenience, we rewrite the state equation and the recursive cost functional below:
\bel{state-rewrite}X(s)=\xi+\int_t^sb(r,X(r),u(r))dr+\int_t^s\si(r,X(r),u(r))dW(r),
\q s\in[t,T],\ee
\bel{BSVIE-rewrite}
Y(s)=h(s,X(T))+\int_s^T g(s,r,X(r),u(r),Y(r),Z(s,r))dr-\int_s^T Z(s,r)dW(r),~s\in[t,T],\ee
and
\bel{cost-functional-rewrite}J(t,\xi;u(\cd))=Y(t).\ee
To guarantee the well-posedness of the controlled SDE \rf{state-rewrite} and BSVIE \rf{BSVIE-rewrite}, we adopt the following assumptions:
\ms
{\bf(H1)} Let the maps $b:[0,T]\times\dbR^n\times U\to\dbR^n$ and $\si:[0,T]\times\dbR^n\times U\to\dbR^{n\times d}$ be continuous. There exists a constant $L>0$ such that
$$\ba{ll}
\ns\ds|b(s,x_1,u)-b(s,x_2,u)|+|\si(s,x_1,u)-\si(s,x_2,u)|\les L|x_1-x_2|,\q \forall~(s,u)\in [0,T]\times U,~ x_1,x_2\in\dbR^n,\\
\ns\ds|b(s,0,u)|+|\si(s,0,u)|\les L(1+|u|),\q\forall~(s,u)\in [0,T]\times U.\ea$$
{\bf(H2)} Let the maps $g:\D[0,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times d}\to\dbR$ and $h:[0,T]\times\dbR^n\to\dbR$ be continuous. There exists a constant $L>0$ such that
$$\ba{ll}
\ns\ds|g(t_1,s,x_1,u,y_1,z_1)-g(t_2,s,x_2,u,y_2,z_2)|+|h(t_1,x_1)-h(t_2,x_2)|\\
\ns\ds\q\les L\big(|t_1-t_2|+|x_1-x_2|+|y_1-y_2|+|z_1-z_2|\big),\\
\ns\ds\qq\q\forall~ (s,u)\in [0,T]\times U,~ t_1,t_2\in [0,T],~ x_1,x_2\in\dbR^n,~ y_1,y_2\in\dbR,~z_1,z_2\in\dbR^{1\times d},\\
\ns\ds|g(0,s,0,u,0,0)|+|h(0,0)|\les L(1+|u|), \q\forall~ (s,u)\in [0,T]\times U.\ea$$
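\ms

We note that (H2) covers the motivating discountings of the previous section. For instance (a simple illustration, under additional boundedness assumptions), if $h^0$ and $g^0$ are bounded and Lipschitz continuous, then $h(t,x)=\m(T,t)h^0(x)$ and $g(t,r,x,u,y,z)=\n(r,t)g^0(r,x,u)$ satisfy {\rm(H2)} for each of the discount functions (i)--(iv); for the hyperbolic discounting, say, this follows from
$$|\n(r,t_1)-\n(r,t_2)|={\l_2|t_1-t_2|\over\big(1+\l_2(r-t_1)\big)\big(1+\l_2(r-t_2)\big)}\les\l_2|t_1-t_2|,\qq t_1,t_2\in[0,r].$$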
\ms
The following result, whose proof is standard (see \cite{Yong-Zhou 1999} and \cite{Yong 2008,Shi-Wang-Yong 2015}), presents the well-posedness of SDE \rf{state-rewrite} and BSVIE \rf{BSVIE-rewrite} under (H1)--(H2).
\bl{lmm:well-posedness-SDE}
\sl Let {\rm(H1)} hold. Then for any initial pair $(t,\xi)\in\sD$ and control $u(\cd)\in\sU[t,T]$, the state equation \rf{state-rewrite} admits a unique solution $X(\cd)\equiv X(\cd\,;t,\xi,u(\cd))\in L_\dbF^2(\Om;C([t,T];\dbR^n))$. Moreover, there exists a constant $K>0$, independent of $(t,\xi)$ and $u(\cd)$, such that
\bel{|X|}\dbE_t\(\sup_{t\les s\les T}|X(s)|^2\)\les K\dbE_t\(1+|\xi|^2+ \int_t^T|u(r)|^2dr\).\ee
In addition, if {\rm(H2)} also holds, then for any initial pair $(t,\xi)\in\sD$, control $u(\cd)\in\sU[t,T]$, and the corresponding state process $X(\cd)$, BSVIE \rf{BSVIE-rewrite} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\equiv(Y(\cd\,;t,\xi,u(\cd)),Z(\cd,\cd\,;t,\xi,u(\cd)))\in C([t,T];L_\dbF^2(\Om;\dbR))\times L_\dbF^2(\D[t,T];\dbR^{1\times d})$.
Moreover, there exists a constant $K>0$, independent of $(t,\xi)$ and $u(\cd)$, such that
\bel{|Y|+|Z|}\sup_{t\les s\les T}\dbE_t\(|Y(s)|^2+\int_s^T|Z(s,r)|^2dr\)\les K\dbE_t\(1+|\xi|^2+\int_t^T|u(r)|^2dr\).\ee
\el
\ms
We now present a result concerning a certain type of modified BSVIE, which will play a crucial role in our subsequent analysis. For any $(t,\xi)\in\sD$ and $u(\cd)\in\sU[t,T]$, let $(X(\cd),Y(\cd),Z(\cd\,,\cd))$ be the adapted solution to \rf{state-rewrite}--\rf{BSVIE-rewrite}. For any small $\e>0$, consider the following BSVIE:
\bel{BSVIE-e}Y_\e(s)=h_\e(s,X(T))+\int_s^Tg_\e(s,r,X(r),u(r),
Y_\e(r),Z_\e(s,r))dr-\int_s^TZ_\e(s,r)dW(r),\q s\in[t,T],\ee
which is referred to as a {\it modified BSVIE}, where
$$\left\{\1n\ba{ll}
\ds h_\e(s,x)=h(t,x){\bf1}_{[t,t+\e]}(s)+h(s,x){\bf1}_{(t+\e,T]}(s),\\
\ns\ds g_\e(s,r,x,u,y,z)=g(t,r,x,u,y,z){\bf1}_{[t,t+\e]}(s)
+g(s,r,x,u,y,z){\bf1}_{(t+\e,T]}(s).\ea\right.$$
This is a natural approximation of the original BSVIE \rf{BSVIE-rewrite}, in which the generator and the free term have been modified for $s\in[t,t+\e]$ only. By \autoref{lmm:well-posedness-SDE}, under {\rm(H1)--(H2)}, BSVIE \rf{BSVIE-e} admits a unique adapted solution $(Y_\e(\cd),Z_\e(\cd,\cd))$. By the stability of adapted solutions to BSVIEs, we have
\bel{d2}\ba{ll}
\ns\ds\sup_{s\in[t,T]}\dbE_t\(|Y_\e(s)-Y(s)|^2+\int_s^T|Z_\e(s,r)-Z(s,r)|^2dr\)\\
\ns\ds\les K\sup_{s\in[t,T]}\dbE_t\Big[|h_\e(s,X(T))-h(s,X(T))|^2\\
\ns\ds\qq+\int_s^T\2n\big|g_\e(s,r,X(r),u(r),Y(r),Z(s,r))
-g(s,r,X(r),u(r),Y(r),Z(s,r))\big|^2 dr\Big]\\
\ns\ds\les K\Big\{\sup_{s\in[t,t+\e]}\dbE_t\[|h(t,X(T))-h(s,X(T))|^2\\
\ns\ds\qq+\int_s^{T}\2n\big|g(t,r,X(r),u(r),Y(r),Z(s,r))
-g(s,r,X(r),u(r),Y(r),Z(s,r))\big|^2dr\]\Big\}\les K\e^2,\ea\ee
where the last inequality follows from the Lipschitz continuity of $h$ and $g$ in their first argument (see {\rm(H2)}), since $|s-t|\les\e$ for $s\in[t,t+\e]$; hereafter, $K>0$ is a generic constant which could be different from line to line. In particular,
\bel{|Y_e-Y|}|Y_\e(t)-Y(t)|\les K\e.\ee
In our later discussion, we will need a little more than the above; thus, some more delicate analysis is needed. We now present the following result, which gives a representation of $(Y_\e(\cd),Z_\e(\cd\,,\cd))$ in terms of $(X(\cd),Y(\cd),Z(\cd,\cd))$ and a better estimate than \rf{|Y_e-Y|}.
\bp{lemma-BSVIE-approximate-estimate} \sl Let {\rm(H1)}--{\rm(H2)} hold.
Let $(t,\xi)\in\sD$, $u(\cd)\in\sU[t,T]$, and $(X(\cd),Y(\cd),Z(\cd\,,\cd))$ be the adapted solution of \rf{state-rewrite}--\rf{BSVIE-rewrite}. For $\e\in(0,T-t]$, let $(Y_\e(\cd),Z_\e(\cd\,,\cd))$ be the adapted solution of \rf{BSVIE-e}. Then
\bel{Y_e=Y}\left\{\1n\ba{ll}
\ds Y_\e(s)=Y(s),\qq\qq t+\e<s\les T,\\
\ns\ds Z_\e(s,r)=Z(s,r),\qq t+\e<s\les r\les T,\ea\right.\ee
and
\bel{Z_e=Z}\left\{\1n\ba{ll}
\ds Y_\e(r)=\wt Y(r),\qq\q t\les r\les t+\e,\\
\ns\ds Z_\e(s,r)=\wt Z(r),\qq t\les s\les t+\e,~s\les r\les T,\ea\right.\ee
with $(\wt Y(\cd),\wt Z(\cd))$ being the adapted solution of the following BSDE:
\bel{BSDE-wt Y}\ba{ll}
\ns\ds\wt Y(s)=h(t,X(T))+\int_s^T\(g\big(t,r,X(r),u(r),Y(r),\wt Z(r)\big){\bf1}_{[t+\e,T]}(r)\\
\ns\ds\qq\qq\qq\qq+g\big(t,r,X(r),u(r),\wt Y(r),\wt Z(r)\big){\bf1}_{[t,t+\e]}(r)\)dr-\int_s^T\wt Z(r)dW(r),\q s\in[t,T].\ea\ee
Moreover, there exists a constant $K>0$, independent of $(t,\xi,u(\cd))$, such that
\bel{lemma-BSVIE-approximate-estimate-main}|Y_\e(t)-Y(t)|\les K\e^{2},\qq\as\ee
\ep
\begin{proof} First of all, for $s\in[t+\e,T]$, BSVIEs \rf{BSVIE-e} and \rf{BSVIE-rewrite} are identical. Thus, by the uniqueness of adapted solutions to BSVIEs, we have \rf{Y_e=Y}.
Next, for the given $(X(\cd),u(\cd))$, we denote (suppressing $X(\cd)$ and $u(\cd)$, for notational simplicity)
$$\ba{ll}
\ns\ds h(s)=h(s,X(T)),\qq g(s,r,y,z)=g(s,r,X(r),u(r),y,z),\\
\ns\ds h_\e(s)=h_\e(s,X(T)),\qq g_\e(s,r,y,z)=g_\e(s,r,X(r),u(r),y,z).\ea$$
To get \rf{Z_e=Z}, we let $(y(s,\cd),z(s,\cd))$ and $(y_\e(s,\cd),z_\e(s,\cd))$ be the adapted solutions to the following BSDEs parameterized by $s\in[t,T]$, respectively:
\bel{yz}y(s,\t)=h(s)+\int_\t^Tg(s,r,Y(r),z(s,r))dr-\int_\t^Tz(s,r)
dW(r),\q\t\in[s,T],\ee
and
\bel{y_e}y_\e(s,\t)=h_\e(s)+\int_\t^Tg_\e(s,r,Y_\e(r),
z_\e(s,r))dr-\int_\t^Tz_\e(s,r)dW(r),\q\t\in[s,T].\ee
Note that $Y(\cd)$ and $Y_\e(\cd)$ appearing on the right-hand sides are known. Setting $\t=s$ in \rf{y_e}, we have
\bel{y_e*}y_\e(s,s)=h_\e(s)+\int_s^Tg_\e(s,r,Y_\e(r),z_\e(s,r))dr
-\int_s^Tz_\e(s,r)dW(r),\q s\in[t,T].\ee
Regarding \rf{BSVIE-e} (suppressing $X(\cd)$ and $u(\cd)$) as a BSVIE with generator $(s,r,y,z)\mapsto g_\e(s,r,Y_\e(r),z)$ (independent of $y$), we see that BSVIEs \rf{y_e*} and \rf{BSVIE-e} are identical. Hence, by the uniqueness of adapted solutions to BSVIEs, one has
\bel{y_e=Y_e}y_\e(s,s)=Y_\e(s),\qq z_\e(s,r)=Z_\e(s,r),\qq t\les s\les r\les T.\ee
Similarly,
\bel{y=Y}y(s,s)=Y(s),\qq z(s,r)=Z(s,r),\qq t\les s\les r\les T.\ee
Next, for $s\in[t,t+\e]$, \rf{y_e} reads
\bel{wt Y_e*}y_\e(s,\t)=h(t)+\int_\t^Tg(t,r,Y_\e(r),z_\e(s,r))dr
-\int_\t^Tz_\e(s,r)dW(r),\q\t\in[s,T],\ee
which is a BSDE on $[s,T]$ with the generator and terminal term independent of $s\in[t,t+\e]$. Therefore,
\bel{wt Y_e=Y}y_\e(s,\t)=y_\e(\t),\q z_\e(s,\t)=z_\e(\t),\qq s\in[t,t+\e],~\t\in[s,T].\ee
Combining \rf{y_e=Y_e} and \rf{wt Y_e=Y}, we further have that
\bel{Z=Z}z_\e(\t)=z_\e(s,\t)=Z_\e(s,\t)\equiv Z_\e(\t),\qq s\in[t,t+\e],~\t\in[s,T],\ee
i.e., in the current case, $Z_\e(s,\t)$ is independent of $s$ and can be denoted by $Z_\e(\t)$. Clearly, $(y_\e(\cd),z_\e(\cd))$ satisfies (pick $s=t$ in \rf{wt Y_e*}) the following BSDE on $[t,T]$:
\bel{wt Y_e**}y_\e(\t)=h(t)+\int_\t^Tg(t,r,Y_\e(r),z_\e(r))dr-\int_\t^Tz_\e(r)dW(r),\q\t\in[t,T].\ee
Comparing \rf{wt Y_e**} with the following (taking $s=t$ in \rf{yz})
\bel{y(t)}y(t,\t)=h(t)+\int_\t^Tg(t,r,Y(r),z(t,r))dr-\int_\t^Tz(t,r)dW(r),\q
\t\in[t,T],\ee
making use of the fact that (see \rf{Y_e=Y}) $Y_\e(r)=Y(r)$ for $r\in[t+\e,T]$, we see that the above BSDEs \rf{wt Y_e**} and \rf{y(t)} are identical for $\t\in[t+\e,T]$. Hence,
\bel{2.23}y_\e(\t)=y(t,\t)\equiv y(\t),\qq z_\e(\t)=z(t,\t)\equiv z(\t),\qq\t\in[t+\e,T].\ee
Namely, for $\t\in[t+\e,T]$, $(y_\e(\t),z_\e(\t))$ is independent of $\e>0$ and $(y(t,\t),z(t,\t))$ is independent of $t$. Thus, both can be denoted by $(y(\t),z(\t))$. Combining with \rf{Z=Z} and \rf{2.23}, one has
\bel{z=Z_e}z(\t)=Z_\e(\t),\qq\t\in[t+\e,T].\ee
Also, taking $\t=t+\e$ in \rf{wt Y_e**}, we have (noting \rf{2.23})
$$y(t+\e)=h(t)+\int_{t+\e}^Tg(t,r,Y(r),z(r))dr
-\int_{t+\e}^Tz(r)dW(r),$$
which is $\cF_{t+\e}$-measurable, as $(y_\e(\cd),z_\e(\cd))=(y(\cd),z(\cd))$ solves BSDE \rf{wt Y_e**} on $[t+\e,T]$. Hence (recalling that $s\in[t,t+\e]$, so that \rf{Z=Z}, \rf{2.23} and \rf{z=Z_e} hold), \rf{BSVIE-e} becomes
\bel{BSDE-Y_e}\begin{aligned}
Y_\e(s)&=h(t)+\int_{t+\e}^Tg(t,r,Y_\e(r),Z_\e(s,r))dr
-\int_{t+\e}^TZ_\e(s,r)dW(r)\\
&\qq\qq+\int_s^{t+\e}g(t,r,Y_\e(r),Z_\e(s,r))dr
-\int_s^{t+\e}Z_\e(s,r)dW(r)\\
&=h(t)+\int_{t+\e}^Tg(t,r,Y(r),z(r))dr
-\int_{t+\e}^Tz(r)dW(r)\\
&\qq\qq+\int_s^{t+\e}g(t,r,Y_\e(r),Z_\e(r))dr
-\int_s^{t+\e}Z_\e(r)dW(r)\\
&=y(t+\e)+\int_s^{t+\e}g(t,r,Y_\e(r),Z_\e(r))dr
-\int_s^{t+\e}Z_\e(r)dW(r),
\end{aligned}
\ee
which is a BSDE on $[t,t+\e]$, since $y(t+\e)$ is $\cF_{t+\e}$-measurable. Also, for fixed $t$, from \rf{yz}, one has the following:
\bel{y(bar t)}y(t,s)=y(t+\e)+\int_s^{t+\e}g(t,r,Y(r),z(t,r))dr
-\int_s^{t+\e}z(t,r)dW(r),\q s\in[t,t+\e].\ee
We now have two BSDEs \rf{BSDE-Y_e} and \rf{y(bar t)} on $[t,t+\e]$ which have the same terminal value $y(t+\e)$ but different generators $g(t,\cd\,,Y_\e(\cd),z)$ and $g(t,\cd\,,Y(\cd),z)$. Hence, by the stability estimate for BSDEs and \rf{d2}, we have
$$\ba{ll}
\ns\ds\dbE_t\[\sup_{s\in[t,t+\e]}|y(t,s)-Y_\e(s)|^2+\int_t^{t+\e}|z(t,r)-Z_\e(r)|^2dr\]\\
\ns\ds\q\les K\dbE_t\(\int_t^{t+\e}|g(t,r,Y(r),Z_\e(r))-g(t,r,Y_\e(r),
Z_\e(r))|dr\)^2\\
\ns\ds\q\les K\dbE_t\(\int_t^{t+\e}|Y(r)-Y_\e(r)|dr\)^2\les K\e\dbE_t\(\int_t^{t+\e}|Y(r)-Y_\e(r)|^2 dr\)\les K\e^4.\ea$$
Consequently,
$$|Y_\e(t)-Y(t)|^2=|Y_\e(t)-y(t,t)|^2\les\sup_{s\in[t,T]}\dbE_t\big[|Y_\e(s)-y(t,s)
|^2\big]\les K\e^4,\qq\as,$$
which proves the estimate \rf{lemma-BSVIE-approximate-estimate-main}. Finally, from \rf{BSDE-Y_e} and \rf{yz}, together with \rf{z=Z_e}, for $s\in[t,t+\e]$, one has
\bel{BSDE-Y_e*}\begin{aligned}
Y_\e(s)&=h(t)+\int_{t+\e}^Tg(t,r,Y(r),z(r))dr
-\int_{t+\e}^Tz(r)dW(r)\\
&\qq\qq\qq+\int_s^{t+\e}g(t,r,Y_\e(r),Z_\e(r))dr
-\int_s^{t+\e}Z_\e(r)dW(r)\\
&=h(t)+\int_{t+\e}^Tg(t,r,Y(r),Z_\e(r))dr
-\int_{t+\e}^TZ_\e(r)dW(r)\\
&\qq\qq\qq+\int_s^{t+\e}g(t,r,Y_\e(r),Z_\e(r))dr
-\int_s^{t+\e}Z_\e(r)dW(r).\end{aligned}\ee
Hence, if $(\wt Y(\cd),\wt Z(\cd))$ is the adapted solution of \rf{BSDE-wt Y}, then \rf{Z_e=Z} holds. \end{proof}
\ms
Next, let us consider the following system of decoupled FSDEs and BSVIE:
\bel{FBSDE-no-u}\left\{\2n\ba{ll}
\ds X(s)=\xi+\int_t^s b(r,X(r))dr+\int_t^s\si(r,X(r))dW(r),\q s\in[t,T],\\
\ns\ds Y(s)=h(s,X(T))+\int_s^T g(s,r,X(r),Y(r),Z(s,r))dr-\int_s^T Z(s,r)dW(r),\q s\in[t,T],\ea\right.\ee
where $(t,\xi)\in\sD$. For the maps $b,\si,g,h$ above, we assume that the corresponding versions of (H1)--(H2) (ignoring $u(\cd)$) hold. Then, by \autoref{lmm:well-posedness-SDE}, system \rf{FBSDE-no-u} admits a unique adapted solution $(X(\cd),Y(\cd),Z(\cd\,,\cd))$. Further, we have the following representation (whose proof can be found in \cite{Wang-Yong 2019}).
\begin{proposition}\label{representation-theorem}
\sl Suppose that $\Th(\cd,\cd,\cd)\in C^{0,1,2}(\D[0,T]\times\dbR^n;\dbR)$ is a classical solution to the following PDE:
\bel{PDE-FBSDE}\left\{\2n\ba{ll}
\ds\Th_s(t,s,x)+\BH^0(t,s,x,\Th(s,s,x),\Th_x(t,s,x),\Th_{xx}(t,s,x))=0,\q (t,s,x)\in\D[0,T]\times\dbR^n,\\
\ns\ds\Th(t,T,x)=h(t,x),\q (t,x)\in[0,T]\times\dbR^n,\ea\right.\ee
where
$$\ba{ll}
\ns\ds\BH^0(t,s,x,\th,p,P)={1\over2}\tr\big[P\si(s,x)\si(s,x)^\top \big]+pb(s,x)+g(t,s,x,\th,p\si(s,x)),\\
\ns\ds\qq\qq\qq\qq\qq\qq(t,s,x,\th,p,P)\in\D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbS^n.\ea$$
For any $(t,\xi)\in\sD$, suppose equation \rf{FBSDE-no-u} admits a unique adapted solution $(X(\cd),Y(\cd),Z(\cd\,,\cd))\equiv(X(\cd\,;t,\xi),Y(\cd\,;t,\xi),
Z(\cd\,,\cd\,;t,\xi))$. Then
\bel{Y=Th}Y(s)=\Th(s,s,X(s)),~Z(s,r)=\Th_x(s,r,X(r))\si(r,X(r)),\q(s,r)\in\D[t,T],~\as\ee
\end{proposition}
As a convention, here and below, for any differentiable function $\f:\dbR^n\to\dbR$, the gradient $\f_x:\dbR^n\to\dbR^{1\times n}$ is a row-vector-valued function. From the above, we see that if $r\mapsto\Th_x(s,r,x)$ is continuous, then $r\mapsto Z(s,r)$ is continuous.
\begin{remark}\rm When
\bel{g0}g(s,r,x,y,z)\equiv g(r,x,y,z),\qq h(s,x)\equiv h(x),\ee
equation \rf{FBSDE-no-u} becomes a decoupled FBSDE. Then \autoref{representation-theorem} turns into a representation theorem of the adapted solution to the corresponding FBSDE.
\end{remark}
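\ms

As a simple illustration of \autoref{representation-theorem}, take $n=d=1$, $b=0$, $\si=1$, and $g=0$. Then \rf{PDE-FBSDE} reduces to the backward heat equation $\Th_s(t,s,x)+{1\over2}\Th_{xx}(t,s,x)=0$ with $\Th(t,T,x)=h(t,x)$, whose solution is $\Th(t,s,x)=\dbE\big[h\big(t,x+W(T)-W(s)\big)\big]$. In this case, \rf{Y=Th} gives
$$Y(s)=\Th(s,s,X(s))=\dbE_s\big[h(s,X(T))\big],\qq s\in[t,T],$$
which agrees with \rf{cost-none-Y} (with $g=0$).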
To conclude this section, let us recall a result for Problem (R). The corresponding HJB equation reads
\bel{HJB-FBSDE}\left\{\2n\ba{ll}
\ds V^R_s(s,x)+\inf_{u\in U}\BH^R\big(s,x,u,V^R(s,x),V^R_x(s,x),V^R_{xx}(s,x)\big)=0,\q (s,x)\in[0,T]\times\dbR^n,\\
\ns\ds V^R(T,x)=h(x),\q x\in\dbR^n,\ea\right.\ee
with
$$\ba{ll}
\ns\ds\BH^R(s,x,u,y,p,P)={1\over2}\tr\(P\si(s,x,u)\si(s,x,u)^\top\)+pb(s,x,u)+g\big(s,x,u,y,p\si(s,x,u)\big),\\
\ns\ds\qq\qq\qq\qq\qq\qq(s,x,u,y,p,P)\in[0,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times n}\times\dbS^n.\ea$$
The following result is a standard verification theorem of Problem (R). One is referred to Wei--Yong--Yu \cite{Wei-Yong-Yu 2017} for a proof of this result.
\begin{proposition}\label{veri-thm} \sl Let {\rm(H1)--(H2)} hold with $g$ and $h$ being of form \rf{g0}. Suppose that $V^R(\cd,\cd)\in C^{1,2}([0,T]\times\dbR^n;\dbR)$ is a classical solution of the HJB equation \rf{HJB-FBSDE}. Then for any initial pair $(t,\xi)\in\sD$,
$$V^R(t,\xi)\les J(t,\xi;u(\cd)),\qq\as,\q\forall u(\cd)\in\sU[t,T].$$
Let $(t,\xi)\in\sD$ be a given initial pair and $(\bar X(\cd),\bar u(\cd))$ be the corresponding state-control pair such that
$$\bar u(s)\in\arg\min~\BH^R\big(s,\bar X(s),\cd\,,V^R(s,\bar X(s)),V^R_x(s,\bar X(s)),V^R_{xx}(s,\bar X(s))\big),\q\ae~s\in[t,T],$$
then
$$V^R(t,\xi)= J(t,\xi;\bar u(\cd)),~\as$$
In other words, $V^R(\cd\,,\cd)$ is the value function of {\rm Problem (R)} and $(\bar X(\cd),\bar u(\cd))$ is an optimal pair of {\rm Problem (R)} for the initial pair $(t,\xi)$.
\end{proposition}
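\ms

As a standard illustration of \autoref{veri-thm}, consider the following purely formal LQ example (constructed here for illustration only; its quadratic data do not satisfy the Lipschitz/growth conditions in (H1)--(H2), but the verification argument goes through directly): take $n=d=1$, $U=\dbR$, $b(s,x,u)=u$, $\si\equiv1$, $g(s,x,u,y,z)={u^2\over2}$, and $h(x)={x^2\over2}$. Then \rf{HJB-FBSDE} becomes
$$V^R_s(s,x)+{1\over2}V^R_{xx}(s,x)-{1\over2}|V^R_x(s,x)|^2=0,\qq V^R(T,x)={x^2\over2},$$
which admits the explicit classical solution
$$V^R(s,x)={x^2\over2(1+T-s)}+{1\over2}\ln(1+T-s),$$
and the minimizer in \autoref{veri-thm} is attained at the feedback $\bar u(s)=-V^R_x(s,\bar X(s))=-{\bar X(s)\over1+T-s}$.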
\section{Feedback Strategy and Equilibrium Strategy}\label{sec:equlibrium-strategy}
\ms
Since Problem (N) is time-inconsistent, instead of looking for an optimal control, we should find a so-called {\it time-consistent equilibrium strategy} for it, which is time-consistent and is locally optimal in a suitable sense. Inspired by the method of multi-person differential games developed in \cite{Yong 2012}, we will introduce {\it approximate equilibrium strategies} associated with partitions of time intervals, together with their constructions, and investigate the limit as the mesh size of the partition tends to zero. To this end, let us first introduce the following definition.
\bde{feedback strategy} \rm Let $\t\in[0,T)$. A map $\Psi:[\t,T]\times\dbR^n\to U$ is called a {\it feedback strategy} (of state equation \rf{state}) on $[\t,T]$ if for every $t\in[\t,T]$ and $\xi\in\sX_t$, the following {\it closed-loop} system:
\bel{closed-loop}X(s)=\xi+\int_t^sb\big(r,X(r),\Psi(r,X(r))\big)dr+\int_t^s\si\big(r,X(r),
\Psi(r,X(r))\big)dW(r),\q s\in[t,T],\ee
admits a unique solution $X(\cd)\equiv X(\cd\,;t,\xi,\Psi(\cd\,,\cd))\equiv X^\Psi(\cd)$.
\ede
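\ms

For intuition, the closed-loop system \rf{closed-loop} can be simulated by a standard Euler--Maruyama scheme. The following minimal sketch (in Python) generates one sample path; the coefficients $b$, $\si$ and the strategy $\Psi$ below are invented purely for illustration and are not taken from this paper.
\begin{verbatim}
import numpy as np

# Euler--Maruyama simulation of the closed-loop state equation
# X(s) = xi + int b(r, X, Psi(r, X)) dr + int sigma(r, X, Psi(r, X)) dW(r).
rng = np.random.default_rng(0)
t, T, n_steps = 0.0, 1.0, 1000
dt = (T - t) / n_steps

b     = lambda s, x, u: u        # toy drift
sigma = lambda s, x, u: 0.3      # toy (constant) volatility
Psi   = lambda s, x: -x          # toy feedback strategy

X = 1.0                          # deterministic initial state xi
for i in range(n_steps):
    s = t + i * dt
    u = Psi(s, X)                # closed-loop control u(s) = Psi(s, X(s))
    X += b(s, X, u) * dt + sigma(s, X, u) * np.sqrt(dt) * rng.standard_normal()
print("X(T) along one closed-loop sample path:", X)
\end{verbatim}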
Under (H2), for a given feedback strategy $\Psi(\cd\,,\cd)$, the following BSVIE:
\bel{closed-cost}\ba{ll}
\ns\ds Y(s)=h(s,X^\Psi(T))+\int_s^Tg(s,r,X^\Psi(r),\Psi(r,X^\Psi(r)),Y(r),Z(s,r))dr\\
\ns\ds\qq\qq\qq\qq\qq-\int_s^TZ(s,r)dW(r),\q s\in[t,T]\ea\ee
admits a unique adapted solution $(Y(\cd),Z(\cd\,,\cd))\equiv(Y^\Psi(\cd),Z^\Psi(\cd\,,\cd))$. Therefore, the corresponding recursive cost functional at $(t,\xi)$ is well-defined:
\bel{cost*}J(t,\xi;\Psi(\cd\,,\cd))=Y^\Psi(t).\ee
In this case, the outcome $u(\cd)$ of $\Psi(\cd\,,\cd)$, called a {\it closed-loop control}, given by the following
$$u(s)=\Psi(s,X^\Psi(s)),\qq s\in[t,T],$$
is {\it time-consistent}. Moreover, by \autoref{representation-theorem}, if $\Th^\Psi(\cd,\cd,\cd)\in C^{0,1,2}(\D[\t,T]\times\dbR^n;\dbR)$ is a classical solution to the following representation PDE (parameterized by $t\in[\t,T]$):
\bel{PDE-FBSDE*}\left\{\2n\ba{ll}
\ds\Th^\Psi_s(t,s,x)+\BH^\Psi\big(t,s,x,\Th^\Psi(s,s,x),\Th^\Psi_x
(t,s,x),\Th^\Psi_{xx}(t,s,x)\big)=0,\q(t,s,x)\in\D[\t,T]\times\dbR^n,\\
\ns\ds\Th^\Psi(t,T,x)=h(t,x),\q(t,x)\in[\t,T]\times\dbR^n,\ea\right.\ee
where
$$\ba{ll}
\ns\ds\BH^\Psi(t,s,x,\th,p,P)={1\over2}\tr\big[P\si\big(s,x,\Psi(s,x)\big)
\si\big(s,x,\Psi(s,x)\big)^\top\big]+pb\big(s,x,\Psi(s,x)\big)\\
\ns\ds\qq\qq\qq\qq\q+g\big(t,s,x,\Psi(s,x),\th,p\si(s,x,\Psi(s,x))\big),\\
\ns\ds\qq\qq\qq\qq\qq\qq(t,s,x,\th,p,P)\in\D[\t,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbS^n,\ea$$
then $(Y^\Psi(\cd),Z^\Psi(\cd\,,\cd))$ admits the following representation:
\bel{Y=Th*}\left\{\2n\ba{ll}
\ds Y^\Psi(s)=\Th^\Psi(s,s,X^\Psi(s)),\qq s\in[t,T],~\as,\\
\ns\ds Z^\Psi(s,r)=\Th^\Psi_x(s,r,X^\Psi(r))\si\big(r,X^\Psi(r),\Psi(r,X^\Psi(r))\big),
\q(s,r)\in\D[t,T],~\as,\ea\right.\ee
with $X^\Psi(\cd)$ being the solution of \rf{closed-loop}.
\ms
Next, we introduce the following notion, which combines the time-consistency and the local optimality.
\bde{equilibrium strategy} \rm A feedback strategy $\Psi:[\t,T]\times\dbR^n\to U$ (of system \rf{state}) is called an {\it equilibrium strategy} on $[\t,T]$ if for any $t\in[\t,T)$ and $\xi\in\sX_t$, there exists a family of feedback strategies $\Psi^\e:[t,T]\times\dbR^n\to U$, indexed by small $\e>0$ with $t+\e\les T$, such that
\bel{lim}\left\{\2n\ba{ll}
\ds\Psi^\e(s,x)=\Psi(s,x),\qq\qq(s,x)\in[t+\e,T]\times\dbR^n,\\
\ns\ds\lim_{\e\to0}\sup_{s\in[t,t+\e),|x|\les M}|\Psi^\e(s,x)-\Psi(s,x)|=0,\qq\forall M>0,\ea\right.\ee
and
\bel{optimality}J(t,\xi;\Psi^\e)\les J\big(t,\xi;u\oplus\Psi|_{[t+\e,T]}\big)+o(\e),\qq\forall u(\cd)\in\sU[t,t+\e],\ee
where $o(\e)$ is uniform in $u(\cd)\in\sU[t,t+\e]$ and
\bel{u+Psi}\big(u\oplus\Psi|_{[t+\e,T]}\big)(s)=\left\{\2n\ba{ll}
\ds u(s),\qq\qq\q\,\, s\in[t,t+\e),\\
\ns\ds\Psi(s,X^\Psi(s)),\qq s\in[t+\e,T].\ea\right.\ee
\ede
From \rf{optimality}, we see that the family $\{\Psi^\e(\cd\,,\cd)\}_{\e>0}$ satisfies the following:
\bel{near-optim}J(t,\xi;\Psi^\e)\les\inf_{u(\cd)\in\sU[t,t+\e]}J\big(t,\xi;u\oplus
\Psi|_{[t+\e,T]}\big)+o(\e),\ee
which can be referred to as the {\it local near-optimality} of the family $\{\Psi^\e(\cd\,,\cd)\}_{\e>0}$ at $(t,\xi)$ (see \cite{Zhou-1998} for the notion of near-optimality, for standard time-consistent problems). Further, if the following holds
$$\sup_{s\in[t,t+\e),|x|\les M}|\Psi^\e(s,x)-\Psi(s,x)|=o(\e),\qq\forall M>0,$$
then \rf{near-optim} leads to
\bel{near-optim*}J\big(t,\xi;\Psi\big|_{[t,T]}\big)\les\inf_{u(\cd)\in\sU[t,t+\e]}
J\big(t,\xi;u\oplus\Psi|_{[t+\e,T]}\big)+o(\e),\ee
which can be referred to as the local near-optimality of $\Psi(\cd\,,\cd)$ itself (instead of the family $\{\Psi^\e(\cd\,,\cd)\}_{\e>0}$).
\ms
To construct equilibrium strategies, we need to make some preparations. First, we set
\bel{Hamilton}\ba{ll}
\ns\ds\BH(t,s,x,u,\th,p,P)={1\over2}\tr\[P\si(s,x,u)\si(s,x,u)^\top\]+pb(s,x,u)
+g\big(t,s,x,u,\th,p\si(s,x,u)\big),\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x,u,\th,p,P)\in\D[0,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times n}\times\dbS^n.\ea\ee
The infimum (or minimum) of the map $u\mapsto\BH(t,s,x,u,\th,p,P)$ will be needed below. However, this map may not be bounded below in general; even if it is bounded below, the infimum might not be achieved; and even if the minimum exists (say, in the case that $U$ is compact), the minimizer might not be unique and might not have the needed regularity properties. To avoid these inconvenient situations, which are not our main concern in this paper, similar to \cite{Yong 2012}, we introduce the following technical assumption, which, as mentioned in \cite{Yong 2012}, is satisfied in a number of situations.
\ms
{\bf(H3)} There is a continuous map $\psi:\D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbS^n\to U$ such that
\bel{define-psi}\ba{ll}
\ns\ds\BH\big(t,s,x,\psi(t,s,x,\th,p,P),\th,p,P\big)
=\min_{u\in U}\BH(t,s,x,u,\th,p,P),\\
\ns\ds\qq\qq\qq\qq\qq(t,s,x,\th,p,P)\in \D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbS^n,\ea\ee
with the properties that the map
$$(t,s,x,\th,p,P)\mapsto\BH\big(t,s,x,\psi(t,s,x,\th,p,P),\th,p,P\big)$$
is continuously differentiable with bounded derivatives in its arguments.
\ms
We admit that the above assumption is restrictive and perhaps far stronger than necessary. The problem without (H3) is wide open. We prefer not to explore the minimization problem in (H3) here, and leave it to our future investigations.
\ms
Now, let $0\les\t<\bar\t<T$ and $\Psi:[\bar\t,T]\times\dbR^n\to U$ be a feedback strategy on $[\bar\t,T]$. We want to extend $\Psi(\cd\,,\cd)$ to $[\t,T]\times\dbR^n$ by means of optimal controls. To this end, we formulate a time-consistent optimal control problem on $[\t,\bar \t]$. For any $t\in[\t,\bar\t]$, $\xi\in\sX_t$, and $u(\cd)\in\sU[t,\bar\t]$, we apply $u(\cd)$ to the system on $[t,\bar\t)$ followed by the feedback strategy $\Psi(\cd\,,\cd)$ on $[\bar\t,T]$. Then the state equation reads (recall \rf{u+Psi})
\bel{state[u+Psi]}\ba{ll}
\ns\ds X(s)=\xi+\int_t^sb\big(r,X(r),(u\oplus\Psi|_{[\bar\t,T]})(r,X(r))\big)dr\\
\ns\ds\qq\qq\qq+\int_t^s\si\big(r,X(r),(u\oplus\Psi|_{[\bar\t,T]})(r,X(r))\big)dW(r),\qq s\in[t,T].\ea\ee
The corresponding recursive cost functional (on $[t,T]$) should be
\bel{cost[u+Psi]}J\big(t,\xi;(u\oplus\Psi|_{[\bar\t,T]})(\cd)\big)=Y(t),\ee
where $(Y(\cd),Z(\cd\,,\cd))$ is the adapted solution to the following BSVIE:
\bel{BSVIE[u+Psi]}\ba{ll}
\ns\ds Y(s)=h(s,X(T))+\int_s^Tg\big(s,r,X(r),(u\oplus\Psi|_{[\bar\t,T]})(r,X(r)),Y(r),Z(s,r)\big)dr\\
\ns\ds\qq\qq\qq\qq\qq-\int_s^TZ(s,r)dW(r),\qq s\in[t,T].\ea\ee
Note that for $s\in[t,\bar\t]$, the state $X(\cd)$ satisfies the following:
\bel{state[u]}X(s)=\xi+\int_t^sb(r,X(r),u(r))dr+\int_t^s\si(r,X(r),u(r)
)dW(r),\q s\in[t,\bar\t],\ee
and the BSVIE \rf{BSVIE[u+Psi]} can be written as
\bel{BSVIE[t,t]}\ba{ll}
\ns\ds Y(s)=h(s,X(T))+\int_{\bar\t}^Tg\big(s,r,X(r),\Psi(r,X(r)),Y(r),Z(s,r)\big)dr-\int_{\bar\t}^T
Z(s,r)dW(r)\\
\ns\ds\qq\qq+\int_s^{\bar\t}g(s,r,X(r),u(r),Y(r),Z(s,r))dr-\int_s^{\bar\t}Z(s,r)dW(r),\qq s\in[t,\bar\t].\ea\ee
It seems difficult to write the above \rf{BSVIE[t,t]} as a BSDE on $[t,\bar\t]$ in general. In fact, even if the sum of the first three terms on the right-hand side is $\cF_{\bar\t}\,$-measurable, due to the dependence of this sum and of the integrand of the fourth term on $s$, one can at most get a BSVIE on $[t,\bar\t]$. Consequently, the optimal control problem associated with the state equation \rf{state[u]} and cost functional \rf{cost[u+Psi]} (determined through \rf{BSVIE[t,t]}) is generally time-inconsistent, and cannot be handled by the classical dynamic programming approach. Therefore, instead of \rf{BSVIE[u+Psi]}, we introduce the following modified BSVIE:
\bel{BSVIE[u+Psi]m}\ba{ll}
\ns\ds\wt Y(s)=h_\t(s,X(T))+\int_s^Tg_\t\big(s,r,X(r),(u\oplus\Psi|_{[\bar\t,T]})(r,X(r)),\wt Y(r),\wt Z(s,r)\big)dr\\
\ns\ds\qq\qq\qq\qq\qq\qq-\int_s^T\wt Z(s,r)dW(r),\qq s\in[t,T],\ea\ee
where
$$\left\{\1n\ba{ll}
\ds h_\t(s,x)=h(\t,x){\bf1}_{[\t,\bar\t]}(s)+h(s,x){\bf1}_{(\bar\t,T]}(s),\\
\ns\ds g_\t(s,r,x,u,y,z)=g(\t,r,x,u,y,z){\bf1}_{[\t,\bar\t]}(s)
+g(s,r,x,u,y,z){\bf1}_{(\bar\t,T]}(s).\ea\right.$$
Then we define the recursive cost functional for $(t,\xi)\in[\t,\bar\t]\times\sX_t$ as follows:
\bel{J^t}J^\t(t,\xi;u(\cd))=\wt Y(t),\ee
and pose the following optimal control problem.
\ms
\bf Problem (C$^\Psi[\t,\bar\t]$). \rm For given $(t,\xi)\in[\t,\bar\t]\times\sX_t$, find a control $\bar u(\cd)\in\sU[t,\bar\t]$ such that
$$J^\t(t,\xi;\bar u(\cd))=\inf_{u(\cd)\in\sU[t,\bar\t]}J^\t(t,\xi;u(\cd))\equiv V^\t(t,\xi).$$
\ms
The above is referred to as a {\it sophisticated} optimal control problem on $[\t,\bar\t]$. We have the following result concerning the above problem.
\bp{time-consistent} \sl Let $\Th^\Psi(\cd\,,\cd\,,\cd)$ be a classical solution to the following:
\bel{PDE-Th^Psi}\left\{\2n\ba{ll}
\ds\Th^\Psi_s(t,s,x)+\BH\big(t,s,x,\Psi(s,x),\Th^\Psi(s,s,x),\Th^\Psi_x(t,s,x),\Th^\Psi_{xx}
(t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x)\in\D[\bar\t,T]\times\dbR^n,\\
\ns\ds\Th^\Psi(t,T,x)=h(t,x),\qq (t,x)\in[\bar\t,T]\times\dbR^n,\ea\right.\ee
and let $\wt\Th^\Psi(\t,\cd\,,\cd)$ be a classical solution to the following:
\bel{PDE-Th^Psi*}\left\{\2n\ba{ll}
\ds\wt\Th^\Psi_s(\t,s,x)+\BH\big(\t,s,x,\Psi(s,x),\Th^\Psi(s,s,x),\wt\Th^\Psi_x(\t,s,x),
\wt\Th^\Psi_{xx}(\t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[\bar\t,T]\times\dbR^n,\\
\ns\ds\wt\Th^\Psi(\t,T,x)=h(\t,x),\q x\in\dbR^n.\ea\right.\ee
Then
\bel{BSDE-wt Y*}\ba{ll}
\ns\ds\wt Y(s)=\wt\Th^\Psi(\t,\bar\t,X(\bar\t))+\int_s^{\bar\t}g\big(\t,r,X(r),u(r),\wt Y(r),
\wt Z(r)\big)dr-\int_s^{\bar\t}\wt Z(r)dW(r),\q s\in[t,\bar\t].\ea\ee
Consequently, Problem {\rm(C$^\Psi[\t,\bar\t]$)} is time-consistent on $[\t,\bar\t]$.
\ep
\begin{proof} \rm By \autoref{lemma-BSVIE-approximate-estimate}, we have
\bel{Y_e=Y*}\left\{\1n\ba{ll}
\ds\wt Y(s)=Y(s),\qq\qq\bar\t\les s\les T,\\
\ns\ds\wt Z(s,r)=Z(s,r),\qq\bar\t\les s\les r\les T,\ea\right.\ee
and
\bel{Z_e=Z*}\wt Z(s,r)=\wt Z(r),\qq t\les s\les\bar\t,~s\les r\les T,\ee
with $(\wt Y(\cd),\wt Z(\cd))$ being the adapted solution of the following BSDE:
\bel{BSDE-wt Y*1}\ba{ll}
\ns\ds\wt Y(s)=h(\t,X(T))+\int_s^T\(g\big(\t,r,X(r),\Psi(r,X(r)),Y(r),\wt Z(r)\big){\bf1}_{[\bar\t,T]}(r)\\
\ns\ds\qq\qq\qq+g\big(\t,r,X(r),u(r),\wt Y(r),\wt Z(r)\big){\bf1}_{[\t,\bar\t]}(r)\)dr-\int_s^T\wt Z(r)dW(r),\q s\in[t,\bar\t].\ea\ee
On the other hand, by \autoref{representation-theorem}, we have the following representation:
\bel{Y=Th*1}Y(s)=\Th^\Psi(s,s,X(s)),~Z(s,r)=\Th^\Psi_x\big(s,r,X(r)\big)\si\big(r,X(r),
\Psi(r,X(r))\big),\q(s,r)\in\D[\bar\t,T],~\as,\ee
where $\Th^\Psi(\cd\,,\cd\,,\cd)$ is the solution to \rf{PDE-Th^Psi}. Thus, on $[\bar\t,T]$, we have the following FBSDE:
\bel{FBSDE[t,T]}\left\{\2n\ba{ll}
\ds X(s)=X(\bar\t\,)+\int_{\bar\t}^sb\big(r,X(r),\Psi(r,X(r))\big)dr+\int_{\bar \t}^s\si\big(r,X(r),\Psi(r,X(r))\big)dW(r),\\
\ns\ds\wt Y(s)=h(\t,X(T))+\int_s^Tg\big(\t,r,X(r),\Psi(r,X(r)),\Th^\Psi(r,r,X(r)),\wt Z(r)\big)dr-\int_s^T\wt Z(r)dW(r).\ea\right.\ee
Then
\bel{}\wt Y(s)=\wt\Th^\Psi(\t,s,X(s)),\q\wt Z(s)=\wt\Th^\Psi_x(\t,s,X(s))\si\big(s,X(s),\Psi(s,X(s))\big),\qq s\in[\bar\t,T],\ee
with $\wt\Th^\Psi(\t,\cd\,,\cd)$ solving \rf{PDE-Th^Psi*}. Hence, in particular,
$$\wt Y(\bar\t)=\wt\Th^\Psi(\t,\bar\t,X(\bar\t)).$$
Then \rf{BSDE-wt Y*} follows, which is a BSDE on $[t,\bar\t]$. This shows that Problem (C$^\Psi[\t,\bar\t]$) is a standard optimal control problem with a recursive cost functional; hence, it is time-consistent on $[\t,\bar\t]$. \end{proof}
Let us make a comment on \rf{PDE-Th^Psi} and \rf{PDE-Th^Psi*}. In the former, $t$ is not fixed, since both $\Th^\Psi(t,s,x)$ and $\Th^\Psi(s,s,x)$ appear in the equation; whereas in the latter, $\t$ is fixed and only plays the role of a parameter. Clearly, the former is much more difficult than the latter, and their structures are essentially different.
\ms
From \rf{lemma-BSVIE-approximate-estimate-main}, we know that there exists a constant $K>0$, independent of $(t,\xi,u(\cd))$ such that
\bel{|Y-Y|}|\wt Y(t)-Y(t)|\les K(\bar\t-t)^{2},\qq\as,~t\in[\t,\bar\t].\ee
Now, since Problem (C$^\Psi[\t,\bar\t]$) is a standard, time-consistent optimal control problem with a recursive cost functional, we may use the dynamic programming method. Thus, the value function $V^\t(\cd\,,\cd)$ of Problem (C$^\Psi[\t,\bar\t]$) is the unique viscosity solution to the following HJB equation:
\bel{HJB(t)}\left\{\2n\ba{ll}
\ds V^\t_t(t,x)+\inf_{u\in U}\BH\big(\t,t,x,u,V^\t(t,x),V^\t_x(t,x),V^\t_{xx}(t,x)\big)=0,\q(t,x)\in[\t,\bar\t]
\times\dbR^n,\\
\ns\ds V^\t(\bar\t,x)=\wt\Th^\Psi(\t,\bar\t,x),\qq x\in\dbR^n,\ea\right.\ee
with the Hamiltonian $\BH$ given by \rf{Hamilton}. Further, under a suitable non-degeneracy condition, $V^\t(\cd\,,\cd)$ is the unique classical solution to the above HJB equation. Then we may define
\bel{Psi*}\Psi(t,x)=\psi\big(\t,t,x,V^\t(t,x),V^\t_x(t,x),V^\t_{xx}(t,x)\big),
\qq(t,x)\in[\t,\bar\t)\times\dbR^n.\ee
This extends $\Psi$ from $[\bar\t,T]\times\dbR^n$ to $[\t,T]\times\dbR^n$. For convenience, we refer to the above as the {\it strategy extension procedure} for $\Psi(\cd\,,\cd)$.
\ms
With the above preparation, we now proceed with the construction of equilibrium strategies. Let $\t\in[0,T)$ be fixed and let $\Pi\equiv\{t_k~|~0\les k\les N\}$ be a partition of $[\t,T]$ with
\bel{Pi}\t=t_0<t_1<\cds<t_{N-1}<t_N=T,\ee
whose {\it mesh size} $\|\Pi\|$ is defined by
$$\|\Pi\|\deq\ds\max_{0\les i\les N-1}|t_{i+1}-t_i|.$$
For the given partition $\Pi$, denote
\bel{def-pi}\left\{\ba{ll}
\ds\pi^\Pi(t)=\sum_{k=0}^{N-2}t_k{\bf1}_{[t_k,t_{k+1})}(t)+t_{N-1}{\bf1}_{[t_{N-1},t_N]}(t),\\
\ns\ds\wt\pi^\Pi(t)=t_1{\bf1}_{[t_{0},t_1]}(t)+\sum_{k=1}^{N-1}t_{k+1} {\bf1}_{(t_k,t_{k+1}]}(t),\ea\right.\ee
so that $\pi^\Pi(t)$ and $\wt\pi^\Pi(t)$ are, respectively, the left and right endpoints of the subinterval of $\Pi$ containing $t$.
Following the idea of \cite{Yong 2012}, we now inductively construct a feedback strategy $\Psi^\Pi:[\t,T]\times\dbR^n\to U$ associated with $\Pi$ by means of optimal controls. For any $t\in[t_{N-1},T]$, we first consider the following controlled SDE:
\bel{state(N)}X^N(s)=\xi+\int_{t}^sb(r,X^N(r),u^N(r))dr+\int_{t}^s\si(r,X^N(r),u^N(r))dW(r),\q s\in[t,T],\ee
with the recursive cost functional
\bel{J^N}J^N(t,\xi;u^N(\cd))=Y^N(t),\ee
where $(Y^N(\cd),Z^N(\cd))$ is the adapted solution to the following BSDE:
\bel{BSDE(N)}\ba{ll}
\ns\ds Y^N(s)=h(t_{N-1},X^N(T))+\int_s^Tg\big(t_{N-1},r,X^N(r),u^N(r),Y^N(r),Z^N(r)\big)dr\\
\ns\ds\qq\qq\qq\qq\qq\qq-\int_s^TZ^N(r)dW(r),\qq s\in[t,T].\ea\ee
The optimal control problem associated with the above state equation \rf{state(N)} and recursive cost functional \rf{J^N}--\rf{BSDE(N)} is time-consistent. By the dynamic programming approach, under proper conditions, the value function, denoted by $V^N(\cd\,,\cd)$, is the classical solution to the following HJB equation:
\bel{HJB(N)}\left\{\2n\ba{ll}
\ns\ds V^N_s(s,x)+\inf_{u\in U}\BH\big(t_{N-1},s,x,u,V^N(s,x),V^N_x(s,x),V^N_{xx}(s,x)\big)=0,\q(s,x)\in[t_{N-1},T]\times\dbR^n,\\
\ns\ds V^N(T,x)=h(t_{N-1},x),\qq x\in\dbR^n,\ea\right.\ee
with the Hamiltonian $\BH$ given by \rf{Hamilton}. Then we define the feedback strategy
\bel{Psi(N)}\Psi^N(s,x)=\psi\big(t_{N-1},s,x,V^N(s,x),V^N_x(s,x),V^N_{xx}(s,x)\big),
\qq(s,x)\in[t_{N-1},T]\times\dbR^n,\ee
whose outcome
$$u^N(s)=\Psi^N(s,X^N(s)),\qq s\in[t_{N-1},T]$$
is an optimal control of the corresponding optimal control problem.
\ms
Next, by the above strategy extension procedure, we obtain an extension $\Psi^{N-1}:[t_{N-2},T]\times\dbR^n\to U$ of $\Psi^N(\cd\,,\cd)$ by the following steps:
\ms
\it Step 1. \rm Solve the following representation PDE parameterized by $t\in[t_{N-1},T]$:
\bel{PDE-FBSDE(N-1)}\left\{\2n\ba{ll}
\ds\Th^{N-1}_s(t,s,x)+\BH\big(t,s,x,\Psi^{N}(s,x),\Th^{N-1}(s,s,x),\Th^{N-1}_x
(t,s,x),\Th^{N-1}_{xx}(t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x)\in\D[t_{N-1},T]\times\dbR^n,\\
\ns\ds\Th^{N-1}(t,T,x)=h(t,x),\q(t,x)\in[t_{N-1},T]\times\dbR^n,\ea\right.\ee
and then solve the following PDE:
\bel{PDE-FBSDE(N-1)*}\left\{\2n\ba{ll}
\ds\wt\Th^{N-1}_s(s,x)+\BH\big(t_{N-2},s,x,\Psi^N(s,x),\Th^{N-1}(s,s,x),\wt\Th^{N-1}_x
(s,x),\wt\Th^{N-1}_{xx}(s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[t_{N-1},T]\times\dbR^n,\\
\ns\ds\wt\Th^{N-1}(T,x)=h(t_{N-2},x),\qq x\in\dbR^n.\ea\right.\ee
\it Step 2. \rm Solve the following HJB equation:
\bel{HJB(N-1)}\left\{\2n\ba{ll}
\ds V^{N-1}_s(s,x)+\inf_{u\in U}\BH\big(t_{N-2},s,x,u,V^{N-1}(s,x),V^{N-1}_x(s,x),V^{N-1}_{xx}(s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[t_{N-2},t_{N-1}]\times\dbR^n,\\
\ns\ds V^{N-1}(t_{N-1},x)=\wt\Th^{N-1}(t_{N-1},x),\qq x\in\dbR^n,\ea\right.\ee
with the Hamiltonian $\BH$ given by \rf{Hamilton}, assuming that the classical solution $V^{N-1}(\cd\,,\cd)$ exists.
\ms
\it Step 3. \rm Define
\bel{Psi(N-1)}\Psi^{N-1}(s,x)\1n=\1n\left\{\2n\ba{ll}
\ds\Psi^N(s,x),\qq\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[t_{N-1},T]\times\dbR^n,\\
\ns\ds\psi\big(t_{N-2},s,x,V^{N-1}(s,x),V^{N-1}_x(s,x),
V^{N-1}_{xx}(s,x)\big),\q(s,x)\1n\in\1n[t_{N-2},t_{N-1})\1n\times\1n\dbR^n.\ea\right.\ee
By the verification theorem (\autoref{veri-thm}), we know that the outcome
$$u^{N-1}(s)=\Psi^{N-1}(s,X^{\Psi^{N-1}}(s)),\qq s\in[t_{N-2},t_{N-1}]$$
of the feedback strategy $\Psi^{N-1}(\cd\,,\cd)$ is the optimal control for the corresponding sophisticated optimal control problem on $[t_{N-2},t_{N-1}]$.
\ms
Now, suppose $\Psi^{k+1}(\cd\,,\cd)$ has been constructed on $[t_k,T]\times\dbR^n$ for some $k=1,2,\cds,N-1$. We apply the above strategy extension procedure to obtain an extension $\Psi^k:[t_{k-1},T]\times\dbR^n\to U$ of $\Psi^{k+1}(\cd\,,\cd)$ by the following steps:
\ms
\it Step 1. \rm Solve the following representation PDE parameterized by $t\in[t_k,T]$:
\bel{PDE-FBSDE(k)}\left\{\2n\ba{ll}
\ds\Th^{k}_s(t,s,x)+\BH\big(t,s,x,\Psi^{k+1}(s,x),\Th^{k}(s,s,x),\Th^{k}_x
(t,s,x),\Th^{k}_{xx}(t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x)\in\D[t_k,T]\times\dbR^n,\\
\ns\ds\Th^{k}(t,T,x)=h(t,x),\q(t,x)\in[t_k,T]\times\dbR^n,\ea\right.\ee
and then solve the following PDE:
\bel{PDE-FBSDE(k)*}\left\{\2n\ba{ll}
\ds\wt\Th^{k}_s(s,x)\1n+\1n\BH\big(t_{k-1},s,x,\Psi^{k+1}(s,x),\Th^{k}(s,s,x),\wt\Th^{k}_x
(s,x),\wt\Th^{k}_{xx}(s,x)\big)\1n=\1n0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[t_k,T]\times\dbR^n,\\
\ns\ds\wt\Th^{k}(T,x)=h(t_{k-1},x),\qq x\in\dbR^n.\ea\right.\ee
\it Step 2. \rm Solve the following HJB equation:
\bel{HJB(k)}\left\{\2n\ba{ll}
\ds V^k_s(s,x)+\inf_{u\in U}\BH\big(t_{k-1},s,x,u,V^k(s,x),V^k_x(s,x),V^k_{xx}(s,x)\big)=0,\q(s,x)\in
[t_{k-1},t_k]\times\dbR^n,\\
\ns\ds V^k(t_k,x)=\wt\Th^{k}(t_k,x),\qq x\in\dbR^n,\ea\right.\ee
with the Hamiltonian $\BH$ given by \rf{Hamilton}, again, assuming the classical solution $V^k(\cd\,,\cd)$ exists.
\ms
\it Step 3. \rm Define
\bel{Psi(k)}\Psi^k(s,x)=\left\{\2n\ba{ll}
\ds\Psi^{k+1}(s,x),\qq\qq\qq\qq\qq\qq\qq\qq(s,x)\in[t_k,T]\times\dbR^n,\\
\ns\ds\psi\big(t_{k-1},s,x,V^k(s,x),V^k_x(s,x),V^k_{xx}(s,x)\big),
\qq\q(s,x)\in[t_{k-1},t_k)\times\dbR^n.\ea\right.\ee
Similarly, the outcome
$$u^k(s)=\Psi^k(s,X^{\Psi^k}(s)),\qq s\in[t_{k-1},t_k],$$
of the feedback strategy $\Psi^k(\cd\,,\cd)$ is an optimal control of the corresponding sophisticated optimal control problem on $[t_{k-1},t_k]$.
\ms
This completes the induction.
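\ms

To make the above backward construction more concrete, the following sketch (in Python) implements a deterministic, fully discretized caricature of it: $\si=0$, one Euler step per partition interval, a state grid in place of the representation PDEs \rf{PDE-FBSDE(k)}--\rf{PDE-FBSDE(k)*} and the HJB equations \rf{HJB(k)}, and invented coefficients $b,g,h$ (with a hyperbolic-type discount) that are not taken from this paper. For each $k=N,N-1,\dots,1$, the cost-to-go of Player $k$ is evaluated with the discount base reset to $t_{k-1}$ along the frozen strategies of the later players, and Player $k$ then minimizes over his/her own interval.
\begin{verbatim}
import numpy as np

T, N = 1.0, 20
ts = np.linspace(0.0, T, N + 1)        # partition t_0 < t_1 < ... < t_N = T
dt = T / N
xs = np.linspace(-3.0, 3.0, 121)       # state grid
Us = np.linspace(-2.0, 2.0, 41)        # control grid (a compact U)

b = lambda x, u: u                                      # toy drift (sigma = 0)
g = lambda t, s, x, u: (x**2 + u**2) / (1.0 + (s - t))  # hyperbolic-type discount
h = lambda t, x: x**2 / (1.0 + (T - t))                 # discounted terminal cost

Psi = np.zeros((N, xs.size))           # Psi[j]: feedback used on [t_j, t_{j+1})

for k in range(N, 0, -1):
    # Cost-to-go from time t_k with discount base t_{k-1}, computed backward
    # under the frozen strategies Psi[j], j >= k, of the later players.
    Theta = h(ts[k - 1], xs)           # value at s = T
    for j in range(N - 1, k - 1, -1):
        u = Psi[j]
        Theta = (g(ts[k - 1], ts[j], xs, u) * dt
                 + np.interp(xs + b(xs, u) * dt, xs, Theta))  # clamps off-grid
    # Player k minimizes over his/her own (single-step) interval.
    best = np.full(xs.size, np.inf)
    for u in Us:
        c = (g(ts[k - 1], ts[k - 1], xs, u) * dt
             + np.interp(xs + b(xs, u) * dt, xs, Theta))
        Psi[k - 1] = np.where(c < best, u, Psi[k - 1])
        best = np.minimum(best, c)

print("approximate equilibrium control at (t,x) = (0,1):",
      Psi[0][np.searchsorted(xs, 1.0)])
\end{verbatim}
Of course, this discrete caricature bypasses the PDEs entirely; it is only meant to display the resetting of the discount base at each $t_{k-1}$, which is what distinguishes the resulting strategy from a pre-commitment optimal control.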
\ms
It is seen that $\Psi^1:[\t,T]\times\dbR^n\to U$ is a feedback
strategy whose outcome $u^k(\cd)$ on $[t_{k-1},t_k]$ is optimal for the corresponding sophisticated problem on $[t_{k-1},t_k]$, $1\les k\les N$. We call the above constructed strategy (which is determined by the partition $\Pi$) an {\it approximate equilibrium strategy} associated with $\Pi$, and denote it by $\Psi^\Pi(\cd\,,\cd)$.
Our next goal is to obtain the limit as $\|\Pi\|\to0$.
\ms
To get the right ansatz, let us make an observation. Once $\Psi^k(s,x)$ is defined for $(s,x)\in[t_{k-1},T]\times\dbR^n$, we may extend
$\Th^{k}(t,s,x)$ and
$\wt\Th^{k}(s,x)$ as follows:
\bel{PDE-FBSDE(k)-}\left\{\2n\ba{ll}
\ds\Th^{k}_s(t,s,x)+\BH\big(t,s,x,\Psi^k(s,x),\Th^{k}(s,s,x),\Th^{k}_x
(t,s,x),\Th^{k}_{xx}(t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x)\in\D[t_{k-1},T]\times\dbR^n,\\
\ns\ds\Th^{k}(t,T,x)=h(t,x),\q(t,x)\in[t_{k-1},T]\times\dbR^n,\ea\right.\ee
and
\bel{PDE-FBSDE(k)*-}\left\{\2n\ba{ll}
\ds\wt\Th^{k}_s(s,x)\1n+\1n\BH\big(t_{k-1},s,x,\Psi^k(s,x),\Th^{k}(s,s,x),\wt\Th^{k}_x
(s,x),\wt\Th^{k}_{xx}(s,x)\big)\1n=\1n0,\q(s,x)\in[t_{k},T]\times\dbR^n,\\ [1mm]
\ds\wt\Th^{k}_s(s,x)\1n+\1n\BH\big(t_{k-1},s,x,\Psi^k(s,x),\wt\Th^{k}(s,x),\wt\Th^{k}_x
(s,x),\wt\Th^{k}_{xx}(s,x)\big)\1n=\1n0,\q(s,x)\in[t_{k-1},t_k)\times\dbR^n,\\ [1mm]
\ns\ds\wt\Th^{k}(T,x)=h(t_{k-1},x),\qq x\in\dbR^n.\ea\right.\ee
Then
$$V^k(s,x)=\wt\Th^{k}(s,x),\qq s\in[t_{k-1},t_k),\q x\in\dbR^n.$$
Now, for any given partition $\Pi$ of $[\t,T]$, we define
\bel{Th-Pi-tau-h-g}
\left\{\begin{aligned}
&\Th^\Pi(t,s,x)=\Th^{k}(t,s,x),\q(t,s,x)\in\D[t_{k},T]\times\dbR^n,\q k=0,1,...,N-1,\\
&\wt\Th^\Pi(t,s,x)=\sum_{k=1}^{N} \wt\Th^{k}(s,x){\bf1}_{[t_{k-1},t_k)}(t),\q (t,s,x)\in\D[\t,T]\times\dbR^n,\\
&h^\Pi(t,x)=\sum_{k=1}^N h(t_{k-1},x){\bf1}_{[t_{k-1},t_k)}(t),\q (t,x)\in[\t,T]\times\dbR^n,\\
&g^\Pi(t,s,x,u,y,z)=\sum_{k=1}^N g(t_{k-1},s,x,u,y,z){\bf1}_{[t_{k-1},t_k)}(t),\\
&\qq\qq\qq\qq (t,s,x,u,y,z)\in\D[\t,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times d}.
\end{aligned}\right.
\ee
Note that
$$
\Th^{j}(t,s,x)=\Th^{k}(t,s,x),\q(t,s,x)\in\D[t_{k},T]\times\dbR^n,\q 1\les j\les k\les N.$$
Thus $\Th^\Pi(\cd,\cd,\cd)$ is well-defined.
Then $\Th^\Pi(\cd,\cd,\cd)$ and $\wt\Th^\Pi(\cd,\cd,\cd)$ satisfy the following PDEs:
\bel{Th-Pi}\left\{\2n\ba{ll}
\ns\ds\Th^\Pi_s(t,s,x)+\BH\big(t,s,x,\Psi^\Pi(s,x),\Th^\Pi(s,s,x),\Th^\Pi_x(t,s,x),
\Th^\Pi_{xx}(t,s,x)\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq(t,s,x)\in\D[\t,T]\times\dbR^n,\\
\ns\ds\Th^\Pi(t,T,x)=h(t,x),\q (t,x)\in[\t,T]\times\dbR^n,\ea\right.\ee
and
\bel{Th-Pi-tau}
\left\{\begin{aligned}
& \wt\Th^{\Pi}_s(t,s,x)+\BH^\Pi\big(t,s,x,\Psi^\Pi(s,x),\Th^\Pi(s,s,x),\wt\Th^\Pi_x(t,s,x),\wt\Th^\Pi_{xx}(t,s,x)\big)=0,\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq t\in [\t,T],\,s\in[\wt\pi^\Pi(t),T],\,x\in\dbR^n,\\
& \wt\Th^{\Pi}_s(t,s,x)+\BH^\Pi\big(t,s,x,\Psi^\Pi(s,x), \wt\Th^\Pi(t,s,x),\wt\Th^\Pi_x(t,s,x),\wt\Th^\Pi_{xx}(t,s,x)\big)=0,\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq t\in [\t,T],\,s\in[t,\wt\pi^\Pi(t)],\,x\in\dbR^n,\\
& \wt\Th^{\Pi}(t,T,x)=h^\Pi(t,x),\q (t,x)\in[\t,T]\times\dbR^n,
\end{aligned}\right.\ee
where
\bel{H-Pi}\ba{ll}
\ns\ds\BH^\Pi(t,s,x,u,\th,p,P)={1\over 2}\tr[P\si(s,x,u)\si(s,x,u)^\top]+p b(s,x,u)+g^\Pi(t,s,x,u,\th,p\si(s,x,u)),\\
\ns\ds\qq\qq\qq\qq\qq\qq(t,s,x,u,\th,p,P)\in \D[\t,T]\times\dbR^n\times U\times\dbR\times\dbR^{1\times n}\times\dbS^n,\ea\ee
and
\bel{Psi-Pi}\Psi^\Pi(s,x)=\psi(\pi^\Pi(s),s,x,\wt\Th^\Pi(s,s,x),\wt\Th^\Pi_x(s,s,x),
\wt\Th^\Pi_{xx}(s,s,x)),\q (s,x)\in[\t,T]\times\dbR^n.\ee
Note that
\bel{h-h}|h^\Pi(t,x)-h(t,x)|=\sum_{k=1}^N|h(t_{k-1},x)-h(t,x)|{\bf1}_{[t_{k-1},t_k)}(t)
\les L\|\Pi\|,\ee
and likewise,
\bel{H-H}\ba{ll}
\ns\ds|\BH^\Pi(t,s,x,u,\th,p,P)-\BH(t,s,x,u,\th,p,P)|\\
\ns\ds=|g^\Pi(t,s,x,u,\th,p\si(s,x,u))-g(t,s,x,u,\th,p\si(s,x,u))|\les L\|\Pi\|.\ea\ee
Therefore, in the limit, one should have
\bel{PDE-FBSDE(limit)}\left\{\2n\ba{ll}
\ds\Th_s(t,s,x)\1n+\1n\BH\big(t,s,x,\Psi(s,x),\Th(s,s,x),\Th_x(t,s,x),\Th_{xx}(t,s,x)\big)=0,
\q(t,s,x)\1n\in\1n\D[\t,T]\1n\times\1n\dbR^n,\\
\ns\ds\Th(t,T,x)=h(t,x),\q(t,x)\in[\t,T]\times\dbR^n,\ea\right.\ee
\bel{tiPDE-FBSDE(limit)}\left\{\2n\ba{ll}
\ds\wt\Th_s(t,s,x)\1n+\1n\BH\big(t,s,x,\Psi(s,x),\Th(s,s,x),\wt\Th_x(t,s,x),\wt\Th_{xx}(t,s,x)\big)=0,
\q(t,s,x)\1n\in\1n\D[\t,T]\1n\times\1n\dbR^n,\\
\ns\ds\wt\Th(t,T,x)=h(t,x),\q(t,x)\in[\t,T]\times\dbR^n,\ea\right.\ee
\bel{V(limit)}V(s,x)=\wt\Th(s,s,x),\qq(s,x)\in[\t,T]\times\dbR^n,\ee
and
\bel{Psi(limit)}\Psi(s,x)=\psi\big(s,s,x,\wt\Th(s,s,x),\wt\Th_x(s,s,x),\wt\Th_{xx}(s,s,x)\big),\qq(s,x)\in
[\t,T]\times\dbR^n.\ee
Suppose that \rf{PDE-FBSDE(limit)} admits a unique classical solution. Then, comparing \rf{tiPDE-FBSDE(limit)} with \rf{PDE-FBSDE(limit)}, we obtain
\bel{ti-Th-Th}
\wt\Th(t,s,x)=\Th(t,s,x),\q (t,s,x)\in\D[\t,T]\times\dbR^n,
\ee
and thus
\begin{align}
\label{V(limit)-Th-Psi1}&V(s,x)=\wt\Th(s,s,x)=\Th(s,s,x),\qq(s,x)\in[\t,T]\times\dbR^n,\\
\label{V(limit)-Th-Psi}&\Psi(s,x)=\psi\big(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)\big),
\qq(s,x)\in[\t,T]\times\dbR^n.
\end{align}
\ms
We call \rf{PDE-FBSDE(limit)} the {\it equilibrium HJB equation} of Problem (N), and $V(\cd,\cd)$ the {\it equilibrium value function} of Problem (N). The map $\Psi(\cd,\cd)$ defined by \rf{V(limit)-Th-Psi} is a {\it feedback strategy} of Problem (N), provided that \rf{PDE-FBSDE(limit)} admits a sufficiently regular solution. We will show that the feedback strategy $\Psi(\cd,\cd)$ is an {\it equilibrium strategy} of Problem (N) in the next section. Note that the equilibrium HJB equation \rf{PDE-FBSDE(limit)} can actually be written as follows:
\bel{Th-differential-form-rewrite}\left\{\2n\ba{ll}
\ds\Th_s(t,s,x)+{1\over2}\tr\[\Th_{xx}(t,s,x)a\big(s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x),
\Th_{xx}(s,s,x))\big)\]\\
\ns\ds\q+ \Th_x(t,s,x)b\big(s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x))\big)\\
\ns\ds\q+g\big(t,s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)),\Th(s,s,x),\\
\ns\ds\qq\qq\Th_x(t,s,x)\si(s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)))\big)=0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\q(t,s,x)\in\D[\t,T]\times\dbR^n,\\
\ns\ds\Th(t,T,x)=h(t,x),\q (t,x)\in[\t,T]\times\dbR^n,\ea\right.\ee
where
\bel{def-a}
a(s,x,u)\deq\si(s,x,u)\si(s,x,u)^\top,\q (s,x,u)\in[\t,T]\times\dbR^n\times U.
\ee
With the feedback strategy $\Psi(\cd\,,\cd)$ defined by \rf{V(limit)-Th-Psi}, the corresponding closed-loop system reads
\bel{equilibrium-system}\left\{\begin{aligned}
\bar X(s) &=\xi+\int_\t^sb\big(r,\bar X(r),\Psi(r,\bar X(r))\big)dr+\int_\t^s\si\big(r,\bar X(r),\Psi(r,\bar X(r))\big)dW(r),\qq s\in[\t,T],\\
\bar Y(s) &=h(s,\bar X(T))+\int_s^T g\big(s,r,\bar X(r),\Psi(r,\bar X(r)),\bar Y(r),\bar Z(s,r)\big)dr\\
&\qq\qq\qq\qq\qq -\int_s^T \bar Z(s,r)dW(r),\qq s\in[\t,T].
\end{aligned}\right.\ee
The equilibrium value function satisfies
\bel{bar Y}V(s,\bar X(s))=\Th(s,s,\bar X(s))=\bar Y(s),\q s\in[\t,T].\ee
Also,
\bel{bar Z}\bar Z(t,s)=\Th_x(t,s,\bar X(s))\si\big(s,\bar X(s),\Psi(s,\bar X(s))\big),\qq (t,s)\in\D[\t,T].\ee
We see that \rf{bar Y}--\rf{bar Z} exhibit an interesting relationship between the equilibrium HJB equation \rf{Th-differential-form-rewrite} and the coupled FSDE--BSVIE system \rf{equilibrium-system}. We will explore this further in \autoref{BSVIEs}.
\section{Verification Theorem and Well-Posedness of Equilibrium HJB Equation}\label{Verification Theorem}
For any partition $\Pi$, we have constructed an {\it approximate equilibrium strategy} $\Psi^\Pi(\cd,\cd)$ of Problem (N). Taking the limit as $\|\Pi\|\to 0$, we have formally obtained the feedback strategy $\Psi(\cd,\cd)$.
In this section, we would like to show that $\Psi(\cd,\cd)$ is an equilibrium
strategy of Problem (N) in the sense of \autoref{equilibrium strategy}.
Such a result can be viewed as a verification theorem for our constructed strategy $\Psi(\cd,\cd)$.
\ms
In order to show the local optimality of the feedback strategy $\Psi(\cd,\cd)$,
we assume that the equilibrium HJB equation \rf{Th-differential-form-rewrite} admits a unique smooth solution. We also assume that all the involved functions are bounded and differentiable with bounded derivatives.
For any fixed $ t \in[0,T)$ and $\e>0$ small with $t+\e\les T$, we consider Problem (C$^\Psi[t,t+\e]$).
Then by \autoref{time-consistent}, the state equation and the cost functional of Problem (C$^\Psi[t,t+\e]$) can be given by
\bel{state-t-t+e}
X(s)=\xi+\int_t^sb(r,X(r),u(r))dr+\int_t^s\si(r,X(r),u(r))dW(r),\q s\in[t,t+\e],
\ee
and
\bel{J-t}
J^t(t,\xi;u(\cd))=\wt Y(t),
\ee
with $(\wt Y(\cd),\wt Z(\cd))$ being the adapted solution of the following BSDE:
\bel{BSDE-wtY-t+e}\ba{ll}
\ns\ds\wt Y(s)=\Th(t,t+\e,X(t+\e))+\int_s^{t+\e}g\big(t,r,X(r),u(r),\wt Y(r),
\wt Z(r)\big)dr\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq-\int_s^{t+\e}\wt Z(r)dW(r),\q s\in[t,t+\e].
\ea\ee
Note that Problem (C$^\Psi[t,t+\e]$) is a classical recursive stochastic optimal control problem and thus is time-consistent.
Let $\Th^\e(t,\cd,\cd)$ be the unique classical solution of the following HJB equation:
\bel{PDE-Th-e}\left\{\2n\ba{ll}
\ds\Th^\e_s(t,s,x)\1n+\1n\BH\big(t,s,x,\Psi^\e(s,x),\Th^\e(t,s,x),\Th^\e_x(t,s,x),
\Th^\e_{xx}(t,s,x)\big)=0,
\q(s,x)\1n\in\1n[t,t+\e]\1n\times\1n\dbR^n,\\
\ns\ds\Th^\e(t,t+\e,x)=\Th(t,t+\e,x),\q x\in\dbR^n,\ea\right.\ee
where
\bel{Psi-e}
\Psi^\e(s,x)=\psi\big(t,s,x,\Th^\e(t,s,x),\Th^\e_x(t,s,x),\Th^\e_{xx}(t,s,x)\big),\q(s,x)\1n\in\1n[t,t+\e]\1n\times\1n\dbR^n.
\ee
By \autoref{veri-thm}, the outcome $u^\e(\cd)=\Psi^\e(\cd,X^\e(\cd))$ of strategy $\Psi^\e(\cd,\cd)$ is an optimal control of Problem (C$^\Psi[t,t+\e]$);
that is
\bel{J-t-Psit}
J^t(t,\xi;\Psi^\e(\cd,\cd))=\inf_{u(\cd)\in\sU[t,t+\e]}J^t(t,\xi;u(\cd)).
\ee
Note that $\Th(\cd,\cd,\cd)$ satisfies the following equation on $\D[t,t+\e]$:
\bel{PDE-Th-t-t+e}\3n\left\{\2n\ba{ll}
\ds\Th_s( \t,s,x)\1n+\1n\BH\big(\t,s,x,\Psi(s,x),\Th(s,s,x),\Th_x(\t,s,x),\Th_{xx}(\t,s,x)
\big)\1n=\1n0,~(\t,s,x)\1n\in\1n\D[t,t\1n+\1n\e]\1n\times\1n\dbR^n,\\
\ns\ds\Th(\t,t+\e,x)=\Th(\t,t+\e,x),\q(\t,x)\in[t,t+\e]\times\dbR^n,\ea\right.\ee
where
\bel{Psi-t-t+e}
\Psi(s,x)=\psi\big(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)\big),\q(s,x)\1n\in\1n
[t,t+\e]\1n\times\1n\dbR^n.\ee
We observe that the $\BH$-term of \rf{PDE-Th-t-t+e} depends on the diagonal values $\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)$ of the unknown variables and the $\BH$-term of \rf{PDE-Th-e} depends on
$\Th^\e(t,s,x),\Th^\e_x(t,s,x),\Th^\e_{xx}(t,s,x)$. Thus \rf{PDE-Th-t-t+e} is a non-local PDE and \rf{PDE-Th-e} is a classical PDE with the parameter $t$.
Then in general, we do not have
\bel{Th-The-eq}
\Th^\e(t,s,x)=\Th(t,s,x),\q(s,x)\1n\in\1n[t,t+\e]\1n\times\1n\dbR^n.
\ee
Therefore, there are some gaps in the proofs of the verification theorem in \cite{Wei-Yong-Yu 2017,Mei-Yong 2019}.
In fact, if \rf{Th-The-eq} holds, by \rf{PDE-Th-e}--\rf{PDE-Th-t-t+e}, we get
\begin{align*}
&\BH\big(t,s,x,\Psi^\e(s,x),\Th^\e(t,s,x),\Th^\e_x(t,s,x),\Th^\e_{xx}(t,s,x)\big)\\
&\q=\BH\big(t,s,x,\Psi(s,x),\Th(s,s,x),\Th_x(t,s,x),\Th_{xx}(t,s,x)\big),\q(s,x)\in[t,t+\e]\times\dbR^n.
\end{align*}
Then the following should hold true:
$$
\Th^\e(t,s,x)=\Th(s,s,x),\q(s,x)\1n\in\1n[t,t+\e]\1n\times\1n\dbR^n.
$$
Combining the above with \rf{Th-The-eq} yields that
\bel{Th-Th-eq}
\Th(t,s,x)=\Th(s,s,x),\q(s,x)\1n\in\1n[t,t+\e]\1n\times\1n\dbR^n.
\ee
Since the $\BH$-term of \rf{PDE-Th-t-t+e} depends on $\t$, with $t\les\t\les t+\e$, the above equality usually fails.
Therefore, we do not have \rf{Th-The-eq} in general.
\ms
By \autoref{lemma-BSVIE-approximate-estimate}, we know that there exists a constant $K>0$, independent of $(t,\xi,u(\cd))$ such that
\bel{J-t-J}
\big|J^t(t,\xi;u(\cd))-J(t,\xi;u\oplus\Psi|_{[t+\e,T]})\big|\les K\e^{2}.
\ee
Combining the above with \rf{J-t-Psit}, we have
\bel{J-Psit}
J(t,\xi;\Psi^\e\oplus\Psi|_{[t+\e,T]})\les J(t,\xi;u\oplus\Psi|_{[t+\e,T]})
+o(\e),\qq\forall u(\cd)\in\sU[t,t+\e],\ee
where $o(\e)$ is uniform in $u(\cd)\in\sU[t,t+\e]$.
Thus $(\Psi^\e\oplus\Psi|_{[t+\e,T]})(\cd,\cd)$ satisfies the local near-optimality \rf{near-optim}.
Next, we would like to show that $\Psi(\cd,\cd)$ satisfies the local optimality condition \rf{near-optim*} under the following assumption:
\ms
{\bf(H4)} There exists a nondecreasing continuous function $\wt\rho:[0,\i)\to[0,\i)$ with $\wt\rho(0)=0$ such that
$$\ba{ll}
\ns\ds|\Th^\e(t,s,x)-\Th(t,s,x)|+|\Th_x^\e(t,s,x)-\Th_x(t,s,x)|+|\Th_{xx}^\e(t,s,x)-\Th_{xx}(t,s,x)|\les \wt\rho(\e)(1+|x|),\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\forall(s,x)\in[t,t+\e]\times\dbR^n,\ea$$
where $\Th(\cd,\cd,\cd)$ and $\Th^\e(\cd,\cd,\cd)$ are the classical solutions of the PDE \rf{PDE-Th-t-t+e} and \rf{PDE-Th-e}, respectively.
\ms
Under (H4), by the definitions of $\Psi^\e(\cd,\cd)$ and $\Psi(\cd,\cd)$, we have
\bel{Psi-e-Psi}
\begin{aligned}
&\q|\Psi^\e(s,x)-\Psi(s,x)|\\
&=\big|\psi\big(t,s,x,\Th^\e(t,s,x),\Th^\e_x(t,s,x),\Th^\e_{xx}(t,s,x)\big)-\psi\big(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)\big)\big|\\
&\les \big|\psi\big(t,s,x,\Th^\e(t,s,x),\Th^\e_x(t,s,x),\Th^\e_{xx}(t,s,x)\big)-\psi\big(t,s,x,\Th(t,s,x),\Th_x(t,s,x),\Th_{xx}(t,s,x)\big)\big|\\
&\q+\big|\psi\big(t,s,x,\Th(t,s,x),\Th_x(t,s,x),\Th_{xx}(t,s,x)\big)-\psi\big(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)\big)\big|\\
&\les K\wt\rho(\e)(1+|x|)+K|s-t|\les K[\wt\rho(\e)+\e](1+|x|)\deq\rho(\e)(1+|x|),\q (s,x)\in[t,t+\e]\times\dbR^n.
\end{aligned}
\ee
Let $(X(\cd),\wt Y(\cd),\wt Z(\cd))\equiv(X^\Psi(\cd),\wt Y^\Psi(\cd),\wt Z^\Psi(\cd))$ and $(X^\e(\cd),\wt Y^\e(\cd),\wt Z^\e(\cd))\equiv (X^{\Psi^\e}(\cd),\wt Y^{\Psi^\e}(\cd),\wt Z^{\Psi^\e}(\cd))$ be the unique solutions to the controlled SDE \rf{state-t-t+e}
and BSDE \rf{BSDE-wtY-t+e} corresponding to the feedback strategies $\Psi(\cd,\cd)$ and $\Psi^\e(\cd,\cd)$, respectively.
By \autoref{lmm:well-posedness-SDE} and \rf{Psi-e-Psi}, we have
\bel{X-e-est1}
\begin{aligned}
\dbE_t\(\sup_{t\les s\les t+\e}|X^\e(s)|^2\)&\les K\dbE_t\Big\{1+|\xi|^2+ \int_t^{t+\e}\big|\Psi^\e(r,X^\e(r))\big|^2dr\Big\}\\
&\les K\dbE_t\Big\{1+|\xi|^2+ \int_t^{t+\e}\big[|\Psi(r,X^\e(r))|^2+\rho(\e)^2(1+|X^\e(r)|^2)\big]dr\Big\}\\
&\les K\dbE_t\Big\{1+|\xi|^2+ \int_t^{t+\e}\big[1+|X^\e(r)|^2+\rho(T-t)^2(1+|X^\e(r)|^2)\big]dr\Big\}\\
&\les K\dbE_t\Big\{1+|\xi|^2+ \int_t^{t+\e}|X^\e(r)|^2dr\Big\}.
\end{aligned}
\ee
Then by Gr\"{o}nwall's inequality, there exists a constant $K>0$, independent of $(t,\e)$ such that
\bel{X-e-est}
\dbE_t\(\sup_{t\les s\les t+\e}|X^\e(s)|^2\)\les K\dbE_t[1+|\xi|^2]=K(1+|\xi|^2).
\ee
Combining the above with the standard estimate of SDEs, we have
\bel{X-e-X-est}
\begin{aligned}
&\dbE_t\(\sup_{t\les s\les t+\e}|X^\e(s)-X(s)|^2\)\\
&\q\les K\dbE_t\Big\{ \int_t^{t+\e}\big|b\big(r,X(r),\Psi^\e(r,X^\e(r))\big)-b\big(r,X(r),\Psi(r,X^\e(r))\big)\big|^2dr\\
&\qq\qq+\int_t^{t+\e}\big|\si\big(r,X(r),\Psi^\e(r,X^\e(r))\big)-\si\big(r,X(r),\Psi(r,X^\e(r))\big)\big|^2dr\Big\}\\
&\q\les K\dbE_t\Big\{\int_t^{t+\e}\rho(\e)^2(1+|X^\e(r)|^2)dr\Big\}\les K\e\rho(\e)^2(1+|\xi|^2).
\end{aligned}
\ee
In particular,
\bel{X-e-X-t-est}
\dbE_t|X^\e(t+\e)-X(t+\e)|^2\les K\e\rho(\e)^2(1+|\xi|^2).
\ee
Let us recall \rf{BSDE-wtY-t+e}. Then by the standard estimate of BSDEs and \rf{X-e-X-est}--\rf{X-e-X-t-est}, we have
\bel{Y-e-Y-est}
\begin{aligned}
&\dbE_t\[\sup_{t\les s\les t+\e}|\wt Y^\e(s)-\wt Y(s)|^2\]+\dbE_t\[\int_t^{t+\e}|\wt Z^\e(r)-\wt Z(r)|^2dr\]\\
&\q\les K\dbE_t\[|\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))|^2\]\\
&\qq+ K\dbE_t\Big[\int_t^{t+\e}\big|g\big(t,r,X^\e(r),\Psi^\e(r,X^\e(r)),\wt Y(r),\wt Z(r)\big)\\
&\qq\qq\qq\qq-g\big(t,r,X(r),\Psi(r,X(r)),\wt Y(r),\wt Z(r)\big)\big|dr\Big]^2\\
&\q\les K\e\rho(\e)^2(1+|\xi|^2)+K\e\dbE_t\int_t^{t+\e}\Big[|X^\e(r)-X(r)|^2+|\Psi^\e(r,X^\e(r))-\Psi(r,X^\e(r))|^2\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq+|\Psi(r,X^\e(r))-\Psi(r,X(r))|^2\Big]dr\\
&\q\les K\e\rho(\e)^2(1+|\xi|^2)+K\e\dbE_t\int_t^{t+\e}\big[|X^\e(r)-X(r)|^2+|\Psi^\e(r,X^\e(r))-\Psi(r,X^\e(r))|^2\big]dr\\
&\q\les K\e\rho(\e)^2(1+|\xi|^2)+ K\e^2\rho(\e)^2(1+|\xi|^2)+K\e\dbE_t\int_t^{t+\e}\rho(\e)^2\big(1+|X^\e(r)|^2\big)dr\\
&\q\les K\e\rho(\e)^2(1+|\xi|^2).
\end{aligned}
\ee
Observe that
$$
\begin{aligned}
&|\wt Y^\e(t)-\wt Y(t)|^2= \Big|\dbE_t\Big\{\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\\
&\qq\qq\qq\qq+\int_t^{t+\e}\[g\big(t,r,X^\e(r),\Psi^\e(r,X^\e(r)),\wt Y^\e(r),\wt Z^\e(r)\big)\\
&\qq\qq\qq\qq\qq\q- g\big(t,r,X(r),\Psi(r,X(r)),\wt Y(r),\wt Z(r)\big)\]dr\Big\}\Big|^2.
\end{aligned}
$$
Then by H\"{o}lder inequality and \rf{Psi-e-Psi}--\rf{X-e-X-est}--\rf{Y-e-Y-est}, we have
\bel{Y-Y-e1}
\begin{aligned}
|\wt Y^\e(t)-\wt Y(t)|^2&\les K\big|\dbE_t\big[\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\big] \big|^2\\
&\q+ K\e\dbE_t\int_t^{t+\e}\big|g\big(t,r,X^\e(r),\Psi^\e(r,X^\e(r)),\wt Y^\e(r),\wt Z^\e(r)\big)\\
&\qq\qq\qq\q- g\big(t,r,X(r),\Psi(r,X(r)),\wt Y(r),\wt Z(r)\big)\big|^2dr\\
&\les K\big|\dbE_t\big[\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\big] \big|^2\\
&\q+ K\e\dbE_t\int_t^{t+\e}\big[|X(r)-X^\e(r)|^2+|\Psi^\e(r,X^\e(r))-\Psi(r,X^\e(r))|^2\\
&\qq\qq\qq\qq+|\wt Y^\e(r)-\wt Y(r)|^2+|\wt Z^\e(r)-\wt Z(r)|^2\big]dr\\
&\les K\big|\dbE_t\big[\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\big] \big|^2+K\e^2\rho(\e)^2(1+|\xi|^2).
\end{aligned}
\ee
Applying It\^{o}'s formula to $s\mapsto \Th(t,t+\e,X(s))$ on $[t,t+\e]$ implies that
\bel{Ito-Th-X}
\begin{aligned}
&\Th(t,t+\e,X(t+\e))\\
&\q=\Th(t,t+\e,\xi)+\int_t^{t+\e}\Big\{\Th_x(t,t+\e,X(r)) b\big(r,X(r),\Psi(r,X(r))\big)\\
&\qq\q+{1\over2}\tr\big[\Th_{xx}(t,t+\e,X(r))\si\big(r,X(r),\Psi(r,X(r))\big)\si\big(r,X(r),\Psi(r,X(r))\big)^\top\big]\Big\}dr\\
&\qq+\int_t^{t+\e}\Th_x(t,t+\e,X(r))\si\big(r,X(r),\Psi(r,X(r))\big)dW(r).
\end{aligned}
\ee
Similarly,
\bel{Ito-Th-X-e}
\begin{aligned}
&\Th(t,t+\e,X^\e(t+\e))\\
&\q=\Th(t,t+\e,\xi)+\int_t^{t+\e}\Big\{\Th_x(t,t+\e,X^\e(r)) b\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\\
&\qq\q+{1\over2}\tr\big[\Th_{xx}(t,t+\e,X^\e(r))\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)^\top\big]\Big\}dr\\
&\qq+\int_t^{t+\e}\Th_x(t,t+\e,X^\e(r))\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)dW(r).
\end{aligned}
\ee
Thus, we get
\begin{align*}
&\big|\dbE_t\big[\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\big]\big|^2\\
&=\Big|\dbE_t\int_t^{t+\e}\3n\Big\{\Th_x(t,t+\e,X(r)) b\big(r,X(r),\Psi(r,X(r))\big)-\Th_x(t,t+\e,X^\e(r)) b\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\\
&\qq+{1\over2}\tr\big[\Th_{xx}(t,t+\e,X(r))\si\big(r,X(r),\Psi(r,X(r))\big)\si\big(r,X(r),\Psi(r,X(r))\big)^\top\big]\\
&\qq-{1\over2}\tr\big[\Th_{xx}(t,t+\e,X^\e(r))\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)^\top\big]\Big\}dr\Big|^2\\
&\les K\e\dbE_t\int_t^{t+\e}\3n\big|\Th_x(t,t\1n+\1n\e,X(r))\1n b\big(r,X(r),\Psi(r,X(r))\big)\1n-\1n\Th_x(t,t\1n+\1n\e,X^\e(r))\1n b\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\\
&\qq+{1\over2}\tr\big[\Th_{xx}(t,t+\e,X(r))\si\big(r,X(r),\Psi(r,X(r))\big)\si\big(r,X(r),\Psi(r,X(r))\big)^\top\big]\\
&\qq-{1\over2}\tr\big[\Th_{xx}(t,t+\e,X^\e(r))\si\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)\si
\big(r,X^\e(r),\Psi^\e(r,X^\e(r))\big)^\top\big]\big|^2dr\\
&\les K\e\dbE_t\int_t^{t+\e} \big[|\Psi^\e(r,X^\e(r))-\Psi(r,X^\e(r))|^2+|X^\e(r)-X(r)|^2\big]dr.
\end{align*}
Then by \rf{Psi-e-Psi} and \rf{X-e-X-est}, the above implies that
\bel{Th-Th-e}
\big|\dbE_t\big[\Th(t,t+\e,X^\e(t+\e))-\Th(t,t+\e,X(t+\e))\big]\big|^2
\les K\e^2\rho(\e)^2(1+|\xi|^2).
\ee
Substituting \rf{Th-Th-e} into \rf{Y-Y-e1}, we have
$$
|J^t(t,\xi;\Psi^\e(\cd,\cd))-J^t(t,\xi;\Psi(\cd,\cd))|^2=|\wt Y^\e(t)-\wt Y(t)|^2\les K\e^2\rho(\e)^2(1+|\xi|^2).
$$
Combining the above with \rf{J-t-Psit}, we get
$$
J^t(t,\xi;\Psi(\cd,\cd))\les\inf_{u(\cd)\in\sU[t,t+\e]}J^t(t,\xi;u(\cd))+o(\e).
$$
Then by \autoref{lemma-BSVIE-approximate-estimate}, we have the following local near optimality of $\Psi(\cd,\cd)$:
\bel{J-Psi-final}
J\big(t,\xi;\Psi\big|_{[t,T]}\big)\les J\big(t,\xi;u\oplus\Psi|_{[t+\e,T]}\big)
+o(\e),\qq\forall u(\cd)\in\sU[t,t+\e].
\ee
In conclusion, we can state the following result formally.
\begin{theorem}\label{equi-stra} \sl
The feedback strategy $\Psi(\cd,\cd)$ defined by \rf{V(limit)-Th-Psi} is an equilibrium strategy of Problem (N).
\end{theorem}
Since $\Psi(\cd\,,\cd)$ is an equilibrium strategy of Problem (N), the corresponding closed-loop system \rf{equilibrium-system}
is called an {\it equilibrium system}.
\ms
We now look at the well-posedness of the equilibrium HJB equation. To this end, we first observe the expression
$$\Psi(s,x)=\psi\big(s,s,x,\Th(s,s,x),\Th_x(s,s,x),\Th_{xx}(s,s,x)\big),\qq(s,x)\in[0,T]
\times\dbR^n.$$
From the definition \rf{define-psi} of $\psi(\cd)$, it is clear that the dependence of $\si(\cd)$ on the control process $u(\cd)$ leads to the appearance of $\Th_{xx}(s,s,x)$ in $\Psi(s,x)$, which turns out to bring some essential difficulties in establishing the well-posedness of the equilibrium HJB equation. At the moment, such a general situation is largely open and will be investigated in our future publications. In the subsequent analysis of this section, we will consider a special but still important case, in which $\si(\cd)$ is independent of the control process $u(\cd)$. More precisely, we assume that
\bel{si}\si(t,x,u)=\si(t,x),\q (t,x,u)\in[0,T]\times\dbR^n\times U.\ee
Under \rf{si}, (H3) becomes the following:
\ms
{\bf(H3)$'$} There is a continuous map $\psi:\D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\to U$ such that
\bel{define-psi'}\ba{ll}
\ns\ds pb(s,x,\psi(t,s,x,\th,p))+g(t,s,x,\psi(t,s,x,\th,p),\th,p\si(s,x))\\
\ns\ds=\min_{u\in U}\big[pb(s,x,u)+g(t,s,x,u,\th,p\si(s,x))\big],\q(t,s,x,\th,p)\in \D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}.\ea\ee
\ms
Under (H3)$'$, the equilibrium HJB equation \rf{Th-differential-form-rewrite} (with $\t=0$) reads
\bel{Th-differential-form-si}
\left\{\begin{aligned}
& \Th_s(t,s,x)+{1\over2}\tr[\Th_{xx}(t,s,x)a(s,x)]+\Th_{x}(t,s,x) b\big(s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x))\big)\\
&\qq +g\big(t,s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x)),\Th(s,s,x),\Th_x(t,s,x) \si(s,x)\big)=0,\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq (t,s,x)\in \D[0,T]\times\dbR^n,\\
& \Th(t,T,x)=h(t,x),\q (t,x)\in[0,T]\times\dbR^n,
\end{aligned}\right.\ee
where $a(\cd)$ is defined by \rf{def-a}. In the current case, the equilibrium strategy is given by
\bel{Psi*1}\Psi(s,x)=\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x)),\qq(s,x)\in[0,T]\times\dbR^n,\ee
for which $\Th_{xx}(\cd\,,\cd\,,\cd)$ does not appear.
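\ms

To get a feeling for \rf{Th-differential-form-si}--\rf{Psi*1}, let us record a simple one-dimensional illustration. (This example is ours and is only meant as a sanity check of the formulas, leaving aside the boundedness assumptions, which play no role in the algebra.) Take $n=1$, $U=\dbR$, $b(s,x,u)=u$, $\si(s,x)=1$, and $g(t,s,x,u,\th,\z)={1\over2}u^2+g_1(t,s,x)$ for some given smooth function $g_1(\cd)$. Then the minimization in \rf{define-psi'} is over $u\mapsto pu+{1\over2}u^2$, which gives $\psi(t,s,x,\th,p)=-p$. Hence the equilibrium strategy is $\Psi(s,x)=-\Th_x(s,s,x)$, and \rf{Th-differential-form-si} reduces to
$$\Th_s(t,s,x)+{1\over2}\Th_{xx}(t,s,x)-\Th_x(t,s,x)\Th_x(s,s,x)+{1\over2}\Th_x(s,s,x)^2+g_1(t,s,x)=0,$$
with $\Th(t,T,x)=h(t,x)$. The non-locality in time enters only through the diagonal value $\Th_x(s,s,x)$.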
\ms
For the well-posedness of \rf{Th-differential-form-si}, we make the following assumption.
\begin{taggedassumption}{(H5)}\label{ass:A1PDE} \rm
The maps
$$\left\{\2n\ba{ll}
\ds(s,x,\th,p)\mapsto b(s,x,\psi(s,s,x,\th,p)),\qq(s,x)\mapsto a(s,x),\\
\ns\ds(t,s,x,\th,p,\hat p)\mapsto g(t,s,x,\psi(s,s,x,\th,p),\th,\hat p\si(s,x)),
\qq(s,x)\mapsto h(s,x)\ea\right.$$
are bounded and have all the required differentiability with bounded derivatives.
Moreover, there exist two constants $\l_0,\l_1>0$ such that
$$\l_0 I\les a(t,x)\les \l_1 I,\q \forall~(t,x)\in[0,T]\times\dbR^n.$$
\end{taggedassumption}
We have the following result whose proof can be found in \cite[Theorem 6.1]{Wei-Yong-Yu 2017}.
\begin{theorem}\label{theorem-PDE} \sl Let {\rm (H5)} hold. Then PDE \rf{Th-differential-form-si} admits a unique classical solution.
\end{theorem}
In the proof of \autoref{equi-stra}, we introduce the assumption (H4) to get the local near optimality of the equilibrium strategy $\Psi(\cd,\cd)$.
When $\si(\cd)$ is independent of the control $u(\cd)$ (see \rf{si}),
the arguments of \autoref{equi-stra} still hold true with Assumption (H4) replaced by the following assumption:
\ms
{\bf(H4)$'$} There exists a nondecreasing continuous function $\rho:[0,\i)\to[0,\i)$ with $\rho(0)=0$ such that
$$|\Th^\e(t,s,x)-\Th(t,s,x)|+|\Th_x^\e(t,s,x)-\Th_x(t,s,x)|\les \rho(\e)(1+|x|),\qq\forall(s,x)\in[t,t+\e]\times\dbR^n.$$
We shall show that the above assumption is a consequence of (H5).
In fact, under (H5), it follows from \autoref{theorem-PDE} that $\Th(\cd,t+\e,\cd)$ is well-defined and belongs to $C^{1,2}([t,t+\e]\times\dbR^n;\dbR)$.
Thus one can regard \rf{PDE-Th-t-t+e} as a new equilibrium HJB equation satisfying (H5) with $h(\cd,\cd)$ and $[0,T]$ replaced by $\Th(\cd,t+\e,\cd)$ and $[t,t+\e]$, respectively. Moreover, we see that PDE \rf{PDE-Th-e} is the approximate equation of \rf{PDE-Th-t-t+e} with the partition $\Pi:t=t_0<t_1=t+\e$. Then by the last inequality in the proof of \cite[Theorem 6.2]{Wei-Yong-Yu 2017}, we have
$$
|\Th^\e(t,s,x)-\Th(t,s,x)|+|\Th_x^\e(t,s,x)-\Th_x(t,s,x)|\les K\|\Pi\|=K\e,\qq\forall(s,x)\in[t,t+\e]\times\dbR^n,
$$
for some constant $K>0$, which implies that the assumption (H4)$'$ holds.
Therefore, under (H1), (H2), (H3)$'$, and (H5), Problem (N) admits an equilibrium strategy over $[0,T]$.
\section{BSVIEs with Diagonal Values of $Z(\cd\,,\cd)$.}\label{BSVIEs}
In this section, we look at a new type of BSVIE resulting from the equilibrium solution to Problem (N). Let (H5) hold with $d=n$ and \rf{si} hold, namely, $\si$ is independent of $u$. Thus, $\si(s,x)$ is invertible. Let $\Th(\cd\,,\cd\,,\cd)$ be the classical solution to the equilibrium HJB equation \rf{Th-differential-form-si}. Then $\Psi(\cd\,,\cd)$ defined by \rf{Psi*1} is an equilibrium strategy. Substituting this $\Psi(\cd\,,\cd)$ into \rf{equilibrium-system} leads to the following closed-loop system:
\bel{FSDE-BSVIE}\left\{\2n\ba{ll}
\ds\bar X(s)=\xi+\int_\t^sb\big(r,\bar X(r),\psi(r,r,\bar X(r),
\Th(r,r,\bar X(r)),\Th_x(r,r,\bar X(r)))\big)dr\\
\ns\ds\qq\qq\qq\qq+\int_\t^s\si(r,\bar X(r))dW(r),\\
\ns\ds\bar Y(s)\1n=\1n h(s,\bar X(T))\1n+\1n\int_s^T\3n g\big(s,r,\bar X(r),
\psi(r,r,\bar X(r),
\Th(r,r,\bar X(r)),\Th_x(r,r,\bar X(r))),\bar Y(r),\bar Z(s,r)\big)dr\\
\ns\ds\qq\qq\qq\qq-\int_s^T \bar Z(s,r)dW(r),\qq s\in[\t,T].\ea\right.\ee
On the other hand, we have
\bel{bar Y*}\bar Y(r)=\Th(r,r,\bar X(r))\equiv V(r,\bar X(r)),\qq r\in[\t,T],\ee
and
\bel{bar Z*}\bar Z(s,r)=\Th_x(s,r,\bar X(r))\si(r,\bar X(r)),\qq(s,r)\in\D[\t,T].\ee
Thus,
\bel{bar Z**}\Th_x(s,r,\bar X(r))=\bar Z(s,r)\si(r,\bar X(r))^{-1},\qq(s,r)\in\D[\t,T].\ee
Consequently, the above closed-loop system can be written as follows.
\bel{FSDE-BSVIE*}\left\{\2n\ba{ll}
\ds\bar X(s)=\xi+\int_\t^sb\big(r,\bar X(r),\psi(r,r,\bar X(r),
\bar Y(r),\bar Z(r,r)\si(r,\bar X(r))^{-1})\big)dr+\int_\t^s\si(r,\bar X(r))dW(r),\\
\ns\ds\bar Y(s)\1n=\1n h(s,\bar X(T))\1n+\1n\int_s^T\3n g\big(s,r,\bar X(r),
\psi(r,r,\bar X(r),
\bar Y(r),\bar Z(r,r)\si(r,\bar X(r))^{-1}),\bar Y(r),\bar Z(s,r)\big)dr\\
\ns\ds\qq\qq\qq\qq-\int_s^T \bar Z(s,r)dW(r),\qq s\in[\t,T].\ea\right.\ee
Let us denote
\bel{bar b,bar g}\ba{ll}
\ns\ds\bar b(r,x,y,\z)=b\big(r,x,\psi(r,r,x,y,\z\si(r,x)^{-1})\big),\qq(r,x,y,\z)\in[\t,T]\times\dbR^n
\times\dbR\times\dbR^{1\times n},\\
\ns\ds\bar g\big(s,r,x,y,z,\z)=g\big(s,r,x,\psi(r,r,x,y,\z\si(r,x)^{-1}),y,z\big),\\
\ns\ds\qq\qq\qq\qq\qq\qq(s,r,x,y,z,\z)\in\D[\t,T]
\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbR^{1\times n}.\ea\ee
Then the above closed-loop system can further be written as follows.
\bel{FSDE-BSVIE**}\left\{\2n\ba{ll}
\ds\bar X(s)=\xi+\int_\t^s\bar b\big(r,\bar X(r),\bar Y(r),\bar Z(r,r)\big)dr+\int_\t^s\si(r,\bar X(r))dW(r),\\
\ns\ds\bar Y(s)\1n=\1n h(s,\bar X(T))\1n+\1n\int_s^T\3n\bar g\big(s,r,\bar X(r),
\bar Y(r),\bar Z(s,r),\bar Z(r,r)\big)dr-\int_s^T \bar Z(s,r)dW(r),\ea\right.\q s\in[\t,T].\ee
Unlike the BSVIEs studied in the literature, the above BSVIE contains the {\it diagonal values} $\bar Z(r,r)$ of $\bar Z(\cd\,,\cd)$. To the best of our knowledge, this is the first time that such a BSVIE appears. Our above results show that the above coupled FSDE and BSVIE admits an adapted solution $(\bar X(\cd),\bar Y(\cd),\bar Z(\cd\,,\cd))$. Moreover, the representation \rf{bar Y*}--\rf{bar Z*} holds, with $\Th(\cd\,,\cd\,,\cd)$ being the classical solution to the equilibrium HJB equation \rf{Th-differential-form-si}. We point out that \rf{bar Y*} and \rf{bar Z**}, which represent the solution $\Th(\cd\,,\cd\,,\cd)$ to the equilibrium HJB equation \rf{Th-differential-form-si}, constitute a kind of {\it Feynman--Kac formula}.
\ms
The above naturally motivates us to investigate the following more general coupled FSDE and BSVIE:
\bel{FSDE-BSVIE-general}\left\{\2n\ba{ll}
\ds X(s)=\xi+\int_\t^s b\big(r,X(r),Y(r),Z(r,r)\big)dr+\int_\t^s\si\big(r,X(r),Y(r)\big)dW(r),\\
\ns\ds Y(s)\1n=\1n h(s,X(T))\1n+\1n\int_s^T\3n g\big(s,r,X(r),
Y(r),Z(s,r),Z(r,r)\big)dr-\int_s^TZ(s,r)dW(r),\q s\in[\t,T].\ea\right.\ee
The main feature is that the generator $g$ of the above BSVIE contains the diagonal value $Z(r,r)$. In the rest of this section, we will sketch some relevant results for the above coupled FSDE and BSVIE. A more general and detailed investigation of such BSVIEs will be carried out elsewhere.
\ms
Inspired by the results of previous sections, as well as the ideas from \cite{Ma-Protter-Yong 1994, Wang-Yong 2019}, we let $(t,s,x)\mapsto\Th(t,s,x)$ belong to $C^{0,1,2}(\D[0,T]\times\dbR^n)$. Applying It\^o's formula to the process $s\mapsto\Th(t,s,X(s))$, one obtains
\bel{Th-Th}\ba{ll}
\ns\ds\Th(t,T,X(T))-\Th(t,t,X(t))=\int_t^T\[\Th_s(t,s,X(s))+\Th_x(t,s,X(s))b(s,X(s),Y(s),Z(s,s))\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq+{1\over2}\tr\(\Th_{xx}(t,s,X(s))a(s,X(s),Y(s))\)\]ds\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq+\int_t^T\Th_x(t,s,X(s))\si(s,X(s),Y(s))dW(s),\ea\ee
where
$$a(s,x,y)=\si(s,x,y)\si(s,x,y)^\top,\qq(s,x,y)\in[0,T]\times\dbR^n\times\dbR.$$
Comparing \rf{Th-Th} with the BSVIE in \rf{FSDE-BSVIE-general}, we see that the following should be the right choice:
\bel{Th=h}\Th(t,T,X(T))=h(t,X(T)),\ee
\bel{Y,Z}Y(t)=\Th(t,t,X(t)),\qq Z(t,s)=\Th_x(t,s,X(s))\si(s,X(s),Y(s)),\ee
and
$$\ba{ll}
\ns\ds\Th_s(t,s,X(s))+\Th_x(t,s,X(s))b\big(s,X(s),Y(s),Z(s,s)\big)\\
\ns\ds\q+{1\over2}\tr\(\Th_{xx}(t,s,X(s))a(s,X(s),Y(s))\)+g\big(t,s,X(s),Y(s),Z(t,s),
Z(s,s)\big)=0.\ea$$
Thus, formally, we should have
$$\ba{ll}
\ns\ds\Th_s(t,s,X(s))+\Th_x(t,s,X(s))b\big(s,X(s),\Th(s,s,X(s)),\Th_x(s,s,X(s))\si(s,X(s),
\Th(s,s,X(s)))\big)\\
\ns\ds\qq+{1\over2}\tr\(\Th_{xx}(t,s,X(s))a\big(s,X(s),\Th(s,s,X(s))\big)\)\\
\ns\ds\qq+g\big(t,s,X(s),\Th(s,s,X(s)),\Th_x(t,s,X(s))\si(s,X(s),\Th(s,s,X(s))),\\
\ns\ds\qq\qq\qq\qq\qq\qq\Th_x(s,s,X(s))\si(s,X(s),\Th(s,s,X(s)))\big)=0.\ea$$
Hence, we have the following result.
\bt{theorem-coupled-SDE-BSVIE} \sl Let the following system admit a classical solution $\Th(\cd\,,\cd\,,\cd)$:
\bel{PDE}\left\{\2n\ba{ll}
\ds\Th_s(t,s,x)+{1\over2}\tr\[\Th_{xx}(t,s,x)a(s,x,\Th(s,s,x))\]\\
\ns\ds\q+\Th_x(t,s,x)b\big(s,x,\Th(s,s,x),
\Th_x(s,s,x)\si(s,x,\Th(s,s,x))\big)\\
\ns\ds\q+g\big(t,s,x,\Th(s,s,x),\Th_x(t,s,x)\si(s,x,\Th(s,s,x)),\Th_x(s,s,x)
\si(s,x,\Th(s,s,x))\big)=0,\\
\ns\ds\Th(t,T,x)=h(t,x),\qq(t,x)\in[0,T]\times\dbR^n.\ea\right.\ee
Let $X(\cd)\equiv X(\cd\,;\t,\xi)$ be the solution to the following FSDE:
$$\ba{ll}
\ns\ds X(s)=\xi+\int_\t^sb\big(r,X(r),\Th(r,r,X(r)),\Th_x(r,r,X(r))\si(r,X(r),
\Th(r,r,X(r)))\big)dr\\
\ns\ds\qq\qq+\int_\t^s\si\big(r,X(r),\Th(r,r,X(r))\big)dW(r),\qq s\in[\t,T].\ea$$
Then $(Y(\cd),Z(\cd\,,\cd))$ defined by \rf{Y,Z} is an adapted solution to the
BSVIE in \rf{FSDE-BSVIE-general}.
\et
When $\si(s,x,\th)$ is independent of $\th$, \autoref{theorem-PDE} provides a sufficient condition for the well-posedness of \rf{PDE}.
We now look at the uniqueness of adapted solutions to \rf{FSDE-BSVIE-general}. To this end, let us first introduce the following assumption.
\begin{taggedassumption}{(H6)}\label{ass:A2PDE} \rm
Let $d=n$.
There exist maps $\mu:\D[0,T]\to\dbR$, $\n:\D[0,T]\to\dbR$, $g_0:[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\to\dbR$, $h_0:[0,T]\times\dbR^n\to\dbR$, $\a:[0,T]\to\dbR^n$
such that
\bel{A21}
\begin{aligned}
&h(t,x)=\mu(t,T)h_0(T,x),\q \forall~(t,x)\in[0,T]\times\dbR^n,\\
&\bar g(t,s,x,y,z,\z)=\n(t,s)g_0(s,x,y,\z)+z\a(s),\\
&\qq\qq\qq \forall~(t,s,x,y,z,\z)\in\D[0,T]\times\dbR^n\times\dbR\times\dbR^{1\times n}\times\dbR^{1\times n}.
\end{aligned}
\ee
Suppose that the above maps are all bounded and have all the required differentiability with bounded derivatives.
There exist continuous maps $\cM,\cN,\cK:[0,T]\to\dbR$ and $M:[0,T]^2\to\dbR$ such that
\bel{A22}
\begin{aligned}
& \cM(t)>0, \q \cM(t)\mu(t,T)-\cM(s)\mu(s,T)=M(t,s)\cN(T),\q t,s\in [0,T],\\
& \cM(t)\n(t,r)-\cM(s)\n(s,r)=M(t,s)\cK(r),\q t,s,r\in[0,T].
\end{aligned}
\ee
\end{taggedassumption}
Let us list some possible functions $\mu(\cd,\cd)$ and $\n(\cd,\cd)$ satisfying (H6) as follows (a symbolic check of the identities \rf{A22} for these examples is sketched right after the list):
\begin{enumerate}[(i)]
\item {\it Heterogeneous discounting}: $\mu(t,T)=e^{-\l_1(T-t)}$, $\nu(t,r)=e^{-\l_2(r-t)}$ with $\l_1,\l_2>0$, $\l_1\ne\l_2$.
We can let $\cM(t)=e^{-\l_1t}$, $M(t,s)=e^{(\l_2-\l_1)t}-e^{(\l_2-\l_1)s}$, $\cN(T)=0$, $\cK(r)=e^{-\l_2r}$.
\item {\it Convex combination of two exponential discounting }: $\mu(t,T)=\a e^{-\l_1(T-t)}+(1-\a)e^{-\l_2(T-t)} $,
$\nu(t,r)=\a e^{-\l_1(r-t)}+(1-\a)e^{-\l_2(r-t)} $, with $\a\in(0,1)$, $\l_1,\l_2>0$, $\l_1\ne\l_2$. We can let $\cM(t)=e^{-\l_1t}$, $M(t,s)=e^{(\l_2-\l_1)t}-e^{(\l_2-\l_1)s}$, $\cN(T)=(1-\a)e^{-\l_2 T}$, $\cK(r)=(1-\a)e^{-\l_2r}$.
\item {\it Quasi-exponential discounting}: $\mu(t,T)=\big(1+\a(T-t)\big)e^{-\l(T-t)}$,
$\nu(t,r)=\big(1+\a(r-t)\big)e^{-\l(r-t)}$, with $\a,\l>0$. We can let $\cM(t)=e^{-\l t}$, $M(t,s)=s-t$, $\cN(T)=\a e^{-\l T}$, $\cK(r)=\a e^{-\l r}$.
\end{enumerate}
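\ms

As a quick consistency check, the identities \rf{A22} for the examples above can be verified symbolically. The following sketch is ours (it uses the open-source Python package {\tt sympy}; the function names are purely illustrative) and checks cases (i) and (iii); case (ii) is entirely analogous.

\begin{verbatim}
import sympy as sp

t, s, r, T = sp.symbols('t s r T', real=True)
l1, l2, lam, a = sp.symbols('l1 l2 lam a', positive=True)

def check_H6(mu, nu, cM, M, cN_T, cK):
    # cM(t)mu(t,T) - cM(s)mu(s,T) = M(t,s)cN(T)
    e1 = cM(t)*mu(t, T) - cM(s)*mu(s, T) - M(t, s)*cN_T
    # cM(t)nu(t,r) - cM(s)nu(s,r) = M(t,s)cK(r)
    e2 = cM(t)*nu(t, r) - cM(s)*nu(s, r) - M(t, s)*cK(r)
    return sp.simplify(e1) == 0 and sp.simplify(e2) == 0

# (i) heterogeneous discounting
assert check_H6(lambda u, v: sp.exp(-l1*(v - u)),
                lambda u, v: sp.exp(-l2*(v - u)),
                lambda u: sp.exp(-l1*u),
                lambda u, v: sp.exp((l2 - l1)*u) - sp.exp((l2 - l1)*v),
                0,
                lambda u: sp.exp(-l2*u))

# (iii) quasi-exponential discounting
assert check_H6(lambda u, v: (1 + a*(v - u))*sp.exp(-lam*(v - u)),
                lambda u, v: (1 + a*(v - u))*sp.exp(-lam*(v - u)),
                lambda u: sp.exp(-lam*u),
                lambda u, v: v - u,
                a*sp.exp(-lam*T),
                lambda u: a*sp.exp(-lam*u))
\end{verbatim}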
Under (H6), the equation \rf{FSDE-BSVIE**} becomes
\bel{coupled-SDE-BSVIE-bar-mu-v}\left\{\begin{aligned}
\bar X(t) &=\xi+\int_\t^t\bar b(r,\bar X(r),\bar Y(r),\bar Z(r,r))dr+\int_\t^t\si(r,\bar X(r))dW(r),\q t\in[\t,T],\\
\bar Y(t) &=\mu(t,T)h_0(T,\bar X(T)) +\int_t^T \big[\n(t,r) g_0(r,\bar X(r),\bar Y(r),\bar Z(r,r))+\bar Z(t,r)\a(r)\big]dr\\
&\qq\qq\qq\qq\q -\int_t^T\bar Z(t,r)dW(r),\q t\in [\t,T].
\end{aligned}\right.\ee
For the above coupled SDE and BSVIE, we have the following well-posedness result.
\begin{theorem}\label{theorem-uniqueness} \sl
Let {\rm (H5)} and {\rm (H6)} hold. Then the coupled SDE and BSVIE \rf{coupled-SDE-BSVIE-bar-mu-v} admits a unique adapted solution.
\end{theorem}
\begin{proof}
The existence of the adapted solution to \rf{coupled-SDE-BSVIE-bar-mu-v} is a combination of \autoref{theorem-PDE} and \autoref{theorem-coupled-SDE-BSVIE}.
We only need to prove the uniqueness here.
Let $(\bar X_i(\cd),\bar Y_i(\cd),\bar Z_i(\cd,\cd))$; $i=1,2$ be two adapted solutions to \rf{coupled-SDE-BSVIE-bar-mu-v}. By the well-posedness of BSVIEs, there exist unique pairs $(y_i(\cd\,,\cd),z_i(\cd\,,\cd))$; $i=1,2$, such that
\bel{coupled-SDE-BSVIE-bar-mu-v-proof}
\begin{aligned}
y_i(t,s) &=\mu(t,T)h_0(T,\bar X_i(T))+\int_s^T\big[\n(t,r)g_0(r,\bar X_i(r),\bar Y_i(r),\bar Z_i(r,r))+z_i(t,r)\a(r) \big]dr\\
&\qq\qq\qq\qq -\int_s^T z_i(t,r)dW(r),\q (t,s)\in \D[\t,T],~i=1,2,
\end{aligned}\ee
and
\bel{l-z-Y-Z}
y_i(t,t)=\bar Y_i(t),\q z_i(t,s)=\bar Z_i(t,s),\q(t,s)\in\D[\t,T],~i=1,2.
\ee
For any $t\in[\t,T)$, multiplying both sides of the BSVIE \rf{coupled-SDE-BSVIE-bar-mu-v-proof} by $\cM(t)$, we have
\begin{align*}
\cM(t)y_i(t,s) &=\cM(t)\mu(t,T)h_0(T,\bar X_i(T)) +\int_s^T \big[\cM(t)\n(t,r) g_0(r, \bar X_i(r),y_i(r,r), z_i(r,r))\\
&\qq\qq\qq+\cM(t) z_i(t,r)\a(r)\big]dr-\int_s^T\cM(t) z_i(t,r)dW(r),\q (t,s)\in \D[\t,T].
\end{align*}
Let
\bel{ti-Y-Z}
\wt X_i(t)=\bar X_i(t),\q \wt y_i(t,s)=\cM(t)y_i(t,s),\q \wt z_i(t,s)=\cM(t)z_i(t,s),\q (t,s)\in\D[\t,T],
\ee
then
\begin{align*}
\wt y_i(t,s) &=\cM(t)\mu(t,T)h_0(T,\wt X_i(T)) +\int_s^T \big[\cM(t)\n(t,r) g_0\big(r, \wt X_i(r),\cM^{-1}(r)\wt y_i(r,r), \cM^{-1}(r)\wt z_i(r,r)\big)\\
&\qq\qq\qq\qq +\wt z_i(t,r)\a(r)\big]dr -\int_s^T\wt z_i(t,r)dW(r),\q (t,s)\in \D[\t,T].
\end{align*}
For any $t'\in[\t,T)$, combining the above with \rf{A22}, we see that $\big(\wt y_i(t,s)-\wt y_i(t',s),\wt z_i(t,s)-\wt z_i(t',s)\big)$; $t\vee t'\les s\les T$, satisfies
\begin{align*}
\wt y_i(t,s)-\wt y_i(t',s) &=\big[\cM(t)\mu(t,T)-\cM(t')\mu(t',T)\big]h_0(T,\wt X_i(T)) \\
&\q+\int_s^T \Big[\big[\cM(t)\n(t,r)-\cM(t')\n(t',r)\big] g_0\big(r,\wt X_i(r),\cM^{-1}(r)\wt y_i(r,r), \cM^{-1}(r)\wt z_i(r,r)\big)\\
&\qq\qq +\big[\wt z_i(t,r)-\wt z_i(t',r)\big]\a(r)\Big]dr-\int_s^T\big[\wt z_i(t,r)-\wt z_i(t',r)\big]dW(r)\\
&=M(t,t')\cN(T)h_0(T,\wt X_i(T)) \\
&\q+\int_s^T \Big[M(t,t')\cK(r)g_0\big(r,\wt X_i(r),\cM^{-1}(r)\wt y_i(r,r), \cM^{-1}(r)\wt z_i(r,r)\big)\\
&\qq\qq +\big[\wt z_i(t,r)-\wt z_i(t',r)\big]\a(r)\Big]dr -\int_s^T\big[\wt z_i(t,r)-\wt z_i(t',r)\big]dW(r).
\end{align*}
Let $(\h Y_i(\cd),\h Z_i(\cd));i=1,2$ be the unique solution to the following BSDE:
\bel{Y-Z-hat}
\begin{aligned}
\h Y_i(s)&=\cN(T)h_0(T,\wt X_i(T)) \\
&\q+\int_s^T \Big[\cK(r)g_0\big(r, \wt X_i(r),\cM^{-1}(r)\wt y_i(r,r), \cM^{-1}(r)\wt z_i(r,r)\big)+\h Z_i(r)\a(r)\Big]dr \\
&\q-\int_s^T\h Z_i(r)dW(r),\q s\in [\t,T],
\end{aligned}\ee
respectively. Multiplying both sides of the BSDE \rf{Y-Z-hat} by $M(t,t')$, we see that $\big(M(t,t')\h Y_i(\cd),M(t,t')\h Z_i(\cd)\big)$ satisfies
\begin{align*}
M(t,t')\h Y_i(s)&=M(t,t')\cN(T)h_0(T,\wt X_i(T)) \\
&\q+\int_s^T \Big[M(t,t')\cK(r)g_0\big(r, \wt X_i(r),\cM^{-1}(r)\wt y_i(r,r), \cM^{-1}(r)\wt z_i(r,r)\big)+M(t,t')\h Z_i(r)\a(r)\Big]dr \\
&\q-\int_s^TM(t,t')\h Z_i(r)dW(r),\q s\in [t\vee t',T].
\end{align*}
For the fixed $t,t'$, by the uniqueness of the adapted solution to the above BSDE, we have
$$\wt y_i(t,s)-\wt y_i(t',s)=M(t,t')\h Y_i(s),\q\wt z_i(t,s)-\wt z_i(t',s)=M(t,t')\h Z_i(s),\q s\in [t\vee t',T].$$
In particular, the following holds by taking $t=s$ and $t'=\t$,
\bel{relationship}
\wt y_i(s,s)-\wt y_i(\t,s)=M(s,\t)\h Y_i(s),\q \wt z_i(s,s)-\wt z_i(\t,s)=M(s,\t)\h Z_i(s).\ee
Thus, $(\wt X_i(\cd),\wt y_i(\t,\cd),\h Y_i(\cd),\wt z_i(\t,\cd),\h Z_i(\cd))$ satisfies the following coupled FBSDEs:
$$\left\{\begin{aligned}
\wt X_i(s) &=\xi+\int_\t^s\bar b\big (r,\wt X_i(r),\cM^{-1}(r)[\wt y_i(\t,r)+M(r,\t)\h Y_i(r)],\cM^{-1}(r)[\wt z_i(\t,r)+M(r,\t)\h Z_i(r)]\big)dr\\
&\q+\int_\t^s\si(r,\wt X_i(r))dW(r),\q s\in[\t,T],\\
\wt y_i(\t,s) &=\cM(\t)\mu(\t,T)h_0(T,\wt X_i(T))\\
&\q+\int_s^T\Big [\cM(\t)\n(\t,r) g_0\big(r, \wt X_i(r),\cM^{-1}(r)\big[\wt y_i(\t,r)+M(r,\t)\h Y_i(r)\big] , \cM^{-1}(r)\big[\wt z_i(\t,r)+M(r,\t)\h Z_i(r)\big]\big)\\
&\qq\qq+\wt z_i(\t,r)\a(r)\Big]dr -\int_s^T\wt z_i(\t,r)dW(r),\q s\in [\t,T],\\
\hat Y_i(s)&=\cN(T)h_0(T,\wt X_i(T)) \\
&\q+\int_s^T \Big[\cK(r)g_0\big(r,\wt X_i(r),\cM^{-1}(r)\big[\wt y_i(\t,r)+M(r,\t)\h Y_i(r)\big], \cM^{-1}(r)\big[\wt z_i(\t,r)+M(r,\t)\h Z_i(r)\big]\big) \\
&\qq\qq +\h Z_i(r)\a(r)\Big]dr-\int_s^T\h Z_i(r)dW(r),\q s\in [\t,T].
\end{aligned}\right.
$$
By \cite[Theorem 4.1]{Ma-Protter-Yong 1994}, the above coupled FBSDEs admit a unique adapted solution.
It follows that
$$\big(\wt X_1(\cd),\wt y_1(\t,\cd),\h Y_1(\cd),\wt z_1(\t,\cd),\h Z_1(\cd)\big)=\big(\wt X_2(\cd),\wt y_2(\t,\cd),\h Y_2(\cd),\wt z_2(\t,\cd),\h Z_2(\cd)\big).$$
Combining this with \rf{relationship}, we have
$$
\wt y_1(s,s)=\wt y_2(s,s),\q \wt z_1(s,s)=\wt z_2(s,s), \q s\in[\t,T].
$$
By the definition \rf{ti-Y-Z} of $(\wt X_i(\cd),\wt y_i(\cd,\cd),\wt z_i(\cd,\cd))$ and the relationship \rf{l-z-Y-Z}, we have
\bel{X-Y(s,s)-Z(s,s)}
\begin{aligned}
&\bar X_1(s)=\bar X_2(s),\q \bar Y_1(s)=y_1(s,s)=y_2(s,s)=\bar Y_2(s),\\
&\bar Z_1(s,s)=z_1(s,s)=z_2(s,s)=\bar Z_2(s,s), \q s\in[\t,T].
\end{aligned}
\ee
Then $(\bar Y_i(\cd),\bar Z_i(\cd,\cd))$ satisfies the following BSVIE:
\bel{BSVIE-bar-Z(s,s)}
\begin{aligned}
\bar Y_i(t) &=\mu(t,T)h_0(T,\bar X(T)) +\int_t^T \big[\n(t,r) g_0(r, \bar X(r),\bar Y(r),\bar Z(r,r))+\bar Z_i(t,r)\a(r)\big]dr\\
&\qq\qq\qq\qq -\int_t^T \bar Z_i(t,r)dW(r),\q t\in [\t,T],
\end{aligned}\ee
with $\bar X(s)\deq \bar X_1(s)\equiv \bar X_2(s), \bar Y(s)\deq \bar Y_1(s)\equiv \bar Y_2(s), \bar Z(s,s)\deq\bar Z_1(s,s)\equiv\bar Z_2(s,s); s\in[\t,T]$.
Note that \rf{BSVIE-bar-Z(s,s)} is a classical BSVIE (without $\bar Z_i(r,r)$).
By \autoref{lmm:well-posedness-SDE}, we have
$$
(\bar Y_1(\cd),\bar Z_1(\cd,\cd))= (\bar Y_2(\cd),\bar Z_2(\cd,\cd)).
$$
Combining the above with \rf{X-Y(s,s)-Z(s,s)}, we obtain the uniqueness of the adapted solution to the coupled FSDE and BSVIE \rf{coupled-SDE-BSVIE-bar-mu-v}.
\end{proof}
\begin{remark}\rm
Under (H5)--(H6), \autoref{theorem-uniqueness} establishes the well-posedness of the coupled SDE and BSVIE \rf{coupled-SDE-BSVIE-bar-mu-v},
which is relevant to several important recursive optimal control problems with nonexponential discounting.
The more general case \rf{FSDE-BSVIE-general} is still open. We hope to explore that in our future publications.
\end{remark}
\section{Concluding Remarks}\label{remarks}
In this section, we make some remarks to conclude this paper.
\ms
First of all, for a recursive cost functional with nonexponential discounting, should one use parameterized BSDEs as in \cite{Yong 2012,Wei-Yong-Yu 2017} or a BSVIE as in the current paper? For a stochastic optimal control problem, a recursive cost functional with exponential discounting can be described by the adapted solution to a BSDE. When the discounting is nonexponential, and/or the running cost rate and the terminal cost are initial-time dependent, the recursive cost functional is better described by a BSVIE than by a parameterized BSDE (as in \cite{Yong 2012,Wei-Yong-Yu 2017}). To be convincing, let us present a brief argument.
\ms
For any initial pair $(t,\xi)\in\sD$ and control $u(\cd)\in\sU[t,T]$, let $X(\cd)$ be the corresponding state process. Motivated by the nonexponential discounting,
one may consider the following cost functional
\bel{cost-none1}
\h J(t,\xi;u(\cd))=\dbE_t\[h(t,X(T))+\int_t^T g(t,s,X(s),u(s))ds\]
\ee
with suitable functions $h(\cd)$ and $g(\cd)$.
If we let
\bel{cost-none-y}
y(t,s)=\dbE_s\[h(t,X(T))+\int_s^Tg(t,r,X(r),u(r))dr\],\qq s\in[t,T],\ee
then for some $ z(\cd,\cd)$, the pair $( y(\cd,\cd), z(\cd,\cd))$ is the unique adapted solution to the following BSDE (with parameter $t$):
\bel{BSDE-noyz}
y(t,s)=h(t,X(T))+\int_s^T g(t,r,X(r),u(r))dr-\int_s^T z(t,r)dW(r),\q s\in[t,T],\ee
and
$$\h J(t,\xi;u(\cd))=y(t,t).$$
Following the above idea and inspired by \cite{Duffie-Epstein 1992,El Karoui-Peng-Quenez 1997,Wei-Yong-Yu 2017}, to construct the recursive cost functional for the state-control pair $(X(\cd),u(\cd))$, one might intuitively consider the following BSDE (with parameter $t$)
\bel{BSDE-yz}
y(t,s)=h(t,X(T))+\int_s^T g(t,r,X(r),u(r),y(t,r),z(t,r))dr-\int_s^T z(t,r)dW(r),\q s\in[t,T],\ee
and define the cost functional by
\bel{h J^R}\h J(t,\xi;u(\cd))=y(t,t).\ee
But, taking $s=t$ in \rf{BSDE-yz}, we obtain
\bel{BSDE-yz1}
y(t,t)=h(t,X(T))+\int_t^T g(t,r,X(r),u(r),y(t,r),z(t,r))dr-\int_t^T z(t,r)dW(r),\q t\in[0,T],\ee
which is not an equation for the process $t\mapsto y(t,t)$, since the off-diagonal values $y(t,r)$; $t<r\les T$, appear on the right-hand side of the above. Further, since $y(s,s)\neq y(t,s)$ in general, the dependence of the current cost functional value $y(t,t)$ on the future cost functional values $\h J(s,X(s);u(\cd))=y(s,s)$; $t< s\les T$, does not seem to be valid. From this viewpoint, $y(t,t)$ defined above is not a good candidate for the recursive cost functional with nonexponential discounting.
\ms
However, the above parameterized BSDE suggests that we consider the following BSVIE:
\bel{BSVIE-yz}
Y(t)=h(t,X(T))+\int_t^T g(t,r,X(r),u(r),Y(r),Z(t,r))dr-\int_t^T Z(t,r)dW(r),\ee
and define the cost functional by
\bel{def-J} J(t,\xi;u(\cd))=Y(t),\ee
as we have done in this paper.
We know that under proper conditions, the above BSVIE admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))$, and the process $Y(\cd)$ is the most natural candidate for the recursive cost functional with nonexponential discounting in the following sense: the current cost functional value $J(t,\xi;u(\cd))=Y(t)$ really depends on the cost functional values $J(r,X(r);u(\cd))=Y(r)$ for $r\in[t,T]$, through a BSVIE. Furthermore, such a recursive cost functional is itself time-consistent, in the following sense: the future value of the cost functional predicted/calculated today will match the value of the cost functional when that specific future time moment arrives. Some more detailed derivation and discussion can be found in \cite{Wang-Sun-Yong 2019}. In a word, when we consider stochastic optimal control problems with recursive cost functionals having generalized (nonexponential) discounting, the BSVIE description is a more proper choice than the parameterized BSDE.
\ms
Now, let us make a direct comparison between the equilibrium HJB equations resulting from the approach of \cite{Wei-Yong-Yu 2017} and the one of this paper. For convenience, we only consider the case that $\si$ is independent of the control $u$. As in \cite{Wei-Yong-Yu 2017}, if the recursive cost functional is taken to be \rf{h J^R}, then the equilibrium HJB equation reads
\bel{Th-differential-form-si*}
\left\{\begin{aligned}
& \Th_s(t,s,x)+{1\over2}\tr[\Th_{xx}(t,s,x)a(s,x)]+\Th_{x}(t,s,x) b\big(s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x))\big)\\
&\qq +g\big(t,s,x,\psi(s,s,x,\Th(s,s,x),\Th_x(s,s,x)),{\color{red}\Th(t,s,x)},\Th_x(t,s,x) \si(s,x)\big)=0,\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq (t,s,x)\in \D[0,T]\times\dbR^n,\\
& \Th(t,T,x)=h(t,x),\q (t,x)\in[0,T]\times\dbR^n.\end{aligned}\right.\ee
Comparing the above with \rf{Th-differential-form-si}, we see that $\Th(t,s,x)$ in the above is replaced by $\Th(s,s,x)$ in \rf{Th-differential-form-si}. This is the main consequence of using a BSVIE instead of a parameterized BSDE. As we explained above, such a replacement makes the problem more natural. On the other hand, from the mathematical viewpoint, since $\Th(s,s,x)$ already appears in $\psi(\cd)$ regardless of this replacement, replacing $\Th(t,s,x)$ by $\Th(s,s,x)$ actually reduces the complexity of the equation.
\ms
Next, in the current paper, by using ideas inspired by multi-person differential games (\cite{Yong 2012, Wei-Yong-Yu 2017}) and the representation of adapted solutions to BSVIEs (\cite{Yong 2017b,Wang-Yong 2019}), we obtain the equilibrium strategy for Problem (N), which is time-consistent, locally near optimal, and determined by the solution to an equilibrium HJB equation. We have seen that the equilibrium HJB equation in this paper is an interesting modification of that found in \cite{Wei-Yong-Yu 2017}.
\ms
Further, as a byproduct, in obtaining a Feynman--Kac type formula for the equilibrium HJB equation, we introduce a new class of BSVIEs in which the diagonal value $Z(s,s)$ of the process $Z(\cd\,,\cd)$ appears. For such equations, only some very special cases have been studied, and the general case is left widely open. Actually, our introduction of them in this paper initiates the research on such BSVIEs. Some relevant ideas and results for the so-called extended BSVIEs can be found in \cite{Wang 2019}.
\ms
Finally, we provide a partial list of the widely open questions concerning our Problem (N):
\ms
$\bullet$ Solvability of the general equilibrium HJB equation \rf{Th-differential-form-rewrite}, with non-degenerate and bounded diffusion, i.e.,
$$\l_0I\les\si(t,x,u)\si(t,x,u)^\top\les\l_1I.$$
\ms

$\bullet$ When $\si(t,x,u)$ is degenerate (and it is independent of $u$), is it possible to use viscosity solutions to identify/characterize the equilibrium value function?
\ms
$\bullet$ If the map $\psi$ defined by \rf{define-psi} is not regular enough, or not unique, or even does not exist, what can one do for Problem (N)?
\ms
\subsection{Big Numbers}
Of the things I am going to discuss, the most difficult to accept may be the incredibly large numbers that are an unavoidable part of them. We have already had to swallow some pretty big numbers. For example, the observed flatness of the universe implies that it is much bigger than the horizon scale---by how much no one knows.
Again: the landscape of vacua may be orders of magnitude larger---even in the logarithm---than the often-quoted $10^{500}$. Again, no one knows.
What has been less appreciated is the time-scales over which the mechanisms of eternal inflation take place. It will take a few hundred billion years for the CMB to thermalize to the local de Sitter temperature, but for the large-scale attractors that eternal inflation envisions, the time scale for equilibration is at least $e^{10^{120}}$. Skepticism about such monstrous time intervals is perfectly legitimate. But in their defense I would make two points. The first is that these \it are \rm the time-scales of eternal inflation: to discuss eternal inflation but to reject their legitimacy is in my opinion illogical.
The second remark is that eternal inflation may have explanatory power. (In the last part of this lecture I will review some examples.) It offers explanations for the small value of the cosmological constant; the strange coincidence between the time at which observations are made and the onset of cosmic acceleration; and, as we will soon see, the arrow of time. But all of these explanations require us to consider the statistics of super-huge distances, times, and landscapes. So I ask you to keep an open mind.
\subsection{The Boltzmann-Fluctuation Problem}
It has been claimed that the arrow-of-time can only be explained by assuming that the initial entropy of the universe is very small. In an eternally inflating universe this is not correct; a low entropy initial condition is neither necessary nor sufficient for an arrow-of-time. Let's begin with whether it is sufficient. Suppose there is a theory that contains only vacua with positive cosmological constants, and some agent prepares the universe in a state of very low entropy. For example, the state of lowest entropy would be the vacuum with largest cosmological constant.
The initial vacuum would decay, through a chain of tunneling events, to the state of smallest cosmological constant. Each decay would be followed by a transient period during which an arrow-of-time would exist, until the vacuum reached de Sitter thermal equilibrium. During such a transient period observers would see a normal directional flow of time.
The problem with this theory, when viewed from the super-long time-scales of eternal inflation, was pointed out by Dyson, Kleban, and Susskind \cite{Dyson:2002pf}. Once maximum entropy is reached, there is an infinite amount of time for ergodicity to create large thermal fluctuations with \it local \rm uncorrelated arrows of time.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.3]{phasespace.pdf}
\caption{An ergodic path through the phase space of an inflating universe. The small red dot is a very low-entropy starting point such as an inflating geometry with a large vacuum energy. The dark green disk is the region which looks like our universe. The light green region contains observer-friendly freak-universes. The overwhelming majority of trajectories that pass into either of the green areas do not flow directly from the tiny red dot. A trajectory will flow from the dark green disk into the red dot as frequently as it flows out of it. }
\label{f6}
\end{center}
\end{figure}
These Boltzmann fluctuations permit structure to form and evolve, but the overwhelming number of such fluctuations are freak occurrences which look nothing like our world. Freak-worlds would statistically dominate, implying that a ``normal'' universe would be the extraordinary exception. If time is viewed as a river, then this kind of theory would describe a dammed-up channel in which the only flows are local thermal fluctuations. What we need is a mechanism to create a global flow.
If a low-entropy initial condition is not sufficient for an arrow-of-time, is it necessary? The answer is no: as Bousso has argued, ``up-transitions" (fluctuations in which the cosmological constant increases) can allow a causal patch to jump to a low entropy state from a larger entropy state. From there it may evolve normally \cite{Bousso:2011aa}. As Bousso himself argues, this can only be viable if, in addition to de Sitter vacua, the landscape contains so-called terminal vacua. Terminal vacua are causal patches that have exited from the prevailing inflation by having either zero or negative cosmological constant.
Basically, the dominance by Boltzmann fluctuations is avoided if the vacuum exits inflation well before the recurrence time.
As I will explain, the fractal-flow is a new mathematical attractor that replaces conformal invariance when terminals are present, and induces an eternal arrow-of-time\footnote{Professor Nomura has kindly informed me of similar ideas in his paper \cite{Nomura:2011dt}}.
\subsection{Eternal de Sitter Space?}
Let me dispel a false idea. Suppose there is a landscape described by a collection of scalar fields, and a number of strictly positive local minima of the potential. One normally thinks of each local minimum as a de Sitter vacuum, but strictly speaking there is only one equilibrium state. Either from a local or a global perspective, the vacua make tunneling transitions---both down and up---so that an eventual equilibrium is set up. That equilibrium would be the only exact de Sitter vacuum.
From a causal patch viewpoint this is just thermal equilibrium in which up and down-transitions equilibrate. In equilibrium the probability for any given metastable vacuum $m$ is given by
\begin{equation}
P(m) = z^{-1} e^{S_m}
\end{equation}
where $z$ is a normalization factor and $S_m$ is the usual Hawking Gibbons entropy of the local minimum. Put another way, all microstates are equally probable.
From the global or multiversal viewpoint the same arrow-less equilibrium is described by a conformally invariant fixed-point of a putative dS/CFT \cite{Strominger:2001pn}. The lack of an arrow-of-time makes conformal invariance a thing to be avoided.
This raises a question: Is a theory of absolutely stable de Sitter equilibrium mathematically consistent? The answer may be no; the non-compact symmetries of de Sitter space---conformal symmetries in dS/CFT---are inconsistent with the finite entropy of a causal patch. The no-go theorem was proved ten years ago by Goheer, Kleban, and Susskind \cite{Goheer:2002vf}. I view this as a positive thing; it mathematically rules out a class of theories which are phenomenologically unviable. (The theorem of \cite{Goheer:2002vf} makes explicit reference to recurrence time-scales. Presumably, approximate theories of de Sitter space can make sense for ordinary cosmological times.)
The purpose of this lecture is explain, in as mathematically a precise fashion as I can, the relations between eternal inflation, conformal fixed-points, terminal vacua, and the arrow-of-time. To do so I will rely on a model of the tree-like structure of eternal inflation \cite{Harlow:2011az}. What one finds in the model, and I believe it to be true more generally,
is that the introduction of terminal vacua leads to a new kind of attractor-behavior called a \it fractal-flow. \rm The fractal-flow has a robust arrow-of-time, not dependent on fluctuations. It has a universal scaling behavior but it is not a CFT. In fact it is not a field theory in any ordinary sense.
\setcounter{equation}{0}
\section{The Tree Model Without Terminals}
I will begin with a short review of the Harlow, Shenker, Stanford, Susskind model.
The causal structure of the model multiverse is a tree-graph that grows out of a root in the remote past. At each node of the tree, one incoming edge splits into some number $p$ of outgoing edges, sometimes called cells in \cite{Harlow:2011az}. The simplest case of $p=2$ is sufficient for my purposes and I will adopt it from now on. Each doubling of the number of edges will be called a 2-folding.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.3]{f1.pdf}
\caption{The eternally inflating tree and its future boundary.}
\label{f6}
\end{center}
\end{figure}
The two concepts of \it future boundary \rm and \it causal patch \rm are central to the tree model. Let us define them.
The tree has a future asymptotic boundary which is the precise analog of the future boundary of de Sitter space. The future boundary of a tree is not smooth geometry in the usual sense. One defines it by following a future-directed path to $u= \infty.$ Each such path ends on a point $x$ of the future boundary. As explained in \cite{Harlow:2011az} the set of such points defines the $2$-adic \ number system. Go to any point $x$ on the $2$-adic \ boundary and run backward along a unique sequence of edges to the root. That sequence of edges is the causal patch of the future boundary-point $x.$
The tree exponentially expands so that after $u$ 2-foldings the number of nodes and edges is $2^u.$ Each edge of the tree has a ``color'' $m$ which represents a vacuum on a strictly positive cosmological constant \ landscape. It is assumed that the color $m$ represents a collection of microstates described by an entropy $S_m.$
At each node the incoming edge with vacuum-color $m$ splits into two edges. Each new edge has probability $\gamma_{nm}$ to make a transition from color $m$ to color $n.$ The transition matrix $\gamma_{nm}$ plays the role of the dimensionless tunneling rate for a transition from vacuum type $m$ to type $n.$ The fundamental property of microscopic reversibility is implemented by the condition of detailed balance,
\begin{equation}
\frac{\gamma_{nm}}{\gamma_{mn}} = e^{S_n-S_m}.
\label{detail}
\end{equation}
Equivalently, detailed balance can be expressed in terms of a real symmetric matrix $M_{nm} = M_{mn}:$
\begin{equation}
\gamma_{nm} = M_{nm} e^{S_n}.
\label{symmetric M}
\end{equation}
Let us define the probabilities $P_m(u)$ as the fraction of edges with color $m$ after $u$ 2-foldings of the tree.
The $P_m$ satisfy the rate equations
\begin{equation}
P(u+1) = G P(u)
\label{rate}
\end{equation}
where $P$ is a column-vector composed of the $P_m;$ and $G$ is the matrix,
\begin{equation}
G_{mn} = \delta_{mn} - \sum_r \gamma_{rm} \delta_{mn} + \gamma_{mn}
\label{G-matrix}
\end{equation}
The matrix $G$ is not symmetric so its eigenvectors are not orthogonal, but detailed balance allows us to transform the equation to a form in which the eigenvectors are orthonormal. To that end, define the diagonal matrix
\begin{equation}
Z_{mn}=\delta_{mn} e^{S_n/2}.
\end{equation}
The matrix
\begin{equation}
S=Z^{-1} G Z
\end{equation}
is seen to be symmetric. The column vectors can also be transformed:
\begin{equation}
\Phi = Z^{-1} P.
\end{equation}
The rate equation in terms of $\Phi$ has the symmetric form,
\begin{equation}
\Phi(u+1) = S \Phi(u).
\end{equation}
Note that the original probabilities $P_m$ are given by
\begin{equation}
P_m= e^{S_m/2}\Phi_m
\label{p=es phi}
\end{equation}
Since $S$ is symmetric its eigenvectors are orthogonal. We denote them by the symbols $(I)_m.$ The label $I$ runs over the same number of values as does the color label $m.$ The $(I)$ satisfy the eigen-equations
\begin{equation}
S \cdot (I) = \lambda_I (I)
\end{equation}
The eigenvectors satisfy
\begin{equation}
(I)\cdot (J) = \sum_m (I)_m(J)_m = \delta_{IJ}.
\end{equation}
and
\begin{equation}
\sum_I (I)_m (I)_n = \delta_{mn}
\end{equation}
\subsection{Equilibrium}
All of the eigenvalues $\lambda_I$ are positive. The largest eigenvalue is equal to $1$ and all others are less than $1.$ We will write the eigenvalues in terms of scaling-dimensions\footnote{I will justify the term scaling-dimension shortly.} $\Delta_I,$
\begin{equation}
\lambda_I = 2^{-\Delta_I}
\end{equation}
Note that the leading eigenvalue $\lambda_0 = 1$ corresponds to scaling dimension zero. Typically the other dimensions are very small, being of order the tunneling rates $\gamma.$
The eigenvector corresponding to eigenvalue $1$ is called $(0)$ and its elements are given by
\begin{equation}
(0)_m \sim e^{S_m/2}.
\end{equation}
The first thing to notice about $(0)$ is that it is the only eigenvector which does not tend to zero with the time $u.$ All others fade to zero like
$2^{-\Delta_I u}.$ Thus $(0)$ represents the equilibrium distribution of the colors, and the other eigenvectors represent transients.
In terms of the probabilities $P$ the equilibrium distribution is
\begin{equation}
P_m^{(0)} =z^{-1} e^{S_m }
\label{equilibrium}
\end{equation}
where $z^{-1}$ is a normalization factor.
There are two meanings to the probabilities $P_m^{(0)}:$ one global, one local. Globally $P_m^{(0)}$ is the fraction of edges that have color $m$ after transients have faded away. They are completely independent of the initial conditions. At a given time $u$ the total number of edges is
\begin{equation}
N_{total} = 2^u
\end{equation}
and the number of edges of type $m$ is
\begin{equation}
N_m= 2^u P_m(u) \to z^{-1} e^{S_m } 2^u .
\end{equation}
The other interpretation of $P$ is local. As we follow a causal patch from the root of the tree to the future boundary, the color of the edges will vary according to the rate equation. The quantity $P_m(u)$ is the probability that at time $u$ the color is $m.$ After initial transients fade away, the probability that the causal patch has color $m$ is $P_m^{(0)}.$ This interplay of local and global is called local/global duality \cite{Bousso:2009mw}.
One final point is that the time scale for the $P_n$ to fully equilibrate to the distribution \ref{equilibrium}, from an arbitrary initial condition, is determined by the slowest rates in the rate equation. These will be the up-transitions. For the up-transition from $n$ (a state of large entropy) to $m$ (a state of low entropy) the rate is of order $e^{-S_n}.$ Thus the time for full equilibration is of order $e^{S}$ where $S$ is the maximum entropy on the de Sitter landscape. That is why I said the time scales associated with eternal inflation are at least as big as $e^{10^{120}}.$ The number $10^{120}$ is the entropy of our local de Sitter vacuum.
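A minimal numerical sketch may make the rate equation and its equilibrium attractor concrete. (The code below is mine, written for this lecture; the entropies and the symmetric matrix $M_{nm}$ are made-up toy values, vastly smaller than realistic de Sitter entropies.)

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

S = np.array([1.0, 3.0, 5.0])     # toy entropies S_m of the vacua
K = len(S)

# Detailed balance: gamma_{nm} = M_{nm} e^{S_n} with M symmetric,
# so that gamma_{nm}/gamma_{mn} = e^{S_n - S_m}.
M = rng.uniform(1e-4, 1e-3, (K, K))
M = (M + M.T) / 2
gamma = M * np.exp(S)[:, None]    # gamma[n, m]: rate for m -> n
np.fill_diagonal(gamma, 0.0)

# G_{mn} = delta_{mn} - sum_r gamma_{rm} delta_{mn} + gamma_{mn}
G = np.eye(K) - np.diag(gamma.sum(axis=0)) + gamma
assert np.allclose(G.sum(axis=0), 1.0)   # probability is conserved

# Eigenvalues: lambda_0 = 1, the others are 2^{-Delta_I} with
# small positive scaling dimensions Delta_I of order the rates.
w, _ = np.linalg.eig(G)
w = np.sort(w.real)[::-1]
assert np.isclose(w[0], 1.0)
Delta = -np.log2(w[1:])
assert (Delta > 0).all()

# The equilibrium distribution P_m ~ e^{S_m} is reached from any
# initial condition: iterate P(u+1) = G P(u) from a pure state.
P = np.array([1.0, 0.0, 0.0])
for _ in range(200000):
    P = G @ P
assert np.allclose(P, np.exp(S) / np.exp(S).sum(), atol=1e-6)
\end{verbatim}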
\subsection{Global Correlations}
To study global correlations we temporarily cut off the time at some large time $u =u_0,$ thereby defining a regulated version of the future boundary. The points of the regulated boundary are called $x.$ There is a natural metric on the (regulated) boundary defined in the following way. Pick any two points $x, \ y .$ From each point there is a unique path of edges that runs backward to the root. Run backward along the paths from $x$ and $y$ until they intersect at a node, and define the time of intersection to be $u_i.$ The $2$-adic \ distance between $x$ and $y$ is defined by
\begin{equation}
|x-y|_2 \ = \ {2^{-u_i}}
\label{distance}
\end{equation}
The $2$-adic \ distance is a measure of how far into the past you have to follow the causal patches until they come into causal contact. Note that the distance defined in this way is not sensitive to the cutoff $u_0.$
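In code, the distance \ref{distance} is just a longest-common-prefix computation on the branch choices that define a causal patch. (The encoding below is mine and is only meant to make the definition concrete.)

\begin{verbatim}
def two_adic_distance(x, y):
    # |x - y|_2 = 2^{-u_i}, where u_i is the time of the last
    # common node of the causal patches of x and y.  A boundary
    # point is encoded as its sequence of binary branch choices,
    # starting from the root.
    u_i = 0
    for a, b in zip(x, y):
        if a != b:
            break
        u_i += 1
    return 2.0 ** (-u_i)

x = (0, 1, 1, 0, 1)
y = (0, 1, 1, 1, 0)
z = (0, 1, 0, 0, 0)
assert two_adic_distance(x, y) == 2 ** -3

# The distance is ultrametric: |x-z|_2 <= max(|x-y|_2, |y-z|_2).
d = two_adic_distance
assert d(x, z) <= max(d(x, y), d(y, z))
\end{verbatim}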
A system of correlation functions at time $u_0$ can be defined as follows. The one-point function $C_n(x)$ is defined as the probability that the edge at $x$ has color $n.$ Assuming that $u_0$ is late enough that transients have died out, $C_n(x)$ will be independent of both $u_0$ and $x.$ It is obviously given by the equilibrium distribution,
\begin{equation}
C_n(x) = P_n^{(0)} = \frac{1}{z}e^{S_n}
\end{equation}
Next consider the correlation function $C_{nm}(x,y),$ defined as the joint probability that the edges leading into $x$ and $y$ have colors $n$ and $m.$ The only source of correlation between $x$ and $y$ is the fact that they trace back to the common node at time $u_i.$ Suppose the color of the edge coming into that node is $r.$
Define $P_{nr}(u)$ to be a conditional probability that a causal patch at time $u_i+u$ will have color $n,$ given that at time $u_i$ it has color $r.$ In \cite{Harlow:2011az} $P_{nr}(u)$ was called a propagator, although it has nothing to do with field theoretic propagators.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.3]{propagator.pdf}
\caption{The propagator is the probability that a causal patch (the dark sequence of edges)
at time $u_i+u$ will have color $n,$ given that at time $u_i$ it has color $r.$}
\label{f6}
\end{center}
\end{figure}
The propagator is
given by
the $u$-th power of the matrix $G$:
\begin{equation}
P_{nr}(u) \equiv (G^u)_{nr}= \sum_I (I)_n \ \lambda_I^u \ (I)_r \ e^{\frac{S_n-S_r}{2}}.
\label{connector}
\end{equation}
Since once the branches leading to $x$ and $y$ split off from one another they have no further interaction, the
probability that $x$ and $y$ have color $n$ and $m$ has the form
\begin{equation}
P_{n r}(u_0 -u_i)P_{m r}(u_0 -u_i).
\label{partial prob}
\end{equation}
To complete the computation we multiply \ref{partial prob} by the probability that the common node at $u_i$
has color $r,$ and then sum over $r.$
\begin{equation}
C_{nm}(x,y)=\sum_r P_r P_{n r}(u_0 -u_i)P_{m r}(u_0 -u_i).
\label{C=PPP}
\end{equation}
If we assume that the time of the intersection, $u_i$ is late enough that transients have died away, then we can replace $P_r$ by the equilibrium eigenvector and we find
\begin{equation}
C_{nm}(x,y)=\frac{1}{z}\sum_r e^{S_r} P_{n r}(u_0 -u_i)P_{m r}(u_0 -u_i).
\label{C=ePP}
\end{equation}
The final step is to substitute \ref{connector} and use the orthonormality of the eigenvectors $(I)$ to obtain,
\begin{equation}
C_{nm}(x,y)= \frac{1}{z}\sum_{I}
\lambda_I^{2 (u_0-u_i)} \ (I)_{n} \ (I)_{m} \ e^{S_{n}+S_{m} \over 2}.
\end{equation}
or, using \ref{distance} we find the suggestive form,
\begin{equation}
C_{nm}(x,y)= \frac{1}{z}\sum_{I}
\lambda_I^{2 u_0} \ (I)_{n} \ (I)_{m} \ e^{S_{n}+S_{m} \over 2} \frac{1}{|x-y|_2^{2\Delta_I}}.
\label{suggestive}
\end{equation}
The interpretation is obvious. The correlation function is a superposition of conformal correlation functions for fields of dimension $\Delta_I.$ The pre-factors $\lambda_I^{ u_0}$ for each external line are wavefunction renormalization factors. Stripping off these factors we find a system of conformal correlation functions $\langle {\cal{O}}_I(x) {\cal{O}}_J(y) \rangle$ that satisfy
\begin{equation}
\langle {\cal{O}}_I(x) {\cal{O}}_J(y) \rangle = \frac{\delta_{IJ}}{|x-y|_2^{2\Delta_I}}
\end{equation}
There are a number of interesting things about this result. First of all, consider the term in \ref{suggestive} associated with the leading eigenvector $(I) = (0).$ Since $\Delta_0=0$ this term is given by the factorized form,
\begin{eqnarray}
C_{nm}(x,y)&=& \frac{1}{z}
(0)_{n} \ (0)_{m} \ e^{S_{n}+S_{m} \over 2} \cr \cr
&=& P_n^{(0)}P_m^{(0)}
\label{cluster}
\end{eqnarray}
Since the other terms tend to zero as inverse powers of $|x-y|_2$ the two-point function satisfies cluster decomposition.
Remarkably the conformal correlation functions have the precise form of CFT correlation functions for fields of dimension $\Delta_I, $ except that the usual distance $|x-y|$ is replaced by the $2$-adic distance $|x-y|_2.$
This correspondence with conformal field theory is not accidental. As was shown in \cite{Harlow:2011az}, there is an underlying symmetry of tree-graphs that acts on the boundary as $2$-adic conformal transformations.
However---and this is the important point---the symmetry only acts on graphs made of undirected edges. The edges of the tree-model are supposed to be future-directed and in general the directedness destroys the $2$-adic conformal symmetry. But as long as detailed balance is satisfied the symmetry is restored and the correlations are those of a CFT on the $2$-adic boundary. The similarity with CFT correlations is not restricted to two-point functions and includes such things as operator product expansions.
For example, the three point function has the usual conformal form
\begin{equation}
\langle \mathcal{O}_I(x)\mathcal{O}_J(y)\mathcal{O}_K(z)\rangle
=
\frac{C_{IJK}}{|x-y|_2^{\,\Delta_I+\Delta_J - \Delta_K} |x-z|_2^{\,\Delta_I+\Delta_K-\Delta_J} |z-y|_2^{\,\Delta_K + \Delta_J - \Delta_I}}.
\label{3pt1}
\end{equation}
with
\begin{equation}
C_{IJK} = \mathcal{N}^{1/2}\sum_n e^{-S_n \over 2} \ (I)_n \ (J)_n \ (K)_n.
\label{3pt2}
\end{equation}
The structure constants $C_{IJK}$ satisfy the associativity condition,
\begin{equation}
\sum_H \ C_{IJH}C_{HKL} =\sum_H \ C_{IKH}C_{HJL}
\end{equation}
\subsection{Multiverse Fields}
The fields ${\cal{O}}_I(x) $ are not any of the usual perturbative fields such as string-theory moduli. They are defined as transforms of fields ${\Phi}_n(x) $ which take on the value $1$ at a point in the multiverse where the color is $n$ and zero elsewhere. The ${\cal{O}}_I(x) $ are fields of definite dimension, whereas the ${\Phi}_n(x) $ are linear superpositions of fields with different dimensions. The ${\Phi}_n(x) $ and the ${\cal{O}}_I(x) $ are non-perturbative objects that only make sense on scales at least as big as a causal patch---in other words super-horizon scales. The purpose of these {\it multiverse fields} is to study global properties of the multiverse.
The characteristic feature of multiverse fields is their extremely low dimensionality. The dimensions $\Delta_I$ are generally proportional to the transition rates $\gamma_n$ which in a realistic theory would be Coleman-De Luccia tunneling rates. Moreover, if the landscape is as rich as string theory suggests \cite{Bousso:2004fc}\cite{Kachru:2003aw}\cite{Susskind:2003kw}, then there should be a huge collection of such fields and a discretuum of operator dimensions.
It is clear that the non-perturbative version of a de Sitter---conformal field theory correspondence would have to contain this kind of vast array of multiverse fields. In particular, in the doubled-dS/CFT version of \cite{Harlow:2012dd} they would correspond to source fields (arguments of the Wheeler-DeWitt functional) which are integrated over in calculating expectation values.
\subsection{Causal Patch Correlations}
From an observational point of view we are not interested in the global view but only the view from a causal patch \cite{Bousso:2009mw}. Causal patches are of course embedded in a growing tree. It is not completely obvious whether or not the causal patch inherits an arrow-of-time from the surrounding expansion. To see that it does not we can directly investigate correlations within a causal patch.
Consider the following question: Given that a point on a causal patch has color $m$ at time $u,$ what is the probability that it will have color $n$ at the next instant, $u+1?$ I'll write this probability as
$$P(n, u+1 | m,u ).$$
The answer is obviously an element of the rate matrix $\gamma_{nm}.$ Using \ref{symmetric M} we can write this in the form
\begin{equation}
P(n, u+1 | m,u ) = M_{nm} e^{S_n}.
\end{equation}
Now let us ask the time-reversed question: Given that a point on a causal patch has color $m$ at time $u,$ what is the probability that it had color $n$ at the previous instant $u-1?$ From Bayes' theorem,
\begin{equation}
P(n, u-1 | m,u ) = P(m,u | n, u-1 ) \frac{P(n, u-1)}{P(m,u )}.
\label{bayes}
\end{equation}
The probability $P(m,u | n, u-1 )$ is the transition matrix element $\gamma_{mn}.$ This gives,
\begin{equation}
P(n, u-1 | m,u ) = \gamma_{mn} \frac{P(n, u-1)}{P(m,u )}.
\end{equation}
In general this is nothing special, but if the transients have died out by time $u$ we may write this as
\begin{equation}
P(n, u-1 | m,u ) = \gamma_{mn} e^{S_n-S_m}
\end{equation}
or
\begin{equation}
P(n, u-1 | m,u ) = M_{mn} e^{S_n}
\end{equation}
Given the symmetry of $M$ we find
\begin{equation}
P(n, u-1 | m,u ) = P(n, u+1 | m,u )
\end{equation}
In other words there is a complete time-reversal symmetry and a total absence of an arrow-of-time. We could easily extend this to more general situations---for example,
\begin{equation}
P(n, u-v | m,u ) = P(n, u+v | m,u )
\end{equation}
for any integer $v.$
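This reversibility can also be confirmed by brute force. The sketch below simulates a long Markov chain for a causal patch (with the same kind of invented detailed-balance rates as above) and compares the empirical forward and backward conditional probabilities:
\begin{verbatim}
import numpy as np

# Sketch: an empirical check of P(n,u-1|m,u) = P(n,u+1|m,u).
# S and M are invented numbers whose only role is to make the
# chain satisfy detailed balance.

rng = np.random.default_rng(0)
S = np.array([1.0, 2.0, 3.0])
M = np.array([[0.00, 0.02, 0.01],
              [0.02, 0.00, 0.03],
              [0.01, 0.03, 0.00]])
gamma = M * np.exp(S)[:, None]                      # gamma_nm = M_nm e^{S_n}
G = np.eye(3) - np.diag(gamma.sum(axis=0)) + gamma  # columns sum to 1

steps = 200_000               # long enough that transients are negligible
traj = np.empty(steps, dtype=int)
traj[0] = 0
for t in range(1, steps):
    traj[t] = rng.choice(3, p=G[:, traj[t - 1]])

m, n = 2, 1
fwd = np.mean(traj[1:][traj[:-1] == m] == n)   # P(n, u+1 | m, u)
bwd = np.mean(traj[:-1][traj[1:] == m] == n)   # P(n, u-1 | m, u)
print(fwd, bwd)                                # nearly equal
\end{verbatim}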
The message to be taken away from this is that conformal symmetry of the future boundary of an eternally inflating de Sitter geometry is associated with the same time-reversibility (detailed balance) that leads to thermal equilibrium in a causal patch.
It follows that a theory with an arrow-of-time must not have a conformally invariant future boundary.
\setcounter{equation}{0}
\section{The Tree Model With Terminals}
The existence of terminal vacua changes the
story in a very positive direction,
by a mechanism that is simply illustrated in the
tree model. With terminals, the conformally invariant fixed-point is
replaced by a new kind of attractor---one that does have an arrow-of-time.
In the presence of terminal vacua, the co-moving coordinate volume (but not the proper volume) eventually becomes
solidly dominated by terminals. Eternal inflation does not end, but it is restricted to a
fractal of coordinate volume that decreases like $\lambda_D^u$
($\lambda_D$ being the dominant eigenvalue, soon to be defined).
Despite the fact that the coordinate volume goes to zero, it is
still possible to define a collection of non-trivial conditional correlation functions on future
infinity:
Following \cite{Harlow:2011az} we add terminal vacua to the tree model. This is done by randomly pruning the tree.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.3]{f6.pdf}
\caption{The pruned tree of a landscape with terminals.}
\label{f7}
\end{center}
\end{figure}
Suppose an edge has color $n.$ When it branches, the probability that a daughter edge is terminal will be denoted $\gamma_n.$ The rate equations for non-terminal vacua are still a closed set of equations. Equations \ref{detail}, \ref{symmetric M}, and \ref{rate} are unchanged, but the expression \ref{G-matrix} for the rate matrix is replaced by,
\begin{equation}
G_{mn} = \delta_{mn}(1-\gamma_n) - \sum_r \gamma_{rm} \delta_{mn} + \gamma_{mn}
\label{G'-matrix}
\end{equation}
These equations no longer satisfy probability conservation for the simple reason that probability leaks into the terminal vacuum.
The matrix $\tilde{G}=Z^{-1}GZ$ is still symmetric and its eigenvectors still form an orthonormal basis, now with all eigenvalues less than one in magnitude.
The largest eigenvalue---the so-called dominant eigenvalue---of this modified $G$-matrix determines the asymptotic late-time population of non-terminal vacua. The dominant eigenvalue also defines a dominant scaling dimension, $2^{-\Delta_D} = \lambda_D.$
The corresponding eigenvector, denoted $(D)_m,$ is called the dominant eigenvector.
We also define the probability $P^{\{D\}}_m$ through a version of equation \ref{p=es phi}
\begin{equation}
P^{\{D\}}_m = e^{\frac{S_m}{2}} (D)_m.
\end{equation}
The probabilities $P^{\{D\}}_m$ are no longer proportional to $e^{S_m}.$ They tend to be heavily weighted toward the longest-lived vacua and not necessarily the largest-entropy vacua.
It should be noted that although the dominant eigenvector has time dependence
\begin{equation}
2^{-\Delta_D u}
\end{equation}
the decrease in its magnitude does not mean that eternal inflation ends. The volume of inflating vacuum grows a little slower than previously, with the time dependence
\begin{equation}
2^{(1-\Delta_D) u}
\end{equation}
However, the probability for survival along a trajectory tends to zero.
As in the case without terminals, propagators can be defined by \ref{connector} but their actual values are determined by the matrix
\ref{G'-matrix} and not \ref{G-matrix}.
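Numerically, extracting the dominant data is a small eigenvalue problem. The sketch below uses the same invented three-color rates as before, together with illustrative terminal decay probabilities $\gamma_n$:
\begin{verbatim}
import numpy as np

# Sketch: dominant eigenvalue/eigenvector of the G-matrix with
# terminals; gamma_term[n] is the probability that a daughter
# edge of color n is terminal (all numbers illustrative).

S = np.array([1.0, 2.0, 3.0])
M = np.array([[0.00, 0.02, 0.01],
              [0.02, 0.00, 0.03],
              [0.01, 0.03, 0.00]])
gamma = M * np.exp(S)[:, None]
gamma_term = np.array([0.01, 0.02, 0.005])

G = np.diag(1.0 - gamma_term) - np.diag(gamma.sum(axis=0)) + gamma

Z = np.diag(np.exp(S / 2))
lam, vecs = np.linalg.eigh(np.linalg.inv(Z) @ G @ Z)  # still symmetric
lam_D = lam[-1]                                # dominant eigenvalue, < 1
D = vecs[:, -1] * np.sign(vecs[:, -1].sum())   # fix the overall sign
Delta_D = -np.log2(lam_D)                      # 2**(-Delta_D) = lam_D
P_D = np.exp(S / 2) * D                        # dominant probabilities
print(lam_D, Delta_D, P_D / P_D.sum())
\end{verbatim}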
\subsection{ Fractal-Flow Behavior}
I have explained that there is a close relation between conformal symmetry of the future boundary and the absence of an arrow-of-time. It follows that any theory that has a time-arrow must break conformal symmetry. The tree model with terminals is an example in which we can test out the hypothesis. We will see that the existence of terminals leads to an attractor called a fractal-flow which breaks conformal invariance while creating a global arrow-of-time.
The mathematical structure of a fractal-flow is describable in terms of correlation functions similar to those that we discussed earlier.
These correlation functions are defined by asking conditional questions of the following kind:
Given a point $x,$ what is the probability that it has color $n$
given that it has not been swallowed up by a terminal bubble?
Similarly, given two points $x$ and $y,$ what is the conditional
probability that they have color $n_x$ and $n_y,$ given that neither
has been swallowed up by a terminal bubble?
Such questions define a collection of conditional correlation
functions
\begin{eqnarray}
&& C_n(x) \cr \cr
&& C_{n_x, n_y}(x , y) \cr \cr
&& C_{n_x, n_y, n_z}(x , y, z)\cr \cr
&&\ldots
\label{conditional correlators}
\end{eqnarray}
The conditional correlation functions are calculated as before with one exception. In
\ref{C=ePP} instead of the equilibrium probability we use the dominant probability
$P_n^{\{D\}}.$
The one point function $C_n(x)$ is the dominant probability,
\begin{equation}
C_n(x) = P_n^{\{D\}} = e^{\frac{S_n}{2}} (D)_n \ 2^{-\Delta_D u_0}
\label{pd}
\end{equation}
Because the dominant probability is time-dependent,
\begin{equation}
P_n^{\{D\}} \sim 2^{-\Delta_D u}
\end{equation}
all correlations decay with time. This is a manifestation of the fact that the co-moving volume (but not proper volume) becomes filled with terminal vacua, leaving over only a fractals-worth of inflating co-moving space.
If one is only interested in relative conditional probabilities then the time-dependent factor may be dropped. Given that the point $x$ has not been swallowed by a terminal bubble, the relative probability that it has color $m$ is $P_m^{\{D\}}.$ It should be noted that in general the dominant probability vector is not similar to the statistical weight $e^{S_m}.$ The statistical weight is dominated by the vacuum of lowest cosmological constant. The dominant probability is dominated by the vacuum of lowest decay rate. In general these need not be the same. More generally, the statistical weight depends only on the entropies $S_n$ and not on the decay rates. The dominant eigenvector does depend on the rates.
Next consider the two-point function. The method of calculation is the same as before, with the exception that I mentioned. Using \ref{pd}
\begin{equation}
C_{nm}(x,y)= \sum_{r}
2^{-\Delta_D u_i} e^{S_{r} \over 2} \ (D)_{r}
P_{nr}(u_0 -u_i)P_{mr}(u_0 -u_i),
\label{CABD}
\end{equation}
or substituting \ref{connector} and using
$$
2^{u_i} =\frac{1}{ |x-y|_2}
$$
we get
\begin{equation}
C_{nm}(x,y)= \sum_{I,J}\left\{ \sum_r e^{-S_r \over 2} \ (D)_r \ (I)_r \
(J)_r
\frac{1}{ |x-y|_2^{(\Delta_J+\Delta_I-\Delta_D)}}
\right\}
\left[
(I)_{n} \
(J)_{m} \ e^{S_{n} +S_{m} \over 2} \ \ 2^{-(\Delta_I+\Delta_J)u_0} \right].
\label{term 2pt}
\end{equation}
In terms of the fields with well-defined scaling dimensions,
\begin{equation}
\langle \mathcal{O}_I(x)\mathcal{O}_J(y)\rangle= \sum_n e^{-S_n \over 2} \ (D)_n \ (I)_n \ (J)_n
\left({1 \over |x -y|_2} \right)^{(\Delta_J+\Delta_I-\Delta_D ) }.
\label{insertion}
\end{equation}
Equations \ref{term 2pt} and \ref{insertion} have significant differences from their equilibrium counterparts. First of all, the two-point function $C_{nm}(x,y)$ does not exhibit cluster decomposition. Cluster decomposition is connected to the fact that operator products typically include the unit operator with dimension zero. We saw earlier how cluster decomposition arose from the contribution of the eigenvector $(0)$ which in fact does have operator dimension $\Delta_0 = 0.$ By contrast, the theory with terminals has no eigenvector with dimension zero. The closest analog is the dominant eigenvector. It is straightforward to calculate the contribution to \ref{term 2pt} from $(I) = (J) =(D).$ One finds a contribution proportional to
\begin{equation}
\frac{1}{|x-y|_2^{\Delta_D}} P_n^{\{D\}} P_m^{\{D\}}
\end{equation}
This is not conventional clustering which would require a term which did not go to zero with increasing $|x-y|_2.$ From this we can be sure that the conditional correlation functions do not define a conventional field theory.
From \ref{insertion} one also sees something non-standard; namely, the two point function is not diagonal in dimension. In fact comparing the two point function
of equation \ref{insertion} with the three point function in \ref{3pt1} and \ref{3pt2}, we see that it
closely resembles a conformal three-point function involving an insertion of the dominant field $ \mathcal{O}_D (\infty)$
evaluated at infinity. This obviously breaks conformal invariance which would require the two-point function to be diagonal.
Similar things are true for higher-point functions: there is always an extra insertion of $\mathcal{O}_D (\infty)$ breaking conformal invariance, but the correlation functions do have definite scaling properties under scale-transformations.
This fractal-flow attractor seems to be a new type of structure quite distinct from conformal field theory. Given its connection to the arrow-of-time problem, it surely deserves more study.
\subsection{Local Fractal-Flow}
Let us return to the local viewpoint---the view from a causal patch. Earlier I showed how the equilibrium fixed-point leads to time-symmetric correlation functions characteristic of fluctuations in thermal equilibrium. Now I want to reconsider the issue in the theory with terminals.
As before, the first question is:
Given that a point on a causal patch has color $m$ at time $u,$ what is the probability that it will have color $n$ at the next instant, $u+1?$
$$P(n, u+1 | m,u ).$$
The answer is completely unchanged from the earlier discussion,
$$
P(n, u+1 | m,u ) = \gamma_{nm} = M_{nm} e^{S_n}.
$$
But now consider the time-reversed question: Given that a point on a causal patch has color $m$ at time $u,$ what is the probability that it had color $n$ at the previous instant, $u-1?$ In other words what is
$$P(n, u-1 | m,u ).$$
The Bayesian analysis still gives \ref{bayes}
$$
P(n, u-1 | m,u ) = P(m,u | n, u-1 ) \frac{P(n, u-1)}{P(m,u )}.
$$
but now the factor $ \frac{P(n, u-1)}{P(m,u )}$ is given by the dominant eigenvector. One finds
\begin{equation}
P(n, u-1 | m,u )= \gamma_{nm} e^{S_m -S_n} 2^{\Delta_D}\frac{P^{\{D\}}_n}{P^{\{D\}}_m}
\end{equation}
or,
\begin{equation}
\frac{P(n, u-1 | m,u )}{P(n, u+1 | m,u )}= 2^{\Delta_D} \frac{e^{-S_n}P^{\{D\}}_n}{e^{-S_m}P^{\{D\}}_m} \ \neq 1
\end{equation}
It is easy to generalize this to allow arbitrary (integer) time-interval $v$,
\begin{equation}
\frac{P(n, u-v | m,u )}{P(n, u+v | m,u )}= 2^{\Delta_D v} \frac{e^{-S_n}P^{\{D\}}_n}{e^{-S_m}P^{\{D\}}_m}
\end{equation}
Thus we see that terminals induce a robust arrow-of-time that persists long after transients have faded and the populations have settled into the dominant eigenvector.
\setcounter{equation}{0}
\section{Irrelevance of Initial Entropy}
There is a paradoxical issue about terminals that
concerns the relevance of initial conditions for observations made within a causal patch. Let us suppose that there are $10^{500}$ vacua on the de Sitter Landscape. In this case we can expect that wherever we begin, after a few hundred transitions from one minimum to another, we will hit a terminal. The only way to avoid doing so is for ``up-transitions'' to cause repeated jumps to increased cosmological constant. The rates for up-transitions are vastly smaller than for down-transitions. Therefore from a strictly local perspective one might conclude that the probability is very high for the observed universe to be within a few hundred transitions from the initial condition. In particular there would certainly not have been time to equilibrate to the dominant eigenvector. If this is correct, statistical predictions about our location in the landscape crucially depend on initial conditions, about which we know very little.
On the other hand, if one looks at the tree globally, the overwhelming number of branches occur at late time. The number of branches ending at time $u_0$ grows exponentially. Therefore, if one makes the usual assumption of typicality, the best guess would be that we are extremely far from the initial condition, and from the transients associated with it.
Does this
matter for anything we observe? I think it does. In defining a measure on the landscape, it makes a difference whether we assume a prior probability given by the dominant eigenvector, or by something else. Bousso \cite{Bousso:2009dm}, who advocates the local view, also advocates using the dominant eigenvector---his quantitative analysis of the cosmological constant depends on it. But, as Bousso admits, it is hard to see how this can be justified without some reference to the global viewpoint.
If we do take the global view then
most branches of the tree occur at such late times relative to the initial condition, that transients will have died away. Probabilities are then governed by the universal asymptotic behavior and not the initial color---or entropy---of the root. If this is right,
there is no advantage to a low entropy starting point \cite{Bousso:2011aa}. Whatever the initial state, as long as it leads to eternal inflation the existence of a time-arrow is guaranteed by the fractal-flow attractor.
There is an important quantitative question that I have not discussed, but which has recently been addressed \cite{Bousso:2011aa}. Roughly speaking it is whether the flow is strong enough to overwhelm the tendency for Boltzmann fluctuations. One can imagine a limit in which the rates of decay to terminals---the values of the $\gamma_n$---are vastly smaller than the other transition rates. In this limit the flow would be so feeble that it would be overwhelmed by fluctuations. Such stagnated regions of the landscape, which allow observers, are dangerous because freak Boltzmann fluctuations in those regions could potentially outweigh all normal environments.
The authors of \cite{Bousso:2011aa} have analyzed the quantitative question and I have nothing to add to that analysis.
\setcounter{equation}{0}
\section{The Explanatory Power of Eternal Inflation}
I want to justify the claim that I made earlier that eternal inflation has explanatory power. I mentioned two items in addition to the arrow-of-time:
\begin{itemize}
\item The small value of the cosmological constant.
\item The cosmic coincidence that we happen to live at a time, relative to the big bang,
comparable to the time-scale provided by the cosmological constant.
\end{itemize}
The explanations that eternal inflation offers are statistical in nature. Here is where the large numbers of eternal inflation come in. First of all, the landscape \cite{Bousso:2004fc} \cite{Kachru:2003aw} \cite{Susskind:2003kw} must be assumed to be diverse enough to provide a dense set of possible values for constants such as the cosmological constant, the parameters of ordinary slow-roll inflation, and even a broad range of time-scales for biological evolution.
Secondly, space is so big that every environment occurs many times.
Finally, time is so big that the dominant fractal-flow attractor dominates the population of vacua.
The explanatory power of eternal inflation rests to some degree on anthropic considerations which require us to incorporate a few additional ingredients into the tree model. The new ingredients concern the existence of observers, although they do not depend on any detailed assumptions about the nature of life or intelligence. First, the usual assumption of typicality: the probability of an observation of a given type is proportional to the number of such observations made under the cutoff surface. It is not our purpose to justify this assumption but only to show how the geometry and causal structure of the multiverse influence the answer.
Next, we need to define the concept of a bubble-nucleation of a given vacuum type $n.$
Consider an edge of type $n$ somewhere on the tree. If the rates $\gamma$ are small then the near-term causal future of the causal patch containing that edge will most likely be dominated by cells (edges) of similar type. However, that is not generally true of the causal past of the edge. The causal past is defined by tracing back along a unique series of edges of the tree.
Eventually as one works backward, a point will occur where the color is no longer $n.$
That point defines the nucleation event. If all rates $\gamma$ are very small, then what grows out of the nucleation point will be mostly of type $n.$
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.3]{bubblenucleation.pdf}
\caption{On the right side of the figure a red bubble is shown nucleating in de Sitter space. Ignoring later nucleations, the bubble expands to fill the causal future of the nucleation point. The tree-analog of the same process is shown on the left side of the figure. }
\label{f8}
\end{center}
\end{figure}
Another standard assumption is that observers in the type $n$ environment can exist for a limited range of proper time, subsequent to the nucleation event. For simplicity we take that period to be concentrated around a proper time $\tau_{obs}.$ This proper time can be translated into a number of links. In the $n$-vacuum the proper time of a segment composed of $L$ links is
\begin{equation}
\tau = {L \over H_n}.
\end{equation}
Setting this equal to $\tau_{obs}$ gives the number of links $u_{obs}$ from the nucleation point to the place where observations take place,
\begin{equation}
u_{obs} = H_n \tau_{obs}.
\end{equation}
Let us also assume that when the nucleation of a bubble of type $n$ takes place, or shortly thereafter, an amount of matter is created within a causal patch that will eventually be assembled into a number of observers $\nu_n.$ All other things being equal, the value of $\nu_n$ will depend on $H_n.$ The form of the dependence follows from two facts: The first is that surfaces of constant proper-time $\tau$ are hyperbolic surfaces of constant negative curvature. Observers are distributed uniformly over such surfaces. The second fact is that the bounding surface of a causal patch has area $H_n^{-2}.$ Since in a hyperbolic space area and volume are proportional, it follows that the number of observers in a causal patch also varies as $H_n^{-2}.$
\begin{equation}
\nu_n \sim H_n^{-2}.
\label{nu = 1/HH}
\end{equation}
In principle we would like to count all observations that occur below the cutoff surface, but because populations exponentially increase, in an eternally inflating world it is sufficient to count observations that take place at the cutoff. Thus one defines the measure ${\cal{M}}(n)$ to be proportional to the number of observations occurring at time $u_0$, in bubbles of type $n,$ anywhere in the multiverse. At the end of the calculation, $u_0$ is allowed to tend to infinity. This choice of cutoff is natural in the model and in particular respects the symmetries, but it is not the only possible choice.
There are several factors that go into ${\cal{M}}(n)$. The first and most obvious is $\nu_n,$ the number of observers that form in a single bubble of type $n.$
The next relevant consideration is the color $m$ of the immediate ancestor of the bubble. If the ancestor color is assumed to be $m$ then we must include a factor of the probability for color $m$ in the statistical ensemble. Assuming the statistical ensemble is given by the dominant eigenvector gives a factor $P^{\{D\}}_m.$ In addition we must include the probability $\gamma_{nm}$ that the vacuum $m$ decays to $n.$
Summing over the possible ancestor vacua leads to the factor
$$\sum_m \nu_n \ P^{\{D\}}_m \ \gamma_{nm}$$ or, using \ref{nu = 1/HH},
\begin{equation}
{\cal{M}}(n) \sim \frac{1}{H_n^2} \sum_m \ P^{\{D\}}_m \ \gamma_{nm}
\end{equation}
The final and most important factor comes from the fact that if the observations take place at time $u_0,$ the bubbles must nucleate at time $u_0 - u_{obs}.$
Consider the total number of sites available on the tree at the nucleation time.
That number is
$2^{(u_0 - u_{obs})}.$ With this factor included, the measure becomes proportional to
\begin{equation}
{\cal{M}}(n) \sim \frac{1}{H_n^2} \ \sum_m \ \gamma_{nm} \ P^{\{D\}}_m \ 2^{(u_0 - H_n \tau_{obs})}
\end{equation}
Now we see a possible danger: the answer appears to depend on the cutoff surface $u_0$ through the factor
$2^{u_0}$. But this factor represents the exponential growth of all populations and should be factored out of relative probabilities.
After dropping the cutoff dependent factor the remaining measure is
\begin{equation}
{\cal{M}}(n) \sim \frac{1}{H_n^2} \ \sum_m \ \gamma_{nm} \ P^{\{D\}}_m \ \ 2^{- H_n \tau_{obs}}.
\end{equation}
The factor that interests us most is the one containing the Hubble constant $H_n,$
\begin{equation}
{\cal{M}}(n) \sim \frac{1}{H_n^2} \ 2^{ - H_n \tau_{obs}}
\label{cc measure}
\end{equation}
We may interpret this measure as the probability density for observations of a cosmological constant
proportional to $H_n^2. $
\begin{equation}
\frac{dP(\lambda, \tau_{obs})}{d\lambda} \sim \frac{1}{\lambda} 2^{ - H_n \tau_{obs}}
\label{final measure}
\end{equation}
The fact that the base of the exponential is $2$ and not some other number is an artifact of the
fact that the tree branches into two edges at each node. In a more realistic theory the base would be
$e^3,$ corresponding to the factor $e^{3}$ by which the volume grows during each e-folding. With that substitution \ref{final measure} agrees exactly with the causal-patch and light-cone measure of Bousso and collaborators, and the following conclusions can be taken straight from \cite{Bousso:2010im}. In particular \ref{final measure} can be interpreted as a joint probability for the cosmological constant $\lambda$ and the observer-time $\tau_{obs}.$ The factor $ \frac{1}{\lambda}$ favors the smallest values of $\lambda$ that are consistent with the existence of observers. The exponential factor may be thought of as a probability measure for the time of observation. As such, it favors $\tau_{obs} \sim H^{-1},$ consistent with the cosmic coincidence observation.
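To get a feel for \ref{cc measure}, here is a trivial numerical sketch, with the base of the exponential already replaced by $e^3$ and purely illustrative values of $H_n$ (in units where $\tau_{obs}=1$):
\begin{verbatim}
import numpy as np

# Sketch: relative measure over a toy list of vacua; the Hubble
# constants and units are illustrative only.

tau_obs = 1.0
H = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # hypothetical Hubble constants
measure = (1.0 / H**2) * np.exp(-3.0 * H * tau_obs)
print(measure / measure.sum())             # favors the smallest H
\end{verbatim}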
\section*{Acknowledgements}
My interest in the arrow-of-time dates back to discussions with Savas Dimopoulos in 1978. We were working on baryogenesis and together, realized that time-asymmetry was one of the necessary conditions for baryon-asymmetry. Although he may not remember it all, I am grateful to Savas for many insights into the nature of time-asymmetry.
The material in this lecture represents work done in collaboration with Daniel Harlow, Douglas Stanford, and Stephen Shenker (It should not be referenced without also referencing \cite{Harlow:2011az}). Much of what is in this lecture is due to them.
I have also benefited from stimulating discussions with Raphael Bousso.
This work is supported by the Stanford Institute for Theoretical Physics and NSF Grant 0756174.
\section{Introduction}
Deep learning (DL) was first proposed to solve problems where a set of training data was collected for centralized data processing. In recent years, with the rapid advancement in this field, its applications have extended to various industries, benefiting people's lives. However, collecting and transmitting such enormous amounts of data to centralized storage facilities is usually time-consuming, inefficient, and raises privacy concerns. Limitations in network bandwidth can introduce high latency. Moreover, the risk of personal data breaches associated with data transmission to a centralized computing resource causes data privacy concerns. Especially, with the increase of data privacy awareness in society, legal restrictions such as the General Data Protection Regulation (GDPR) \cite{1} have been promoted, making such a centralized framework even impractical.
In contrast to a centralized framework, where clients have to provide raw data to a central server for model training, in a decentralized framework the sensitive data of a client is processed directly on its local device. The concept of decentralized deep learning (DDL) was first proposed to facilitate the training of a deep network with billions of parameters using tens of thousands of CPU cores \cite{2}. A few years later, the famous federated learning (FL) was proposed by Google \cite{3}, allowing privacy-preserving collaborative learning among edge devices by leveraging on-device model training and trained model sharing. For one thing, local model training greatly reduces the latency of a centralized framework. More importantly, a large system consisting of thousands of clients improves its performance by aggregating the results from local model training, without disclosing raw training data.
Despite the success of FL, in real life, a participating local device typically necessitates certain qualifications for efficient local model training. Limitations in device memory and computation capability can greatly increase the local training time of a client, and network bandwidth limitations can increase clients' waiting time for transferring models, thus causing a delay in an FL training cycle. Furthermore, the non-independent and identically distributed (non-IID) data of clients results in time-consuming convergence of FL. To tackle the challenge of communication efficiency in DDL, approaches such as split learning (SL) and smart client selection have been proposed.
Moreover, towards a future integrated society leveraging multi-agent multi-access edge computing, it is necessary to build trust in such emerging technologies, i.e., trustworthiness. Nevertheless, recent works have demonstrated that FL may not always provide sufficient guarantees for personal data privacy and deep learning model integrity. Even in a decentralized framework like FL, an attacker can still compromise the system by injecting a trojan into either a client's local training data or its local model, and such an attack can further extend its influence to other clients through model sharing. In other cases, an attacker could even steal information from clients by observing the transmitted model gradients. To overcome these threats, defense strategies aiming to improve system robustness and detect malicious behaviors are applied in FL. To this end, there are three pillars for the development of scalable decentralized deep learning, covering FL technical fundamentals, communication efficiency, and security and privacy (trustworthiness) (Fig. \ref{sun1}).
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{sun1.png}}
\caption{Three pillars for scalable decentralized deep learning.}
\label{sun1}
\end{figure}
This survey paper is organized as follows. Section \ref{sec2} comprehensively demonstrates the technical fundamentals for facilitating DDL and relevant applications in various fields. Section \ref{sec3} presents the main challenges and promising methodologies for future scalable DDL, from the perspectives of communication efficiency under edge heterogeneity and trustworthiness. Section \ref{sec4} concludes the paper, discussing open challenges and future directions.
\section{Towards Decentralized Deep Learning}
\label{sec2}
\subsection{Multi-Access Edge Computing}
According to an annual report from Nokia in 2020 \cite{4}, a large increase in the number of broadband IoT and critical IoT devices, such as AR, VR, and cloud robotics, will be observed in the next five years. Likewise, the number of massive IoT devices like different types of meters and sensors is set to increase greatly as well. Most of these devices will be operated based on Artificial Intelligence (AI). In this regard, with the proliferation of AI-based smart devices and applications at the network edge, such as intelligent environment sensors, autonomous vehicles, health care, and the smart grid, AI is playing a key role in data processing, knowledge acquisition, and resource management. For instance, AI has been used in edge service optimization in the Internet of Vehicles (IoV) \cite{xu2022} and other compelling applications for collective intelligence in wireless networks \cite{WangJZRCH20}. Traditionally, data generated on a smart device is sent to a remote computing server for processing. Though 5G aims to provide greater connectivity for multi-type devices with a big boost in the speed of handling big data, wider coverage is still needed to facilitate efficient data processing. For this reason, a better solution to latency reduction is to combine 5G with multi-access edge computing (MEC) technology. The inextricably correlated MEC reduces latency by leveraging compute resources in the network closer to the end-users, e.g., a local server or a gateway.
\subsection{Data Privacy and Decentralized Deep Learning }
The mathematical model of the perceptron was first proposed back in 1958 \cite{5}, which is a probabilistic model for information storage and organization. The multi-layer perceptron \cite{6} adds to the practicability of neural networks, as a useful alternative to traditional statistical modeling techniques. Lecun et al. \cite{7} presented deep learning (DL) that allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Nowadays, various DL models have been developed and broadly adopted in many walks of society, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and so forth.
Moreover, there are mainly two topologies of DL for processing distributed data, i.e. centralized DL (server-oriented) and decentralized DL (client-oriented or server-less) (Fig. \ref{sun2}). A centralized or stand-alone framework leverages a central high-performance computing resource to achieve the desired model performance by collecting data from various data sources. In this case, the collected data is usually exposed to the AI algorithm on the cloud. In contrast, a decentralized framework is considered as a privacy-preserving architecture by leveraging local model training based on distributed data sources on resource-constrained devices like smartphones. Since its introduction, the decentralized framework \cite{dml} has proliferated in academia and industry. Li et al.\cite{muli} further extended the concept of the parameter server framework and demonstrated a robust, versatile, and high-performance implementation, capable of handling a diverse array of algorithms for distributed machine learning problems based on local training data. Moreover, in recent years, federated learning (FL) has become one of the most famous decentralized frameworks, which was proposed by Google initially to improve the Google Keyboard (Gboard)’s performance in next word prediction \cite{3}. The architecture of FL allows users to take full advantage of an AI algorithm without disclosing their original local training data, bridging the gap between centralized computing resources and distributed data sources. FL achieves a better model by leveraging globally shared model parameters.
\begin{figure*}
\centerline{\includegraphics[width=0.78\linewidth]{sun2.png}}
\caption{The rising of decentralized deep learning.}
\label{sun2}
\end{figure*}
Furthermore, a fully decentralized framework refers to server-less architectures based on technologies such as the blockchain and edge consensus \cite{sl, 52, 54, 61, 66, kim20}, and ad hoc network \cite{adhoc}. For instance, Swarm Learning (SL) \cite{sl} is a decentralized approach that combines edge computing, blockchain-based peer-to-peer networking, and other state-of-the-art decentralization technologies for classifying diseases with distributed medical data. Moreover, Li et al. \cite{52} presented a decentralized federated learning framework based on blockchain for the global model storage and the local model update exchange, where the local updates are encrypted and stored in blocks of the blockchain after the consensus by the committee. Similarly, Kim et al.\cite{kim20} demonstrated an end-to-end latency model of chained federated learning architecture, with the optimal block generation rate decided by communication, computation, and consensus delays.
In recent years, data privacy has been a major concern, exacerbated by social events such as the Cambridge Analytica scandal \cite{8} and FBI-Apple encryption dispute \cite{9}. Data privacy concerns associated with the centralized data processing of a traditional DL pipeline necessitate more considerations on privacy-preserving system design and data protection strategies. To this end, the decentralized framework provides a promising solution to data privacy in large-scale multi-agent collaborative learning. For instance, massively decentralized nodes can be applied to diverse use cases, such as industrial IoT \cite{10}, environment monitoring using diverse sensors \cite{11}, human behavior recognition from surveillance cameras \cite{12}, robotics, and connected autonomous vehicles control \cite{13, 14}, federated network intrusion detection across multiple parties \cite{15, 16} and so forth.
\subsection{Federated Learning from a Network System Perspective}
\subsubsection{Cross-silo and Cross-device}
The cross-silo setting of FL represents a scenario of multi-party collaborative model training (Fig. \ref{sun3}), where collected data from local devices is sent to an edge server located inside the organization for computation. In this case, an upper-level remote server is applied for further computation. Data is transparent to all clients inside the organization, but it is not exposed outside it. For instance, healthcare institutes could adopt this scheme to share medical images for identifying a rare disease \cite{18}. In this case, the cross-silo setting allows the institutes to share insights on the disease under data protection. On the other hand, a cross-device setting is a more rigorous scenario in which collected data should not leave the device. It necessitates efficient on-device computing and timely model transmission directly to a remote server.
\begin{figure}
\centerline{\includegraphics[width=\linewidth, trim=1 1 1 1,clip]{sun3.png}}
\caption{Federated learning leverages multi-party model sharing for privacy-preserving machine learning.}
\label{sun3}
\end{figure}
\subsubsection{Client Selection Policy}
Typically, to reduce the latency of waiting, at each round the Parameter Server (PS) randomly selects only a small subset of $k$ out of $m$ clients for local model training and broadcasts the current global model $w$ to the selected clients. Then, starting from $w$, each client $i$ updates its local model $w_i$ by training on its data and transmits the local model update back to the PS. In addition, other client selection policies such as cluster-based selection and reinforcement learning-based selection are adopted to reduce the time cost for global model convergence (see \ref{Section 3.1.2}).
\subsubsection{Synchronism}
There are two types of client scheduling approaches, i.e., synchronous FL and asynchronous FL. In synchronous FL, at each round, the PS waits for the completion of all allocated local training. In this case, however, the slowest local training task, caused by a relatively large data volume, computing device constraints, and so on, becomes the bottleneck of training. In contrast, asynchronous FL allows a client to upload the local update at any stage of the training. Besides, a client can offer multiple functions including local model training, network traffic transit, and so forth \cite{19, 69}.
\subsubsection{Aggregation}
As aforementioned, the next round's global model takes the value of the aggregation of all local model updates. Federated averaging (FedAvg) \cite{17} computes a weighted average as in (\ref{eq1}) to update the global model, since the volume of local training data varies from client to client (and hence so does each client's contribution).
\begin{equation}
\label{eq1}
w_{t+1} = w_t + \sum_{i\in k}\frac{n_i}{n_k}(w_t^i - w_t)
\end{equation}
where $w_t$ represents the weights of the current global model, $w_{t+1}$ is the weights of the next round's updated global model, $w_t^i$ is the weights of client $i$'s trained local model, and $n_i$ and $n_k$ represent respectively the volume of client $i$'s local training data and the total volume of training data from all the selected clients.
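The aggregation step (\ref{eq1}) can be sketched as follows; the client names, sample counts, and flattened-array representation of model weights are all illustrative and independent of any particular FL framework:
\begin{verbatim}
import numpy as np

# A minimal sketch of the FedAvg aggregation in Eq. (1). Model
# weights are flattened into numpy arrays; `updates` maps each
# (hypothetical) selected client to a pair (w_t^i, n_i).

def fedavg(w_t, updates):
    n_total = sum(n_i for _, n_i in updates.values())
    delta = sum((n_i / n_total) * (w_i - w_t)
                for w_i, n_i in updates.values())
    return w_t + delta

w_t = np.zeros(4)                                   # current global model
updates = {"client_1": (np.array([1.0, 0.0, 0.0, 0.0]), 600),
           "client_2": (np.array([0.0, 1.0, 0.0, 0.0]), 400)}
print(fedavg(w_t, updates))                         # [0.6 0.4 0.  0. ]
\end{verbatim}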
Moreover, robust aggregation strategies aim to drop a malicious update by measuring similarity among local model updates (see \ref{Section 3.2.5}). Based on each local update's integrity, only qualified updates are aggregated into the global model at each round.
\subsubsection{Deep Learning Models}
Most of the current studies and applications around FL are based on a supervised model, where the model is trained on labeled data, typically for a classification task. However, in real life, collected data is usually unlabeled, and a supervised model is not applicable. Deep learning paradigms such as unsupervised learning and reinforcement learning are not sufficiently studied in the context of FL. For instance, by leveraging FL for a reinforcement learning task in robotics, a global agent could learn multiple action policies efficiently from diverse environments at the same time \cite{20}.
\subsubsection{Client Server Network Security}
From the perspective of a network system, FL encounters threats from mainly three components of the system, i.e., the parameter server (PS), the client, and the transmission pathway. The PS is usually well secured and highly maintained compared with edge devices. Besides, the communication between the PS and a client is also commonly protected through end-to-end encryption. On the other hand, though the integrity of a client is verified before it participates in FL training, an edge device can still be intruded on by an adversary due to the relatively incomplete defense strategies taken locally. In FL, because all clients have equal access to the global model through the aggregated model broadcast at each round, a huge attack surface is provided for the adversary to compromise the system. To this end, we consider a compromised edge to be the main threat to FL systems (see also \ref{Section 3.2}).
\section{Challenges and Methodologies towards Scalable Decentralized Deep Learning}
\label{sec3}
\subsection{Communication Efficiency Under Edge Heterogeneity}
Communication efficiency is an important contributor to evaluating the performance and scalability of distributed processing. Decentralized deep learning (DDL) can reduce computation time by synchronizing the different models during training. However, this leads to an increase in communication cost as the model size increases or the convergence becomes slow. Notably, one of the largest challenges of scaling FL in real life today is that device qualifications at the edge vary. In particular, such heterogeneity lies in two main aspects, i.e., device capability and data distribution.
Firstly, for the device capability, especially in the case of cross-device FL, a DL model is usually operated on a resource-constrained mobile device such as smartphones. The capabilities of these mobile devices are varying due to hardware limitations and cost constraints. Moreover, the network bandwidth of a local area network (LAN) also greatly limits the model transmission efficiency, resulting in a delay in the decentralized learning cycle.
Secondly, for the data distribution, samples held by different clients are typically diverse, with different data sizes and distributions, i.e., non-independent and identically distributed (non-IID). For example, consider a multi-classification task with $C$ categories, where each client $k$ owns a local dataset $D^{(k)}$ that consists of samples with unbalanced labels $\{1,2,\ldots,C\}$: Client 1 has 80\% of its samples from Label 1 and Client 2 has 80\% of its samples from Label 2. Mathematically speaking, suppose that $f_w:x\rightarrow y$ denotes a supervised neural network classifier with parameters $w$, taking an input $x_i \in x$ and outputting a real-valued vector where the $j$th element of the output vector represents the probability that $x_i$ is recognized as class $j$. Given $f_w(x)$, the prediction is given by $\hat{y} = \mbox{arg}\max_j f_w(x)_j$ where $f_w(x)_j$ denotes the $j$th element of $f_w(x)$. We assume the common data distribution $p(x|y)$ is shared by all clients in FL, and client $i$ has $p_i(y)$. Then, when samples held by clients are skewed with various $p_i(y)$, $p_i(x,y)=p(x|y)p_i(y)\,s.t.\,p_i(y)\neq p_j(y)$ for all $i\neq j$. Client 1 follows $p_1(x,y)$ and Client 2 follows $p_2(x,y)$, i.e., they are non-IID. Though the random client selection policy in classical FL aims to reduce the waiting time, the non-IID local data of the selected clients can give rise to time-consuming convergence, or even failure to converge, of the global model. In this section, we demonstrate the most relevant methodologies used to spread and reduce the amount of data exchanged between the server and clients, tackling the edge heterogeneity problem (Table \ref{tab:freq}).
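As a concrete illustration of such a label-skewed split, the following toy two-label construction (not taken from any of the cited works) produces the 80/20 partition described above:
\begin{verbatim}
import numpy as np

# Sketch: the 80/20 label-skewed (non-IID) partition for a toy
# binary-label dataset. Sizes and fractions are illustrative.

rng = np.random.default_rng(0)

def skewed_partition(y, major_label, n_samples, major_frac=0.8):
    major = np.flatnonzero(y == major_label)
    minor = np.flatnonzero(y != major_label)
    n_major = int(major_frac * n_samples)
    idx = np.concatenate(
        [rng.choice(major, n_major, replace=False),
         rng.choice(minor, n_samples - n_major, replace=False)])
    return rng.permutation(idx)

y = rng.integers(0, 2, size=10_000)                 # toy label vector
client1 = skewed_partition(y, major_label=0, n_samples=1_000)
client2 = skewed_partition(y, major_label=1, n_samples=1_000)
print(np.mean(y[client1] == 0), np.mean(y[client2] == 1))   # ~0.8, ~0.8
\end{verbatim}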
\begin{table*}
\centering
\caption{Methodologies for Improving communication efficiency under Edge Heterogeneity}
\label{tab:freq}
\footnotesize
\begin{tabular}{lllll}
Challenge & Work & Year & Methodology & Application\\
Resource-Constrained & Vepakomma et al. \cite{22} & 2018 & Split Learning& Image classification\\
Edge & Nishio et al. \cite{FedCS} & 2018 & Resource scheduling & Image classification\\
& Singh et al. \cite{23} & 2019 & Split Learning& IoT, Healthcare\\
& Thapa et al. \cite{21} & 2020 & Split Learning& Healthcare, Image classification \\
& Khan et al. \cite{19} & 2020 & Stackelberg game theory& Image classification\\
Data Heterogeneity & Jeong et al. \cite{FAug} & 2018 & Federated Augmentation & Image classification\\
& Sener et al. \cite{25} & 2018 & K-Center clustering & Image classification\\
& Zhao et al. \cite{noniid} & 2018 & Data-sharing strategy & Image classification\\
& Wang et al. \cite{26} & 2020 & Reinforcement Learning & Image classification\\
& Duan et al. \cite{astraea} & 2020 & Data augmentation and rescheduling & Image classification\\
& Sun et al. \cite{13} & 2021 & Segmented Federated Learning & Cybersecurity\\
\end{tabular}
\end{table*}
\subsubsection{Resource-Constrained Edge}
Despite FL allowing each client to train its local model, the communication efficiency of FL is largely limited by client-side qualifications such as network bandwidth, device memory, computation capability, and so on. Under these circumstances, Split Learning (SL) \cite{21, 22} was proposed to facilitate model training based on edge cloud computing. In SL, a complicated DL model is partitioned into two sub-networks at a specific layer called the cut layer, and these sub-networks are trained on the client and the PS respectively (Fig. \ref{sun4}(a)). For each round, the client performs forward propagation of its local sub-network on local data, and then sends the intermediate representation of the local data at the cut layer, together with labels (vanilla Split Learning), to the PS for completing the forward propagation and the computation of the loss. Finally, back propagation is performed on the PS, and the gradient at the cut layer is sent back to the client, which completes the back propagation and updates its local sub-network. As such, for each round's training, several transmissions between the client and the PS are necessary.
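The following sketch illustrates one such training step in PyTorch; the layer sizes, optimizers, and in-process ``transmission'' are illustrative, and a real deployment would move the cut-layer tensors over the network:
\begin{verbatim}
import torch
import torch.nn as nn

# A minimal single-step sketch of vanilla split learning.

client_net = nn.Sequential(nn.Linear(20, 16), nn.ReLU())  # up to the cut layer
server_net = nn.Sequential(nn.Linear(16, 10))             # rest of the model
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)

x, y = torch.randn(32, 20), torch.randint(0, 10, (32,))

# Client: forward to the cut layer; transmit activations (and labels).
smashed = client_net(x)
sent = smashed.detach().requires_grad_()

# Server: finish the forward pass, compute the loss, backpropagate.
loss = nn.functional.cross_entropy(server_net(sent), y)
opt_s.zero_grad()
loss.backward()            # also fills sent.grad, the cut-layer gradient
opt_s.step()

# Client: receive the cut-layer gradient and finish backpropagation.
opt_c.zero_grad()
smashed.backward(sent.grad)
opt_c.step()
\end{verbatim}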
\begin{figure}
\centerline{\includegraphics[width=\linewidth, trim=1 1 1 1,clip]{sun4.png}}
\caption{(a) The architecture of the vanilla Split Learning \cite{22}. (b) A model performance comparison between FL and SL regarding the number of total model parameters and the number of total clients. The hyperbola shows the regions where one model outperforms the other regarding communication efficiency \cite{23}.}
\label{sun4}
\end{figure}
Moreover, a comprehensive comparison of communication efficiency between SL and FL was presented by Singh et al. \cite{23}. To study the relationship between communication efficiency and factors such as the total client number and model parameter number, a trade-off between the two models was demonstrated (Fig. \ref{sun4}(b)), where the hyperbola shows the regions where one model outperforms the other regarding communication efficiency, in other words, less data transmission between the client and the PS. Besides, by comparing SL and FL in real-life scenarios of smart watches with users in a diverse range from hundreds to millions, the result suggests that SL is more efficient and scalable when it comes to a relatively large number of clients and a relatively large DL model.
Furthermore, to tackle various constraints of computational resources and wireless channel conditions, Nishio et al. \cite{FedCS} demonstrated a method called FedCS. In FedCS, the PS estimates the time required for conducting several steps of FL based on resource information of clients and schedules clients such that it aggregates as many client updates as possible to accelerate performance improvement during training. It shows a significantly shorter training time compared with the classical FL. In addition, Khan et al. \cite{19} proposed an incentive-based FL framework based on the Stackelberg game theory to motivate the participation of devices in the learning process, while optimizing the client selection for minimizing the overall training cost of computation and communication.
\subsubsection{Data Heterogeneity}
\label{Section 3.1.2}
The random client selection in FedAvg works well when the data samples held by different clients are independent and identically distributed (IID) \cite{24}. Unfortunately, this scheme does not work well when applied to real-world data samples, which are typically non-IID. For this reason, an efficient client selection policy during FL training is critical for the fast convergence of the global model, instead of the random client selection policy. Sener et al. \cite{25} presented the K-Center clustering algorithm for choosing images to be adopted from a very large collection. They aim to find a subset such that the performance of the model on the labeled subset and that on the whole dataset will be as close as possible (Fig. \ref{sun5}). Furthermore, by leveraging the K-Center algorithm in FL under the non-IID setting, participating clients can be clustered into various groups based on their data distributions. Then, by carefully selecting clients from each group during training, it contributes to faster global model convergence and performance improvement \cite{26}.
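To convey the idea, the sketch below applies the standard greedy 2-approximation of the K-Center objective to hypothetical per-client label histograms; the descriptors and the greedy variant are illustrative choices rather than the exact algorithm of \cite{25}:
\begin{verbatim}
import numpy as np

# Sketch: greedy K-Center selection over per-client descriptors
# (here, hypothetical label histograms).

def k_center_greedy(features, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(features)))]
    dist = np.linalg.norm(features - features[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())               # farthest remaining point
        centers.append(nxt)
        dist = np.minimum(
            dist, np.linalg.norm(features - features[nxt], axis=1))
    return centers

label_hists = np.random.default_rng(1).dirichlet(np.ones(10), size=100)
print(k_center_greedy(label_hists, k=5))       # 5 diverse client indices
\end{verbatim}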
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth, trim=1 1 1 1,clip]{sun5.png}}
\caption{K-Center clustering algorithm aims to find a core-set (blue points) to represent the whole dataset (blue and red points) when conducting the training. \cite{25}}
\label{sun5}
\end{figure}
Similarly, a reinforcement learning (RL)-based FL on non-IID data was presented by Wang et al. \cite{26}, where an experience-driven control framework called FAVOR intelligently chooses the clients to participate in each round of FL (Fig. \ref{sun6}). This approach aims to counterbalance the bias introduced by non-IID data, thus speeding up the global model convergence. The objective of this approach is to achieve the desired model performance within the fewest rounds. In particular, deep Q-learning (DQN) is adopted to learn how to select a subset of clients at each round thus maximizing a reward computed from the current reward and expected future rewards. Besides, there are three main components w.r.t. RL, i.e., the state, the action, and the reward \cite{27}. Here the state of the environment is defined as compressed weights of the global model and local models. The available actions for the RL agent form a large space of size $\binom{K}{N},$ where $K$ is the total number of clients and $N$ is the number of selected clients at each round of FL. Finally, the reward of the DQN agent consists of the incentive from achieving high accuracy and the penalty for taking more rounds to achieve the desired performance.
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{sun6.png}}
\caption{The RL-based client selection policy for faster convergence of FL. \cite{26}}
\label{sun6}
\end{figure}
Moreover, Segmented Federated Learning (Segmented-FL) was proposed to tackle data heterogeneity in network intrusion detection \cite{13}. Participants with highly skewed non-IID network traffic data are separated into various groups for personalized federated learning based on their recent behavior. Then, each group is assigned an individual global model for the aggregation respectively. Besides, for each round's training, a new group segmentation is formed and the global model of a group is updated based on the weighted averaging of its current global model, local model updates from the group, and the other existing global models. Consequently, it shows that Segmented-FL for network intrusion detection with massively distributed data sources outperforms the classical FL.
In addition, Zhao et al. \cite{noniid} presented a data-sharing strategy to improve training on non-IID data by creating a small data subset that is globally shared between all the edge devices. The experiments show that accuracy can be increased by $\sim$30\% for the CIFAR10 dataset with only 5\% globally shared data, compared with the accuracy of FL. Likewise, Jeong et al. \cite{FAug} proposed federated augmentation (FAug), where each device trains a generative model and thereby augments its local data towards yielding an IID dataset. The result shows around 26$\times$ less communication overhead for achieving the desired test accuracy compared to FL.
\subsection{Trustworthiness}
\label{Section 3.2}
A decentralized framework such as federated learning (FL) encounters threats from malicious AI. As aforementioned in Section \ref{sec2}, threats from an edge client are more common and critical for FL, compared with the security of the server and the data transmission in between. FL extends the surface for the attacker to compromise one or several participants. In this regard, an adversary can intrude into such decentralized systems through a compromised edge as a backdoor, by either manipulating local training data \cite{28, 29, 30, 31} or replacing a local model update \cite{28, 31, 32}, triggering attacker-desired behavior. This kind of attack extends its influence to other clients in the systems through malicious model sharing with poisoned model weights (see \ref{Section 3.2.1}, \ref{Section 3.2.2}, \ref{Section 3.2.3}, \ref{Section 3.2.4}).
By contrast, the controversy surrounding threats to FL has also been intensively discussed in recent years. Defense strategies can mainly be separated into two categories, i.e., robust aggregation and anomaly detection. Robust aggregation is related to improving the resilience of an aggregation algorithm (e.g., FedAvg) by either carefully selecting the local models for aggregation \cite{33, 34, 35} or adding noise to the aggregated model for counterbalancing the malicious update \cite{31, 37, 38}. On the other hand, various anomaly detection approaches are leveraged for identifying a malicious local model update, including comparing the similarity between local updates and finding the ones greatly diverging from the others \cite{39, 40, 41, 42, 64}, applying a cloud validation set with a small number of data samples from each client \cite{61}, and so on (see \ref{Section 3.2.5}).
\subsubsection{Threat Models}
\label{Section 3.2.1}
Our taxonomy of threat models (Fig. \ref{sun7}) categorizes the main attack methodologies in decentralized deep learning systems.
\begin{figure*}
\centerline{\includegraphics[width=0.8\linewidth, trim=2 2 2 2,clip]{sun7.png}}
\caption{Our taxonomy for threat models in decentralized deep learning systems.}
\label{sun7}
\end{figure*}
Given the level of an adversary's prior knowledge of the compromised clients, attacks on FL can be divided into white-box attacks and black-box attacks. In the black-box setting, the attacker has access only to a client's local dataset, and the objective is to replace that dataset with compromised backdoor data; typical black-box attacks include backdoor attacks and label-flipping attacks \cite{labelflipping}. In the white-box setting, the attacker is assumed to control both the local data and the model of a client, and can therefore send back to the PS any malicious model it prefers. One typical white-box attack is model replacement.
Moreover, depending on the attacker's objective, an attack is either untargeted, aiming to reduce the accuracy of the FL model, or targeted, aiming to make the FL model output an adversary-desired label. According to the attack timing, an attack can be mounted either at the training phase \cite{28} or at the inference phase \cite{44}, where the training phase refers to model training in FL and the inference phase refers to applying the converged model.
In addition, the continuity of an attack also influences its performance: a single-shot attack usually involves one malicious participant who mounts the attack in a single round of training, aiming to inject a long-lasting malicious trojan into the model, whereas a repeated attack usually involves one or more malicious participants and is mounted over multiple rounds of training.
We offer an overview of the most effective attacks on FL in the following several sections, covering backdoor attacks, model replacement, and information stealing.
\subsubsection{Backdoor Attacks}
\label{Section 3.2.2}
The goal of a backdoor attack is to corrupt the other clients' model performance on specific sub-tasks. When an attacker has access only to a client's local data (a black-box attack), a trojan backdoor \cite{28, 29} corrupts a subset of the local training data by adding a trojan pattern to the data and relabeling them as the target class (Fig. \ref{sun8}). Lin et al. \cite{30} instead adopted compositions of existing benign features and objects in a scene as the trigger, leveraging a mixer to generate mixed, poisonous samples and then training the local model on these samples together with the original benign data. Furthermore, semantic backdoors cause a model to produce an attacker-chosen output on unmodified images. For example, Wang et al. demonstrated an edge-case backdoor that targets prediction sub-tasks which are unlikely to be found in the training or test datasets but are nevertheless natural \cite{31}. To conduct the attack, they trained the local model on a mix of backdoor and benign training data with a carefully chosen ratio. The results show that this attack can bypass simple norm-based defense algorithms such as norm bounding \cite{40}.
\begin{figure}
\centerline{\includegraphics[width=\linewidth, trim=1 1 1 1,clip]{sun8.png}}
\caption{Samples of various types of trojan backdoors. \cite{29}}
\label{sun8}
\end{figure}
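As a rough illustration of the data-manipulation step, the following sketch stamps a small trigger patch on a fraction of a local image dataset and relabels the poisoned samples; the patch size, its location, and the array layout are arbitrary assumptions, not a specific attack from the literature.
\begin{verbatim}
import numpy as np

def poison(images, labels, target_class, frac=0.1, patch=1.0):
    """Add a 4x4 trigger in the corner of a random fraction of the
    images (shape (n, H, W)) and relabel them as the target class."""
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(frac * len(images)),
                           replace=False)
    images[idx, -4:, -4:] = patch
    labels[idx] = target_class
    return images, labels
\end{verbatim}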
\subsubsection{Model Replacement}
\label{Section 3.2.3}
Model replacement is a white-box attack \cite{28} that replaces the global model with a malicious one. As mentioned above, in FedAvg the PS updates the global model by performing a weighted average of all locally trained models. A model replacement attack therefore submits the malicious model update $w_t^m=\frac{n_k}{n_{adv}}(w_t^{adv}-w_t) + w_t$ instead of $w_t^{adv}$, where $w_t^{adv}$ denotes a local model poisoned by methods such as the backdoor attacks above and $n_{adv}$ denotes the number of samples owned by the adversary. Since the attack is usually mounted after the global model converges, when additional local training no longer improves the global model and its loss settles within an error range around the optimum, each honest client $i$ obtains an updated local model $w_t^i$ approximately equal to the current global model $w_t$, i.e., $w_t^i-w_t\approx 0$. Equation (\ref{eq2}) sketches why the model replacement attack succeeds.
\begin{equation}
\label{eq2}
\begin{gathered}
w_{t+1}= w_t + \sum_{i \in k}\frac{n_i}{n_k} (w_t^i - w_t)\\
= w_t+\frac{n_{1}}{n_k}(w_t^1 - w_t)+\cdots+ \frac{n_{adv}}{n_k} (w_t^m - w_t)\\
\approx w_t+\frac{n_{adv}}{n_k} (w_t^m - w_t)\\
= w_t+\frac{n_{adv}}{n_k}(\frac{n_k}{n_{adv}}(w_t^{adv}-w_t) + w_t - w_t)\\
=w_t^{adv}
\end{gathered}
\end{equation}
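In code, the crafted update of Eq.(\ref{eq2}) amounts to a single rescaling of the poisoned model; the sketch below assumes models stored as dictionaries of parameter arrays, which is an illustrative convention rather than a specific framework's API.
\begin{verbatim}
def model_replacement(w_global, w_adv, n_total, n_adv):
    """Return w^m = (n_k/n_adv) * (w^adv - w_t) + w_t, so that FedAvg
    outputs w^adv when the honest updates approximately cancel."""
    scale = n_total / n_adv
    return {name: scale * (w_adv[name] - w_global[name]) + w_global[name]
            for name in w_global}
\end{verbatim}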
The combination of semantic backdoors and model replacement yields a long-lasting and invisible attack on the system. For instance, Bagdasaryan et al. \cite{28} demonstrated a constrain-and-scale attack on the CIFAR-10 dataset, aiming to poison the global model using a set of car images with certain features (racing stripes, green color, and a striped background wall) as triggers. The constrain-and-scale objective is defined in (\ref{eq3}). They mounted a single-shot model replacement attack in which one malicious participant was selected for a single round of FL. By training the malicious model against the prediction accuracy on both the main classes and the backdoor classes, as well as against the output of an anomaly detection algorithm, the attack achieves the desired malicious performance while evading anomaly detection. The results show that such attacks can bypass anomaly detection and retain high backdoor accuracy for many rounds after the single-shot attack.
\begin{equation}
\label{eq3}
L_{model} = \alpha L_{class} + (1 - \alpha)L_{ano}
\end{equation}
where $L_{class}$ captures the accuracy on both the main and backdoor tasks, $L_{ano}$ represents the performance of an anomaly detection algorithm taken at the PS, and $\alpha$ controls the importance of evading anomaly detection.
\subsubsection{Information Stealing}
\label{Section 3.2.4}
By leveraging generative adversarial networks (GANs) \cite{gan}, an adversary can reconstruct the training data of another client in FL simply by downloading the global model \cite{45}. A GAN pits a discriminator against a generator: the discriminator is trained to distinguish the adversarial samples drawn from the generator from real data of the targeted class, while the generator is trained to fool the discriminator. At each round of FL, the adversary replaces the discriminator of its GAN with the latest global model from the parameter server. The generator then produces adversarial samples from Gaussian noise and updates itself based on the discriminator's inference results for the targeted data class. Through this adversarial training, the generator produces increasingly sharp samples of the targeted class, which the adversary uses to train its local DL model. The adversary's malicious model parameters are then transmitted to the victim through model aggregation. These compromised parameters lure the victim into exposing more detail about its training data, since the victim must spend more effort in model training to distinguish real data from fake data. Consequently, the victim's model updates allow the adversary to generate ever sharper adversarial samples that reveal the raw training data.
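The attack loop can be sketched as follows; every helper below (\texttt{download\_global\_model}, \texttt{generator}, \texttt{train\_on}, \texttt{upload}) is an assumed placeholder for the adversary's tooling, not a real library call, and the loop is a simplified reading of \cite{45}.
\begin{verbatim}
# Hypothetical sketch of the GAN-based reconstruction loop; all
# helpers are assumed placeholders, not an actual API.
for fl_round in range(num_rounds):
    discriminator = download_global_model()   # victim-trained classifier
    fakes = generator.sample(noise_batch)     # candidates for target class
    generator.train_step(discriminator, fakes, target_class)
    # mislabel the sharpened fakes to lure the victim into revealing
    # finer details of the real class in its next local update
    local_model = train_on(mislabel(fakes), base_model=discriminator)
    upload(local_model)
\end{verbatim}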
In addition, Nasr et al. \cite{8835245} presented a comprehensive analysis of white-box membership inference attacks, in which only information correlated with the local training data leaks through model sharing. Unlike the reconstruction attack above, whose objective is to reconstruct the victim's raw training data, this kind of attack aims to infer whether a specific data sample was used in the victim's local model training.
\subsubsection{Defense Models}
\label{Section 3.2.5}
Defense strategies against the threat models in FL can mainly be separated into two categories, i.e., robust aggregation and anomaly detection. For robust aggregation, instead of the random client selection policy of FedAvg, other selection approaches have been proposed against potentially malicious local updates, such as Krum \cite{33} and the trimmed mean \cite{34}. Krum selects as the global model the single local update, out of $m$ local models, that is most similar to the others, based on pairwise Euclidean distances: for each local model it computes the sum of distances to its closest $m-c-2$ local models, where $m$ is the total number of clients and $c$ is the assumed maximum number of compromised clients, and picks the model with the smallest score. The trimmed mean, on the other hand, sorts all local updates at each round, i.e., $w_{1j}$, $w_{2j}$, $\cdots$, $w_{mj}$, where $w_{ij}$ represents the $j$th round's model of the $i$th client; after removing the largest and smallest $\beta$ of them, the mean of the remaining $m - 2\beta$ models is used as the $j$th round's global model. Another important robust-aggregation strategy is differential privacy (DP), which limits the influence of a malicious update on model aggregation by adding a small amount of Gaussian noise. Cloud-side DP, where the noise is added directly to the aggregated global model, bounds the success of attacks such as information stealing \cite{31, 37}; client-side DP, where the noise is added to each client's local update, aims at an optimized tradeoff between defense efficiency and model performance on the main tasks of FL \cite{38}.
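For concreteness, minimal sketches of Krum and the coordinate-wise trimmed mean are given below; they are illustrative simplifications of \cite{33, 34}, assuming each update is a NumPy array.
\begin{verbatim}
import numpy as np

def krum(updates, c):
    """Select the update with the smallest summed squared distance to
    its m - c - 2 nearest neighbours (m = number of updates)."""
    m = len(updates)
    flat = np.stack([u.ravel() for u in updates])
    d2 = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1) ** 2
    scores = [np.sort(row)[1:m - c - 1].sum() for row in d2]  # skip self
    return updates[int(np.argmin(scores))]

def trimmed_mean(updates, beta):
    """Per coordinate, drop the beta largest and beta smallest values,
    then average the remaining m - 2*beta."""
    flat = np.sort(np.stack([u.ravel() for u in updates]), axis=0)
    return flat[beta:len(updates) - beta].mean(axis=0)
\end{verbatim}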
Furthermore, norm bounding and anomaly detection are technologies adopted to neutralize malicious updates. In norm bounding, one monitors the norm of each local update, e.g., the $\ell_2$ length of its parameter vector. Since the malicious updates produced by backdoor and model replacement attacks tend to have large norms compared with those of honest clients, an efficient defense is to clip or drop updates whose norm exceeds a certain threshold \cite{40}. Anomaly detection in FL is usually based on comparing the similarity among local updates. For example, Cao et al. \cite{39} presented a Euclidean distance-based detection of malicious local models: they showed that if a local model lies within a certain distance of more than half of the local models, it is probably benign. Tolpegin et al. \cite{41} proposed a PCA-based defense against label-flipping attacks, plotting standardized parameters of local updates to separate benign from malicious ones. Additionally, Zhao et al. \cite{42} presented a poisoning defense using generative adversarial networks: by reconstructing data from a local update and feeding the generated data to each client's model, they take the label with the most votes as the true label for each input, and then split the local updates into a benign cluster and a malicious cluster according to their prediction accuracy on the generated data with the obtained labels.
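The norm-bounding rule mentioned at the start of the previous paragraph reduces to a one-line clipping operation; the sketch below assumes a flattened update vector and follows the clipping variant of \cite{40}.
\begin{verbatim}
import numpy as np

def norm_bound(update, threshold):
    """Scale the update down so its L2 norm never exceeds threshold;
    a stricter variant simply drops over-threshold updates."""
    norm = np.linalg.norm(update)
    return update if norm <= threshold else update * (threshold / norm)
\end{verbatim}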
\section{Concluding Remarks}
\label{sec4}
In multi-access edge computing, decentralized deep learning (DDL) is considered to facilitate privacy-preserving knowledge acquisition from enormous amounts of diverse edge data. This survey provides an overview of DDL from the two novel perspectives of communication efficiency and trustworthiness, presenting state-of-the-art technologies for the challenges of leveraging DDL in social practice.
Federated learning, as a classical solution to the data privacy issues of centralized learning, leverages local model training for collective machine learning among multiple clients. However, real-life challenges such as edge heterogeneity and adversarial attacks have greatly limited the capability and scalability of this technology. Given the limited capability of an edge device, the convergence of a complicated model is costly and time-consuming; a more compatible architecture appears to be split learning, which bridges the gap between a centralized computing resource and decentralized data sources. Besides, data heterogeneity is a common problem with real-world data, and a more adaptive client selection policy could benefit the fast convergence of FL. Moreover, the topic of trustworthiness in DDL has been attracting explosive growth of interest in recent years. We summarized the latest threat models in DDL based on various criteria and provided our novel taxonomy. Finally, we discussed some of the most promising defense strategies against such threats on FL. There are still other important topics not covered in this survey, including mitigating algorithmic bias in DDL \cite{56, 62} and incentive mechanisms for mobile device participation \cite{19, 68}.
The current rapid advancement and broad application of deep learning in today’s society necessitates building trust in such emerging technology. The privacy-preserving DDL offers practical solutions to future large-scale multi-access edge computing. The breadth of papers surveyed suggests that DDL is being intensively studied, especially in terms of privacy protection, edge heterogeneity, and adversarial attacks and defenses. Furthermore, the future trends of DDL put weight on topics such as efficient resource allocation, asynchronous communication, and fully decentralized frameworks.
\section*{Acknowledgment}
The authors would like to thank the anonymous reviewers for helpful comments.
\bibliographystyle{IEEEtran}
|
2,877,628,088,506 | arxiv | \section{The CSL model}\label{sec:tcm}
Using the language of non-relativistic quantum field theory,
the CSL model is formulated in terms of a stochastic differential equation in the Fock space
associated with the system \cite{Ghirardi1990}.
Given different types of particles, where the type $j$ has mass $m_j$,
the mass-proportional CSL model \cite{Pearle1994} is defined by
\begin{eqnarray}\label{eq:sdecsl}
\mathrm{d} \ket{\varphi_t} &=& \left[-\frac{i}{\hbar}\hat{H} \mathrm{d} t + \frac{\sqrt{\gamma}}{m_0} \int \mathrm{d} {\bf y} [\hat{M}({\bf y})-\langle M({\bf y}) \rangle_t ] \mathrm{d} W_t({\bf y}) \right. \nonumber \\
&&\left.- \frac{\gamma}{2 m^2_0} \int \mathrm{d}{\bf y} [\hat{M}({\bf y})-\langle M({\bf y}) \rangle_t ]^2 \mathrm{d} t \right] \ket{\varphi_t},
\end{eqnarray}
where $\hat{H}$ is the standard quantum Hamiltonian, $\langle A \rangle_t \equiv \bra{\varphi_t} \hat{A} \ket{\varphi_t}$, $m_0$ is a reference mass (usually the mass of a nucleon)
and $\hat{M}({\bf y})$ is a smeared mass density operator:
\begin{equation}
\hat{M}({\bf y}) = \sum_j m_j \int \frac{\mathrm{d} {\bf x}}{(\sqrt{2 \pi} r_C)^3} e^{-\frac{|{\bf y}-{\bf x}|^2}{2 r^2_C}} \hat{\psi}_j^{\dag}({\bf x}) \hat{\psi}_j({\bf x}).
\end{equation}
Here, $\hat{\psi}_j^{\dag}({\bf x})$ and $\hat{\psi}_j({\bf x})$ are, respectively, the creation and the annihilation
operator of a particle of type $j$ in the point ${\bf x}$,
while $W_t({\bf y})$ is an ensemble of independent Wiener processes,
one for each point in space. The model
is characterized by two parameters: $\gamma$,
which sets the strength of the collapse process, and $r_C$,
which determines the threshold above which spatial superpositions are suppressed.
The choice of the numerical values for these parameters is of course ultimately dictated by the agreement
with the experimental data; the originally proposed values are \cite{Ghirardi1990} $r_C = 10^{-7} \text{m}$ and $\gamma = 10^{-30} \text{cm}^{3}\text{s}^{-1}$.
\begin{figure*}[!ht]
\hskip-3cm{\bf (a)}\hskip5cm{\bf (b)}\hskip6cm{\bf (c)}\\
\includegraphics[width=.65\columnwidth]{fig1a}\hspace{0.3cm}\includegraphics[width=.65\columnwidth]{fig1b}\hspace{0.3cm}\includegraphics[width=.65\columnwidth]{fig1c}
\caption{{\bf (a)} and {\bf (b)} Evolution of the position probability distribution $|\varphi_t(x)|^2 = |\braket{x}{\varphi_t}|^2$ in the CSL model in one dimension, for one nucleon
initially in a balanced
superposition of two gaussian states with equal variance $\sigma^2$ and centered, respectively, in $\alpha$ and $-\alpha$.
The probability distribution is plotted for a single realization of the random noise and at times
$\lambda t = 0$ (black solid line), $\lambda t = 0.1$ (blue dot-dashed line),
$\lambda t = 0.3$ (red dashed line) and $\lambda t = 0.4$ (green dotted line) in {\bf (a)}, while $\lambda t = 0.5$ (black solid line), $\lambda t = 0.6$ (blue dot-dashed line),
$\lambda t = 0.8$ (red dashed line) and $\lambda t = 0.9$ (green dotted line) in {\bf (b)}; $\sigma/r_C = 0.55$ and $\alpha/r_C=2.5$.
{\bf (c)} Time evolution of the position variance, $(\Delta_t x)^2 = \bra{\varphi_t} \hat{x}^2 \ket{\varphi_t}-(\bra{\varphi_t} \hat{x} \ket{\varphi_t})^2$, for different realizations of the noise field.
We have applied the Euler-Maruyama method \cite{Kloeden1992,Semina2014} to Eq.(\ref{eq:sdecsl}), for $\hat{H}=0$ and time step $\lambda \Delta t = 0.01$.
As discussed in the text, the rate $\lambda$ has to be replaced by the rate $\Gamma$ defined in Eq.(\ref{eq:G})
if a macroscopic object is taken into account, in accordance with the amplification mechanism.}
\label{fig:1}
\end{figure*}
The mass density operators $\hat{M}({\bf y})$ in Eq.(\ref{eq:sdecsl})
induce
a collapse of the wavefunction $\ket{\varphi_t}$
around the common eigenvectors of the position operators of the particles
composing the system \cite{Ghirardi1990}.
Hence, the asymptotic wavefunction is sharply localized around
defined positions, excluding possible spatial superpositions.
The collapse rate for a microscopic system is given by $\lambda = \gamma/(4 \pi r^2_C)^{3/2} \approx 2.2 \times 10^{-17} \text{s}^{-1}$.
Such a small value guarantees that the spatial localization
due to the noise field can be safely neglected if a microscopic system
is taken into account.
Now instead, consider a macroscopic rigid body in a superposition
of two states distant more than $r_C$.
Its center of mass collapses with an effective rate \cite{Adler2007,Bassi2014}
\begin{equation}\label{eq:G}
\Gamma = \lambda n^2 \tilde{N},
\end{equation}
where $n$ is the number of constituents of the body
contained in a volume $r_C^3$ and $\tilde{N}$
denotes how many such volumes are held in the macroscopic body.
This relation clearly shows the amplification mechanism, which is at the basis
of every collapse model. The localization induced by the noise field
grows with the size of the system, so that the center of mass
of any macroscopic object behaves, for all practical purposes, according to classical mechanics.
The peculiar property of the CSL model is the quadratic dependence of the rate $\Gamma$
on the number of constituents, which is a direct
consequence of the action of the noise field on identical particles \cite{Bassi2013}.
The main features of the CSL model are summarized in Fig.\ref{fig:1},
where we represent the time evolution of the position probability distribution of one particle, which is
initially in a superposition of two gaussian states.
The wavefunction is subjected continuously to the action of the noise, which suppresses the superposition between the two gaussians, leading to a gaussian state
localized around one of the two initial peaks, in a time scale fixed by the collapse rate, see Fig.\ref{fig:1} {\bf (a)} and {\bf (b)}.
The diffusive nature of the dynamics in the CSL model is clearly illustrated by the
time-evolution of the position variance, see Fig.\ref{fig:1} {\bf (c)}.
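A simulation in the spirit of Fig.\ref{fig:1} can be sketched in a few lines of code. The sketch below is an illustrative discretization in units $r_C = 1$, $\lambda = 1$, with the kernel prefactors folded into $\lambda$ and $\hat{H}=0$; grid size, seed and step count are arbitrary choices, so it is not the exact numerical setup behind the figure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-8.0, 8.0, 256)         # position grid, units of r_C
dx = x[1] - x[0]
lam, dt = 1.0, 0.01                     # collapse rate and time step
alpha, sigma = 2.5, 0.55                # parameters of Fig. 1
phi = (np.exp(-(x - alpha)**2 / (4 * sigma**2))
       + np.exp(-(x + alpha)**2 / (4 * sigma**2))).astype(complex)
phi /= np.sqrt((np.abs(phi)**2).sum() * dx)

g = np.exp(-(x[:, None] - x[None, :])**2 / 2.0)    # kernel g(y - x)

for _ in range(100):                     # Euler-Maruyama steps
    prob = np.abs(phi)**2
    m_mean = (g * prob[None, :]).sum(axis=1) * dx  # <M(y)>_t
    fluct = g - m_mean[:, None]                    # M(y) - <M(y)>_t
    dW = rng.normal(0.0, np.sqrt(dt * dx), size=x.size)
    phi += (np.sqrt(lam) * (fluct * dW[:, None]).sum(axis=0)
            - 0.5 * lam * dt * (fluct**2).sum(axis=0) * dx) * phi
    phi /= np.sqrt((np.abs(phi)**2).sum() * dx)    # renormalize
\end{verbatim}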
A relevant drawback of the original CSL model,
as well as of most collapse models, is that the average kinetic
energy of the quantum system diverges on the long time scale.
The model predicts that the energy of a particle with mass $m$ increases linearly in time
with a rate
$
\xi = 3\hbar^2 m \lambda/(4 r^2_C m^2_0).
$
As will become clear from the following analysis,
the reason for such an energy increase is precisely due to the absence
of dissipation within the model.
The noise acts like an infinite temperature background, steadily increasing
the energy of the system.
\section{Dissipative extension of the CSL model}\label{sec:deo}
\subsection{Definition of the model via a non-linear stochastic differential equation}
Now that we have clarified the problem of the CSL model we want to work out,
as well as the features that must be preserved,
we are in the position to formulate a new, dissipative CSL model.
As for the original model, the most compact way to do so is to
define a proper stochastic differential equation.
Specifically, we consider the following non-linear stochastic differential equation:
\begin{eqnarray}\label{eq:sdecsld}
&&\mathrm{d} \ket{\varphi_t} = \left[-\frac{i}{\hbar}\hat{H} \mathrm{d} t + \frac{\sqrt{\gamma}}{m_0} \int \mathrm{d} {\bf y} [\hat{\mathbb{L}}({\bf y})-r_t({\bf y})] \mathrm{d} W_t({\bf y}) \right. \\
&&\left.- \frac{\gamma}{2 m^2_0} \int \mathrm{d} {\bf y} [\hat{\mathbb{L}}^{\dag}({\bf y})\hat{\mathbb{L}}({\bf y})+r^2_t({\bf y})-2r_t({\bf y})\hat{\mathbb{L}}({\bf y})] \mathrm{d} t \right] \ket{\varphi_t}, \nonumber
\end{eqnarray}
with $r_t({\bf y}) \equiv \bra{\varphi_t}(\hat{\mathbb{L}}^{\dag}({\bf y})+\hat{\mathbb{L}}({\bf y}))\ket{\varphi_t}/2$ and
\begin{eqnarray}
\hat{\mathbb{L}}({\bf y}) &\equiv& \sum_j \frac{m_j}{(1+k_j)^3} \int \frac{\mathrm{d} {\bf x}}{(\sqrt{2 \pi} r_C )^3} e^{-\frac{|{\bf y}-{\bf x}|^2}{2 r^2_C(1+k_j)^2}} \nonumber\\
&\times& \hat{\psi}_j^{\dag}({\bf x}) \hat{\psi}_j\left(\frac{1-k_j}{1+k_j}{\bf x}+\frac{2 k_j}{1+k_j}{\bf y}\right),\label{eq:ly}
\end{eqnarray}
where
\begin{equation}\label{eq:kj}
k_j \equiv \frac{\hbar}{2 m_j v_{\eta}r_C}.
\end{equation}
The inclusion of dissipation calls for the introduction of a new parameter, $v_{\eta}$, with the dimension of a velocity. Crucially, this parameter is related to the temperature of the noise field, as will be shown later (see Eq.(\ref{eq:T})), where the numerical choice of $v_{\eta}$ will also be discussed.
The structure of the stochastic differential equation (\ref{eq:sdecsld}) generalizes that of Eq.(\ref{eq:sdecsl})
to the case of non self-adjoint operators \cite{Barchielli2009,Bassi2013b}.
Indeed, for $v_{\eta}\rightarrow \infty$, so that $k_j \rightarrow 0$, one recovers the original CSL model.
The physical meaning of the operator $\hat{\mathbb{L}}({\bf y})$ in Eq.(\ref{eq:ly}) is better understood
by taking into account also its momentum representation. One has
\begin{eqnarray}
\hat{\mathbb{L}}({\bf y}) &=& \sum_j \frac{m_j}{(2 \pi \hbar)^3}
\int \mathrm{d} {\bf P} \mathrm{d} {\bf Q}\, \hat{a}^{\dag}_j({\bf P}+{\bf Q}) \, e^{-\frac{i}{\hbar} {\bf Q} \cdot {\bf y}} \nonumber\\
&&\times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left|(1+k_{j}){\bf Q}+ 2 k_{j} {\bf P}\right|^2\right)
\hat{a}_j({\bf P}), \label{eq:lymom}
\end{eqnarray}
where $\hat{a}^{\dag}_j({\bf P})$ and $\hat{a}_j({\bf P})$ are, respectively, the creation
and annihilation operator of a particle of the type $j$ with momentum ${\bf P}$.
By Eqs.(\ref{eq:ly}) and (\ref{eq:lymom}), we see that the action of the
collapse noise can be compared to that of an external potential which depends not only on the position, but also on the momentum of the system,
thus inducing dissipation.
In particular, since the exchanged momentum $Q_i$ in the spatial direction $i= x, y, z$ has a gaussian distribution peaked around $- 2P_i k_j/(1+k_j)$,
the action of the noise will suppress high momenta,
so that the mean kinetic energy of the system, as well as the mean momentum, is subject to relaxation.
This is explicitly shown in Sec. \ref{sec:era}.
Note, however, that contrary to any external field, the collapse noise induces an anti-hermitian coupling with matter,
which is necessary in order to induce localization. In addition, the introduction of dissipation
also leads to an hermitian contribution to the coupling, see Sec. \ref{sec:haa}.
\subsection{Linear stochastic differential equation}
In order to study the solutions and properties of Eq.(\ref{eq:sdecsld}), it is
often convenient to deal with an equivalent linear stochastic differential equation \cite{Bassi2003,Bassi2005}.
Here, we briefly sketch the standard procedure which provides
the linear stochastic differential equation associated with the non-linear one in the form given by Eq.(\ref{eq:sdecsld}).
For a complete treatment, the reader is referred to \cite{Barchielli2009}.
Consider a non-linear stochastic differential equation as Eq.(\ref{eq:sdecsld}).
Recall that $W_t({\bf x})$ denotes an ensemble of independent Wiener processes
defined on a common probability space $(\Omega, \mathcal{F}, \mathbbm{P})$.
Let $B_t(\bf x)$ be the ensemble of processes given by
\begin{equation}
B_t({\bf x}) = W_t({\bf x}) + 2 \int_0^t \mathrm{d} s \, r_s({\bf x}),
\end{equation}
where $r_s({\bf x})$ has been defined after Eq.(\ref{eq:sdecsld}).
Now, by means of the Girsanov theorem one can define a new probability $\mathbbm{Q}$ on
$(\Omega, \mathcal{F})$ such that $B_t({\bf x})$ is an ensemble of Wiener processes under $\mathbbm{Q}$ \cite{Barchielli2009}.
In addition, one can define a random vector $\ket{\psi_t}$ such that
\begin{equation}\label{eq:norm}
\ket{\varphi_t} = \frac{\ket{{\psi}_t}}{\|{\psi}_t \|},
\end{equation}
where $\ket{\varphi_t}$ satisfies Eq.(\ref{eq:sdecsld}), while $\ket{{\psi}_t}$ satisfies \cite{Barchielli2009}
\begin{eqnarray}
\mathrm{d} \ket{{\psi}_t} &=& \left[-\frac{i}{\hbar}\hat{H}\mathrm{d} t-\frac{\gamma}{2 m^2_0}\int \mathrm{d} {\bf y}\hat{\mathbb{L}}^{\dag}({\bf y})\hat{\mathbb{L}}({\bf y}) \mathrm{d} t \right. \nonumber\\
& &\left.+ \frac{\sqrt{\gamma}}{m_0} \int \mathrm{d} {\bf y} \hat{\mathbb{L}}({\bf y}) \mathrm{d} B_t({\bf y}) \right] \ket{{\psi}_t}.
\end{eqnarray}
This is the linear stochastic differential equation associated with the dissipative CSL model.
\subsection{Hermitian and anti-hermitian coupling of the noise field}\label{sec:haa}
In this paragraph, we show that Eq.(\ref{eq:sdecsld}) can be written in a way such that
the coupling between the classical collapse noise and quantum matter is made explicit.
In particular, the introduction of dissipation within the CSL model
leads to an hermitian contribution to the coupling of the collapse noise with quantum matter,
in addition to a contribution in the usual (for collapse models) anti-hermitian form \cite{Bassi2013,Adler2014}.
It is here convenient to use the Stratonovich formalism \cite{Arnold1971,Bassi2003}
and to consider the decomposition of $\hat{\mathbb{L}}({\bf y})$ into its hermitian and anti-hermitian part,
$\hat{\mathbb{L}}({\bf y}) = \hat{\mathbb{L}}^{(a)}({\bf y}) + i \hat{\mathbb{L}}^{(b)}({\bf y})$, with $\hat{\mathbb{L}}^{(a)}({\bf y})$
and $\hat{\mathbb{L}}^{(b)}({\bf y})$ self-adjoint operators.
Thus,
the non-linear stochastic equation given by Eq.(\ref{eq:sdecsld}) leads to
\begin{eqnarray}
\frac{\mathrm{d} \ket{\varphi_t}}{\mathrm{d} t} &=& \left[-\frac{i}{\hbar}\left(\hat{H}- \frac{\sqrt{\gamma} \hbar}{m_0} \int \mathrm{d} {\bf y} \hat{\mathbb{L}}^{(b)}({\bf y}) w({\bf y},t) + R \right)\right. \nonumber\\
&&+\left. \frac{\sqrt{\gamma}}{m_0} \int \mathrm{d} {\bf y} \hat{\mathbb{L}}^{(a)}({\bf y}) w({\bf y},t) + S \right] \ket{\varphi_t},\label{eq:eq}
\end{eqnarray}
where $w({\bf x}, t)$ is the white-noise field, which can be formally written as $w({\bf x},t) = \mathrm{d} W_t({\bf x})/ \mathrm{d} t$
and satisfies the relations
\begin{equation}
\mathbb{E}[w({\bf x},t)] = 0 \qquad \mathbb{E}[w({\bf x},t) w({\bf y},t')] = \delta({\bf x} - {\bf y})\delta(t-t'),
\end{equation}
$\mathbb{E}$ being the stochastic average under the reference probability $\mathbbm{P}$.
Moreover, $R$ is an hermitian contribution coming from the passage to the Stratonovich formalism and reads
\begin{eqnarray}
R &=& - \frac{\gamma \hbar}{m^2_0} \int \mathrm{d} {\bf y} \left( \hat{\mathbb{L}}^{(b)}({\bf y})\hat{\mathbb{L}}^{(a)}({\bf y}) \right.\nonumber\\
&&\left. -2 \hat{\mathbb{L}}^{(b)}({\bf y})\langle \hat{\mathbb{L}}^{(a)}({\bf y}) \rangle_t
- \langle \hat{\mathbb{L}}^{(b)}({\bf y}) \hat{\mathbb{L}}^{(a)}({\bf y}) \rangle_t \right).
\end{eqnarray}
On the other hand, $S$ includes the non-linear contributions preserving the norm of the state vector and is given by
\begin{eqnarray}
S &=& - \frac{\gamma}{m^2_0} \int \mathrm{d} {\bf y} \left[\left( \hat{\mathbb{L}}^{(a)}({\bf y})- \langle \hat{\mathbb{L}}^{(a)}({\bf y}) \rangle_t \right)^2 + ( \langle \hat{\mathbb{L}}^{(a)}({\bf y}) \rangle_t )^2\right. \nonumber \\
&&\left.- \langle (\hat{\mathbb{L}}^{(a)}({\bf y}))^2 \rangle_t \right]
- \frac{\sqrt{\gamma}}{m_0} \int \mathrm{d} {\bf y} \langle\hat{\mathbb{L}}^{(a)}({\bf y})\rangle_t w({\bf y},t);
\end{eqnarray}
compare with Eq.(7.43) in \cite{Bassi2003}.
Equation (\ref{eq:eq}) describes the coupling between the classical field $w({\bf x}, t)$ and quantum matter.
Now, since $\hat{\mathbb{L}}({\bf y})$ is not a self-adjoint operator such a coupling has an hermitian, as well as an anti-hermitian
contribution.
Note that in the original CSL model the collapse noise is coupled with matter only via an anti-hermitian term \cite{Bassi2003,Adler2014}.
To be explicit,
Eq.(\ref{eq:lymom}) implies
\begin{eqnarray}
\hat{\mathbb{L}}^{\dag}({\bf y}) & = & \sum_j \frac{m_j}{(2 \pi \hbar)^3}
\int \mathrm{d} {\bf P} \mathrm{d} {\bf Q}\, \hat{a}^{\dag}_j({\bf P}+{\bf Q}) \, e^{-\frac{i}{\hbar} {\bf Q} \cdot {\bf y}}\\
&&\times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left|(1-k_{j}){\bf Q}- 2 k_{j} {\bf P}\right|^2\right)
\hat{a}_j({\bf P}) \nonumber
\end{eqnarray}
and thus
\begin{eqnarray}
\hat{\mathbb{L}}^{(a)}({\bf y}) &=& \sum_j \frac{ m_j}{(2 \pi \hbar)^3}
\int \mathrm{d} {\bf P} \mathrm{d} {\bf Q}\, \hat{a}^{\dag}_j({\bf P}+{\bf Q})
\hat{a}_j({\bf P}) \nonumber\\
&&\times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left( |{\bf Q}|^2+k_j^2|{\bf Q}+2{\bf P}|^2 \right) \right) \nonumber\\
&&\times \cosh \left(\frac{k_j r^2_C}{\hbar^2} {\bf Q}\cdot({\bf Q}+2{\bf P}) \right)
\end{eqnarray}
and
\begin{eqnarray}
\hat{\mathbb{L}}^{(b)}({\bf y}) &=& - \sum_j \frac{ m_j}{(2 \pi \hbar)^3}
\int \mathrm{d} {\bf P} \mathrm{d} {\bf Q}\, \hat{a}^{\dag}_j({\bf P}+{\bf Q})
\hat{a}_j({\bf P}) \nonumber\\
&&\times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left( |{\bf Q}|^2+k_j^2|{\bf Q}+2{\bf P}|^2 \right) \right) \nonumber\\
&&\times \sinh \left(\frac{k_j r^2_C}{\hbar^2} {\bf Q}\cdot({\bf Q}+2{\bf P}) \right).
\end{eqnarray}
Of course, $\hat{\mathbb{L}}^{(b)}({\bf y})=0$ for $k_j =0$.
\section{Energy relaxation}\label{sec:era}
In this section, we investigate the energy behavior of
a system subjected to the collapse noise in our extended model.
We deal with the master equation implied by the stochastic differential equation given by Eq.(\ref{eq:sdecsld}).
After presenting the equation in a second-quantization formalism, we describe the corresponding operators
in the case of a fixed number of particles. In particular, by focusing on the one-particle case, we
show explicitly the exponential relaxation of the energy to a finite value.
\subsection{Master equation for the system's statistical operator}\label{sec:mea}
The non-linear, as well as the linear, stochastic differential equation fully fixes the collapse
model we are defining here. However, one is often interested in the predictions
of the model related with the statistical mean of some physical quantity,
\begin{equation}\label{eq:o}
O(t) \equiv \mathbbm{E}[\bra{\varphi_t}\hat{O}\ket{\varphi_t}],
\end{equation}
where, as usual, $\ket{\varphi_t}$ is the stochastic state of the system satisfying Eq.(\ref{eq:sdecsld}).
For this reason, it can be convenient to deal directly with the evolution of the average state
\begin{equation}\label{eq:hat}
\hat{\rho}(t) = \mathbbm{E}[\ket{\varphi_t}\bra{\varphi_t}],
\end{equation}
so that one recovers the usual relation
\begin{equation}
O(t) = \mbox{tr}\left\{\hat{O} \hat{\rho}(t)\right\}.
\end{equation}
By using the It\^{o} calculus, it is easy to see that Eq.(\ref{eq:sdecsld})
implies the following master equation:
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} t}\hat{\rho}(t)
&=& - \frac{i}{\hbar}\left[\widehat{H} \,,\, \hat{\rho}(t)\right] +
\frac{\gamma}{m^2_0} \int \mathrm{d} {\bf y} \left[\hat{\mathbb{L}}({\bf y})\hat{\rho}(t)\hat{\mathbb{L}}^{\dag}({\bf y}) \right. \nonumber\\
&&\left. - \frac{1}{2}\left\{\hat{\mathbb{L}}^{\dag}({\bf y}) \hat{\mathbb{L}}({\bf y}), \label{eq:mecsldiss}
\hat{\rho}(t)\right\} \right].
\end{eqnarray}
This is a Lindblad master equation \cite{Lindblad1976,Gorini1976,Breuer2002}, indicating that we are in the presence of a time-homogeneous Markovian dynamics.
The Lindblad operators are the same operators as those appearing in the stochastic differential equation defining the model, see Eq.(\ref{eq:ly}) or Eq.(\ref{eq:lymom}).
The expressions of the Lindblad operators $\hat{\mathbb{L}}({\bf y})$ restricted to a sector of the Fock space
with a fixed number of particles is easily obtained as follows.
Let us assume for simplicity that we have $N$ particles of the same type and mass $m$.
The corresponding restriction of $\hat{\mathbb{L}}({\bf y})$
reads
\begin{eqnarray}
\hat{\mathbb{L}}({\bf y}) &=& \frac{m}{(2 \pi \hbar)^3} \sum^{N}_{\alpha =1}
\int \mathrm{d} {\bf Q}\, e^{\frac{i}{\hbar} {\bf Q} \cdot (\hat{{\bf x}}_{\alpha}- {\bf y})} \nonumber\\
&& \times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left|(1+k){\bf Q}+ 2 k \hat{{\bf P}}_{\alpha}\right|^2\right),
\label{eq:lybb}
\end{eqnarray}
where $\hat{{\bf x}}_{\alpha}$ and $\hat{{\bf P}}_{\alpha}$ are, respectively, the position
and momentum operator of the $\alpha$-th particle and $k$ is the constant
given by Eq.(\ref{eq:kj}) with $m_j = m$.
Indeed,
consider the Hilbert space $L^{2}(\mathbb{R}^3)$ and the corresponding Fock space $\mathcal{F}(L^{2}(\mathbb{R}^3)) = \mathbb{C} + \sum^{\infty}_{N=1} L^{2}(\mathbb{R}^3)^{\tilde{\otimes}N}$,
where $L^{2}(\mathbb{R}^3)^{\tilde{\otimes}N}$ denotes the symmetric or antisymmetric part of the tensor product $L^{2}(\mathbb{R}^3) \otimes \cdots \otimes L^{2}(\mathbb{R}^3)$, $N$ times.
Now consider the operator on $\mathcal{F}(L^{2}(\mathbb{R}^3))$ given by \cite{Schwabl2008}
\begin{equation}\label{eq:c}
\hat{\mathbb{A}} = \int \mathrm{d} {\bf P} \mathrm{d} {\bf P'} \hat{a}^{\dag}({\bf P}') \bra{{\bf P}'} \hat{A}^{(1)}(\hat{{\bf x}}, \hat{\bf P}) \ket{{\bf P}} \hat{a}({\bf P}),
\end{equation}
where $\hat{A}^{(1)}(\hat{{\bf x}}, \hat{\bf P})$ is a single-particle operator on $L^{2}(\mathbb{R}^3)$, with $\hat{{\bf x}}$ and $\hat{\bf P}$, respectively, position and momentum operators on $L^{2}(\mathbb{R}^3)$.
Hence, the restriction of $\hat{\mathbb{A}}$ on the $N$-particle sector of the Fock space
reads
\begin{equation}\label{eq:un}
\hat{\mathbb{A}} = \sum^{N}_{\alpha=1} \hat{A}^{(1)}(\hat{{\bf x}}_\alpha, \hat{\bf P}_\alpha),
\end{equation}
$\hat{{\bf x}}_\alpha$ and $\hat{\bf P}_\alpha$ being the position and momentum operator of the $\alpha$-th particle.
The relation between Eq.(\ref{eq:c}) and Eq.(\ref{eq:un}) is indeed the same as that between
Eq.(\ref{eq:lymom}) and Eq.(\ref{eq:lybb}).
If we further restrict to the case of a single free particle with mass $m$,
we end up with the following
master equation for the one-particle average state $\hat{\rho}^{(1)}$
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} t}\hat{\rho}^{(1)}(t)
&=& - \frac{i}{\hbar}\left[\frac{{\hat{\bf P}}^2}{2m} \,,\, \hat{\rho}^{(1)}(t)\right] + \frac{\gamma m^2}{(2 \pi \hbar)^3 m^2_0} \nonumber\\
&&\left(
\int \mathrm{d} {\bf Q} \, e^{\frac{i}{\hbar} {\bf Q} \cdot \hat{{\bf x}}}
L({\bf Q}, \hat{\bf P}) \hat{\rho}^{(1)}(t)
L({\bf Q}, \hat{\bf P})e^{-\frac{i}{\hbar} {\bf Q} \cdot \hat{{\bf x}}}\right. \nonumber\\
&&\left. -\frac{1}{2}\left\{L^2({\bf Q}, \hat{\bf P}) , \hat{\rho}^{(1)}(t)\right\}\right),\label{eq:mecsldiss2}
\end{eqnarray}
with
\begin{equation}\label{eq:lqp}
L({\bf Q}, \hat{\bf P}) = e^{-\frac{r_C^2}{2\hbar^2} \left|(1+k) {\bf Q}+ 2 k \hat{{\bf P}}\right|^2}.
\end{equation}
Let us note that the inclusion of dissipation in the CSL model preserves the
invariance under translations of the system's evolution, but it breaks the invariance under boosts,
as directly seen by the master equation (\ref{eq:mecsldiss2}) \cite{Holevo1993}.
Nevertheless, the characterization of the overall dynamics by means of a proper first-principle underlying theory, which involves both the sources of the collapse noise and the quantum systems affected by it, should allow one to recover a fully covariant description; see also the discussion in the next paragraph.
\subsection{Evolution equation for the average kinetic energy and noise temperature}\label{sec:eef}
The master equation for the system's statistical operator provides us with
the evolution equation of the mean kinetic energy
of the one-particle system,
\begin{equation}
H(t) = \text{tr}\left\{\hat{{\bf P}}^2/(2m) \hat{\rho}^{(1)}(t)\right\}.
\end{equation}
Exploiting the translation covariance of Eq.(\ref{eq:mecsldiss2}) \cite{Holevo1993,Vacchini2009}
one gets
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} t} H(t) &=&
\frac{\gamma m}{2(2 \pi \hbar)^3 m^2_0} \int \mathrm{d} {\bf Q}
\text{tr}\left\{e^{-\frac{r_C^2}{\hbar^2} \left|(1+k){\bf Q}+2 k \hat{{\bf P}}\right|^2} \right.\nonumber \\
&&\left.\times\left( |{\bf Q}|^2 +2 \hat{{\bf P}} \cdot {\bf Q} \right)\hat{\rho}^{(1)}(t)\right\}, \nonumber\\
&=& \frac{3 \hbar^2 \lambda m}{4(1+k)^5 r^2_C m^2_0}- \frac{4 k \lambda m^2}{(1+k)^5 m^2_0} H(t). \label{eq:hht}
\end{eqnarray}
This equation is solved by
\begin{equation} \label{eq:ht}
H(t) = e^{- \chi t}\left(H(0)-H_{\text{as}} \right) + H_{\text{as}},
\end{equation}
with relaxation rate
\begin{equation}
\chi = \frac{4 k \lambda m^2}{(1+k)^5 m^2_0}
\end{equation}
and asymptotic kinetic energy
\begin{equation}
H_{\text{as}} =\frac{3 \hbar^2}{16 k m r^2_C}.
\end{equation}
As expected, now we do have dissipation. The mean energy of the system can
decrease as a consequence of the action of the noise.
Moreover, even if the energy grows, there is an upper threshold value above which it cannot increase.
The long-time energy divergence is now avoided \cite{foot2}.
This is precisely the result
we wanted and the most natural way to interpret it is to
say that the collapse noise has a finite temperature toward which
the system thermalizes \cite{Bassi2005}. Explicitly,
$H_{\text{as}}$ corresponds to a noise temperature
\begin{equation}\label{eq:T}
T = \frac{\hbar v_{\eta}}{4 k_B r_C},
\end{equation}
where we used Eq.(\ref{eq:kj}) and $k_B$
is the Boltzmann constant.
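This correspondence can be read as an equipartition statement for the free particle: setting $H_{\text{as}} = \frac{3}{2} k_B T$ and substituting Eq.(\ref{eq:kj}) for $k$ indeed gives
\begin{equation*}
T = \frac{2 H_{\text{as}}}{3 k_B} = \frac{\hbar^2}{8 k_B\, k\, m\, r^2_C} = \frac{\hbar v_{\eta}}{4 k_B r_C}.
\end{equation*}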
The original CSL model is recovered in the limit $T \rightarrow \infty$: the noise acts like an infinite temperature background, which
explains the energy divergence.
The temperature of the noise in Eq.(\ref{eq:T}) does not depend on the mass of the system, which is
a very important feature of our model. In addition, the
state of the system actually equilibrates to the canonical Gibbs state, see Appendix \ref{sec:adc}.
These hallmarks of the evolution induced by Eq.(\ref{eq:sdecsld}) depend substantially
on the choice of the operators $\hat{\mathbb{L}}({\bf y}) $ in Eq.(\ref{eq:ly}).
It is an open question to identify the entire class of operators satisfying these natural requests.
In Appendix \ref{sec:adc}, we take into account a physically motivated alternative to the choice made
in Eq.(\ref{eq:ly}), showing how the relaxation dynamics
of the resulting collapse model is very similar to that presented here and,
in particular, the noise temperature is still given by Eq.(\ref{eq:T}).
The exponential relaxation of the energy $H(t)$ in Eq.(\ref{eq:ht}) is
the same as that in the dissipative Ghirardi-Rimini-Weber (GRW) model recently introduced in \cite{Smirne2014}.
This is not surprising, since, as for the case without dissipation, the extended GRW and CSL
models share the same one-particle master equation.
If we think that the collapse model fixed by Eq.(\ref{eq:sdecsld}) describes
in a phenomenological way
the action of a real physical field filling space, it is now clear how the principle
of energy conservation can be reestablished. The energy gained
or lost by the system can be ascribed to an energy exchange with the noise field, as the latter can be influenced back by the presence
of the system.
An explicit characterization of this process
requires an underlying theory, which has to guarantee the classical nature of the noise field, with its own (non-quantum)
equations of motion, in order to provide a proper objective collapse of the wavefunction \cite{Bassi2003,Bassi2013,Adler2014}.
In addition,
one can already say that a collapse noise with typical cosmological features would correspond to
a low-temperature noise \cite{Bassi2010,Bassi2013}, at most of the order of few Kelvins.
By Eq.(\ref{eq:T}), we see that the noise temperature $T$ is in one-to-one
correspondence with the new parameter $v_{\eta}$. For instance, $v_{\eta} = 10^5 \text{m}/\text{s}$
(i.e. $k \approx 3 \times 10^{-6}$ for a nucleon)
gives $T \approx 1\,\text{K}$.
Hence, more than the specific value of the noise temperature,
the important thing is that even in the presence of a low-temperature noise
the resulting collapse model can be introduced in a consistent way; see also the discussion in the next section.
It is worth noting that it is not always
possible to properly modify a given collapse model to include dissipation
via the action of a low-temperature noise. In \cite{Bahrami2014b}
we discuss how such a modification is not feasible for the Di{\'o}si-Penrose model~\cite{Diosi1987},
in which gravity is involved to provide a phenomenological description of the wavefunction collapse.
In our model,
every system with a temperature higher than about $1 \text{K}$ is cooled by the action of the collapse noise.
Thus, we are led to reject
the bounds on the collapse rate $\lambda$ relying on a balance
between the system's heating due to the action of the noise in the original CSL model and the cooling
due to, e.g., the Universe expansion or the energy radiation.
This is the case for the heating of the protons constituting the intergalactic medium (IGM)
or for the energy accumulation in interstellar dust grains \cite{Adler2007}.
Note how, in particular, the heating of the IGM provides the second strongest bound to date on the localization rate $\lambda$ \cite{Adler2009}.
Even more, we expect that cosmological
data will put strong bounds on the dissipation parameter $k$ (equivalently, on $v_{\eta}$).
The modified long-time behavior of the energy predicted
by our model will have to be compared with the constraints coming from
such data. Some preliminary results have been obtained for the non-dissipative CSL model \cite{Adler2007,Lochan2012}.
Dissipative effects are expected to play an important role also in the experimental investigation of collapse models via optomechanical systems \cite{Bahrami2014,Nimmrichter2014},
where proper signatures could be visible in the density noise spectrum of the mechanical oscillator,
or via the spontaneous photon emission from electrically charged particles \cite{Adler2013b,Donadi2014},
as the latter is registered over a period of several years.
In both situations, we expect that dedicated experiments should allow one to restrict the possible values of $k$; of course, also in relation with the other parameters of the model.
\section{Macroscopic objects: localization and amplification mechanism}\label{sec:mol}
As recalled in Sec. \ref{sec:tcm},
any proper collapse model is characterized by the amplification mechanism.
The localizing action of the collapse noise has
to increase with the size of the system, which guarantees a
consistent description of microscopic and macroscopic systems within a unique theoretical framework.
Here, we show that the amplification mechanism holds in our extended
model, at least as long as one deals with a macroscopic rigid body.
The description of more complex systems, where the internal dynamics
becomes involved, calls for a more detailed specification of the system's evolution \cite{Smirne2014}.
We stress that the following considerations are valid also in the case of a low-temperature noise.
As anticipated, even for a noise temperature $T \approx 1\,\text{K}$
we have effective localization and amplification mechanisms, so that the noise
actually induces a classical behavior of the center of mass of macroscopic objects.
Finally, the different role of the momentum-dependent localization operators of our model
in, respectively, the energy relaxation and the wave-function localization is clarified
by comparing the time scales of the two phenomena.
\subsection{Localization of the center of mass of a rigid body and amplification mechanism}
Consider a macroscopic object made up of $N$ particles of equal mass $m$.
One can neglect the contributions due to the electrons and consider the mass of the proton $m_{\text{P}}$
equal to the mass of the neutron $m_{\text{N}}$, i.e. $m_{\text{P}} \approx m_{\text{N}} \equiv m$.
In addition, we deal with a rigid body, which
allows us to decouple the evolution of the center of mass from that of the relative coordinates~\cite{Smirne2014}.
Let $\hat{{\bf x}}_{\text{CM}} = \sum_j \hat{{\bf x}}_j/N$ be the position operator of the center of mass,
while the relative coordinates $\hat{{\bf r}}_j$, $j=1, \ldots, N-1$, are fixed by $\hat{{\bf x}}_j = \hat{{\bf x}}_{\text{CM}}+ \sum_{j'} \Lambda_{j j'} \hat{{\bf r}}_{j'}$,
for a suitable matrix with elements $\Lambda_{j j'}$.
We neglect the possible rotations of the system: this greatly simplifies
the description, without affecting the physical meaning of the results \cite{Ghirardi1990}.
By virtue of the rigid-body approximation, the relative coordinates are fixed
and the center-of-mass momentum $\hat{{\bf P}}_{\text{CM}}$ is simply proportional to each individual momentum $\hat{{\bf P}}_j$,
according to $ \hat{{\bf P}}_j\approx \hat{{\bf P}}_{\text{CM}}/N$, for each $j$.
Moreover, consider a total Hamiltonian $\hat{H} = \hat{H}_{\text{CM}}+ \hat{H}_r$,
given by the sum of two terms associated with, respectively, the center of mass and the relative degrees of freedom.
Thus, the state of center of mass $\ket{\varphi^{(\text{CM})}_t}$ satisfies a stochastic differential equation
with the same form as Eq.(\ref{eq:sdecsld}), where $\hat{H}$ is replaced by $\hat{H}_{\text{CM}}$ and $\hat{\mathbb{L}}({\bf y})$ is replaced by [compare with Eq.(\ref{eq:lybb})]
\begin{eqnarray}\label{eq:sss}
\hat{\mathbb{L}}^{(\text{CM})}({\bf y}) &=& \frac{m}{(2 \pi \hbar)^3} \int \mathrm{d} {\bf Q} \mathcal{F}_r({\bf Q}) e^{\frac{i}{\hbar} {\bf Q} \cdot (\hat{{\bf x}}_{\text{CM}}-{\bf y})} \\
&&\times \exp\left(-\frac{r^2_C}{2 \hbar^2}\left|(1+k ){\bf Q}+ 2 k \hat{{\bf P}}_{\text{CM}}/N\right|^2\right), \nonumber
\end{eqnarray}
$\hat{{\bf P}}_{\text{CM}}$ being the center-of-mass momentum operator and
we introduced the function
\begin{equation}
\mathcal{F}_r({\bf Q}) = \sum_j \exp\left(\frac{i}{\hbar} \sum_{j'} \Lambda_{j j'} {\bf Q} \cdot {\bf r}_{j'}\right),
\end{equation}
where ${\bf r}_j$ is the fixed $j$-th relative coordinate of the rigid body. The factor $\mathcal{F}_r({\bf Q})$
conveys the influence of the internal structure on the evolution of the center of mass
and it is due to the indistinguishability of particles: it is also present
in the original CSL model \cite{Ghirardi1990}, but not in the GRW models \cite{Ghirardi1986,Smirne2014}.
This factor determines the specific features of the amplification mechanism within the CSL model.
Hence, we are interested in the action of the operator $\hat{\mathbb{L}}^{(\text{CM})}({\bf y})$
on a generic state $\ket{\varphi^{(\text{CM})}}$ of the center of mass and, in particular, we focus
on the changes (if any) in the localization process due to dissipation.
Using the notation $\scalar{{\bf x}}{\varphi^{(\text{CM})}} = \varphi ({\bf x})$
and $\scalar{{\bf P}}{\varphi^{(\text{CM})}} = \tilde{\varphi} ({\bf P})$,
we have the (non-normalized) wave function
\begin{eqnarray}
\phi({\bf x}) &\equiv& \bra{{\bf x}}\hat{\mathbb{L}}^{(\text{CM})}({\bf y}) \ket{\varphi^{(\text{CM})}}\\
&= &
\frac{m}{(2 \pi \hbar)^{9/2}} \int \mathrm{d} {\bf Q} \mathrm{d} {\bf P} \mathcal{F}_r({\bf Q}) e^{\frac{i}{\hbar} {\bf Q} \cdot ({\bf x}-{\bf y})}e^{\frac{i}{\hbar} {\bf P} \cdot {\bf x}} \nonumber\\
&&\times
\exp\left(-\frac{r^2_C}{2 \hbar^2}\left|(1+k ){\bf Q}+ 2 k \frac{{\bf P}}{N}\right|^2\right) \tilde{\varphi}({\bf P}). \nonumber
\end{eqnarray}
Now, we use the continuum limit, so that
\begin{equation}
\mathcal{F}_r({\bf Q}) =\int \mathrm{d} {\bf z} D({\bf z}) e^{\frac {i}{\hbar} {\bf Q} \cdot {\bf z}},
\end{equation}
$D({\bf z})$ being the macroscopic density of particles, and we obtain
\begin{eqnarray}
\phi({\bf x}) &=& \frac{m}{(2 \pi \hbar)^{3/2}(\sqrt{2 \pi}r_C (1+k))^3} \int \mathrm{d} {\bf z} \mathrm{d} {\bf P} D({\bf z}) e^{\frac{i}{\hbar} {\bf P} \cdot {\bf x}} \nonumber\\
&& \exp\left(-\frac{|{\bf x} - {\bf y} + {\bf z}|^2}{2 r^2_C(1+k)^2}\right)e^{-\frac{2 i k}{(1+k)\hbar N} {\bf P} \cdot ({\bf x} - {\bf y} + {\bf z})} \tilde{\varphi}({\bf P}). \nonumber
\end{eqnarray}
Since
$$
\varphi({\bf x}) = \int \mathrm{d} {\bf P} \frac{e^{\frac{i}{\hbar} {\bf P} \cdot {\bf x}}}{(2 \pi \hbar)^{3/2}} \tilde{\varphi}({\bf P}),
$$
one gets
\begin{eqnarray}
\phi({\bf x}) &=& \frac{m}{(2 \pi r^2_C)^{3/2}} \int \mathrm{d} {\bf z} D({\bf z}) \frac{\exp\left(-\frac{|{\bf x} - {\bf y} + {\bf z}|^2}{2 r^2_C(1+k)^2}\right)}{(1+k)^3} \nonumber\\
&& \times \varphi\left({\bf x} - \frac{2 k}{(1+k)N} ({\bf x} - {\bf y} + {\bf z})\right). \label{eq:expv}
\end{eqnarray}
We now assume that the macroscopic density $D({\bf z})$ does not vary significantly on the length-scale
fixed by $r_C(1+k)$, so that
the exponential in Eq.(\ref{eq:expv}) varies as a function of ${\bf z}$ much faster than the other terms within the integral \cite{foot4}.
Thus, we can make the substitution
\begin{equation}
\frac{1}{(2 \pi r^2_C(1+k)^2)^{3/2}}
\exp\left(-\frac{|{\bf x} - {\bf y} + {\bf z}|^2}{2 r^2_C(1+k)^2}\right) \rightarrow \delta^3\left({\bf x} - {\bf y} + {\bf z}\right)
\end{equation}
in Eq.(\ref{eq:expv}), getting
\begin{equation}\label{eq:agia}
\phi({\bf x}) \approx m D({\bf y}-{\bf x}) \varphi({\bf x}).
\end{equation}
This clearly shows how the effects on the localization process due to the presence of the momentum operator in $\hat{\mathbb{L}}^{(\text{CM})}({\bf y})$
can be safely neglected, thus guaranteeing the convergence toward well localized states.
The localization of the wavefunction, as, e.g., represented in Fig.\ref{fig:1}, is basically not modified
by the introduction of dissipation in the model.
One can draw the same conclusion by dealing with the master equation of the average state, and, in addition,
one recovers Eq.(\ref{eq:G}) as an effective characterization of the scaling of the localization
strength with the size of the system.
The Lindblad master equation for the state of the center of mass is given by [compare with Eq.(\ref{eq:mecsldiss})]
\begin{eqnarray}
\frac{\mathrm{d}}{\mathrm{d} t}\hat{\rho}^{(\text{CM})}(t)
&=& - \frac{i}{\hbar}\left[\widehat{H}_{\text{CM}} \,,\, \hat{\rho}^{(\text{CM})}(t)\right] \nonumber\\
&& + \frac{\gamma}{m^2_0} \int \mathrm{d} {\bf y} \left[\hat{\mathbb{L}}^{(\text{CM})}({\bf y})\hat{\rho}^{(\text{CM})}(t)\hat{\mathbb{L}}^{(\text{CM})\dag}({\bf y})\right. \nonumber\\
&&\left. - \frac{1}{2}\left\{\hat{\mathbb{L}}^{(\text{CM}) \dag}({\bf y}) \hat{\mathbb{L}}^{(\text{CM})}({\bf y}),
\hat{\rho}^{(\text{CM})}\right\} \right]. \label{eq:mecsldisscm}
\end{eqnarray}
By using Eq.(\ref{eq:agia}) and neglecting the free Hamiltonian contribution,
we end up with the equation in the position representation
\begin{equation}
\partial_t \bra{{\bf x}'} \hat{\rho}^{(\text{CM})}(t) \ket{{\bf x}''} = - \Lambda({\bf x'}, {\bf x}'') \bra{{\bf x}'} \hat{\rho}^{(\text{CM})}(t) \ket{{\bf x}''},
\end{equation}
where
\begin{equation}
\Lambda({\bf x'}, {\bf x}'') \approx \gamma \int \mathrm{d} {\bf y}[D^2({\bf y})-D({\bf y}) D({\bf y}+{\bf x}'-{\bf x}'')].
\end{equation}
The same expression was obtained for the original CSL model in \cite{Ghirardi1990},
under the so-called sharp-scanning approximation.
In particular, if we consider a rigid body with constant density $D$, we get \cite{Ghirardi1990}
\begin{equation}
\Lambda({\bf x'}, {\bf x}'') = \gamma D n_{\text{out}},
\end{equation}
with $n_{\text{out}}$ the number of particles of the body when its center of mass
is in the position ${\bf x}'$ that are outside the volume
occupied by the body when its center of mass is in ${\bf x}''$.
Indeed, if $n_{\text{out}}$ is equal to the total number of particles (i.e.
there is no overlap between the volumes occupied by the macroscopic rigid body when its center of
mass is in, respectively, ${\bf x}'$
and ${\bf x}''$), one recovers Eq.(\ref{eq:G}), up to an irrelevant constant factor $(4 \pi)^{3/2}$.
The localization rate, which is very small for microscopic systems, increases
with the size of the system proportionally to the square of the number of particles,
which is a direct signature of the action of the noise on indistinguishable particles.
\subsection{Collapse rate versus relaxation rate}
The comparison between the dissipation rate $\chi$ and the localization rate $\Gamma$, see Eq.(\ref{eq:G}),
shows how the two phenomena occur on different time scales:
while the center of mass of a macroscopic system will be quickly localized by the action
of the noise, dissipation can possibly play a role on the system's evolution only on the long time scale.
We take into account the evolution of the center-of-mass
energy of a macroscopic rigid body with $N$ nucleons, $H^{(\text{CM})}(t) = \text{tr}\left\{\hat{{\bf P}}^2/(2M) \hat{\rho}^{(\text{CM})}(t)\right\}$, where $M = N m_0$
is the total mass.
If we repeat the calculations performed in Sec.~\ref{sec:eef},
the master equation~(\ref{eq:mecsldisscm}), at first order in $k$,
leads to an exponential relaxation of the energy
with rate
\begin{equation}\label{eq:chi}
\chi \approx \frac{16 \sqrt{2} k r_C^5 \lambda}{ (2 r^2_C + R^2)^{5/2}},
\end{equation}
where we considered a spherical object with radius $R$ and constant density
and we made the following approximation \cite{Bahrami2014}
\begin{eqnarray}
\mathcal{F}_r({\bf Q}) &=& \frac{3 N}{4 \pi R^3} \left(\sin\left(\frac{Q R}{\hbar}\right)-\frac{Q R}{\hbar}\cos\left(\frac{Q R}{\hbar}\right)\right) \nonumber\\
& \approx & e^{- \frac{Q^2 R^2}{2 \hbar^2}}. \nonumber
\end{eqnarray}
The dissipation rate $\chi$ in Eq.(\ref{eq:chi}) is much smaller than the corresponding localization rate,
which, according to Eq.(\ref{eq:G}), is given by
\begin{equation}
\Gamma = \lambda n^2 \tilde{N} = \lambda \left(\frac{N r^3_C}{4 \pi R^3/3}\right)^2 \frac{4 \pi R^3/3}{r_C^3} = \lambda \frac{N^2 r^3_C}{4 \pi R^3/3}.
\end{equation}
The ratio between the two rates in the case $R \gg r_C$ is
\begin{equation}
\frac{\Gamma}{\chi} \approx 10^{4} N^2 \left(\frac{R}{r_C}\right)^2.
\end{equation}
If we consider
a reference density $D = 5 \,{\text g}\, {\text cm}^{-3}$, one has $N \approx 10^{25} (R[{\text cm}])^3$,
since $1 \text{g}$ of matter contains approximately an Avogadro's number of nucleons.
Now, set a radius $R = 1 \text{mm}$, so that $N \approx 10^{22}$.
In this case, the localization rate is $\Gamma = 10^{14} \text{s}^{-1}$, while the dissipation rate is $\chi=10^{-41}\text{s}^{-1}$: the noise
localizes the center of mass of the macroscopic body on very short time scales, while
the influence of dissipation can be safely neglected during the whole evolution of the macroscopic system.
Similarly, if $R = r_C = 10^{-5} \text{cm}$, implying $N \approx 10^{10}$,
we get $\chi = 10^{-22} \text{s}^{-1}$, so that dissipation can still be neglected, while in this case $\Gamma \approx 10^{2}\text{s}^{-1}$.
Moreover, one could wonder how this analysis changes if we choose a different one-particle
localization rate $\lambda$. For the value proposed by Adler \cite{Adler2007}, $\lambda = 10^{-9}\text{s}^{-1}$,
we have that dissipation can still be neglected for $R = 1 \text{mm}$, where $\chi = 10^{-33}\text{s}^{-1}$
(and $\Gamma = 10^{22} \text{s}^{-1}$).
Instead, for $R = r_C = 10^{-5} \text{cm}$, we end up with $\chi = 10^{-14}\text{s}^{-1}$,
so that dissipation can play a role on the secular evolution of the system.
However, also in this case the effect of dissipation on the localization of the wavefunction is completely negligible.
Localization occurs on a much shorter time scale than dissipation, $\Gamma = 10^{10} \text{s}^{-1}$, and hence the influence
of the dissipative terms in Eq.(\ref{eq:expv}) can be neglected when studying localization, even if they can subsequently
play a role in the long-time behavior of the system.
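These estimates can be reproduced to the quoted order of magnitude with a few lines of Python; the one-particle rate $\lambda \approx 10^{-17}\,\text{s}^{-1}$ (the standard CSL reference value) is an input assumption, and the dissipation rate is recovered from the quoted ratio $\Gamma/\chi$ rather than from Eq.(\ref{eq:chi}).
\begin{verbatim}
import math

lam = 1e-17          # one-particle collapse rate [1/s] (standard CSL value, assumed)
r_C = 1e-5           # localization length [cm]

def rates(R):
    """Gamma and chi for a homogeneous sphere of radius R [cm]."""
    N = 1e25 * R**3                              # nucleons at D ~ 5 g/cm^3
    Gamma = lam * N**2 * r_C**3 / (4 * math.pi * R**3 / 3)
    chi = Gamma / (1e4 * N**2 * (R / r_C)**2)    # from the quoted ratio Gamma/chi
    return Gamma, chi

for R in (1e-1, 1e-5):                           # R = 1 mm and R = r_C
    Gamma, chi = rates(R)
    print(f"R = {R:g} cm: Gamma ~ {Gamma:.0e} 1/s, chi ~ {chi:.0e} 1/s")
\end{verbatim}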
\section{Conclusions}\label{sec:c}
The main purpose of collapse models is to provide
a unified framework for the description
of microscopic and macroscopic systems,
thus avoiding an ad-hoc dividing line within the theory,
as well as yielding a dynamical explanation for the collapse of the wavefunction.
The results of this paper show that this program can be pursued
while taking into account basic, physically motivated requirements.
We have included dissipation in the CSL model, which is up to now
the most refined collapse model. This allowed us to remove the divergence
of the energy on the long time scale affecting the original CSL model.
This divergence
traces back to an infinite temperature of the collapse noise, which is of course
an unrealistic feature of the model.
The inclusion of dissipation brings along a new parameter, which is strictly related to the finite
temperature of the noise.
Significantly, even in the presence of a low-temperature noise
the localization and amplification mechanisms remain effective, so that the unified
description of microscopic and macroscopic systems is still guaranteed.
A realistic description of the wavefunction collapse can be further developed, for example
by also including a non-white noise \cite{Adler2007b,Adler2008} within the model.
Nevertheless, one should keep in mind that the specific features of
the collapse noise can be fixed only through a first-principle
underlying theory, which can clarify the physical origin of the noise \cite{Adler2004,Bassi2013}.
The development of such an underlying theory is one of the main goals of the research on collapse models and, more generally,
on theories going beyond quantum mechanics.
\begin{acknowledgments}
The authors acknowledge financial support by the EU
project NANOQUESTFIT, INFN, FRA-2014 by the University of Trieste and
COST (MP1006). They also thank M. Bahrami, L. Di{\'o}si, S. Donadi, L. Ferialdi, and B. Vacchini
for many useful discussions.
\end{acknowledgments}
\section{Introduction}
Finding materials with desired properties is an important task in condensed matter physics and materials science, and guides the direction of research in the structural search field \cite{Review_1,Review_2}.
Alloying is commonly used for obtaining the desired materials, as the structural, electronic, transport and optical properties of alloys can be tuned by varying the compositions \cite{Liang_1, Liang_2, Liang_3}.
Unfortunately, the complexity of alloy structural search grows exponentially with the number of atoms in a unit cell.
Thus, reducing the computing costs while maintaining the reliability of the search results is of high importance for a structural search method.
To tackle the structural search problem, many algorithms have been developed, such as simulated annealing (SA) \cite{SA_1,SA_2,SA_3}, genetic algorithm (GA) \cite{GA_1,GA_2,GA_3}, particle swarm optimization \cite{PSO1,PSO2}, and \textit{ab initio} random structure search \cite{Random}.
The methods have shown success to various degrees, but there is still considerable room for improvement.
For example, the SA and \textit{ab initio} random structure search require many thousands of iterations to reach the low-lying minima in the energy landscape and are expensive due to the CPU demanding density functional theory (DFT) \cite{DFT_1,DFT_2} calculations.
The GA search, however, is extremely dependent on the quantity and diversity of chosen populations.
Machine learning (ML) is becoming increasingly popular in accelerating the discovery of new materials by encoding physical knowledge into property models \cite{SchNet,MEGNet}.
For instance, the deep tensor neural network \cite{DTNN} unifies many-body Hamiltonians to design neural network.
The crystal graph convolutional neural network (CGCNN) \cite{CGCNN} considers the topology of crystal to build graph, providing a universal and interpretable representation of materials.
These methods minimize the need for DFT calculations and have shown high performance in property prediction via the combination of ML and physical concepts.
Nevertheless, training the ML models to gain the desired generalization capability still requires a large amount of labeled data that often means a large amount of expensive DFT computations.
Therefore, reducing the amount of data required to train the property prediction model (PPM) is a key issue to be resolved for improved efficiency.
ML may also be used to design the strategy of exploring the potential energy surface (PES) to cut the computational cost of a structural search method.
As is known, reward-driven reinforcement learning (RL) focuses on finding the best policy of exploration in an interactive environment.
RL has shown success in various fields.
For instance, AlphaGo \cite{AlphaGo} showed its strong ability in combinatorial optimization to maximize the gain and defeated the world champion in the game of Go.
In the fields of physics, chemistry, and biology, RL has been used to design nanophotonic devices \cite{nanophotonics} and drug molecules \cite{drug_1,drug_2,drug_3,drug_4} by learning the policy to optimize the objective function.
Naturally, RL is expected to be helpful for structural search methods by learning and making decisions on favorable descent paths on the PES.
In this work, we proposed a crystal combinatorial optimization program (CCOP), which uses a weighted crystal graph convolutional neural network (WCGCNN) as the PPM, and employs a deep reinforcement simulated annealing (DRSA) technique as the searching algorithm.
The active learning \cite{active_1,active_2,active_3} is applied with the selection of highly representative samples so that the training of PPM requires only a small data set.
The DRSA is used to guide the search agent to further reduce the computational cost.
The numerical efficiency of CCOP is illustrated by its applications to the search of the ordered structures of six testing multi-alloys: \ce{BN}, \ce{BeZnO2}, \ce{AlNSiC}, \ce{GaNZnO}, \ce{BeMgCaO3}, and \ce{Al2Ga2Zn2N3O3}.
The test also reveals that the PPM is interpretable, accurate and fast to compute.
Meanwhile, the DRSA is shown to have the highest performance among the tested search methods, including the conventional RL algorithm of proximal policy optimization (PPO2) \cite{PPO}, SA, and the random search.
\section{Methods}
The workflow of CCOP proposed here is illustrated in Fig.~\ref{fig:myfig1}.
It consists of four major parts: i) training set labeling, ii) property prediction model, iii) structural search, and iv) sample selection.
Each part is explained as follows.
\begin{figure}[!htb]
\includegraphics[width=8.3cm]{fig1.jpg}
\caption{Workflow of the crystal combinatorial optimization program (CCOP). The program starts with a few initial structures as the samples. The samples are labeled with DFT energies and used as the training set (step 1). The labeled training set is used to train a machine-learning potential that fits the PES (step 2). It is followed by training an RL agent for the efficient sampling of the PES to find the low-energy structures (step 3). Then clustering analysis is performed on the sampled structures to select samples for the training set (step 4). The entire optimization program runs in a closed loop.}
\label{fig:myfig1}
\end{figure}
\textbf{Training Set Labeling.}
The training set of structural samples is labeled with the single-point DFT energies that are calculated with the Vienna ab initio Simulation Package (VASP) \cite{VASP_1,VASP_2,VASP_3}.
The generalized gradient approximation (GGA) with Perdew-Burke-Ernzerhof (PBE) exchange and correlation functional \cite{PBE} is used, and the ion-electron interactions are treated by projector-augmented-wave (PAW) \cite{PAW_1,PAW_2} technique.
The initial training structures are generated by sampling the PES with 30 SA steps.
30 more structures selected by active learning are added to the training set after every fitting-search iteration of CCOP.
\textbf{Property Prediction Model.}
Training a PPM to replace the expensive DFT calculations (step 2 in Fig. \ref{fig:myfig1}) is the second part. The PPM is built based on CGCNN, and modified under the architecture of message passing neural network \cite{MPNN}.
At the $ k $-th message passing phase, atom vectors $ \bm{h}_{i}^{k} $, $ \bm{h}_{j}^{k} $ and their bond feature vector $ \bm{e}_{ij} $ form the message by function $ M_{k} $. The aggregated message $ \bm{m}_{i}^{k+1}=\sum_{j\in N(i)}M_{k}(\bm{h}_{i}^{k},\bm{h}_{j}^{k},\bm{e}_{ij}) $ is the sum of all $ N(i) $ neighbors of atom vector $ \bm{h}_{i}^{k} $ in crystal graph $ \mathcal{G} $.
New representation of atom vector $ \bm{h}_{i}^{k+1}=U_{k}(\bm{h}_{i}^{k},\bm{m}_{i}^{k+1}) $ is obtained by updating function $ U_{k} $.
After $ K $ times of message passing, property can be predicted by $ \hat{y}=R(\{\bm{h}_{i}^{K}|i\in \mathcal{G}\}) $, where $ R $ is a differentiable function.
In order to focus on the property-related messages in the model, we assign each message a weight $ \omega_{j}^{k} $ and modify $ M_{k} $ of CGCNN as
\begin{equation}
M_{k}=\omega_{j}^{k}\cdot\sigma(\bm{x}_{ij}^{k}\bm{W}_{f}^{k}+\bm{b}_{f}^{k})\odot g(\bm{x}_{ij}^{k}\bm{W}_{s}^{k}+\bm{b}_{s}^{k}),
\end{equation}
where $ \odot $ denotes element-wise multiplication, $ \bm{W}_{f}^{k}, \bm{W}_{s}^{k} $ and $ \bm{b}_{f}^{k}, \bm{b}_{s}^{k} $ are weight matrices and bias vectors of the $ k $-th layer, respectively, and $ \sigma $ is a sigmoid function, $ g $ is a softplus function \cite{softplus}.
$ \bm{x}_{ij}^{k}=\bm{h}_{i}^{k}\oplus\bm{h}_{j}^{k}\oplus\bm{e}_{ij} $ concatenates neighboring atom pair with their bond.
The 12 nearest neighboring atoms are found using a cutoff distance of 8 $ \mathrm{\AA} $. The message weights, $ \omega_{j}^{k} $, for the 12 neighboring atoms are initialized with the same value, since it is hard to tell which message is more important at first.
Moreover, we perform a gated structure \cite{GRU} to control the update process,
\begin{eqnarray}
U_{k}&&=\bm{z}^{k}_{i}\odot\bm{h}_{i}^{k}+(\bm{I}-\bm{z}^{k}_{i})\odot\bm{m}_{i}^{k+1},\\
\bm{z}^{k}_{i}&&=\sigma[\bm{W}^{k}_{u}\cdot(\bm{h}_{i}^{k}\oplus\bm{m}_{i}^{k+1})],
\end{eqnarray}
where $ \bm{I} $ is an all-ones vector, $ \bm{W}^{k}_{u} $ is the weight matrix, $ \sigma $ is applied to scale each dimension of $ \bm{z}^{k}_{i} $ in $ [0,1] $,
and weight vector $ \bm{z}^{k}_{i} $ is used to determine the update ratio of $ \bm{h}_{i}^{k} $. The gated structure has shown a good performance in retaining and filtering information \cite{GRU-advantage}.
When the message passing process finishes, atom $ i $ is embedded into its chemical environment by iteratively including its surroundings, so that $ \bm{h}_{i}^{K} $ can be treated as the representation of atom $ i $ in the structure.
As for the representation of the crystal structure, we average the $ \mathcal{N} $ atom vectors to obtain the crystal vector $ \bm{c}=\sum_{i}\bm{h}_{i}^{K}/\mathcal{N} $, which contains machine-learned structural features.
A three-layer fully connected network \cite{Goodfellow} is set as the differentiable function for the property prediction,
\begin{equation}
\hat{y}=\bm{W}_{3}(g(\bm{W}_{2}(g(\bm{W}_{1}\bm{c}+\bm{b}_{1}))+\bm{b}_{2}))+\bm{b}_{3},\label{equ:mybais}
\end{equation}
where $ \bm{W}s $ are the weights, and $ \bm{b}s $ are the biases. Eq. \ref{equ:mybais} is a universal approximator for nonlinear functions \cite{FFNN}. More details about the WCGCNN based PPM are given in the Section I of the Supplemental Materials (SM).
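For illustration, the following numpy sketch spells out one weighted message-passing layer with the gated update and the averaged readout; the array shapes, the fixed 12-neighbour layout, and the helper names are simplifying assumptions rather than the actual MXNet implementation used in this work.
\begin{verbatim}
import numpy as np

def sigmoid(x):  return 1.0 / (1.0 + np.exp(-x))
def softplus(x): return np.log1p(np.exp(x))

def wcgcnn_layer(h, e, nbr, w, Wf, bf, Ws, bs, Wu):
    """One weighted message-passing layer with a gated update.
    h: (n_atoms, d) atom vectors; e: (n_atoms, 12, d_e) bond features;
    nbr: (n_atoms, 12) neighbour indices; w: (12,) message weights."""
    x = np.concatenate([np.repeat(h[:, None, :], 12, axis=1),  # h_i
                        h[nbr],                                # h_j
                        e], axis=-1)                           # e_ij
    msg = w[None, :, None] * sigmoid(x @ Wf + bf) * softplus(x @ Ws + bs)
    m = msg.sum(axis=1)                                 # aggregate over neighbours
    z = sigmoid(np.concatenate([h, m], axis=-1) @ Wu)   # update gate
    return z * h + (1.0 - z) * m                        # gated update

def readout(h):
    """Crystal vector: the mean of the atom vectors, fed to the MLP."""
    return h.mean(axis=0)
\end{verbatim}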
The PPM is trained with the MXNet \cite{MXNet} framework using the Adam optimizer \cite{Adam} for gradient descent optimization, with the mean square error (MSE) as the loss function (step 2 in Fig. \ref{fig:myfig1}). The training uses 150 samples obtained by SA and DFT calculations as the validation set. Since the size of the training set is small and there are more parameters in WCGCNN than in CGCNN, it is hard to determine a suitable model in the huge parameter space. Therefore, the weights of the nearest atoms and the update ratios are frozen at the beginning of training (60 epochs), i.e., the network is trained in a smaller parameter space. The model with the lowest validation loss is then chosen, and the weights and update ratios are unfrozen to fine-tune the model (60 epochs). This two-step training technique stabilizes the training process.
\textbf{Structural Search with DRSA.}
Training an RL agent and using it to direct the SA path for the efficient determination of the lowest-energy structures (step 3 in Fig. \ref{fig:myfig1}) is the third part.
The positions of the atoms are encoded as a one-dimensional list, and a sequence of actions is applied to minimize the structural energy.
An action is defined here as the exchange of two atoms carrying charges of the same sign, while the exchange of identical atoms is forbidden.
The allowed actions are weighted by the RL agent and the forbidden actions are masked by 0.
At the $ t $-th SA step, the DRSA agent performs action $ a_{t} $ by the $ \epsilon $-greedy policy $ \pi_{\theta}(a_{t}|s_{t}) $ to adjust structure $ s_{t} $ under the Metropolis criterion \cite{SA_1}.
The value of the energy descent $ r_{t+1}=E_{0}-\hat{E}_{t+1} $ is defined as the reward, where $ E_{0} $ is the DFT-calculated energy of the structure at the start of the search and $ \hat{E}_{t+1} $ is the energy of the sample predicted by the PPM.
The discounted sum of rewards is defined as $ G(\tau)=\sum_{t=0}^{T-1}\gamma^{t}r_{t+1} $, where $ \tau=\{s_{0},a_{0},s_{1},r_{1},a_{1},\cdots,s_{T},r_{T}\} $ is a trajectory of Markov decision process and $ \gamma $ is the discount factor determining the priority of short-term rewards.
Minimizing the structural energy within $ T $ steps is equivalent to maximizing the expectation of $ G $.
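A single DRSA move can be sketched as follows; \texttt{policy\_probs}, \texttt{ppm\_energy}, and \texttt{apply\_exchange} are illustrative stand-ins for the trained agent, the PPM, and the atom-exchange operation, and the $\epsilon$-greedy branch is one plausible reading of the policy described above.
\begin{verbatim}
import numpy as np

def drsa_step(state, policy_probs, ppm_energy, apply_exchange, T, eps, rng):
    """One DRSA move: the agent weights the allowed exchanges, an action
    is drawn eps-greedily, and the move is accepted or rejected by the
    Metropolis criterion at temperature T."""
    p = policy_probs(state)                  # forbidden actions masked to 0
    allowed = np.flatnonzero(p > 0)
    if rng.random() < eps:                   # eps-greedy exploration
        a = rng.choice(allowed)
    else:
        a = rng.choice(len(p), p=p / p.sum())   # sample from the policy
    new_state = apply_exchange(state, a)
    dE = ppm_energy(new_state) - ppm_energy(state)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis criterion
        return new_state                     # accept the move
    return state                             # reject the move
\end{verbatim}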
The agent's policy $ \pi_{\theta}(a_{t}|s_{t}) $ is a key for finding the optimal structure.
The agent is trained by the clipped loss of PPO2 \cite{PPO},
\begin{equation}
\mathcal{L}_{\mathrm{A}}=-\hat{\mathbb{E}}_{\tau,t}[\min(p_{t}(\theta)\hat{A}_{t},
\mathrm{clip}(p_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t})], \label{eq:myloss}
\end{equation}
where $ p_{t}(\theta)=\pi_{\theta}(a_{t}|s_{t})/\pi_{\theta_{old}}(a_{t}|s_{t}) $ is the probability ratio of the current and old policies, and $ \hat{A}_{t} $ is the estimator of the advantage \cite{GAE}. The $ \mathrm{clip} $ function in Eq. \ref{eq:myloss} removes the incentive of moving $ p_{t} $ beyond the interval $ [1-\epsilon,1+\epsilon] $, and acts as a penalty for large policy updates.
To reduce the variance of $ \hat{A}_{t} $, the TD(0) \cite{Sutton} form of $ \hat{A}_{t} $ is adopted, i.e., $ \hat{A}_{t}=r_{t+1}+\gamma V_{\pi}(s_{t+1})-V_{\pi}(s_{t}) $, where the state-value function $ V_{\pi}(s_{t})=\mathbb{E}_{\tau}[r_{t}|s_{t}] $ is the expected return from state $ s_{t} $. $ V_{\pi}(s_{t}) $ can be approximated by minimizing the loss \cite{A3C}
\begin{equation}
\mathcal{L}_{\mathrm{C}}=\hat{\mathbb{E}}_{\tau,t}[r_{t+1}+\gamma V_{\pi}(s_{t+1})-V_{\pi}(s_{t})]^{2}.
\end{equation}
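The two losses can be spelled out, for a batch of transitions, by the following numpy sketch; the defaults for $\gamma$ and $\epsilon$ are illustrative, and a real implementation would compute these losses inside an autodiff framework rather than with plain arrays.
\begin{verbatim}
import numpy as np

def ppo2_losses(logp_new, logp_old, r, V_s, V_s1, gamma=0.99, eps=0.2):
    """Clipped actor loss and TD(0) critic loss; all inputs are 1D
    arrays over a batch of transitions (s_t, a_t, r_{t+1}, s_{t+1})."""
    adv = r + gamma * V_s1 - V_s                    # TD(0) advantage estimate
    ratio = np.exp(logp_new - logp_old)             # probability ratio p_t
    actor = -np.mean(np.minimum(ratio * adv,
                                np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv))
    critic = np.mean(adv ** 2)                      # squared TD error
    return actor, critic
\end{verbatim}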
Through learning from the search paths, the agent's policy $ \pi_{\theta}(a_{t}|s_{t}) $ generates a suitable weight for each action to minimize the energy, thus improving the efficiency of the SA search.
More information about the DRSA can be found in the Section II of SM.
\textbf{Sample Selection.}
Choosing the representative samples to add to the training set (step 4 in Fig. \ref{fig:myfig1}) is the last part of the workflow.
To be most beneficial for reducing the mean absolute error (MAE) of PPM predictions, samples with the highest uncertainties should be considered.
The uncertainty $ \Omega $ is defined as the variance of predictions \cite{active_1}
\begin{equation}
\Omega(\bm{x})=\dfrac{1}{M-1}\sum_{m=1}^{M}(f_{m}(\bm{x})-\dfrac{1}{M}\sum_{l=1}^{M}f_{l}(\bm{x}))^{2}, \label{eq:myomega}
\end{equation}
where $ \bm{x} $ is a searched sample, $ f_{m} $ denotes a trained PPM, and $ M $ is the number of PPMs.
Specifically, 10\% of the samples with the highest uncertainties and 10\% of the samples with the lowest energies are selected. The crystal vectors of these samples are calculated by the PPM, followed by $ t $-distributed stochastic neighbor embedding (TSNE) \cite{TSNE} to reduce the dimension of crystal vectors.
The reduced vectors are grouped into 30 clusters by the Kmeans method \cite{Kmeans}.
The minimal energy sample in each cluster with its DFT energy computed is added into the training set.
Moreover, the lowest energy sample in the training set, with the energy referred as $ E_{0} $ above, is used as the initial structure of the next fitting-search iteration.
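The selection step can be sketched with the scikit-learn API as follows; the 10\%/10\% pre-filtering and the DFT labeling of the picked samples are omitted, and all array names are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def select_samples(crystal_vecs, energies, preds):
    """crystal_vecs: (n, d) PPM crystal vectors; energies: (n,) predicted
    energies; preds: (M, n) predictions of the M ensemble PPMs."""
    uncertainty = preds.var(axis=0, ddof=1)          # ensemble variance, M-1 norm
    z = TSNE(n_components=2).fit_transform(crystal_vecs)
    labels = KMeans(n_clusters=30).fit_predict(z)
    picked = [np.flatnonzero(labels == c)[np.argmin(energies[labels == c])]
              for c in range(30)]                    # lowest-energy sample per cluster
    return picked, uncertainty
\end{verbatim}
The picked indices would then be labeled with single-point DFT energies and appended to the training set for the next fitting-search iteration.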
\section{Results and Discussion}
The effectiveness of the CCOP is tested by searching for the lowest-energy ordered configurations of multi-alloys.
Six alloys with compositions from simple to complex are used as the testing cases: \ce{BN}, \ce{BeZnO2}, \ce{AlNSiC}, \ce{GaNZnO}, \ce{BeMgCaO3}, and \ce{Al2Ga2Zn2N3O3}.
The search is restricted such that the alloys maintain the 72-atom wurtzite-like lattice configuration at any temperature \cite{BN, BeZnO2, GaNZnO}.
The unit cell consists of 8 lattice layers and there are 9 atomic sites in each layer. Initially each layer is filled with 9 cations or anions, and the cation-layer and the anion-layer alternate.
The atomic arrangements are then changed in the search process to obtain lower energy configurations.
As there is only one type of anions (cations) in BN, the search action in BN refers to the exchange of the positions of anion and cation, instead of exchanging among cations or among anions for the structural searches of the other five alloys.
\subsection{Model Interpretability}
\begin{figure*}[!htb]
\centering
\includegraphics[width=16cm]{fig2.jpg}
\caption{(a) The atom similarity matrix of the ordered AlNSiC structure before and after PPM training. The similarity coefficient is defined as the cosine distance between the atom vectors. (b) Atomic weights of the 12 neighboring atoms by the trained PPM, notice that the nearest 4 atoms have higher weights than the others. (c) Comparison between the PPM predicted energies and the calculated DFT energies, with the inset showing the mean absolute error (MAE) of the predictions.}
\label{fig:myfig2}
\end{figure*}
Interpretability of an ML model, e.g., that the characteristic vectors extracted from the crystal information are consistent with physical intuition, is desired in the fields of physics, chemistry, and materials science.
Here, AlNSiC is used as an example to show the interpretability of our PPM.
We label each atom vector from 1 to 72 and each layer from 1 to 9, then construct the atom similarity matrix, with a size of $72\times 72$, under different message passing phases.
As seen in Fig. \ref{fig:myfig2}(a), the distribution of the atom similarity matrix before the PPM training mainly depends on the input atom features, e.g., electronegativity and valence electrons.
The atom similarity matrix is almost unchanged after the three message passing phases, which indicates that the untrained PPM cannot extract effective information from the input structures.
After the training, atoms are gradually separated into two clusters, i.e., the Al-N and Si-C clusters.
For example, the Al atoms (1$\sim$9) show a higher similarity with the N atoms (10$\sim$18) than with the Si atoms (19$\sim$27) and C atoms (28$\sim$36), consistent with the fact that Al bonds with N, not with Si or C.
The results are consistent with the structural characteristics of wurtzite structure and the matching of valence electrons.
For instance, the binary compound AlN forms a stable wurtzite structure by the $sp^3$ hybridization for $3s^23p$ and $2s^22p^3$ electrons of Al and N atoms, respectively.
Similarly, SiC forms a wurtzite structure by $sp^3$ hybridization for $3s^23p^2$ electrons of Si and $2s^22p^2$ electrons of C.
Thus, in the lowest-energy ordered structure the Al atoms must bond with N first, while the high-energy structures have more random atomic distributions, as seen at the top of Fig. \ref{fig:myfig2}(c).
Moreover, the Al-N and Si-C layers, which form hexagonal rings with the neighboring atoms along the [001] direction, usually appear alternately in the lowest-energy structure due to the crystal periodicity.
Thus the Al and N atoms show high similarity in the atom similarity matrix, corresponding to the strong tendency of their bonding.
\begin{figure*}[!htb]
\centering
\includegraphics[width=17cm]{fig3.jpg}
\caption{ Performances of DRSA and PPO2 on searching the low energy structures of 6 multi-alloys: (a) The accumulative reward (AR, in eV) and the density of state (DOS) of DRSA and PPO2 versus the search paths. The shadow of the line is the standard deviation (SD) of the ARs. The inner panel shows the change of MAE with the fitting-search iteration. (b) Average reduction of MAE by DRSA relative to PPO2, and (c) Improvement on the AR by DRSA relative to PPO2.}
\label{fig:myfig3}
\end{figure*}
Fig. \ref{fig:myfig2}(b) shows that, although the initial weights are uniformly set to 1/12, after the training, the weights of the 4 nearest neighbor atoms become larger, while the weights for the other 8 neighbors become smaller. That is, the energy prediction is predominantly determined by the 4 nearest neighbors, and it matches perfectly with the four-coordinated tetrahedrons, e.g., 3Si-C-Al and 3Al-N-Si, in the lowest-energy structure of AlNSiC.
The atom similarity matrix and the weights verify that the PPM can effectively extract the structural characteristics from the training data.
The learned weights ensure that the choice of atom exchanges is not as random as in SA, and reduce the cost of choosing the energy descent path in DRSA.
Fig. \ref{fig:myfig2}(c) displays the PPM predicted energies against the DFT calculated values and the mean absolute error (MAE) for each fitting-search iteration.
Based on the active learning, 30 most representative samples from the DRSA paths in each fitting-search iteration are selected to enhance as much as possible the prediction accuracy of PPM.
As seen in Fig. \ref{fig:myfig2}(c), with the progress of the fitting-search iteration, the searching area gradually moves from the initial high-energy structures to the low-energy structures, and the corresponding MAE usually decreases.
The results reflect that the search program is capable of effectively finding the low-energy area in the PES and finally obtains the ordered structure of AlNSiC with an energy of -7.33 eV/atom.
Notice that all the energy values shown below refer to the energies per atom.
The interpretability of ML model as shown in Fig. \ref{fig:myfig2} is a common feature of our PPM for all the alloys tested. Additional example can be found in Section III of the SM.
\begin{figure*}[!htb]
\includegraphics[width=16.5cm]{fig4.jpg}
\caption{ The distribution of the clusters, the uncertainties, and the energies of selected samples for the six multi-alloys at the last fitting-search iteration. The $x$- and $y$-axis are the feature vectors generated by TSNE. Each cluster is labeled and colored by the rank of energy. The arrow curves show the direction of energy descent.}
\label{fig:myfig4}
\end{figure*}
\subsection{Search Ability of DRSA and PPO2}
Fig. \ref{fig:myfig3} compares the performances of DRSA and PPO2 on searching the low energy structures of 6 multi-alloys, averaged over 5 separate DRSA or PPO2 runs for every alloy.
Each of the 5 DRSA or PPO2 runs consists of 5 fitting-search iterations and 1000 search paths per iteration, for a total of 5000 search paths per run.
Fig. \ref{fig:myfig3}(a) shows the accumulative rewards (ARs) of DRSA and PPO2 for searching the low energy structures of 6 alloys.
AR is calculated as the total energy descent $E_{0}-\hat{E}_{n}$, where $ \hat{E}_{n} $ is the minimum predicted energy at the $ n $-th search path.
As seen in Fig. \ref{fig:myfig3}(a), AR of DRSA is higher than that of PPO2 for every alloy examined.
The results are understandable as DRSA is basically PPO2, but with the policy complexity reduced by the physical constraint of SA.
With the help of the RL agent, which learns the weights of the atomic exchanges and thus ensures that the choice of atom exchanges is not as random as in SA, DRSA requires no more than 5000 search paths to reach its maximum reward for most alloys.
The fast convergence is also achieved because the PPO2 agent learns the energy descent policy from the paths of DRSA, and the fitting-search style relieves the difficulty of training the agent, resulting in a reduced number of steps to reach the maximal reward.
\begin{table}[tp]
\caption{Number of executable actions (Actions), number of possible structures (Structures), and the improvement of sample efficiency (ISE) by DRSA relative to PPO2 for six multi-alloys.}
\label{tab:mytable1}
\begin{ruledtabular}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lccc}
Type of alloys & Actions & Structures & ISE (\%) \\ \hline \rule{-3pt}{10pt}
$ \mathrm{BN} $ & 1296 & $ 10^{20} $ & 1370\\
$ \mathrm{BeZnO_{2}} $ & 324 & $ 10^{9} $ & 198\\
$ \mathrm{AlNSiC} $ & 648 & $ 10^{19} $ & 349\\
$ \mathrm{GaNZnO} $ & 648 & $ 10^{19} $ & 320\\
$ \mathrm{BeMgCaO_{3}} $ & 432 & $ 10^{15} $ & 135\\
$ \mathrm{Al_{2}Ga_{2}Zn_{2}N_{3}O_{3}} $ & 756 & $ 10^{25} $ & 192
\end{tabular}
\end{ruledtabular}
\end{table}
DRSA uses fewer samples and performs better than PPO2.
The improvement may be measured with the improvement of sample efficiency (ISE) defined as $ \mathrm{ISE}=N_{\text{PPO2}}/N_{\text{DRSA}}*100\% $, where $ N_{\text{PPO2}} $ and $ N_{\text{DRSA}} $ are the number of samples searched by PPO2 and DRSA, respectively.
ISE, together with the number of executable actions (allowed exchanges of atom positions) and the number of possible structures for each of the six alloys are shown in Table \ref{tab:mytable1}.
The number of possible structures, or the possible permutations of the atom sites, is found to be from $10^9$ to $10^{25}$ for the 6 alloys.
As the number of possible structures is very large, certainly vastly larger than $ N_{\text{PPO2}} $ and $ N_{\text{DRSA}} $, employing search methods such as DRSA and PPO2 is necessary in practice.
Meanwhile, DRSA is superior with a smaller number of samples and a larger AR than PPO2.
Among the 6 alloys tested, the binary alloy BN shows the highest ISE of about 1370\%.
This may be attributed to the fact that BN has the largest number of executable actions, 1296, at each search step (see Table \ref{tab:mytable1}).
In fact, without the constraint of SA, the random search at the beginning of PPO2 causes an inefficient exploration of the low-energy area.
PPO2 also shows a slow convergence of the policy in a large action space.
The lowest-energy structure is often missed even with 5 different runs of PPO2 (see Table S2 of SM for more testing results). When the number of actions is relatively small, however, the difficulty of policy learning for the RL agent is much reduced and the performance of PPO2 is much improved.
As a result, PPO2 may catch up with DRSA on the accumulated reward, as shown in Fig. \ref{fig:myfig3} for \ce{BeZnO2} and \ce{BeMgCaO3} with the number of actions of 324 and 432, respectively.
The MAE of the predicted energies in each fitting-search iteration reflects the variation of the accuracy of PPM.
As seen in the insets of Fig. \ref{fig:myfig3}(a), the MAE for either DRSA or PPO2 normally decreases at first due to the increase in the training data, and then converges because of the lack of diversity in the newly added representative samples. Overall, however, DRSA not only shows higher ARs, but also lower MAEs than PPO2 for the 6 tested alloys.
To be more quantitative, Fig. \ref{fig:myfig3}(b)(c) show respectively the reduction of MAE and the improvement of reward by DRSA relative to PPO2 for the tested alloys. Here, the reduction of MAE is calculated as $ \sum_{i=1}^{5}(\mathrm{MAE}^{\mathrm{PPO2}}_{i}-\mathrm{MAE}^{\mathrm{DRSA}}_{i})/\mathrm{MAE}_{i}^{\mathrm{PPO2}}*100\% $, where $ \mathrm{MAE}_{i}^{\mathrm{DRSA}} (\mathrm{MAE}_{i}^{\mathrm{PPO2}})$ is the MAE of DRSA (PPO2) at the $i$-th iteration. The improvement of reward is calculated as $ \sum_{i=1}^{5000}(\mathrm{AR}_{i}^{\mathrm{DRSA}}-\mathrm{AR}_{i}^{\mathrm{PPO2}})/\mathrm{AR}_{i}^{\mathrm{PPO2}}*100\% $, where $ \mathrm{AR}_{i}^{\mathrm{DRSA}} (\mathrm{AR}_{i}^{\mathrm{PPO2}}) $ is the AR of DRSA (PPO2) at the $i$-th search path.
As can be seen in Fig. \ref{fig:myfig3}(b)(c), there is a positive correlation between the magnitudes of the reduction of MAE and the improvement of reward, both are the largest for BN and the second largest for \ce{Al2Ga2Zn2N3O3}.
While BN has the largest number of executable actions, \ce{Al2Ga2Zn2N3O3} has the largest number of possible structures and the second largest number of executable actions (see Table \ref{tab:mytable1}).
Combined with a complicated chemical composition, \ce{Al2Ga2Zn2N3O3} is hard for PPO2 to handle. Not only does the SD of the AR increase with the search, but the SD of the MAE (see the error bar of MAE) also increases, from 3.3 meV/atom at the 1st iteration to 22 meV/atom at the 5th iteration.
Hence, PPO2 is unstable when applied to \ce{Al2Ga2Zn2N3O3}.
Detailed data analysis shows that, due to the lack of a Metropolis criterion to constrain the search direction, a significant number of high-energy structures are produced by PPO2, causing difficulty in training the PPM.
This leads to a high MAE of the PPM and improper exploration of the PES. On the contrary, a positive feedback between fitting and search is formed in the constrained search of DRSA, leading to a continuous reduction of the MAE and a stable search.
\subsection{Clustering Analysis}
To illustrate the benefit of the active learning for the sample selection, the results of the clustering analysis at the last fitting-search iteration are displayed in Fig. \ref{fig:myfig4}.
As shown in Fig. \ref{fig:myfig4}, the structural samples are grouped by Kmeans into 30 clusters, with the minimal sample energy decreasing from cluster 1 to cluster 30, while the energies are similar for samples within the same cluster.
The uncertainty of the PPM predictions (Eq. \ref{eq:myomega}) decreases with the decreased sample/cluster energy, indicating that the PPM is adaptive during the search, especially to the low-energy area of the PES.
The correlation between the sample/cluster energies and uncertainties may be quantified with the Pearson correlation coefficient. As shown in Fig. S4 of the SM, the Pearson correlation coefficients are found to be higher than 0.9 for all 6 alloys, with the highest of 0.951 for \ce{Al2Ga2Zn2N3O3}, further illustrating the adaptability of PPM.
\begin{figure*}[!t]
\includegraphics[width=16.5cm]{fig5.jpg}
\caption{(a) The calculated energy versus the fitting-search iteration for the six multi-alloys. The inner panels are the lowest-energy structures and the energy distribution percentage of the search results. (b) The improvement scores of DRSA, PPO2, and SA for the six multi-alloys.}
\label{fig:myfig5}
\end{figure*}
The proximity of the energies for structures in the same cluster and the strong correlation between the energies and uncertainties are the results of the feature classification of the PPM.
Therefore, adding only the lowest-energy structure of each cluster to the training set is quite reasonable and has the benefit of minimizing the size of the training set.
Meanwhile, adding the high-energy structures with high PPM prediction uncertainties, as mentioned in the sample selection strategy above, can help the PPM fit the entire PES better.
Consequently, the active learning in the sample selection has a positive effect on the structural searching.
It improves the accuracy of the PPM by putting the most representative samples of the energy landscape into the training set.
It also effectively reduces the number of expensive DFT calculations \cite{Active_DFT_1,Active_DFT_2} by reducing the size of the training set.
The DRSA predictions are also consistent with physical intuition.
The lowest-energy structures of \ce{AlNSiC}, \ce{BeZnO2}, \ce{BN}, and \ce{GaNZnO} are predicted to be ordered structures with the obvious layered characteristics.
Specifically, \ce{GaNZnO} consists of the Ga-N and Zn-O layers placed alternately along the $z$-axis direction (see the insets in Fig. \ref{fig:myfig5}), providing the best match of the valence electrons for the stabilization of the $sp^3$ hybridization.
For the quaternary alloy \ce{BeMgCaO_{3}}, the Be and Ca atoms are placed in different layers due to the large difference in their atomic sizes, while the Mg atoms are evenly distributed over the Be- and Ca-layers.
A similar structural feature is observed in the lowest-energy structure of the quinary alloy \ce{Al2Ga2Zn2N3O3}.
\subsection{Method Comparison}
The lowest energies and the structural energy distributions of the 6 alloys versus the searching iterations for the methods of DRSA, PPO2, SA, and the random search are shown in Fig. \ref{fig:myfig5}(a). The performance of different methods may be measured by the improvement score which is defined as $ s = p\cdot\dfrac{1}{5}\sum_{i=1}^{5}|\Delta E_{i}|/|\Delta E_{i}^{R}|\cdot 100\% $.
Here $p$ is the weighted average concerning the structural energy distribution, with the weight ratios of 6:3:1 for the low-, middle-, and high-energy structures. $|\Delta E_{i}|=|E_{i}-E_{0}|$ is the energy difference between the lowest-energy structure of the $ i $-th iteration and the initial structure, and we use the random search $ |\Delta E_{i}^{R}| $ as the benchmark.
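As a toy illustration, the score can be evaluated as in the following sketch, where the 6:3:1 weights are normalized into a weighted average and all input numbers are placeholders rather than data from this work.
\begin{verbatim}
import numpy as np

def improvement_score(dE, dE_random, frac_low, frac_mid, frac_high):
    """dE, dE_random: |energy descent| per iteration for the method and
    for the random-search benchmark; frac_*: energy-distribution
    fractions entering the 6:3:1 weighted average p (assumed reading)."""
    p = 0.6 * frac_low + 0.3 * frac_mid + 0.1 * frac_high
    return p * np.mean(np.abs(dE) / np.abs(dE_random)) * 100.0

print(improvement_score(np.array([1.0, 1.2, 1.3, 1.3, 1.3]),   # placeholder values
                        np.array([0.6, 0.7, 0.8, 0.8, 0.9]),
                        0.7, 0.2, 0.1))
\end{verbatim}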
The improvement scores of DRSA, PPO2, and SA for the 6 alloys are shown in Fig. \ref{fig:myfig5}(b).
On average, the improvement scores of DRSA, PPO2, and SA for the 6 alloys are 60.37\%, 11.93\% and 34.62\%, respectively.
DRSA has the highest score due to its combination of RL and SA, while PPO2 has the lowest performance due to its inefficiency in a large action space. More information about the performances of the 4 search methods can be found in Table S2 of SM.
As seen in Table S2, DRSA usually finds the lowest-energy structure with the smallest number of iterations. Moreover, all the 6 lowest-energy structures, one for each of the 6 alloys, are found by DRSA. However, 4 of them are missed by PPO2 and SA, while 5 of them are missed by the random search.
Clearly, the newly proposed CCOP method, which combines the WCGCNN, SA, and RL, is capable of drastically reducing the computational cost while maintaining the desired accuracy. Compared to SA, which is more efficient than PPO2 and the random search (Table S2), CCOP reduces the computational cost from 7$ \sim $9 days for SA to 1$ \sim $2 hours (Section V of SM), i.e., by two orders of magnitude.
\section{Conclusions}
A machine-learning assisted structural prediction method named as CCOP is proposed. The novel features of CCOP include: i) Using a modified CGCNN model as the PPM to replace the expensive DFT calculations. ii) Guiding the structural search paths by DRSA, a method combining the advantages of RL agent and Metropolis criterion, to accelerate the searching process. iii) Employing an active learning based sample selection method to reduce the PPM prediction error and minimize the size of the training set.
Through testing applications concerning the structural searches of 6 multi-alloys, it is demonstrated that: i) The PPM has the desired feature of interpretability, since the results of the atom similarity matrix and the atom exchange weights for the nearest neighbors are both consistent with the physical intuition. ii) DRSA outperforms SA, PPO2 and the random search approach by finding the lowest energy structures with the smallest number of steps. The benefit of DRSA is more pronounced when considering that SA, PPO2 and the random search miss most of the lowest-energy structures for 6 alloys. iii) Selecting samples through active learning makes the PPM adaptive during the search, resulting in an efficient exploration of the low-energy area of the PES.
Overall, the integrated search framework of CCOP is found to cut the computational cost of a conventional SA by two orders of magnitude. CCOP should be useful for the speedy discovery of novel materials.
\begin{acknowledgments}
The work is sponsored by the National Natural Science Foundation of China (Nos. 12074362, 11774416 \& 11774324).
We thank the USTC supercomputing center for providing computational resources for this project.
C.L. and H.L. contribute equally to this work.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\subsubsection{Outline of the Neural Network}
The goal of the network is to classify whether a pulsar search observation contains a pulsar or not.
In order to do this we use a neural network with two parts: dedispersion and classification.
The dedispersion part is described in detail in Section \ref{dedis_arch}. Briefly, it creates
1D time series which contain all pulses that the network sees in the data, similar to the results of conventional dedispersion
methods.
In conventional dedispersion, given a specific DM value, different frequency channels are combined (averaged) after applying a frequency-dependent time delay associated with that DM. This implies that every time point in the averaged time-series is a function of a narrow temporal window at each frequency. In contrast, for our neural network, a single output time point is a function of several seconds of temporal window across all frequencies --- a property that would allow it to be sensitive to a range of DMs in one step while classical techniques require multiple DM trials.
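For comparison, a minimal sketch of classical incoherent dedispersion for a single trial DM is given below; the dispersion constant is the standard value, while the array layout and the rounding to whole samples are simplifying assumptions.
\begin{verbatim}
import numpy as np

K_DM = 4.148808e3  # dispersion constant [MHz^2 s cm^3 / pc]

def dedisperse(data, freqs_mhz, dm, tsamp):
    """data: (n_chan, n_time) filterbank block; returns the averaged
    dedispersed time series for one trial DM [pc cm^-3]."""
    f_ref = freqs_mhz.max()
    delays = K_DM * dm * (freqs_mhz**-2 - f_ref**-2)   # seconds
    shifts = np.round(delays / tsamp).astype(int)
    out = np.zeros(data.shape[1])
    for ch, s in enumerate(shifts):
        out += np.roll(data[ch], -s)   # np.roll wraps around; real pipelines pad
    return out / data.shape[0]
\end{verbatim}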
The classifying part of the network classifies the 1D time series and is described in Section \ref{class_arch}. Instead of using a neural network to classify the resulting time series, the time series can also be investigated for pulsar candidates using standard pulsar search techniques that are normally applied to dedispersed time series.
The classifiers in this work only use the strongest signal in the dedispersed time series, which assumes that there is at most one pulsar in an observation.
For pulsar surveys, which utilise a large field of view and may contain a larger number of pulsars in a single observation, the classifiers could be adapted to produce a candidate list instead of a classification of the whole observation.
\begin{figure*}
\includegraphics[width=1.2\columnwidth]{figures/whole_net_1d_v5.pdf}
\centering
\caption{Architecture of the complete neural network.
The numbers at the bottom right of each block indicate the data volume of the output of the block.
In the dedispersion network we use 1D convolution which means that the first entry of the data
volume is a chosen number of channels and the second entry is the number of time steps.
The blue lines indicate at which stages the various losses and the gradients which are used to train the network are calculated. The structure of one layer of the dilation blocks is shown in Figure \ref{fig:single_block}. The structure of a STFT classifier is shown in Figure \ref{fig:stft_class}.}
\label{fig:architecture}
\end{figure*}
\subsubsection{Preprocessing and Regularization}\label{sec:preprocess}
The input to the network are filterbank files which can contain simulated or real pulsars. The details about using simulated pulsars are included in Section \ref{sec:simulations}.
The observations are loaded using \textsc{sigpyproc}\footnote{\url{https://github.com/ewanbarr/sigpyproc}}.
As part of the preprocessing, channels corresponding to the highest and lowest observing frequencies are eliminated, as they mostly contain noise.
To normalize the input, we subtract the mean of all input values from each value and divide by their standard deviation.
To regularize the network we use a dropout layer (see Section \ref{sec:nn_basics}) with a dropout value of 0.1 after normalization, which means that during training 10\% of the input data is zeroed.
\subsubsection{Dedispersion Architecture}\label{dedis_arch}
\begin{figure}
\centering
\includegraphics[scale=0.45]{figures/single_block_1d_v4.pdf}
\caption{Exemplary architecture of the third layer of the second dilated block, which is indicated in Figure \ref{fig:architecture}. The whole dilated block has 5 layers. The other layers share the same properties except for the dilation, which increases with each layer.
The data volume shown in grey is kept constant by choosing the right padding value and truncating the edges.}
\label{fig:single_block}
\end{figure}
The main part of the network is based on the Temporal Convolutional Networks (TCN) described in \citet{2018arXiv180301271B}. This network type mainly consists of 1D dilated convolutions where the output of each layer has the same cardinality as the input to that layer. We made two modifications to the canonical TCN in our architecture, shown in Figure \ref{fig:single_block}: we added a GroupNorm layer \citep{2018arXiv180308494W} after every non-linearity, which resulted in faster convergence, and the convolutions are non-causal, i.e., the output depends on both past and future values.
Since we are post-processing the data, this is not a problem.
Given the dimensions of the input data and the size of the network, it is evident that we need a few "information bottlenecks" to tame the memory requirements of the proposed model. In the initial layers therefore, we utilise strided convolutions to effectively downsample our temporal resolution by a factor of four; thus going from 400,000 time steps at the input to 100,000 after the first convolutional layer.
Following the initial strided convolutions the data are fed through two blocks of TCNs. The first block has a dilation rate of $d=2^L$ \citep[as in][]{2018arXiv180301271B} where L is the depth of the layer, starting with 0, while the dilation rate in the second block \citep[as in][]{2019arXiv190300695C} is chosen to be $d=w^L$ where $w$ is the kernel size which is 4 in this work. The increased dilation rate in the second block allows us to reach a higher receptive field with fewer layers. Between the two blocks the number of channels is reduced by a convolution with kernel size one, while the number of channels in each block is kept constant.
After the dilated blocks, a single residual block with a dilation and kernel size of one outputs an estimation of the dedispersed time series.
The last convolution in the dedispersion network can have multiple output channels where each output channel can be seen as a different time series which can be subsequently analysed.
In our network design we create multiple time series as the output of the neural network dedispersion. Because the network has to account for multiple plausible DMs, we made the deliberate choice to output two different time series, enabling the network to separate the low- and high-DM pulsars (details in Section \ref{train_obj}).
\subsubsection{Classifying Architecture}\label{class_arch}
\begin{figure}
\centering
\includegraphics[scale=0.45]{figures/stft_class_comb_v2.pdf}
\caption{Architecture of the STFT classifier. In this case the dedispersion network has two output channels. The calculation of the STFT results in the FFT of 7 different segments.
These segments show up in the the first dimension of the data volume which is shown in grey after each layer.
For each input channel five different harmonic combinations are created which results in ten channels which is seen in the last dimension of the data volume.
We are using 2D convolutions here, but the kernel size in the last dimension is always one, which means that the different harmonic combinations are only combined in the global max pooling layer. The output size of two is required by the cross-entropy loss, where each value represents the likelihood of the observation either containing or not containing a pulsar.}
\label{fig:stft_class}
\end{figure}
The second part is the classifying part of the network. Since pulsar signals are usually very faint but periodic, we decided to utilise the
FFT and the FFA in the classification, as in traditional pulsar searches (see Section \ref{sec:survey}).
The classification is the result of the combination of three individual classifiers; based on FFA, FFT, and the Short-Time Fourier Transform (STFT). Each classifier is trained individually and the results are linearly combined using learnable weights to deliver the final results.
Utilising the STFT in one of the classifiers is done due to the nature of pulsar signals. Using the STFT allows the network to observe the temporal stability of the pulsar signal when doing the classification which is not done in the FFT and FFA classifiers. While in this study we only look at non-accelerated pulsars, the structure of the classifier allows us in theory to also be more sensitive to some accelerated pulsars than a simple FFT classifier would be.
Calculating the STFT will result in less power of an accelerated pulsar being spread out over multiple FFT bins.
Similar to the stack search described in \citet{2004MNRAS.355..147F}, a neural network could recover an accelerated signal by combining the individual segments of the STFT.
It should be noted that the classification in our neural network works under tighter restrictions than classical pulsar searching pipelines. While usual pipelines create a range of pulsar candidates for a single observation, this classifier tries to classify if the entire observation contains a pulsar or not.
\paragraph*{FFT classifier}
This classifier uses the FFT of the dedispersed time series as input. The spectral power of periodic signals calculated via the FFT is reflected not only at the true periodicity but also in the various harmonics of the period. To mitigate this effect, we implemented the incoherent harmonic summing technique first introduced in \citet{1969Natur.221..816T}; the magnitude of the FFT output is stretched by various factors and added to the original FFT. In our case, this resulted in five different sequences - the original FFT and four harmonically combined versions up to the second, fourth, eighth and sixteenth harmonics. To keep the noise level identical between sequences, each sequence is divided by $\sqrt{N}$ where $N$ is the number of combined harmonics.
High frequencies are truncated since high frequency noise sometimes leads to misclassification.
Techniques to reduce the effect of FFT scalloping, where the power of a pulsar may be reduced if the pulse period lies between FFT bins, have not been employed.
If the dedispersion part of the network has multiple output channels the result of the FFT is concatenated in the same dimension as the harmonic combinations.
After calculating the FFT, truncating high frequencies and summing the harmonics this results in an output shape for a single observation with two output channels in the dedispersion network of:
\begin{equation*}
\label{eq:shape}
(40000, 10)
=
\text{(fft frequencies, harmonic sums} \times \text{channels)}.
\end{equation*}
The FFT output is passed through a small convolutional net which outputs a single channel.
The convolutions are applied over all harmonic combinations separately.
The single output channel ideally, like in normal pulsar search algorithms, represents a measure of the significance of the peaks in the Fourier transform.
In order to obtain a classification result for the whole observation we subsequently apply global max pooling along the dimension containing the FFT frequencies and the dimension containing the different harmonic combinations.
This global pooling allows us to find the most significant and pulsar-like signal in the observation and localise the frequency of this signal.
The amplitude of the output of the pooling layer is used for the classification result of the classifier.
Because the convolutions are applied over the different harmonic combinations separately the final classification result is only the result of the channel which is most likely to contain a pulsar and not the combination of multiple channels.
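A minimal numpy sketch of the stretch-and-add harmonic summing with the $\sqrt{N}$ renormalisation is given below; summing all stretch factors up to $n$ is one common variant and may differ in detail from our implementation.
\begin{verbatim}
import numpy as np

def harmonic_sum(spectrum, n):
    """Sum of the magnitude spectrum stretched by factors 1..n,
    divided by sqrt(n) to keep the noise level identical."""
    i = np.arange(len(spectrum))
    return sum(spectrum[i // k] for k in range(1, n + 1)) / np.sqrt(n)

rng = np.random.default_rng(0)
spec = np.abs(np.fft.rfft(rng.standard_normal(4096)))
seqs = np.stack([harmonic_sum(spec, n) for n in (1, 2, 4, 8, 16)])  # 5 sequences
\end{verbatim}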
\paragraph*{STFT Classifier}
The STFT classifier has a similar structure to the FFT classifier. The structure of the classifier can be seen in Figure~\ref{fig:stft_class}. The only difference is that while the FFT based classifier only uses the FFT as input, the STFT classifier also computes two STFTs by splitting up the time series into two and four non-overlapping segments and calculating the FFT for those.
This results in seven FFTs. In order to process these FFTs of different sizes with a CNN, we apply average pooling to achieve the same length and frequency resolution for the individual FFTs.
Before feeding them to a convolutional neural network the FFTs are concatenated along the channel dimension which means that the classifying convolutional neural network first trains kernels which combine these different FFTs.
This allows the convolutional neural network to see how the strength of a signal of a certain frequency develops during the observation.
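The feature construction can be sketched as follows; the pooling scheme shown is one simple way to bring the seven spectra to a common length and is not necessarily identical to our implementation.
\begin{verbatim}
import numpy as np

def stft_features(ts):
    """ts: 1D dedispersed time series (length divisible by 4);
    returns (7, L) spectra: the full FFT plus 2- and 4-segment STFTs."""
    specs = []
    for n_seg in (1, 2, 4):
        for seg in np.split(ts, n_seg):          # non-overlapping segments
            specs.append(np.abs(np.fft.rfft(seg)))
    L = min(len(s) for s in specs)               # length of the shortest spectrum
    pooled = [s[: (len(s) // L) * L].reshape(L, -1).mean(axis=1)
              for s in specs]                    # average pooling to length L
    return np.stack(pooled)                      # stacked as input channels
\end{verbatim}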
\paragraph*{FFA classifier}
The FFA classifier computes the FFA of the intermediate time series using the Python library \textsc{riptide}\footnote{\url{https://github.com/v-morello/riptide}} \citep{2020MNRAS.497.4654M}. To gain sensitivity to the range of pulsar periods present in the test set, we compute the FFA in three period ranges using the parameters shown in Table~\ref{tab:ffa_para}.
We also compute the period-dependent detection threshold of the peak detection algorithm of the riptide library and subtract this threshold from the resulting FFA. This eases detection by removing large-scale variations and reducing the effect of noisy segments of the FFA.
The result is fed into a similar network as for the other classifiers. In the case of the FFA no harmonic pooling is necessary as the result is phase coherent.
The FFA calculation is based on an existing library which does not allow us to propagate gradients through it and runs on a CPU. In this implementation the calculation of the FFA can increase the time needed for one training loop by a factor of 10. Since the FFA classifier does not provide useful gradients for the training of the dedispersion network, it is only added to the model in the last step of training.
\input{tables/ffa_para.tex}
\subsection{Training Objectives}\label{train_obj}
The ultimate goal of the network is to discern whether an observation contains a pulsar or not.
For this goal we use the {\it cross-entropy loss} to train our network, which measures the classification performance and is the loss function most commonly used in classification problems.
Each classifier is trained individually, but the final combined classification result also contributes to the loss.
We add an intermediate loss function that allows the network to find suitable weights for the dedispersion network faster than the classification loss alone would allow. The target function is calculated as follows: the simulated filterbank data is dedispersed and the positions of the peaks are identified; these locations are convolved with a Gaussian kernel to give the target signal. The \emph{reconstruction loss} is then calculated as the mean squared error (MSE) between the output of the dedispersion branch and the target signal described above.
When the dedispersion network has multiple output channels, we train each channel to be receptive to a part of the whole DM range of the training set. For this, the target function of a channel contains the pulses only when the pulsar's DM lies in the range assigned to that channel. Otherwise the target function of the channel contains only zeroes.
The loss function that is used to train the network is a combination of the reconstruction loss, the classification loss of the individual classifiers and the classification loss of the combination of the classifiers.
Initially the weight of the reconstruction loss is high, but since our final goal is a good classification of the data, the weight of the classification loss in the combined loss is increased during training. The details of how these weights change during training are described in Section \ref{sec:training}.
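A minimal sketch of the reconstruction target and loss is given below; the kernel width and the helper names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def reconstruction_target(n_time, pulse_bins, sigma=4.0):
    """Zero series with the known (simulated) pulse positions marked,
    smoothed with a Gaussian kernel of assumed width sigma [bins]."""
    t = np.zeros(n_time)
    t[pulse_bins] = 1.0
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(t, kernel, mode="same")

def reconstruction_loss(pred, target):
    """MSE between the dedispersion output and the target signal."""
    return np.mean((pred - target) ** 2)
\end{verbatim}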
\section{Introduction}
\input{introduction_3}
\section{Related Work and Concepts}
We start here by describing traditional approaches to pulsar searches followed by
examples of the application of machine learning techniques in pulsar astronomy.
We then provide a brief primer on the neural network concepts that are relevant to our proposed model.
\subsection{Common Pulsar Survey Techniques}
\label{sec:survey}
\input{related_work_survey}
\subsection{Machine Learning in Pulsar Astronomy}
\input{related_work_astroml}
\subsection{Neural Networks}
\input{related_work_nn}
\section{Methods}
We implemented the neural networks using \textsc{PyTorch}\footnote{An open-source framework for deep learning \url{https://pytorch.org}. The results presented in this work were trained using \textsc{pytorch} 1.1.0.} \citep{NEURIPS2019_9015}.
In this Section we will describe the architecture of the neural network, which is shown in Figure \ref{fig:architecture}, the training objectives used to train the network, the flavours of neural networks we use, and the training procedure.
\subsection{Neural Network Architecture}
\input{architecture}
\subsection{Trained Models}
\input{models}
\subsection{Performance Metrics}
\input{perf_metrics}
\subsection{Training Procedure}
\input{training}
\section{Data}
\label{sec:data}
In this section we will describe the data of the PALFA survey which we analyze in this work and the simulations that are used to train the neural network. Table \ref{tab:set_size} summarizes the size of the used data sets.
\input{tables/set_size}
\input{tables/psr_table_pred.tex}
\subsection{Observations}
\input{observations}
\subsection{Simulations}
\input{simulations}
\section{Results}
In this section we describe the performance of the neural networks. First we assess the quality of the dedispersed output using the FFA (Section \ref{sec:ffa_perf}) and afterwards we judge the classification performance of our model on simulated (Section \ref{sec:perf_fake}) and real (Section \ref{sec:perf_real}) pulsars. In Section \ref{sec:rfi} we discuss how well the model copes with RFI.
\subsection{FFA Performance}
\input{ffa_performance}
\subsection{Classifier Performance on Simulated Data}
\input{class_performance_fake}
\subsection{Classifier Performance on Real Data}
\input{class_performance_real}
\subsection{Influence of RFI}
\input{rfi}
\section{Applicability to Pulsar Surveys}
\input{application}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgements}
\input{acknowledgements}
\section*{Data Availability}
The data underlying this article are publicly available observations of the PALFA survey.
Details of the survey can be found at \url{https://www.naic.edu/alfa/pulsar/}.
\bibliographystyle{mnras}
\section{Introduction}
Motivated by significant applications in medical imaging \cite{SA99} and exploration geophysics \cite{BA84}, we consider the two-dimensional
inverse acoustic scattering of time-harmonic point sources by a locally rough surface, which aims to reconstruct the shape
and location of the rough surface from the measured near-field data.
The scattering surface is described by a curve
\begin{eqnarray}\label{a1}
\Gamma:=\{(x_1,x_2)\in\mathbb R^2: x_2=f(x_1)\}
\end{eqnarray}
where $f$ is assumed to be a Lipschitz continuous function with compact support. This means that the surface $\Gamma$
is a local perturbation of the plane
$\Gamma_0:=\{(x_1,x_2)\in\mathbb R^2: x_2=0\}.$
The rough surface $\Gamma$ separates the whole space into two half-spaces denoted by
\begin{eqnarray*}\label{a3}
\Omega_1:=\{(x_1,x_2)\in\mathbb R^2: x_2>f(x_1)\}\quad{\rm and}\quad \Omega_2:=\{(x_1,x_2)\in\mathbb R^2: x_2<f(x_1)\},
\end{eqnarray*}
which are filled with homogeneous mediums described by wave numbers $\kappa_1>0$ and $\kappa_2>0$, respectively.
For impenetrable cases, the incident wave $u^i$ is induced by the point source, which means
\begin{eqnarray}\label{a4}
u^i(x)=\Phi_{\kappa_1}(x,x_s):=\frac{\rm i}{4}H_0^{(1)}(\kappa_1|x-x_s|)\quad {\rm for}\;\; x_s\in\Omega_1.
\end{eqnarray}
Here, $H_0^{(1)}$ is the Hankel function of the first kind of order zero, and $\Phi_{\kappa_1}$ is the fundamental solution of the Helmholtz
equation satisfying $\Delta\Phi_{\kappa_1}(\cdot, x_s)+\kappa_1^2\Phi_{\kappa_1}(\cdot, x_s)=-\delta_{x_s}(\cdot)$ in $\mathbb R^2$, where $\delta_{x_s}$ is the Dirac delta distribution centered at $x_s$. Then the scattering of $u^i$ by the rough surface $\Gamma$ can be modelled by
\begin{equation}\label{a5}
\left\{\begin{aligned}
&\Delta u+\kappa_1^2u=-\delta_{x_s} \qquad\qquad\textrm{in}\;\; \Omega_1, \\
&\mathcal{B}u=0\qquad\qquad\qquad\qquad\;\;\textrm{on}\;\; \Gamma,\\
&\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} u^s-{\rm i}\kappa_1 u^s\right)=0,\quad
\end{aligned}
\right.
\end{equation}
where $u:=u^i+u^s$ denotes the total field, which is the sum of the incident field $u^i$ and the scattered field $u^s$, and ${\mathcal B}$ stands for the boundary condition on $\Gamma$, given by ${\mathcal B}u:=u$ if $\Gamma$ is a sound-soft rough surface and ${\mathcal B}u:=\partial_{\nu} u$ if $\Gamma$ is a sound-hard rough surface. Here and throughout, $\nu=\nu(x)$ is the upward normal vector at $x\in\Gamma$ directed into $\Omega_1$, and $\partial_{\nu}$ stands for the normal derivative. The last condition in (\ref{a5}) is the well-known Sommerfeld radiation condition, which holds uniformly for all directions $\hat{x}:=x/|x|\in{\mathbb S}^+:=\{x\in{\mathbb R}^2: |x|=1,x_2>0\}$.
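For later numerical experiments it is convenient to have the incident point source (\ref{a4}) available as a routine. The following is a minimal Python sketch based on \textsc{SciPy}'s Hankel function; the function name and the parameter values are our own illustrations, not part of the model above.
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def incident_field(x, x_s, kappa1):
    # Phi_{kappa_1}(x, x_s) = (i/4) H_0^{(1)}(kappa_1 |x - x_s|)
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(x_s, float), axis=-1)
    return 0.25j * hankel1(0, kappa1 * r)

# example: source at (0, 2), evaluation point (1, 1)
print(incident_field([1.0, 1.0], [0.0, 2.0], kappa1=2.0 * np.pi))
\end{verbatim}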
For the penetrable case, consider an incident wave induced by the point source (\ref{a4}), impinging on the scattering interface $\Gamma$ from the domain $\Omega_1$. Then the scattering of $u^i$ by $\Gamma$ can be modelled by
\begin{eqnarray}\label{a7}
\left\{\begin{array}{lll}
\Delta u^s+\kappa_1^2u^s=0& \textrm{in}\;\Omega_1 \\[2mm]
\Delta u^s+\kappa_2^2u^s =0&\textrm{in}\;\Omega_2 \\[2mm]
u^s|_+-u^s|_- =-u^i&{\rm on\;}\Gamma\\[2mm]
\partial_\nu u^s|_+-\partial_\nu u^s|_- =-\partial_\nu u^i& {\rm on\;}\Gamma\\[2mm]
\displaystyle\lim_{|x|\rightarrow\infty}|x|^{\frac12}\left(\partial_{|x|} u^s-{\rm i}\kappa u^s\right)=0.
\end{array}
\right.
\end{eqnarray}
Here the notation $\cdot|_\pm$ denotes the limit of the corresponding quantity as $\Gamma$ is approached from $\Omega_1$ and $\Omega_2$, respectively, and $\kappa$ is the wavenumber defined by $\kappa:=\kappa_1$ in $\Omega_1$ and $\kappa:=\kappa_2$ in $\Omega_2$.
The last condition in (\ref{a7}) is the Sommerfeld radiation condition which holds uniformly for all directions $\hat{x}:=x/|x|\in{\mathbb S}:=\{x\in{\mathbb R}^2: |x|=1\}$. It is worth pointing out that the scattered field $u^s$ satisfying the Sommerfeld radiation condition has the asymptotic behavior of an outgoing spherical wave
\begin{eqnarray*}\label{a9}
u^s(x)=\frac{e^{{\rm i}\kappa |x|}}{|x|^{\frac{1}{2}}}\left\{u^{\infty}(\hat{x})+O\left(\frac{1}{|x|}\right)\right\}\quad {\rm for}\; |x|\to\infty
\end{eqnarray*}
uniformly in all directions $\hat{x}\in{\mathbb S}$, where $u^{\infty}(\hat{x})$ is known as the far-field pattern of $u^s$.
Given the incident wave, the rough surface, and the boundary condition, the direct scattering problem is to determine the distribution of the scattered wave; it has been extensively studied by the variational method \cite{SE10,MT06} and the integral equation approach \cite{LYZ13,DTS03,ZS03} employing a generalized Fredholm theory \cite{SZ97, SZ00}. Recently, a novel technique was proposed in \cite{DLLY17} to prove the well-posedness of (\ref{a5}) in the sound-soft case, based on transforming the unbounded, locally rough surface scattering problem into an equivalent boundary value problem with compactly supported boundary data, whose well-posedness follows from the classical Fredholm theory. This technique has been extended to penetrable, locally rough surfaces in \cite{LYZ21}, and further extended in \cite{LYZ22} to penetrable, locally rough surfaces with obstacles embedded in the lower half-space.
The inverse scattering problem, in contrast, is to determine the rough surface from the scattered field measured in some domain. For the time-harmonic case, there is a large body of literature on inversion methods, such as Newton-type approaches \cite{GJP11,GJ11,GJ13,SP02,QZZ19,RM15,ZZ13}, the Kirsch-Kress schemes \cite{CR10,LZ13}, nonlinear integral equation methods \cite{L19,LB13}, reconstruction algorithms based on transformed field expansions \cite{GL13,GL14}, the factorization method \cite{RG08,AL08}, linear sampling methods \cite{DLLY17,LYZ21, ZY22}, and direct imaging methods \cite{LLLL15, LZZ18,LZZ19}.
For the time-domain case, a singular source method has been extended to solve the inverse rough surface scattering problem \cite{C03}.
The reverse time migration (RTM) method is a sampling-type method which is widely applied in exploration geophysics \cite{BA84, BCS01, JFC85} and seismic imaging \cite{BCS01}. The main idea of the RTM method consists of two steps. The first step is to back-propagate the complex conjugated data into the background medium, and the second step is to compute the cross-correlation between the incident field and the back-propagated field. The imaging functional is then defined as the imaginary part of the cross-correlation, which always peaks on the boundary of the scatterer. Since the RTM method provides an effective, stable, and powerful reconstruction of the scatterer, it has gained considerable attention
and has been extensively investigated by mathematicians and engineers. Mathematically, the justification of the RTM method has been rigorously established for the inverse obstacle scattering problem for acoustic waves \cite{CCH131,L21,CH153,CH152}, elastic waves \cite{CH151}, and electromagnetic waves \cite{CCH132}.
It is worth mentioning that the RTM method in these references requires the full scattering data (both the intensity and the phase information). In a variety of realistic applications, however, the full data is not available and only phaseless data can be measured. In this case, RTM approaches using phaseless data have been developed to recover the target for acoustic waves \cite{CH17} and electromagnetic waves \cite{CH16}. It is shown in \cite{CH16, CH17} that the imaging functional using phaseless data is essentially the same as the one using full data. We note that the above references on the RTM method all concern the inverse obstacle scattering problem. It is challenging to develop the RTM method for reconstructing an infinite rough surface, since the general Helmholtz-Kirchhoff identity is not valid for unbounded rough surfaces. As far as we know, no such result is available so far.
The purpose of this paper is to investigate the RTM scheme for solving the inverse scattering by infinite, locally rough surfaces, which aims to reconstruct the rough surface from the near-field data. The key difficulty is that the usual Helmholtz-Kirchhoff identity presented in \cite{CCH131} is not applicable to the infinite rough surface case. To overcome this difficulty, by introducing a special locally rough surface, we establish a modified Helmholtz-Kirchhoff identity and define a modified imaging functional which is associated with the Green's function of the special locally rough surface. Based on the modified Helmholtz-Kirchhoff identity, the mathematical justification of the RTM method is proved rigorously: we demonstrate that the modified imaging functional enjoys the nice feature that it always peaks on the boundary of the rough surface for the sound-soft and penetrable cases, and always reaches a nadir on the boundary of the rough surface for the sound-hard case. Thus, the modified imaging functional provides a stable and powerful reconstruction of the rough surface, which is confirmed by the numerical experiments.
The rest of the paper is organized as follows. In section 2, we develop the RTM method for impenetrable locally rough surfaces which includes the sound-soft rough surface and the sound-hard rough surface. Section 3 is devoted to the RTM approach for penetrable locally rough surfaces. In section 4, we present some numerical experiments to demonstrate the validity of the RTM method. This paper concludes with some general remarks and discussions on the future work in section 5.
\section{The RTM for impenetrable locally rough surfaces}
\setcounter{equation}{0}
In this section, we investigate the RTM method for inverse acoustic scattering by impenetrable locally rough surfaces with the Dirichlet and Neumann boundary conditions.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in, height=3in]{pictures/Geomery_impenetrable.jpg}
\caption{The setting of the RTM method for the sound-soft and sound-hard cases.}
\label{f1}
\end{figure}
As shown in Figure \ref{f1}, let $\Gamma$ be the locally rough surface defined by (\ref{a1}), whose local perturbation is contained
in a rectangular sampling domain $S$. We choose a sufficiently large $R$ and define a special locally rough surface $\Gamma_R$ by
\begin{eqnarray}\label{b1}
\Gamma_R:=\{(x_1,x_2)\in\mathbb R^2: x_2=0\;\;{\rm for }\;|x_1|\geq R,\;\;{\rm and}\;\; x_2=-\sqrt{R^2-x_1^2}\;\;{\rm for}\;|x_1|\leq R\}
\end{eqnarray}
such that the sampling domain $S$ lies entirely above $\Gamma_R$. We then choose $r>R$ and assume that there are $N_s$ point sources $x_s$ uniformly distributed on $\Gamma_s:=\partial B^+_r$ and $N_r$ receivers $x_r$ uniformly distributed on $\Gamma_r:=\partial B^+_r$. Here, $B_r$ denotes the disc centered at the origin with radius $r$, and $\partial B_r^+$ denotes the upper semicircle.
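For implementation purposes, the special surface $\Gamma_R$ and the source and receiver locations admit a simple parametrization. The following Python sketch is our own illustration, with $R$, $r$, and $N_s$ chosen arbitrarily; it samples $\Gamma_R$ and places $N_s$ point sources uniformly on $\partial B_r^+$.
\begin{verbatim}
import numpy as np

R, r, Ns = 3.0, 10.0, 64

def gamma_R(x1):
    # height of Gamma_R: flat for |x1| >= R, lower semicircle for |x1| <= R
    x1 = np.asarray(x1, dtype=float)
    return np.where(np.abs(x1) >= R, 0.0,
                    -np.sqrt(np.maximum(R**2 - x1**2, 0.0)))

# Ns point sources uniformly distributed on the upper semicircle of radius r
angles = np.linspace(0.0, np.pi, Ns)
sources = np.column_stack((r * np.cos(angles), r * np.sin(angles)))
\end{verbatim}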
\subsection{The Dirichlet problem}
In this subsection, we study the RTM for the sound-soft rough surface which means ${\mathcal B}u=u$ in Problem (\ref{a5}). To this end, we first consider the scattering of the incident point source $\Phi_{\kappa_1}(x,x_s)$ given by (\ref{a4}) by the special locally rough surface $\Gamma_R$, which reads
\begin{eqnarray}\label{b2}
\left\{\begin{array}{lll}
\Delta G_D^s(x,x_s)+\kappa_1^2G_D^s(x,x_s)=0 &\textrm{in}\;\; \Omega_R, \\[3mm]
G_D^s(x,x_s)=-\Phi_{\kappa_1}(x,x_s)&\textrm{on}\;\; \Gamma_R,\\[3mm]
\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} G_D^s(x,x_s)-{\rm i}\kappa_1 G_D^s(x,x_s)\right)=0.
\end{array}
\right.
\end{eqnarray}
Here $G^s_D(x,x_s)$ is the scattered field, $G_D(x,x_s):=G_D^s(x,x_s)+\Phi_{\kappa_1}(x,x_s)$ denotes the total field, and $\Omega_R$ is the region above $\Gamma_R$, that is,
\begin{eqnarray*}\label{b3}
\Omega_R:=\{(x_1,x_2)\in\mathbb R^2: x_2>0\;\;{\rm for }\;|x_1|\geq R,\;\;{\rm and}\;\; x_2>-\sqrt{R^2-x_1^2}\;\;{\rm for}\;|x_1|\leq R\}.
\end{eqnarray*}
It follows from \cite{GJ11,DLLY17} that Problem (\ref{b2}) is well-posed in a standard Sobolev space.
For $x_s\in\Gamma_s$, recall that $u^s(x,x_s)$ is the solution of Problem (\ref{a5}) with Dirichlet boundary condition when the incident source is located at $x_s$. Define
\begin{eqnarray}\label{b8}
V_D(x,x_s):=u^s(x,x_s)-G^s_D(x,x_s).
\end{eqnarray}
Then it is easily checked that $V_D$ solves
\begin{eqnarray}\label{b9}
\left\{\begin{aligned}
&\Delta V_D(x,x_s)+\kappa_1^2V_D(x,x_s)=0 \qquad\qquad\textrm{in}\;\; \Omega_1, \\
&\;V_D(x,x_s)=-G_D(x,x_s)\qquad\qquad\qquad\;\textrm{on}\;\; \Gamma\setminus\Gamma_R,\\
&\;V_D(x,x_s)=0\qquad\qquad\qquad\qquad\qquad\;\;\;\textrm{on}\;\;\Gamma\cap\Gamma_R,\\
&\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} V_D-{\rm i}\kappa_1 V_D\right)=0.
\end{aligned}
\right.
\end{eqnarray}
Note that the scattered field $G_D^s(x_r,x_s)$ can be computed by solving Problem (\ref{b2}) with the Nystr\"{o}m method or the finite element method. Thus, we can obtain $V_D(x_r,x_s)$ from the measurement $u^s(x_r,x_s)$ and (\ref{b8}).
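In an implementation, this preprocessing step is a single array subtraction. A sketch, assuming both fields are stored as complex arrays of shape $(N_r,N_s)$ (the variable names are ours):
\begin{verbatim}
# u_scat[r, s] = measured u^s(x_r, x_s); G_scat[r, s] = computed G_D^s(x_r, x_s)
V = u_scat - G_scat   # the data V_D(x_r, x_s) used by the RTM algorithm below
\end{verbatim}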
Now we are able to introduce the RTM method, which consists of two steps. The first step is to back-propagate the complex conjugated data $\overline{V_D(x_r,x_s)}$ into the domain $\Omega_R$; the second step is to calculate the imaginary part of the cross-correlation of $G_D(\cdot, x_s)$ and the back-propagated field. More precisely, we summarize it in the following algorithm.
\newline
{\bf Algorithm 1 (RTM for sound-soft locally rough surfaces)}: Given the data $V_D(x_r,x_s)$ for $r=1,2,...,N_r$ and $s=1,2,...,N_s$.
\begin{itemize}
\item Back-propagation: for $s=1,2,...,N_s$, solve the problem
\begin{eqnarray}\label{b10}
\left\{\begin{aligned}
&\Delta W_D(x,x_s)+\kappa_1^2W_D(x,x_s)=\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\overline{V_D(x_r,x_s)}\delta_{x_r}(x) \qquad\textrm{in}\;\; \Omega_R, \\
& \;W_D(x,x_s)=0\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;\textrm{on}\;\; \Gamma_R,\\
&\; \displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} W_D-{\rm i}\kappa_1 W_D\right)=0,
\end{aligned}
\right.
\end{eqnarray}
to obtain the solution $W_D$.
\item Cross-correlation: for each sampling point $z\in S$, calculate the indicator function
\begin{eqnarray*}\label{b11}
{\rm Ind}_D(z)=\kappa_1^2{\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\sum_{s=1}^{N_s}G_D(z,x_s)W_D(z,x_s)\right\}
\end{eqnarray*}
and then plot the mapping ${\rm Ind}_D(z)$ against $z$.
\end{itemize}
It follows from the linearity that the solution of Problem (\ref{b10}) can be represented by
\begin{eqnarray*}\label{b12}
W_D(x,x_s)=-\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\overline{V_D(x_r,x_s)}G_D(x,x_r),
\end{eqnarray*}
which leads to
\begin{eqnarray}\label{b13}
{\rm Ind}_D(z)=-\kappa_1^2{\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\frac{|\Gamma_r|}{N_r}\sum_{s=1}^{N_s}\sum_{r=1}^{N_r}G_D(z,x_s)G_D(z,x_r)\overline{V_D(x_r,x_s)}\right\}\quad z\in S.
\end{eqnarray}
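In practice the indicator (\ref{b13}) is a plain double sum and can be evaluated directly once the Green's function values and the data are available, e.g. from a Nystr\"{o}m solver. Below is a minimal Python sketch for a single sampling point $z$; the helper name and array layout are our own conventions, with \texttt{Gs[s]}$\,=G_D(z,x_s)$, \texttt{Gr[r]}$\,=G_D(z,x_r)$, and \texttt{V[r,s]}$\,=V_D(x_r,x_s)$.
\begin{verbatim}
import numpy as np

def rtm_indicator(Gs, Gr, V, kappa1, len_s, len_r):
    # Ind_D(z) = -kappa1^2 Im{ (|Gamma_s|/Ns)(|Gamma_r|/Nr)
    #            sum_{s,r} G_D(z,x_s) G_D(z,x_r) conj(V_D(x_r,x_s)) }
    ws, wr = len_s / len(Gs), len_r / len(Gr)
    return -kappa1**2 * np.imag(ws * wr * (Gr @ np.conj(V) @ Gs))
\end{verbatim}
Looping this routine over a grid of sampling points $z\in S$ and plotting the result produces the map ${\rm Ind}_D$.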
Observing that the sampling point $z\in S$, the source location $x_s\in\Gamma_s=\partial B_r^+$, and the receiver $x_r\in\Gamma_r=\partial B_r^+$, with $S$ contained in $B_R$, we see that $G_D(z,x_s)$ and $G_D(z,x_r)$ are smooth. Combining the smoothness of $V_D(x_r,x_s)$ with the trapezoid quadrature formula then shows that ${\rm Ind}_D(z)$ given by (\ref{b13}) is a discretization of the following continuous function:
\begin{eqnarray*}\label{b14}
\widetilde{{\rm Ind}}_D(z)=-\kappa_1^2{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}G_D(z,x_s)G_D(z,x_r)\overline{V_D(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r),\quad z\in S.
\end{eqnarray*}
The remaining part of this subsection is devoted to a resolution analysis of the function $\widetilde{{\rm Ind}}_D(z)$. Our goal is to show that $\widetilde{{\rm Ind}}_D(z)$ has contrast at the rough surface $\Gamma$ and decays away from $\Gamma$. To this end, we first introduce the following modified Helmholtz-Kirchhoff identity.
\begin{lemma}\label{lem1}
Let $G_D(x,z)$ be the total field of the scattering problem (\ref{b2}), and $\nu$ be the unit upward normal to $\partial B_r^+$, then we have
\begin{eqnarray*}\label{b4}
\int_{\partial B_r^+}\left(\overline{G_D(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial \overline{G_D(\xi,x)}}{\partial\nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)=2{\rm i}{\rm Im}G_D(x,z)
\end{eqnarray*}
for any $x,z\in B_r\cap\Omega_R$.
\end{lemma}
\begin{proof}
For any $x,z\in B_r\cap\Omega_R$, we choose a sufficiently small $\varepsilon>0$ such that the discs $B_{\varepsilon}(x)$ and $B_{\varepsilon}(z)$, centered at $x$ and $z$ with radius $\varepsilon$, are contained in the domain $B_r\cap\Omega_R$. A direct application of Green's theorem to $\overline{G_D(\cdot, x)}$ and $G_D(\cdot,z)$ in the domain $(B_r\cap\Omega_R)\setminus(B_{\varepsilon}(x)\cup B_{\varepsilon}(z))$ yields
\begin{eqnarray}\nonumber
0&=&\int_{(B_r\cap\Omega_R)\setminus(B_{\varepsilon}(x)\cup B_{\varepsilon}(z))}\left(\overline{G_D(\xi,x)}\Delta G_D(\xi,z)-\Delta\overline{G_D(\xi,x)}G_D(\xi,z)\right){\rm d}\xi\\\nonumber
&=&\int_{\partial B_r^+\cup(B_r\cap\Gamma_R)\cup\partial B_{\varepsilon}(x)\cup\partial B_{\varepsilon}(z)}\left(\overline{G_D(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial\overline{G_D(\xi,x)}}{\partial \nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)\\\label{b5}
&=&I_1+I_2+I_3+I_4,
\end{eqnarray}
where $\nu(\xi)$ denotes the unit outward normal to $\partial B_r^+$ when $\xi\in\partial B_r^+$, the unit downward normal to $B_r\cap\Gamma_R$ when $\xi\in B_r\cap\Gamma_R$, and the unit normal to $\partial B_{\varepsilon}(x)$, $\partial B_{\varepsilon}(z)$ directed into the interior of $B_{\varepsilon}(x)$, $B_{\varepsilon}(z)$, respectively. Since $G_D(\xi,x)$ and $G_D(\xi,z)$ vanish on $B_r\cap\Gamma_R$, we have $I_2=0$. For the term $I_3$, using $G_D(\xi,x)=G_D^s(\xi,x)+\Phi_{\kappa_1}(\xi,x)$ gives that
\begin{eqnarray*}\nonumber
I_3 &=& \int_{\partial B_{\varepsilon}(x)}\left(\overline{\Phi_{\kappa_1}(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial\overline{\Phi_{\kappa_1}(\xi,x)}}{\partial \nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)\\\nonumber
&&+\int_{\partial B_{\varepsilon}(x)}\left(\overline{G^s_D(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial\overline{G^s_D(\xi,x)}}{\partial \nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)\\\label{b6}
&\to&-G_D(x,z),\quad{\rm as}\;\;\varepsilon\to 0.
\end{eqnarray*}
Similarly, we can obtain
\begin{eqnarray}\label{b7}
\lim_{\varepsilon\to 0}I_4=\overline{G_D(z,x)}.
\end{eqnarray}
Combining (\ref{b5})--(\ref{b7}) implies that
\begin{eqnarray*}
\int_{\partial B_r^+}\left(\overline{G_D(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial\overline{G_D(\xi,x)}}{\partial \nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)=2{\rm i}{\rm Im} G_D(x,z),
\end{eqnarray*}
where we use the reciprocity relation $G_D(x,z)=G_D(z,x)$ for any $x,z\in \Omega_R$, which is proven in Lemma 3.1 of \cite{DLLY17}. The proof is completed.
\end{proof}
With the help of the above Helmholtz-Kirchhoff identity, we can establish the following lemma, which plays a key role in the analysis of $\widetilde{{\rm Ind}}_D(z)$.
\begin{lemma}\label{lem2}
For any $x,z\in S$, we have
\begin{eqnarray}\label{b15}
\kappa_1\int_{\partial B_r^+}\overline{G_D(x,\xi)}G_D(\xi,z){\rm d}s(\xi)={\rm Im}G_D(x,z)+\zeta_D(x,z)
\end{eqnarray}
where $\zeta_D(x,z)$ satisfies
\begin{eqnarray}\label{b16}
|\zeta_D(x,z)|+|\nabla_x\zeta_D(x,z)|\leq Cr^{-1}
\end{eqnarray}
uniformly for any $x,z\in S$.
\end{lemma}
\begin{proof}
For any $x,z\in S$, it follows from Lemma \ref{lem1} that
\begin{eqnarray*}
2{\rm i}{\rm Im}G_D(x,z)&=&\int_{\partial B_r^+}\left(\overline{G_D(\xi,x)}\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-\frac{\partial \overline{G_D(\xi,x)}}{\partial\nu(\xi)}G_D(\xi,z)\right){\rm d}s(\xi)\\
&=&\int_{\partial B_r^+}\Bigg\{\overline{G_D(\xi,x)}\left[\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-{\rm i}\kappa_1 G_D(\xi,z)\right]\\
&&-G_D(\xi,z)\left[\frac{\overline{\partial G_D(\xi,x)}}{\partial \nu(\xi)}+{\rm i}\kappa_1\overline{G_D(\xi,x)}\right]\Bigg\}{\rm d}s(\xi)\\
&&+2{\rm i}\kappa_1\int_{\partial B_r^+}\overline{G_D(\xi,x)}G_D(\xi,z){\rm d}s(\xi).
\end{eqnarray*}
Thus, a direct application of the reciprocity $G_D(\xi,x)=G_D(x,\xi)$ for $\xi\in \partial B_r^+$ and $x\in S$ yields
\begin{eqnarray*}
\kappa_1\int_{\partial B_r^+}\overline{G_D(x,\xi)}G_D(\xi,z){\rm d}s(\xi)={\rm Im}G_D(x,z)+\zeta_D(x,z)\qquad{\rm for\ all}\;\;x,z\in S,
\end{eqnarray*}
with
\begin{eqnarray*}
\zeta_D(x,z)&=&\frac{\rm i}{2}\int_{\partial B_r^+}\Bigg\{\overline{G_D(\xi,x)}\left[\frac{\partial G_D(\xi,z)}{\partial \nu(\xi)}-{\rm i}\kappa_1 G_D(\xi,z)\right]\\
&&-G_D(\xi,z)\left[\frac{\overline{\partial G_D(\xi,x)}}{\partial \nu(\xi)}+{\rm i}\kappa_1\overline{G_D(\xi,x)}\right]\Bigg\}{\rm d}s(\xi).
\end{eqnarray*}
Due to
\begin{eqnarray*}
G_D(\xi,y)=O(|\xi|^{-\frac{1}{2}}),\qquad\frac{\partial G_D(\xi,y)}{\partial \nu(\xi)}-{\rm i}\kappa_1 G_D(\xi,y)=O(|\xi|^{-\frac{3}{2}})
\end{eqnarray*}
for $y\in\{x,z\}$ and $x,z\in S$, it follows that
\begin{eqnarray*}
|\zeta_D(x,z)|\leq Cr^{-1}
\end{eqnarray*}
uniformly for $x,z\in S$. Since
\begin{eqnarray*}
\frac{\partial G_D(\xi,x)}{\partial x_j}=O(|\xi|^{-\frac{1}{2}}),\qquad\frac{\partial}{\partial x_j}\left[\frac{\partial G_D(\xi,x)}{\partial \nu(\xi)}-{\rm i}\kappa_1 G_D(\xi,x)\right]=O(|\xi|^{-\frac{3}{2}})
\end{eqnarray*}
for $j=1,2$ and $x\in S$, it follows that
\begin{eqnarray*}
|\nabla_x\zeta_D(x,z)|\leq Cr^{-1}
\end{eqnarray*}
uniformly for $x,z\in S$. Thus, we conclude that (\ref{b15}) and (\ref{b16}) hold. The proof is complete.
\end{proof}
\begin{lemma}\label{lem3}
Let $V_D(x,x_s)$ be the solution to Problem (\ref{b9}), then we have the Green's formula
\begin{eqnarray*}\label{b28}
V_D(x,x_s)=\int_{\Gamma\setminus\Gamma_R}\left[\frac{\partial G_D(\xi,x)}{\partial\nu(\xi)}V_D(\xi,x_s)-\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}G_D(\xi,x)\right]{\rm d}s(\xi)\qquad {\rm for}\;\;x\in\Omega_1.
\end{eqnarray*}
\end{lemma}
\begin{proof}
For any $x\in\Omega_1$, we choose a sufficiently large $\rho>0$ and a sufficiently small $\varepsilon>0$ such that the disc $B_{\varepsilon}(x)$ centered at $x$ with radius $\varepsilon$ is contained in $\Omega_{\rho}:=B_{\rho}\cap\Omega_1$, where $B_{\rho}$ denotes the disc centered at the origin with radius $\rho$. We now apply Green's theorem to the functions $V_D(\cdot, x_s)$ and $G_D(\cdot,x)$ in the domain $\Omega_{\rho}\setminus B_{\varepsilon}(x)$ to obtain
\begin{eqnarray}\nonumber
0&=&\int_{\Omega_{\rho}\setminus B_{\varepsilon}(x)}\left[\Delta V_D(\xi,x_s)G_D(\xi,x)-\Delta G_D(\xi,x)V_D(\xi,x_s)\right]{\rm d}\xi\\\nonumber
&=&\left\{\int_{\partial B_{\rho}^+}-\int_{\Gamma\cap B_{\rho}}+\int_{\partial B_{\varepsilon}(x)}\right\}\left[\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}G_D(\xi,x)-\frac{\partial G_D(\xi,x)}{\partial\nu(\xi)}V_D(\xi,x_s)\right]{\rm d}s(\xi)\\\label{b19}
&:=&I_1-I_2+I_3,
\end{eqnarray}
where $\nu(\xi)$ denotes the unit normal directed into the exterior of $B_{\rho}$ for $\xi\in\partial B_{\rho}^+$, and into the interior of $B_{\varepsilon}(x)$ for $\xi\in \partial B_{\varepsilon}(x)$. For the term $I_1$, our task is to show
\begin{eqnarray}\label{b20}
\lim_{\rho\to \infty} I_1=0.
\end{eqnarray}
To accomplish this, using the Sommerfeld radiation condition gives that
\begin{eqnarray}\nonumber
&&\int_{\partial B_{\rho}^+}\left[\left|\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}\right|^2+\kappa_1^2|V_D(\xi,x_s)|^2+2\kappa_1{\rm Im}\left(V_D(\xi,x_s)\frac{\partial \overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}\right)\right]{\rm d}s(\xi)\\\label{b21}
&&=\int_{\partial B_{\rho}^+}\left|\frac{\partial V_D(\xi,x_s)}{\partial\nu(\xi)}-{\rm i}\kappa_1 V_D(\xi,x_s)\right|^2{\rm d}s(\xi)\to 0\quad{\rm as}\;\;\rho\to\infty.
\end{eqnarray}
A direct application of Green's theorem leads to
\begin{align}\nonumber
&\int_{\partial B_{\rho}^+}V_D(\xi,x_s)\frac{\partial\overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}{\rm d}s(\xi)\\\label{b22}
&=\int_{B_{\rho}\cap\Gamma}V_D(\xi,x_s)\frac{\partial\overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}{\rm d}s(\xi)+\int_{B_{\rho}\cap\Omega_1}\left[|\nabla V_D(\xi,x_s)|^2-\kappa_1^2|V_D(\xi,x_s)|^2\right]{\rm d}\xi.
\end{align}
We now insert the imaginary part of (\ref{b22}) into (\ref{b21}) and find that
\begin{eqnarray*}\label{b23}
\lim_{\rho\to\infty}\int_{\partial B_{\rho}^+}\left(\left|\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}\right|^2+\kappa_1^2|V_D(\xi,x_s)|^2\right){\rm d}s(\xi)=-2\kappa_1{\rm Im}\int_{\Gamma\setminus \Gamma_R}V_D(\xi,x_s)\frac{\partial\overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}{\rm d}s(\xi),
\end{eqnarray*}
where we use the fact $V_D(\xi,x_s)=0$ for $\xi\in\Gamma\cap\Gamma_R$. Thus, we conclude
\begin{eqnarray}\label{b24}
\int_{\partial B_{\rho}^+}|V_D(\xi,x_s)|^2{\rm d}s=O(1),\quad \rho\to\infty.
\end{eqnarray}
Similarly, we have
\begin{eqnarray}\label{b25}
\int_{\partial B_{\rho}^+}|G_D(\xi,x)|^2{\rm d}s=O(1),\quad \rho\to\infty.
\end{eqnarray}
The term $I_1$ can be rewritten as
\begin{eqnarray*}
I_1&=&\int_{\partial B_{\rho}^+}\left[\frac{\partial V_D(\xi,x_s)}{\partial\nu(\xi)}-{\rm i}\kappa_1 V_D(\xi,x_s)\right]G_D(\xi,x){\rm d}s(\xi)\\
&&-\int_{\partial B_{\rho}^+}\left[\frac{\partial G_D(\xi,x)}{\partial\nu(\xi)}-{\rm i}\kappa_1 G_D(\xi,x)\right]V_D(\xi,x_s){\rm d}s(\xi)
\end{eqnarray*}
which, combined with (\ref{b24}), (\ref{b25}), the Sommerfeld radiation condition, and the Cauchy-Schwarz inequality, shows that (\ref{b20}) holds true.
For the term $I_3$, by $G_D(\xi,x)=G^s_D(\xi,x)+\Phi_{\kappa_1}(\xi,x)$, we have
\begin{eqnarray}\nonumber
I_3&=&\int_{\partial B_{\varepsilon}(x)}\left[\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}G^s_D(\xi,x)-\frac{\partial G^s_D(\xi,x)}{\partial\nu(\xi)}V_D(\xi,x_s)\right]{\rm d}s(\xi)\\\nonumber
&&+\int_{\partial B_{\varepsilon}(x)}\left[\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}\Phi_{\kappa_1}(\xi,x)-\frac{\partial \Phi_{\kappa_1}(\xi,x)}{\partial\nu(\xi)}V_D(\xi,x_s)\right]{\rm d}s(\xi)\\\label{b26}
&:=&I_{31}+I_{32}.
\end{eqnarray}
Applying Green's theorem in $B_{\varepsilon}(x)$ shows that $I_{31}=0$. A straightforward calculation using the mean value theorem shows that
\begin{eqnarray}\label{b27}
\lim_{\varepsilon\to 0}I_{32}= -V_D(x,x_s).
\end{eqnarray}
Thus, from (\ref{b19}), (\ref{b20}), (\ref{b26}), and (\ref{b27}), we have the following Green's formula
\begin{eqnarray*}\label{b28}
V_D(x,x_s)=\int_{\Gamma\setminus\Gamma_R}\left[\frac{\partial G_D(\xi,x)}{\partial\nu(\xi)}V_D(\xi,x_s)-\frac{\partial V_D(\xi,x_s)}{\partial \nu(\xi)}G_D(\xi,x)\right]{\rm d}s(\xi).
\end{eqnarray*}
The proof is completed.
\end{proof}
Now, we are in a position to present the resolution result of the RTM method for recovering a sound-soft, locally rough surface.
\begin{theorem}\label{thm1}
For any $z\in S$, let $\psi_D(\xi,z)$ solve
\begin{eqnarray}\label{b29}
\left\{\begin{aligned}
&\Delta \psi_D(\xi,z)+\kappa_1^2\psi_D(\xi,z)=0 \qquad\qquad\qquad\;{\rm in}\;\; \Omega_1, \\
&\;\psi_D(\xi,z)=-{\rm Im}G_D(\xi,z)\qquad\qquad\qquad\quad\;\;{\rm on}\;\; \Gamma\setminus\Gamma_R,\\
&\;\psi_D(\xi,z)=0\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;{\rm on}\;\; \Gamma\cap\Gamma_R,\\
&\displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \psi_D(\xi,z)-{\rm i}\kappa_1 \psi_D(\xi,z)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
and $\psi_D^{\infty}(\hat{\xi},z)$ be the far-field pattern of $\psi_D(\xi,z)$. Then for the indicator function $\widetilde{{\rm Ind}}_D(z)$, we have
\begin{eqnarray*}\label{b30}
\widetilde{{\rm Ind}}_D(z)=\kappa_1\int_{{\mathbb S}^+}|\psi_D^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\eta_D(z),\qquad \forall z\in S,
\end{eqnarray*}
where $\|\eta_D(z)\|_{L^{\infty}(S)}\leq Cr^{-1}$ with some constant $C$ depending on $R$.
\end{theorem}
\begin{proof}
Recall that
\begin{eqnarray}\nonumber
\widetilde{{\rm Ind}}_D(z)&=&-\kappa_1^2{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}G_D(z,x_s)G_D(z,x_r)\overline{V_D(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r)\\\label{b31}
&=&-\kappa_1{\rm Im}\int_{\Gamma_s}G_D(z,x_s)\widetilde{W}_D(z,x_s){\rm d}s(x_s),
\end{eqnarray}
where
\begin{eqnarray}\label{b32}
\widetilde{W}_D(z,x_s):=\kappa_1\int_{\Gamma_r}G_D(z,x_r)\overline{V_D(x_r,x_s)}{\rm d}s(x_r).
\end{eqnarray}
Substituting the Green's formula presented by Lemma \ref{lem3} into (\ref{b32}) and exchanging the order of integration leads to
\begin{eqnarray}\nonumber
\widetilde{W}_D(z,x_s)&=&\int_{\Gamma\setminus\Gamma_R}\bigg\{\overline{V_D(\xi,x_s)}\frac{\partial}{\partial\nu(\xi)}\left[\kappa_1\int_{\Gamma_r}G_D(z,x_r)\overline{G_D(\xi,x_r)}{\rm d}s(x_r)\right]\\\nonumber
&&\qquad\qquad-\left[\kappa_1\int_{\Gamma_r}G_D(z,x_r)\overline{G_D(\xi,x_r)}{\rm d}s(x_r)\right]\frac{\partial\overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi)\\\nonumber
&=&\int_{\Gamma\setminus\Gamma_R}\bigg\{\overline{V_D(\xi,x_s)}\frac{\partial}{\partial\nu(\xi)}\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\\\label{b33}
&&\qquad\qquad-\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\frac{\partial\overline{V_D(\xi,x_s)}}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi).
\end{eqnarray}
Substituting (\ref{b33}) into (\ref{b31}) gives
\begin{eqnarray*}\nonumber
\widetilde{{\rm Ind}}_D(z)&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\left[\kappa_1\int_{\Gamma_s}G_D(z,x_s)\overline{V_D(\xi,x_s)}{\rm d}s(x_s)\right]\frac{\partial}{\partial\nu(\xi)}\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\\\nonumber
&&\qquad-\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\frac{\partial}{\partial\nu(\xi)}\left[\kappa_1\int_{\Gamma_s}G_D(z,x_s)\overline{V_D(\xi,x_s)}{\rm d}s(x_s)\right]\bigg\}{\rm d}s(\xi)\\\nonumber
&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\phi_D(\xi,z)\frac{\partial}{\partial\nu(\xi)}\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\\\label{b34}
&&\qquad\qquad-\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\frac{\partial \phi_D(\xi,z)}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi),
\end{eqnarray*}
where $\phi_D(\xi,z)$ is defined by
\begin{eqnarray*}\label{b35}
\phi_D(\xi,z):=\kappa_1\int_{\Gamma_s}G_D(z,x_s)\overline{V_D(\xi,x_s)}{\rm d}s(x_s).
\end{eqnarray*}
Since $V_D(\xi,x_s)$ satisfies Problem (\ref{b9}), it follows from the linearity that $\overline{\phi_D}(\xi,z)$ solves the following problem
\begin{eqnarray}\label{b36}
\left\{\begin{aligned}
&\Delta \overline{\phi_D}(\xi,z)+\kappa_1^2 \overline{\phi_D}(\xi,z)=0 \qquad\qquad\qquad\;\;{\rm in}\;\; \Omega_1, \\
&\; \overline{\phi_D}(\xi,z)=-{\rm Im}G_D(\xi,z)-\zeta_D(\xi,z)\qquad\quad{\rm on}\;\; \Gamma\setminus\Gamma_R,\\
&\; \overline{\phi_D}(\xi,z)=0\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;{\rm on}\;\; \Gamma\cap\Gamma_R,\\
&\displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \overline{\phi_D}(\xi,z)-{\rm i}\kappa_1 \overline{\phi_D}(\xi,z)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
where we use Lemma \ref{lem2} to derive the boundary condition on $\Gamma\setminus\Gamma_R$. By the linearity, it is easy to obtain the following decomposition
\begin{eqnarray*}\label{b37}
\overline{\phi_D}(\xi,z)=\psi_D(\xi,z)+\varphi_D(\xi,z),
\end{eqnarray*}
where $\psi_D(\xi,z)$ and $\varphi_D(\xi,z)$ solve Problem (\ref{b36}) with the boundary data $-{\rm Im} G_D(\xi,z)$ and $-\zeta_D(\xi,z)$ on $\Gamma\setminus\Gamma_R$, respectively. Thus, we conclude
\begin{eqnarray}\nonumber
\widetilde{{\rm Ind}}_D(z)&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\left[\overline{\psi_D}(\xi,z)+\overline{\varphi_D}(\xi,z)\right]\frac{\partial}{\partial\nu(\xi)}\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\\\nonumber
&&\qquad\qquad-\left[{\rm Im}G_D(\xi,z)+\zeta_D(\xi,z)\right]\frac{\partial }{\partial\nu(\xi)}\left[\overline{\psi_D}(\xi,z)+\overline{\varphi_D}(\xi,z)\right]\bigg\}{\rm d}s(\xi)\\\nonumber
&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\psi_D}(\xi,z)\frac{\partial {\rm Im}G_D(\xi,z)}{\partial\nu(\xi)}-{\rm Im}G_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)+\eta_D(z)\\\nonumber
&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[-{\rm Im}G_D(\xi,z)\frac{\partial {\rm Im}G_D(\xi,z)}{\partial\nu(\xi)}+\psi_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)+\eta_D(z)\\\label{b38}
&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\psi_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)+\eta_D(z).
\end{eqnarray}
Here, $\eta_D(z)$ is defined by
\begin{eqnarray}\nonumber
\eta_D(z)&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\psi_D}(\xi,z)\frac{\partial \zeta_D(\xi,z)}{\partial\nu(\xi)}-\zeta_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)\\\nonumber
&&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\varphi_D}(\xi,z)\frac{\partial \zeta_D(\xi,z)}{\partial\nu(\xi)}-\zeta_D(\xi,z)\frac{\partial\overline{\varphi_D}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)\\\label{b39}
&&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\varphi_D}(\xi,z)\frac{\partial {\rm Im}G_D(\xi,z)}{\partial\nu(\xi)}-{\rm Im}G_D(\xi,z)\frac{\partial\overline{\varphi_D}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi).
\end{eqnarray}
Applying Green's theorem to the functions $\psi_D(\xi,z)$ and $\overline{\psi_D}(\xi,z)$ in the domain $\Omega_1\cap B_{\rho}$ yields
\begin{align}\nonumber
&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\psi_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)\\\nonumber
&=-{\rm Im}\int_{\partial B^+_{\rho}}\psi_D(\xi,z)\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)+{\rm Im}\int_{\Omega_1\cap B_{\rho}}\left[\psi_D(\xi,z)\Delta\overline{\psi_D}(\xi,z)+|\nabla\psi_D(\xi,z)|^2\right]{\rm d}\xi\\\label{b40}
&=-{\rm Im}\int_{\partial B^+_{\rho}}\psi_D(\xi,z)\left[\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}+{\rm i}\kappa_1\overline{\psi_D}(\xi,z)\right]{\rm d}s(\xi)+{\rm Im}\int_{\partial B^+_{\rho}}{\rm i}\kappa_1 |\psi_D(\xi,z)|^2{\rm d}s(\xi).
\end{align}
Noting that
\begin{eqnarray*}
\psi_D(\xi,z)=O(|\xi|^{-\frac{1}{2}})\quad{\rm and}\quad \frac{\partial\psi_D(\xi,z)}{\partial\nu(\xi)}-{\rm i}\kappa_1\psi_D(\xi,z)=o(|\xi|^{-\frac{1}{2}}),
\end{eqnarray*}
we have
\begin{eqnarray}\label{b41}
\lim_{\rho\to\infty}\int_{\partial B^+_{\rho}}\psi_D(\xi,z)\left[\frac{\partial\overline{\psi_D}(\xi,z)}{\partial\nu(\xi)}+{\rm i}\kappa_1\overline{\psi_D}(\xi,z)\right]{\rm d}s(\xi)=0.
\end{eqnarray}
Since $\psi_D(\xi,z)$ satisfies the Sommerfeld radiation condition, it admits the following asymptotic behavior
\begin{eqnarray*}\label{b42}
\psi_D(\xi,z)=\frac{e^{{\rm i}\kappa_1 |\xi|}}{|\xi|^{\frac{1}{2}}}\left[\psi^{\infty}_D(\hat{\xi},z)+O(|\xi|^{-1})\right]
\end{eqnarray*}
which implies
\begin{eqnarray}\label{b43}
\lim_{\rho\to\infty}\int_{\partial B_{\rho}^+}\kappa_1 |\psi_D(\xi,z)|^2{\rm d}s(\xi)=\kappa_1\int_{{\mathbb S}^+}|\psi_D^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi}).
\end{eqnarray}
Hence, with the help of (\ref{b38}) and (\ref{b40})--(\ref{b43}), we arrive at
\begin{eqnarray*}\label{b44}
\widetilde{{\rm Ind}}_D(z)=\kappa_1\int_{{\mathbb S}^+}|\psi_D^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\eta_D(z).
\end{eqnarray*}
The remaining part of the proof is the estimate of $\eta_D(z)$. For $\xi\in\Gamma\setminus\Gamma_R$ and $z\in S$, it follows from (\ref{b16}) that
\begin{eqnarray}\label{b45}
|\zeta_D(\xi,z)|+|\nabla_{\xi}\zeta_D(\xi,z)|\leq Cr^{-1}.
\end{eqnarray}
Observing that
\begin{eqnarray*}
{\rm Im} G_D(\xi,z)={\rm Im}G^s_D(\xi,z)+\frac{1}{4}J_0(\kappa_1 |\xi-z|),
\end{eqnarray*}
where $J_0$ stands for the Bessel function of order zero, it follows from the smoothness of $G_D^s(\xi,z)$ and $J_0(\kappa_1 |\xi-z|)$ that
\begin{eqnarray}\label{b46}
\left|{\rm Im} G_D(\xi,z)\right|+\left|\frac{\partial {\rm Im}G_D(\xi,z)}{\partial\nu(\xi)}\right|\leq C.
\end{eqnarray}
Since $\psi_D(\xi,z)$ and $\varphi_D(\xi,z)$ solve Problem (\ref{b36}) with the boundary data $-{\rm Im} G_D(\xi,z)$ and $-\zeta_D(\xi,z)$ on $\Gamma\setminus\Gamma_R$, respectively, a direct application of the well-posedness of Problem (\ref{b29}) (cf. \cite[Theorem 2.1]{DLLY17}) and the trace theorem shows that
\begin{eqnarray}\label{b47}
\|\psi_D(\xi,z)\|_{H^{\frac{1}{2}}(\Gamma\setminus\Gamma_R)}+\left\|\frac{\partial\psi_D(\xi,z)}{\partial\nu(\xi)}\right\|_{H^{-\frac{1}{2}}(\Gamma\setminus\Gamma_R)}
\lesssim \|{\rm Im}G_D(\xi,z)\|_{H^{\frac{1}{2}}(\Gamma\setminus\Gamma_R)}
\leq C,
\end{eqnarray}
and
\begin{eqnarray}\label{b48}
\|\varphi_D(\xi,z)\|_{H^{\frac{1}{2}}(\Gamma\setminus\Gamma_R)}+\left\|\frac{\partial\varphi_D(\xi,z)}{\partial\nu(\xi)}\right\|_{H^{-\frac{1}{2}}(\Gamma\setminus\Gamma_R)}\lesssim \|\zeta_D(\xi,z)\|_{H^{\frac{1}{2}}(\Gamma\setminus\Gamma_R)}\lesssim r^{-1},
\end{eqnarray}
where the notation $a\lesssim b$ means $a\leq Cb$ for some generic constant $C>0$, which may change from line to line.
Thus, with the aid of (\ref{b39}) and (\ref{b45})--(\ref{b48}), we can easily obtain
\begin{eqnarray*}
\|\eta_D(z)\|_{L^{\infty}(S)}\leq Cr^{-1},
\end{eqnarray*}
with $C$ depending on $R$. The proof is finished.
\end{proof}
\subsection{The Neumann problem}
This subsection is devoted to investigating the RTM for locally rough surfaces satisfying Neumann boundary conditions, i.e. ${\mathcal B}u=\partial_{\nu} u$ in (\ref{a5}). To this end, we first consider the scattering of the incident point source $\Phi_{\kappa_1}(x,x_s)$ by the special locally rough surface $\Gamma_R$ with a Neumann boundary condition. More precisely, we consider
\begin{eqnarray}\label{c1}
\left\{\begin{array}{llll}
\Delta G_N^s(x,x_s)+\kappa_1^2G^s_N(x,x_s)=0 &\textrm{in}\;\; \Omega_R, \\[3mm]
\partial_{\nu} G^s_N(x,x_s)=-\partial_{\nu} \Phi_{\kappa_1}(x,x_s)\quad&{\rm on}\;\;\Gamma_R\\[3mm]
\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} G^s_N(x,x_s)-{\rm i}\kappa_1 G^s_N(x,x_s)\right)=0,
\end{array}
\right.
\end{eqnarray}
where $G^s_N(x,x_s)$ represents the scattered field and $G_N(x,x_s):=\Phi_{\kappa_1}(x,x_s)+G^s_N(x,x_s)$ represents the total field. By a standard argument, one can conclude that Problem (\ref{c1}) admits a unique solution in the standard Sobolev space; we refer to Theorem 2.7 in \cite{QZZ19} for details.
Recall that $u^s(x,x_s)$ denotes the scattered field generated by the incident field $\Phi_{\kappa_1}(x,x_s)$ and the locally rough surface $\Gamma$ with the Neumann boundary condition. Now, we consider the difference
\begin{eqnarray}\label{c2}
V_N(x,x_s):=u^s(x,x_s)-G_N^s(x,x_s)
\end{eqnarray}
which, from (\ref{a5}) and (\ref{c1}), satisfies
\begin{eqnarray}\label{c3}
\left\{\begin{array}{llll}
\Delta V_N(x,x_s)+\kappa_1^2V_N(x,x_s)=0 &\textrm{in}\;\; \Omega_1, \\[3mm]
\partial_{\nu} V_N(x,x_s)=-\partial_{\nu} G_N(x,x_s)\quad&{\rm on}\;\;\Gamma\setminus\Gamma_R\\[3mm]
\partial_{\nu} V_N(x,x_s)=0\quad&{\rm on}\;\;\Gamma\cap\Gamma_R\\[3mm]
\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} V_N(x,x_s)-{\rm i}\kappa_1 V_N(x,x_s)\right)=0.
\end{array}
\right.
\end{eqnarray}
From the measurement $u^s(x_r,x_s)$ for $x_r\in\Gamma_r$ and $x_s\in\Gamma_s$, we can calculate $V_N(x_r,x_s)$ by (\ref{c2}), since $G^s_N(x_r,x_s)$ can be obtained by solving Problem (\ref{c1}) with the Nystr\"{o}m method or the finite element method. Thus, we are in a position to introduce the RTM method, which consists of two steps. The first step is back-propagation, in which we back-propagate the complex conjugated data $\overline{V_N(x_r,x_s)}$ into the domain $\Omega_R$; the second step is cross-correlation, in which we calculate the imaginary part of the cross-correlation of $G_N(x,x_s)$ and the back-propagated field.
{\bf Algorithm 2 (RTM for sound-hard locally rough surfaces)}: Given the data $V_N(x_r,x_s)$ for $r=1,2,...,N_r$ and $s=1,2,...,N_s$.
\begin{itemize}
\item Back-propagation: for $s=1,2,...,N_s$, solve the problem
\begin{eqnarray}\label{c4}
\left\{\begin{aligned}
&\Delta W_N(x,x_s)+\kappa_1^2W_N(x,x_s)=\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\overline{V_N(x_r,x_s)}\delta_{x_r}(x) \qquad\textrm{in}\;\; \Omega_R, \\
&\partial_{\nu} W_N(x,x_s)=0,\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\;\;\textrm{on}\;\;\Gamma_R,\\
&\; \displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} W_N-{\rm i}\kappa_1 W_N\right)=0,
\end{aligned}
\right.
\end{eqnarray}
to get the solution $W_N(x,x_s)$.
\item Cross-correlation: for each sampling point $z\in S$, compute the indicator function
\begin{eqnarray*}\label{c5}
{\rm Ind}_N(z)=\kappa_1^2{\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\sum_{s=1}^{N_s}G_N(z,x_s)W_N(z,x_s)\right\}
\end{eqnarray*}
and then plot the mapping ${\rm Ind}_N(z)$ against $z$.
\end{itemize}
Due to the linearity, the solution of Problem (\ref{c4}) can be expressed as
\begin{eqnarray*}\label{c6}
W_N(x,x_s)=-\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\overline{V_N(x_r,x_s)}G_N(x,x_r),
\end{eqnarray*}
which implies
\begin{eqnarray*}\label{c7}
{\rm Ind}_N(z)=-\kappa_1^2{\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\frac{|\Gamma_r|}{N_r}\sum_{s=1}^{N_s}\sum_{r=1}^{N_r}G_N(z,x_s)G_N(z,x_r)\overline{V_N(x_r,x_s)}\right\}\qquad z\in S.
\end{eqnarray*}
Since $G_N(z,x_s)$, $G_N(z,x_r)$, and $V_N(x_r,x_s)$ are smooth for $z\in S$, $x_s\in\Gamma_s$, $x_r\in\Gamma_r$, it follows from the trapezoid quadrature formula that ${\rm Ind}_N(z)$ is a discretization of the following continuous function
\begin{eqnarray*}\label{c8}
\widetilde{{\rm Ind}}_N(z)=-\kappa_1^2{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}G_N(z,x_s)G_N(z,x_r)\overline{V_N(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r)\qquad z\in S.
\end{eqnarray*}
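The numerical evaluation of $\widetilde{{\rm Ind}}_N$ is structurally identical to the sound-soft case; only the Green's function and the data change. For instance, reusing the illustrative \texttt{rtm\_indicator} helper sketched in the previous subsection (again our own naming, not a standard routine):
\begin{verbatim}
# GNs[s] = G_N(z, x_s), GNr[r] = G_N(z, x_r), VN[r, s] = V_N(x_r, x_s)
ind_N = rtm_indicator(GNs, GNr, VN, kappa1, len_s, len_r)
\end{verbatim}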
In the remaining part of this subsection, we will demonstrate that $\widetilde{\rm Ind}_N(z)$ has contrast at the rough surface $\Gamma$ and decays away from $\Gamma$. To this end, we first introduce the following three lemmas, which can be proven by arguments similar to those of Lemma \ref{lem1}, Lemma \ref{lem2}, and Lemma \ref{lem3}; we therefore omit the proofs.
\begin{lemma}\label{lem4}
Let $G_N(x,z)$ be the total field of Problem (\ref{c1}), and $\nu$ be the unit upward normal to $\partial B_r^+$, then we have
\begin{eqnarray*}\label{c9}
\int_{\partial B_r^+}\left(\overline{G_N(\xi,x)}\frac{\partial G_N(\xi,z)}{\partial\nu(\xi)}-\frac{\partial\overline{G_N(\xi,x)}}{\partial\nu(\xi)}G_N(\xi,z)\right){\rm d}s(\xi)=2{\rm i}{\rm Im}G_N(x,z)
\end{eqnarray*}
for any $x,z\in B_r\cap\Omega_R$.
\end{lemma}
\begin{lemma}\label{lem5}
For any $x,z\in S$, we have
\begin{eqnarray*}\label{c10}
\kappa_1\int_{\partial B_r^+}\overline{G_N(x,\xi)}G_N(\xi,z){\rm d}s(\xi)={\rm Im}G_N(x,z)+\zeta_N(x,z)
\end{eqnarray*}
where $\zeta_N(x,z)$ satisfies
\begin{eqnarray*}\label{c11}
|\zeta_N(x,z)|+|\nabla_x\zeta_N(x,z)|\leq Cr^{-1}
\end{eqnarray*}
uniformly for any $x,z\in S$.
\end{lemma}
\begin{lemma}\label{lem6}
Let $V_N(x,x_s)$ be the solution to Problem (\ref{c3}), then we have the Green's formula
\begin{eqnarray*}\label{c12}
V_N(x,x_s)=\int_{\Gamma\setminus\Gamma_R}\left[\frac{\partial G_N(\xi,x)}{\partial\nu(\xi)}V_N(\xi,x_s)-\frac{\partial V_N(\xi,x_s)}{\partial \nu(\xi)}G_N(\xi,x)\right]{\rm d}s(\xi)\qquad {\rm for}\;\;x\in\Omega_1.
\end{eqnarray*}
\end{lemma}
The above three lemmas enable us to present the main result of this subsection, which shows that the RTM approach is valid for reconstructing a sound-hard, locally rough surface.
\begin{theorem}\label{thm2}
For any $z\in S$, let $\psi_N(\xi,z)$ satisfy
\begin{eqnarray}\label{c13}
\left\{\begin{aligned}
&\Delta \psi_N(\xi,z)+\kappa_1^2\psi_N(\xi,z)=0 &&{\rm in}\;\; \Omega_1, \\
&\;\partial_{\nu} \psi_N(\xi,z)=-\partial_{\nu}{\rm Im}G_N(\xi,z)&&{\rm on}\;\; \Gamma\setminus\Gamma_R,\\
&\;\partial_{\nu} \psi_N(\xi,z)=0&&{\rm on}\;\; \Gamma\cap\Gamma_R,\\
&\displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \psi_N(\xi,z)-{\rm i}\kappa_1 \psi_N(\xi,z)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
and $\psi_N^{\infty}(\hat{\xi},z)$ be the far-field pattern of $\psi_N(\xi,z)$. Then for the indicator function $\widetilde{{\rm Ind}}_N(z)$, we have
\begin{eqnarray*}\label{c14}
\widetilde{{\rm Ind}}_N(z)=\kappa_1\int_{{\mathbb S}^+}|\psi_N^{\infty}(\hat{\xi},z)|^2ds(\hat{\xi})+\eta_N(z),\qquad \forall z\in S,
\end{eqnarray*}
where $\|\eta_N(z)\|_{L^{\infty}(S)}\leq Cr^{-1}$ with some constant $C$ depending on $R$.
\end{theorem}
\begin{proof}
Note that the indicator function $\widetilde{{\rm Ind}}_N(z)$ can be rewritten as
\begin{eqnarray}\nonumber
\widetilde{{\rm Ind}}_N(z)&=&-\kappa_1^2{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}G_N(z,x_s)G_N(z,x_r)\overline{V_N(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r)\\\label{c15}
&=&-\kappa_1{\rm Im}\int_{\Gamma_s}G_N(z,x_s)\widetilde{W}_N(z,x_s){\rm d}s(x_s)
\end{eqnarray}
with the definition
\begin{eqnarray*}\label{c16}
\widetilde{W}_N(z,x_s):=\kappa_1\int_{\Gamma_r}G_N(z,x_r)\overline{V_N(x_r,x_s)}{\rm d}s(x_r).
\end{eqnarray*}
With the aid of Lemma \ref{lem5} and Lemma \ref{lem6}, we obtain
\begin{eqnarray}\nonumber
\widetilde{W}_N(z,x_s)&=&\int_{\Gamma\setminus\Gamma_R}\bigg\{\overline{V_N(\xi,x_s)}\frac{\partial}{\partial\nu(\xi)}\left[\kappa_1\int_{\Gamma_r}G_N(z,x_r)\overline{G_N(\xi,x_r)}{\rm d}s(x_r)\right]\\\nonumber
&&\qquad-\left[\kappa_1\int_{\Gamma_r}G_N(z,x_r)\overline{G_N(\xi,x_r)}{\rm d}s(x_r)\right]\frac{\partial\overline{V_N(\xi,x_s)}}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi)\\\nonumber
&=&\int_{\Gamma\setminus\Gamma_R}\bigg\{\overline{V_N(\xi,x_s)}\frac{\partial}{\partial\nu(\xi)}\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\\\label{c17}
&&\qquad-\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\frac{\partial\overline{V_N(\xi,x_s)}}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi).
\end{eqnarray}
Substituting (\ref{c17}) into (\ref{c15}) yields
\begin{eqnarray*}\nonumber
&&\widetilde{{\rm Ind}}_N(z)=-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\left[\kappa_1\int_{\Gamma_s}G_N(z,x_s)\overline{V_N(\xi,x_s)}{\rm d}s(x_s)\right]\frac{\partial }{\partial\nu(\xi)}\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\\\nonumber
&&\qquad\qquad-\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\frac{\partial }{\partial\nu(\xi)}\left[\kappa_1\int_{\Gamma_s}G_N(z,x_s)\overline{V_N(\xi,x_s)}{\rm d}s(x_s)\right]\bigg\}{\rm d}s(\xi)\\\nonumber
&&\qquad\quad:=-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\phi_N(\xi,z)\frac{\partial }{\partial\nu(\xi)}\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\\\label{c18}
&&\qquad\qquad-\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\frac{\partial \phi_N(\xi,z)}{\partial\nu(\xi)}\bigg\}{\rm d}s(\xi)
\end{eqnarray*}
where
\begin{eqnarray*}
\phi_N(\xi,z):=\kappa_1\int_{\Gamma_s}G_N(z,x_s)\overline{V_N(\xi,x_s)}{\rm d}s(x_s).
\end{eqnarray*}
It follows from (\ref{c3}) that $\overline{\phi_N}(\xi,z)$ satisfies
\begin{eqnarray}\label{c19}
\left\{\begin{aligned}
&\Delta \overline{\phi_N}(\xi,z)+\kappa_1^2\overline{\phi_N}(\xi,z)=0 &&\textrm{in}\;\; \Omega_1, \\[3mm]
& \partial_{\nu} \overline{\phi_N}(\xi,z)=-\partial_{\nu} \left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\quad&&{\rm on}\;\;\Gamma\setminus\Gamma_R\\[3mm]
& \partial_{\nu} \overline{\phi_N}(\xi,z)=0\quad&&{\rm on}\;\;\Gamma\cap\Gamma_R\\[3mm]
& \displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \overline{\phi_N}(\xi,z)-{\rm i}\kappa_1 \overline{\phi_N}(\xi,z)\right)=0.
\end{aligned}
\right.
\end{eqnarray}
where we use Lemma \ref{lem5} to obtain the boundary condition on $\Gamma\setminus\Gamma_R$. Let $\psi_N(\xi,z)$ and $\varphi_N(\xi,z)$ satisfy Problem (\ref{c19}) with the boundary data $-\partial_{\nu} {\rm Im}G_N(\xi,z)$ and $-\partial_{\nu}\zeta_N(\xi,z)$ on $\Gamma\setminus\Gamma_R$, respectively. Thus, by linearity, we have
\begin{eqnarray*}
\overline{\phi_N}(\xi,z)=\psi_N(\xi,z)+\varphi_N(\xi,z).
\end{eqnarray*}
Hence, we conclude
\begin{eqnarray*}\nonumber
&&\widetilde{{\rm Ind}}_N(z)=-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\bigg\{\left[\overline{\psi_N}(\xi,z)+\overline{\varphi_N}(\xi,z)\right]\frac{\partial }{\partial\nu(\xi)}\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\\\nonumber
&&\qquad\qquad-\left[{\rm Im}G_N(\xi,z)+\zeta_N(\xi,z)\right]\frac{\partial}{\partial\nu(\xi)}\left[\overline{\psi_N}(\xi,z)+\overline{\varphi_N}(\xi,z)\right]\bigg\}{\rm d}s(\xi)\\\nonumber
&&=-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\psi_N}(\xi,z)\frac{\partial {\rm Im}G_N(\xi,z)}{\partial\nu(\xi)}-{\rm Im}G_N(\xi,z)\frac{\partial\overline{\psi_N}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)+\eta_N(z)\\\nonumber
&&={\rm Im}\int_{\Gamma\setminus\Gamma_R}\overline{\psi_N}(\xi,z)\frac{\partial\psi_N(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)+\eta_N(z).
\end{eqnarray*}
Here, $\eta_N(z)$ is defined by
\begin{eqnarray*}\nonumber
\eta_N(z)&=&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\psi_N}(\xi,z)\frac{\partial \zeta_N(\xi,z)}{\partial\nu(\xi)}-\zeta_N(\xi,z)\frac{\partial\overline{\psi_N}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)\\\nonumber
&&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\varphi_N}(\xi,z)\frac{\partial \zeta_N(\xi,z)}{\partial\nu(\xi)}-\zeta_N(\xi,z)\frac{\partial\overline{\varphi_N}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi)\\\label{c21}
&&-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\left[\overline{\varphi_N}(\xi,z)\frac{\partial {\rm Im}G_N(\xi,z)}{\partial\nu(\xi)}-{\rm Im}G_N(\xi,z)\frac{\partial\overline{\varphi_N}(\xi,z)}{\partial\nu(\xi)}\right]{\rm d}s(\xi).
\end{eqnarray*}
By Theorem 2.7 in \cite{QZZ19}, Lemma \ref{lem5}, and an argument similar to that for the sound-soft case, we can obtain
\begin{eqnarray*}
&&{\rm Im}\int_{\Gamma\setminus\Gamma_R}\overline{\psi_N}(\xi,z)\frac{\partial\psi_N(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)\\
&&=-{\rm Im}\int_{\Gamma\setminus\Gamma_R}\psi_N(\xi,z)\frac{\partial\overline{\psi_N}(\xi,z)}{\partial\nu(\xi)}{\rm d}s(\xi)\\
&&=-{\rm Im}\int_{\partial B^+_{\rho}}\psi_N(\xi,z)\left[\frac{\partial\overline{\psi_N}(\xi,z)}{\partial\nu(\xi)}+{\rm i}\kappa_1\overline{\psi_N}(\xi,z)\right]{\rm d}s(\xi)+{\rm Im}\int_{\partial B^+_{\rho}}{\rm i}\kappa_1 |\psi_N(\xi,z)|^2{\rm d}s(\xi)\\
&&\to \kappa_1\int_{{\mathbb S}^+}|\psi_N^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})\quad{\rm as}\;\rho\to\infty,
\end{eqnarray*}
and
\begin{eqnarray*}
\|\eta_N(z)\|_{L^{\infty}(S)}\leq Cr^{-1},
\end{eqnarray*}
with $C$ depending on $R$. The proof is finished.
\end{proof}
\section{The RTM for penetrable locally rough interfaces}
\setcounter{equation}{0}
The goal of this section is to develop the RTM method for penetrable, locally rough interfaces. As shown in Figure \ref{f2},
let $\Gamma$ be the locally rough interface and let $S$ be the sampling domain which contains the local perturbation of $\Gamma$. We choose a sufficiently large $R$ and define $\Gamma_R$ by (\ref{b1}) so that the sampling domain $S$ lies entirely above $\Gamma_R$. For $r>R$, we denote by $B_{r}$ the disc centered at the origin with radius $r$. We assume that there are $N_s$ point sources $x_s$ uniformly distributed on $\Gamma_s:=\partial B_{r}$ and $N_r$ receivers $x_r$ uniformly distributed on $\Gamma_r:=\partial B_{r}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in, height=3in]{pictures/Geomery_penetrable.jpg}
\caption{The setting of the RTM method for the penetrable case.}
\label{f2}
\end{figure}
To establish the mathematical justification of the RTM method, we first introduce the Green's function $G_P$ associated with the two-dimensional Helmholtz equation in a two-layered medium separated by $\Gamma_R$, which satisfies
\begin{eqnarray}\label{d1}
\left\{\begin{array}{lll}
\Delta_x G_P(x,y)+\kappa_{P}^2(x)G_P(x,y)=-\delta_y(x) & \textrm{in}\;\;{\mathbb R}^2, \\[2mm]
\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} G_P(x,y)
-{\rm i}\kappa_{P}(x)G_P(x,y)\right)=0
\end{array}
\right.
\end{eqnarray}
where the first equation holds in the distributional sense and the Sommerfeld radiation condition holds uniformly for all directions
$\hat{x}\in\mathbb{S}$. Here, $y\in{\mathbb R}^2\setminus\Gamma_R$, and the wave number $\kappa_P(x)$ is defined by $\kappa_P(x):=\kappa_1$ in $\Omega_{1,R}$ and $\kappa_P(x):=\kappa_2$ in $\Omega_{2,R}$, with $\Omega_{1,R}$ and $\Omega_{2,R}$ being the upper and lower half-spaces separated by $\Gamma_R$, respectively. We refer to Theorem 2.1 and Theorem 2.2 in \cite{YLZ22} for the well-posedness of the background Green's function $G_P(x,y)$ for $y\in{\mathbb R}^2\setminus\Gamma_R$.
Define
\begin{eqnarray*}\label{d2}
V_P(x,x_s):=u(x,x_s)-G_P(x,x_s),
\end{eqnarray*}
which, from (\ref{a7}) and (\ref{d1}), satisfies
\begin{eqnarray}\label{d3}
\left\{\begin{array}{lll}
\Delta V_P(x,x_s)+\kappa^2(x)V_P(x,x_s)=g(x,x_s) &\textrm{in}\;\; {\mathbb R}^2, \\[3mm]
\displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} V_P(x,x_s)-{\rm i}\kappa(x) V_P(x,x_s)\right)=0.
\end{array}
\right.
\end{eqnarray}
Here $g(x,x_s)$ is a compactly supported function given by $g(x,x_s):=(\kappa_1^2-\kappa_2^2)G_P(x,x_s)$ in $D_R$ and $g(x,x_s):=0$ in ${\mathbb R}^2\setminus\overline{D_R}$, where $D_R:=\Omega_2\cap\Omega_{1,R}$.
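For the penetrable case, the two wavenumber distributions $\kappa(x)$ (relative to $\Gamma$) and $\kappa_P(x)$ (relative to $\Gamma_R$), as well as membership in the contrast region $D_R$, are easily encoded once the profile $f$ and the special surface are available. A minimal Python sketch, reusing the illustrative \texttt{gamma\_R} from Section 2 together with an arbitrary profile \texttt{f} of our own choosing:
\begin{verbatim}
import numpy as np

def f(x1):
    # illustrative Lipschitz profile with compact support in [-1, 1]
    x1 = np.asarray(x1, dtype=float)
    return 0.3 * np.cos(np.pi * x1 / 2.0)**2 * (np.abs(x1) <= 1.0)

def kappa(x1, x2, k1, k2):
    # wavenumber relative to Gamma: k1 in Omega_1, k2 in Omega_2
    return np.where(x2 > f(x1), k1, k2)

def kappa_P(x1, x2, k1, k2):
    # wavenumber relative to the special surface Gamma_R
    return np.where(x2 > gamma_R(x1), k1, k2)

def in_D_R(x1, x2):
    # D_R = Omega_2 cap Omega_{1,R}: below Gamma but above Gamma_R
    return (x2 < f(x1)) & (x2 > gamma_R(x1))
\end{verbatim}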
The main idea of the RTM algorithm is to break the reconstruction of $\Gamma$ into two parts: the first part is to back-propagate the complex conjugated data $\overline{V_P(x_r,x_s)}$, and the second part is to calculate the imaginary part of the cross-correlation of $G_P(z,x_s)$ and the back-propagated field. We summarize it in the following algorithm.
{\bf Algorithm 3 (RTM for penetrable locally rough interface)}: Given the data $V_P(x_r,x_s)$ for $r=1,2,...,N_r$ and $s=1,2,...,N_s$.
\begin{itemize}
\item Back-propagation: for $s=1,2,...,N_s$, solve the problem
\begin{eqnarray}\label{d4}
\left\{\begin{aligned}
&\Delta W_P(x,x_s)+\kappa_P^2(x)W_P(x,x_s)=\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\kappa(x_r)\overline{V_P(x_r,x_s)}\delta_{x_r}(x) \qquad\textrm{in}\;\; {\mathbb R}^2, \\
&\; \displaystyle\lim_{|x|\rightarrow \infty}|x|^{\frac{1}{2}}\left(\partial_{|x|} W_P(x,x_s)-{\rm i}\kappa_P(x) W_P(x,x_s)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
to obtain the solution $W_P$.
\item Cross-correlation: for each sampling point $z\in S$, calculate the indicator function
\begin{eqnarray*}\label{d5}
{\rm Ind}_P(z)={\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\sum_{s=1}^{N_s}\kappa(x_s)G_P(z,x_s)W_P(z,x_s)\right\}
\end{eqnarray*}
and then plot the mapping ${\rm Ind}_P(z)$ against $z$.
\end{itemize}
Combining (\ref{d1}) with the linearity shows that the solution of (\ref{d4}) can be expressed as
\begin{eqnarray*}\label{d6}
W_P(x,x_s)=-\frac{|\Gamma_r|}{N_r}\sum_{r=1}^{N_r}\kappa(x_r)G_P(x,x_r)\overline{V_P(x_r,x_s)}.
\end{eqnarray*}
Hence, we obtain
\begin{eqnarray*}\label{d7}
{\rm Ind}_P(z)=-{\rm Im}\left\{\frac{|\Gamma_s|}{N_s}\frac{|\Gamma_r|}{N_r}\sum_{s=1}^{N_s}\sum_{r=1}^{N_r}\kappa(x_r)\kappa(x_s)G_P(z,x_s)G_P(z,x_r)\overline{V_P(x_r,x_s)}\right\}\quad z\in S,
\end{eqnarray*}
which is a discretization of the following continuous function
\begin{eqnarray*}\label{d8}
\widetilde{{\rm Ind}}_P(z)=-{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}\kappa(x_r)\kappa(x_s)G_P(z,x_s)G_P(z,x_r)\overline{V_P(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r),\quad z\in S.
\end{eqnarray*}
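As in the impenetrable cases, the evaluation of the indicator reduces to a weighted double sum; the only difference is that the factor $\kappa_1^2$ is replaced by the weights $\kappa(x_r)\kappa(x_s)$ under the sum, and the sources and receivers now lie on the full circle $\partial B_r$. A sketch with precomputed arrays and our own naming:
\begin{verbatim}
import numpy as np

def rtm_indicator_penetrable(Gs, Gr, V, k_s, k_r, len_s, len_r):
    # Ind_P(z) = -Im{ (|Gamma_s|/Ns)(|Gamma_r|/Nr) sum_{s,r}
    #   kappa(x_r) kappa(x_s) G_P(z,x_s) G_P(z,x_r) conj(V_P(x_r,x_s)) }
    ws, wr = len_s / len(Gs), len_r / len(Gr)
    return -np.imag(ws * wr * ((k_r * Gr) @ np.conj(V) @ (k_s * Gs)))
\end{verbatim}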
In the remaining part of this section, we restrict ourselves to showing that the function $\widetilde{{\rm Ind}}_P(z)$ has contrast at the rough interface $\Gamma$ and decays away from $\Gamma$. To this end, we first introduce the following modified Helmholtz-Kirchhoff identity. It can be shown by a direct application of Green's theorem together with the continuity of $G_P$ and its normal derivative across $\Gamma_R$; the argument is similar to the proof of Lemma \ref{lem1}, so we omit it here.
\begin{lemma}\label{lem7}
Let $G_P$ be the background Green's function defined by (\ref{d1}). Then for any $x,z\in B_r\setminus\Gamma_R$, we have
\begin{eqnarray*}\label{d9}
\int_{\partial B_r}\left(\overline{G_P(\xi,x)}\frac{\partial G_P(\xi,z)}{\partial \nu(\xi)}-\frac{\partial \overline{G_P(\xi,x)}}{\partial\nu(\xi)}G_P(\xi,z)\right){\rm d}s(\xi)=2{\rm i}{\rm Im}G_P(x,z).
\end{eqnarray*}
\end{lemma}
With the above modified Helmholtz-Kirchhoff identity and the Sommerfeld radiation condition, it is easy to obtain the following lemma, which plays an important role in the analysis of $\widetilde{{\rm Ind}}_P(z)$.
\begin{lemma}\label{lem8}
For any $x,z\in S$, we have
\begin{eqnarray*}\label{d10}
\int_{\partial B_r}\kappa(\xi)\overline{G_P(x,\xi)}G_P(\xi,z){\rm d}s(\xi)={\rm Im}G_P(x,z)+\zeta_P(x,z)
\end{eqnarray*}
where $\zeta_P(x,z)$ satisfies
\begin{eqnarray}\label{d11}
|\zeta_P(x,z)|+|\nabla_x\zeta_P(x,z)|\leq Cr^{-1}
\end{eqnarray}
uniformly for any $x,z\in S$.
\end{lemma}
To analyze the indicator function $\widetilde{{\rm Ind}}_P(z)$, we also need the following Green's formula, which is established in Theorem 2.1 of \cite{LYZ21}.
\begin{lemma}\label{lem9}
Let $V_P(x,x_s)$ be the solution of Problem (\ref{d3}). Then we have
\begin{eqnarray*}\label{d12}
V_P(x,x_s)=(\kappa_2^2-\kappa_1^2)\int_{D_R}G_P(x,\xi)u(\xi,x_s){\rm d}\xi,\quad{\rm for}\;\;x\in{\mathbb R}^2.
\end{eqnarray*}
\end{lemma}
Now we are in a position to present the main result of this section.
\begin{theorem}\label{thm3}
For any $z\in S$, let $\psi_P(\xi,z)$ be the solution of
\begin{eqnarray}\label{d13}
\left\{\begin{aligned}
&\Delta \psi_P(\xi,z)+\kappa^2(\xi)\psi_P(\xi,z)=(\kappa_2^2-\kappa_1^2)\chi_{D_R}(\xi){\rm Im}G_P(\xi,z) \qquad{\rm in}\;\; {\mathbb R}^2, \\
&\; \displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \psi_P(\xi,z)-{\rm i}\kappa(\xi) \psi_P(\xi,z)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
where $\chi_{D_R}$ is the characteristic function of the domain $D_R$, i.e. $\chi_{D_R}=1$ in $D_R$ and $\chi_{D_R}=0$ outside $D_R$, and let $\psi_P^{\infty}(\hat{\xi},z)$ be the corresponding far-field pattern. Then we have
\begin{eqnarray*}\label{d14}
\widetilde{{\rm Ind}}_P(z)=\kappa_1\int_{{\mathbb S}^+}|\psi_P^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\kappa_2\int_{{\mathbb S}^-}|\psi_P^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\eta_P(z)\quad{\forall z\in S}
\end{eqnarray*}
where ${\mathbb S}^-:=\{x\in{\mathbb R}^2: |x|=1, x_2<0\}$ and $\|\eta_P(z)\|_{L^{\infty}(S)}\leq Cr^{-1}$ with some constant $C$ depending on $R$.
\end{theorem}
\begin{proof}
Note that
\begin{eqnarray*}\label{d15}
\widetilde{{\rm Ind}}_P(z)&=&-{\rm Im}\int_{\Gamma_r}\int_{\Gamma_s}\kappa(x_s)\kappa(x_r)G_P(z,x_s)G_P(z,x_r)\overline{V_P(x_r,x_s)}{\rm d}s(x_s){\rm d}s(x_r)\\
&=&-{\rm Im}\int_{\Gamma_s}\kappa(x_s)G_P(z,x_s)\widetilde{W}_P(z,x_s){\rm d}s(x_s)
\end{eqnarray*}
where
\begin{eqnarray*}\label{d16}
\widetilde{W}_P(z,x_s):=\int_{\Gamma_r}\kappa(x_r)G_P(z,x_r)\overline{V_P(x_r,x_s)}{\rm d}s(x_r).
\end{eqnarray*}
With the help of Lemma \ref{lem9} and Lemma \ref{lem8}, we can rewrite $\widetilde{W}_P(z,x_s)$ as
\begin{eqnarray*}\label{d17}
\widetilde{W}_P(z,x_s)&=&(\kappa_2^2-\kappa_1^2)\int_{D_R}\left[\int_{\Gamma_r}\kappa(x_r)G_P(z,x_r)\overline{G_P(x_r,\xi)}{\rm d}s(x_r)\right]\overline{u(\xi,x_s)}{\rm d}\xi\\
&=&(\kappa_2^2-\kappa_1^2)\int_{D_R}\left[{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right]\overline{u(\xi,x_s)}{\rm d}\xi
\end{eqnarray*}
which leads to
\begin{eqnarray*}\label{d18}
\widetilde{{\rm Ind}}_P(z)=-(\kappa_2^2-\kappa_1^2){\rm Im}\int_{D_R}\left[{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right]\phi_P(\xi,z){\rm d}\xi,
\end{eqnarray*}
with
\begin{eqnarray*}\label{d19}
\phi_P(\xi,z):=\int_{\Gamma_s}\kappa(x_s)G_P(z,x_s)\overline{u(\xi,x_s)}{\rm d}s(x_s).
\end{eqnarray*}
Due to the following Lippmann-Schwinger integral equation (cf. \cite[Theorem 2.1]{LYZ21})
\begin{eqnarray*}\label{d20}
u(\xi,x_s)+(\kappa_1^2-\kappa_2^2)\int_{D_R}G_P(\xi,y)u(y,x_s){\rm d}y=G_P(\xi,x_s)\quad{\rm for}\;\;\xi\in{\mathbb R}^2,
\end{eqnarray*}
we arrive at
\begin{eqnarray*}\nonumber
\overline{\phi_P(\xi,z)}&=&\int_{\Gamma_s}\kappa(x_s)\overline{G_P(z,x_s)}G_P(\xi,x_s){\rm d}s(x_s)-(\kappa_1^2-\kappa_2^2)\int_{D_R}G_P(\xi,y)\overline{\phi_P(y,z)}{\rm d}y\\\label{d21}
&=&{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)-(\kappa_1^2-\kappa_2^2)\int_{D_R}G_P(\xi,y)\overline{\phi_P(y,z)}{\rm d}y.
\end{eqnarray*}
Let $\theta(\xi,z)=\overline{\phi_P(\xi,z)}-\left[{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right]$, then
\begin{eqnarray*}\label{d22}
\theta(\xi,z)=(\kappa_2^2-\kappa_1^2)\int_{D_R}G_P(\xi,y)\left[\theta(y,z)+{\rm Im}G_P(y,z)+\zeta_P(y,z)\right]{\rm d}y.
\end{eqnarray*}
Hence, we conclude that $\theta(\xi,z)$ satisfies the Sommerfeld radiation condition and
\begin{eqnarray*}\label{d23}
\left\{\begin{aligned}
&\Delta \theta(\xi,z)+\kappa^2(\xi)\theta(\xi,z)=0\qquad &&{\rm in}\;\; {\mathbb R}^2\setminus\overline{D_R}, \\
&\Delta \theta(\xi,z)+\kappa_1^2\theta(\xi,z)=(\kappa_1^2-\kappa_2^2)\left[\theta(\xi,z)+{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right], &&{\rm in}\;\; D_R,
\end{aligned}
\right.
\end{eqnarray*}
which is equivalent to
\begin{eqnarray}\label{d24}
\left\{\begin{aligned}
&\Delta \theta(\xi,z)+\kappa^2(\xi)\theta(\xi,z)=(\kappa_1^2-\kappa_2^2)\chi_{D_R}(\xi)\left[{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right] \qquad{\rm in}\;\; {\mathbb R}^2, \\
&\; \displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} \theta(\xi,z)-{\rm i}\kappa(\xi) \theta(\xi,z)\right)=0,
\end{aligned}
\right.
\end{eqnarray}
Let $\psi_P(\xi,z)$ and $\varphi_P(\xi,z)$ solve the same scattering problem (\ref{d24}) except that the right-hand side is replaced by $(\kappa_1^2-\kappa_2^2)\chi_{D_R}(\xi){\rm Im}G_P(\xi,z)$ and $(\kappa_1^2-\kappa_2^2)\chi_{D_R}(\xi)\zeta_P(\xi,z)$, respectively; note that this $\psi_P$ differs from the solution of (\ref{d13}) only by a sign, which does not affect $|\psi_P^{\infty}|^2$. Then by linearity we have
\begin{eqnarray*}\label{d25}
\theta(\xi,z)=\psi_P(\xi,z)+\varphi_P(\xi,z)
\end{eqnarray*}
which yields
\begin{eqnarray*}\label{d26}
\phi_P(\xi,z)=\overline{\psi_P(\xi,z)}+\overline{\varphi_P(\xi,z)}+{\rm Im}G_P(\xi,z)+\overline{\zeta_P(\xi,z)}.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\widetilde{{\rm Ind}}_P(z)&=&(\kappa_1^2-\kappa_2^2){\rm Im}\int_{D_R}\left[{\rm Im}G_P(\xi,z)+\zeta_P(\xi,z)\right]\\
&&\times\left[\overline{\psi_P(\xi,z)}+\overline{\varphi_P(\xi,z)}+{\rm Im}G_P(\xi,z)+\overline{\zeta_P(\xi,z)}\right]{\rm d}\xi\\
&=&{\rm Im}\int_{D_R}\left[\Delta\psi_P(\xi,z)+\kappa_2^2\psi_P(\xi,z)\right]\overline{\psi_P(\xi,z)}{\rm d}\xi+\eta_P(z)\\
&=&{\rm Im}\int_{\partial D_R}\frac{\partial \psi_P(\xi,z)}{\partial\nu(\xi)}\overline{\psi_P(\xi,z)}{\rm d}s(\xi)+\eta_P(z)\\
&=&\kappa_1\int_{{\mathbb S}^+}|\psi_P^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\kappa_2\int_{{\mathbb S}^-}|\psi_P^{\infty}(\hat{\xi},z)|^2{\rm d}s(\hat{\xi})+\eta_P(z)
\end{eqnarray*}
where Green's theorem and the Sommerfeld radiation condition are used in the last step, and $\eta_P(z)$ is defined by
\begin{eqnarray}\label{dd}
\eta_P(z)=(\kappa_1^2-\kappa_2^2){\rm Im}\int_{D_R}\left[{\rm Im}G_P(\xi,z)\overline{\varphi_P(\xi,z)}+\zeta_P(\xi,z)\overline{\psi_P(\xi,z)}+\zeta_P(\xi,z)\overline{\varphi_P(\xi,z)}\right]{\rm d}\xi.
\end{eqnarray}
Now we are in a position to show that $\|\eta_P(z)\|_{L^{\infty}(S)}\leq Cr^{-1}$ with $C$ depending on $R$. Recall that $\psi_P(\xi,z)$ and $\varphi_P(\xi,z)$ solve Problem (\ref{d24}) except that the right-hand side is replaced by $(\kappa_1^2-\kappa_2^2)\chi_{D_R}(\xi){\rm Im}G_P(\xi,z)$ and $(\kappa_1^2-\kappa_2^2)\chi_{D_R}(\xi)\zeta_P(\xi,z)$, respectively; thus we can write $\psi_P(\xi,z)$ and $\varphi_P(\xi,z)$ in the following form
\begin{eqnarray}\label{d27}
&&\psi_P(\xi,z)=(\kappa_2^2-\kappa_1^2)\int_{D_R}G_{\Gamma}(\xi,y){\rm Im}G_P(y,z){\rm d}y\qquad\;{\rm for}\;\xi\in{\mathbb R}^2,\\\label{d28}
&&\varphi_P(\xi,z)=(\kappa_2^2-\kappa_1^2)\int_{D_R}G_{\Gamma}(\xi,y)\zeta_P(y,z){\rm d}y\qquad\qquad{\rm for}\;\xi\in{\mathbb R}^2.
\end{eqnarray}
Here $G_{\Gamma}(\xi,y)$ is the Green's function associated with the two-dimensional Helmholtz equation in a two-layered medium
separated by $\Gamma$, which satisfies
\begin{eqnarray}\label{d29}
\left\{\begin{array}{lll}
\Delta_{\xi}G_{\Gamma}(\xi,y)+\kappa^2(\xi)G_{\Gamma}(\xi,y)=-\delta_y(\xi) & \textrm{in}\;\;{\mathbb R}^2, \\[2mm]
\displaystyle\lim_{|\xi|\rightarrow \infty}|\xi|^{\frac{1}{2}}\left(\partial_{|\xi|} G_{\Gamma}(\xi,y)
-{\rm i}\kappa(\xi)G_{\Gamma}(\xi,y)\right)=0
\end{array}
\right.
\end{eqnarray}
in the distributional sense, with the Sommerfeld radiation condition holding uniformly for all directions
$\hat{\xi}\in\mathbb{S}$. We refer to Theorem 3.2 and Theorem 3.3 in \cite{YLZ22} for the well-posedness of
Problem (\ref{d29}). It follows from (\ref{d27}) and (\ref{d28}) that
\begin{eqnarray}\label{d30}
\|\psi_P(\cdot,z)\|_{H^2(D_R)}\lesssim \|{\rm Im}G_P(\cdot,z)\|_{L^2(D_R)}\leq C\\\label{d31}
\|\varphi_P(\cdot,z)\|_{H^2(D_R)}\lesssim \|\zeta_P(\cdot,z)\|_{L^2(D_R)}\leq Cr^{-1}
\end{eqnarray}
where we used (\ref{d11}), and $C$ depends on $R$. By a direct application of the smoothness of ${\rm Im}G_P(\cdot,z)$, (\ref{d11}), (\ref{dd}), (\ref{d30}), and (\ref{d31}), we obtain
\begin{eqnarray*}
\|\eta_P(z)\|_{L^{\infty}(S)}\leq Cr^{-1}
\end{eqnarray*}
with $C$ depending on $R$. This completes the proof.
\end{proof}
\section{Numerical experiments}
\setcounter{equation}{0}
In this section, we first give an analysis of the indicator function $\widetilde{\rm Ind}_{\alpha}(z)$ with $\alpha=D, N, P$ and then present several numerical experiments to demonstrate the effectiveness of the RTM method.
According to Theorem \ref{thm1}, Theorem \ref{thm2}, and Theorem \ref{thm3}, it is easy to see that the behavior of the indicator function $\widetilde{\rm Ind}_{\alpha}(z)$ depends on $\psi_{\alpha}(\xi,z)$ when the source and measurement radius
$r$ is large enough, where $\alpha=D, N, P$. Notice that the function $\psi_{\alpha}(\xi,z)$ satisfies Problem (\ref{b29}), Problem (\ref{c13}), and Problem (\ref{d13}) with boundary data $-{\rm Im}G_D(\xi, z)$, $-\partial_{\nu}{\rm Im}G_N(\xi,z)$, and $(\kappa_2^2-\kappa_1^2)\chi_{D_R}(\xi){\rm Im} G_P(\xi, z)$, respectively. Observe that
\begin{eqnarray*}
&&{\rm Im}G_D(\xi,z)=\frac{1}{4}J_0(\kappa_1 |\xi-z|)+{\rm Im}G_D^s(\xi,z)\\
&&\partial_{x_j}{\rm Im}G_N(\xi,z)=\frac{\kappa_1}{4}J'_0(\kappa_1 |\xi-z|)\frac{\xi_j-z_j}{|\xi-z|}+\partial_{x_j}{\rm Im}G^s_N(\xi,z)\quad\;{\rm for}\;\;j=1,2\\
&&{\rm Im}G_P(\xi,z)=\frac{1}{4}J_0(\kappa_1 |\xi-z|)+{\rm Im}G_P^s(\xi,z)\qquad\qquad\qquad\quad\;{\rm for}\;\;\xi, z\in\Omega_{1,R}
\end{eqnarray*}
where $G_{\alpha}^s(\xi,z)$ $(\alpha=D, N, P)$ denotes the corresponding scattered field associated with $\Gamma_R$. It has been shown numerically that $G_{\alpha}^s(\xi,z)$ is sufficiently small for $\xi,z\in S$ when $R$ is large enough; see \cite{DLLY17, LYZ21} for details. As shown in (a) and (d) of Figure \ref{f7}, $J_0(\kappa_1 |\xi-z|)$ achieves a maximum at $\xi=z$, while $J'_0(\kappa_1 |\xi-z|)$ vanishes at $\xi=z$. Hence, the functions ${\rm Im}G_D(\xi,z)$ and ${\rm Im}G_P(\xi,z)$
achieve a maximum at $\xi=z$ and the function $\partial_{x_j}{\rm Im}G_N(\xi,z)$ achieves a minimum at $\xi=z$ for sufficiently large $R$. This property can be easily observed in Figure \ref{f7}. Based on this observation, we expect that $\widetilde{\rm Ind}_D(z)$ and $\widetilde{\rm Ind}_P(z)$ will reach a peak on $\Gamma$, and that $\widetilde{\rm Ind}_N(z)$ will reach a minimum on $\Gamma$.
\begin{figure}[htp]
\begin{center}
\subfigure[$J_0$]{\includegraphics[width=0.31\textwidth]{pictures/Test/J0.jpg}}
\subfigure[${\rm Im}G_D(x,z)$, $\kappa_1=10$]{\includegraphics[width=0.31\textwidth]{pictures/Test/Dirichlet.jpg}}
\subfigure[${\rm Im}G_P(x,z)$, $\kappa=(10,5)$]{\includegraphics[width=0.31\textwidth]{pictures/Test/Interface.jpg}}
\subfigure[$J'_0$]{\includegraphics[width=0.31\textwidth]{pictures/Test/The_derivative_of_J0.jpg}}
\subfigure[$\partial_{x_1}{\rm Im}G_N(x,z)$, $\kappa_1=10$]{\includegraphics[width=0.31\textwidth]{pictures/Test/Neumann_x1.jpg}}
\subfigure[$\partial_{x_2}{\rm Im}G_N(x,z)$, $\kappa_1=10$]{\includegraphics[width=0.31\textwidth]{pictures/Test/Neumann_x2.jpg}}
\caption{The image of functions $J_0$, $J'_0$, ${\rm Im}G_D(x,z)$, ${\rm Im}G_P(x,z)$, $\partial_{x_1}{\rm Im}G_N(x,z)$ and $\partial_{x_2}{\rm Im}G_N(x,z)$ with $R=95$, $x\in [-5, 5]\times[-1,1.5]$ and the source $z=(0,0)$.}\label{f7}
\end{center}
\end{figure}
In all examples, we assume that the locally rough surface function $f$ is supported in $[-5, 5]$, the sample domain $S=[-5, 5]\times [-1, 1.5]$, and $N_s=N_r=1024$ for impenetrable locally rough surfaces and $N_s=N_r=2048$ for penetrable locally rough surfaces, and we set $R=95$ for the special locally rough surface $\Gamma_R$. The synthetic data is generated by applying the Nystr\"{o}m method to solve the corresponding direct scattering problem, see \cite{LYZ13, LB13} for details.
To test the stability of the RTM method, we consider its performance under noisy data. For a relative error level $\tau>0$, we inject noise into the data by defining
\begin{eqnarray*}
u_{\tau}^s(x)=u^s(x)+\tau\frac{\lambda}{\|\lambda\|_2}\|u^s(x)\|_2
\end{eqnarray*}
where $\lambda=\lambda_1+{\rm i}\lambda_2$ is complex-valued, with $\lambda_1$ and $\lambda_2$ consisting of random numbers drawn from the standard normal distribution $N(0,1)$.
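A minimal sketch of this noise model (our own illustration; the routine name is ours) reads:
\begin{verbatim}
import numpy as np

def add_noise(u_s, tau, rng=None):
    # Inject relative noise of level tau into the scattered data u_s.
    rng = rng or np.random.default_rng()
    lam = rng.standard_normal(u_s.shape) + 1j * rng.standard_normal(u_s.shape)
    return u_s + tau * lam / np.linalg.norm(lam) * np.linalg.norm(u_s)
\end{verbatim}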
{\bf Example 1.} In this example, we examine the RTM method at different wave numbers. The locally rough surface $\Gamma$ is described as
\begin{equation}\nonumber
f_1(x_1)=\left\{\begin{array}{ll}
0.5+0.6\sin(0.6\pi x_1)\exp(16/(x_1^2-16)), & |x_1|<4, \\[1mm]
0.5, & |x_1|\geq 4.
\end{array}\right.
\end{equation}
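For reference, the profile $f_1$ can be evaluated by the following vectorized sketch of the formula above (the function name is ours):
\begin{verbatim}
import numpy as np

def f1(x1):
    # Surface profile of Example 1; flat (0.5) for |x1| >= 4.
    x1 = np.asarray(x1, dtype=float)
    out = np.full_like(x1, 0.5)
    loc = np.abs(x1) < 4
    xl = x1[loc]
    out[loc] += 0.6 * np.sin(0.6 * np.pi * xl) * np.exp(16.0 / (xl**2 - 16.0))
    return out
\end{verbatim}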
The reconstruction results from exact data are presented in Figure \ref{f4}, where the top row shows the results at $\kappa_1=10$ for the impenetrable case and $\kappa_1=10,\kappa_2=5$ for the penetrable case, and the bottom row shows the reconstruction at $\kappa_1=20$ for the impenetrable case and $\kappa_1=20,\kappa_2=10$ for the penetrable case. It can be seen that the macroscale features of the rough surface are captured at the smaller wave number, while the finer details are captured at the larger wave number.
Inspired by Example 1, to improve the imaging quality, we develop the RTM method with multifrequency data in the remaining examples. For $\alpha= D, N, P$, let ${\rm Ind}_{\alpha}(z,\kappa)$ denote the indicator function at the wave number $\kappa$. We define the multifrequency indicator function as the average of ${\rm Ind}_{\alpha}(z,\kappa_j)$ for $j=1,2,...,N_{\kappa}$, which reads
\begin{eqnarray*}
{\rm Mind}_{\alpha}(z)=\frac{1}{N_{\kappa}}\sum_{j=1}^{N_{\kappa}}{\rm Ind}_{\alpha}(z,\kappa_j).
\end{eqnarray*}
In the remaining examples, we use the RTM method with multifrequency data to recover the locally rough surface. Here, the wave numbers are set to $\kappa_1=10,15,20$ for impenetrable rough surfaces and $\kappa_1=10,15,20$, $\kappa_2=5,7.5,10$ for penetrable rough surfaces.
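The implementation of ${\rm Mind}_{\alpha}$ is a direct average of the single-frequency indicators; a sketch (assuming a routine that evaluates ${\rm Ind}_{\alpha}(z,\kappa)$, e.g. along the lines of the sketch given earlier) is:
\begin{verbatim}
def multifreq_indicator(z_grid, wavenumbers, single_freq_ind):
    # Average Ind_alpha(z, kappa_j) over the given wave numbers kappa_j.
    return sum(single_freq_ind(z_grid, k) for k in wavenumbers) / len(wavenumbers)
\end{verbatim}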
{\bf Example 2.} In this example, we test the stability of the RTM method. The locally rough surface $\Gamma$ is a multiscale profile given by
\begin{equation}\nonumber
f_2(x_1)=\left\{\begin{array}{ll}
0.5+(0.5+0.05\sin(3\pi x_1))\exp(4/(x_1^2-16)), & |x_1|<4, \\[1mm]
0.5, & |x_1|\geq 4.
\end{array}\right.
\end{equation}
The reconstructions with no noise and $20\%$ noise are illustrated in Figure \ref{f5}, which shows that the RTM method can provide a satisfactory imaging quality, even for $20\%$ noise.
{\bf Example 3.} In the last example, we present the reconstruction results with different measurement radii. The locally rough surface $\Gamma$ is described by a piecewise continuous function given by
\begin{equation}\nonumber
f_3(x_1)=\left\{\begin{array}{ll}
0.2, & |x_1|\leq 1, \\[1mm]
0.3, & 3\leq|x_1|\leq 4,\\[1mm]
0.5, & {\rm otherwise}.
\end{array}\right.
\end{equation}
The measurement radius is set to $r=20$ and $r=100$, and the noise level is $20\%$. The numerical results are shown in Figure \ref{f6}, which demonstrates that the RTM method can provide satisfactory reconstructions at both measurement radii.
\begin{figure}[htp]
\begin{center}
\subfigure[$\kappa_1=10$, Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=10-Dirichlet-exact.jpg}}
\subfigure[$\kappa_1=10$,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=10-Neumann-exact.jpg}}
\subfigure[$\kappa=(10,5)$,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=10-k_p=5-Penetrable-exact.jpg}}
\subfigure[$\kappa_1=20$,Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=20-Dirichlet-exact.jpg}}
\subfigure[$\kappa_1=20$,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=20-Neumann-exact.jpg}}
\subfigure[$\kappa=(20,10)$,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example1/Example1-k=20-k_p=10-Penetrable-exact.jpg}}
\caption{Reconstructions of the locally rough surface given in Example 1 from data with no noise at different wave numbers.}\label{f4}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\subfigure[$0\%$ noise, Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Dirichlet-exact.jpg}}
\subfigure[$0\%$ noise,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Neumann-exact.jpg}}
\subfigure[$0\%$ noise,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Penetrable-exact.jpg}}
\subfigure[$20\%$ noise,Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Dirichlet-20percent.jpg}}
\subfigure[$20\%$ noise,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Neumann-20percent.jpg}}
\subfigure[$20\%$ noise,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example2/Example2-Multifrequency-Penetrable-20percent.jpg}}
\caption{Reconstructions of the locally rough surface given in Example 2 from data with no noise and 20\% noise.}\label{f5}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\subfigure[$r=20$, Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Dirichlet-20percent-R=20.jpg}}
\subfigure[$r=20$,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Neumann-20percent-R=20.jpg}}
\subfigure[$r=20$,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Penetrable-20percent-R=20.jpg}}
\subfigure[$r=100$,Dirichlet]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Dirichlet-20percent-R=100.jpg}}
\subfigure[$r=100$,Neumann]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Neumann-20percent-R=100.jpg}}
\subfigure[$r=100$,Penetrable]{\includegraphics[width=0.31\textwidth]{pictures/Example3/Example3-Multifrequency-Penetrable-20percent-R=100.jpg}}
\caption{Reconstructions of the locally rough surface given in Example 3 from data with 20\% noise at different measurement radii.}\label{f6}
\end{center}
\end{figure}
From the above numerical experiments, it can be observed that the RTM method proposed in Theorem \ref{thm1}, Theorem \ref{thm2}, and Theorem \ref{thm3} provides accurate and stable reconstructions for a variety of locally rough surfaces with the Dirichlet, the Neumann, and the transmission boundary conditions. In addition, it is easily seen that the RTM method with multifrequency data can give high quality reconstructions for complicated locally rough surfaces, such as the multiscale and piecewise continuous cases, even for $20\%$ noisy data.
\section{Conclusion}
This paper proposed an extended RTM method to recover the shape and location of a locally rough surface with Dirichlet, Neumann, or transmission boundary conditions. The idea is mainly based on constructing a modified Helmholtz-Kirchhoff identity associated with a special locally rough surface. Numerical experiments demonstrated that our algorithm can provide
a stable and satisfactory reconstruction for a variety of locally rough surfaces and is robust to noise. As far as we know, this is the first RTM approach to recover an unbounded rough surface. However, it is more challenging to extend the RTM method to reconstruct a diffraction grating or a non-local, non-periodic rough surface. We hope to report progress on this topic in the future.
{\bf Acknowledgments.} This work was supported by the NNSF of China grants No. 12171057, 12122114, and Education
Department of Hunan Province No. 21B0299.
\section{Introduction}
Alpha estimation is a regression problem that calculates the opacity value of each blended pixel belonging to the foreground object. It serves as a prerequisite for a broad range of applications such as movie post-production, digital image editing and compositing live action.
Formally, the composition image $I_i$ is represented as a linear combination of the background $B_i$ and foreground $F_i$ colors \cite{chuang2001bayesian}:
\begin{equation} \label{eq:1}
I_i = \alpha_iF_i+(1-\alpha_i)B_i
\end{equation}
where $\alpha_i \in [0,1]$ denotes the opacity or alpha matte of the foreground at pixel $i$.
Often, a user input is provided as guidance in the form of a trimap, which assigns a label to every pixel as foreground ($\alpha=1$), background ($\alpha=0$) or unknown opacity. The goal of matting algorithms is to estimate the unknown opacities by utilising the pixel color information of the known regions. Tackling the inverse problem of Eq. \ref{eq:1} is considerably difficult as there are 7 unknowns and only 3 equations for an RGB image. The main motivation of this paper is to increase the matting accuracy by reducing the number of unknowns in Eq. \ref{eq:1}. To do so, we presume that the background information $B$ is known, either by capturing a clean background or through reconstruction methods that can estimate the occluded background regions.
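As a concrete illustration of Eq. \ref{eq:1}, compositing an image from a matte, a foreground and a background is a one-line per-pixel operation; the sketch below is our own illustration:
\begin{verbatim}
import numpy as np

def composite(alpha, fg, bg):
    # Eq. (1): I = alpha * F + (1 - alpha) * B, applied per pixel.
    # alpha: (H, W) matte in [0, 1]; fg, bg: (H, W, 3) color images.
    a = alpha[..., None]              # broadcast over the color channels
    return a * fg + (1.0 - a) * bg
\end{verbatim}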
In traditional methods, the matte is estimated by inferring the alpha information in the unknown areas from that in the known areas \cite{wang2008image}. For example, the matte values can be propagated from known to unknown areas based on the spatial and appearance affinity between them \cite{Aksoy2017matting,Chen2013matting,Chen2013mattlocal,He2010laplac,Lee2011nonlocal,Levin2008closedf,Levin2008spectral,Jian2004poisson}. An alternative solution is to compute the unknown mattes by sub-sampling the color and texture distribution of the foreground and background planes, followed by an optimization such as maximizing the likelihood of alpha values \cite{chuang2001bayesian,He2013iter,He2011global,Wang2005iter,Wang2007robust}. Despite the promising performance of these methods on public benchmarks, there remain unresolved issues in natural image matting and in the consistency between consecutive video frames. One important reason for this is that the performance of these methods relies heavily on the accuracy of the given trimap. Generating the trimap for a sequence of images from a video is indeed a challenging task, as it requires tracking the object of interest and defining appropriate and relevant unknown areas to be solved.
To address these challenges, this paper presents a Background-Aware Generative Adversarial Network (AlphaGan-BG) which utilises the information present in the background plane to accurately estimate the alpha matte, compensating for the issues caused by an inaccurate trimap. Unlike the state of the art methods, which only use the RGB image and trimap as input, AlphaGan-BG analyses the color and texture provided as background information to achieve better accuracy.
To the best of our knowledge, this paper contributes the first deep learning approach which takes advantage of the background information to estimate alpha mattes. Both our qualitative and quantitative experiments demonstrate that AlphaGan-BG significantly outperforms the state of the art matting methods.
\section{Previous Works}
Alpha matting is a well established and studied field of research in computer vision with a rich literature. A significant amount of work has been done over the past decade to address the issues in natural image matting. More recently, deep learning approaches have shown an impressive performance on various computer vision tasks including image matting too.
This section briefly reviews the state of the art alpha matting methods within two categories: conventional methods and deep learning based methods.
\subsection{Conventional Matting Methods}
The conventional alpha matting approaches could be categorised into sampling based and affinity based methods. Sampling based methods \cite{chuang2001bayesian,Gastal2010SharedSF,He2011global,Xiaoxue2016cluster} initially collect a set of known foreground and background color samples to identify the best foreground-background color pair for a pixel.
The general rule is to use Eq. \ref{eq:1} to calculate the alpha value once the corresponding background and foreground colors are determined. The issue with sampling based methods is that they do not make use of the texture information present in the image and do not enforce spatial smoothness, thus requiring an additional spatial smoothness step. More importantly, there is always ambiguity about how the samples are chosen and where they are chosen from, causing matte discontinuities.
For instance, Shared Matting \cite{Gastal2010SharedSF} selects the samples from the trimap boundaries between the known and unknown pixels. Global Matting \cite{He2011global} makes use of all the pixels within the trimap boundary, therefore increasing the running time. Sparse sampling \cite{Karacan2015divergance} applies the sampling at the super-pixel level by assessing similarity using a KL-divergence based distance measure.
Affinity-based methods work by analysing the affinities of neighboring pixels to propagate alpha information from known to unknown regions. Levin et al. \cite{Levin2008closedf} proposed a closed-form matting solution where the local color information is used to compute the affinity between two pixels. In \cite{Levin2008closedf} the alpha matte is calculated by solving a sparse linear system. The advantage of the closed-form solution is the prediction of the properties of the solution by analysing the eigenvectors of a sparse matrix. Chen et al. \cite{Xiaowu2012manifold} proposed a locally linear embedding system which represents every unknown pixel as a linear combination of its neighbors. KNN matting \cite{Chen2013matting} utilised the nonlocal principle to find the affinities. The basis of this principle is that a pixel is a weighted sum of the pixels with appearance similar to it, given the corresponding weights \cite{Chen2013matting}. This method enforces the pixels and their corresponding nonlocal neighbors to have close alpha values. Aksoy et al. \cite{Aksoy2017matting} constructed their method based on color-mixture flow using pixel-to-pixel connections between the image and its corresponding trimap. The flow is based on local linear embedding, with gradual improvement in matting quality as more building blocks are added to the information flow. It was shown in \cite{Aksoy2017matting} that combining local and non-local affinities can result in a higher quality alpha matte. Several other state of the art approaches such as Random Walks \cite{grady2005random}, FuzzyMatte \cite{Zheng2008fuzzy}, Spectral Matting \cite{Levin2008spectral} and Geodesic Matting \cite{Bai2007geo} can also be categorised as affinity based methods.
\subsection{Deep Learning Based Matting Methods}
The emerging field of deep learning, along with a new generation of hardware, has enabled many researchers to tackle the issues of natural image matting with promising performance. Cho et al. \cite{Donghyeon2016DCNN} proposed an end-to-end Deep Convolutional Neural Network (CNN) which utilises the results of closed-form matting and KNN matting for alpha estimation. Xu et al. \cite{Xu2017deepmatting} proposed a two-part structure to predict alpha. The first part is an encoder-decoder module trained to predict the alpha from the input image and trimap; the second part is a small CNN trained to perform a post-processing step that increases the quality of the estimated alpha. Lu et al. \cite{Lu2019ICCV} proposed IndexNet Matting by introducing indexed pooling and upsampling operators. They modeled the indices as a function of the feature map to perform the upsampling. There are many other methods proposed to tackle the issues of natural image matting with deep learning, such as VDRN Matting \cite{Tang2019VDRN}, SampleNet Matting \cite{Tang2019CVPR}, AdaMatting \cite{Cai2019ICCV}, Late Fusion Matting \cite{Zhang2019CVPRlatefusion} and Inductive Guided Filter Matting \cite{Li2019inductive}; however, the analysis of these methods goes beyond the scope of our work.
\section{AlphaGan-BG Network}
\label{AlphaGanbgnet}
The framework in this research is built on the first GAN proposed to estimate alpha mattes. AlphaGAN \cite{Lutz2018alphagan} was introduced in 2018, motivated by the encoder-decoder structure proposed in \cite{Xu2017deepmatting}. The original architecture of AlphaGAN consists of a generator $G$ and a discriminator $D$.
In the original form of AlphaGAN, $G$ accepts the input in the form of a 4 channel volume made of a composited image (3 channels) and the corresponding trimap (1 channel). $D$ is responsible for distinguishing real from fake input volumes. The first 3 channels of the input volume to $D$ contain the RGB values of the new image composited using the predicted alpha, and the last channel is the original trimap, which helps $D$ focus on salient regions.
AlphaGAN followed the same path as the rest of the state of the art methods with the assumption that the only data available are an RGB image and the corresponding trimap. In this paper, however, the background information is also considered as a known variable and an input to the network.
\subsection{Generator \texorpdfstring{$G$}{}}
In this research, $G$ is an encoder-decoder network that accepts the input in the form of a 7 channel volume, where the first 3 channels contain the RGB image, the second 3 channels contain the RGB background information and the last channel contains the trimap.
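A minimal sketch of assembling this input volume (assuming PyTorch tensors scaled to $[0,1]$; the function name is ours) is:
\begin{verbatim}
import torch

def make_generator_input(image, background, trimap):
    # image, background: (B, 3, H, W); trimap: (B, 1, H, W).
    # Returns the (B, 7, H, W) volume fed to the generator G.
    return torch.cat([image, background, trimap], dim=1)
\end{verbatim}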
The encoder is based on ResNet50 \cite{He2016resnet} architecture pretrained on ImageNet \cite{Russakovsky2015imagenet} where the convolutions in the 3\textsuperscript{rd} and 4\textsuperscript{th} block of the ResNet are replaced by dilated convolutions with rate 2 and 4, respectively. To resample features at several scales, Atrous Spatial Pyramid Pooling (ASPP) module \cite{Chen2017ASPP,Chen2018ASPP} is added after ResNet block 4.
Similarly to AlphaGAN, the decoder is simply a set of convolutional layers and skip connections from the encoder. The output of the encoder is bilinearly upsampled by a factor of 2 so that the feature maps have the same spatial resolution as the output of ResNet block 1. To reduce the dimensions, the output of ResNet block 1 is fed into a $1\times1$ convolutional layer and concatenated with the upsampled feature maps from the encoder. This is followed by $3\times3$ convolutions and upsampling using the pooling indices saved in the first layer of the encoder. The result is once again concatenated with the feature maps from the encoder at the same resolution. Before feeding the output to the final set of convolution layers, transposed convolutions are applied to upsample it, followed by a concatenation with the RGB input image.
ReLU \cite{Vinod2010relu} activation functions and Batch Normalization \cite{Sergey2015batchnorm} layers are used for all the layers except the last one which utilises a sigmoid activation to scale the output between 0 and 1. Fig. \ref{netstruct} illustrates the encoder-decoder structure of $G$.
\begin{center}
\includegraphics[width=\columnwidth]{images/net_struct.pdf}
\captionof{figure}{AlphaGan-BG: Structure of Generator ($G$).}\label{netstruct}
\end{center}
\subsection{Discriminator \texorpdfstring{$D$}{}}
This architecture employs PatchGAN \cite{Isola2017patchgan} as the discriminator. $D$ attempts to distinguish fake from real input, which is a 7 channel volume. The real input is constructed from the original composition using the ground truth alpha, the background and the trimap. The fake input contains the new composition using the alpha generated by $G$, the background and the trimap. By providing the background information, $D$ enforces $G$ to output sharper and more accurate results, as the issue of differentiating foreground and background is resolved by default.
\subsection{Loss Functions}
The full objective of the network is a combination of three loss functions: alpha-prediction loss $\mathcal{L}_{alpha}$, compositional loss $\mathcal{L}_{comp}$ and adversarial loss $\mathcal{L}_{GAN}$ \cite{Goodfellow2014loss}:
\begin{equation} \label{eq:2}
\mathcal{L}_{total} = \mathcal{L}_{alpha}+\mathcal{L}_{comp}+\mathcal{L}_{GAN}
\end{equation}
$\mathcal{L}_{alpha}$ is the absolute difference of the ground truth and predicted alpha values over all pixels. $\mathcal{L}_{comp}$ is the absolute difference between the image composited using the ground truth alpha and the image composited using the predicted alpha. The composition in both cases is based on the ground truth foreground and background images \cite{Xu2017deepmatting}. $\mathcal{L}_{GAN}$ is defined based on the fundamentals of adversarial networks: in this research, $G$ aims at generating alpha mattes close to the ground truth while $D$ aims at distinguishing real from fake input, resulting in $G$ minimizing $\mathcal{L}_{GAN}$.
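A sketch of the two data terms, restricted to the unknown trimap region as in \cite{Xu2017deepmatting} (the mask and the small stabilizing constant are implementation details we assume here), is:
\begin{verbatim}
import torch

EPS = 1e-6

def alpha_loss(alpha_pred, alpha_gt, mask):
    # L_alpha: mean absolute matte difference over the unknown region.
    return (mask * (alpha_pred - alpha_gt).abs()).sum() / (mask.sum() + EPS)

def composition_loss(alpha_pred, alpha_gt, fg, bg, mask):
    # L_comp: absolute difference of the images composited with the
    # predicted and the ground truth alpha (same fg and bg in both).
    comp_pred = alpha_pred * fg + (1 - alpha_pred) * bg
    comp_gt = alpha_gt * fg + (1 - alpha_gt) * bg
    return (mask * (comp_pred - comp_gt).abs()).sum() / (mask.sum() + EPS)
\end{verbatim}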
\section{Experiments and Discussion}
\subsection{Dataset}
The network in this paper is trained on the Adobe Matting dataset \cite{Xu2017deepmatting}, which consists of 431 foreground images for training and 50 images for testing with corresponding ground truth. To augment the data, Pascal VOC 2008 \cite{everingham2010pascal} and MSCOCO \cite{lin2014microsoft} images are used as backgrounds for image composition, resulting in a training set containing 43100 images.
\subsection{Data Preparation}
\label{dataprep}
As described in Section \ref{AlphaGanbgnet}, this network takes advantage of the background information to predict the alpha matte. This requires the background information to be available during the test phase as well as during training. However, acquiring the background image during the test phase is a challenging task. To address this, several inpainting and background reconstruction methods \cite{Kim2019deepinpainting,Xu2019flowinpainting,Huang2016tempinpainting,LAUGRAUD201712,Laugraud2015medianbasedinpainting,Herling2014pixmix} were studied to analyse their accuracy and performance on static images and videos. The findings indicate that currently there is no background reconstruction method that can generate a clear background without artifacts. The ultimate goal is to obtain a reconstructed image which is equivalent to the original input used for the composition. The common artifacts present in the output of the reconstruction methods are blur (degraded quality) and shift (translation), meaning that the region containing the object of interest is slightly translated in the reconstructed image.
To simulate these artifacts, two sets of backgrounds are generated. In the first set, a random selection of images is manipulated by applying a hexagonal Gaussian blur with a random filter size. The location of the hexagonal blur is chosen randomly along the dimensions of the input image. The diameter of the shape is randomly selected between 120 and 345 pixels, with the rotation angle chosen by generating a linearly spaced vector. The blurred region is also translated using a 2D linear translation. In the second set, all the images are initially blurred, followed by applying the hexagonal Gaussian blur at a random location.
Comparatively, the first scenario represents a more realistic case as it contains both clean and partially distorted backgrounds. The second set represents severely distorted cases where all the images are blurred, with an additional distorted patch, introducing a more challenging set for training.
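A simplified sketch of this augmentation (our own illustration; it omits the translation of the blurred region, and the parameter ranges follow the description above) could look as follows:
\begin{verbatim}
import cv2
import numpy as np

def distort_background(bg, rng=None):
    # Blur a randomly placed, randomly rotated hexagonal patch of bg.
    rng = rng or np.random.default_rng()
    h, w = bg.shape[:2]
    cx, cy = rng.integers(0, w), rng.integers(0, h)
    radius = rng.integers(60, 173)        # half of the 120-345 px diameter
    theta = rng.uniform(0, 2 * np.pi)
    ang = theta + np.arange(6) * np.pi / 3
    hexagon = np.stack([cx + radius * np.cos(ang),
                        cy + radius * np.sin(ang)], axis=1).astype(np.int32)
    mask = np.zeros((h, w), np.uint8)
    cv2.fillConvexPoly(mask, hexagon, 1)
    k = 2 * int(rng.integers(5, 25)) + 1  # random odd Gaussian kernel size
    blurred = cv2.GaussianBlur(bg, (k, k), 0)
    out = bg.copy()
    out[mask == 1] = blurred[mask == 1]
    return out
\end{verbatim}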
\subsection{Training}
In this paper, two models are trained for evaluation purposes. The first model utilises the first set of background images described in Section \ref{dataprep} and the second model uses the second set of backgrounds with severe distortion. In order to make the remaining sections easier to follow, we refer to the first model as \textit{AlphaGan-BG\_M} (Mildly distorted) and to the second model as \textit{AlphaGan-BG\_H} (Heavily distorted).
AlphaGan-BG\_M and AlphaGan-BG\_H are trained for 376 and 650 epochs respectively with the initial learning rate set to 0.0002. Adam optimizer \cite{kingma2014adam} with $\beta$ = 0.999 is also employed for optimization purposes.
\subsection{Results and Evaluation}
\subsubsection{Still Image Matting}
The evaluation for still images is performed on a set of 50 images from the Adobe Matting \cite{Xu2017deepmatting} test set. Note that none of the test images are part of the training set. Four metrics from the AlphaMatting benchmark \cite{Rhemann2009matting,alphamattingweb} are used for evaluation purposes, including the Sum of Absolute Differences (SAD), Mean Square Error (MSE), Connectivity (CONN) and Gradient (GRAD) errors. The test images from the AlphaMatting benchmark are not considered as part of this evaluation as there is no background information available for that test set.
The background images used for evaluation are also manipulated using the pipeline described in Section \ref{dataprep} to simulate the reconstruction artifacts. The performance of the trained models is compared against 8 state of the art methods ranked in the AlphaMatting benchmark with publicly available code, including Closed-Form Matting \cite{Levin2008closedf}, DCNN Matting \cite{Donghyeon2016DCNN}, Deep Matting \cite{Xu2017deepmatting}, IndexNet Matting \cite{Lu2019ICCV}, Information-flow Matting \cite{Aksoy2017matting}, KNN Matting \cite{Chen2013matting}, Late Fusion \cite{Zhang2019CVPRlatefusion} and AlphaGAN \cite{Lutz2018alphagan}.
\begin{center}
\begin{tabular}{l c c c c} \toprule
{Methods} & {SAD} & {MSE} & {GRAD} & {CONN} \\ \midrule
Closed-Form Matting \cite{Levin2008closedf} & 78.768 & 0.065 & 57.047 & 56.856 \\
DCNN Matting \cite{Donghyeon2016DCNN} & 85.842 & 0.070 & 57.622 & 65.196 \\
Deep Matting \cite{Xu2017deepmatting} & 33.075 & 0.017 & 23.316 & 34.204 \\
IndexNet Matting \cite{Lu2019ICCV} & 28.984 & 0.013 & 19.127 & 28.872\\
Information-flow Matting \cite{Aksoy2017matting} & 84.766 & 0.067 & 52.789 & 63.827\\
KNN Matting \cite{Chen2013matting} & 95.122 & 0.082 & 66.188 & 74.940\\
Late Fusion \cite{Zhang2019CVPRlatefusion} & 88.109 & 0.097 & 59.382 & 91.743\\
AlphaGAN \cite{Lutz2018alphagan} & 35.057 & 0.019 & 33.598 & 35.963\\ \midrule
AlphaGan-BG\_M & \textbf{11.312} & \textbf{0.002} & \textbf{4.850} & \textbf{8.696} \\
AlphaGan-BG\_H & 14.692 & 0.003 & 8.410 & 12.328\\ \bottomrule
\end{tabular}
\captionof{table}{The quantitative comparison of the AlphaGan-BG models against state of the art. The best average value/metric is emboldened.}\label{tabcomparison}
\end{center}
\begin{center}
\includegraphics[width=\columnwidth]{images/compare1-lowsize.pdf}
\captionof{figure}{Comparison with State of the Art Methods - Example 1.}\label{compare1}
\end{center}
Table \ref{tabcomparison} presents the numerical evaluation of the AlphaGan-BG models against the state of the art methods and clearly shows that AlphaGan-BG outperforms the other methods on the commonly used AlphaMatting benchmark metrics. This experiment also validates the idea of using background information for alpha estimation. As discussed in Section \ref{dataprep}, given the current state of the art in background reconstruction, it is very challenging to obtain a clean reconstructed background. However, this experiment demonstrates that even partial information about the background plane (with distortion) can significantly increase the accuracy of the alpha prediction.
Fig. \ref{compare1} and Fig. \ref{compare2} illustrate the qualitative comparison of the proposed models against the state of the art methods. A part of the predicted alpha mattes by the state of the art is marked in Fig. \ref{compare1} to closely expose the difference in the performance of the methods.
\begin{center}
\includegraphics[width=\columnwidth]{images/compare2-lowsize.pdf}
\captionof{figure}{Comparison with State of the Art Methods - Example 2.}\label{compare2}
\end{center}
The performance of AlphaGan-BG\_M and AlphaGan-BG\_H in Fig. \ref{compare1} is a clear example and proof of the earlier statement that including partial background information of the image in the matting pipeline can significantly increase its accuracy and preserve fine details.
Fig. \ref{compare2} is another example of the visual comparison against the state of the art, where the superior performance of AlphaGan-BG\_M and AlphaGan-BG\_H is clearly visible through the marked areas. For more detailed visual results refer to Appendix 1.
\subsubsection{Video Matting}
To evaluate the performance of AlphaGan-BG\_M and AlphaGan-BG\_H on video sequences, we used four state of the art background reconstruction and inpainting methods, including Deep Video Inpainting (DVI) \cite{Kim2019deepinpainting}, Deep Flow Inpainting (DFGI) \cite{Xu2019flowinpainting}, FVC \cite{Afifi2014fastvideocomp} and Gated Video Inpainting (GVI) \cite{chang2019free}, to separate the foreground and background layers. We also considered backgrounds with simulated artifacts as part of this evaluation. The background layers are further used as input to the proposed matting framework.
Three video sequences, \textit{Alex}, \textit{Castle} and \textit{Dmitriy} from the VideoMatting benchmark \cite{Erofeev2015videomatting}, are used for evaluation purposes. Tables \ref{tabcomparisonvideosad}-\ref{tabcomparisonvideoconn} present the numerical evaluation of the AlphaGan-BG models on these sequences. The aforementioned reconstruction methods are applied to each sequence to extract the background layer as one of the input channels to the AlphaGan-BG models. One important and obvious takeaway from this experiment is that a successful background aware matting method relies significantly on the quality of the reconstructed background. Although the point of this experiment is not to compare the performance of the reconstruction methods, a few state of the art techniques such as FVC \cite{Afifi2014fastvideocomp} generate background layers with fewer artifacts, similar to the simulated ones, resulting in more accurate alpha estimation using AlphaGan-BG\_M. On the other hand, AlphaGan-BG\_H performs better in scenarios where the reconstructed background layers are heavily distorted, such as with DVI \cite{Kim2019deepinpainting} and DFGI \cite{Xu2019flowinpainting}.
A detailed set of visual results for this section is provided in Appendix 2.
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
&
\multicolumn{3}{c|}{W\textbackslash Artifact} &
\multicolumn{3}{c|}{DVI \cite{Kim2019deepinpainting}} &
\multicolumn{3}{c|}{DFGI \cite{Xu2019flowinpainting}} &
\multicolumn{3}{c|}{GVI \cite{chang2019free}} &
\multicolumn{3}{c}{FVC \cite{Afifi2014fastvideocomp}} \\ \hline
& A & C & D & A & C & D & A & C & D & A & C & D & A & C & D \\ \Xhline{2\arrayrulewidth}
AlphaGan-BG\_M &\textbf{1.004} &\textbf{10.145} &\textbf{1.66} &9.95 &95.712 &11.856 &1.787 &32.808 &2.28 &16.814 &89.4 &15.046 &1.165 &11.385 &1.781 \\
AlphaGan-BG\_H &\textbf{1.28} &\textbf{28.062} &\textbf{1.758} &1.658 &53.513 &2.115 &1.292 &37.207 &1.775 &2.409 &56.7 &2.91 &1.3 &28.352 &1.77 \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\end{adjustbox}
\captionof{table}{SAD metric - performance of the AlphaGan-BG models using different background reconstruction methods. \textit{A: Alex, C: Castle} and \textit{D: Dmitriy}. The best average value per model for each sequence across all reconstruction methods is emboldened.}\label{tabcomparisonvideosad}
\end{center}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
&
\multicolumn{3}{c|}{W\textbackslash Artifact} &
\multicolumn{3}{c|}{DVI \cite{Kim2019deepinpainting}} &
\multicolumn{3}{c|}{DFGI \cite{Xu2019flowinpainting}} &
\multicolumn{3}{c|}{GVI \cite{chang2019free}} &
\multicolumn{3}{c}{FVC \cite{Afifi2014fastvideocomp}} \\ \hline
& A & C & D & A & C & D & A & C & D & A & C & D & A & C & D \\ \Xhline{2\arrayrulewidth}
AlphaGan-BG\_M &\textbf{0.0001} &\textbf{0.001} &\textbf{0.0006} &0.014 &0.059 &0.014 &0.0008 &0.014 &0.001 &0.027 &0.053 &0.019 &0.0002 &\textbf{0.001} &0.0007 \\
AlphaGan-BG\_H &\textbf{0.0002} &\textbf{0.010} &\textbf{0.0008} &0.0006 &0.028 &0.001 &0.0003 &0.017 &\textbf{0.0008} &0.001 &0.031 &0.002 &0.0003 &0.011 &\textbf{0.0008} \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\end{adjustbox}
\captionof{table}{MSE metric - performance of the AlphaGan-BG models using different background reconstruction methods. \textit{A: Alex, C: Castle} and \textit{D: Dmitriy}. The best average value per model for each sequence across all reconstruction methods is emboldened.}\label{tabcomparisonvideomse}
\end{center}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
&
\multicolumn{3}{c|}{W\textbackslash Artifact} &
\multicolumn{3}{c|}{DVI \cite{Kim2019deepinpainting}} &
\multicolumn{3}{c|}{DFGI \cite{Xu2019flowinpainting}} &
\multicolumn{3}{c|}{GVI \cite{chang2019free}} &
\multicolumn{3}{c}{FVC \cite{Afifi2014fastvideocomp}} \\ \hline
& A & C & D & A & C & D & A & C & D & A & C & D & A & C & D \\ \Xhline{2\arrayrulewidth}
AlphaGan-BG\_M &\textbf{0.365} &\textbf{5.779} &\textbf{2.51} &12.52 &123.945 &21.397 &1.376 &33.184 &3.931 &17.57 &110.25 &25.134 &0.582 &7.826 &2.835 \\
AlphaGan-BG\_H &\textbf{0.744} &\textbf{67.829} &\textbf{3.059} &1.075 &95.806 &4.061 &0.766 &75.408 &3.099 &2.111 &97.737 &6.213 &0.762 &68.364 &3.098 \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\end{adjustbox}
\captionof{table}{GRAD metric - performance of the AlphaGan-BG models using different background reconstruction methods. \textit{A: Alex, C: Castle} and \textit{D: Dmitriy}. The best average value per model for each sequence across all reconstruction methods is emboldened.}\label{tabcomparisonvideograd}
\end{center}
\begin{center}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
&
\multicolumn{3}{c|}{W\textbackslash Artifact} &
\multicolumn{3}{c|}{DVI \cite{Kim2019deepinpainting}} &
\multicolumn{3}{c|}{DFGI \cite{Xu2019flowinpainting}} &
\multicolumn{3}{c|}{GVI \cite{chang2019free}} &
\multicolumn{3}{c}{FVC \cite{Afifi2014fastvideocomp}} \\ \hline
& A & C & D & A & C & D & A & C & D & A & C & D & A & C & D \\ \Xhline{2\arrayrulewidth}
AlphaGan-BG\_M &\textbf{0.457} &\textbf{7.77} &\textbf{1.562} &9.78 &100.104 &11.932 &1.21 &33.173 &2.131 &17.066 &93.838 &15.254 &0.624 &9.208 &1.68 \\
AlphaGan-BG\_H &\textbf{0.801} &\textbf{27.864} &\textbf{1.707} &1.252 &55.5 &2.085 &0.818 &37.95 &1.725 &2.162 &58.965 &2.948 &0.825 &28.191 &1.719 \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\end{adjustbox}
\captionof{table}{CONN metric - performance of the AlphaGan-BG models using different background reconstruction methods. \textit{A: Alex, C: Castle} and \textit{D: Dmitriy}. The best average value per model for each sequence across all reconstruction methods is emboldened.}\label{tabcomparisonvideoconn}
\end{center}
\section{Conclusion}
In this paper we proposed an approach inspired by a state of the art GAN model to validate the idea of using background information as part of the alpha matting process. The proposed approach utilises an encoder-decoder structure as the generator and PatchGAN as the discriminator. The input to the network consists of 7 channels: the RGB image, the RGB background information and the trimap. The preliminary results of the experiments and evaluations against the benchmarked methods indicate the validity of the core idea of this research. Using full or partial background information, AlphaGan-BG demonstrated superior performance against the studied methods.
In future work, we would like to train and analyse the performance of AlphaGan-BG on synthetic data. The background reconstruction process is another exciting aspect of this research that requires further investigation. The current performance of the models is achieved by simulating the reconstruction artifacts. However, we believe that AlphaGan-BG can achieve a higher accuracy if trained on a specific background reconstruction method with a consistent noise and artifact pattern.
\printbibliography
\newpage
\section*{Appendix 1: Visual Results on Adobe Matting dataset}
\begin{center}
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{images/appendix-lowsize.pdf}
\end{center}
\newpage
\section*{Appendix 2: Visual Results on Video Matting dataset - A Frame from Alex Video Sequence}
\begin{center}
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{images/appendix2-p1-lowsize.pdf}
\end{center}
\newpage
\section*{Appendix 2: Visual Results on Video Matting dataset - A Frame from Castle Video Sequence}
\begin{center}
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{images/appendix2-p2-lowsize.pdf}
\end{center}
\newpage
\section*{Appendix 2: Visual Results on Video Matting dataset - A Frame from Dmitriy Video Sequence}
\begin{center}
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{images/appendix2-p3-lowsize.pdf}
\end{center}
\end{document}
\section{Introduction}
The nucleon spin structure function $g_1$, extracted from
deep inelastic lepton-nucleon scattering with
polarization \cite{EMC1,EMC2}, if interpreted in the
infinite momentum frame, defines the fraction of
the nucleon spin carried by quark spins. Accordingly,
only about 30\% of the nucleon spin is carried by quark spins,
which is usually referred to as the ``spin crisis''. Consequently,
a lot of experimental and theoretical effort over the last 20
years has been devoted to the search for the rest of the
nucleon spin, without obvious success, however.
Similar results are obtained on the lattice, both
for nucleons and for the lowest mesons (for a review see ref. \cite{Hag}).
In \cite{GLL1,GLL2,GLL3} we have suggested a way
to define in a gauge invariant manner
and measure the spin content of mesons in the rest frame
at different resolution scales, including the infrared
ones. The method is based on the unitary transformation
from the complete and orthogonal chiral quark-antiquark
basis to the complete and orthogonal $^{2S+1}L_J$ basis
in the rest frame \cite{GN}. Using a set of interpolators
that transform according to different chiral representations
\cite{CJ,G1}, one can measure on the lattice a ratio of
couplings of different interpolators to a given state in
the rest frame. This ratio determines a mixture of different
chiral representations in the meson wave functions. Then,
given this ratio and using a unitary transformation from the
chiral basis to the angular momentum basis, one obtains the
angular momentum content of a meson in the rest frame.
The result was that the angular momentum content of the
$\rho$-meson in the rest frame is given approximately
by the $^3S_1$ wave, without an obvious trace of the
spin crisis. A natural question is then which of the
definitions reflects the spin content of a hadron.
Below we overview both the principal aspects of the method
and the numerical results, using the $\rho$-meson as an example.
\section{The method}
The $I=1,
J^{PC}=1^{--}$ $\bar q q$ states with unbroken chiral symmetry
transform according to two chiral representations, $(0,1) +
(1,0)$ and $(1/2,1/2)_b$ \cite{CJ,G1}. These two representations
form a complete
and orthogonal basis.
The state that transforms as $(0,1) + (1,0)$ can be
created from the vacuum by the vector current,
\begin{equation}
O_\rho^V(x) = \qbar(x)\, \gamma^i \vec \tau\, q(x)\;,
\label{rV}
\end{equation}
and the state that belongs to the $(1/2,1/2)_b$ representation
can be created by the pseudotensor operator,
\begin{equation}
O_\rho^T(x) = \qbar(x)\, \sigma^{0i} \vec \tau\, q(x)\;.
\label{rT}
\end{equation}
In the continuum the physical $\rho$-meson with broken
chiral symmetry can be created from the vacuum by both
operators and the corresponding amplitudes are given as
\begin{eqnarray}
\langle 0 | \qbar(0) \gamma^\mu q(0) | V(p; \lambda)\rangle &=&
m_\rho f_\rho^V e^\mu_\lambda\;,
\label{rhoV}\\
\langle 0| \left(\qbar(0) \sigma^{\alpha \beta} q(0)\right)(\mu) | V(p; \lambda)\rangle &= &
\I f_\rho^T(\mu)
(e^\alpha_\lambda p^\beta - e^\beta_\lambda p^\alpha)\;,
\label{rhoT}
\end{eqnarray}
where $V(p; \lambda)$ is the vector meson state with the mass $m_\rho$,
momentum $p$ and polarization $\lambda$. The vector current is conserved,
consequently the vector coupling constant $f_\rho^V$ is scale-independent. The
pseudotensor ``current'' is not conserved and is subject to a nonzero anomalous
dimension. Consequently the pseudotensor coupling $ f_\rho^T(\mu)$ manifestly
depends on the resolution scale $\mu$.
The very fact that the $\rho$-meson can be created by the operators
that transform according two different chiral representations
tells that chiral symmetry is broken and the
meson wave function is a superposition of both chiral representations.
In the rest frame the ratio
\begin{equation}
\frac{f_\rho^V}{f_\rho^T(\mu)} = \frac
{\langle 0 | \qbar(0) \gamma^i q(0) | V(\lambda)\rangle}
{\langle 0 |\left(\qbar(0) \sigma^{0i} q(0)\right)(\mu) | V(\lambda)\rangle}
\label{rhoV/rhoT}
\end{equation}
determines the ratio of the two allowed chiral
representations in the $\rho$-meson wave function
at a given resolution scale $\mu$.
Such a ratio can be measured on the lattice.
Given the set of operators $O_i$ above, we
use the variational method \cite{VAR} and
calculate a cross-correlation matrix at zero spatial momentum (i.e., in the
rest frame),
\begin{equation}\label{corr_inf}
C(t)_{ij}=\langle O_i(t)O_j^\dagger(0)\rangle=\sum_{n=1}^\infty a_i^{(n)} a_j^{(n)*}
\mathrm{e}^{-E^{(n)} t}\;,
\end{equation}
with the coefficients giving the overlap of the operators with the
physical state,
\begin{equation}\label{eq_w_f}
a_i^{(n)}=\langle 0| O_i|n\rangle\;.
\end{equation}
With a set of operators spanning a complete and orthogonal basis with respect
to some symmetry group, these overlaps (coupling constants) give the complete
information about symmetry breaking. The interpolating composite operators $O_i$
are not normalized on the lattice
and consequently the absolute values of the coupling constants $a_i^{(n)}$
cannot be obtained. However, a ratio of the couplings is a well defined quantity
and can be computed as \cite{GLL1}
\begin{equation}\label{ratio_op_comp}
\frac{a_i^{(n)}}{a_k^{(n)}}=
\frac{\widehat C(t)_{ij} u_j^{(n)}}{\widehat C(t)_{kj} u_j^{(n)}}\;.
\end{equation}
Here $\widehat C$ is the cross-correlation matrix from (\ref{corr_inf}),
a sum is implied for the index $j$ on the right-hand side and $u_j^{(n)}$ are the eigenvectors obtained from the generalized eigenvalue problem,
\begin{equation}\label{gev_1}
\widehat C(t)_{ij} u_j^{(n)} =\lambda^{(n)}(t,t_0)\widehat
C(t_0)_{ij} u_j^{(n)}\;,
\end{equation}
with $t_0$ being some normalization point in Euclidean time.
The ratio (\ref{ratio_op_comp})
coincides with the ratio of matrix elements
(\ref{rhoV/rhoT}) with $i \equiv
V; ~ k \equiv T$.
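In practice, Eqs. (\ref{ratio_op_comp}) and (\ref{gev_1}) amount to solving a generalized eigenvalue problem per timeslice and contracting the correlation matrix with the resulting eigenvector. A schematic implementation (our own sketch, omitting the statistical analysis) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def coupling_ratio(C, t, t0, n, i, k):
    # C: (T, N, N) cross-correlation matrices; t, t0: timeslices;
    # n: state index (0 = ground state); i, k: operator indices.
    evals, evecs = eig(C[t], C[t0])        # C(t) u = lambda C(t0) u
    order = np.argsort(evals.real)[::-1]   # largest eigenvalue first
    u = evecs[:, order[n]]
    return (C[t][i] @ u) / (C[t][k] @ u)   # the ratio a_i / a_k
\end{verbatim}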
The chiral basis in the quark-antiquark system is a complete one and can be
connected to the complete angular momentum basis in the rest frame via the
unitary transformation \cite{GN}
\begin{equation}\label{unitary_1}
\left(
\begin{array}{l}
|(0,1)\oplus(1,0);1 ~ 1^{--}\rangle\cr
|(1/2,1/2)_b;1 ~ 1^{--}\rangle
\end{array}
\right) = U\cdot
\left(
\begin{array}{l}
|1;{}^3S_1\rangle\cr
|1;{}^3D_1\rangle
\end{array}
\right)
\end{equation}
with
\begin{equation}\label{unitary_2}
U=
\left(
\begin{array}{cc}
\sqrt{\frac23} & \sqrt{\frac13} \cr
\sqrt{\frac13} & -\sqrt{\frac23}
\end{array}
\right)\;.
\end{equation}
Consequently, if we know the mixture of the two allowed chiral representations in
a physical state, then applying the unitary transformation
(\ref{unitary_1}) we are also able to obtain the angular momentum content of this
state in the rest frame. The mixture of the chiral representations
in the $\rho$-meson wave function at the resolution scale $\mu$ is given
by (\ref{ratio_op_comp}) and (\ref{rhoV/rhoT}) and can be measured in
lattice simulations.
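Concretely, once the ratio $a_V/a_T$ has been measured, the $^3S_1$ and $^3D_1$ probabilities follow from rotating the normalized chiral amplitudes with the matrix $U$; a sketch (assuming real amplitudes) is:
\begin{verbatim}
import numpy as np

U = np.array([[np.sqrt(2/3),  np.sqrt(1/3)],
              [np.sqrt(1/3), -np.sqrt(2/3)]])

def partial_wave_content(ratio_av_at):
    # Normalized chiral amplitudes (a_V, a_T), real phases assumed.
    chi = np.array([ratio_av_at, 1.0])
    chi = chi / np.linalg.norm(chi)
    c_S, c_D = U @ chi                 # rotate to the 3S1/3D1 basis
    return c_S**2, c_D**2              # probabilities of 3S1 and 3D1
\end{verbatim}
For instance, $a_V/a_T=\sqrt{2}$ corresponds to a pure $^3S_1$ state, while $a_V/a_T=-1/\sqrt{2}$ corresponds to a pure $^3D_1$ state.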
By this we define what is meant under the angular momentum content
of the $\rho$-meson in the rest frame and, consequently,
we can answer the question whether or not the spin of the $\rho$-meson
is carried by spins of its valence quarks in the rest frame.
\section{Scale dependence of the chiral and angular momentum decompositions}
The ratio $a_V^{(n)}/a_T^{(n)}$ as well as the partial wave content of
a hadron are not renormalization group invariant quantities. Hence
they manifestly depend on the resolution scale at which we probe the
hadron.
If we probe the hadron structure on the lattice with the local interpolators,
then we study the hadron decomposition at the scale fixed by
the lattice spacing $a$. For a reasonably small $a$ this scale
is close to the ultraviolet scale. However, we are interested
in the hadron content at the infrared scales, where mass is
generated. For this purpose we cannot use a large $a$, because
the matching with continuum QCD would be lost. Given a fixed,
reasonably small lattice spacing $a$, a small resolution scale $\mu \sim 1/R$
can be achieved by gauge-invariant smearing of the point-like
interpolators. We smear every quark field in spatial directions
with a Gaussian profile of size $R$ in
physical units such that $R/a\gg 1$; see Fig. 1. Then even in the continuum
limit $a \rightarrow 0$ we probe the hadron content at the resolution
scale fixed by $R$. Such a definition of the resolution is similar
to the experimental one, where an external probe is sensitive only to
quark fields (it is blind to gluonic fields) at a resolution
that is determined by the momentum transfer in spatial directions.
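As a toy illustration of the role of $R$ (our own sketch; genuine quark smearing must include the gauge links to remain gauge covariant, which is omitted here), Gaussian smearing of a scalar lattice field can be written as a momentum-space filter:
\begin{verbatim}
import numpy as np

def gaussian_smear(phi, R, a):
    # phi: (L, L, L) scalar lattice field; R: smearing radius in fm;
    # a: lattice spacing in fm. Multiplies by exp(-(R k)^2 / 4) in
    # momentum space, i.e. a position-space Gaussian of width ~ R.
    L = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(L)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    prof = np.exp(-(R / a) ** 2 * (kx**2 + ky**2 + kz**2) / 4.0)
    return np.fft.ifftn(prof * np.fft.fftn(phi)).real
\end{verbatim}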
\begin{figure}[tb]
\begin{center}
\includegraphics[width=.35\textwidth]{smearint.eps}
\includegraphics[width=.35\textwidth]{smearing.eps}
\end{center}
\caption{Gauge-invariant smearing and the resolution scale $R$ definition}
\end{figure}
We use three different smearing radii $R$ for the quark
fields in both the source and the sink; the radii for each set are listed in
Table \ref{tab:radii}.
The ``narrow'' smearing width (index $n$) varies between
0.33 and 0.36 fm, depending on the set of configurations. The ``wide'' smearing
radius (index $w$) lies between 0.66 and 0.69 fm and the ``ultrawide'' one
is 0.81 - 0.85 fm (index $uw$). Hence we can study the hadron structure at resolutions
between 0.33 fm and 0.85 fm.
Consequently we have the following set of the interpolating operators
\begin{eqnarray}
O^V_n=\overline u_n \gamma^i d_n\;,\;\;
&O^V_w=\overline u_w \gamma^i d_w\;,\;\;
&O^V_{uw}=\overline u_{uw} \gamma^i d_{uw}\;,\;\;\nonumber\\
O^T_n=\overline u_n \gamma^t \gamma^i d_n\;,\;\;
&O^T_w=\overline u_w \gamma^t \gamma^i d_w\;,
&O^T_{uw}=\overline u_{uw} \gamma^t \gamma^i d_{uw}\;,
\end{eqnarray}
where $\gamma^i$ is one of the spatial Dirac matrices and $\gamma^t$ is the
Dirac matrix in the (Euclidean) time direction.
\section{Lattice details and choices of correlation matrix}
\begin{table}[tb]
\begin{center}
\begin{tabular}{cccccccc}
\hline
\hline
Set & $\beta_{LW}$ & $a\,m_0$ & \#{conf} & $a$ [fm] & $m_\pi$ [MeV] & $m_\rho$ [MeV] & $m_{\rho'}$ [MeV]\\
\hline
A\phantom{1} & 4.70 & -0.050 & 200 & 0.1507(17) & 526(7) & 911(11) & 1964(182)\\
B1 & 4.65 & -0.060 & 300 & 0.1500(12) & 469(5) & 870(10) & 1676(106)\\
B2 & 4.65 & -0.070 & 200 & 0.1406(11) & 296(6) & 819(18) & 1600(181)\\
C\phantom{1} & 4.58 & -0.077 & 300 & 0.1440(12) & 323(5) & 795(15) & 1580(159)\\
\hline
\hline
\end{tabular}
\caption{\label{tab:sim}
Specification of the data used here; for the gauge coupling only the
leading value $\beta_{LW}$ is given, and $m_0$ denotes the bare mass parameter of
the CI action. Further details on the action, the simulation and the
determination of the lattice spacing and the $\pi$- and $\rho$-masses are found
in \cite{Gattringer:2008vj,Engel:2010my}.}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\begin{tabular}{cccc}
\hline
\hline
Set & $R_n$ [fm] & $R_w$ [fm] & $R_{uw}$ [fm]\\
\hline
A\phantom{1} & 0.36 & 0.67 & --\\
B1 & 0.34 & 0.69 & 0.81\\
B2 & 0.34 & 0.66 & 0.85\\
C\phantom{1} & 0.33 & 0.66 & --\\
\hline
\hline
\end{tabular}
\caption{\label{tab:radii}
Specification of the smearing radii $R$.}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[height=8cm,clip]{rho-masses.eps}
\end{center}
\caption{\label{fig:masses}
The masses of both $\rho$ and $\rho'$ states extracted from
different $4 \times 4$ and $6 \times 6$ correlation matrices. The crosses indicate the mass
values from experiments.}
\end{figure}
In our study we use Chirally Improved fermions \cite{CI} and the
L\"uscher-Weisz gauge action \cite{LW}. The lattice size is $16^3 \times 32$. We
use dynamical gauge configurations with two mass-degenerate light quarks.
With the lattice spacing $\approx 0.15$ fm the spatial volume of the lattice is
$\approx 2.4^3$ fm$^3$. In our present study we limit ourselves to the
$\rho$ and $\rho'$ states. For details on the simulation we refer the reader to
Table \ref{tab:sim} and to \cite{Gattringer:2008vj,Engel:2010my}.
For the sets A and C we
construct $4 \times 4$ correlation matrices (i.e., with both vector and
pseudotensor interpolators using narrow and wide smearing radii), while for
the sets B1 and B2 we study the $6 \times 6$ correlation matrix (with
narrow, wide and ultrawide smearings for both vector and pseudotensor
operators) as well as different $4 \times 4$ sub-matrices.
In Fig. \ref{fig:masses} we show masses of both the ground state
$\rho$-meson and its first excitation $\rho'$ extracted from different
correlation matrices.
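The masses are obtained from the correlation matrices with what we take to be the standard variational method; a minimal sketch of the corresponding generalized eigenvalue problem (with illustrative, not production, conventions) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def effective_masses(C, t0=1):
    """Variational (GEVP) sketch: C[t] is the NxN correlation matrix
    at time slice t.  Solve C(t) v = lambda(t,t0) C(t0) v; each sorted
    eigenvalue trajectory gives a*m_eff = log(lambda(t)/lambda(t+1))."""
    lams = []
    for t in range(len(C)):
        w = eig(C[t], C[t0], right=False)   # generalized eigenvalues
        lams.append(np.sort(w.real)[::-1])
    lams = np.array(lams)
    return np.log(lams[:-1] / lams[1:])     # one column per state
\end{verbatim}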
\section{Chiral symmetry breaking and the angular momentum content
of $\rho$ and $\rho'$ mesons}
The ratio $a_V/a_T$
of the two allowed chiral representations in the $\rho$- and
$\rho'$-mesons versus the resolution $R$ is shown in Fig. \ref{fig:all}.
The results obtained from different $4 \times 4$ and $6 \times 6$
correlation matrices are consistent with each other. For the
ground state $\rho$ we observe strong chiral symmetry breaking
practically at all resolutions; only in the deep ultraviolet, $R,a \ll 0.3$ fm,
where chiral symmetry is unbroken, does the tensor ``current'' decouple
from the $\rho$-meson. For the $\rho'$ the tensor operator decouples
much faster towards the ultraviolet than for the $\rho$-meson.
This shows that the wave functions of these two states are
significantly different and that chiral symmetry breaking acts
differently in the two states. At the scale of $\sim 1$ fm
of the hadron size, i.e., at which the hadron mass is generated,
chiral symmetry breaking in the $\rho'$ state is less essential
than in the ground state.
Given this ratio and the unitary transformation from the chiral basis
to the angular momentum basis (\ref{unitary_1}) one can obtain
the partial wave content of both states.
For
the $\rho$ meson it is approximately $0.99\,|^3S_1\rangle -
0.1\,|^3D_1\rangle$. Hence the ground state in the infrared is practically a
pure $^3S_1$ state with a tiny admixture of the $^3D_1$ wave.
Consequently, in the rest frame the spin of the $\rho$-meson
is almost completely carried by the spins of its valence quarks. No
trace of the ``spin crisis'' is observed. Within our definition
we do not separate the contributions of quarks and gluons to the
hadron spin. In the confining regime, where gauge invariance
and the quark-gluon interaction are
of crucial importance, it is not possible to separate the two contributions
to the total spin in a sensible gauge-invariant manner; after all, the
quark-gluon interaction is a consequence of gauge invariance.
The spin of the $\rho$-meson in the rest frame is carried by the spins of its
valence quarks dressed by gluons.
The gluonic field is important for the angular momentum generation,
because it is this field that
provides chiral symmetry breaking and that is responsible for most of the
hadron mass.
In the excited $\rho$-meson there
is a significant contribution of the $^3D_1$ wave. In this
case the angular momentum content lies between the following
lower and
upper bounds. For the lower bound it is $0.88\,|^3S_1\rangle -
0.48\,|^3D_1\rangle$ and for the upper bound it is $0.97\,|^3S_1\rangle -
0.25\,|^3D_1\rangle$. This once again demonstrates that the first
excitation of
the $\rho$-meson cannot be considered a pure radial excitation
of the ground state $\rho$. Obviously,
both radial and orbital degrees of freedom are excited, which reflects the as yet
unknown dynamics of confinement and chiral symmetry breaking.
\begin{figure}[t]
\begin{center}
\includegraphics*[height=8cm,clip]{ratio_vs_R_for_paper.eps}
\end{center}
\caption{\label{fig:all}
The ratio of the vector to the pseudotensor couplings versus the resolution scale $R$, as
extracted from all $4 \times 4$ and $6 \times 6$ correlation matrices.
Broken lines are drawn only to guide the eye.}
\end{figure}
\paragraph{Acknowledgments.}
We gratefully acknowledge support of the grants P21970-N16 and DK W1203-N08
of the Austrian
Science Fund FWF and the DFG project SFB/TR-55. The calculations have been performed
on the SGI Altix 4700 of the Leibniz-Rechenzentrum Munich and on the local
clusters at the University of Graz.
\section{Introduction}
Quantum entanglement, one of the most striking consequences of nonlocal
quantum correlation, is fundamental in quantum physics both for
understanding the nonlocality of quantum mechanics\cite{Einstein} and its
role in quantum computations and communications\cite{Nielsen}. The
entanglement would undergo decoherence due to the unavoidable interaction
with the environment. As a result, an initially entangled two-qubit system
becomes totally disentangled after evolving for a finite time. This
phenomenon is called entanglement sudden death (ESD)\cite{yu} and has
recently been demonstrated experimentally\cite{Almeida}.
Many works are devoted to the ESD of the qubits coupled to an environment
that results in irreversible loss, and the rotating-wave approximation (RWA)
is made on the interaction of the qubits with the field\cite{yu,Bellomo}.
The effect of the counter-rotating terms on the ESD has been less studied,
even for the Jaynes-Cummings (JC) model\cite{JC}, where the qubit interacts
only with a single cavity mode. In fact, some investigations have also
focused on the storage of entanglement in such a system\cite{ye}. So the
entanglement dynamics of two independent JC atoms is also of fundamental
interest, and has been well studied only in the RWA\cite{Eberly1,Yonac,Ficek,Sainz}.
On the other hand, the JC model is also closely related to condensed matter
physics recently. It can be realized in some solid-state systems recently,
such as one Josephson charge qubit coupling to an electromagnetic resonator
\cite{Wallraff}, the superconducting quantum interference device coupled
with a nanomechanical resonator\cite{Chiorescu,squid}, and the most recently
LC resonator magnetically coupled to a superconducting qubit\cite{exp}. In
conventional quantum optics, the coupling between the two-level "natural"
atom and the single bosonic mode is quite weak, RWA has been usually
employed. With the progress of the fabrication, the artificial atoms may
interact very strongly with on-chip resonant circuits\cite
{Wallraff,squid,Chiorescu,exp}, RWA can not describe well the strong
coupling regime\cite{liu}. Therefore, it is highly desirable to explore the
entanglement dynamics for two independent JC atoms without RWA.
However, when the counter-rotating terms are included, the
photon number is not conserved, so the photonic Fock space has
infinite dimensions and any solution without the RWA is highly nontrivial.
In recent years, several non-RWA
approaches\cite{chenqh,zheng,liu,Liutao,Amico,Yuyu} have been
proposed in a few contexts. In particular, by using extended bosonic
coherent states, three of the present authors and a collaborator
have solved the Dicke model without the RWA exactly in the numerical
sense\cite{chenqh}.
In this paper, we employ a numerically exact technique to solve the JC model
without the RWA by means of extended bosonic coherent states. The correlations
among bosons are added step by step until further corrections no longer
change the results. All eigenfunctions and eigenvalues can be obtained
exactly. Based on the exact solutions to the single JC model, we can easily study
the entanglement evolution of two JC atoms. Analytical results
without the RWA based on a unitary transformation are also presented. The paper
is organized as follows. In Sec. II, the numerically exact solution to the JC
model is presented in detail and analytical results in terms of the unitary
transformation are also provided. The numerical results and discussions are
given in Sec. III. A brief summary is presented in the last section.
\section{Model Hamiltonian}
The JC model can be written in the basis $\left(
\begin{array}{ll}
\left| \uparrow \right\rangle , & \left| \downarrow \right\rangle
\end{array}
\right) $, where the first entry denotes spin-up and the second denotes
spin-down,
\begin{equation}
H_{JC}=\frac \Delta 2\sigma _z+\omega a^{+}a+\lambda (a+a^{+})\sigma _x
\label{hamiltonian_JC1}
\end{equation}
where $a^{+}$ and $a$ are the bosonic creation and annihilation operators of
the cavity, $\Delta $ and $\omega $ are the frequencies of the atom and the
cavity, $\lambda $ is the atom-cavity coupling constant, and $\sigma _k\,(k=x,y,z)$
are the Pauli matrices of the two-level atom. Here we set $\hbar =1$. The
detuning is defined as $\delta =\omega -\Delta $.
In this paper, we describe two approaches to solve the single JC model
without RWA. One is the numerically exact approach in terms of extended
bosonic coherent states, the other is an analytical approach based on a
unitary transformation.
\textsl{Numerically exact approach}.-- In order to facilitate the
calculation, we use a transformed Hamiltonian obtained by a rotation around the
$y$-axis by an angle $\pi /4$, so $\sigma _z$ $\rightarrow \sigma
_x,\sigma _x\rightarrow -\sigma _z$. This rotation can also be
described formally by the following transformation,
\begin{equation}
H_{JC}^{\prime }=VH_{JC}V^{+}=-\frac \Delta 2\sigma _x+\omega a^{+}a+\lambda
(a+a^{+})\sigma _z, \label{hamiltonian_JCnew}
\end{equation}
where
\[
V=\frac 1{\sqrt{2}}\left(
\begin{array}{ll}
1 & 1 \\
-1 & 1
\end{array}
\right).
\]
By introducing the new operators
\begin{equation}
A=a+g,B=a-g,g=\lambda /\omega, \label{newoperators}
\end{equation}
we can observe that the linear term for the bosonic operator is removed, and
only the number operators $A^{+}A $ and $B^{+}B$ are left. Therefore the
wavefunction can be expanded in terms of these new operators as
\begin{equation}
\left| \varphi ^{\prime }(t)\right\rangle =\left(
\begin{array}{l}
\sum_{n=0}^{N_{tr}}c_{1n}\left| n\right\rangle _A \\
\sum_{n=0}^{N_{tr}}c_{2n}\left| n\right\rangle _B\label{wavefunction}
\end{array}
\right).
\end{equation}
For the $A$ operator, we have
\begin{equation}
\left| n\right\rangle _A=\frac{(A^{+})^n}{\sqrt{n!}}\left| 0\right\rangle _A=\frac{
\left( a^{+}+g\right) ^n}{\sqrt{n!}}\left| 0\right\rangle _A.
\label{Aperators}
\end{equation}
The $B$ operator has the analogous property.
The Hamiltonian (2) remains unchanged under the transformation \textit{level}
$1\rightarrow $ \textit{level} $2$ and $a^{+}$ (or $a$) $\rightarrow
-a^{+}$ (or $-a$). So we can set $c_{2n}=\pm (-1)^nc_{1n}$ (or $c_{1n}=\pm
(-1)^nc_{2n}$). The final, more concise wavefunction is thus taken as
\begin{equation}
\left| \varphi ^{\prime }(t)\right\rangle =\left(
\begin{array}{l}
\sum_{n=0}^{N_{tr}}c_n\left| n\right\rangle _A \\
\pm \sum_{n=0}^{N_{tr}}(-1)^nc_n\left| n\right\rangle _B
\end{array}
\right). \label{wavefunction2}
\end{equation}
The Schr\"odinger equation is
\begin{equation}
\omega (m-g^2)c_m\pm \frac \Delta 2\sum_{n=0}^{N_{tr}}D_{mn}c_n=E^{(\pm
)}c_m, \label{schrodinger}
\end{equation}
where
\begin{equation}
D_{mn}=\exp (-2g^2)\sum_{k=0}^{\min [m,n]}(-1)^{-k}\frac{\sqrt{m!n!}\,
(2g)^{m+n-2k}}{(m-k)!(n-k)!k!}.
\end{equation}
The $l$-th eigenfunction $\left| \varphi ^{\prime (l)}\right\rangle $ is
then obtained, which can also be expressed in the original basis of
Hamiltonian (1) as
\begin{equation}
\left| \varphi ^{(l)}\right\rangle =\left(
\begin{array}{l}
\phi _1^{(l)} \\
\phi _2^{(l)}\label{eigenfunction}
\end{array}
\right) =V^{+}\left| \varphi ^{\prime (l)}\right\rangle.
\end{equation}
The wavefunction at any time then reads
\begin{equation}
\left| \varphi (t)\right\rangle =\sum_{l=1}^{M_0}h^{(l)}\exp (-iE_lt)\left|
\varphi ^{(l)}\right\rangle ,\quad M_0=2(N_{tr}+1), \label{eigenfunction_time}
\end{equation}
where $h^{(l)}$ is the coefficient to be determined.
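For concreteness, a compact numerical sketch of this procedure reads as follows (the truncation $N_{tr}$ is illustrative; in practice it is increased until the levels no longer change, cf. Eq. (\ref{schrodinger})):
\begin{verbatim}
import numpy as np
from math import factorial, exp, sqrt

def solve_jc(Delta, omega, lam, Ntr=40):
    """Numerically exact levels of the non-RWA JC model: build the
    matrix D_mn above and diagonalize the two parity blocks
    H(+-) = diag(omega*(m - g^2)) +- (Delta/2) * D."""
    g = lam / omega
    D = np.empty((Ntr + 1, Ntr + 1))
    for m in range(Ntr + 1):
        for n in range(Ntr + 1):
            s = sum((-1)**k * (2*g)**(m + n - 2*k)
                    / (factorial(m-k) * factorial(n-k) * factorial(k))
                    for k in range(min(m, n) + 1))
            D[m, n] = exp(-2*g*g) * sqrt(factorial(m)*factorial(n)) * s
    diag = omega * (np.arange(Ntr + 1) - g*g)
    Ep = np.linalg.eigvalsh(np.diag(diag) + 0.5 * Delta * D)
    Em = np.linalg.eigvalsh(np.diag(diag) - 0.5 * Delta * D)
    return np.sort(np.concatenate([Ep, Em]))
\end{verbatim}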
For the two independent JC atoms, the eigenfunction of the total system is
given by
\begin{equation}
\left| \psi \right\rangle =\left| \varphi _1\right\rangle \otimes \left|
\varphi _2\right\rangle, \label{eigenfunction_total}
\end{equation}
where $\varphi _i$ is the eigenfunction of the JC model $i\,(=1,2)$ with
\begin{equation}
\left| \varphi _i\right\rangle =\left(
\begin{array}{l}
\left| \phi _{i,1}\right\rangle \\
\left| \phi _{i,2}\right\rangle \label{eigenfunction_s}
\end{array}
\right).
\end{equation}
The eigenvalues of the total system are $E=E_1+E_2$. The $j$-th
wavefunction of the total system can then be explicitly expressed
in the basis $\left(
\begin{array}{llll}
\left| \uparrow \uparrow \right\rangle , & \left| \uparrow \downarrow
\right\rangle , & \left| \downarrow \uparrow \right\rangle , & \left|
\downarrow \downarrow \right\rangle
\end{array}
\right) $
\begin{equation}
\left| \psi ^{(j)}\right\rangle =\left(
\begin{array}{l}
\phi _{1,1}^{(l)}\phi _{2,1}^{(k)} \\
\phi _{1,1}^{(l)}\phi _{2,2}^{(k)} \\
\phi _{1,2}^{(l)}\phi _{2,1}^{(k)} \\
\phi _{1,2}^{(l)}\phi _{2,2}^{(k)}
\end{array}
\right) ,\quad j=1,2,\dots ,4(N_{tr}+1). \label{wavefunction_j}
\end{equation}
The wavefunction of the total system at any time then reads
\begin{equation}
\left| \psi (t)\right\rangle =\sum_{j=1}^{2M_0}f^{(j)}\exp (-iE_jt)\left|
\psi ^{(j)}\right\rangle ,M_0=2(N_{tr}+1), \label{wavefunction_t}
\end{equation}
where $f^{(j)}$ is the coefficient to be determined.
The two initial Bell states with anti-correlated and correlated spins, which
for convenience are denoted as Bell states 1 and 2, respectively, read as follows in the
original basis if we use the column-matrix notation
\begin{equation}
\left| \psi _{Bell}^{(1)}\right\rangle =\left(
\begin{array}{l}
0\left| 00\right\rangle \\
\cos \alpha \left| 00\right\rangle \\
\sin \alpha \left| 00\right\rangle \\
0\left| 00\right\rangle
\end{array}
\right) ,\;\left| \psi _{Bell}^{(2)}\right\rangle =\left(
\begin{array}{l}
\cos \alpha \left| 00\right\rangle \\
0\left| 00\right\rangle \\
0\left| 00\right\rangle \\
\sin \alpha \left| 00\right\rangle \label{Bell2}
\end{array}
\right) ,
\end{equation}
where $\left| 00\right\rangle $ denotes the photon vacuum state of the two
JC cavities, and $f^{(j)}$ is determined by using $\left| \psi (0)\right\rangle
=\left| \psi _{Bell}^{(i)}\right\rangle ,\;i=1,2$.
For two-qubit states, entanglement can be quantified by the concurrence\cite
{Wootters}. It can be calculated from the following reduced density matrix
\begin{equation}
\rho =Tr_{ph}(\left| \psi (t)\right\rangle \left\langle \psi (t)\right|
)=\sum_{k,l=1}^{N_0}f^{(k)}f^{(l)}e^{-i(E_k-E_l)t}\Pi, \label{density}
\end{equation}
where the $4\times 4$ matrix $\Pi $ is determined by $\left| \psi
^{(j)}\right\rangle $. The four eigenvalues $\lambda _i$ of the matrix $\rho
\tilde{\rho}$, with $\tilde{\rho}=(\sigma _y\otimes \sigma _y)\rho ^{*}(\sigma
_y\otimes \sigma _y)$ \cite{Wootters}, in decreasing order give the entanglement
\begin{equation}
C^{AB}(t)=\max [0,\sqrt{\lambda _1}-\sqrt{\lambda _2}-\sqrt{\lambda _3}-
\sqrt{\lambda _4}] . \label{entanglement}
\end{equation}
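In practice this spin-flip construction of Ref. \cite{Wootters} is a few lines of code; a minimal sketch is:
\begin{verbatim}
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a 4x4 two-qubit density matrix rho:
    lambda_i are the decreasingly ordered eigenvalues of
    rho * (sy x sy) * conj(rho) * (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ syy @ rho.conj() @ syy).real
    lam = np.sqrt(np.sort(lam.clip(min=0))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
\end{verbatim}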
\textsl{Unitary transformation approach}.-- In order to treat JC model
without RWA analytically, we perform a unitary transformation\cite{zheng} on
Hamiltonian (\ref{hamiltonian_JC1}) to eliminate the counter-rotating wave
term
\begin{eqnarray*}
H^S &=&e^SHe^{-S}, \\
H &=&H_0+H_1,H_0=\frac \Delta 2\sigma _z+\omega a^{+}a,H_1=g(a^{+}+a)(\sigma
_{+}+\sigma _{-}), \\
S &=&\frac{g\xi }\omega (a^{+}-a)(\sigma _{+}+\sigma _{-}),
\end{eqnarray*}
where $\xi $ is a parameter to be determined. The transformed Hamiltonian
can be expanded in terms of $g$ up to the $g^2$ term (higher terms are
neglected), and we have
\[
H^S=H_0+H_1^S+H_2^S+O(g^3),
\]
\begin{equation}
H_1^S=H_1+[S,H_0]=g\left( \frac \Delta \omega \xi -\xi +1\right) (a^{+}\sigma
_{-}+a\sigma _{+})+g\left( -\frac \Delta \omega \xi -\xi +1\right) (a^{+}\sigma
_{+}+a\sigma _{-}),
\end{equation}
\begin{equation}
H_2^S=[S,H_1]+\frac 12[S,[S,H_0]]=-\frac{g^2\Delta }{(\omega +\Delta )^2}
\sigma _z-\frac{g^2(\omega +2\Delta )}{(\omega +\Delta )^2}.
\end{equation}
If we choose $\xi =\omega /(\Delta +\omega )$, the counter-rotating
term is eliminated, and we then have the following renormalized JC
Hamiltonian in an RWA form
\begin{eqnarray}
H^S &=&\frac{\Delta _{eff}}2\sigma _z+\omega a^{+}a+g_{eff}(a^{+}\sigma
_{-}+a\sigma _{+}), \label{hamiltonian_ren} \\
\Delta _{eff} &=&\Delta \left( 1-\frac{2g^2}{(\Delta +\omega )^2}\right) ,
\label{d_eff} \\
g_{eff} &=&g\left( \frac{2\Delta }{\omega +\Delta }\right), \label{g_eff}
\end{eqnarray}
where a constant has been removed. The effective detuning can be expressed in
terms of the original detuning $\delta $ as
\begin{equation}
\delta _{eff} =\omega -\Delta _{eff}=\left( \delta +\frac{2g^2\left( \omega
-\delta \right) }{\left( 2\omega -\delta \right) ^2}\right).
\label{detunning_eq}
\end{equation}
For $\delta =0,$ we have
\begin{equation}
\delta _{eff}^0=\frac{g^2}{2\omega },\Delta _{eff}^0=\Delta -\delta
_{eff}^0,\;\;g_{eff}^0=g.
\end{equation}
Note that even for zero detuning, we have a finite effective detuning after
the unitary transformation. What is more, the non-RWA results within the
unitary transformation approach are expected to be very close to the RWA ones
in the weak coupling regime, because the effective $g$ is not changed, and the
only modification is the transition frequency, which is reduced by a small
amount proportional to $g^2$.
Following the derivation for the RWA case in Ref. \cite{Eberly1}, if the
initial state is the Bell state 1, the concurrence for two identical JC
atoms without RWA within the unitary transformation approach is easily obtained
\begin{equation}
C_{AB}(t)=\left| \sin 2\alpha \right| \left[ 1-4N^2\sin ^2(\nu t/2)\right],
\label{ana_C_1}
\end{equation}
where
\begin{eqnarray}
N &=&\frac 1{2\sqrt{1+\left[ \frac{\left( \delta +\frac{2g^2\left( \omega
-\delta \right) }{\left( 2\omega -\delta \right) ^2}\right) }{2\left( \frac{
2g}{1+1/\left( 1-\delta /\omega \right) }\right) }\right] ^2}}, \\
\nu &=&\sqrt{\left( \delta +\frac{2g^2\left( \omega -\delta \right) }{\left(
2\omega -\delta \right) ^2}\right) ^2+4\left( \frac{2g}{1+1/\left( 1-\delta
/\omega \right) }\right) ^2}.
\end{eqnarray}
For zero detuning, we have
\begin{equation}
N_0=\frac 1{2\sqrt{1+\frac{g^2}{16\omega ^2}}},\nu _{_0}=\frac g{N_0}.
\label{ana_N0}
\end{equation}
That is to say, within the unitary transformation approach there is no ESD from
the initial Bell state 1, similar to the RWA case.
Similarly, we can get the entanglement evolution from the initial Bell state
2
\begin{equation}
C_{AB}(t)=\left[ 1-4N^2\sin ^2(\nu t/2)\right] \left( \left| \sin 2\alpha
\right| -8N^2\sin ^2(\nu t/2)\cos ^2\alpha \right). \label{ana_C_2}
\end{equation}
The ESD occurs for
\begin{equation}
\left| \tan \alpha \right| <4N^2\sin ^2(\nu t/2).
\end{equation}
Note from Eq. (\ref{ana_N0}) that the value of $N_0$ is very close to the
RWA one even for $g=1$ (for $\omega =1$, $N_0=1/(2\sqrt{1+1/16})\approx 0.485$,
compared with $1/2$ in the RWA), so it is expected that the non-RWA results within the
unitary transformation approach are nearly the same as those in the RWA over a very
large coupling range. Moreover, the unitary transformation approach cannot
provide essentially different results, because a final renormalized RWA-type
Hamiltonian is used for all calculations. In our numerically exact
studies, an RWA-type Hamiltonian is not necessary, so some new results may
be obtained.
\section{Results and discussions}
\begin{figure}[tbp]
\includegraphics[scale=0.85]{FIG_1ab.eps}
\caption{ (Color online) The concurrence for atom-atom entanglement with the
initial atomic state: (a) Bell state 1 for $\alpha=\pi/4$ and (b) Bell state
2 for $\alpha=\pi/12$. $G= 2g$. The corresponding right figures are the
bird's-eye view. }
\label{C_ab_g1=g2}
\end{figure}
We first study the evolution of the concurrence for atom-atom
entanglement for two JC atoms without RWA when the states are
initialized in the given Bell states of Eq. (\ref{Bell2}) with $\alpha$ fixed.
Actually, the dynamics of entanglement in bipartite
quantum systems is sensitive to the initial conditions\cite{Roszak,yu}.
We first consider two identical JC atoms at zero detuning. The concurrence for
of Eqs. (\ref{wavefunction_j}) and (\ref{wavefunction_t}). As show in Fig.
\ref{C_ab_g1=g2}(a), for small $G=2g$, the concurrence evolve as
cos^2(Gt/2) $, just follow that in RWA \cite{Eberly1} for Bell state 1. For
G$ above $0.5$, entanglement can drop to zero, which can not happen in the
two JC atoms in RWA for Bell state 1. The entanglement can be revival
irregularly. JC model is just $N=1$ Dicke model. We attribute this aperiodic
entanglement evolution to the quantum chaos in finite Dicke model\cite{Emary
. Without RWA, the system is integral, the emergency of the quantum chaos is
impossible, so regular behavior is always observed\cite{Eberly1,Ficek}.
It is interesting that the area of the ESD becomes larger with
increasing $G$. At very large coupling constants, the revival of
entanglement does not happen, in sharp contrast with the behavior observed
in the RWA. Without the RWA, the rule for the transfer of entanglement
between the two-qubit subsystems derived in
\cite{Yonac,Ficek,Sainz} may not hold due to the presence of the
counter-rotating terms. Without the RWA, we argue that there is no
entanglement invariant because the photon number is not conserved.
\begin{figure}[tbp]
\includegraphics[scale=0.6]{Bell_I_II.eps} \vspace{-0.8cm}
\caption{(Color online) The concurrence for atom-atom entanglement with the
initial atomic Bell state 1 (upper panel) and the Bell state 2 (down panel).
From left to right column, $g=0.05, 0.1$, and $0.3$. The numerically exact
results and those by the unitary transformation are denoted by the blue and
red curves. }
\label{anaC2tu}
\end{figure}
\begin{figure}[tbp]
\includegraphics[scale=0.85]{FIG_2ab.eps}
\caption{ (Color online) The concurrence for atom-atom entanglement with the
initial atomic state: (a) Bell state 1 for $\alpha=\pi/4$ and (b) Bell state
2 for $\alpha=\pi/12$. $g_1=2g_2, G= g_1+g_2$. The corresponding right
figures are the bird's-eye view.}
\label{C_ab_g1g2}
\end{figure}
For Bell state 2, the ESD can happen even in the RWA\cite{Eberly1}.
Without the RWA, for small $G$, the entanglement dynamics shows the same
behavior. As shown in Fig. \ref{C_ab_g1=g2}(b), the area of the ESD
becomes larger with increasing $G$, and is wider than that for Bell
state 1.
As we know, for weak coupling between the atom and the single bosonic mode,
the RWA is a good approximation and a non-RWA treatment is not necessary. Our
results for the entanglement dynamics also provide evidence for this
point. However, in the strong coupling case, the entanglement dynamics
demonstrates that the counter-rotating terms must be taken into account. With the recent
progress in fabrication, some artificial atoms are indeed strongly coupled
to the bosonic field.
The non-RWA JC model can also be treated by the unitary transformation
approach. Is this approach really good enough in the strong coupling regime?
We now present the analytical results based on this approach for zero
detuning according to Eqs. (\ref{ana_C_1}) and (\ref{ana_C_2}), which are
shown in Fig. \ref{anaC2tu}. The corresponding numerically exact ones are
also exhibited for comparison. It is obvious that the analytical results
essentially deviate from the exact ones at $g= 0.1$. Recent experiments on
the LC resonator coupled to a flux qubit demonstrated that the system
operates in the ultra-strong coupling regime $g=0.1$ \cite{exp}, which
crosses the limit of validity for RWA in the JC model. Since the analysis
based on a unitary transformation could not essentially change the RWA
results, the present numerically exact results are more necessary.
We next consider the two JC atoms with different atom-cavity coupling for
zero detuning. Without loss of generality, we choose $g_1=2g_2$. The
concurrence for atom-atom entanglement as a function of time can also be
calculated with the use of Eqs. (\ref{wavefunction_j}) and (\ref
{wavefunction_t}). As shown in Fig. \ref{C_ab_g1g2}(a), for small $G$, the
concurrence evolves as $\cos ^2(Gt/2)$, following that in the RWA \cite{Eberly1}
for Bell state 1. As $G$ increases, the entanglement can drop to zero and is
almost never recovered, quite different from the case of two identical JC atoms. We
argue that different coupling strengths of the two JC atoms suppress the
atom-atom entanglement. This point is confirmed by the evolution of the
initial Bell state 2 as indicated in Fig. \ref{C_ab_g1g2}(b). No matter
whether the RWA is made or not, the area of the ESD in the
evolution of the initial Bell state 2 is larger than that of the initial Bell
state 1.
In more detail, we study the effect of the coupling strength on the evolution
of the concurrence for atom-atom entanglement. Without loss of generality,
we only consider two identical JC atoms. As exhibited in Figs. \ref{C_ab_g1}
and \ref{C_ab_g2}, for weak coupling $g_1<10^{-3}$, no ESD is observed. For
$g_1>10^{-3}$, the ESD appears, and its area is enlarged with $g$. At the same
time, the periodicity of the entanglement evolution is destroyed.
\begin{figure}[tbp]
\includegraphics[scale=0.8]{FIG_3.eps}
\caption{ (Color online) Histogram of the concurrence for atom-atom
entanglement with the initial Bell state 1. From left to right, $g=10^{-4},
10^{-2}, 10^{-1}$ (top panel) and $g=0.25, 0.5, 1$ (bottom panel). $G=2g$. }
\label{C_ab_g1}
\end{figure}
\begin{figure}[tbp]
\includegraphics[scale=0.8]{FIG_4.eps}
\caption{ (Color online) Histogram of the concurrence for atom-atom
entanglement with the initial Bell state 2. From left to right, $g=10^{-4},
10^{-2}, 10^{-1}$ (top panel) and $g=0.25, 0.5, 1$ (bottom panel). $G=2g$.}
\label{C_ab_g2}
\end{figure}
One natural question is what the mechanism for the ESD in two
remote JC models is. The photons should influence the atomic state
considerably. We therefore calculate the average photon number during the
evolution. To show its role in the entanglement dynamics, we plot
the average photon number together with the entanglement for the two limiting
coupling cases $g=0.01$ and $1$ in Fig. \ref{phontonicnumber1} with
initial Bell state 1 and Fig. \ref{phontonicnumber2} with initial
Bell state 2. It is very interesting that the curves for the
entanglement and the average photon number always show opposite
behavior. This strongly suggests that the average photon number
suppresses the entanglement between the two atoms considerably and
sensitively for both weak and strong coupling.
\begin{figure}[tbp]
\includegraphics[scale=0.8]{FIG_5.eps}
\caption{ The concurrence for atom-atom entanglement and the average
photon number for two identical JC atoms with the initial Bell state 1
for (a) $g=0.01$ and (b) $g=1$. $G=2g$, $\alpha=\pi/4$.}
\label{phontonicnumber1}
\end{figure}
\begin{figure}[tbp]
\includegraphics[scale=0.8]{FIG_6.eps}
\caption{ The concurrence for atom-atom entanglement and the average
photon number for two identical JC atoms with the initial Bell state 2
for (a) $g=0.01$ and (b) $g=1$. $G=2g$, $\alpha=\pi/12$.}
\label{phontonicnumber2}
\end{figure}
Finally, we turn to the effect of detuning on the entanglement
evolution. It is found in Ref. \cite{Ficek} that the atom-atom
entanglement in the RWA usually increases with the ratio of detuning
to coupling constant, $\left| \delta \right| /g$, and is insensitive
to the sign of the detuning. In other words, the entanglement
decreases with $g$ for a given detuning. Without the RWA, this is not the
case. After the unitary transformation, we have the renormalized JC
Hamiltonian Eq. (\ref{hamiltonian_ren}) of RWA type. So the
entanglement evolution depends on $\left| \delta _{eff}\right| $,
which is related to both the magnitude and the sign of the original
detuning $\delta $, as shown in Eq. (\ref{d_eff}). By
Eqs. (\ref{g_eff}) and (\ref{detunning_eq}), the effective coupling
constant can also be expressed as $g_{eff}=2g/\left[ 1+1/\left(
1-\delta /\omega \right) \right] $ if the counter-rotating
terms are taken into account via the unitary transformation. It
follows that $g_{eff}$ decreases with $\left| \delta \right| $ for
positive detuning and increases with $\left| \delta \right| $ for
negative detuning. So in the strong coupling regime, where the
counter-rotating terms are important, the effect of detuning is
very sensitive to the sign of the detuning. The above discussion should
also apply qualitatively to the numerically exact results. We
calculate the entanglement evolution of the initial Bell states 1
and 2 as a function of detuning. The results for different coupling
constants ranging from the weak to the strong coupling regime are listed in
Fig. \ref{detunning}. The entanglement evolution is symmetric in the
detuning for weak coupling and becomes asymmetric with the
increase of the coupling constant. In the weak coupling regime, as
shown in the left column, the entanglement increases with the magnitude
of the detuning, independent of its sign, similar to the RWA case.
With the increase of the coupling, it is however observed that the
entanglement decreases with the magnitude of a positive detuning,
and increases with the magnitude of a negative detuning, in contrast
to the case of the RWA. In the strong coupling regime, a positive
detuning stabilizes the entanglement and facilitates its revival,
whereas a negative detuning suppresses the entanglement,
facilitates its death, and reduces the period of entanglement
revival.
\begin{figure}[tbp]
\includegraphics[scale=0.85]{detunning.eps}
\caption{ Effect of detuning on the entanglement evolution of the initial Bell
state 1 with $\alpha=\pi/4$ (top panel) and the initial Bell state 2 with
$\alpha=\pi/12$ (bottom panel). From left to right column, $g=10^{-4}, 0.02,
0.1$. $\delta =\omega -\Delta$, $G=2g$.}
\label{detunning}
\end{figure}
\section{Conclusions}
In this paper, based on the exact solution of the single JC atom, we are
able to calculate exactly the entanglement evolution of two independent
JC atoms without the RWA. The results are essentially different from the RWA cases
in the strong coupling regime. Analytical results based on a unitary
transformation are also given. They modify the RWA results quantitatively but cannot
provide essentially different ones. Initiated from the Bell state 1, the
RWA results show no ESD. The present numerically exact calculations for the
non-RWA model show that the ESD cannot be avoided, and the periodicity of
the entanglement evolution is destroyed by the presence of additional photons.
We also suggest that the photons may suppress the entanglement and are just
the origin of the ESD. The effect of the detuning on the entanglement
evolution is also investigated. It is observed that the sign of the detuning
plays an essential role in the strong coupling regime. The present theoretical
predictions could be tested in an experimental study of the ESD where the
artificial atoms are realized in circuit QED\cite{Wallraff,squid,Chiorescu,exp}
operating in the ultra-strong coupling regime.
\section*{ACKNOWLEDGEMENTS}
This work was supported by National Natural Science Foundation of
China and National Basic Research Program of China (Grant Nos.
2011CB605903 and 2009CB929104).
\section{Related Work}\label{sec:relatedwork}
\textbf{Scan Matching}.
The most basic element of lidar-based navigation is \emph{local} data association, often referred to as scan matching.
It is frequently carried out by iterative closest point (ICP)~\cite{besl1992method} and its variants~\cite{rusinkiewicz2001efficient}, though care must be taken to provide a good initialization; otherwise, a wrong odometry solution can be obtained.
Bosse and Zlot~\cite{bosse2009continuous} perform scan matching on spinning 2D lidar sweeps where the correspondence generation step of ICP is informed by local shape information.
LOAM~\cite{zhang2014loam} uses feature-based scan matching by minimizing the distance between edge points and planar points of subsequent scans in a Levenberg-Marquardt (LM) framework, resulting in high-rate, low-drift odometry.
LeGO-LOAM~\cite{shan2018lego} specializes LOAM for ground vehicles with limited computation; by first extracting ground points and segmenting remaining points into local clusters, noisy points can be removed and scan matching is performed in a two-step LM optimization.
\textbf{Place Recognition}.
Scan matching alone will introduce drift over time, which can be reduced via loop closure or localization within a map.
To identify potential loop closure scan pairs, some systems extract a compact global descriptor of the scan~\cite{yin2018locnet,kim2018scan} which is then used to search for similar scans via a k-d tree.
Once the top loop candidate is identified, the rigid transformation between two scans is refined using ICP, which requires that the initial pose error relating the two scans is low and that all the points be saved for each scan.
Descriptors of a subset of the scan could instead be extracted~\cite{bosse2013place,cop2018delight} and subsequently matched, but handcrafted features are especially known to be unstable due to the sparsity of the lidar point cloud~\cite{dewan2018learning}.
SegMap~\cite{dube2020segmap} incrementally segments new scans into clusters of a local map to overcome the sparsity of scans and to reduce the number of points required to store, after which descriptors of each cluster are used to find matches, followed by a graph-based geometric verification step.
Other graph-based methods~\cite{zhu2020gosmatch,kong2020semantic} leverage semantic information to create histogram-based global descriptors used for place retrieval, followed by RANSAC~\cite{fischler1981random} geometric verification.
Fern\'andez-Moral et al.~\cite{fernandez2013fast} present a graph-based place recognition system which matches planar patches extracted from RGB-D scans using an interpretation tree to validate various unary and binary constraints between candidate plane matches, followed by geometric verification.
Jiang et al.~\cite{jiang2020lipmatch} extend~\cite{fernandez2013fast} and introduce additional unary and binary constraints.
Some of these constraints are sensitive to viewpoint changes, thus these methods rely on close proximity of the 3D scans as an initialization, precluding their applicability in settings like global localization.
Pathak et al.~\cite{pathak2010online} use 3D plane landmarks in a relaxed graph-based SLAM and perform data association of planes using a series of geometric tests followed by a maximum consensus selection~\cite{pathak2010fast}.
Kaess~\cite{kaess2015simultaneous} extracts 3D planes from RGB-D data and proposes a novel quaternion-based representation of planes for use in SLAM which avoids the issues of overparameterized state vector in nonlinear least-squares estimation.
Geneva et al.~\cite{geneva2018lips} alternatively introduce the ``closest point'' (CP) parameterization of planes for estimation and demonstrate its advantages in lidar-inertial SLAM.
However, \cite{kaess2015simultaneous} and \cite{geneva2018lips} do not provide a means for global data association for detection of loop closures.
Zhou et al.~\cite{zhou2021pi} present an indoor smoothing and mapping system which incorporates a plane-based bundle adjustment.
Loop closures candidates, identified by previous keyframes in close proximity, are verified by first matching plane CP vectors, followed by a plane-based RANSAC.
Pole-based localization methods~\cite{schaefer2019long,kummerle2019accurate,wilbers2019localization} commonly treat poles as 2D points based on their intersection with the ground plane and use point-based registration methods for geometric verification given an initial guess.
Brenner~\cite{brenner2009global} investigates the use of upright poles extracted from lidar scans to construct descriptors for global localization.
Schlichting and Brenner~\cite{schlichting2014localization} extend this descriptor to include upright planes, but effectively treat poles and planes as 2D points and lines.
Cao et al.~\cite{cao2021lidar} perform object-level SLAM using poles, walls, and parked cars as landmarks and propose to use pole positions within a scan to create a scan signature used for detecting loops.
Upon detecting a pair of scans as a loop candidate, clusters of pole-points are matched~\cite{cao2020accurate} and a rigid transformation is estimated in a RANSAC framework.
Our method similarly leverages poles and planes, but is not limited to treating these landmarks as 2D objects and does not make assumptions on the proximity of scans, nor does it require an initial alignment guess.
Instead, we perform global data association by identifying matches based on pairwise geometric consistency between lines and planes.
Thus, our method provides a means for obtaining the transformation between two sets of geometric objects, a key feature for place recognition in object-based maps.
\textbf{Grassmannian Manifold}.
The Grassmannian manifold has been used extensively in subspace learning~\cite{hamm2008grassmann}, especially in face recognition~\cite{huang2015projection} and appearance tracking~\cite{shirazi2014object} tasks in the computer vision community.
Rentmeesters et al.~\cite{rentmeesters2010filtering} develop an observer for subspace tracking on the manifold.
Calinon~\cite{calinon2020gaussians} outlines the use of Riemannian manifolds in robotics and notes the under-representation of the Grassmannian.
\subsection{Preliminaries}\label{sec:preliminaries}
We briefly introduce the Grassmannian manifold.
For a more comprehensive introduction, we refer the reader to~\cite{edelman1998geometry}.
The Grassmannian is the space of $k$-dimensional subspaces of $\mathbb{R}^n$, denoted $\mathrm{Gr}(k,n)$.
For example, $\mathrm{Gr}(1,3)$ represents 3D lines containing the origin.
An element $\mathbb{A}\in\mathrm{Gr}(k,n)$ is represented by an orthonormal matrix $A\in\mathbb{R}^{n\times k}$ whose columns form an orthonormal basis of $\mathbb{A}$.
Note that the choice of $A$ is not unique.
The geodesic distance between two subspaces $\mathbb{A}_1\in\mathrm{Gr}(k_1,n)$ and $\mathbb{A}_2\in\mathrm{Gr}(k_2,n)$ is
\begin{equation}
d_\mathrm{Gr}(\mathbb{A}_1, \mathbb{A}_2) = \left(\sum_{i=1}^{\min(k_1,k_2)} \theta_i^2\right)^{1/2}
\end{equation}
where $\theta_i$ are known as the principal angles~\cite{edelman1998geometry}.
These angles can be computed via the singular value decomposition (SVD) of the corresponding orthonormal matrices of $\mathbb{A}_1$ and $\mathbb{A}_2$,
\begin{equation}
A_1^\top A_2 = U\, \mathrm{diag}(\cos\theta_1, \dots, \cos\theta_k )\, V^\top.
\end{equation}
Note that if the subspaces are of unequal dimension, the number of principal angles is equal to the smaller dimension of the two.
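For reference, this computation is only a few lines of code (a generic sketch, not tied to our implementation):
\begin{verbatim}
import numpy as np

def grassmann_distance(A1, A2):
    """Geodesic distance on Gr(k,n): principal angles from the SVD of
    A1^T A2, where the columns of A1, A2 are orthonormal bases."""
    s = np.linalg.svd(A1.T @ A2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)

# two lines through the origin in R^3, 45 degrees apart
a1 = np.array([[1.0], [0.0], [0.0]])
a2 = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2.0)
print(grassmann_distance(a1, a2))   # pi/4
\end{verbatim}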
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\includeinkscape[pretex=\footnotesize,width=\columnwidth]{graffex2}
\caption{}
\label{fig:graffexample}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1\linewidth]{graffsensitivity}
\caption{}
\label{fig:graffsensitivity}
\end{subfigure}
\caption{(a) Example of a point in $\mathrm{Graff}(0,1)$ being embedded as a line in $\mathrm{Gr}(1,2)$.
The principal angle between these two linear subspaces is $\theta_1$.
(b) When applied directly, $d_\mathrm{Graff}$ is not invariant to global translation $s$.
}
\label{fig:graffexample-both}
\end{figure}
We are specifically interested in affine subspaces of $\mathbb{R}^3$, e.g., lines and planes that are at some distance away from the origin.
In analogy to $\mathrm{Gr}(k,n)$, the set of $k$-dimensional affine subspaces constitute a smooth manifold called the \emph{affine Grassmannian}, denoted $\mathrm{Graff}(k,n)$~\cite{lim2021grassmannian}.
We write an element of this manifold as $\mathbb{Y}=\mathbb{A}+b\in\mathrm{Graff}(k,n)$ with affine coordinates $[A,b]\in\mathbb{R}^{n\times(k+1)}$, where $A\in\mathbb{R}^{n\times k}$ is an orthonormal matrix and $b\in\mathbb{R}^n$ is the displacement of $\mathbb{A}$ from the origin.
We emphasize that $\mathrm{Graff}(k,n)\neq\mathrm{Gr}(k,n)\times\mathbb{R}^n$.
Instead, an element $\mathbb{Y}\in\mathrm{Graff}(k,n)$ is treated as a higher-order subspace via the embedding
\begin{align}
j:\mathrm{Graff}(k,n)&\hookrightarrow\mathrm{Gr}(k+1,n+1), \notag \\
\mathbb{A}+b &\mapsto \mathrm{span}(\mathbb{A}\cup\{b+e_{n+1}\}),
\end{align}
where $e_{n+1} = (0,\dots,0,1)^\top\in\mathbb{R}^{n+1}$ (see \cite[Theorem 1]{lim2021grassmannian}).
Fig.~\ref{fig:graffexample} shows an example of a point in $\mathbb{R}$ being embedded as a line in $\mathbb{R}^2$.
The Stiefel coordinates of $\mathbb{Y}\in\mathrm{Graff}(k,n)$,
\begin{equation}
Y =
\begin{bmatrix}
A & b_0/\sqrt{1+\|b_0\|^2} \\
0 & 1/\sqrt{1+\|b_0\|^2}
\end{bmatrix}\in\mathbb{R}^{(n+1)\times(k+1)},
\end{equation}
allow for the computation of distances between two affine subspaces using the Grassmannian metric,
\begin{equation}
d_\mathrm{Graff}(\mathbb{Y}_1,\mathbb{Y}_2) = d_\mathrm{Gr}(j(\mathbb{Y}_1),j(\mathbb{Y}_2)),
\end{equation}
with principal angles computed via the SVD of $Y_1^\top Y_2$.
The vector $b_0\in\mathbb{R}^n$ is the orthogonal displacement of $\mathbb{A}$, which is the projection of $b$ onto the left nullspace of $A$ s.t. $A^\top b_0=0$.
For convenience, the line $\mathbb{Y}^\ell\in\mathrm{Graff}(1,3)$ may also be represented in point-direction form as $\ell = [A;b]\in\mathbb{R}^6$, and a plane $\mathbb{Y}^\pi\in\mathrm{Graff}(2,3)$ may be represented in Hesse normal form as $\pi = [n;d]\in\mathbb{R}^4$ where $n = \mathrm{ker}\,A^\top$ and $d = \|b_0\|$.
Under a rigid transformation $T=(R,t)\in\mathrm{SE}(3)$, the transformation law of lines and planes can be written
\begin{align}
\ell' &= f_\ell(\ell,R,t) := \begin{bmatrix}RA&Rb+t\end{bmatrix}^\top \\
\pi' &= f_\pi(\pi,R,t) := T^{-\top}\pi.
\end{align}
\section{Introduction}\label{sec:intro}
Geometric verification provides a critical line of defense against incorrect loop closure, which can lead to disastrous map distortion and estimation error.
Place recognition modules attempt to suggest previously explored areas that are similar to current local sensor observations, but require a geometric verification step to confirm the loop closure hypothesis and to provide a geometric constraint between the pair of associated poses.
These constraints are extremely valuable in reducing odometric drift present in simultaneous localization and mapping (SLAM) systems~\cite{cadena2016past}, so long as they are correct.
The core challenge of place recognition and geometric verification is associating current local observations with previously processed observations without relying on an initial guess.
This challenge is known as global data association~\cite{durrant2006simultaneous,bailey2006simultaneous} and is at the heart of many perception problems, such as extrinsic calibration, multi-robot map merging, loop closure detection, and global (re)localization.
In the visual place recognition~\cite{lowry2015visual} setting, image features are commonly used in bag-of-words techniques~\cite{galvez2012bags} for loop candidate retrieval and geometric verification.
However, appearance-based methods are sensitive to illumination, weather, and viewpoint changes and can fail to detect loop closures in these settings.
Alternatively, geometric-based methods~\cite{yin2018locnet,kim2018scan,chen2021auro} utilizing 3D lidar sensors are more resilient to these changes, but come at the expense of processing and storing hundreds of thousands of point measurements per second.
To maintain the benefits of geometric data, but to reduce the storage and computational costs of large point maps, some lidar odometry and SLAM systems use geometric primitives like lines and planes instead of points~\cite{brenner2009global,pathak2010online,schaefer2019long,kummerle2019accurate,cao2021lidar}.
In addition to providing lightweight maps with high-level semantic information, navigating using explicit planes extracted from the environment provides extra information over points and has lead to improved, low-drift odometry~\cite{kaess2015simultaneous,hsiao2017keyframe,geneva2018lips}.
In fact, even utilizing \emph{points} (momentarily ignoring the storage costs) that exhibit strong local planarity have allowed for high-quality lidar-based odometry systems~\cite{zhang2014loam,shan2018lego}.
While existing works either use lines/poles or planes (often in 2D) for global data association, a remaining challenge is performing global data association using 3D lines and planes simultaneously.
We present an efficient and robust method for global data association and geometric verification amenable to any combination of points, lines, and planes.
\begin{figure}[t]
\centering
\includegraphics[trim=1cm 1cm 1cm 2cm, clip, width=\columnwidth]{fig1}
\caption{Successful alignment between the lidar scans of a loop closure hypothesis.
Sensor origins are denoted by the green and yellow cars, which are \SI{18}{\m} apart.
Poles and planes extracted from each lidar scan are represented as 3D affine Grassmannian elements.
Using the associated Riemannian metric allows for the evaluation of geometric consistency between object pairs, even between a pole and a plane.
Object correspondences with high pairwise consistency are identified using our graph-based global data association algorithm and then used to estimate the rigid transformation between the two frames, yielding an alignment error of \SI{4}{\cm} and \SI{0.3}{\deg}.
}
\label{fig:teaser-image}
\end{figure}
A key novelty of our approach is in the representation of line and plane landmarks as elements of a Grassmannian manifold, which is the space of all linear subspaces.
In particular, we utilize the \emph{affine} Grassmannian manifold, which allows for the representation of affine subspaces (i.e., linear subspaces not necessarily containing the origin).
By leveraging this manifold representation, distances between simple geometric landmarks can easily be defined in a principled manner.
We use these distances between pairwise landmarks in each lidar scan to build a consistency graph, enabling the use of our robust, graph-theoretic global data association framework~\cite{lusk2021clipper} to find the largest set of landmark associations that are geometrically consistent.
Then, the rigid transformation between a pair of candidate loop closure scans can be estimated by solving a line and plane registration problem with known correspondences in the least-squares sense.
Experimental evaluation of loop closure verification on the KITTI dataset~\cite{geiger2012we} shows that our method surpasses the state-of-the-art in global data association with geometric primitives. %
Compared to pole-only approaches and plane-only approaches, our method yields a \SI{71}{\percent} and \SI{325}{\percent} increase, respectively, in loop closure recall at \SI{100}{\percent} precision.
In summary, our main contributions are:
\begin{itemize}
\item the introduction of the affine Grassmannian representation of pole and plane objects for global data association, leading to geometric verification with 3D landmarks free of requirements of an initial alignment;
\item a least squares estimator for rigid transformation using lines and planes instead of points, leading to a more accurate estimate for rotation and translation;
\item evaluation of loop closure geometric verification on four sequences of the KITTI~\cite{geiger2012we} dataset, showing superior recall and accuracy over the state-of-the-art.
\end{itemize}
We emphasize that this is the first work using the affine Grassmannian manifold for data association, which provides a unifying and principled framework for associating points, lines, planes (or higher dimensional linear objects) in robotic loop closure and geometric verification problems.
\section{Experiments}\label{sec:experiments}
We evaluate our global data association method using candidate loop closure pairs from KITTI~\cite{geiger2012we} sequences 00, 02, 05, and 08.
We compare our method, called GraffMatch, with a pole-only method~\cite{cao2021lidar} based on 2D cluster matching~\cite{cao2020accurate}, and a plane-only method~\cite{zhou2021pi} that attempts to match planes via nearest neighbor search on CP parameterization~\cite{geneva2018lips} followed by RANSAC~\cite{fischler1981random}.
We adapt the pole-only method~\cite{cao2021lidar} to 3D and denote it as PoleMatch, while the plane-only method is denoted PlaneMatch.
The algorithms are implemented in MATLAB\footnote{\href{https://github.com/mit-acl/clipper}{https://github.com/mit-acl/clipper}} and executed on an i9-7920X CPU with 64 GB RAM.
\subsection{Dataset}\label{sec:dataset}
Each sequence contains a trajectory of ground truth poses $T_i = (R_i,t_i)\in\mathrm{SE}(3)$.
We generate potential loop candidates by sampling $K$ keyframe poses $\bar{T}_k,\forall\,k\in[K]$ along the trajectory with a stride of \SI{20}{\m}, e.g., see Fig.~\ref{fig:kitti-traj-kfs}.
Let the set of all poses $T_i$ leading up to keyframe $\bar{T}_k$ be denoted $\mathcal{T}_k$.
The set of previously visited poses near keyframe $\bar{T}_k$ is then
\begin{equation*}
\mathcal{X}_k = \{ T_i\;\colon \|t_k-\bar{t}_i\| < r,\;\forall\,T_i\in\mathcal{T}_{k-1} \},
\end{equation*}
where we have set $r=\SI{20}{\m}$ to prevent selecting a loop pair without overlapping scans.
From each $\mathcal{X}_k\ne\emptyset$, three loop candidates are generated with $\bar{T}_k$ based on straight-line distance.
We used distances of \SI{0}{\m}, \SI{8}{\m}, and \SI{16}{\m}, for easy, medium, and hard difficulty, respectively.
These three cases allow us to evaluate each method's sensitivity to noise, baseline, and partial overlap.
Some keyframes may not have a loop candidate at a specified distance, resulting in an unequal number of easy, medium, and hard cases.
A histogram of these distances is shown in Fig.~\ref{fig:candidate-dists} for all KITTI sequences.
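A sketch of this sampling procedure is given below; the exact bookkeeping (e.g., how the candidate nearest a target distance is chosen) is our paraphrase of the description above rather than the verbatim evaluation code.
\begin{verbatim}
import numpy as np

def loop_candidates(pos, stride=20.0, r=20.0, targets=(0.0, 8.0, 16.0)):
    """pos: (N,3) ground-truth translations t_i along the trajectory.
    Keyframes are sampled every `stride` meters of path length; for
    each keyframe, one candidate per target distance is drawn from the
    poses within radius r visited before the previous keyframe."""
    arc = np.concatenate([[0.0],
        np.cumsum(np.linalg.norm(np.diff(pos, axis=0), axis=1))])
    kf = [int(np.searchsorted(arc, s))
          for s in np.arange(0.0, arc[-1], stride)]
    pairs = []
    for j in range(1, len(kf)):
        past = pos[:kf[j - 1]]              # poses up to keyframe k-1
        d = np.linalg.norm(past - pos[kf[j]], axis=1)
        near = np.where(d < r)[0]
        if near.size:                       # easy / medium / hard
            pairs += [(kf[j], near[np.argmin(np.abs(d[near] - t))])
                      for t in targets]
    return pairs
\end{verbatim}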
Pole and plane features are extracted from each loop candidate lidar scan and are used as input for each algorithm for global data association.
Poles are extracted as lines by leveraging the SemanticKITTI~\cite{behley2019semantickitti} dataset for simplicity.
Given points corresponding to the pole or trunk classes, we use DBSCAN~\cite{ester1996density} implemented in Open3D~\cite{Zhou2018} to generate clusters, from which PCA~\cite{shlens2014tutorial} is used to estimate a line.
Planar patches are extracted from the lidar scan using our implementation\footnote{\href{https://github.com/plusk01/pointcloud-plane-segmentation}{https://github.com/plusk01/pointcloud-plane-segmentation}} of~\cite{araujo2020robust}.
Because planar patches are bounded, there may be multiple planar patches that correspond to the same infinite plane.
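The line-fitting step is ordinary PCA on each cluster; a minimal version (assuming the clusters have already been produced by DBSCAN as above) is:
\begin{verbatim}
import numpy as np

def fit_line_pca(points):
    """Fit an infinite 3D line to a cluster of pole points: b is the
    centroid and A the unit direction of largest variance, i.e. the
    point-direction form l = [A; b] used above."""
    b = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - b, full_matrices=False)
    A = Vt[0]                     # already unit norm from the SVD
    return A, b
\end{verbatim}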
\begin{figure}[t]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\columnwidth]{kitti-pdists}
\caption{
Pairwise object distances from KITTI 00, 02, 05, and 08.
The mean is \SI[separate-uncertainty=true,multi-part-units=single]{27\pm16}{\meter}.
Using this data, we choose the scaling parameter as $\rho=40$.
}
\vskip0.1in
\label{fig:kitti-pdists}
\end{figure}
\subsection{Selection of Scaling Parameter}\label{sec:exp-scaling}
The scaling parameter $\rho$ (see Section~\ref{sec:consistency}) is chosen so that the pairwise affine Grassmannian distance lies in the linear regime and is therefore more sensitive when scoring consistencies.
The Velodyne HDL-64E used in KITTI has a range of \SIrange{50}{120}{\meter}, with an average point range in the KITTI dataset of approximately \SI{80}{\meter}.
In terms of pairwise object distances, we find that the average Euclidean distance is \SI[separate-uncertainty=true,multi-part-units=single]{27\pm16}{\meter}, as shown in Fig.~\ref{fig:kitti-pdists}.
Therefore, we select $\rho=40$ so that relative Euclidean distances of \SI{80}{\meter} will be scaled to \SI{2}{\meter}, which is at the end of the linear regime (see Fig.~\ref{fig:graffsensitivity}).
\begin{table}[!t] %
\centering
\caption{
Recall at \SI{100}{\percent} precision.
Divided into easy (E), medium (M), hard (H) cases based on straight-line distance between loop candidate poses.
}
\setlength{\tabcolsep}{3.3pt}
\ra{1.2}
\begin{tabular}{c c c c c c c c c c c c}
\toprule
Seq. & \multicolumn{3}{c}{GraffMatch (Ours)} && \multicolumn{3}{c}{PoleMatch} && \multicolumn{3}{c}{PlaneMatch} \\
\cmidrule{2-4}\cmidrule{6-8}\cmidrule{10-12}
& E & M & H && E & M & H && E & M & H \\
\toprule
$00$ & \SI{91}{\percent} & \SI{78}{\percent} & \SI{46}{\percent} && \SI{69}{\percent} & \SI{43}{\percent} & \SI{41}{\percent} && \SI{66}{\percent} & \SI{3}{\percent} & \SI{3}{\percent} \\
$02$ & \SI{100}{\percent} & \SI{78}{\percent} & \SI{50}{\percent} && \SI{44}{\percent} & \SI{33}{\percent} & \SI{17}{\percent} && \SI{33}{\percent} & \SI{11}{\percent} & \SI{0}{\percent} \\
$05$ & \SI{95}{\percent} & \SI{68}{\percent} & \SI{35}{\percent} && \SI{42}{\percent} & \SI{41}{\percent} & \SI{18}{\percent} && \SI{42}{\percent} & \SI{14}{\percent} & \SI{6}{\percent} \\
$08$ & \SI{100}{\percent} & \SI{79}{\percent} & \SI{78}{\percent} && \SI{55}{\percent} & \SI{32}{\percent} & \SI{44}{\percent} && \SI{0}{\percent} & \SI{0}{\percent} & \SI{0}{\percent} \\
\midrule
all & \SI{94}{\percent} & \SI{76}{\percent} & \SI{48}{\percent} && \SI{56}{\percent} & \SI{39}{\percent} & \SI{33}{\percent} && \SI{45}{\percent} & \SI{6}{\percent} & \SI{3}{\percent} \\
\bottomrule
\end{tabular}
\label{tbl:recall}
\end{table}
\begin{table}[t] %
\centering
\caption{
Median translation and rotation alignment error of all successful loop closures, divided into easy (E), medium (M), hard (H) cases.
}
\setlength{\tabcolsep}{3.1pt}
\ra{1.2}
\begin{tabular}{c c c c c c c c c c c c}
\toprule
& \multicolumn{3}{c}{GraffMatch (Ours)} && \multicolumn{3}{c}{PoleMatch} && \multicolumn{3}{c}{PlaneMatch} \\
\cmidrule{2-4}\cmidrule{6-8}\cmidrule{10-12}
& E & M & H && E & M & H && E & M & H \\
\toprule
$\tilde{t}_\text{err}$ [cm] & $9.1$ & $17.3$ & $25.7$ && $10.4$ & $23.2$ & $16.0$ && $11.8$ & $17.3$ & $25.1$ \\
$\tilde{\theta}_\text{err}$ [deg] & $0.57$ & $0.92$ & $1.32$ && $0.74$ & $1.6$ & $1.72$ && $0.97$ & $1.78$ & $2.58$ \\
\bottomrule
\end{tabular}
\label{tbl:alignment-error}
\end{table}
\subsection{Loop Closure Results}
Global data association is attempted on each loop closure candidate, after which line and plane matches are used to estimate a rigid transformation $\hat{T}^i_j$ of scan $j$ w.r.t.\ scan $i$.
The quality of loop closure is evaluated by comparing $\hat{T}^i_j$ with the ground truth $T^i_j$ and calculating the rotation and translation error.
If the rotation error is less than \SI{5}{\degree} and the translation error is less than \SI{1}{\meter}, the loop closure is accepted.
If an algorithm returns fewer than 3 matches, the loop closure attempt is considered failed.
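For concreteness, this acceptance test can be implemented in a few lines; the sketch below uses the residual-rotation angle and the translation norm, one common convention~(the exact error metric is an implementation detail not fixed above):
\begin{verbatim}
import numpy as np

def loop_closure_accepted(T_est, T_gt, rot_tol_deg=5.0, trans_tol_m=1.0):
    """Accept a loop closure given 4x4 homogeneous transforms."""
    T_err = np.linalg.inv(T_gt) @ T_est            # residual transform
    R_err, t_err = T_err[:3, :3], T_err[:3, 3]
    # Angle of the residual rotation (clipped for numerical safety).
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_a))
    return rot_err_deg < rot_tol_deg and np.linalg.norm(t_err) < trans_tol_m
\end{verbatim}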
The parameters used for GraffMatch (see \eqref{eq:consistency}) are $\epsilon=0.2$ and $\sigma=0.02$.
Table~\ref{tbl:recall} lists the recall at \SI{100}{\percent} precision for each tested KITTI sequence.
As expected, utilizing both poles and planes in GraffMatch produces a higher number of successful loop closures.
The number of successful PoleMatch loop closures is low due to too few poles or variation of extracted poles across lidar scans (i.e., in a single scan, few lidar point returns may exist for a pole-like object, leading to a noisy centroid).
PlaneMatch also scores low in general and even fails to successfully match and align planes in all of sequence 08, where the car drives through previously visited streets in the opposite direction.
Because the CP parameterization heavily depends on the origin and orientation of the lidar sensor frame, successful CP plane matching requires a very good initialization, as in the easy case where PlaneMatch performs at its best.
This requirement can be problematic in the presence of odometry-only measurements, as drift could prevent loop closure from ever succeeding.
\begin{figure}[t]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\columnwidth]{kitti-err-2dhist}
\caption{
Alignment error for loop closure pairs, visualized as a grid of likelihood-normalized density plots.
From left to right, the grid columns correspond to GraffMatch (ours), PoleMatch, and PlaneMatch.
From top to bottom, the grid rows correspond to the easy, medium, and hard cases.
For each case, GraffMatch achieves the highest recall, indicated by the high density of points in the low-translation, low-rotation error regime.
PoleMatch fails to generate enough pole correspondences in many loop closures due to the scarcity of poles; in these cases, the error is set to a high value (upper-right corner).
PlaneMatch performs at its best in the easy case when lidar scans have very close initial poses, but breaks down as the baseline distance increases.
}
\label{fig:kitti-alignment-err}
\end{figure}
Fig.~\ref{fig:kitti-alignment-err} shows the alignment error of loop candidates from all sequences as a $3\times3$ grid of density heatmaps, where columns correspond to algorithms and rows (from top to bottom) correspond to easy, medium, and hard cases.
In many cases of PoleMatch and in some cases of PlaneMatch, fewer than 3 matches were returned, so the alignment error is set to a high value, causing increased density in the upper-right corner.
GraffMatch is the only data association method that consistently scores in the low-translation, low-rotation error regime.
The median alignment error for successful loop closures is listed in Table~\ref{tbl:alignment-error}.
As discussed in Section~\ref{sec:consistency}, the distance function used to score consistency in our graph-theoretic framework is an important consideration.
We choose $d_\mathrm{Graff}$ because it allows us to score the consistency of pairs of affine subspaces with arbitrary dimension in a principled manner.
Other distance functions might only consider the distance or angle between objects, for example.
Fig.~\ref{fig:recall-vs-distance} shows recall at \SI{100}{\percent} precision and compares our choice of $d_\mathrm{Graff}$ with four other possible distances.
The distances $d_\mathrm{Gr}$ and $d_{\pi\ell}$ disregard distance information, treating lines and planes as linear subspaces containing the origin, or naively using the inner product between a plane's normal vector and a line's direction vector, respectively.
The standard Euclidean distance $d_{\mathbb{R}^n}$ disregards subspace orientation and instead treats lines and planes as bounded, using their centroids as measurements.
As discussed previously in this section, using centroids requires that points be segmented into the same bounded lines and planes in every view, and thus performance will suffer as the baseline between loop pairs increases.
Naively combining orientation and distance information in $d_{\mathrm{Gr}\times\mathbb{R}^n}$ leverages all available information, but requires the weighting function $f$ (see Section~\ref{sec:consistency}) to take on an ad-hoc mixture of kernels with additional parameters, e.g., $f(c_r,c_\theta):=\exp(-c_r^2/\sigma_r^2)\exp(-c_\theta^2/\sigma_\theta^2)$.
Using $d_\mathrm{Graff}$ leads to a simple method of calculating distances on the manifold of affine subspaces and leads to higher recall.
\begin{figure}[t]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\columnwidth]{recallvsdgraff}
\caption{
Recall at \SI{100}{\percent} precision of loop candidate alignment using different distance functions in our data association framework.
The shifted affine Grassmannian distance $d_\mathrm{Graff}$, which combines line and plane `direction' with distance, provides the highest recall.
Using centroid information ($d_\mathbb{R}^n$, $d_{\mathrm{Gr}\times\mathbb{R}^n}$) also gives good results, but depends on accurate line and plane segmentation.
Using only directional information ($d_\mathrm{Gr}$, $d_{\pi\ell}$) performs poorly due to many objects with similar plane normals and line directions.
}
\label{fig:recall-vs-distance}
\end{figure}
Timing results for GraffMatch, PoleMatch, and PlaneMatch are respectively \SI[separate-uncertainty=true,multi-part-units=single]{0.076\pm0.102}{\second}, \SI[separate-uncertainty=true,multi-part-units=single]{0.005\pm0.004}{\second}, and \SI[separate-uncertainty=true,multi-part-units=single]{0.011\pm0.003}{\second}.
Thus, GraffMatch is suitable for online operation in loop closure tasks and is a robust alternative to PoleMatch and PlaneMatch, both of which rely on assumptions to speed up their execution, but limit their accuracy.
Specifically, PoleMatch treats infinite lines as centroid points and PlaneMatch requires an initial frame alignment guess.
In our experiments, there were an average of $7$ poles and $23$ planar patches extracted per frame, resulting in an average of $650$ initial correspondences to be processed for geometric consistency.
Execution time could be reduced by leveraging additional object information to immediately discard initial matches instead of allowing each object to be potentially associated with every other object (e.g., a plane with large area is unlikely to be matched to a small plane).
\subsection{Proof of Invariance}
\begin{repprop}{prop:invariance}
\input{paper/prop_invariance}
\end{repprop}
\begin{proof}
The subspace distance between $\mathbb{Y}_1$ and $\mathbb{Y}_2$ is $d_\mathrm{Graff}(\mathbb{Y}_1,\mathbb{Y}_2) = \|\Theta\|$, where $\Theta$ is a vector of $k=\min(k_1,k_2)$ principal angles.
These angles can be calculated via the singular value decomposition of $Y_1^\top Y_2$, the inner product of the Stiefel coordinates of $\mathbb{Y}_1,\mathbb{Y}_2$.
Without loss of generality, assume $\mathbb{Y}_1,\mathbb{Y}_2$ are shifted s.t. $b_{1}=0$.
Then,
\begin{equation}
Y_1^\top Y_2 =
\begin{bmatrix}
A_1^\top A_2 & \tfrac{1}{\eta_2}A_1^\top b_{02} \\
0 & \tfrac{1}{\eta_1\eta_2}
\end{bmatrix},
\end{equation}
where $\eta_i=\sqrt{\|b_{0i}\|^2 + 1}$.
Given $T=(R,t)\in\mathrm{SE}(3)$, let $\bar{\mathbb{Y}}_1,\bar{\mathbb{Y}}_2$ be the rotated and translated versions of $\mathbb{Y}_1,\mathbb{Y}_2$, respectively, with affine coordinates
\begin{equation}
\mathbb{Y}_i:[A_i,b_i] \xrightarrow{\quad T\quad} \bar{\mathbb{Y}}_i:[RA_i, Rb_i + t].
\end{equation}
Shifting $\bar{\mathbb{Y}}_1,\bar{\mathbb{Y}}_2$ by $-\bar{b}_1=-(Rb_1+t)$ leads to the affine coordinates $\bar{\mathbb{Y}}_1:[RA_1, 0]$ and
\begin{equation}
\bar{\mathbb{Y}}_2:[RA_2, Rb_2+t-(Rb_1+t)] = [RA_2, Rb_2],
\end{equation}
so that
\begin{equation}
\bar{Y}_1^\top \bar{Y}_2 =
\begin{bmatrix}
A_1^\top A_2 & \tfrac{1}{\eta_2}A_1^\top b_{02} \\
0 & \tfrac{1}{\eta_1\eta_2}
\end{bmatrix},
\end{equation}
which is free of $R$ and $t$~(the rotation cancels since $R^\top R = I$, and the translation was eliminated by the shift) and equal to $Y_1^\top Y_2$, as desired.
\end{proof}
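The proposition is also easy to verify numerically. The sketch below builds the Stiefel coordinates used in the proof, computes the shifted distance from principal angles, and checks invariance under a random rigid transform~(a minimal sketch; \texttt{scipy} is assumed available and the helper names are ours):
\begin{verbatim}
import numpy as np
from scipy.linalg import subspace_angles

def stiefel(A, b):
    """Stiefel coordinates of the affine subspace span(A)+b (as in the proof)."""
    Q, _ = np.linalg.qr(A)                 # orthonormal direction basis
    b0 = b - Q @ (Q.T @ b)                 # displacement orthogonal to span(A)
    eta = np.sqrt(b0 @ b0 + 1.0)
    top = np.hstack([Q, (b0 / eta)[:, None]])
    bot = np.hstack([np.zeros(Q.shape[1]), [1.0 / eta]])
    return np.vstack([top, bot])

def d_graff_shifted(A1, b1, A2, b2):
    """Shifted affine Grassmannian distance: translate so that b1 = 0 first."""
    Y1, Y2 = stiefel(A1, b1 - b1), stiefel(A2, b2 - b1)
    return np.linalg.norm(subspace_angles(Y1, Y2))

rng = np.random.default_rng(0)
A1, b1 = rng.standard_normal((3, 1)), rng.standard_normal(3)   # a line
A2, b2 = rng.standard_normal((3, 2)), rng.standard_normal(3)   # a plane
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R *= np.sign(np.linalg.det(R))             # a proper rotation
t = rng.standard_normal(3)
d0 = d_graff_shifted(A1, b1, A2, b2)
d1 = d_graff_shifted(R @ A1, R @ b1 + t, R @ A2, R @ b2 + t)
print(np.isclose(d0, d1))                  # True, as the proof predicts
\end{verbatim}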
\section{Method}\label{sec:method}
Given a candidate pair of lidar scans produced by, e.g., matching global scan descriptors~\cite{zhu2020gosmatch} or comparison with past keyframes~\cite{jiang2020lipmatch}, we seek to geometrically verify the loop pair and produce a relative transformation between the two sensor poses.
In the following discussion, we assume that scan $i$ has already had $l_i$ lines and $p_i$ planes extracted, and we refer to them collectively as objects $s_{i,a}\in{\mathcal{S}_i = \{ \mathbb{Y}^\ell_1,\dots,\mathbb{Y}^\ell_{l_i}, \mathbb{Y}^\pi_1,\dots,\mathbb{Y}^\pi_{p_i}\}}$.
Our method consists of the following steps: (i) constructing a consistency graph based on pairwise object distances in each scan, (ii) identifying object correspondences via the densest fully-connected subgraph in the consistency graph, and (iii) estimating a rigid transformation based on object correspondences.
\begin{figure}[t]
\centering
\includeinkscape[pretex=\footnotesize,width=\columnwidth]{consistencygraph}
\caption{
Construction of a consistency graph.
Using $d_\mathrm{Graff}$, the distance between a line and a plane in scan $\mathcal{S}_i$ (\tikzcircle[scan1blue,fill=scan1blue]{1.5pt}) is compared to the distance between the two corresponding objects in $\mathcal{S}_j$ (\tikzcircle[scan2red,fill=scan2red]{1.5pt}).
The consistency of these two distances is evaluated using \eqref{eq:consistency} and the edge $(u_1,u_2)$ is weighted accordingly.
}
\label{fig:consistencygraph}
\end{figure}
\subsection{Consistency Graph Construction}\label{sec:consistency}
A consistency graph for two scans $\mathcal{S}_i$, $\mathcal{S}_j$ is an undirected weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},w)$ with potential object correspondences $s_{i,a}\leftrightarrow s_{j,b}$ as vertices, edges between consistent correspondences, and a weighting function $w:\mathcal{E}\to[0,1]$ that evaluates the strength of consistency.
A pair of correspondences $u_1,u_2\in\mathcal{V}$ is consistent if the distance between the underlying objects $s_{i,a}\in\mathcal{S}_i,s_{j,b}\in\mathcal{S}_j$ satisfies
\begin{equation}\label{eq:consistency}
c_{u_1,u_2} = |d(s_{i,u_1^a},\,s_{i,u_2^a}) - d(s_{j,u_1^b},\,s_{j,u_2^b})| < \epsilon,
\end{equation}
for some distance function $d$.
Note that the two distances in \eqref{eq:consistency} are between objects \emph{internal} to scans $\mathcal{S}_i$ and $\mathcal{S}_j$, respectively.
If a pair of correspondences is deemed consistent, the corresponding edge is assigned the weight $w(u_1,u_2):=f(c_{u_1,u_2})$, for some choice of ${f:\mathbb{R}_+\to[0,1]}$ that scores very consistent pairs close to 1.
In this paper, we choose $f(c):=\exp(-c^2/2\sigma^2)$ for simplicity, though other appropriate functions could be used.
Given a consistency graph, correspondences are selected that maximize consistency, further explained in Section~\ref{sec:clipper}.
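As an illustration, the construction of the weighted adjacency matrix $M$ used in Section~\ref{sec:clipper} can be sketched as follows~(the object representation with a \texttt{dim} attribute is hypothetical; $\epsilon$ and $\sigma$ take the values used in our experiments):
\begin{verbatim}
import numpy as np

def consistency_matrix(objs_i, objs_j, d, eps=0.2, sigma=0.02):
    """Weighted adjacency matrix M of the consistency graph for two scans.

    objs_i, objs_j: lines/planes from scans S_i and S_j; d: an invariant
    pairwise distance (e.g., the shifted, scaled affine Grassmannian
    distance). Correspondences are restricted to equal-dimension objects.
    """
    corrs = [(a, b) for a in range(len(objs_i)) for b in range(len(objs_j))
             if objs_i[a].dim == objs_j[b].dim]
    m = len(corrs)
    M = np.eye(m)                              # ones on the diagonal
    for p in range(m):
        a1, b1 = corrs[p]
        for q in range(p + 1, m):
            a2, b2 = corrs[q]
            c = abs(d(objs_i[a1], objs_i[a2]) - d(objs_j[b1], objs_j[b2]))
            if c < eps:                        # consistency test
                M[p, q] = M[q, p] = np.exp(-c**2 / (2 * sigma**2))
    return M, corrs
\end{verbatim}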
The distance function $d$ must be carefully chosen to ensure accuracy of graph-based data association.
In particular, we desire \eqref{eq:consistency} to hold when $s_{j,u_1^b},\,s_{j,u_2^b}$ are the transformed versions of $s_{i,u_1^a},\,s_{i,u_2^a}$, respectively.
This property is called invariance and leads to subgraphs of the consistency graph that indicate a set of object matches in a loop pair.
\begin{definition}\label{defn:invariance}
A distance $d:X\times X\to\mathbb{R}$ is \emph{invariant} if $d(x_1,x_2) = d(x_1',x_2')$, where $x_1',x_2'\in X$ are the images of $x_1,x_2\in X$ under a transformation $T\in\mathrm{SE}(3)$.
\end{definition}
We establish the invariance of the metric $d_\mathrm{Graff}$ to rotation and, under careful application, translation.
\begin{prop}\label{prop:invariance}
\input{paper/prop_invariance}
\end{prop}
\begin{proof}
See Appendix A.
\end{proof}
The intuition of Proposition~\ref{prop:invariance} can be understood from Fig.~\ref{fig:graffexample-both}.
As $\mathbb{Y}_1$ and $\mathbb{Y}_2$ are together translated further from the origin, the principal angle between $j(\mathbb{Y}_1)$ and $j(\mathbb{Y}_2)$ decreases to zero in the limit.
However, the distance between the affine components of $\mathbb{Y}_1$ and $\mathbb{Y}_2$ remains the same, no matter the translation.
By first shifting the affine components, we remove the dependence of the absolute translation in the computation of the principal angle, while maintaining the dependence on the \emph{relative} translation between $\mathbb{Y}_1$ and $\mathbb{Y}_2$.
A remaining challenge is to address the insensitivity of $d_\mathrm{Graff}$ to the Euclidean distance between affine components of objects.
The yellow curve ($s=0$) in Fig.~\ref{fig:graffsensitivity} represents the principal angle between $\mathbb{Y}_1,\mathbb{Y}_2\in\mathrm{Graff}(0,1)$ after shifting them as per Proposition~\ref{prop:invariance}, as a function of the Euclidean distance between $\mathbb{Y}_1$ and $\mathbb{Y}_2$.
Observe that after a distance of approximately \SI{2}{\meter}, the curve flattens significantly as it asymptotes towards $\tfrac{\pi}{2}$.
This nonlinearity leads to poor discrimination between pairs of correspondences whose internal objects are far apart in the Euclidean sense.
To combat this when calculating pairwise affine Grassmannian distances, we first scale the affine component of each $\mathbb{Y}_i$ by a constant parameter $\rho$ so that the affine coordinates of $\mathbb{Y}_i$ become $[A_i,b_i/\rho]$.
The choice of $\rho$ depends on the average Euclidean distance between objects in the environment and its effect is to bring principal angles into the linear regime.
The selection of $\rho$ is discussed further in Section~\ref{sec:exp-scaling}.
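In code, the scaling amounts to dividing the affine components by $\rho$ before evaluating the shifted distance~(a two-line sketch reusing the hypothetical \texttt{d\_graff\_shifted} helper from the invariance check above):
\begin{verbatim}
def d_graff_scaled(A1, b1, A2, b2, rho=40.0):
    """Shifted d_Graff with affine components scaled by rho."""
    return d_graff_shifted(A1, b1 / rho, A2, b2 / rho)
\end{verbatim}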
With Proposition~\ref{prop:invariance} and the scaling parameter $\rho$ in hand, a consistency graph between objects in $\mathcal{S}_i$ and $\mathcal{S}_j$ can be constructed.
We establish initial correspondences between each object in $\mathcal{S}_i$ and each object of $\mathcal{S}_j$ so long as the objects are of the same dimension $k$ (i.e., we do not consider lines associated to planes).
Given additional information such as color, scan intensity, planar patch area, or pole radius, this initial set of correspondences could be refined, but would rely on accurately segmenting lines and planes across potentially wide baselines.
While we restrict object correspondences to be of the same dimension, the machinery we have developed allows for computing the consistency of two correspondences whose internal pair of objects have differing dimension.
Evaluating the consistency of a correspondence pair in our affine Grassmannian framework is illustrated in Fig.~\ref{fig:consistencygraph}.
\subsection{Graph-based Global Data Association}\label{sec:clipper}
Given a consistency graph, the task of matching objects from two scans is reduced to identifying the densest clique of consistent correspondences, formalized as the problem
\begin{gather}\label{eq:densestclique}
\begin{array}{ll}
\underset{u \in \{0,1\}^m}{\text{maximize}} & \dfrac{u^\top M \, u}{u^\top u}
\\
\text{subject to} & u_i \, u_j = 0 \quad \text{if}~ M(i,j)=0, ~ \forall_{i,j},
\end{array}
\end{gather}
where $M\in[0,1]^{m\times m}$ is the weighted adjacency matrix (i.e., from $w$ as defined in Section~\ref{sec:consistency}) with ones on the diagonal, and ${u\in\{0,1\}^m}$ indicates a consistent set of correspondences.
Note that we choose to maximize the \emph{density} of correspondences rather than the cardinality (e.g., maximum clique) as our previous work has found this objective to produce more accurate results~\cite{lusk2021clipper}.
Problem~\eqref{eq:densestclique} is NP-hard; we therefore solve a particular relaxation which yields high-accuracy solutions via our efficient CLIPPER algorithm (see~\cite{lusk2021clipper} for more details).
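To convey the structure of Problem~\eqref{eq:densestclique}, the sketch below ranks correspondences by the leading eigenvector of $M$ and greedily enforces the constraint; we stress that this is a crude stand-in, \emph{not} the CLIPPER algorithm of~\cite{lusk2021clipper}:
\begin{verbatim}
import numpy as np

def densest_consistent_set(M, corrs):
    """Greedy spectral heuristic for a dense consistent set."""
    _, v = np.linalg.eigh(M)
    order = np.argsort(-np.abs(v[:, -1]))      # leading eigenvector
    keep = []
    for idx in order:
        if all(M[idx, k] > 0 for k in keep):   # u_i u_j = 0 if M(i,j) = 0
            keep.append(idx)
    return [corrs[k] for k in keep]
\end{verbatim}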
\subsection{Transformation Estimation}
Given pairwise correspondences between objects in $\mathcal{S}_i$ and $\mathcal{S}_j$, consider finding the best rigid transformation to simultaneously align matched lines and planes by solving the optimization problem
\begin{equation}
\min_{\substack{R\in\mathrm{SO}(3),\\ t\in\mathbb{R}^3}}
\sum_{i=1}^{p} \|\pi_i' - f_\pi(\pi_i,R,t)\|^2
+
\sum_{i=1}^{l} \|\ell_i' - f_\ell(\ell_i,R,t)\|^2.
\end{equation}
This problem can be solved in closed-form by first solving for the rotation via SVD, then solving for the translation via least squares, similar to Arun's method for point cloud registration~\cite{arun1987least}.
The benefit of using the line and plane geometry directly, as opposed to a point parameterization, is twofold.
First, it allows the use of the full information present in the infinite plane or line, i.e., distance from origin as well as orientation.
Second, it does not require assumptions about where the ``centroid'' of the plane or line is, which is undefined for infinite planes and lines and requires consistent segmentation of objects across scans.
Together, these benefits lead to a more accurate rigid transformation estimate when aligning line and plane features.
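A minimal decoupled solver in the spirit of Arun's method might look as follows. We assume a hypothetical $(n,d)$ plane parameterization with $n^\top x = d$, consistently oriented direction vectors, and at least three matched planes with independent normals~(lines constrain only the rotation in this sketch):
\begin{verbatim}
import numpy as np

def align_lines_planes(dirs, dirs_p, normals, normals_p, ds, ds_p):
    """Rotation from line directions and plane normals, then translation
    from plane offsets; primed quantities belong to the target scan."""
    A = np.vstack([dirs, normals])             # source directions (rows)
    B = np.vstack([dirs_p, normals_p])         # target directions (rows)
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ D @ Vt).T                         # proper rotation, b ~ R a
    # A plane (n, d) maps to (Rn, d + (Rn)^T t): solve the stacked system.
    N = (R @ np.asarray(normals).T).T
    t, *_ = np.linalg.lstsq(N, np.asarray(ds_p) - np.asarray(ds), rcond=None)
    return R, t
\end{verbatim}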
\section{Conclusion}\label{sec:conclusion}
We presented a global data association method that achieved high recall with low alignment error when evaluated on candidate loop closures in the KITTI dataset.
By unifying the representation of poles and planes extracted from lidar scans as affine Grassmannian manifold elements, GraffMatch widens the applicability of using geometric primitives in place of memory-intensive point cloud maps.
Importantly, leveraging the invariant shifted affine Grassmannian distance in our graph-based data association framework enables the geometric verification of place recognition candidates with a wide range of baseline distances between frames.
By removing assumptions on initial frame alignment (e.g., from noisy odometry), GraffMatch is applicable to other perception problems requiring geometric verification, such as extrinsic sensor calibration, map merging, and global relocalization.
In future work we will incorporate GraffMatch into a complete SLAM pipeline, using affine Grassmannian objects for both local and global data association.
In particular, we will investigate the estimation of lines and planes directly via subspace tracking methods, using manifold-based optimization techniques to perform online bundle adjustment of affine Grassmannian object landmarks. |
\section{Optical forces and noise acting on a dielectric sphere}
To illustrate our idea, we consider a sub-wavelength dielectric sphere interacting with two standing-wave optical modes of a Fabry-Perot cavity~(Fig.~\ref{fig:schematic}a). One resonantly driven mode provides an optical dipole trap for the sphere. The second mode is driven by a weaker ``cooling'' beam, assumed to have a non-zero intensity gradient at the trap center, which provides a radiation pressure cooling force~\cite{braginsky02,wilson-rae07,marquardt07}. We discuss the cooling mechanism in the next section, while here we focus on the trapping potential and the noise forces acting on the sphere.
The trapping beam provides a gradient force similar to that used
to ``optically tweeze'' small dielectric
particles~\cite{ashkin07}. Considering a sphere whose radius is
much smaller than the optical wavelength, $r{\ll}\lambda$, its
optical response is like that of a point dipole with induced
dipole moment
$p_{\footnotesize\textrm{ind}}={\alpha}_{\footnotesize\textrm{ind}}E(x)$
and optical potential
$U_{\footnotesize\textrm{opt}}(x)=-(1/4)(\textrm{Re}\;\alpha_{\footnotesize\textrm{ind}})|E(x)|^2$~(see
Appendix). Here $x$ is the CM position of the sphere,
$\alpha_{\footnotesize\textrm{ind}}=3\epsilon_{0}V\left(\frac{\epsilon-1}{\epsilon+2}\right)$
is its polarizability, $V$ is the sphere volume, and $\epsilon$ is
the electric permittivity. Taking a standing wave
$E(x)=E_{0}\cos\,kx$~($k{\equiv}2\pi/\lambda$), to lowest order
near an anti-node the potential corresponds to a harmonic
oscillator with mechanical frequency
\begin{equation} \omega_{m}=\left(\frac{6k^{2}I_0}{\rho c}
\textrm{Re}\frac{\epsilon-1}{\epsilon+2}\right)^{1/2},\label{eq:omegam}
\end{equation}
where $I_0$ is the field intensity and $\rho$ is the mass density
of the sphere. The total trap depth is
$U_{0}=(3I_{0}V/c)\textrm{Re}\frac{\epsilon-1}{\epsilon+2}$.
Typical trap depths and oscillation frequencies for a high-index
material~($\frac{\epsilon-1}{\epsilon+2}{\sim}1$) are plotted in
Figs.~\ref{fig:schematic}c,d. Frequencies of several MHz are
achievable using an intra-cavity intensity of
$I_{0}{\sim}1$~W/$\mu$m${}^2$. The imaginary component of
$\epsilon$ characterizes optical absorption, which contributes to
internal heating. For a material with ${\sim}10$~dB/km propagation
losses in bulk, intensities of $I_0{\sim}10$~W/$\mu$m${}^2$ can be
sustained without melting the sphere, due to blackbody
re-radiation of the absorbed energy~(see Appendix). With this in
mind, we assume $\epsilon$ is real in the following discussions.
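For a quick numerical check of Eq.~(\ref{eq:omegam}), the script below evaluates the trap frequency for the quoted intensity~(we assume a silica-like density $\rho{\approx}2000$~kg/m${}^3$ and $\frac{\epsilon-1}{\epsilon+2}{\approx}1$, which the text does not fix explicitly):
\begin{verbatim}
import numpy as np

c, lam = 3e8, 1e-6            # speed of light [m/s], wavelength [m]
I0 = 1e12                     # intensity: 1 W/um^2 in W/m^2
rho, chi = 2000.0, 1.0        # density [kg/m^3], Re[(eps-1)/(eps+2)]
k = 2 * np.pi / lam
omega_m = np.sqrt(6 * k**2 * I0 * chi / (rho * c))
print(omega_m / (2 * np.pi) / 1e6)   # ~3 MHz, i.e. "several MHz"
\end{verbatim}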
The dominant noise forces acting on the sphere are collisions with
a background gas and momentum recoil kicks due to scattered
photons. In the Appendix, we show that the contributions from shot
noise, blackbody radiation, and sphere anisotropy are negligible.
Furthermore, the CM is de-coupled from the internal degrees of
freedom and the sphere effectively has no internal structure~(as
opposed to molecules, where the internal configuration can affect
cooling efficiency~\cite{bahns96}). In the regime where the
molecular mean free path exceeds $r$, the background gas leads to
a mean damping force $dp/dt=-\gamma_{g}p/2$ with damping rate
$\gamma_{g}/2=(8/\pi)(P/\bar{v}r\rho)$, where $P,\bar{v}$ are the
background gas pressure and mean speed,
respectively~\cite{epstein24}. The random nature of the collisions
also thermalizes the motional energy, at a rate given through the
fluctuation-dissipation theorem by $dE/dt=-\gamma_{g}(E-k_{B}T)$,
where $T$ is the gas temperature. In particular, the
characteristic time for the system to heat by one phonon starting
from the ground state is
$\tau_{g}{\sim}\hbar\omega_{m}/\gamma_{g}k_{B}T$. Note that
$\tau_{g}^{-1}$ does not necessarily reflect the actual collision
rate between the sphere and gas molecules,
$R_{\footnotesize\textrm{coll}}{\approx}{\pi}P\bar{v}r^2/k_{B}T$~(it
is possible for a single collision to be quite rare,
$R_{\footnotesize\textrm{coll}}{\gg}\tau_{g}^{-1}$, and to impart
several phonons at once). We define a mechanical quality factor
$Q_g=\omega_m/\gamma_g$ due to the background gas, and a number of
coherent oscillations
$N^{(g)}_{\footnotesize\textrm{osc}}\equiv\omega_{m}\tau_{g}/2\pi$
expected before the energy increases by a single phonon. For a
sphere of radius $r=50$~nm, $\omega_{m}/(2\pi)=1$~MHz, and a
room-temperature gas with $P=10^{-10}$~Torr, one finds
$\gamma_{g}{\sim}10^{-6}$~s${}^{-1}$, $Q_g{\sim}6{\times}10^{12}$, $N^{(g)}_{\footnotesize\textrm{osc}}{\sim}10^5$,
indicating that the levitated sphere can be essentially
\textit{de-coupled} from its thermal environment.
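These numbers follow directly from the expressions above; the sketch below reproduces their orders of magnitude~(assumed values: room temperature, a mean gas speed $\bar{v}{\approx}500$~m/s, and $\rho{\approx}2000$~kg/m${}^3$):
\begin{verbatim}
import numpy as np

hbar, kB = 1.055e-34, 1.381e-23
P = 1e-10 * 133.3                    # 1e-10 Torr in Pa
vbar, T = 500.0, 300.0               # gas mean speed [m/s], temperature [K]
r, rho = 50e-9, 2000.0               # sphere radius [m], density [kg/m^3]
omega_m = 2 * np.pi * 1e6
gamma_g = 2 * (8 / np.pi) * P / (vbar * r * rho)
tau_g = hbar * omega_m / (gamma_g * kB * T)
print(gamma_g, omega_m / gamma_g, omega_m * tau_g / (2 * np.pi))
# ~1e-6 1/s,  ~5e12,  ~1e5 coherent oscillations
\end{verbatim}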
Photons scattered by the sphere out of the cavity lead to heating
via momentum recoil kicks. In analogy with atoms or ions trapped
in the Lamb-Dicke regime~\cite{leibfried03}~(when the particle is
trapped on a scale $\Delta{x}$ much smaller than $\lambda$), the
scattering induces transitions between consecutive harmonic
oscillator levels $n{\rightarrow}n{\pm}1$, with rates
$R_{n{\rightarrow}n{\pm}1}=\gamma_{\footnotesize\textrm{sc}}(n+1/2{\pm}1/2)$.
Second-order perturbation theory~\cite{wineland79} yields
\begin{equation}
\gamma_{\footnotesize\textrm{sc}}=(2/5)(\omega_{r}/\omega_{m})R_{\footnotesize\textrm{sc}},\label{eq:gammasc}
\end{equation}
where $\omega_{r}=\hbar k^2/2{\rho}V$ is the recoil frequency and $R_{\footnotesize\textrm{sc}}=48\pi^{3}\frac{I_{0}V^2}{\lambda^{4}\hbar\omega}(\frac{\epsilon-1}{ \epsilon+2})^2$ is the photon scattering rate. A result identical to Eq.~(\ref{eq:gammasc}) holds for a weakly excited, trapped atom~\cite{cirac92}. When photon scattering dominates the heating, the expected number of coherent oscillations is
\begin{equation}
N^{(\footnotesize\textrm{sc})}_{\footnotesize\textrm{osc}}\equiv\frac{\omega_m}{2\pi\gamma_{\footnotesize\textrm{sc}}}=\frac{5}{
8\pi^3}\frac{\epsilon+2}{\epsilon-1}\frac{\lambda^3}{V}.\label{eq:Nosc}
\end{equation}
Note that $N^{(\footnotesize\textrm{sc})}_{\footnotesize\textrm{osc}}$ scales inversely with
the sphere volume~($N^{(\footnotesize\textrm{sc})}_{\footnotesize\textrm{osc}}{\sim}40$ for $r=50$~nm, $\lambda=1\;\mu$m, $\epsilon{\gg}1$), due to the fact that the scattered power and dipole force scale like $p_{\footnotesize\textrm{ind}}^2$ and $p_{\footnotesize\textrm{ind}}$, respectively. Comparing with background gas collisions at $P=10^{-10}$~Torr and $\omega_m/(2\pi)=1$~MHz, recoil heating dominates $N_{\footnotesize\textrm{osc}}$ for sphere sizes $r{\gtrsim}5$~nm. Reaching the regime $N_{\footnotesize\textrm{osc}}{\gg}1$ implies that the sphere can coherently evolve for many oscillation periods after any cooling mechanisms are turned off, which makes this system a promising candidate for observing coherent quantum phenomena.
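Eq.~(\ref{eq:Nosc}) gives the quoted figure directly~(here $\epsilon{\gg}1$, so $(\epsilon+2)/(\epsilon-1){\rightarrow}1$):
\begin{verbatim}
import numpy as np

lam, r = 1e-6, 50e-9
V = (4 / 3) * np.pi * r**3
print((5 / (8 * np.pi**3)) * lam**3 / V)   # ~40 coherent oscillations
\end{verbatim}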
Finally, we remark that $R_{\footnotesize\textrm{sc}}$ can be very
large~($R_{\footnotesize\textrm{sc}}{\sim}10^{15}$~s${}^{-1}$ for
$I_{0}=1$~W/$\mu$m${}^2$ and $r=50$~nm) compared to atoms or ions,
which enables direct imaging. The large scattering is due to the
large intensities and the linear response of the sphere~(it is not
saturated like an atom or ion), as opposed to the system behaving
as a lossy element in the cavity. The contribution to the cavity
loss rate is
$\kappa_{sc}=12\pi^{2}\omega(V^{2}/\lambda^{3}V_c)\left(\frac{\epsilon-1}{\epsilon+2}\right)^{2}$,
where $V_c$ is the cavity mode volume, and is typically much
smaller than the natural cavity linewidth $\kappa$. We also
emphasize that in the Lamb-Dicke regime, the scattering does not
cause extra decoherence beyond that from recoil heating. This is
in contrast to motional wavepackets of spatial extent
$\Delta{x}\sim\lambda$, where a single scattering event can
destroy quantum coherence~\cite{kokorowski01}.
\section{Cooling the center-of-mass motion to the ground state}
We now describe the optical cooling effect of the weaker, second cavity mode~(denoted mode $2$). For concreteness, we assume that the sphere is trapped near the anti-node $x=0$ of cavity mode $E_{1}{\propto}\cos\;k_{1}x$, and that the second mode has spatial profile $E_{2}{\propto}\cos\;(k_{2}x-\pi/4)$, such that the intensity gradient is maximized. The total Hamiltonian of the system is given in a
rotating frame by
\begin{eqnarray} H & = &
-\hbar\delta_{1}\opdagger{a}{1}\op{a}{1}-\hbar\delta_{2}\opdagger{a}{2}\op{a}{2}
+\frac{\hbar\Omega}{2}\left[(\op{a}{1}+\opdagger{a}{1})+\sqrt{2\zeta'}(\op{a}{2}
+\opdagger{a}{2})\right]
\nonumber \\ & & -\hbar
g_{1}(\cos\;2k_{1}\hat{x}-1)\opdagger{a}{1}\op{a}{1}-\hbar
g_{2}\cos\;2(k_{2}\hat{x}-\pi/4)\opdagger{a}{2}\op{a}{2}+\frac{\hat{p}^2}{2m}
.\label{eq:H}
\end{eqnarray}
Here $\hat{p},\hat{x}$ are the momentum and position operators of
the CM, $\op{a}{i}$ is the photon annihilation operator of cavity
mode $i$, and $\Omega$, $\Omega\sqrt{2\zeta'}$ are the driving
amplitudes of modes $1$ and $2$, respectively. $\delta_{i}$ is the
detuning between the driving field and mode frequency when the
sphere sits at $x=0$. The opto-mechanical coupling strengths
$g_i=\frac{3V}{4V_{c,i}}\frac{\epsilon-1}{\epsilon+2}\omega_i$
characterize the position-dependent frequency shifts due to the
sphere~(see Appendix), where $V_{c,i}$, $\omega_i$ are the mode
volume and resonance frequency of mode $i$. To simplify notation,
we assume that modes $1,2$ have similar properties,
$\omega_{1}{\approx}\omega_{2}=\omega$, etc. In addition to the
evolution described by $H$, the system also exhibits cavity losses
and the mechanical noise described previously.
Expanding the opto-mechanical coupling term of mode $2$ around
$x=0$, $\hbar g
\cos\;2(k\hat{x}-\pi/4)\opdagger{a}{2}\op{a}{2}\approx 2\hbar g
k\hat{x} \opdagger{a}{2}\op{a}{2}$, one finds a linear coupling in
the sphere position, analogous to the effect of radiation pressure
on a moving mirror of a Fabry-Perot cavity~\cite{wilson-rae07}.
Physically, the motion induces changes in the detuning and thus
the intra-cavity field amplitude, while the lag in the cavity
response enables the field to do work~(cooling) on the sphere. To
calculate the cooling rate, following the techniques of
Ref.~\cite{wilson-rae07} we first apply shifts to the operators,
$\op{a}{i}\rightarrow\op{a}{i}+\alpha_i$,
$\hat{x}\rightarrow\hat{x}+x_0$, where $\alpha_i$ and
$x_0\approx\zeta/k$~($\zeta\approx\kappa^{2}\zeta'/(\kappa^{2}+4\delta_{2}^2)$)
are mean values of the quantum fields. Here we have defined
$2\zeta=|\alpha_{2}/\alpha_{1}|^2$ as the ratio of intra-cavity
intensities of modes $1$ and $2$, and assumed that mode $1$ is
driven on resonance~($\delta_1=0$). To lowest order in $\zeta$,
field mode $1$($2$) is purely responsible for trapping~(cooling).
Subsequently tracing out the cavity degrees of freedom yields
equations for the mechanical motion alone. In particular, to
lowest order in $\zeta$ and for $\delta_2<0$, the cooling laser
provides a net cooling rate $\Gamma\equiv
R_{opt,-}-R_{opt,+}=\kappa\Omega_{m}^2\left[((\delta_{2}+\omega_m)^2+(\kappa/2)^2)^{-1}-((\delta_{2}-\omega_m)^2+(\kappa/2)^2)^{-1}\right]$~(see
Appendix), where $R_{opt,{\mp}}$ denote the anti-Stokes~(cooling)
and Stokes~(heating) scattering rates~(see
Fig.~\ref{fig:schematic}b). Here $\Omega_{m}\equiv
2gkx_{m}|\alpha_{1}|\sqrt{2\zeta}$ is the effective
opto-mechanical driving amplitude~(see Fig.~\ref{fig:schematic}b)
and $x_m\equiv\sqrt{\hbar/2m\omega_m}$. Validity of these
perturbative results requires $\Omega_{m}\lesssim\kappa,\omega_m$
and $\zeta{\lesssim}1$.
In the realistic limit that background gas collisions are negligible, the steady-state phonon number is $\avg{n_f}{\approx}\tilde{n}_{f}+\gamma_{sc}/\Gamma$, where $\tilde{n}_{f}=R_{opt,+}/\Gamma$ is the fundamental limit of laser cooling~\cite{wilson-rae07}. It is minimized when $\delta_{2}=-(1/2)\sqrt{\kappa^2+4\omega_m^2}$. In particular, when sideband resolution is achieved~($\omega_m\gtrsim\kappa$), $\tilde{n}_{f,\footnotesize\textrm{min}}{\approx}(\kappa/4\omega_m)^2{\ll}1$, indicating that ground-state cooling is possible provided other heating mechanisms are made sufficiently small. Considering the limit $\omega_{m}{\gg}\kappa$ and taking the maximum cooling rate $\Gamma{\sim}\kappa$ consistent with the perturbative calculations, using Eq.~(\ref{eq:Nosc}) one can then re-write $\avg{n_f}$ as
\begin{equation}
\avg{n_f}{\approx}\frac{\kappa^2}{16\omega_m^2}+\phi\frac{\omega_m}{\kappa}
.\;\;\;\;\;(\omega_{m}{\gg}\kappa)\label{eq:nf}
\end{equation}
The last term on the right corresponds to photon recoil heating and $\phi=(4\pi^2/5)(V/\lambda^3)\frac{\epsilon-1}{\epsilon+2}$ is a dimensionless parameter characterizing the sphere volume. Eq.~(\ref{eq:nf}) is minimized for $\kappa/\omega_{m}=2\phi^{1/3}$, in which case $\avg{n_f}_{\footnotesize\textrm{min}}=3\phi^{2/3}/4{\propto}(r/\lambda)^{2}{\ll}1$. Thus, one sees that ground-state cooling is readily attainable~(provided that $\zeta{\lesssim}1$ can be simultaneously satisfied). Physically, the optimum value of $\kappa/\omega_m$ balances good sideband resolution against the excessive recoil heating incurred when large intensities are used to increase $\omega_m$.
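The optimum is easily evaluated; the sketch below reproduces the numbers for the parameters used throughout~($r=50$~nm, $\lambda=1\;\mu$m, $\epsilon{\gg}1$):
\begin{verbatim}
import numpy as np

lam, r = 1e-6, 50e-9
V = (4 / 3) * np.pi * r**3
phi = (4 * np.pi**2 / 5) * V / lam**3      # (eps-1)/(eps+2) -> 1
print(2 * phi**(1 / 3))                    # optimal kappa/omega_m ~ 0.3
print(0.75 * phi**(2 / 3))                 # <n_f>_min ~ 0.02
\end{verbatim}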
To illustrate these results, we consider a sphere of radius $r=50$~nm and $\omega_m/(2\pi)=0.5$~MHz levitated inside a cavity of length $L=1$~cm and mode waist $w=25\;\mu$m~($V_c=(\pi/4)Lw^2$). In Fig.~\ref{fig:cooling}a we plot the minimum obtainable $\avg{n_f}$~(black curve) as a function of cavity finesse $\mathcal{F}\equiv{\pi}c/2{\kappa}L$, assuming negligible gas collisions and subject to the constraints $\zeta,\Omega_{m}/\kappa,\Omega_m/\omega_m<1/2$ and optimized over detuning $\delta_2$. For low cavity finesse the cooling is essentially limited by sideband resolution~($\tilde{n}_{f,\footnotesize\textrm{min}}$, red curve) and the ground state regime $\avg{n_f}{\sim}1$ can be reached with a finesse of $\mathcal{F}{\sim}3600$. A minimum of $\avg{n_f}{\sim}0.02$ is reached at a finesse of $\mathcal{F}{\sim}50000$~(with corresponding cooling rate $\Gamma{\sim}10^{6}$~s${}^{-1}$). This corresponds to a final temperature of $T_f{\sim}6\;\mu$K, or a remarkable compression factor of $T/T_f{\sim}5{\times}10^7$ relative to room temperature $T$.
\section{Motional entanglement and squeezed light generation using quantum state transfer}
A number of related schemes have been proposed for quantum state transfer between light and the motion of atoms~\cite{zeng94,parkins99} or nano-mechanical systems~\cite{zhang03,jahne09}. In our system, the small mechanical noise and ease of achieving good sideband resolution in principle allow state transfer to be realized with almost perfect efficiency. This might enable light with non-classical properties to be mapped onto mechanical motion, and as an example, we show that this can be used to generate EPR correlations between two spatially separate spheres. Moreover, a complementary process can be realized, where a non-trivial mechanical state~(a squeezed state) is prepared through coherent manipulation and subsequently transferred to light leaving the cavity. The latter case illustrates how opto-mechanics can yield a novel nonlinear optical system.
First we give a simplified picture of quantum state transfer using a one-sided, ideal cavity~(where all losses are via transmission through one cavity mirror)~\cite{gardiner85}. Specifically, we consider the Heisenberg equations of motion in a rotating frame for the cavity cooling mode and the motion~(after applying the shifts described in the previous section), when the cooling mode is driven resonantly on the red motional sideband~($\delta_{2}=-\omega_m$),
\begin{eqnarray} \frac{d}{dt}\op{a}{2} & = &
-\frac{\kappa}{2}\op{a}{2}-i\Omega_{m}\left(\op{b}{}+\opdagger{b}{}e^{2i\omega_{
m}t}\right)+\sqrt{\kappa}\op{a}{2,\footnotesize\textrm{in}},
\nonumber \\ \frac{d}{dt}\op{b}{} & = &
(i/\hbar)[H_{e},\hat{b}]-i\Omega_{m}\left(\op{a}{2}+\opdagger{a}{2}e^{2i\omega_{m}t}
\right)+i\hat{F}(t)e^{i\omega_{m}t}.\label{eq:timeevolution}
\end{eqnarray}
The Hamiltonian $H_e$ describes any external forces or couplings applied to the sphere beyond those in Eq.~(\ref{eq:H}), $\hat{b}$ is the annihilation operator corresponding to a harmonic oscillator of mass $m$ and frequency $\omega_m$, and $\hat{a}_{2,\footnotesize\textrm{in}}$ is the cavity input operator associated with losses. $F(t)$ is the~(Hermitian) noise due to photon recoil, which has correlations $\avg{F(t)F(t')}=\phi\omega_{m}\delta(t-t')$, and we assume all other noise is negligible. Since the cavity trapping mode~($\hat{a}_1$) effectively provides a harmonic potential and can otherwise be ignored, for simplicity we will omit the subscript $2$ as we refer to the cooling mode in future discussions. Temporarily assuming that the non-secular terms~($e^{2i\omega_{m}t}$) can be ignored and that the mechanical motion evolves slowly compared to the cavity time scale $1/\kappa$, one can adiabatically eliminate the cavity mode to yield $\hat{a}{\approx}-2i(\Omega_{m}/\kappa)\hat{b}+(2/\sqrt{\kappa})\op{a}{\footnotesize\textrm{in}}$, and $d\hat{b}/dt{\approx}(i/\hbar)[H_e,\hat{b}]-(\Gamma/2)\hat{b}-i\sqrt{\Gamma}\op{a}{\footnotesize\textrm{in}}+i\hat{F}(t)e^{i\omega_{m}t}$, where $\Gamma{\equiv}4\Omega_{m}^2/\kappa$ is the cavity-induced cooling rate in the weak-driving limit~($\Omega_{m}\lesssim\kappa$). The cavity output is related to the input and intra-cavity fields through $\op{a}{\footnotesize\textrm{out}}=\sqrt{\kappa}\hat{a}-\op{a}{\footnotesize\textrm{in}}$~\cite{gardiner85}, or $\op{a}{\footnotesize\textrm{out}}{\approx}-i\sqrt{\Gamma}\hat{b}+\op{a}{\footnotesize\textrm{in}}$, which states that the mechanical motion is mapped onto the outgoing light. Physically, the cooling process converts phononic excitations into photonic excitations that leak out of the cavity. Generally, two mechanisms will degrade state transfer. First, $\hat{F}$ adds extra noise to the ideal state that one is attempting to transfer, with a strength characterized by the small parameter $\phi$. Second, the non-secular terms contribute to Stokes scattering, destroying the perfect phonon-photon correspondence, with a strength that is expected to be proportional to $(\kappa/\omega_m)^2$. Given that $\phi,(\kappa/\omega_m)^2$ can be made small, nearly perfect state transfer is possible in principle. We illustrate this with two examples, entanglement transfer and squeezed light generation.
\subsection{Entanglement transfer}
Here we describe how EPR correlations shared between two modes of
light~\cite{yonezawa07} can be mapped to the motion of two spheres
trapped in spatially separate cavities. Specifically, we define
quadrature operators for the input light for each of the two
systems~(denoted $A,B$), given by
$X^{(j)}_{+,\footnotesize\textrm{in}}=(\hat{a}^{(j)}_{\footnotesize\textrm{in}}+\hat{a}^{(j)\dagger}_{\footnotesize\textrm{in}})$,
$X^{(j)}_{-,\footnotesize\textrm{in}}=(\hat{a}^{(j)}_{\footnotesize\textrm{in}}-\hat{a}^{(j)\dagger}_{\footnotesize\textrm{in}})/i$
for $j=A,B$. A similar set of operators
$X^{(j)}_{\pm,m},X^{(j)}_{\pm,\footnotesize\textrm{out}}$ can be
defined for the motion and output light, by replacing
$\hat{a}^{(j)}_{\footnotesize\textrm{in}}{\rightarrow}\hat{b}^{(j)},\hat{a}^{(j)}_{\footnotesize\textrm{out}}$,
respectively. Of particular interest is the case where the two
input fields exhibit broadband EPR correlations between them,
\begin{equation} \avg{(X^{(A)}_{+,\footnotesize\textrm{in}}(\omega)+X^{(B)}_{+,\footnotesize\textrm{in}}(\omega))^2}/2=\avg{(X^{(A)}_{-,\footnotesize\textrm{in}}(\omega)-X^{(B)}_{-,\footnotesize\textrm{in}}(\omega))^2}/2=e^{-2R}<1.\label{eq:EPRstate} \end{equation}
When the variances satisfy $e^{-2R}<1$, the two modes exhibit correlations below vacuum level and are entangled~\cite{duan00}~(for concreteness, we assume the other combinations of quadratures satisfy $\avg{(X^{(A)}_{\pm,\footnotesize\textrm{in}}(\omega){\mp}X^{(B)}_{\pm,\footnotesize\textrm{in}}(\omega))^2}/2=e^{2R}$). Such EPR correlations have been observed with light and in the internal degrees of freedom of atomic ensembles~\cite{julsgaard01}, but have yet to be demonstrated using mechanical systems.
To proceed, we solve Eq.~(\ref{eq:timeevolution}) in the Fourier
domain~(including the non-secular terms) for each of the systems
for the correlations given in Eq.~(\ref{eq:EPRstate}) and $H_e=0$.
Generally, the non-secular terms yield an infinite set of
algebraic equations~(coupling frequencies $\omega_m+2n\omega_{m}$
for integer $n$), which given $\omega_m{\gg}\Omega_m,\kappa$ can
be truncated to good approximation at $n>1$. For simplicity of
analysis, we assume the two systems have identical properties, and
that the cooling rate $\Gamma\sim\kappa$. However, we expect our
results should qualitatively hold provided that only
$\Gamma,\omega_m$ of the two systems are properly tuned, which can
be easily accomplished by adjusting the trapping and cooling beam
intensities. One can then show that state transfer yields the
following joint variances in the motion~(see Appendix),
\begin{equation} \Delta_{\footnotesize\textrm{EPR}}\equiv\avg{(X^{(A)}_{\pm,m}(t){\mp}X^{(B)}_{\pm,m}(t))^2}/2=e^{-2R}+\frac{\kappa^2}{16\omega_m^2}(3e^{2R}+2\,{\sinh}\,2R)+\frac{4\phi\omega_m}{\kappa}. \end{equation}
As expected, Stokes scattering and recoil heating contribute to
the variance by amounts $(\kappa/\omega_m)^2$ and
$\phi\omega_m/\kappa$, respectively. This can be minimized with
respect to $\kappa/\omega_m$, yielding
$\Delta_{\footnotesize\textrm{EPR,min}}=e^{-2R}+3(\phi/2)^{2/3}(3e^{2R}+2\sinh\,2R)^{1/3}$.
To illustrate these results we plot
$\Delta_{\footnotesize\textrm{EPR,min}}$ in
Fig.~\ref{fig:cooling}b as a function of $e^{-2R}$, taking the
same parameters as in Fig.~\ref{fig:cooling}a. For the moderate
values of $e^{-2R}$ typically obtained in
experiments~\cite{yonezawa07}, EPR correlations in the motion can
be achieved with reasonable cavity finesse $\mathcal{F}<10^5$.
\subsection{Squeezed light generation}
First we describe a technique to create a mechanical squeezed
state, and then derive the properties of the outgoing light upon
quantum state transfer. Mechanical squeezing is accomplished by
adding a sinusoidally-varying component to the intensity of the
trapping beam, which yields the Hamiltonian of a parametric
amplifier~\cite{rugar91},
$H_{e}=\epsilon_{m}\omega_m^{2}x^2\sin\;2\omega_{m}t$. Here
$\epsilon_m$ is a small parameter characterizing the strength of
the modulation of the trap frequency. As one approaches the
threshold for parametric
oscillation~($\epsilon_{m}\omega_{m}{\rightarrow}\Gamma$), the
variance in one quadrature of motion is reduced by up to a factor
of $2$~\cite{rugar91}.
We now investigate the properties of the outgoing light over a
narrow frequency range near the cavity resonance, specifically
considering $X_{{\pm},\footnotesize\textrm{out}}(\omega=0)$. We
apply similar methods as above to solve
Eq.~(\ref{eq:timeevolution}) in the Fourier domain. Taking the
limit as one approaches threshold and $\Gamma{\sim}\kappa$, the
variance in the output light is given by~(see Appendix)
\begin{equation}
{\Delta}X_{+,\footnotesize\textrm{out}}^2(\omega=0)=\frac{2\phi\omega_m}{\kappa}+\frac{5}{16}\frac{\kappa^2}{\omega_m^2}.\label{eq:squeezing}
\end{equation}
Again, an optimum value of $\kappa/\omega_m{\propto}\phi^{1/3}$
maximizes the squeezing, with
$({\Delta}X^{2}_{+,\footnotesize\textrm{out}})_{\footnotesize\textrm{min}}{\approx}2.04\phi^{2/3}$~(note
that ${\Delta}X^{2}_{+,\footnotesize\textrm{out}}=1$ for vacuum).
A plot of
$({\Delta}X^{2}_{+,\footnotesize\textrm{out}})_{\footnotesize\textrm{min}}$
as a function of sphere size is shown in Fig.~\ref{fig:cooling}c.
For $r=10$~nm size spheres, one finds that over $25$~dB of
squeezing relative to vacuum can be obtained using an ideal
cavity~(note for good vacuum conditions, background gas collisions
are negligible down to $r{\sim}5$~nm). In practice, a cavity has
additional scattering and absorption losses that limit the
squeezing. Taking an ultra-high finesse cavity with ${\sim}1$~ppm
losses per round trip~\cite{hood01} and a set of reasonable cavity
dimensions, we show in the Appendix that light squeezed by up to
${\sim}15$~dB can be extracted.
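The ideal-cavity bound quoted above follows directly from the optimized form of Eq.~(\ref{eq:squeezing}), $({\Delta}X^{2}_{+,\footnotesize\textrm{out}})_{\footnotesize\textrm{min}}{\approx}2.04\phi^{2/3}$:
\begin{verbatim}
import numpy as np

lam, r = 1e-6, 10e-9
V = (4 / 3) * np.pi * r**3
phi = (4 * np.pi**2 / 5) * V / lam**3      # eps >> 1
dX2_min = 2.04 * phi**(2 / 3)
print(-10 * np.log10(dX2_min))             # ~27 dB below vacuum
\end{verbatim}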
In principle, similar techniques also apply to trapped atoms or
ions. However, one benefits from the relatively large mass $m$ of
the sphere. Specifically, approaching threshold, one quadrature of
motion becomes infinitely unsqueezed, producing a large position
uncertainty ${\Delta}x$~\cite{rugar91}. At the same time, faithful
quantum state transfer requires a linear opto-mechanical coupling,
which translates into a requirement that the Lamb-Dicke parameter
$\eta{\equiv}k{\Delta}x\propto{m^{-1/2}}{\ll}1$ be small. In the
Appendix, we show that $\eta{\ll}10^{-2}$ can be satisfied with a
sphere even in the regime of ${\sim}25$~dB squeezing levels. To
compare, a typical atom trapped with a frequency of
$\omega_{m}/(2\pi){\sim}1$~MHz in its \textit{ground state}
already yields $\eta{\sim}0.05$.
\section{Outlook}
An optically levitated opto-mechanical system can have remarkably
long coherence times, which potentially enables quantum phenomena
such as entanglement to be observed even in room-temperature
environments. Combining previously demonstrated techniques to
controllably grow small particles~\cite{venkatathri08} and load
and manipulate them in vacuum~\cite{shu06,ashkin07} should put
this system within experimental reach. Extending the ideas
presented here should open up several other interesting
possibilities. For example, beyond the dipole-type objects
considered here, one could engineer the shapes of the levitated
objects to yield even larger mechanical frequencies and coherence
times, and controllably study the decoherence of a large
system~\cite{hackermuller04}. Also, several spheres or more
complex nano-mechanical systems with internal modes could be
levitated and coupled together, for the purpose of entangling
multiple degrees of freedom. Separately, one could take advantage
of the fact that the potential for the CM is purely optical to
engineer non-trivial dynamics, such as nonlinear motion. It would
also be interesting to develop analogous levitation techniques
using nano- and micro-photonic
cavities~\cite{vahala03,vuckovic03}, combining their remarkable
optical properties with the excellent mechanical characteristics
of a trapped particle. Finally, by levitating charged or magnetic
systems, one could potentially realize systems analogous to ion
traps~\cite{wineland07} or facilitate novel quantum hybrid
architectures~\cite{rabl09}.
\acknowledgments
DC and SP acknowledge support from the Gordon and Betty Moore
Foundation through Caltech's Center for the Physics of
Information, DC from the National Science Foundation under Grant
No. PHY-0803371, CR from a Millikan Postdoctoral Fellowship, and
JY and PZ from a Moore Fellowship during their stay at Caltech.
Work at Innsbruck is supported by the Austrian Science Fund and EU
Projects.
Note added: We also have become aware of a recent, similar
proposal to optically levitate and manipulate a nano-mechanical
system by O. Romero-Isart \textit{et al}., in arXiv:0909.1469
(2009).
|
\section{Introduction}
t-distributed Stochastic Neighborhood Embedding (t-SNE) is a well-known nonlinear dimensionality reduction technique with applications in many fields. It is frequently used to generate two- or three-dimensional visualizations of high dimensional datasets, often for the purpose of visualizing and identifying clusters.
\vspace{-10pt}
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.43\textwidth]{mnist_blind}};
\node at (5,0) {\includegraphics[width=0.43\textwidth]{mnist_ground_truth}};
\end{tikzpicture}
\vspace{-10pt}
\caption{t-SNE embedding of MNIST (left) with ground truth coloring (right).}
\label{fig:mnist}
\end{figure}
\end{center}
We describe t-SNE as a \textit{force-based} method because it generates embeddings by balancing attractive and repulsive forces between data samples. These forces are determined by comparing the neighborhood structure of the input data to that of the output. Other well-known force-based methods include Laplacian eigenmaps \cite{bel, coif}, ForceAtlas2 \cite{jac}, LargeVis \cite{largevis}, and UMAP \cite{umap}. \\
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.9\textwidth]{mnist_directions}};
\node at (3.75,-3.2) {\includegraphics[width=0.2\textwidth]{wheel}};
\end{tikzpicture}
\vspace{-10pt}
\caption{Coloring of t-SNE embedding by force direction. We propose that using forces in the equilibrium embedding as features can provide additional information for (sub-)cluster identification. The wheel in the bottom right identifies colors with directions.}
\label{fig:mnist_directions}
\end{figure}
\end{center}
Although t-SNE is widely used in applications, there is currently little theory to explain how it works. The algorithm can have profoundly different outputs with different choices of parameters, and it is well known that it simply does not work well with certain types of data, such as manifolds \cite{george1, wattenberg}. Identifying when t-SNE results are meaningful and how they should be interpreted is thus an important open question for its practical use. \\
\textbf{Existing results.}
Linderman \& Steinerberger \cite{george1} interpreted t-SNE as a dynamical multi-particle system that obeys certain ellipticity conditions. This approach was further developed by Arora, Hu \& Kothari \cite{arora}. These results show, roughly, that if the underlying data $\left\{x_1, \dots, x_n\right\} \subset \mathbb{R}^d$ is strongly clustered, then t-SNE will recover the clustering. The results also explain, more qualitatively than quantitatively, the underlying mechanism by which this occurs. One goal of this paper is to introduce a new approach for obtaining quantitative predictions of t-SNE results. \\
Since t-SNE is highly popular, there are many experimental studies and guides for selecting parameters and validating results. We especially highlight two recent studies by Kobak \& Berens \cite{kobak} and Wang, Huang, Rudin \& Shaposhnik \cite{wang}. We also point out the study by B\"ohm, Behrens \& Kobak \cite{bohm}, which shows that force-based methods lie on an attraction-repulsion spectrum and can be empirically recovered by tuning the forces used to create the embedding. We believe that this idea is a very promising step towards a unified theory of these algorithms. \\
\textbf{Outline of the paper.} We discuss two (independent) new ideas.
\begin{enumerate}
\item \textbf{Forceful Colorings.} We propose using the attractive and repulsive \textit{forces} used to generate the t-SNE embedding as features. Naturally, when the embedding reaches equilibrium, these force vectors cancel; however, we find that either one of the two can be used as an additional classifier that carries a lot of information. This idea can be applied to any force-based technique and will be explained in greater detail in \S 2.
\item \textbf{Mean Field Limits.} We present a new approach for obtaining quantitative predictions on the behavior of minimizers of the t-SNE energy (cost function). The main idea is to base the analysis on assumptions about the input similarities $p_{ij}$ rather than the data $\left\{x_1, \dots, x_n\right\}$. In particular, we set $p_{ij}$ as the adjacency matrix of a random graph. For suitable graph models, such as Erd\H{o}s-R\'enyi or random $k$-regular graphs, a stochastic regularization phenomenon allows us to simplify and rewrite the t-SNE cost as a fairly classical calculus of variations problem. We solve the problem for random $k$-regular graphs and come to an interesting conclusion: the mean field limit predicts that the energy minimizer of a $k$-regular random graph is, asymptotically, given by an \textit{annulus}. This result is interesting in its own right, but it also highlights how little we actually know about the t-SNE energy. These results are described in \S 3 and derived in \S 4.
\end{enumerate}
\section{Forceful Colorings}
\subsection{Force-based methods.} This section presents a simple new idea which may prove to be useful for applications of force-based embedding methods. We begin by describing the logic behind force-based methods in a unified way. A more complete description of t-SNE specifically is given in \S 4.1.
Most force-based dimensionality reduction techniques work by minimizing some notion of energy $E$ for the output embedding $\mathcal{Y} = \left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^s$. Letting $\mathcal{X} = \left\{x_1, \dots, x_n \right\} \subset \mathbb{R}^d$ be our input dataset, we initialize $\mathcal{Y}$ and apply an iterative method on the coordinates to minimize $E$. Each step of the optimization can usually be interpreted as an interaction between attractive and repulsive forces that moves the particle system $\mathcal{Y}$ towards a locally optimal configuration.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.38\textwidth]{15blind}};
\node at (6,0) {\includegraphics[width=0.4\textwidth]{15ground_truth}};
\node at (3.5,-6) {\includegraphics[width=0.8\textwidth]{attraction_magnitudes_vert_bar}};
\end{tikzpicture}
\vspace{-10pt}
\caption{t-SNE embedding of the digits 1 and 5 from MNIST (top left) with ground truth labels (top right). Coloring by the magnitude of attractive forces on a point (bottom) hints at substructures within the clusters.}
\label{fig:magnitude}
\end{figure}
\end{center}
For t-SNE specifically, we use gradient descent to optimize the energy functional
$$
E(y_1, \dots, y_n) = \sum_{i \neq j} p_{ij} \log\left(\frac{p_{ij}}{q_{ij}}\right)
$$
Here $p_{ij}$ represents pairwise similarities in the input space $\mathbb{R}^d$ and $q_{ij}$ represents pairwise similarities in the output space $\mathbb{R}^s$. $E$ is minimized when $p_{ij}$ and $q_{ij}$ have the same distribution. We update each $y_i$ using the negative gradient:
$$
-\frac{\partial E}{\partial y_i} = 4\sum_{j \neq i} p_{ij} q_{ij} Z (y_j - y_i) - 4\sum_{j \neq i} q_{ij}^2 Z (y_j - y_i),
$$
where $Z$ is a normalization factor for $q_{ij}$ calculated from $\mathcal{Y}$. The first term is an attractive force that moves $y_i$ towards points $y_j$ for which $p_{ij}$ is large. These points correspond to samples $x_j$ which are close to sample $x_i$ in the input data. The second term is a repulsive force that moves $y_i$ away from points $y_j$ for which it is too close. This prevents the formation of degenerate clusters. The net effect, hopefully, is that attraction dominates for pairs of points which are nearby in $\mathcal{X}$ while repulsion dominates for pairs of points which are distant, so that the final embedding $\mathcal{Y}$ preserves the neighborhood relations of the input.
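The two force fields are computed as part of every gradient step; the following dense $O(n^2)$ sketch extracts them from a given embedding~($P$ is the symmetrized, normalized input-affinity matrix with zero diagonal):
\begin{verbatim}
import numpy as np

def tsne_forces(Y, P):
    """Split the (negative) t-SNE gradient into attraction and repulsion."""
    D2 = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=-1)
    W = 1.0 / (1.0 + D2)                     # Student-t kernel, q_ij * Z
    np.fill_diagonal(W, 0.0)
    Z = W.sum()
    Q = W / Z
    diff = Y[None, :, :] - Y[:, None, :]     # diff[i, j] = y_j - y_i
    attr = 4 * np.einsum('ij,ijk->ik', P * Q * Z, diff)
    rep = -4 * np.einsum('ij,ijk->ik', Q**2 * Z, diff)
    return attr, rep                         # attr + rep = -dE/dy_i
\end{verbatim}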
\subsection{Forceful Colorings.} We reach a local minimum of the t-SNE energy functional when the attractive and repulsive forces on each $y_i$ cancel, i.e.
$$
\frac{\partial E}{\partial y_i} = 0 \qquad \qquad \forall~1 \leq i \leq n
$$
Though the net force on each point is $0$, the magnitudes of the attraction and repulsion (generally) do not vanish. The main insight is that these forces actually
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (2.5,-2.3) {\includegraphics[width=0.17\textwidth]{wheel}};
\node at (0,0) {\includegraphics[width=0.7\textwidth]{15attraction_directions}};
\end{tikzpicture}
\vspace{-10pt}
\caption{t-SNE embedding of MNIST 1 and 5 (see also Fig. \ref{fig:magnitude}) colored by direction of the attractive forces acting on a point. The wheel identifies colors with directions. The forceful coloring reveals rich substructures.}
\label{fig:dir}
\end{figure}
\end{center}
contain substantial information on the embedding structure while being easy to calculate. In fact, they are computed as part of each gradient descent step.
\begin{quote}
\textbf{Main idea}\textbf{.}
The attractive (or, equivalently, repulsive) forces on a particle organize clusters into force sinks (sources) which can be used to identify meaningful substructures in the data.
\end{quote}
This principle is based on empirical observations. We have not found it stated elsewhere in the literature, and we believe it to be possibly quite useful. A priori, a generic t-SNE embedding can be challenging to interpret, as it is not always clear how exactly to separate clusters. In Fig. \ref{fig:mnist}, for example, we see that it is impossible to distinguish the purple and light blue clusters, representing $4$ and $9$ respectively, based on the raw output. When we color the embedding by directions, however, we see the emergence of patterns that roughly correspond to the underlying ground truth (Fig. \ref{fig:mnist_directions}). We observe a similar phenomenon for the brown, yellow, and red clusters (representing $5$, $8$, and $3$).
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\filldraw (0,0) circle (0.08cm);
\filldraw (3,0) circle (0.08cm);
\filldraw (6,0) circle (0.08cm);
\foreach \x in {1,...,3}
\foreach \y in {1,...,3}
{
\draw [->] (0,0) -- (0.5*\x+0.4142*\y, 0.5*\y);
\draw [->] (3,0) -- (3+0.5*\x+0.4142*\y, 0.5*\y);
\draw [->] (6,0) -- (4.6+0.5*\x+0.4142*\y, 0.5*\y);
}
\end{tikzpicture}
\caption{Another interpretation of forceful colorings: since t-SNE preserves neighborhood structure, we expect that points which are similar in the input data will be subject to similar forces. On the other hand, nonhomogenous force vectors may indicate that the points are quite different despite being close in the embedding.}
\label{fig:rep}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[b!]
\begin{tikzpicture}
\node at (0,-2.5) {\includegraphics[width=0.25\textwidth]{gblind}};
\node at (3,-2.5) {\includegraphics[width=0.25\textwidth]{gattraction_directions}};
\node at (6.5,-2.5) {\includegraphics[width=0.35\textwidth]{gattraction_magnitudes_vert_bar}};
\end{tikzpicture}
\vspace{-10pt}
\caption{t-SNE embedding of two Gaussian clusters (left), the attractive forces (middle) and size of these forces (right). There is an ambiguous region in the middle, but it is possible to discern cluster identity from the direction of the attractive forces.}
\label{fig:gauss}
\end{figure}
\end{center}
\vspace{-20pt}
We also hypothesize that the force vectors can be used to measure the local homogeneity of the data. If two points $x_i$ and $x_j$ are similar in the original dataset, then they likely have similar affinity profiles~($p_{ik} \approx p_{jk}$ for each $k$). As a result, we can expect that (1) $y_i$ and $y_j$ will be nearby in the final t-SNE embedding and (2) attractive forces on $y_i$ and $y_j$ will have similar magnitudes and directions. On the other hand, if the forces on nearby embedded points are highly dissimilar, they may represent dramatically different samples in the original dataset (Fig. \ref{fig:rep} and Fig. \ref{fig:gauss}).
\subsection{Magnitude and Direction.} We found it interesting to consider the magnitude and direction of the attraction (repulsion) vectors in isolation. As examples, we plotted embeddings of the digits $1$ and $5$ from MNIST (Fig. \ref{fig:magnitude}, \ref{fig:dir}) and two high-dimensional Gaussian clusters (Fig. \ref{fig:gauss}). For both datasets, we observe that the forces are generally stronger on the edges of a cluster. This is not surprising: for an embedded point $y_i$ near a cluster boundary, the points $y_j$ with high input similarity $p_{ij}$ must lie in a halfspace about $y_i$, which limits the vector cancellation of attractive forces. However, the magnitude coloring also effectively illuminates the structure of the cluster's interior. In particular, we see the emergence of connected regions separated from other regions by a dramatic change of force. These internal regions become clearer when we plot the \textit{direction} of the vector.
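Concretely, the quantities behind such forceful colorings are cheap to extract. The following is a minimal numpy sketch (our own notation; the names are illustrative, not a specific library interface), taking the attractive part of the t-SNE gradient up to its global constant:
\begin{verbatim}
import numpy as np

def attractive_forces(P, Y):
    # P: (n, n) symmetric affinities, Y: (n, 2) embedding; returns the
    # attraction F_i = sum_j P_ij (y_j - y_i) / (1 + ||y_i - y_j||^2)
    D = Y[:, None, :] - Y[None, :, :]
    W = 1.0 / (1.0 + np.sum(D**2, axis=-1))
    return -np.einsum('ij,ij,ijk->ik', P, W, D)

# forceful coloring of the embedding:
# F = attractive_forces(P, Y)
# magnitude = np.linalg.norm(F, axis=1)
# angle = np.arctan2(F[:, 1], F[:, 0])    # hue on the color wheel
\end{verbatim}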
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.3\textwidth]{5cluster}};
\draw [ultra thick] (0.3,0) -- (0.3,1.8) -- (-1.8, 1.8) -- (-1.8, 0) -- (0.3, 0);
\node at (5,0) {\includegraphics[width=0.6\textwidth]{15zoom}};
\end{tikzpicture}
\vspace{-10pt}
\caption{The same t-SNE embedding of MNIST 1 and 5 as above. Zooming into a tiny region (on the left) and drawing the forces as a vector field (colored by magnitude) reveals remarkable inner structure that can serve as an additional feature.}
\label{fig:vector}
\end{figure}
\end{center}
\vspace{-10pt}
Naturally, we would like to use this information to refine t-SNE and other related methods. This could be done in many ways. For example, we can:
\begin{enumerate}
\item \label{enum:subcluster} use the vector field generated by the attractive forces to partition clusters into sub-clusters and/or refine cluster associations.
\item \label{enum:homogeneity} use the overall behavior of the force vector within a cluster as a measure of the cluster homogeneity.
\item \label{enum:earlyform} use force vectors for the identification of `barrier clusters' that formed too rapidly (see Fig. \ref{fig:mnist} and below for details).
\item compare the magnitude of the force vector acting on a point across multiple independent runs of t-SNE.
\item for proper group identification, only use information from runs in which the attractive forces acting on a specific particle end up being small.
\end{enumerate}
We illustrate (\ref{enum:subcluster}) using the MNIST embedding of $1$ and $5$. Focusing on the cluster of $5$'s, we observe that the attraction vector field contains several `sinks' -- regions where forces converge towards a single point (Fig. \ref{fig:vector}). We identified three potential subclusters using these sinks, and checked their coherence by computing their average image (Fig. \ref{fig:subclusters}). Since the images are sharp, most of the pictures in the cluster are similar to the mean. Moreover, the means themselves appear to represent different handwriting styles. For instance, digits in cluster $1$ have the most severe slant, while digits in $3$ have the most pronounced loop. This indicates that force vector fields can be useful for identifying subfeatures in a cluster's interior.
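A simple way to locate such sinks numerically, sketched below under the assumption that per-point forces $F$ as in the earlier snippet are available, is to interpolate the force field onto a regular grid and look for regions of strongly negative divergence (the grid size is an arbitrary illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

def force_divergence(Y, F, bins=60):
    # interpolate the force field onto a grid; sinks show up as
    # pronounced negative-divergence regions (NaN outside the hull)
    xs = np.linspace(Y[:, 0].min(), Y[:, 0].max(), bins)
    ys = np.linspace(Y[:, 1].min(), Y[:, 1].max(), bins)
    X, Yg = np.meshgrid(xs, ys)
    Fx = griddata(Y, F[:, 0], (X, Yg), method='linear')
    Fy = griddata(Y, F[:, 1], (X, Yg), method='linear')
    return X, Yg, np.gradient(Fx, xs, axis=1) + np.gradient(Fy, ys, axis=0)
\end{verbatim}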
Idea (\ref{enum:homogeneity}) was inspired by Fig. \ref{fig:rep} and by the fact that the Gaussian clusters (Fig. \ref{fig:gauss}) contain a single sink while the MNIST clusters (Fig. \ref{fig:vector}) have a more turbulent vector field.
We finally comment on (\ref{enum:earlyform}). Sometimes, during the t-SNE gradient descent, data of the same type simultaneously starts forming clusters in two different regions of space. One would assume a priori that these two clusters would then move towards each other and merge into a larger cluster. However, it is sometimes possible that other clusters have formed between the two and now act as a barrier. In Fig. \ref{fig:mnist}, this is the reason purple and light blue ($4$ and $9$) are so deeply intertwined: the two purple regions would tend to move towards each other if embedded in isolation, but they are obstructed by the light blue barrier, and vice versa. Naturally, this type of behavior shows up in the forceful colorings, which raises the interesting question of how to identify and circumvent it. Again, an abundance of ideas comes to mind, e.g. `teleporting' uniform clusters towards their force direction, temporarily increasing/decreasing the attractive/repulsive forces in that area, etc.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (3,0) {\includegraphics[width=0.6\textwidth]{15zoom_new}};
\draw [ultra thick] (2.3,0.6) -- (2.3, 2) -- (0.4, 2) -- (0.4, 0.6) -- (2.3, 0.6);
\draw [ultra thick] (4.1,-2.5) -- (4.1, 0.7) -- (1.5, 0.7) -- (1.5, -2.5) -- (4.1,-2.5);
\draw [ultra thick] (5.4,-0.2) -- (5.4, 1) -- (4.4, 1) -- (4.4, -0.2) -- (5.4,-0.2);
\node at (2.5, 2.1) {1};
\node at (1.2, -2.4) {2};
\node at (5.6, 1.1) {3};
\node at (-1.5,2) {\includegraphics[width=0.15\textwidth]{mean1}};
\node at (-1.5,0) {\includegraphics[width=0.15\textwidth]{mean2}};
\node at (-1.5,-2) {\includegraphics[width=0.15\textwidth]{mean3}};
\node at (-2.6, 2) {1};
\node at (-2.6, 0) {2};
\node at (-2.6, -2) {3};
\end{tikzpicture}
\vspace{-10pt}
\caption{Subclusters identified using the vector field in Figure \ref{fig:vector}. The three subclusters contain roughly $335$, $563$, and $170$ samples respectively. The left-hand side shows the mean MNIST image of each subcluster.}
\label{fig:subclusters}
\end{figure}
\end{center}
\vspace{-10pt}
We see that incorporating forces as a feature leads to many possible adaptations and variations on t-SNE and other force-based nonlinear dimensionality reduction methods. Investigating when these ideas are useful and how to best implement them seems like a very interesting avenue of future research.
\section{Mean Field Limit for t-SNE}
\subsection{Motivation.} This section describes purely theoretical work on t-SNE. Our goal was to find a simple setting in which the t-SNE functional can be studied using rigorous quantitative methods. We study the embedding of a single homogeneous cluster and emphasize
\begin{enumerate}
\item that the underlying approach extends to more complicated settings (see \S 3.4). Such extensions lead to more complicated problems in the calculus of variations that may be interesting in their own right.
\item that the underlying approach also extends to other attraction-repulsion based methods. Indeed, a similar type of analysis should be possible for many of the methods discussed in \cite{bohm, wang}.
\end{enumerate}
One reason there is so little theoretical work on t-SNE is the complexity of the setup: we are given a set of points $\mathcal{X} = \left\{x_1, \dots, x_n\right\} \subset \mathbb{R}^d$. For each pair $x_i$ and $x_j$, we define a measure of affinity $p_{ij}$. These affinities then fuel a dynamical system on $n$ particles $\mathcal{Y} = \left\{y_1, \dots, y_n \right\} \subset \mathbb{R}^s$ that determines the embedding. Each of these objects is already nontrivial on its own. The two existing theoretical approaches \cite{arora, george1} assume that the $p_{ij}$ are strongly clustered in order to deduce information about the dynamical system. Showing that t-SNE preserves pre-existing cluster structure amounts to a \textit{soft} analysis of the t-SNE mechanism. In contrast, we aim to present the first \textit{hard} analysis by making explicit quantitative statements about the output. This analysis involves classical techniques from the calculus of variations and leads to interesting problems. It also extends to more complicated settings (see \S 3.4 for details).
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (-0.2,0) {\includegraphics[width=0.32\textwidth]{randreg1}};
\node at (4,0) {\includegraphics[width=0.32\textwidth]{randreg2}};
\node at (8.2,0) {\includegraphics[width=0.32\textwidth]{randreg3}};
\end{tikzpicture}
\caption{Embeddings of random $k-$regular graphs on 40000 vertices: $k=40$ (left), $k=400$ (middle) and $k=4000$ (right). The mean field model predicts the diameter of the ring to scale as $\sim k^{-1/4} n^{-1/4}$. The emerging ring structure is \textit{not} reflective of any underlying circular structure in the data and is purely an artifact of the variational structure of the t-SNE functional.}
\label{fig:ring}
\end{figure}
\end{center}
\vspace{-20pt}
\subsection{Random Regular Graphs and their Mean Fields.}
In the t-SNE algorithm, the input data set $\mathcal{X} \subset \mathbb{R}^d$ does not directly enter into the computation of the final output $\mathcal{Y}$. Rather, it is the affinities $p_{ij}$ on $\mathcal{X}$ which are used to generate $\mathcal{Y}$. We will argue that, for the purpose of developing rigorous mathematical theory, it may be advantageous not to study t-SNE under assumptions on $\mathcal{X}$ but to start with the setting where one only imposes assumptions on the $p_{ij}$.
\begin{quote}
\textbf{Main Idea.} Instead of trying to impose structure on the original points $\left\{x_1, \dots, x_n\right\}$, a rigorous analysis of t-SNE should first address the case where the affinities $p_{ij}$ are structured. In particular, when the $p_{ij}$ are taken as the entries of an adjacency matrix of certain types of random graphs, there is a stochastic regularization phenomenon that simplifies the structure of the t-SNE energy.
\end{quote}
We tried to understand the implications of this idea in the very simplest case: embedding a single cluster in two dimensions.
There are at least two canonical models of what a perfectly homogeneous cluster could look like: (1) a random $k-$regular graph and (2) the Erd\H{o}s-Renyi random graph $G(n,p)$. We will see that, with regard to an effective mean-field limit, both models behave somewhat similarly. A more refined analysis shows that one of the t-SNE energy terms has a larger variance under the Erd\H{o}s-Renyi model. This is also confirmed by numerical experiments: pictures like Fig. \ref{fig:ring} are easy to produce for $k-$regular graphs, but it is not possible to get equally clear ring structures for the Erd\H{o}s-Renyi model (perhaps all that is required is a larger number of points, but it could conceivably also be a difference in the actual variational structure).
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[scale=4]
\foreach \x in {0,...,20}
\foreach \y in {0,...,20}
{
\filldraw (0.05*\x+ 0.03*rand,0.05*\y+0.03*rand) circle (0.01cm);
}
\draw [thick] (-0.1,-0.1) -- (0.2, -0.1) -- (0.2, 0.2) -- (-0.1, 0.2) -- (-0.1,-0.1);
\draw [thick] (0.7,0.7) -- (1.1, 0.7) -- (1.1, 1.1) -- (0.7, 1.1) -- (0.7,0.7);
\node at (-0.2, 0) {$A$};
\node at (1.2, 0.9) {$B$};
\end{tikzpicture}
\caption{Suppose the points are the final embedding of the vertices of an Erd\H{o}s-Renyi or a random $k-$regular graph: the number of edges running between the vertices in $A$ and the vertices in $B$ is under control and depends (up to a small error) only on the number of vertices in $A$ and $B$ \textit{independently of the embedding}.}
\label{fig:reg}
\end{figure}
\end{center}
\vspace{-10pt}
Our derivation will initially not distinguish between the Erd\H{o}s-Renyi model and the model of a $k-$regular random graph (with $k \sim p \cdot n$). In \S \ref{sec:erdmean}, we will discuss the arising quantities for the Erd\H{o}s-Renyi model and describe the crucial variance term which disappears in the random $k-$regular model. Throughout the rest of the paper, we will then refine our argument for random $k-$regular graphs.
The Erd\H{o}s-Renyi random graph $G(n,p)$ is a graph on $n$ vertices in which each pair of vertices is connected independently with probability $p$, where $0 < p < 1$ is fixed and we let $n$ become large (this is the dense setting). For such a random graph, we set
$$ p_{ij} = \begin{cases} 1 \qquad &\mbox{if}~i \sim_{E} j \\
0 \qquad &\mbox{otherwise.} \end{cases}$$
This corresponds to a graph on $n$ vertices having $\sim p \binom{n}{2}$ edges. It is a tightly connected cluster; there are no distinguished vertices and no underlying symmetries: each vertex plays essentially the same role. A random $k-$regular graph is simply a graph on $n$ vertices chosen uniformly at random from
the set of $k-$regular graphs on $n$ vertices. We note that in our setting of interest, where $k = p \cdot n$ with $0 < p < 1$ a fixed constant as $n \rightarrow \infty$, the two models are fairly similar in many respects: in the Erd\H{o}s-Renyi model, each vertex has $\sim p \cdot n \pm \mathcal{O}(\sqrt{n})$ neighbors.
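For numerical experiments, such affinity matrices are easy to generate. The following is a minimal sketch (we use networkx here as one possible tool; the library and parameter values are illustrative choices, not part of the model):
\begin{verbatim}
import networkx as nx
import numpy as np

n, p = 2000, 0.1
k = int(p * n)                 # matched degree; n * k must be even

# Erdos-Renyi affinities: p_ij = 1 iff {i,j} is an edge
A_er = nx.to_numpy_array(nx.gnp_random_graph(n, p))

# random k-regular affinities
A_reg = nx.to_numpy_array(nx.random_regular_graph(k, n))
\end{verbatim}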
Assuming that the $p_{ij}$ come from an Erd\H{o}s-Renyi graph or a random $k-$regular graph has several advantages. Once $n$ becomes large, there is an interesting regularization effect: for any two arbitrary subsets of vertices, as long as the number of vertices in each subset is not too small, we can estimate the number of edges that run between them fairly accurately (see also Fig. \ref{fig:reg}) and
$$ \# \left\{(a,b) \in E: a \in A \wedge b \in B \right\} \sim p \cdot |A| \cdot |B| \sim \frac{k}{n}\cdot |A| \cdot |B|.$$
This is a remarkable property because it implies that the associated energy should be primarily dominated by the distribution of $\left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^2$: what is mainly relevant is \textit{how} the points are distributed in $\mathbb{R}^2$ rather than how the underlying graph behaves. As such, we expect that, for given fixed $\left\{y_1, \dots, y_n\right\} \in \mathbb{R}^2$, the t-SNE energy is essentially constant for a randomly chosen graph from the same model.
Since it is virtually constant, it should be very well described by its expectation, with the square root of the variance describing the typical deviation -- and both of these quantities, the expectation $\mathbb{E}$ and the variance $\mathbb{V}$, can be computed. \\
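The regularization effect itself is easy to observe empirically; a quick check (reusing $n$, $k$ and \texttt{A\_reg} from the sketch above, with arbitrary subset sizes):
\begin{verbatim}
rng = np.random.default_rng(0)
A_set = rng.choice(n, size=200, replace=False)
B_set = rng.choice(np.setdiff1d(np.arange(n), A_set), size=400,
                   replace=False)
edges = A_reg[np.ix_(A_set, B_set)].sum()
print(edges, k / n * len(A_set) * len(B_set))   # nearly equal
\end{verbatim}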
This underlying assumption, namely that for a fixed random graph model the t-SNE energy is essentially given purely as a function of the embedding points $\left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^2$, is naturally a key ingredient and quite similar to many other models in, say, statistical physics: the behavior of individual particles is assumed to even out and to give rise to an emerging mean field. The consequences of this assumption are quite impactful: the energy is then given merely as a function of a distribution of points in the plane and we end up trying to minimize $I(\mu)$, where $I$ is a notion of energy and $\mu$ ranges over all probability measures on $\mathbb{R}^2$. This is a much more classical mathematical problem for which many more tools are available. In particular, the approach naturally extends to other nonlinear dimensionality reduction methods such as SNE \cite{hint}, ForceAtlas2 \cite{jac}, LargeVis \cite{largevis} or UMAP \cite{umap}. We believe this to be a very promising avenue for further investigations.
\subsection{Emerging Functionals.} Suppose now that the affinities $p_{ij}$ are given by one of the two random models described above and suppose that $\left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^2$ are given points in the plane. The t-SNE energy of this set of points (the functional we aim to minimize) will be shown to simplify (for the reason discussed in \S 3.2). We fix notation by introducing the probability measure $\mu$ in $\mathbb{R}^2$ given by
$$\mu = \frac{1}{n} \sum_{k=1}^{n} \delta_{y_k}$$
and write the approximation as
$$ \mbox{t-SNE energy} = \mathbb{E} \pm \sigma \sqrt{\mathbb{V}},$$
where both expectation $\mathbb{E}$ and variance $\mathbb{V}$ are computed with respect to the random Erd\H{o}s-Renyi model. These terms are fairly explicit: in particular, for the expectation we have (up to lower order errors)
$$ \mathbb{E} \sim 2p \binom{n}{2} \left[\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) - \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2 \right]$$
and we have a similar expression for the variance.
Let $\mu$ now be an arbitrary probability measure on $\mathbb{R}^2$; we then consider a renormalized energy given by
\begin{align*}
J_{\sigma, \delta}(\mu) &= \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) - \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2\\
&+ \frac{\sigma}{ \sqrt{p}} \delta \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu(x) d\mu(y)\right)^{1/2}.
\end{align*}
We expect that the behavior of the t-SNE energy for a random $k-$regular graph is approximately given by $J_{\sigma, \delta}$ with $\delta \sim n^{-1}$, $p \sim k/n$ and $\sigma$ a parameter at scale $\sim 1$.
The first two terms combined are always nonnegative: using the Cauchy-Schwarz inequality, we see that
$$ \int_{\mathbb{R}^2} \int_{ \mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) - \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2 \geq 0.$$
The first two terms may thus be understood as a `Cauchy-Schwarz deficit' which then interacts with the remaining variance term. We show that minimizers can be characterized and have a peculiar shape.
\begin{thm} Among radial measures $\mu$, the functional $J_{\sigma, \delta}(\mu)$ has a unique minimizer (up to translation symmetry) given by the normalized arclength measure on a circle if $\sigma < 0$ or a Dirac measure centered at a point if $\sigma > 0.$ \end{thm}
Since $\delta = n^{-1}$, a more precise analysis of the scaling (done in the proof of the Theorem) predicts that the optimal t-SNE embedding of a random $k-$regular graph (assuming $k$ is proportional to $n$) behaves like a ring with radius $\sim k^{-1/4} n^{-1/4}$ (see Fig. \ref{fig:ring}). Numerical experiments support this conjecture and we see the final points arranged in a ring-like shape; however, it is more difficult to test the scaling since the decay is rather slow. Moreover, our derivation does employ a Taylor expansion: we would thus expect the scaling to be rather accurate once the ring is sufficiently small (say, diameter $\leq 0.001$). As seen in Fig. \ref{fig:ring}, even for $n=40000$ and $k=4000$, the ring still has diameter $\sim 0.1$.
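For completeness, the plain gradient iteration underlying such experiments fits in a few lines. This is a minimal sketch of the standard t-SNE gradient (practical implementations add momentum and early exaggeration, which we omit here):
\begin{verbatim}
import numpy as np

def tsne_step(P, Y, lr=1.0):
    # one gradient descent step on E; P normalized to sum to 1
    D = Y[:, None, :] - Y[None, :, :]
    W = 1.0 / (1.0 + np.sum(D**2, axis=-1))
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    grad = 4.0 * np.einsum('ij,ij,ijk->ik', P - Q, W, D)
    return Y - lr * grad

# e.g. P = A / A.sum() for the adjacency matrix A of a random
# k-regular graph and Y = 1e-4 * np.random.randn(n, 2); iterating
# tsne_step long enough should produce ring-shaped embeddings
# as in the figure above.
\end{verbatim}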
\subsection{Open Problems.} This motivates several interesting problems.
\begin{enumerate}
\item \textbf{Erd\H{o}s-Renyi model.} Can this type of analysis be carried out for the Erd\H{o}s-Renyi model? The difficulty lies in the quantity
$$\log{\left( - n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}$$
which, for fixed measure $\mu$ and $n \rightarrow \infty$, scales like $\sim \log{n}$. We prove that, without this term, the measure localizes at
scale $\sim n^{-1/2}$ which is exactly the scale at which cancellation between the two terms occurs.
\item \textbf{Sparse Random Graphs.} We always work with $0 < p < 1$ or $k/n$ fixed; our graphs are always dense. One could also wonder about the case where $p,k$ become smaller as $n$ gets larger: one would assume that randomness then starts playing a larger role.
\item \textbf{Multiple clusters.} Can this analysis be extended to multiple clusters? The derivation of the functional itself is not difficult and can be carried out along the same lines. Numerical experiments suggest that the limit will not be a `ring' but rather two clusters that are more disk-like. If the two clusters end up being close to one another, a Taylor expansion may be carried out; if they stay at a distance, then a new approach will be required. We emphasize that these problems are really problems regarding the structure of energy minimizers of certain functionals and, while possibly hard, quite classical.
\item \textbf{Other methods.} We would expect that the embedding of a random $k-$regular graph will asymptotically concentrate in a point for most of these methods. A similar analysis can then be conceivably carried out -- different functionals may come with different characteristic length scales which might be an interesting point for comparison. In particular, direct variations of t-SNE have been proposed (see e.g. \cite{kobak0}) for which the underlying analysis might be somewhat similar. As mentioned above, we believe this to be a promising line for further research.
\item \textbf{Expanders.} Other connections are conceivable. In particular, the regularization property of Erd\H{o}s-Renyi graphs that we use states that for any two subsets of vertices $A,B \subseteq V$, the number of edges between $A$ and $B$ is proportional to $\sim p \cdot |A| \cdot |B|$. This property has in fact appeared in the context of expander graphs. More precisely, let $G = (V,E)$ be any $d-$regular graph on $n$ vertices. Then, for any disjoint $A,B \subset V$, the Expander Mixing Lemma (see Alon \& Chung \cite{alon}) says that
$$ \left| \# \left\{ (a,b) \in E: a \in A \wedge b \in B \right\} - \frac{d}{n} |A| \cdot |B| \right| \leq \lambda \sqrt{|A| \cdot |B|},$$
where $\lambda$ is the second largest eigenvalue of the adjacency matrix $A_G$. This is exactly the type of regularity property used in our derivation -- it is thus conceivable that one might be able to derive rigorous bounds about the t-SNE embedding of an expander graph (though it might be less clear how one would generalize such an approach to the setting where there is more than one cluster).
\end{enumerate}
\section{A Single Cluster: Proof of the Theorem}
\S 4 contains our theoretical arguments: \S 4.1 formally introduces t-SNE, \S 4.2 computes the mean field (along the lines discussed in \S 3.2), and \S 4.3 shows that this is sufficient to deduce that, as $n \rightarrow \infty$, the optimal t-SNE embedding of an Erd\H{o}s-Renyi random graph will shrink in diameter. This shrinkage will allow us to apply a Taylor approximation in \S 4.4 which will be shown to have a scaling symmetry in \S 4.5. Finally, we prove the Theorem in \S 4.6.
\subsection{The t-SNE functional.} We quickly recall the t-SNE algorithm. Given a set of points $\mathcal{X} = \{x_1, x_2, ..., x_n\} \subset \mathbb{R}^d$, we define the affinity $p_{ij}$ between any pair as
$$
p_{ij} = \frac{ p_{i|j} + p_{j|i} }{2n}, \qquad \mbox{where} \qquad p_{i|j} = \frac{\exp{(-\| x_i - x_j \|^2 /2 \sigma_i^2 )}}{\sum_{k\neq i} \exp{( - \| x_i - x_k \|^2/ 2 \sigma_i^2 )} }.
$$
The parameters $\sigma_i$ are usually set based on the local scale of the neighborhood of $x_i$. This expression for $p_{ij}$ will not be relevant to our analysis. Assume now that $\mathcal{Y} = \left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^s$ is our embedding. We describe a notion of energy that aims to quantify how `similar' the points $x_i$ and the points $y_i$ are. For this, we define the analogue of the $p_{ij}$: the quantity $q_{ij}$ will denote a notion of similarity between points $ y_i$ and $ y_j$ via
$$
q_{ij} = \frac{(1 + \| y_i - y_j\|^2)^{-1}}{\sum_{k\neq \ell} (1 +\| y_k - y_\ell\|^2 )^{-1}}.
$$
The energy is then defined as
$$
E = \sum_{i,j=1 \atop i\neq j}^n p_{ij} \log \frac{p_{ij}}{q_{ij}}.
$$
t-SNE then uses gradient descent to find an output $\mathcal{Y}$ which minimizes $E$. The remaining question is how to initialize $\mathcal{Y}$ for this step. Early implementations of t-SNE chose $\mathcal{Y}$ uniformly at random. However, there is evidence that initializing with the results of another dimensionality reduction method can better preserve global structure in the final embedding (see Kobak \& Linderman \cite{kobak3}). In light of our arguments above (especially \S 3.2), it is clear that initialization will not be important in our argument.
Finally, we observe that $E$ is solely a function of the $q_{ij}$ during the gradient descent optimization, since the $p_{ij}$ are constants determined by the input data. This naturally raises the question of whether other functions for $q_{ij}$ besides the one provided could produce interesting results. Empirical results have shown that this is indeed the case, as the decay of the function seems to correspond to the resolution with which a cluster's substructure is displayed (see \cite{kobak0}). However, many other choices of functionals are possible, and many result in methods that work quite well. We refer to B\"ohm, Behrens \& Kobak \cite{bohm} for a unifying overview.
In our analysis of t-SNE, we will fix the embedding dimension $s = 2$, and we will set $p_{ij}$ as the adjacency matrix of an Erd\H{o}s-Renyi or random $k$-regular graph. The original $p_{ij}$ form a probability distribution while our values are in $\{0, 1\}$. This is not an issue because, as is easily seen from the structure of the energy $E$, the local minima of $E$ are invariant under rescaling of $p_{ij}$.
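For later reference, the energy is straightforward to evaluate numerically. The following is a minimal numpy sketch (our own transcription of the formulas above; by the rescaling invariance just discussed, we may feed in the raw $\{0,1\}$ adjacency matrix directly):
\begin{verbatim}
import numpy as np

def tsne_energy(P, Y):
    # E = sum_{i != j} P_ij log(P_ij / Q_ij); with P in {0,1} the
    # P log P part vanishes and E = sum over edges of log(1 / Q_ij)
    D2 = np.sum((Y[:, None, :] - Y[None, :, :])**2, axis=-1)
    W = 1.0 / (1.0 + D2)
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    mask = P > 0
    return np.sum(P[mask] * np.log(P[mask] / Q[mask]))
\end{verbatim}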
\subsection{Mean Fields.} \label{sec:erdmean} This section discusses the first approximations.
We would like to understand the behavior of the following functional for large $n$:
$$
\sum_{i,j=1 \atop i\neq j}^n p_{ij} \log{\frac{p_{ij}}{q_{ij}}} = \sum_{i,j=1 \atop i\neq j}^n p_{ij} \log{p_{ij}} + \sum_{i,j=1 \atop i\neq j}^n p_{ij} \log{\frac{1}{q_{ij}}} \rightarrow \min.
$$
The first of these two sums only depends on $p_{ij}$, which are externally given and independent of the actual embedding of points in $\mathbb{R}^2$. We can thus safely ignore the first sum containing only $p_{ij}$ quantities. It remains to understand the second sum.
Plugging in the definition of $q_{ij}$ yields:
$$
\boxed{ \sum_{i,j=1 \atop i \neq j}^n p_{ij} \log \left( \sum_{k, \ell =1 \atop k \neq \ell}^n \frac{1}{1+\|y_{\ell} - y_k\|^2} \right) + \sum_{i,j=1 \atop i \neq j}^n p_{ij} \log{(1 + \|y_i - y_j\|^2)} \rightarrow \min.}
$$
This is the problem that we will analyze for the remainder of the paper. We observe that the functional consists of two terms. We will compute the expectation and variance of both.
\begin{enumerate}
\item For \textbf{k-regular graphs}, the first term simplifies because the number of edges is constant ($|E| = kn/2$). The inner sum does not depend on $i,j$ and
$$
\sum_{i,j=1 \atop i \neq j}^n p_{ij} \log \left( \sum_{k, \ell =1 \atop k \neq \ell}^n \frac{1}{1+\|y_{\ell} - y_k\|^2} \right) = kn \cdot \log \left( \sum_{k, \ell =1 \atop k \neq \ell}^n \frac{1}{1+\|y_{\ell} - y_k\|^2} \right).
$$
\item In the \textbf{Erd\H{o}s-Renyi model}, the first term has roughly the same expectation because
$$
\mathbb{E} \sum_{i,j=1 \atop i \neq j}^n p_{ij} = 2p \binom{n}{2}
$$
but a nontrivial variance (computed below).
\end{enumerate}
As above, we simplify notation by introducing the probability measure
$$
\mu = \frac{1}{n} \sum_{i=1}^{n}{ \delta_{y_i}}.
$$
\subsubsection{The first term.}
We can write the expectation of the first term for the Erd\H{o}s-Renyi random model as
$$
\mathbb{E} \sum_{i,j=1 \atop i \neq j}^n p_{ij} \log \left( \sum_{k, \ell =1 \atop k \neq \ell}^n \frac{1}{1+\|y_{\ell} - y_k\|^2} \right) = 2p \binom{n}{2} \log{\left( - n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}.
$$
Replacing $2p \binom{n}{2}$ with $kn$ leads to the corresponding expression for $k$-regular graphs (for which this term is in fact deterministic, since $\sum_{i \neq j} p_{ij} = kn$ exactly).
As evidenced by the numerical examples above and the arguments in \S 4.3, in the asymptotic regime the measure $\mu$ will concentrate around a point. This means that we expect the integral to be of size $\sim 1$, which turns the $-n$ inside the logarithm into a lower order term -- as it turns out, it is structurally similar to other lower order terms and can be absorbed by them.
For simplicity of exposition, we first ignore the lower order term and use the algebraically more convenient approximation
$$
\mathbb{E}_2 = 2p \binom{n}{2} \log{\left( n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}
$$
which can also be written as
$$
\mathbb{E}_2 =2p \binom{n}{2} \log{(n^2)} + 2p \binom{n}{2} \log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)},
$$
where only the second term depends on $\mu$. In Section \ref{subsec:pert}, we show how to compare $\mathbb{E}$ and $\mathbb{E}_2$.
In the $k$-regular case, the variance is $0$ because this term is constant. In the Erd\H{o}s-Renyi case, this is slightly different, but it is relatively easy to compute the variance
$$
\mathbb{V} \sum_{i,j=1 \atop i \neq j}^n p_{ij} \log \left( \sum_{k, \ell =1 \atop k \neq \ell}^n \frac{1}{1+\|y_{\ell} - y_k\|^2} \right).
$$
Recall that the variance of a sum of independent random variables is given by the sum of the variances and that for a random variable $X \sim \text{Bern}(p)$, we have $\mathbb{V} X = p(1-p)$. Therefore,
$$ \mathbb{V} = 2p (1-p) \binom{n}{2} \left[ \log{\left( - n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}\right]^2.$$
\subsubsection{The Second Term.}
It remains to analyze the second term.
$$
\sum_{i,j=1 \atop i \neq j}^n p_{ij} \log{(1 + \|y_i - y_j\|^2)}
$$
This term is more involved since it couples $p_{ij}$ with the location of $y_i$ and $y_j$ in $\mathbb{R}^2$. However, we are able to treat the random $k-$regular model and the Erd\H{o}s-Renyi model simultaneously.
For two arbitrary subsets of vertices, the number of edges between them cannot deviate substantially from its expectation. Let us now assume, more precisely, that $B_1, B_2 \subset \mathbb{R}^2$ are two small disjoint boxes in $\mathbb{R}^2$. The number of points in $B_1$ is given by $n \cdot \mu(B_1)$ and the number of points in $B_2$ is given by $n \cdot \mu(B_2)$.
Since the underlying graph is Erd\H{o}s-Renyi, we have that the expected number of edges with one vertex in $B_1$ and the other in $B_2$ is
$$
\mathbb{E} ~\sum_{v \in B_1} \sum_{w \in B_2} 1_{(v,w) \in E} = p n^2 \mu(B_1) \mu(B_2).
$$
For $k$-regular graphs, we use that
$$\mathbb{E}[1_{(v,w) \in E}] = \frac{kn/2}{\binom{n}{2}} = \frac{k}{n-1}$$
instead. Since $k/(n-1) \sim p$, this is essentially the same as with Erd\H{o}s-Renyi graphs.
In the next step, we will compute the variance. The variance of a sum of independent random variables is the sum of the variances of each individual random variable. Since the variance of Bernoulli random variables with likelihood $p$ is given by $p(1-p)$, we obtain
$$
\mathbb{V} ~\sum_{v \in B_1} \sum_{w \in B_2} 1_{(v,w) \in E} = p(1-p)n^2 \mu(B_1) \mu(B_2).
$$
From this we conclude that taking the expectation with respect to all Erd\H{o}s-Renyi random graphs for fixed $\left\{y_1, \dots, y_n\right\} \subset \mathbb{R}^2$ leads to
$$
\mathbb{E} \sum_{i,j=1 \atop i \neq j}^n p_{ij} \log{(1 + \|y_i - y_j\|^2)} = 2p \binom{n}{2} \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)} d \mu(x) d\mu(y).
$$
We recall that when dealing with the expectation $\mathbb{E}$, switching to the integral required taking out self-interactions (resulting in $\mathbb{E}_2$ and an analysis to be done in \S \ref{subsec:pert}). Self-interactions do not contribute here since $\log(1 + \|y_i - y_i\|^2) = 0$. By the same approach, we can compute the variance with respect to Erd\H{o}s-Renyi random graphs and arrive at
$$ \mathbb{V} \sum_{i,j=1 \atop i\neq j}^n p_{ij} \log{(1 + \|y_i - y_j\|^2)} = p(1-p) n^2\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)}^2 d \mu(x) d\mu(y).$$
It remains to compute the variance of this term with respect to the model of $k-$regular random graphs. Naturally, we expect this to be very close to the variance for the Erd\H{o}s-Renyi model. The main difference is that the $p_{ij}$ are no longer independent random variables but exhibit a slight negative correlation. We make use of the following Lemma.
\begin{lemma}
Let $\left\{ i, j, k, l\right\}$ denote four different vertices. Then, with respect to all random $k-$regular graphs on $n$ vertices, there exist two positive quantities $c_{k,n} \sim 1 \sim c_{2,k,n}$ (comparable to a universal constant) such that
$$ \mathbb{E} ~p_{i,j} p_{i,k} \sim \frac{k^2}{n^2} - c_{k,n} \frac{k^2}{n^3}$$
and
$$ \mathbb{E} ~p_{i,j} p_{k,l} = \frac{k^2}{n^2} - c_{2,k,n} \frac{k^2}{n^3}.$$
\end{lemma}
\begin{proof} We start with the first case. This is simply asking about the likelihood that two fixed edges $(i,j)$ and $(i,k)$ emanating from the same vertex both end up in a random $k-$regular graph. Since everything is invariant under relabeling and the product of two indicator functions is only $1$ when both are $1$, we have
$$ \mathbb{E} ~p_{i,j} p_{i,k} = \frac{k}{n-1} \frac{k-1}{n-2}.$$
An alternative argument would proceed as follows: the ways of choosing $k$ elements out of $n-1$ with 2 elements being fixed is given by
$$ \frac{\binom{n-3}{k-2}}{\binom{n-1}{k}} = \frac{\frac{(n-3)!}{ (n-k-1)! (k-2)!}}{ \frac{(n-1)!}{k! (n-1 -k)!}} = \frac{k (k-1)}{(n-1) (n-2)}.$$
Let us now consider the likelihood of two disjoint edges $p_{i,j} p_{k,l}$ being contained in the random graph. We use, since these are indicator functions,
$$ \mathbb{E} ~p_{i,j} p_{k,l} = \mathbb{P}( p_{i,j} p_{k,l} = 1) = \mathbb{P}\left( p_{k,l} = 1 \big| p_{i,j}=1\right) \cdot \mathbb{P}(p_{i,j} = 1).$$
We have $ \mathbb{P}(p_{i,j} = 1) = k/(n-1)$; it remains to understand the conditional probability. Suppose that $p_{i,j} = 1$.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\filldraw (0,0) circle (0.04cm);
\filldraw (0,1.5) circle (0.04cm);
\foreach \x in {0,1,2}
{
\draw (0,0) -- (2, 0.5*\x);
\draw (0,1.5) -- (2, 1.5-0.5*\x);
}
\node at (-0.2, 0) {$i$};
\node at (-0.2, 1.5) {$j$};
\draw [thick] (2.5,0.8) ellipse (0.5cm and 1cm);
\draw (-0, 0) -- (-0, 1.5);
\node at (3.8, 0.8) {$V \setminus \left\{i,j\right\}$};
\end{tikzpicture}
\caption{Sketch of the Argument.}
\end{figure}
\end{center}
\vspace{-10pt}
The symmetry of random $k-$regular graphs allows us to always relabel vertices in the complement. The likelihood of $p_{k,l} = 1$ subject to $p_{i,j} =1$ is thus simply determined by the total number of edges between vertices in $V \setminus \left\{i, j\right\}$. There are $nk/2$ edges in total. Of those, $k-1$ connect $i$ and $V \setminus \left\{i, j\right\}$ and $k-1$ connect $j$ and $V \setminus \left\{i, j\right\}$. Thus the subgraph induced by the vertices $V \setminus \left\{i,j\right\}$ has $nk/2 - 2k + 1$ edges. The likelihood of any random pair of vertices being connected is thus
$$ \mathbb{P}\left(p_{k,l} \big| p_{i,j} = 1\right) = \frac{nk/2 - 2k + 1}{\binom{n-2}{2}}.$$
Thus
$$ \mathbb{E} p_{i,j} p_{k,\ell} = \frac{k}{n-1} \frac{nk/2 - 2k + 1}{\binom{n-2}{2}} \sim \frac{k^2}{n^2} - 4 \frac{k^2}{n^3}.$$
\end{proof}
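Both exact expressions derived in the proof are easy to sanity-check by sampling random regular graphs; a small Monte Carlo sketch (networkx, with deliberately small parameters; an illustration, not part of the argument):
\begin{verbatim}
import networkx as nx

n0, k0, trials = 30, 6, 20000
adj = dis = 0
for _ in range(trials):
    G = nx.random_regular_graph(k0, n0)
    adj += G.has_edge(0, 1) and G.has_edge(0, 2)
    dis += G.has_edge(0, 1) and G.has_edge(2, 3)
print(adj / trials, k0 * (k0 - 1) / ((n0 - 1) * (n0 - 2)))
print(dis / trials, k0 / (n0 - 1) * (n0 * k0 / 2 - 2 * k0 + 1)
                    / ((n0 - 2) * (n0 - 3) / 2))
\end{verbatim}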
These expectations $ \mathbb{E} ~p_{i,j} p_{i,k}$ and $ \mathbb{E} ~p_{i,j} p_{k,l} $ are very close to what one would expect for independently chosen edges. This suggests that our computation of the variance assuming the Erd\H{o}s-Renyi model should be close to the truth and indeed it is.
\begin{lemma} Assuming this type of correlation structure, for arbitrary $x_{i,j} \in \mathbb{R}$,
$$ \mathbb{V} \sum_{i,j} p_{i,j} x_{i,j} \geq \frac{k}{n} \left(1 - \frac{k}{n}\right) \sum_{i,j} x_{i,j}^2 - c_{k,n} \frac{k^2}{n^3} \left( \sum_{i,j} x_{i,j}\right)^2.$$
\end{lemma}
\begin{proof} We have $\mathbb{V} X = \mathbb{E} X^2 - \left( \mathbb{E} X \right)^2$. Assuming the $p_{i,j}$ are independent variables that are 1 with likelihood $k/n$ and $0$ with likelihood $1-k/n$, we see
$$ \mathbb{V} \sum_{i,j} p_{i,j} x_{i,j} = \sum_{i,j} x_{i,j}^2 \mathbb{V} p_{i,j} = \frac{k}{n} \left(1 - \frac{k}{n}\right) \sum_{i,j} x_{i,j}^2.$$
Let us now assume that they are not independent but almost independent in the sense above. Then
\begin{align*}
\mathbb{E} \left( \sum_{i,j} p_{i,j} x_{i,j} \right)^2 &= \mathbb{E} \sum_{i_1, j_1, i_2, j_2} p_{i_1,j_1} x_{i_1,j_1} p_{i_2,j_2} x_{i_2,j_2} \\
&= \sum_{i_1, j_1, i_2, j_2} x_{i_1,j_1} x_{i_2,j_2} \mathbb{E}~ p_{i_1,j_1} p_{i_2,j_2}.
\end{align*}
This leads to exactly the same quantity as above except for an additional error term of size
$$ \sim - c_{k,n} \frac{k^2}{n^3} \left( \sum_{i,j} x_{i,j}\right)^2.$$
\end{proof}
A simple computation shows that we expect the constant to scale roughly like $c_{k,n} \sim 4$.
Combining all these ingredients, we expect the variance of the second term with respect to random $k-$regular graphs to be given by
\begin{align*}
\mathbb{V} \sum_{i,j=1 \atop i\neq j}^n p_{ij} \log{(1 + \|y_i - y_j\|^2)} &\sim p(1-p) n^2\int_{\mathbb{R}^2 \times \mathbb{R}^2} \log{(1 + \|x - y\|^2)}^2 d \mu(x) d\mu(y)\\
&- c_{k,n} p(1-p) n \left(\int_{\mathbb{R}^2 \times \mathbb{R}^2} \log{(1 + \|x - y\|^2)}d \mu(x) d\mu(y) \right)^2.
\end{align*}
The second term in this approximation is indeed a lower order perturbation as $n$ becomes large. The first term is exactly what is predicted by the Erd\H{o}s-Renyi random model.
\subsection{Shrinkage.} \label{sec:shrinkage} Taking the leading terms derived in the prior section, we have (up to lower order terms)
\begin{align*}
\mathbb{E} ~\mbox{t-SNE loss} &= 2p \binom{n}{2} \log{(n^2)} + 2p \binom{n}{2} \log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} \\
&+ 2p \binom{n}{2}\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)} d \mu(x) d\mu(y).
\end{align*}
The prefactor $2p\binom{n}{2}$ is the same in front of all three terms. Moreover, the first term is a constant depending only on $n$ and $p$ and is thus irrelevant for the study of minimizing configurations.
When studying the minimizer, it thus suffices to consider the rescaled functional
$$ I(\mu) = \log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} + \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)} d \mu(x) d\mu(y).$$
The logarithm is concave and thus we have by Jensen's inequality that
$$ \log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} \geq
\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log \left( \frac{ 1}{1+\|x-y\|^2} \right) d\mu(x) d\mu(y)$$
from which we deduce
$ I(\mu) \geq 0$
with equality if and only if all the mass is concentrated in one point, i.e. $\mu = \delta_{x_0}$ for some $x_0 \in \mathbb{R}^2$. This already illustrates part of the dynamics that plays out: the expected term (not considering lower order perturbations) has a Jensen-type structure and forces the measure to contract -- this is counter-balanced by quantities coming from the variance, and the interaction between these two effects leads to the final estimates about the scale of the measure.
We quickly establish a quantitative result that essentially states that if $\mu$ is spread out over a small scale (in a certain sense), then $I(\mu)$ has to be strictly bigger than $0$. (The proof will also show that if $\mu$ is spread out over a large area, then much stronger estimates hold; we will not concentrate on that part.) We define a length-scale $r(\mu)$ of any probability measure $\mu$ on $\mathbb{R}^2$ via
$$ r(\mu) = \inf \left\{ \mbox{sidelength}(Q): Q \subset \mathbb{R}^2, Q~\mbox{square}, ~\mu(Q) \geq \frac{1}{200} \right\}.$$
If $r(\mu) = 0$ (something that happens as soon as a single point carries at least $1/200$ of all the mass, which is not something that we would expect for the minimizers of the t-SNE energy), then the following Lemma does not say anything interesting, but one can simply replace $1/200$ by $0 \leq \delta \ll 1$ and rerun the argument. In the interest of clarity of exposition, we fix $\delta = 1/200$ in the definition of $r(\mu)$.
\begin{lemma} Let $\mu$ be a probability measure on $\mathbb{R}^2$. Then, for some universal $c>0$,
$$ I(\mu) \geq c\frac{r(\mu)^4}{(1+ r(\mu)^2)^4}.$$
\end{lemma}
We note that we are only interested in the case when the measure is already quite concentrated, i.e. $r(\mu)$ small. In that case, the lower bound is really $\gtrsim r(\mu)^4$ and, before proving the statement, we quickly show that it has the sharp scaling. Let $\mu$ be the sum of two Dirac measures, each having mass $1/2$ and being at distance $r$ from each other. Then a quick computation shows that
$$ I(\mu) = \log\left(\frac{1}{2} + \frac{1}{2} \frac{1}{1+r^2}\right) + \frac{1}{2} \log{(1+r^2)} \sim \frac{r^4}{8} + \mbox{l.o.t.} \qquad \mbox{as}~r \rightarrow 0.$$
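This expansion is also easy to confirm symbolically; for instance (a quick check, not part of the argument):
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
I_two = (sp.log(sp.Rational(1, 2) + 1 / (2 * (1 + r**2)))
         + sp.log(1 + r**2) / 2)
print(sp.series(I_two, r, 0, 6))     # prints r**4/8 + O(r**6)
\end{verbatim}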
\begin{proof} We start by using a refined version of Jensen's inequality (see \cite{liao}). If $\nu$ is a probability measure supported on $[0,1]$ and $Z$ is a random variable following that distribution, then
$$ \log \left( \mathbb{E} Z \right) - \int_{0}^{1} \log{(x)} d\nu(x) \geq \frac{1}{2} \mathbb{V} Z.$$
Unsurprisingly, this is a consequence of the strict uniform bound on the second derivative of $\log$ in the interval $(0,1)$: stronger results would be available if $\mathbb{E}Z$ is close to $0$ (see \cite{liao}), but we are interested in the case where $\mathbb{E}Z$ is fairly close to $1$. Let us now return to our original setting: given a measure $\mu$, we will consider the random variable
$$ Z = \frac{1}{1+\|X-Y\|^2}, ~\mbox{where} \quad X \sim \mu \sim Y$$
are two independent realizations of $\mu$. Then
$$ I(\mu) = \log \left( \mathbb{E} Z \right) - \mathbb{E} \log{Z}.$$
We can now introduce the induced measure $\nu$ describing the distribution of $Z$ in the unit interval via
$$ \nu(A) = \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} 1_{\frac{1}{1+\|x-y\|^2} \in A} ~ d\mu(x) d\mu(y).$$
Appealing to the strengthened Jensen inequality, we deduce
$$ 2 \cdot I(\mu) \geq \mathbb{V} Z.$$
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}[scale=0.8]
\draw [ultra thick] (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0);
\draw [thick] (-1, -1) -- (2,-1) -- (2,2) -- (-1, 2) -- (-1, -1);
\draw [thick] (-1, 0) -- (2, 0);
\draw [thick] (-1, 1) -- (2, 1);
\draw [thick] (0, -1) -- (0, 2);
\draw [thick] (1, -1) -- (1, 2);
\draw [thick] (-2, -2) -- (3, -2) -- (3,3) -- (-2,3) -- (-2,-2);
\node at (0.5, 0.5) {$Q_r$};
\end{tikzpicture}
\caption{The densest square $Q_r$.}
\end{figure}
\end{center}
It remains to understand how the variance of $Z$ depends on the distribution properties of $\mu$: more precisely, what we want to show is that if $\mu$ is not concentrated around a single point and $X \sim \mu \sim Y$, then the random variable
$$ Z = \frac{1}{1 + \|X-Y\|^2} \qquad \mbox{cannot be concentrated around a single value.}$$
This, in turn, is equivalent to showing that $\|X - Y\|$ cannot be concentrated around a single value. This is where our notion of $r(\mu)$ comes into play. We first assume that the smallest square containing $1/200$ of the total mass has positive side-length $r(\mu)$. Let us call this smallest square $Q_r$.
We note that $\mu(Q_r) \leq 1/50$: if it were larger, then we could subdivide $Q_r$ into four smaller squares, at least one of which would have measure at least $1/200$ at half the sidelength, which is a contradiction. We take the $24 = 5^2 - 1$ squares of equal sidelength surrounding $Q_r$: since each of these squares has measure at most $1/50$, the entire $5 \times 5$ box has measure at most $1/2$. This means that at least half the measure lies outside this $5 \times 5$ box and is thus at least distance $2r(\mu)$ from any point in $Q_r$. This shows that
$$ \mathbb{P} \left( \|X - Y \| \leq \sqrt{2} r(\mu)\right) \geq \frac{1}{200} \cdot \frac{1}{200} \quad \mbox{and} \quad \mathbb{P} \left( \|X - Y \| \geq 2 r(\mu) \right) \geq \frac{1}{200} \cdot \frac{1}{2}.$$
This proves that
$$ \mathbb{P} \left( \frac{1}{1+ \|X - Y \|^2} \geq \frac{1}{1 + 2 r(\mu)^2} \right) \geq \frac{1}{200} \cdot \frac{1}{200}$$
as well as
$$ \mathbb{P} \left( \frac{1}{1+\|X - Y \|^2} \leq \frac{1}{1+4r(\mu)^2} \right) \geq \frac{1}{200} \cdot \frac{1}{2}.$$
At this point we use the following simple Lemma: if $a<b$ and $\mathbb{P}(X \leq a) \geq c_1$ and $\mathbb{P}(X \geq b) \geq c_2$, then $\mathbb{V} X \gtrsim (b-a)^2,$ where the implicit constant depends on $c_1$ and $c_2$ (this follows since the mean has distance at least $(b-a)/2$ from at least one of $a$ and $b$).
This shows that, for some implicit universal constant,
$$ \mathbb{V} Z \gtrsim \frac{r(\mu)^4}{(1+ r(\mu)^2)^4}.$$
\end{proof}
\textit{A Curious Problem.} We quickly note the following curious problem that arises naturally in the context of the Lemma. Let $\Omega \subset \mathbb{R}^n$ be given and let $X, Y$ be two independent random variables sampled uniformly from $\Omega$, i.e. for each $A \subset \mathbb{R}^n$
$$ \mathbb{P}\left(X \in A\right) = \frac{|A \cap \Omega|}{|\Omega|}.$$
It is an interesting question to understand the behavior of $\mathbb{E} \| X - Y\|$, especially the question of how small this expectation can be. It is known \cite{blaschke, pfiefer} that $$\mathbb{E} \|X - Y\| \geq c_d |\Omega|^{1/d}$$ and that the extremal case is given by the ball (in light of the Riesz Rearrangement Inequality, this is perhaps not too surprising). We also refer to the more recent results \cite{thal, burg}.
One could naturally ask whether there is an analogous inequality for the variance, i.e. $$\mathbb{V} \|X-Y\| \geq c_d \cdot |\Omega|^{2/d}$$ and whether it is possible to identify the optimal shape. Is it again a ball?
\subsection{Taylor Expansions.} What we have seen in the preceding section is that the main parts of the t-SNE energy functional (assuming a random underlying graph) will ultimately lead to a shrinking of the cluster down to a small area in space. This allows us to further simplify all the functionals by replacing them with their Taylor expansions. We have three terms (two expectations and one variance).
We recall that one of the expectations, $\mathbb{E}_2$, is a lower order perturbation of the true expectation $\mathbb{E}$. We will first perform a Taylor expansion of the algebraically simpler quantity $\mathbb{E}_2$ before showing, in \S \ref{subsec:pert} that the difference between $\mathbb{E}$ and $\mathbb{E}_2$ can be absorbed in already existing lower order terms.
\subsubsection{Expectations.} We start by
analyzing the expectations, i.e.
$$ \log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} \quad \mbox{and} \quad \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)} d \mu(x) d\mu(y)$$
under the assumption that all the mass is contained in a ball of radius $r$ centered around a fixed point (due to the translation invariance of these functionals, it does not matter where that point is).
For the first term, since $\mu$ is a probability measure, we have
$$ \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} = 1 - \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ \|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y)$$
which we expect to be $\mathcal{O}(r^2)$ close to 1. Using the Taylor expansion of the logarithm around 1
$$ \log{(1+x)} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \dots,$$
we can expand the integral as
\begin{align*}
\log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} &= - \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{\|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y) \\
& - \frac{1}{2} \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{\|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y)\right)^2+ \mathcal{O}(r^6).
\end{align*}
We simplify the first integral using
$$ \frac{\|x-y\|^2}{1+\|x-y\|^2} = \|x-y\|^2 - \|x-y\|^4 + \mathcal{O}(\|x-y\|^6).$$
The second integral is already $\mathcal{O}(r^4)$ and we can thus perform the same simplification. This leads to
\begin{align*}
\log{\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} &= - \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) \\
&+ \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) \\
& - \frac{1}{2} \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2+ \mathcal{O}(r^6).
\end{align*}
For the second expectation, we again use the Taylor expansion of the logarithm leading to
\begin{align*}
\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1+\|x-y\|^2)} d\mu(x) d\mu(y) &= \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\\
&- \frac{1}{2} \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) + \mathcal{O}(r^6).
\end{align*}
Summing both of these terms up, we obtain as the Taylor expansion of the expected t-SNE energy (when averaging over all random graphs)
\begin{align*}
\mathbb{E}_2 &= pn(n-1) \log{(n^2)} + pn \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) \\
&+ \dfrac{pn(n-2)}{2} \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) \\
&- \dfrac{pn(n-1)}{2} \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2 + \mathcal{O}(r^6).
\end{align*}
This concludes our expansion of the terms controlling the expectations.
\subsubsection{Variance.} The variance depends on the random graph model. As derived above, in the Erd\H{o}s-Renyi case, we have that the variance of the first term satisfies
$$ \mathbb{V} = 2p (1-p) \binom{n}{2} \left[ \log{\left(- n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}\right]^2.$$
This quantity is a priori, for fixed $\mu$ and $n$ becoming large, an object at scale $\sim_p n^2 (\log{n})^2$, which is larger than the second term. One would expect that this tells us something about the size of the integral: presumably the integral is much smaller, so that the variance is not quite as large. In fact, one would perhaps believe that the integral is of such a size that the logarithm becomes small; this would suggest
$$ \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \sim \frac{1}{n}$$
which would indicate that $\mu$ is distributed over a scale of size $\sim n^{-1/2}$, which is the scaling we get in the case of $k-$regular random graphs. This case clearly presents some interesting dynamics; it would be desirable to have a better understanding.
However, switching to the case of random $k-$regular graphs, we see that there is only one variance term (the variance of the second term in the energy) and that this term is given by
$$ \mathbb{V} = p(1-p) n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \log{(1 + \|x - y\|^2)}^2 d \mu(x) d\mu(y).$$
A Taylor expansion up to $\mathcal{O}(r^6)$ shows that
$$ \mathbb{V} = p(1-p) n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu(x) d\mu(y) + \mathcal{O}(r^6).$$
\subsubsection{Adding a slight perturbation.} \label{subsec:pert} We will now compare the true expectation of the first term, namely
$$ \mathbb{E} = 2p \binom{n}{2} \log{\left( - n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}$$
to the algebraically more convenient approximation
$$ \mathbb{E}_2 = 2p \binom{n}{2} \log{\left( n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)}$$
that we have used up to now. The mean value theorem implies that for $ 0 < |y| \ll x$, we have
$$\log{(x+y)} \sim \log{x} + \frac{y}{x} + \mathcal{O}\left(\frac{y^2}{x^2}\right)$$
and therefore
\begin{align*}
\log{\left( - n + n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} &= \log{\left(n^2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)} \\
&- \frac{1}{n} \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)^{-1} + \mbox{l.o.t.}
\end{align*}
It remains to analyze this integral. As before, we can assume that $\mu$ is concentrated at scale $r$ around a single point and use
$$ \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ d\mu(x) d\mu(y)}{1+\|x-y\|^2} = 1 - \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ \|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y).$$
The geometric series
$$ \frac{1}{1-x} = 1 + x + x^2 + \dots$$
leads to
\begin{align*}
\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)^{-1} &= 1 + \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ \|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y) \\
&+ \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{ \|x-y\|^2}{1+\|x-y\|^2} d\mu(x) d\mu(y) \right)^2 + \mathcal{O}(r^6).
\end{align*}
Recalling
$$ \frac{\|x-y\|^2}{1+\|x-y\|^2} = \|x-y\|^2 - \|x-y\|^4 + \mathcal{O}(\|x-y\|^6)$$
we can simplify these integrals as above and get
\begin{align*}
\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \frac{d\mu(x) d\mu(y)}{1+\|x-y\|^2} \right)^{-1} &= 1 + \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) \\
&- \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) \\
&+ \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) \right)^2 + \mathcal{O}(r^6).
\end{align*}
Altogether,
\begin{align*}
\mathbb{E} &= \mathbb{E}_2 - p(n-1) - p(n-1) \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) + \mbox{l.o.t.} \\
&+ p(n-1) \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) - p(n-1) \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) \right)^2
\end{align*}
We note that, at the scale that we consider, only the first line will be relevant: the relevant terms in $\mathbb{E}, \mathbb{E}_2$ are at scale $\sim n^2 r^4$ which may be comparable to $\sim n r^2$ but for which $\sim n r^4$ is a lower order term.
\subsubsection{Conclusion.} This completes our Taylor expansion; we can now collect all the relevant terms for the Taylor expansion of the t-SNE energy with respect to a random $k-$regular graph up to leading order. For the expectation, we have our expansion for $\mathbb{E}_2$ and the correction term from the preceding section. After some simplification, we arrive at
\begin{align*}
\mathbb{E}~\mbox{t-SNE energy} &= pn(n-1) \log{(n^2)} - p(n-1) \\
&+ p \frac{n^2 - 2}{2} \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) \\
&- p \frac{(n-1)(n+2)}{2} \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2 \\
& + p \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) + \mbox{l.o.t.}
\end{align*}
We see that there are some constants depending only on $n, p$, there are two terms with (to leading order) the same pre-factor that emulate the dominant Jensen-type structure that we have already encountered above, and there is a lower order perturbation.
As for the variance, we recall that
$$
\mathbb{V} = p(1-p)n^2\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu(x) d\mu(y) + \mbox{l.o.t.}
$$
Having identified expectation and variance, the first approximation is naturally given by
$$ X \sim \mathbb{E}X \pm \sigma \sqrt{\mathbb{V} X},$$
where $\sigma$ is a random variable at scale $\sim 1$ (and, in many settings, one would expect it to be approximately Gaussian).
This is exactly the ansatz that we chose for our functional. Ignoring the constants (which have no impact on the structure of the minimizer), dividing by $\sim pn^2 /2$ and absorbing some universal constants depending only on $p$ in the scaling of $\sigma$, we see that the ansatz leads to
\begin{align*}
J_{\sigma, \delta}(\mu) &= \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) - \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2\\
&+ \frac{\sigma}{ \sqrt{p}} \delta \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu(x) d\mu(y)\right)^{1/2},
\end{align*}
where $\delta \sim 1/n$.
\subsection{Scaling Symmetry.} We first note that this functional has a scaling symmetry.
If we replace the measure $\mu$ by the rescaled measure $\mu_{\lambda}$ (defined in the canonical way: $\mu_{\lambda}(A) = \mu(\lambda^{-1} A)),$ then we see that
\begin{align*}
\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu_{\lambda}(x) d\mu_{\lambda}(y) &= \lambda^4 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) \\
\left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu_{\lambda}(x) d\mu_{\lambda}(y)\right)^2 &= \lambda^4 \left( \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y)\right)^2 \\
\left(\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu_{\lambda}(x) d\mu_{\lambda}(y)\right)^{1/2} &= \lambda^2 \left(\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x - y\|^4 d \mu(x) d\mu(y)\right)^{1/2}
\end{align*}
and therefore, for any $\lambda > 0$,
$$ J_{\sigma, \delta}(\mu_{\lambda}) \frac{1}{\lambda^4} = J_{\sigma, \delta \lambda^{-2}}(\mu).$$
As the number of points $n$ increases, $\delta$ decreases. This, however, does not fundamentally alter the functional; it merely changes the scale of extremal configurations. We can thus assume without loss of generality that $\delta = 1$ and study the simplified functional
$J_{\sigma} := J_{\sigma, 1}.$
\subsection{Radial Solutions.} We will now analyze $J_{\sigma}$ for $\sigma$ fixed under the additional assumption that $\mu$ is radial. This is partially inspired by numerical results which seemed to result in radial configurations. It could be interesting to try to remove that assumption.
Assuming the measure $\mu$ to be radial, we will introduce $\nu$ as the measure on $\mathbb{R}_{\geq 0}$ such that for all $A \subset [0,\infty)$
$$ \nu(A) = \mu \left( \left\{x \in \mathbb{R}^2: \|x\| \in A \right\} \right).$$
This makes $\nu$ a probability measure on $[0,\infty)$.
We require the two basic integral identities
\begin{align*}
\frac{1}{2\pi r} \frac{1}{2\pi s} \int_{\|x\| = r} \int_{\|y\|=s} \|x-y\|^2 dx dy &= r^2 + s^2. \\
\frac{1}{2\pi r} \frac{1}{2\pi s} \int_{\|x\| = r} \int_{\|y\|=s} \|x-y\|^4 dx dy &= r^4 +4r^2 s^2 + s^4.
\end{align*}
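For completeness, both identities follow from writing $\|x-y\|^2 = r^2 + s^2 - 2 r s \cos \theta$, with $\theta$ the angle between $x$ and $y$, and averaging over the angle: since $\cos \theta$ averages to $0$ and $\cos^2 \theta$ to $1/2$,
\begin{align*}
\frac{1}{2\pi} \int_0^{2\pi} \left(r^2 + s^2 - 2 r s \cos \theta \right) d\theta &= r^2 + s^2 \qquad \mbox{and} \\
\frac{1}{2\pi} \int_0^{2\pi} \left(r^2 + s^2 - 2 r s \cos \theta\right)^2 d\theta &= (r^2+s^2)^2 + 2 r^2 s^2 = r^4 + 4 r^2 s^2 + s^4.
\end{align*}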
We then have, by switching to polar coordinates,
\begin{align*}
\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^4 d\mu(x) d\mu(y) &= \int_0^{\infty} \int_0^{\infty} (r^4 +4r^2 s^2 + s^4) d\nu(r) d\nu(s) \\
&= 2 \int_0^{\infty} r^4 d\nu(r) + 4\left( \int_0^{\infty} r^2 d\nu(r) \right)^2.
\end{align*}
Likewise, we have
\begin{align*}
\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \|x-y\|^2 d\mu(x) d\mu(y) &= \int_0^{\infty} \int_0^{\infty} (r^2 + s^2) d\nu(r) d\nu(s) = 2 \int_0^{\infty} r^2 d\nu(r).
\end{align*}
Thus, for radial measures, the functional simplifies to (dividing without loss of generality by a factor of 2 for simplicity)
\begin{align*}
J_{\sigma,1}(\nu) &= \int_0^{\infty} r^4 d\nu(r) + \frac{\sigma}{\sqrt{p}} \left( \frac{1}{2} \int_0^{\infty} r^4 d\nu(r) + \left( \int_0^{\infty} r^2 d\nu(r) \right)^2 \right)^{1/2}.
\end{align*}
At this point we can rewrite everything in terms of moments of a random variable $X$ that is distributed according to $X \sim \nu$ as
$$ J_{\sigma,1}(\nu) = \mathbb{E} X^4 + \frac{\sigma}{\sqrt{p}} \left(\frac{1}{2} \mathbb{E}X^4 + (\mathbb{E} X^2)^2\right)^{1/2}.$$
We recall the Cauchy-Schwarz inequality
$$0 \leq \mathbb{E} X^2 \leq \left(\mathbb{E} X^4\right)^{1/2}.$$
We can thus reduce the problem to one in multivariable calculus: for all $0 \leq a \leq \sqrt{b}$, what can be said about the minimum of
$$ f(a,b) = b + c \cdot \sqrt{ \frac{b}{2} + a^2},$$
where $c=\sigma/\sqrt{p}$ (recall that we have set $\delta = 1$). Observe that
$$ \frac{\partial f}{\partial b} = 1 + \frac{c}{4\sqrt{a^2 + b/2}}$$
which shows that for $c \geq 0$, the minimizer is given by the trivial solution where the entire mass is collected in a point $\nu = \delta_0$. Let us thus assume $c < 0$. Then
$$ \frac{\partial f}{\partial a} = \frac{ac}{\sqrt{a^2 + b/2}} \leq 0$$
and the functional decreases under increasing $a$. We thus want to have $a = \sqrt{b}$, which corresponds to the entire probability mass being collected in a single point. A simple computation (the derivative of $b \mapsto b + c \sqrt{3b/2}$ vanishes at $\sqrt{b} = |c|\sqrt{3/8}$) shows that if $a = \sqrt{b}$, then the minimum of $f(\sqrt{b}, b)$ for $c<0$ is attained at
$ b_{*} = 3c^2/8$
and thus the random variable is concentrated at distance $\sim \sqrt{|c|}$ from the origin. Recalling that we expect $\sigma \sim 1$, this corresponds to (for $\sigma < 0$) the functional $J_{\sigma, 1}$ assuming its minimum for a ring of radius $ \sim p^{-1/4}$.
Recalling the scaling symmetry
$ J_{\sigma, \delta}(\mu_{\lambda}) \lambda^{-4} = J_{\sigma, \delta \lambda^{-2}}(\mu)$
we thus expect the functional $ J_{\sigma, \delta}$ to assume its minimum for radius $\delta^{1/2} p^{-1/4}$.
Recalling $\delta = 1/n$, we arrive at the scaling of a ring forming at distance $\sim n^{-1/2} p^{-1/4}$. Finally, for a random $k$-regular graph we have $k=p \cdot n$, and thus $n^{-1/2} p^{-1/4} = n^{-1/2} (k/n)^{-1/4} = k^{-1/4} n^{-1/4}$.
\section{Numerical Results}
We conclude with a discussion of some numerical experiments to test the assumptions on scaling that guided our derivation. Our underlying assumption is that there is simply no good way to embed a large random graph; the object is too high-dimensional. More precisely, we assumed that if we are given an embedding $\{y_1, \dots, y_n \} \subset \mathbb{R}^2$, then -- due to stochastic regularization -- the t-SNE energy $E$ will be essentially constant across all graphs sampled from a fixed model. We expect $E$ to be close to the expectation and that the typical deviation from the expectation is given by the variance. This section focuses on testing this hypothesis.
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (-4,0) {\includegraphics[width=0.35\textwidth]{rr_energy.png}};
\node at (0,0) {\includegraphics[width=0.35\textwidth]{rr_expected_energy.png}};
\node at (4,0) {\includegraphics[width=0.35\textwidth]{rr_variance.png}};
\end{tikzpicture}
\caption{Scatterplots of t-SNE energy (left), expected energy (center), and variance (right) of all trials of each parameter setting. We observe little variance between trials, which is unsurprising due to the stochastic regularity of the underlying graph model. The t-SNE energy is mostly explained by its expectation.}
\label{fig:stats}
\end{figure}
\end{center}
\subsection{Experiment Setup.}
We used the NetworkX Python library to generate random regular graphs $G$, and we ran t-SNE using the implementation of Kluger et al.~\cite{george2} with our custom values for $P$. We normalized $P$ to a probability distribution, i.e., $p_{ij} \in \{0, 1/(2|E(G)|)\}$, though as discussed previously this has no effect on the minima of the t-SNE energy $E$. We ran t-SNE as follows: after a PCA initialization, we applied an early exaggeration factor of 12 for 250 iterations, and then finished with an additional 500 normal iterations. We checked that this was sufficient for the embedding to stabilize. With our chosen normalization, we calculated the t-SNE energy as
\begin{align*}
\mbox{t-SNE} &= \sum_{(i,j) \in E(G)} \frac{1}{|E(G)|} \log \left( \sum_{i,j =1 \atop i \neq j}^n \frac{1}{1+ \|y_i - y_j\|^2} \right) \\
&+ \frac{1}{|E(G)|}\sum_{(i,j) \in E(G)} \log\left(1+\|y_i - y_j\|^2\right)
\end{align*}
and the expectation and variance as
\begin{align*}
\mathbb{E} ~\mbox{t-SNE}&= \log \left( \sum_{i,j =1 \atop i \neq j}^n \frac{1}{1+ \|y_i - y_j\|^2} \right) + \frac{p}{2 \cdot |E|} \sum_{i,j=1 \atop i \neq j}^{n}\log\left(1+\|y_i - y_j\|^2\right) \\
\mathbb{V} ~\mbox{t-SNE} &= p(1-p) \sum_{i,j=1 \atop i \neq j}^n \frac{1}{4\cdot |E|^2} \left(\log\left(1+\|y_i - y_j\|^2\right)\right)^2.
\end{align*}
Since we work with random regular graphs, the variance comes from the second energy term only. We ran calculations for all graph parameter combinations with
$$
n \in \{10{,}000,\ 20{,}000,\ 30{,}000,\ 40{,}000\} \quad \text{and} \quad p \in \{0.05, 0.06, \ldots, 0.1\}.
$$
The graph degree is $k = n\cdot p$. We ran $10$ trials for each parameter setting.
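For concreteness, the following sketch shows how these three statistics and the resulting $\sigma$ can be computed for a given embedding (Python with NumPy and NetworkX; the function name \texttt{energy\_stats} is ours, and a random placeholder embedding stands in for the actual t-SNE output).
\begin{verbatim}
import numpy as np
import networkx as nx

def energy_stats(Y, G, p):
    # Y: (n, 2) embedding, G: the graph, p: edge density k/n.
    m = G.number_of_edges()
    A = nx.to_numpy_array(G)              # symmetric; counts each edge twice
    sq = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    L = np.log1p(sq)                      # log(1 + ||y_i - y_j||^2); zero on diagonal
    np.fill_diagonal(sq, np.inf)          # exclude i = j from the partition sum
    Z = np.sum(1.0 / (1.0 + sq))          # sum_{i != j} 1 / (1 + ||y_i - y_j||^2)
    energy = np.log(Z) + np.sum(A * L) / (2.0 * m)
    expectation = np.log(Z) + p * np.sum(L) / (2.0 * m)
    variance = p * (1.0 - p) * np.sum(L ** 2) / (4.0 * m ** 2)
    sigma = (energy - expectation) / np.sqrt(variance)
    return energy, expectation, variance, sigma

n, p = 1000, 0.1                          # degree k = n * p = 100 (n * k must be even)
G = nx.random_regular_graph(int(n * p), n, seed=0)
Y = np.random.default_rng(0).normal(size=(n, 2))  # placeholder for the t-SNE output
print(energy_stats(Y, G, p))
\end{verbatim}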
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (-5,0) {\includegraphics[width=0.3\textwidth]{randreg4.png}};
\node at (0,0) {\includegraphics[width=0.3\textwidth]{randreg5.png}};
\end{tikzpicture}
\caption{t-SNE embedding of a random regular graph with $p=0.1$ and $n=10000$ (left) and $40000$ (right). The embedding diameter shrinks as $n$ increases due to the Jensen-like structure of the energy, which tends to concentrate the embedding at a point.}
\label{fig:ringn}
\end{figure}
\end{center}
\vspace{-20pt}
\subsection{Results.}
Our results confirm that the expected energy roughly equals the actual energy (Fig. \ref{fig:stats}). We also observe that in typical realizations, the expectation is many orders of magnitude larger than the variance. The calculations support our hypothesis that t-SNE works by minimizing the Jensen-like gap between the energy and its expected value, but injects some randomness to prevent the embedding from converging to a point mass. We observe how the Jensen structure tends to concentrate the embedding in Fig. \ref{fig:ringn}, which shows that the output diameter tends to decrease as $n$ increases.
We also hypothesize that the energy of the final embedding is described by:
$$
\mbox{t-SNE energy}(y_1, \dots, y_n) = \mathbb{E}(y_1, \dots, y_n) + \sigma \sqrt{ \mathbb{V}(y_1, \dots, y_n)}
$$
where $\sigma$ takes values at scale $\sim 1$. We tested this conjectured relationship numerically by computing the actual energy, the expected energy, and the variance, and then solving for $\sigma$. The results are shown in Fig. \ref{fig:sigma} and suggest that this assumption is reasonable. Indeed, we emphasize that, due to the scaling by $2 \cdot |E|$, the expected energy is at scale $\sim 20$ while the variance is closer to $\sim 10^{-9}$. Having $\sigma \sim 1$ for these very different scales is a good indicator that our assumption on stochastic regularization is meaningful in this context. While it would be difficult to argue convincingly that $\sigma$ behaves like a Gaussian, it does seem to be roughly centered at 0 with variance roughly $\sim 1$ (Fig. \ref{fig:sigma}).
\vspace{-15pt}
\begin{center}
\begin{figure}[h!]
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.6\textwidth]{rr_sigma.png}};
\node at (6,0) {\includegraphics[width=0.33\textwidth]{rr_dist.png}};
\end{tikzpicture}
\caption{Left: Scatterplots of $\sigma$ of all trials of each parameter setting. Right: Histogram of $\sigma$ for $n=10000$ and $p=0.1$.}
\label{fig:sigma}
\end{figure}
\end{center}
\vspace{-15pt}
\section{Introduction}
\label{sec:intro}
To address the rapid increase of wireless data traffic demand in the upcoming years, the wireless industry has turned its attention to the unlicensed spectrum bands as a way to aggregate additional bands and improve the capacity of future cellular systems~\cite{zhangh:15,zhan:15,labib:17}. The unlicensed spectrum that has global worldwide availability includes the 2.4 GHz, 5 GHz, and 60 GHz bands.
In the unlicensed 60 GHz band, there has been a release of 9 GHz of spectrum in Europe and of 14 GHz in the USA~\cite{5Gamericas}, which provides 10$\times$ (in Europe) and 16$\times$ (in the USA) as much unlicensed spectrum as is available in sub 6 GHz bands.
Due to the large amount of spectrum available, the design of a system able to work in \gls{mmwave} carrier frequencies (30-300 GHz) is inevitable in order to achieve multi-Gigabit/s data rates for a large number of devices~\cite{pi:11,wong17}.
The \gls{3gpp} is currently in a full standardization process of \gls{nr}\footnote{The first version of \gls{nr} specification was published as a part of \gls{nr} Rel-15 in June 2018, while the remaining part of the specification is planned to be published as a part of \gls{nr} Rel-16 (in early 2020) as well as a part of subsequent releases.}, the \gls{rat} for \gls{5g} systems~\cite{TS38300,TR38912}, which has inherent support for operation at high carrier frequencies within the \gls{mmwave} spectrum region with wide-bandwidth~\cite{parkvall:17,giordani:19}.
One of the options which is being considered is to allow \gls{nr} to operate in unlicensed bands through \gls{nru}. It is similar to what was previously proposed in the case of \gls{lte} in unlicensed spectrum for the 5 GHz band, through its different variants~\cite{labib:17}, namely
\gls{laa}~\cite{kwon:17,TR36889}, \gls{lteu}~\cite{zhan:15,LTEU}, and MulteFire~\cite{rosa:18,multefire}.
The design of \gls{nru} started in a study item of \gls{nr} Rel-16 in 2018~\cite{RP-170828,TR38889}, and it is currently being developed as one of the \gls{nr} Rel-16 work items, which will enable its inclusion in future \gls{nr} specification~\cite{RP-190706}.
The primary objective of \gls{nru} is to extend the applicability of \gls{nr} to unlicensed spectrum bands as a general purpose technology that works across different bands and uses a design that allows fair coexistence across different \gls{rat}s.
Differently from \gls{laa} and \gls{lteu}, which were based on carrier aggregation using the unlicensed 5 GHz band, and from MulteFire, which has so far used standalone operation in the 5 GHz band, \gls{nru} considers multiple bands and various deployment modes.
The frequency bands discussed for \gls{nru} include 2.4 GHz, 5 GHz, 6 GHz,
and 60 GHz unlicensed bands\footnote{The \gls{nru} work item in \gls{nr} Rel-16 has started while focusing on sub 7 GHz bands~\cite{RP-190706}, but the extension to unlicensed \gls{mmwave} bands will probably be included in later releases, i.e., \gls{nr} Rel-17 and beyond. References to sub 7 GHz are intended to include the unlicensed bands in the 6 GHz region that have some region exceeding 7 GHz (e.g., 7.125 GHz). This differs from the classification of spectrum in \gls{nr} that considers sub 6 GHz bands and \gls{mmwave} frequency ranges.}, as well as 3.5 GHz and 37 GHz bands, which are devoted to shared access in the USA.
As confirmed by \gls{3gpp}, the 60 GHz band is an attractive candidate for \gls{nru}, since it is currently not very crowded and can offer a large amount of contiguous bandwidth~\cite{TR38805}.
Regarding the deployment modes, \gls{nru} supports carrier aggregation, dual connectivity, and standalone operation in unlicensed.
All in all, \gls{nru} is a milestone for 3GPP, which will allow, among others, standalone operation of NR in unlicensed spectrum including the mmWave bands with beam-based transmissions.
One of the most critical issues of allowing cellular networks to operate in unlicensed spectrum is to ensure a fair and harmonious coexistence with other unlicensed systems, such as Wi-Fi in the 5 GHz band (IEEE 802.11a/n/ac/ax) and directional multi-Gigabit Wi-Fi in the 60 GHz band (IEEE 802.11ad/ay, also known as \gls{wigig})~\cite{wigig,nitsche:14,ghasempour:17}.
Fairness for \gls{nru} operation in the unlicensed bands is defined as the requirement that \gls{nru} devices do not impact already deployed Wi-Fi services more than an additional Wi-Fi network would do on the same carrier~\cite{RP-170828}.
For a fair coexistence, any \gls{rat} that wants to operate in the unlicensed spectrum (e.g., \gls{nru}) has to be designed in accordance with the regulatory requirements of the corresponding bands. In the case of the 5 GHz and 60 GHz bands, the regulation mandates the use of \gls{lbt} in Europe and Japan~\cite{ETSI302567}.
\gls{lbt} is a spectrum sharing mechanism by which a device senses the channel using a \gls{cca} check before accessing it. \gls{lbt} works across different \gls{rat}s, and it is adopted by \gls{laa}, MulteFire, Wi-Fi, and \gls{wigig} to comply with the regulation (known as \gls{csma-ca} in the IEEE 802.11 context). However, even with omnidirectional communications, \gls{lbt} suffers from the hidden node and exposed node problems due to the differences in the sensing, transmission, and reception ranges\footnote{A hidden node problem arises when a node cannot hear an on-going transmission in the channel and declares the channel free to transmit but, if that node transmits, it collides with the on-going transmission. An exposed node problem, instead, appears when a node senses the channel as busy because it can listen to an on-going transmission but it could have transmitted simultaneously with that on-going transmission without creating any collision.}.
Coexistence in the 5 GHz band has been well studied in recent years to let \gls{lte} in unlicensed spectrum gracefully coexist with Wi-Fi~\cite{zhangh:15, zhan:15, labib:17, lorenza:16}.
Since \gls{lte} was initially designed to work in licensed bands on the basis of uninterrupted and synchronous operations, it was required to be later adapted to work with asynchronous protocols for operation in the unlicensed 5 GHz band. Differently, due to the on-going \gls{nr} standardization, \gls{nru} can be designed from the start with a great amount of flexibility for efficient operation in unlicensed spectrum bands.
Nevertheless, there is a major difference between \gls{nru} coexistence with other \gls{rat}s as compared to \gls{lte}/Wi-Fi coexistence in the 5 GHz band because of the use of beam-based (or directional) transmissions in \gls{nr}.
\gls{nr} has standardized beam management procedures for \gls{gnb}s and \gls{ue}s in all operational bands~\cite[Sec. 8.2.1.6.1]{TR38912}. In particular, directional communications are needed in \gls{mmwave} bands due to its characteristic propagation conditions, which require the use of beamforming to overcome propagation limits like severe pathloss, blocking, and oxygen absorption in case of the 60 GHz band~\cite{pi:11,andrews:17}. Similarly, \gls{wigig} (IEEE 802.11ad/ay) has been particularly designed to deal with these impairments by making directionality mandatory at either the transmitter or receiver~\cite{wigig}.
The beam-based transmissions envisioned in \gls{nr} may cause less interference and enable spatial reuse. However, the different interference layout due to the directional transmissions also changes the coexistence framework in the unlicensed spectrum. In particular, the directionality may aggravate the hidden node and exposed node problems in the unlicensed bands~\cite{subramanian:10}.
As such, the beam-based transmissions make the \gls{nru} coexistence framework more challenging as compared to coexistence with omnidirectional transmissions/receptions in Wi-Fi and \gls{lte} in unlicensed spectrum.
\subsection{Objective and Contribution}
The objective of this paper is to give the reader a complete overview of the major design principles and solutions for \gls{nru} operation in unlicensed bands, with an emphasis on mmWave bands, by taking into account the beam-based transmissions and the worldwide regulatory requirements. \gls{nru} technology is currently under development, and hence we focus our discussions to a set of key features and functionalities that are likely to be included in the final specification.
For that, we go through the main \gls{nr} features defined in Rel-15 and discuss the challenges in adapting them to meet the regulation for use in unlicensed spectrum and to coexist with other \gls{rat}s. We mainly focus our discussions on the design of \gls{phy} and \gls{mac} layers.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We review the spectrum allocation and regulatory requirements for the unlicensed bands that have the most potential for \gls{nru}, i.e., 5 GHz, 6 GHz, and 60 GHz bands.
\item We outline the \gls{nru} scenarios and \gls{lbt} procedures under discussion in 3GPP and highlight the \gls{nr} features that need to be revisited for \gls{nru}.
\item By considering the
regulatory requirements and the impact of narrow
beam transmissions, we elaborate on a variety of critical challenges that are encountered in different \gls{nru} scenarios, related to the following areas:
\begin{itemize}
\item the redefinition and implementation of \gls{lbt}-based \textbf{channel access procedures},
\item the selection of the \textbf{frame structure}
in \gls{tdd} systems,
\item the adaptation of \gls{nr} \textbf{initial access procedures},
\item the redesign of \gls{nr} \textbf{re-transmission procedures} based on \gls{harq} and \textbf{scheduling schemes}.
\end{itemize}
For each one of the identified challenges, we review the available literature and interesting standard contributions, and suggest innovative design solutions that can be further elaborated in future works.
\item Finally, we evaluate and compare different LBT-compliant channel access procedures with the aid of simulations in different NR-U/WiGig coexistence scenarios at the 60 GHz band.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.91\textwidth]{timing_techs.pdf}
\caption{Standardization timeline of technologies that use unlicensed spectrum.}
\label{fig_timing}
\end{figure*}
To the best of our knowledge, this is the first work that provides a detailed discussion on the design considerations and development process of beam-based \gls{nru}. Apart from the channel access procedures, no other work discusses the remaining design challenges and solutions for \gls{nru}\footnote{Although we focus on beam-based transmissions, some of the discussions in this paper regarding frame structure, initial access, \gls{harq}, and scheduling, also apply to \gls{nru} with omnidirectional transmissions.}. Besides, regarding the channel access procedures, we analyze, compare, and evaluate different procedures in this paper.
Let us remark that in this paper we focus on \gls{nru}, assuming some basic knowledge from the reader about \gls{nr}. An overall description of \gls{nr} can be found in~\cite{TS38300}, and key papers are~\cite{parkvall:17,zaidi:16,liu:18}. Throughout this paper we refer the reader to specific sections of \gls{3gpp} \gls{nr} technical specification and reports when needed.
In line with \gls{3gpp} terminology, we refer to an \gls{nr} terminal as \gls{ue} and an \gls{nr} base station as \gls{gnb}\footnote{The \gls{nr} architecture supports multiple \gls{trp}s that act as dumb antennas and are coordinated by a \gls{gnb}. Throughout this paper, we make the distinction only when needed but, in general, we refer to the \gls{nr} access point as \gls{gnb}.}. Similarly, according to IEEE 802.11 standards, Wi-Fi/\gls{wigig} terminal and base station are referred to as \gls{sta} and \gls{ap}, respectively.
\subsection{Organization}
The remainder of the paper is organized as follows. We start in Section~\ref{sec:back} by giving a review of the related work (in the areas of \gls{lte} in unlicensed spectrum (including LAA, LTE-U, and MulteFire), unlicensed IEEE-based technologies, beam-based \gls{nr} and \gls{nru}).
Then, Section~\ref{sec:regulation} reviews the spectrum allocation and regulatory requirements for the unlicensed spectrum at 5 GHz, 6 GHz, and 60 GHz bands. Section~\ref{sec:NRU} presents the \gls{nru} scenarios and \gls{lbt} specifications, based on 3GPP discussions. Next, Section~\ref{sec:NR} introduces the different areas of the \gls{nr} system design that need to be rethought for \gls{nru}, which will be reviewed in Sections~\ref{sec:channelaccess}-\ref{sec:sched}. In Section~\ref{sec:channelaccess}, we highlight the problems and analyze potential channel access procedures for \gls{nru} to provide support for different \gls{lbt}-related problems that arise due to the beam-based transmissions and which were not present in \gls{laa} and MulteFire technologies. In Section~\ref{sec:frame}, we highlight the trade-offs in the selection of the frame structure. Section~\ref{sec:initialaccess} reviews the problems and solutions for the initial access procedure, including synchronization signal block design, random access procedure, and paging. In Section~\ref{sec:harq}, we illustrate two negative impacts of \gls{lbt} on the \gls{harq} mechanism and show how to overcome them. Section~\ref{sec:sched} elaborates on the problems related to the scheduler operation, and highlights new scheduling schemes that are suitable for beam-based \gls{nru}. After that, in Section~\ref{sec:eval}, we evaluate different \gls{lbt}-based channel access procedures in NR-U/WiGig indoor \gls{mmwave} coexistence scenarios. Finally, Section~\ref{sec:learned} summarizes the lessons learned from the discussions given in this paper, Section~\ref{sec:future} highlights future perspectives, and Section~\ref{sec:conc} concludes the paper.
\section{Background Review}
\label{sec:back}
\begin{table*}[!t]
\scriptsize
\centering
\begin{tabular}{|m{1cm}||m{1.5cm}|m{1.3cm}|m{1.5cm}|m{2.5cm}|m{1.5cm}|m{5cm}|}
\hline
& Standardization body & Underlying \break technology & Operational bands & Deployment capabilities & \gls{rat} in \ \ \break unlicensed & Key features\\ \hline \hline
802.11n & IEEE & 802.11a/g & sub 7 GHz & standalone (unlicensed) & Wi-Fi & \shortstack[l]{Unlicensed bands: 2.4, 5 GHz \\ Aggregated bandwidth: 40 MHz \\ MIMO: up to 4 streams, MU-MIMO: no \\ Modulation: up to 64-QAM \\ \gls{harq}: no \\ channel access scheme: \gls{csma-ca}} \\ \hline
802.11ad & IEEE & 802.11 & above 7 GHz & standalone (unlicensed) & WiGig & \shortstack[l]{Unlicensed bands: 60 GHz \\ Aggregated bandwidth: 2.16 GHz \\ MIMO: up to 8 streams, MU-MIMO: no \\ Modulation: up to 64-QAM \\ \gls{harq}: no \\ channel access scheme: \gls{csma-ca}} \\
\hline
802.11ac & IEEE & 802.11n & sub 7 GHz & standalone (unlicensed) & Wi-Fi & \shortstack[l]{Unlicensed bands: 5 GHz \\ Aggregated bandwidth: 160 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 4 \\ Modulation: up to 256-QAM \\ \gls{harq}: no \\ channel access scheme: \gls{csma-ca}} \\ \hline
LTE-U & LTE-U Forum & LTE Rel-12 & sub 7 GHz & \shortstack[l]{carrier aggregation \\ (licensed + unlicensed)} & LTE & \shortstack[l]{Unlicensed bands: 5 GHz \\ Aggregated bandwidth: 60 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 4 \\ Modulation: up to 256-QAM \\ \gls{harq}: yes \\ channel access scheme: duty-cycle} \\
\hline
LWA & 3GPP & LTE Rel-13 & sub 7 GHz & LTE + Wi-Fi integration at PDCP level & Wi-Fi & LTE Rel-13 + Wi-Fi\\
\hline
LWIP & 3GPP & LTE Rel-13 & sub 7 GHz & LTE + Wi-Fi integration at IP level & Wi-Fi & LTE Rel-13 + Wi-Fi\\
\hline
LAA & 3GPP & LTE Rel-13 & sub 7 GHz & \shortstack[l]{carrier aggregation \\ (licensed + unlicensed)} & LTE & \shortstack[l]{Unlicensed bands: 5 GHz \\ Aggregated bandwidth: 80 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 256-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{lbt}} \\
\hline
MulteFire & MulteFire Alliance & LTE Rel-14 & sub 7 GHz & standalone (unlicensed) & LTE & \shortstack[l]{Unlicensed bands: 1.9, 2.4, 5 GHz \\ Aggregated bandwidth: 80 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 256-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{lbt}} \\
\hline
eLWA & 3GPP & LTE Rel-14 & sub 7 GHz and above 7 GHz & LTE + Wi-Fi/WiGig integration at PDCP level & Wi-Fi/WiGig & LTE Rel-14 + Wi-Fi/WiGig\\
\hline
eLWIP & 3GPP & LTE Rel-14 & sub 7 GHz and above 7 GHz & LTE + Wi-Fi/WiGig integration at PDCP level & Wi-Fi/WiGig & LTE Rel-14 + Wi-Fi/WiGig\\
\hline
eLAA & 3GPP & LTE Rel-14 & sub 7 GHz & \shortstack[l]{carrier aggregation \\ (licensed + unlicensed), \\ dual connectivity \\ (licensed + unlicensed)}& LTE & \shortstack[l]{Unlicensed bands: 5 GHz \\ Aggregated bandwidth: 80 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 256-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{lbt}} \\
\hline
FeLAA & 3GPP & LTE Rel-15 & sub 7 GHz & \shortstack[l]{carrier aggregation \\ (licensed + unlicensed), \\ dual connectivity \\ (licensed + unlicensed)} & LTE & \shortstack[l]{Unlicensed bands: 5 GHz \\ Aggregated bandwidth: 100 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 256-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{lbt}} \\
\hline
802.11ax & IEEE & 802.11ac & sub 7 GHz & standalone (unlicensed) & Wi-Fi & \shortstack[l]{Unlicensed bands: 1 to 6 GHz \\ Aggregated bandwidth: 160 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 1024-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{csma-ca}} \\
\hline
802.11ay & IEEE & 802.11ad & above 7 GHz & standalone (unlicensed) & WiGig & \shortstack[l]{Unlicensed bands: 60 GHz \\ Aggregated bandwidth: 8.64 GHz \\ MIMO: up to 8 streams, MU-MIMO: up to 8 \\ Modulation: up to 64-QAM \\ \gls{harq}: no \\ channel access scheme: \gls{csma-ca}} \\
\hline
NR-U & 3GPP & NR Rel-17 & sub 7 GHz and above 7 GHz & \shortstack[l]{carrier aggregation \\ (licensed + unlicensed), \\ dual connectivity \\ (licensed + unlicensed), \\ standalone (unlicensed)} & NR & \shortstack[l]{Unlicensed bands: 2.4, 3.5, 5, 6, 37, 60 GHz \\ Aggregated bandwidth: 800 MHz \\ MIMO: up to 8 streams, MU-MIMO: up to 12 \\ Modulation: up to 1024-QAM \\ \gls{harq}: yes \\ channel access scheme: \gls{lbt}} \\ \hline
\end{tabular}
\caption{Taxonomy of technologies that use unlicensed spectrum.}
\label{table:taxonomy}
\end{table*}
In Fig.~\ref{fig_timing}, we illustrate the timeline of different \gls{rat}s that have been standardized for use in unlicensed spectrum (or are in the process of being standardized) so far. The timeline includes widely-deployed IEEE 802.11 standards (\gls{wlans}, commonly-known as Wi-Fi) with their different amendments, and the \gls{3gpp} based standards that follow different releases of \gls{lte} and \gls{nr}. In \gls{3gpp}, two main groups have been created depending on the \gls{rat} that is used to access the unlicensed spectrum:
\begin{enumerate}
\item technologies that are based on the integration of \gls{lte} and Wi-Fi radio links and that use Wi-Fi to access the unlicensed spectrum (i.e., \gls{lwa} and enhanced \gls{lwa} (eLWA), \gls{lwip} and enhanced \gls{lwip} (eLWIP)), and
\item technologies that use modified versions of \gls{lte} or \gls{nr} to access and operate in the unlicensed spectrum (i.e., \gls{lteu}, \gls{laa} and its various enhancements, namely \gls{elaa} and further eLAA (FeLAA), MulteFire, and \gls{nru}).
\end{enumerate}
In Table~\ref{table:taxonomy}, we present a taxonomy of the different \gls{rat}s that use unlicensed spectrum, including the standardization body, the underlying technology, the operational unlicensed spectrum bands (sub 7 GHz and/or above 7 GHz bands), the supported deployment capabilities, the \gls{rat} that is used to access the unlicensed spectrum, and the supported key features in terms of frequency bands, maximum supported bandwidth (including aggregation), \gls{mimo} support, \gls{mu-mimo} support, maximum supported modulation, \gls{harq} support for combining transmissions, and the channel access scheme that is used in the unlicensed spectrum.
From Fig.~\ref{fig_timing} and Table~\ref{table:taxonomy}, it can be observed that IEEE 802.11 based technologies have been designed to access the unlicensed spectrum since 1997, with support for large bandwidths; on the other hand, 3GPP based technologies in unlicensed spectrum are more recent, and are characterized by a more sophisticated and efficient design, because they have been designed, since the very beginning, to operate in limited and expensive licensed spectrum. Nevertheless, with the latest amendments and versions (e.g., IEEE 802.11ax and NR-U), it is possible to observe that both technologies are converging to use large bandwidths in a very efficient manner, through the support of key features such as \gls{harq}, high-order modulations, and high-order \gls{mimo}.
In this paper, we focus on the operation of cellular networks in unlicensed spectrum, i.e., the second group of 3GPP based technologies listed before, with special emphasis on the unlicensed \gls{mmwave} bands. As a result, in what follows we review only the state of the art related to the objective of this paper. Specifically, we first focus on the standardization and literature of the different variants of \gls{lte} in unlicensed spectrum. Then, we review the literature related to technologies that use directional transmissions for operation in unlicensed \gls{mmwave} bands.
\subsection{LTE in unlicensed spectrum (5 GHz band)}
To let \gls{lte} gracefully coexist with Wi-Fi in the 5 GHz band with omnidirectional transmissions and receptions, different variants of \gls{lte} in unlicensed spectrum have been proposed, widely studied in the research literature, and standardized based on modifications over \gls{lte}. The different variants are: \gls{laa}~\cite{TR36889}, \gls{lteu}~\cite{LTEU}, and MulteFire~\cite{multefire}.
\gls{3gpp} established work items on \gls{laa} in \gls{lte} Rel-13~\cite{TR36889} and on \gls{elaa} in \gls{lte} Rel-14~\cite{RP152272} to evaluate and specify \gls{dl} and \gls{ul} operations in the 5 GHz unlicensed band~\cite{TS36213}, respectively. Also, in \gls{lte} Rel-15, a work item on further enhancements to \gls{lte} operation in unlicensed spectrum (FeLAA) was concluded in 2018~\cite{RP170848}. \gls{laa} technologies (\gls{laa}/\gls{elaa}/FeLAA) operate as supplementary \gls{dl}/\gls{ul} carriers in unlicensed bands with anchor carriers in the licensed bands. As mentioned earlier, to meet worldwide regulation, a \gls{lbt}-based channel access scheme was introduced in \gls{laa} technologies to access to the unlicensed band, which is similar to the \gls{cca} procedure used in IEEE 802.11-based technologies.
An overview of LAA technology is presented in~\cite{kwon:17}. Interested readers can also look at the comprehensive survey about \gls{laa}/Wi-Fi coexistence in the 5 GHz band in~\cite{chen:17}, and references therein. In~\cite{perf_analysis_lte_wifi}, an analytical framework based on a Markov chain is developed to study the downlink throughput of \gls{laa}/Wi-Fi coexistence, for a simple \gls{lbt} with fixed contention window size and simple scenarios composed of one \gls{ap} and one \gls{laa} node. For \gls{3gpp}-based scenarios, the impact of several parameters related to the \gls{laa} \gls{lbt} mechanism on the channel access opportunities of \gls{laa} and its coexistence performance has been assessed through system-level simulations in~\cite{perf_analysis_ericsson_wireless_comm}.
In regions where the regulation does not require \gls{lbt}, as in the USA, access schemes other than the ones standardized by \gls{3gpp} have been designed and deployed. In particular, the industrial consortium \gls{lteu} Forum specified a proprietary solution~\cite{LTEU}, known as \gls{lteu}. As in \gls{laa}, \gls{lteu} technology uses carrier aggregation of the unlicensed band with an anchor carrier in a licensed band. However, instead of relying on \gls{lbt} for accessing the channel, it basically allows coexistence by duty-cycling the \gls{lte} continuous transmission. A comprehensive overview of the \gls{lteu} technology, including implementation regulations,
principles, and typical deployment scenarios, is presented in~\cite{zhan:15}.
A highly performing access scheme for \gls{lteu} (known as \gls{csat}) is proposed in~\cite{qualcomm_csat_algorithm}, in which the duty cycle is adapted based on the activity observed on the channel.
In~\cite{perf_analysis_lteu_stohastic_geom}, stochastic geometry is used to analyze \gls{lteu}/Wi-Fi coexistence in terms of coverage probability and throughput, as well as to perform asymptotic analysis. Resource allocation for~\gls{lteu} is studied in~\cite{7558177}, which also proposes a joint optimization of \gls{mac} and \gls{phy} layer parameters of the \gls{lteu} network.
Multiple works have focused also on modeling, analyzing, and comparing \gls{laa} and \gls{lteu}.
Authors in \cite{compare_dc_lbt_modeling_interference} derive throughput and interference models for inter-technology coexistence analysis in the 5 GHz band, considering \gls{lbt}-based as well as duty cycle-based access schemes. Through Monte Carlo simulations, they show that duty cycle (i.e., \gls{lteu}) outperforms \gls{laa} \gls{lbt} for low interference scenarios, while in high interference scenarios \gls{lbt} outperforms
duty cycle mechanisms.
Comparisons are done also through simulations in~\cite{perf_analysis_jeon_intel_globecom_ws_2014} for various indoor and outdoor setups.
In
\cite{perf_analysis_cristina_cano_douglas_leight}, a throughput model is presented to analyze LTE/Wi-Fi coexistence by focusing on the comparison of \gls{lbt} versus \gls{csat}. They conclude that, when
optimally configured, both \gls{laa} and \gls{lteu} approaches are capable of providing the same level of fairness to Wi-Fi. Authors in~\cite{perf_analysis_samsung_tcom} model, analyze, and compare different coexistence mechanisms including plain \gls{lte}, \gls{lte} with discontinuous transmission (\gls{lteu}), and \gls{lte} with \gls{lbt} (\gls{laa}). Therein, by leveraging on stochastic geometry, authors analytically derive and numerically evaluate the medium
access probability, the \gls{sinr} coverage probability, density of successful transmission, and the rate coverage probability.
For the 5 GHz band, it is generally considered
that \gls{laa} is fairer to Wi-Fi than \gls{lteu}, because it uses the \gls{lbt} mechanism and thus abides by rules similar to those of Wi-Fi. Recently, authors in~\cite{biljana:19}, have presented a detailed coexistence study and comparison of \gls{laa} and \gls{lteu} technologies through network simulations, and evaluated how the channel access procedures, besides other important aspects like the traffic patterns, simulation setup, and proprietary implementation choices, impact the coexistence results.
Finally, the MulteFire Alliance launched the development of a new \gls{lte}-based technology capable of operating standalone in unlicensed or shared spectrum bands, also known as MulteFire~\cite{multefire, qualcomm, nokia}, without using any licensed carrier as an anchor.
An overview of MulteFire is presented in~\cite{rosa:18}, including the main challenges due to \gls{lbt} and the standalone operation, as well as the solutions adopted in MulteFire to overcome such challenges and the attained performance benefits.
The standalone operation in unlicensed bands may open a new class of wireless private networks, e.g., for Industry 4.0 scenarios~\cite{8207346}. However, it also becomes difficult to operate without any support from licensed carriers. For example, in standalone operation, latency may be increased because of the LBT requirement for each new transmission~\cite{maldonado:18}.
A comparative analysis of the three LTE variants (LAA, LTE-U, and MulteFire) is provided in~\cite{labib:17}, including technical details of each \gls{rat} and their operational features and coexistence capabilities.
Research on the different variants of \gls{lte} in unlicensed spectrum for 5G is still on-going to improve the coexistence with Wi-Fi in sub 7 GHz bands. For example, authors in~\cite{zeng:18} propose channel selection algorithms for 5G \gls{elaa}. Recently, authors in~\cite{garcia:18} presented the massive MIMO unlicensed (mMIMO-U) technology for sub 7 GHz bands. The mMIMO-U enhances LBT by placing radiation nulls toward neighboring Wi-Fi nodes, provided that Wi-Fi nodes can be detected by the \gls{gnb}s. Authors in~\cite{song:19} present a cooperative \gls{lbt} scheme with omnidirectional transmissions/receptions, whereby neighboring \gls{gnb}s are allowed to cooperate in the sensing and transmission phases to improve the \gls{qos}.
Let us remark that some features of \gls{laa}-based technologies and MulteFire can be reused for \gls{nru}, especially regarding initial access (from \gls{laa}) and \gls{harq} procedures and scheduling (from MulteFire standalone operation), but they need to be adapted and/or improved for beam-based transmissions in \gls{nru} (as we will review later).
\subsection{Technologies in unlicensed mmWave bands}
\label{subsec:ieee}
One of the key features of \gls{nr}, as compared to \gls{lte}, is the wide-band support for operation at \gls{mmwave} carrier frequencies~\cite{parkvall:17}. For that, multiple procedures for beam-related operations have been defined in the \gls{nr} standard, including beam sweeping, beam measurement, beam determination, and beam reporting~\cite{giordani:19}.
Regarding how to make \gls{nr} operate in shared/unlicensed \gls{mmwave} bands, related works include~\cite{nekovee:16,nekovee:17,zhang:17,seo:18,boccardi:16}. Authors in~\cite{nekovee:16,nekovee:17} present beam scheduling solutions that are based on iterative coordination of the concurrent transmissions of different base stations by means of properly selecting their transmit beams. Also, multiple solutions based on spectrum sharing~\cite{zhang:17,seo:18} and spectrum pooling~\cite{boccardi:16} have been recently proposed, which exploit coordination among different cellular network operators to improve the spatial reuse. However, these solutions cannot ensure fair coexistence of \gls{nru} with other \gls{rat}s in the unlicensed bands because they do not employ mechanisms to avoid continuous use of the spectrum (as is done with \gls{lbt} or duty-cycling).
IEEE 802.11 \gls{wlans} standards started technology development for the unlicensed spectrum at \gls{mmwave} bands a few years ago through the 802.11ad specification~\cite{nitsche:14}, and its recent enhancement in the 802.11ay specification~\cite{ghasempour:17} (see Fig.~\ref{fig_timing}). In this regard, both IEEE 802.11ad and 802.11ay have standardized specific beam training processes for directional transmissions~\cite{zhou:18}. However, in these specifications, \gls{cca} within \gls{csma-ca} is still defined with omnidirectional sensing. In~\cite{singh:10}, an enhanced distributed \gls{mac} protocol is proposed for CSMA-based mesh networks employing directional transmissions at 60 GHz. The proposed solution uses memory at the nodes to achieve approximate TDMA schedules without explicit coordination.
In the area of IEEE 802.15 \gls{wpans} standards (including the well-known Bluetooth and ZigBee), technology development to use the unlicensed \gls{mmwave} bands has also been underway for some years, starting with the 802.15.3c specification. To enhance IEEE 802.15 \gls{wpans}, multiple solutions have been proposed for beam management and time-domain coordination in \gls{mmwave} bands~\cite{an:08,pyo:09,cai:10}. A time division multiple access (TDMA) based channel allocation scheme for directional transmissions is proposed in~\cite{an:08}. An enhanced \gls{mac} with frame aggregation, unequal error protection, and block acknowledgment is defined in~\cite{pyo:09}. Authors in~\cite{cai:10} introduced the concept of an exclusive region to enable concurrent transmission with significant interference reduction in \gls{mmwave} \gls{wpans}, by considering all kinds of directional and omnidirectional transmission/reception antenna patterns.
The coordination of the transmit beams (as proposed in~\cite{nekovee:16,nekovee:17,cai:10}) and the coordination of the channel access in time domain (as analyzed in~\cite{an:08,pyo:09}) solve hidden node problems that arise in the unlicensed spectrum. However, since these kinds of solutions require coordination between Wi-Fi/\gls{wigig} and cellular devices, they are not adequate for multi-\gls{rat} coexistence scenarios. Instead, distributed uncoordinated approaches are needed.
For that reason, and also due to regulation mandate, \gls{lbt} was adopted to control the channel accesses in \gls{laa}/\gls{elaa}/FeLAA and MulteFire.
\subsection{Towards NR-U}
In the case of directional transmissions, \gls{lbt} might not work well because of the increased hidden and exposed node problems~\cite{lagen:18d,subramanian:10}. For example, when the carrier sense is done omnidirectionally, i.e., \gls{omnilbt}, while the intended transmission is beam-based, there is a higher chance of exposed node problems (as happens in WiGig). If the direction of the intended communication is known, directional carrier sense, i.e., \gls{dirlbt}, may help to improve the spatial reuse, but it may lead to hidden node problems~\cite{R1-1713785}. This phenomenon is the so-called \gls{omnilbt}/\gls{dirlbt} trade-off. It is shown in~\cite{lagen:18b} that, for low network densities, \gls{dirlbt} performs significantly better than \gls{omnilbt}, while for high network densities, \gls{omnilbt} is a better technique.
Therefore, new regulatory-friendly and distributed channel access schemes are needed to address coexistence for \gls{nru} under beam-based transmissions.
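To make the \gls{omnilbt}/\gls{dirlbt} trade-off concrete, consider the following stylized sketch (Python; the geometry, the two-level sector antenna model, and the energy-detection threshold are toy values of ours, not taken from any specification). A \gls{gnb} whose sensing beam points towards its \gls{ue} does not hear a nearby off-beam interferer and transmits, which enables spatial reuse here but is exactly the mechanism behind hidden nodes when the off-beam interference does reach the receiver; omnidirectional sensing, instead, defers even though the interferer's beam points away from the \gls{ue} (an exposed node).
\begin{verbatim}
import math

G_MAIN, G_SIDE = 10.0, 0.1     # two-level sector antenna gains (toy values)
ED_THRESHOLD = 1e-6            # toy energy-detection threshold (linear units)

def gain(node, boresight, target, beamwidth=math.radians(60)):
    # Gain of the antenna at `node`, steered at angle `boresight`, towards `target`.
    angle = math.atan2(target[1] - node[1], target[0] - node[0])
    off = abs((angle - boresight + math.pi) % (2 * math.pi) - math.pi)
    return G_MAIN if off <= beamwidth / 2 else G_SIDE

def rx_power(tx, tx_boresight, rx, rx_gain=1.0, tx_p=1.0, alpha=3.0):
    d = math.dist(tx, rx)
    return tx_p * gain(tx, tx_boresight, rx) * rx_gain / d ** alpha

# gNB at the origin serves a UE to the east; an interferer to the north
# transmits towards its own receiver, i.e., away from the gNB-UE link.
gnb, ue, interferer = (0.0, 0.0), (30.0, 0.0), (0.0, 25.0)
east, north = 0.0, math.pi / 2

sensed_omni = rx_power(interferer, north, gnb)              # omnidirectional sense
sensed_dir = rx_power(interferer, north, gnb,
                      rx_gain=gain(gnb, east, interferer))  # sensing beam points east

print("omniLBT defers:", sensed_omni > ED_THRESHOLD)  # True: exposed-node behavior
print("dirLBT  defers:", sensed_dir > ED_THRESHOLD)   # False: reuse, hidden-node risk
\end{verbatim}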
From the standardization point of view, \gls{nru} for sub 7 GHz is currently being standardized by \gls{3gpp}, and \gls{nru} in \gls{mmwave} bands is planned to be addressed in future releases (i.e., \gls{nr} Rel-17 and beyond). In the literature, \gls{nru} with beam-based transmissions has not been discussed sufficiently. There has only been some work on the channel access procedures~\cite{lagen:18b,lagen:18d,lagen:18,R1-180xxxx,li:18}. To address the \gls{omnilbt}/\gls{dirlbt} trade-off, two distributed \gls{lbt}-based channel access procedures have been proposed by the same authors for beam-based \gls{nru}, namely \gls{pairlbt}~\cite{lagen:18b} and \gls{lbtswitch}~\cite{lagen:18d}, which we will further review, discuss, and compare throughout this paper.
Even so, in the case of beam-based transmissions, there are interference situations that cannot be detected at the transmitter, due to the significant difference in the interference dynamics at the transmitter and receiver sides. Remarkably, in some cases, it is only the receiver that can be aware of potential interference situations~\cite{lagen:18}. Therefore, \gls{lbt} at the transmitter may not be useful to detect such interference.
In this line, a technique called \gls{lat} is introduced in~\cite[Sec. 8.2.2]{D41mmMagic}.
This approach is certainly of interest but it is not compliant with regulations regarding \gls{lbt} requirement.
This can be solved by employing receiver-assisted LBT procedures~\cite[Sec. 7.6.4]{R1-180xxxx},
or \gls{lbr}~\cite{lagen:18}, wherein the transmitter triggers a carrier sense at the receiver that is used to complement \gls{lbt}. Recently, going deeper into this issue, authors in~\cite{li:18} proposed a joint directional \gls{lbt}-\gls{lbr} and beam training for \gls{nru} in \gls{mmwave} bands.
\section{Spectrum Allocation and Regulatory Requirements}
\label{sec:regulation}
Operation in unlicensed spectrum is subject to different regulatory limitations and restrictions that are region- and band- specific. In this section, we review the spectrum allocation and the regulatory requirements for the 5 GHz and 60 GHz bands, which have common global availability and for which most major geographical areas worldwide have authorized wide unlicensed spectrum bandwidth. Also, we review the spectrum allocation for the 6 GHz band, which has been recently allocated for unlicensed use in Europe and the USA.
\subsection{Spectrum Allocation}
In Fig.~\ref{fig_channelization5} and Fig.~\ref{fig_channelization}, we show the unlicensed spectrum allocation of major geographic areas of the world for the 5 GHz band and the 60 GHz band, respectively, including IEEE 802.11ac channelization in Fig.~\ref{fig_channelization5} and IEEE 802.11ad channelization in Fig.~\ref{fig_channelization}. Three subbands are available in the 5 GHz band and, according to IEEE 802.11ac channelization~\cite{perahia:11}, each subband is further divided into multiple non-overlapping channels of 20 MHz bandwidth each.
On the other hand, IEEE 802.11ad channelization in the 60 GHz band supports up to six non-overlapping channels of 2.16 GHz bandwidth each, thus having a lower number of channels but much wider channel bandwidths than Wi-Fi in the 5 GHz band.
\begin{figure}[!t]
\centering
\includegraphics[width=0.41\textwidth]{channelization5}
\caption{5 GHz unlicensed spectrum allocation in different areas of the world.}
\label{fig_channelization5}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.42\textwidth]{channelization}
\caption{60 GHz unlicensed spectrum allocation in different areas of the world.}
\label{fig_channelization}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.41\textwidth]{channelization6}
\caption{6 GHz potential unlicensed spectrum allocation in the USA and Europe.}
\label{fig_channelization6}
\end{figure}
At the time of writing, the USA and Europe are analyzing the potential of the 6 GHz band for unlicensed use.
The spectrum considered in the USA (5.925-7.125 GHz) and Europe (5.925-6.425 GHz) is illustrated in Fig.~\ref{fig_channelization6}, alongside IEEE 802.11ax 20 MHz channelization.
\subsection{Regulatory Requirements}
\gls{etsi} regulation has harmonized the requirements for the 5 GHz band (5.15-5.35 GHz and 5.47-5.725 GHz) and the 60 GHz band (57-66 GHz), as included in~\cite{ETSI301893} and~\cite{ETSI302567}, respectively. To enable worldwide regulation-compliant access and satisfy a fair coexistence with the unlicensed systems (Wi-Fi, \gls{wigig}, radar) and intra-\gls{rat} services, any technology that attempts to access the unlicensed spectrum (like \gls{nru}) should fulfill the following regulatory requirements:
\begin{itemize}
\item \textbf{Listen-Before-Talk (LBT)}: The \gls{lbt} procedure is a mechanism by which a device should apply a \gls{cca} check (i.e., spectrum sensing
for a certain period, called the \gls{cca} period) before using the channel and which imposes certain rules after determining the channel to be busy. \gls{cca} uses \gls{ed} to detect the presence (i.e., channel is busy) or absence (i.e., channel is idle) of other signals on the channel. If the detected energy during an initial \gls{cca} period is lower than a certain threshold (the \gls{ed} threshold), the device can access the channel for a period called \gls{cot}. Otherwise, an extended \gls{cca} period starts, in which the detected energy is again compared against the \gls{ed} threshold until channel access is granted. \gls{lbt} is a mandatory procedure in Europe and Japan for the 5 GHz and 60 GHz bands but it is not required in other regions like the USA and China. The \gls{lbt} mechanism and its parameters are specified in~\cite{ETSI301893} and~\cite{ETSI302567}. Briefly, for each band, the regulation specifies the \gls{cca} slot duration ($9$ $\mu$s in the 5 GHz band, and $5$ $\mu$s in the 60 GHz band),
the initial and extended \gls{cca} check times (e.g., a multiple of $5$ $\mu$s for initial \gls{cca} and $8{+}m{\times} 5$ $\mu$s for extended \gls{cca} in the 60 GHz band, where $m$ controls the backoff), and the \gls{ed} threshold ($-72$ dBm for a 20 MHz channel bandwidth in the 5 GHz band, and $-47$ dBm for $40$ dBm of radiated power in the 60 GHz band).
\item \textbf{\gls{mcot}}: Certain regions such as Europe and Japan prohibit continuous transmission in the unlicensed spectrum and impose limits on the \gls{cot}, i.e., the maximum continuous time a device can use the channel. The \gls{mcot} in the 5 GHz band is limited to $2$ ms, $4$ ms, or $6$ ms depending on the channel access priority class, and it may be increased up to $8$-$10$ ms in some cases~\cite{ETSI301893}. The \gls{mcot} in the 60 GHz band is $9$ ms~\cite{ETSI302567}. Besides, for the 5 GHz and 60 GHz bands, it is allowed to share the \gls{cot} with the associated devices (e.g., \gls{gnb} and \gls{ue}s), and thus enable a contiguous combination of \gls{dl} and \gls{ul} transmissions within the \gls{cot}. Sharing the COT means that once the initiating device (\gls{gnb}) gets access to the channel through \gls{lbt} and transmits, the responding devices (\gls{ue}s) are allowed to
skip the \gls{cca} check and immediately transmit in response to the received frames~\cite{ETSI302567}.
\item \textbf{\gls{eirp} and \gls{psd}}: Operation in the unlicensed spectrum is subject to power limits in all regions and bands, in terms of \gls{eirp} and \gls{psd}, to constrain the generated inter-\gls{rat} and intra-\gls{rat} interference levels. According to \gls{etsi} regulation~\cite{ETSI301893}, in the 5 GHz band, the maximum mean \gls{eirp} and \gls{psd} with transmit power control for 5.15-5.35 GHz range are limited to $23$ dBm and $10$ dBm/MHz, respectively, and for 5.47-5.725 GHz range, are limited to $30$ dBm and $17$ dBm/MHz, respectively.
In the 60 GHz band, the maximum mean \gls{eirp} and \gls{psd} are limited to $40$ dBm and $13$ dBm/MHz, respectively~\cite{ETSI302567}. Besides the \gls{etsi} power limits, more restrictive power limits are imposed in some regions~\cite{TR38805}. For example, the USA differentiates among indoor and outdoor devices with different power limits~\cite{5Gamericas,FCC}.
\item \textbf{\gls{ocb}}: The \gls{ocb} is defined as the bandwidth containing $99 \%$ of the signal power and, in certain regions, it should be larger than a percentage of the \gls{ncb} (i.e., the channel width). This forces unlicensed technologies to use a major part of the channel bandwidth when they access the channel (a sketch of this check is given after this list).
According to \gls{etsi}, for the 5 GHz band, the \gls{ocb} shall be between $70 \%$ and $100 \%$ of the \gls{ncb}~\cite{ETSI301893}. In the 60 GHz band, the \gls{ocb} shall be between $80 \%$ and $100 \%$ of the \gls{ncb}~\cite{ETSI302567}.
\item \textbf{\gls{fr}}: The \gls{fr} process allows reusing the same channel at the same time by different devices of the same \gls{rat}. In general, if a device is accessing the channel, then other devices in its coverage area should be muted in this channel so that it cannot be reused at the same time. This reduces the number of devices that access simultaneously (i.e., the \gls{fr} factor). The \gls{fr} mechanism is designed to allow devices of the same operator to access the channel simultaneously, and hence increase the \gls{fr} factor and improve the spectral efficiency. This is done by using different \gls{ed} thresholds for intra-\gls{rat} and inter-\gls{rat} signals, provided that the devices can distinguish between these two types of signals\footnote{For example, \gls{laa} supports \gls{fr} with an \gls{ed} threshold of $-52$ dBm for intra-\gls{rat} signals (i.e., \gls{laa} signals), as compared to the $-72$ dBm \gls{ed} threshold used for inter-\gls{rat} signals (e.g., Wi-Fi signals). On the other hand, Wi-Fi is designed to avoid \gls{fr}, especially among Wi-Fi nodes. For that, Wi-Fi supports preamble detection to identify intra-\gls{rat} signals, and it uses a $-82$ dBm preamble detection threshold for Wi-Fi signals and a $-62$ dBm \gls{ed} threshold for non-Wi-Fi signals in the 5 GHz band.}.
\item \textbf{\gls{dfs}}: \gls{dfs} functionality is used to avoid interfering with 5 GHz and 60 GHz radar systems, as well as to uniformly spread the traffic load across the different channels in each band. The regulation states that whenever radar signals are detected, a device must switch to another channel to avoid interference.
\end{itemize}
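As an illustration of the \gls{ocb} rule above, the following sketch (Python with NumPy; the function name \texttt{ocb\_compliant} is ours, and taking the $99\%$ band symmetrically between the $0.5\%$ cumulative-power points is our reading of the definition) checks a measured \gls{psd} against the $70\%$ requirement of the 5 GHz band.
\begin{verbatim}
import numpy as np

def ocb_compliant(psd, freqs_hz, ncb_hz, min_fraction=0.70):
    # Occupied bandwidth: the span containing 99% of the signal power,
    # taken here between the 0.5% and 99.5% cumulative-power points.
    csum = np.cumsum(psd / psd.sum())
    lo = freqs_hz[np.searchsorted(csum, 0.005)]
    hi = freqs_hz[np.searchsorted(csum, 0.995)]
    return (hi - lo) >= min_fraction * ncb_hz

# Example: a flat transmission over 15 MHz of a 20 MHz channel (75% > 70%).
freqs = np.linspace(0.0, 20e6, 2001)
psd = np.where((freqs >= 2.5e6) & (freqs <= 17.5e6), 1.0, 0.0)
print(ocb_compliant(psd, freqs, 20e6))  # True
\end{verbatim}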
\section{\gls{nru} Scenarios and \gls{lbt} Specifications}
\label{sec:NRU}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.81\textwidth]{deployment_scen}
\caption{\gls{nru} layout scenarios.}
\label{fig_topologies}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.94\textwidth]{scenarios}
\caption{\gls{nru} deployment scenarios.}
\label{fig_scenarios}
\end{figure*}
\subsection{\gls{nru} Scenarios}
\gls{laa}, \gls{lteu}, and MulteFire technologies were specifically designed to operate in the 5 GHz band. Differently, \gls{nru} considers multiple bands: 2.4 GHz (unlicensed worldwide), 3.5 GHz (shared in the USA), 5 GHz (unlicensed worldwide), 6 GHz (unlicensed in the USA and Europe), 37 GHz (shared in the USA), and 60 GHz (unlicensed worldwide).
The \gls{3gpp} classifies these bands for \gls{nru} as sub 7 GHz and mmWave bands. Sub 7 GHz bands include the 2.4, 3.5, 5, and 6 GHz bands; meanwhile mmWave bands encompass the 37 and 60 GHz bands. First efforts in the \gls{nru} standardization focus on sub 7 GHz bands~\cite{TR38889} and mmWave bands will be addressed later. Therefore, four \textit{layout scenarios} can be defined for \gls{nru} based on the deployment and propagation environment conditions:
\begin{itemize}
\item indoor sub 7 GHz,
\item indoor \gls{mmwave},
\item outdoor sub 7 GHz,
\item outdoor \gls{mmwave}.
\end{itemize}
The \gls{nru} layout scenarios are shown in Fig.~\ref{fig_topologies}. According to standard terminology, operator A and operator B in the figure are used to denote two different \gls{rat}s (and thus address, e.g., Wi-Fi and \gls{nru} coexistence) or two operators of the same \gls{rat}, e.g., to evaluate either Wi-Fi/Wi-Fi or \gls{nru}/\gls{nru} coexistence. More details on the simulation methodology and parameters for the indoor and outdoor sub 7 GHz scenarios can be found in the 3GPP report TR 38.889~\cite[Sec. 8.1]{TR38889}.
To assess the coexistence, five different \textit{deployment scenarios} are defined for \gls{nru} in \gls{3gpp}~\cite[Sec. 6]{TR38889}:
\begin{itemize}
\item Carrier aggregation between licensed band \gls{nr} and unlicensed band \gls{nru},
\item Dual connectivity between licensed band \gls{lte} and unlicensed band \gls{nru},
\item Standalone unlicensed band \gls{nru},
\item \gls{nr} with \gls{dl} in unlicensed band and \gls{ul} in licensed band,
\item Dual connectivity between licensed band \gls{nr} and unlicensed band \gls{nru}.
\end{itemize}
The \gls{nru} deployment scenarios are illustrated in Fig.~\ref{fig_scenarios}. All of them can be applied to each of the \gls{nru} layout scenarios shown in Fig.~\ref{fig_topologies}.
The carrier aggregation scenario follows the approach in LAA technologies, with the possibility of \gls{nru} for both supplementary \gls{dl} and \gls{ul}. The standalone scenario resembles the MulteFire approach.
Note that the \gls{nru} design is further complicated in the standalone deployment scenario because all the signals must use the unlicensed band, thus significantly affecting the initial access and scheduling procedures.
The performance metrics for \gls{nru} coexistence evaluation are the same as in LAA~\cite{TR36889}. These include user packet throughput and delay (mean value and values at the $5$th, $50$th, and $95$th percentiles) for low, medium, and high loads, measured separately for \gls{dl} and \gls{ul}, thus leading to 48 metrics ($2$ quantities $\times$ $4$ statistics $\times$ $3$ loads $\times$ $2$ link directions). Also, buffer occupancy is measured for \gls{nru} and Wi-Fi, separately. The coexistence evaluation scenarios include Wi-Fi/Wi-Fi, Wi-Fi/NR-U, and NR-U/NR-U~\cite{TR38889}.
The coexistence requirement for \gls{nru} (i.e., the fairness definition) remains the same as in \gls{laa}, in which \gls{nru} devices should not impact deployed Wi-Fi/\gls{wigig} services (data, video, and voice services) more than an additional Wi-Fi/\gls{wigig} network would do on the same carrier~\cite{RP-170828}. Therefore, the standard way to evaluate the fairness is to first consider a Wi-Fi/Wi-Fi deployment (operator A/operator B) in any of the layout scenarios in Fig.~\ref{fig_topologies}, and then replace one Wi-Fi network by an \gls{nru} network to assess the Wi-Fi/\gls{nru} coexistence and determine the impact of \gls{nru} on the Wi-Fi system as compared to the Wi-Fi/Wi-Fi deployment.
\subsection{\gls{lbt} Specifications}
\gls{3gpp} has specified four \gls{lbt} categories for \gls{nru}~\cite{TR38889}:
\begin{itemize}
\item Category 1 (Cat 1 \gls{lbt}): Immediate transmission after a short switching gap of $16$ $\mu$s.
\item Category 2 (Cat 2 \gls{lbt}): \gls{lbt} without random back-off, in which the \gls{cca} period is deterministic (e.g., fixed to $25$ $\mu$s).
\item Category 3 (Cat 3 \gls{lbt}): \gls{lbt} with random back-off and a contention window of fixed size, in which the extended \gls{cca} period is determined by a random number drawn from a fixed contention window.
\item Category 4 (Cat 4 \gls{lbt}): \gls{lbt} with random back-off and a contention window of variable size, in which the extended \gls{cca} period is determined by a random number drawn from a contention window whose size can vary based on channel dynamics.
\end{itemize}
Different categories can be used for the different transmissions within a \gls{cot} and for the various channels/signals to be transmitted. In brief, as in \gls{laa}, Cat 4 \gls{lbt} is used by a \gls{gnb} or \gls{ue} to initiate a \gls{cot} for data transmissions, while a \gls{gnb} can use Cat 2 \gls{lbt} for specific signaling like discovery reference signals (see details in~\cite{TR38889}).
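For illustration, the following Python sketch mimics the Cat 4 back-off behavior; the slot-level timing is abstracted away, and the function names, contention window value, and defer duration are our own illustrative assumptions rather than specified values:
\begin{verbatim}
import random

def cat4_lbt(channel_is_idle, cw=15, defer_slots=3):
    # Minimal sketch of Cat 4 LBT with random back-off. For simplicity,
    # the counter is redrawn after a busy slot, whereas real Cat 4 LBT
    # freezes and later resumes the countdown.
    while True:
        # Initial defer: require defer_slots consecutive idle CCA slots.
        if not all(channel_is_idle() for _ in range(defer_slots)):
            continue
        # Extended CCA: random back-off within the contention window.
        backoff = random.randint(0, cw)
        while backoff > 0 and channel_is_idle():
            backoff -= 1
        if backoff == 0:
            return  # channel access granted; transmit up to the MCOT
        # On collisions, Cat 4 would also enlarge cw (e.g., double it).

# Example: a channel sensed idle 70% of the time.
cat4_lbt(lambda: random.random() < 0.7)
\end{verbatim}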
The rules for shared \gls{cot} have also been defined for \gls{nru} in~\cite{TR38889}. For a \gls{gnb}-initiated \gls{cot}, the responding devices are allowed to transmit without performing a \gls{cca} check (i.e., Cat 1 \gls{lbt}) if the gap between the \gls{dl} and \gls{ul} transmissions is shorter than $16$ $\mu$s.
For a gap of more than $16$ $\mu$s but less than $25$ $\mu$s within the \gls{cot}, only a short sensing (i.e., Cat 2 \gls{lbt}) is needed at the responding devices. Otherwise, if the gap is longer than $25$ $\mu$s, regular \gls{lbt} (i.e., Cat 4 \gls{lbt} for data) has to be performed at the responding devices.
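These gap-based rules can be condensed into a small decision function; the following Python sketch is illustrative, and the handling of gaps of exactly $16$ or $25$ $\mu$s is our own simplification:
\begin{verbatim}
def responding_device_lbt(gap_us):
    # Sketch of the LBT category required by a responding device
    # inside a gNB-initiated COT, as a function of the DL/UL gap.
    if gap_us < 16:
        return "Cat 1 LBT"  # immediate transmission after switching gap
    elif gap_us < 25:
        return "Cat 2 LBT"  # short deterministic sensing
    else:
        return "Cat 4 LBT"  # regular LBT with random back-off (data)

print(responding_device_lbt(10))  # -> Cat 1 LBT
\end{verbatim}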
Moreover, unlike \gls{laa}, which supported a single \gls{dl}/\gls{ul} switching point within the \gls{cot}, \gls{nru} supports multiple \gls{dl}/\gls{ul} switching points within the \gls{cot}~\cite[Sec. 7.6.2]{R1-180xxxxb}.
\section{From NR to NR-U}
\label{sec:NR}
\begin{table*}[!t]
\small
\centering
\begin{tabular}{|m{3.7cm}||m{4.3cm}|m{2.5cm}|m{2.5cm}|m{2.5cm}|}
\hline
& \textbf{NR-U} & \textbf{LAA} & \textbf{LTE-U} & \textbf{MulteFire} \\ \hline \hline
Deployment scenario & carrier aggregation, dual connectivity (NR-NR, LTE-NR), standalone, DL-UL & carrier aggregation & carrier aggregation & standalone \\ \hline
Operational bands & 2.4, 3.5, 5, 6, 37, 60 GHz & 5 GHz & 5 GHz & 5 GHz \\ \hline
Duplexing mode & FDD, semi-static TDD, dynamic TDD & FDD (LAA), semi-static TDD (eLAA) & FDD & semi-static TDD \\ \hline
Channel access scheme & LBT & LBT & duty-cycle & LBT \\ \hline
Type of carrier sense & omni/dir & omni & - & omni\\ \hline
Dimensions for carrier sense & time, frequency (channel and bandwidth part), space & time, frequency (channel) & - & time, frequency (channel) \\ \hline
Scheduling dimensions & time, frequency, space & time, frequency & time, frequency & time, frequency \\ \hline
Processing delays (described in Section~\ref{sec:sched}) & 1 slot: 1, 0.5, 0.25, 0.125 ms (numerology-dependent) & 1 subframe: 1 ms & 1 subframe: 1 ms & 1 subframe: 1 ms \\ \hline
Time-domain resource allocation granularity & 1 OFDM symbol: 0.066, 0.033, 0.017, 0.008 ms & 1 subframe: 1 ms & 1 subframe: 1 ms & 1 subframe: 1 ms \\ \hline
Frequency-domain resource allocation granularity & 1 \gls{rb}: 180, 360, 720, 1440 kHz (numerology-dependent) & 1 \gls{rb}: 180 kHz & 1 \gls{rb}: 180 kHz & 1 \gls{rb}: 180 kHz \\ \hline
\end{tabular}
\caption{Comparison of NR-U and the different variants of LTE in unlicensed spectrum.}
\label{table:LTEuNRu}
\end{table*}
The \gls{nru} system should be flexible enough not only to support the different layout and deployment scenarios shown in Fig.~\ref{fig_topologies} and Fig.~\ref{fig_scenarios} but also to follow region- and band-specific regulatory requirements (e.g., \gls{lbt}, see Section~\ref{sec:regulation}.B) to gracefully coexist with other users of the unlicensed spectrum (Wi-Fi, \gls{wigig}, radar).
\gls{nr} has already paved the way for a fully flexible and configurable technology~\cite{TS38300}.
In particular, \gls{nr} design is highly flexible to:
\begin{itemize}
\item support a wide range of use cases (e.g., \gls{embb}, \gls{mmtc}, \gls{urllc}, and \gls{ev2x})~\cite{TR38913},
\item operate in a wide range of carrier frequencies (sub 6 GHz and \gls{mmwave} bands\footnote{NR in Rel-15 has been designed for up to 52.6 GHz frequencies. The frequencies above 52.6 GHz, including the unlicensed spectrum in the 60 GHz band, are expected to be part of future releases.}) with different channel bandwidths,
\item enable different deployment options (in terms of inter-site distance, number of antennas, beamforming structures), and
\item address a variety of architectures (non-centralized, centralized, co-sited with E-UTRA, and shared \gls{ran}).
\end{itemize}
Some of the key \gls{nr} features that enable such a flexible and configurable \gls{rat} are:
\begin{itemize}
\item a flexible \gls{ofdm} system with multiple numerologies support~\cite[Sec. 5.1]{TS38300},~\cite{zaidi:16, 5GLENA},
\item configurable frame and slot structures that allow fast \gls{dl}-\gls{ul} switch for bidirectional transmissions~\cite[Sec. 4.3.2]{TS38211},~\cite{qualcomm:15b},
\item a mini-slot-based transmission which, for the unlicensed bands, may also provide an efficient way to reduce the latency from \gls{cca} end to the start of the \gls{nru} transmission~\cite{R1-1708121,R1-1803678},
\item the definition of bandwidth parts and bandwidth adaptation for energy-saving purposes as well as to multiplex services with different \gls{qos} requirements~\cite[Sec. 6.10]{TS38300},~\cite{lagen:18c, biljana:18},
\item support for beam management procedures (including beam determination, measurement, reporting, and sweeping) at both sub 6 GHz and mmWave bands~\cite[Sec. 8.2.1.6.1]{TR38912},~\cite{giordani:19,R1-1612345,R1-1702604},
\item new dynamic \gls{nr} scheduling timing parameters~\cite{TS38213,TS38214} that flexibly govern the communication timings between \gls{gnb}s and \gls{ue}s
and notably reduce the high processing delays of \gls{lte}.
\end{itemize}
In Table~\ref{table:LTEuNRu}, we compare \gls{nr} operation in unlicensed bands with the different variants of \gls{lte} in unlicensed spectrum, i.e., \gls{laa}, \gls{lteu}, and MulteFire.
Thanks to the flexibility inherited from \gls{nr}, the \gls{nru} system has great potential to perform well in coexistence scenarios.
As compared to \gls{lte} in unlicensed spectrum, in \gls{nru}, we may expect: 1) a lower interference generation owing to the beam-based transmissions that allow exploiting the spatial domain, and 2) a lower latency thanks to the reduced processing times as well as the better scheduling time-resource granularity provided by the \gls{nr} numerologies.
The designs of the \gls{laa} and MulteFire technologies have addressed the worldwide regulatory requirements of the 5 GHz band through enhancements over \gls{lte}.
For \gls{nru}, further flexibility is needed to meet the worldwide regulatory requirements of multiple operational bands, as well as to support these requirements under beam-based transmissions.
Some of the design principles that need to be rethought in beam-based \gls{nru} are~\cite{R1-180xxxx}:
\begin{itemize}
\item the channel access procedure,
\item the \gls{cot} structure,
\item the initial access procedure,
\item the \gls{harq} procedure, and
\item the \gls{mac} scheduling scheme.
\end{itemize}
As previously highlighted, traditional \gls{lbt} might be insufficient under beam-based transmissions. As such, new regulation-compliant and distributed channel access procedures are needed. As far as the \gls{cot} structure is concerned, \gls{nr} inherently includes a very flexible design due to the multiple numerologies support, but it can still be optimized for unlicensed-based access in \gls{tdd} systems to meet the \gls{mcot} limit while reducing the access delay and enabling fast \gls{dl}-\gls{ul} responses when needed. The initial access and \gls{harq} procedures that have been adopted in \gls{nr} can be reused for \gls{nru}. However, some initial access principles need to be rethought to meet the regulatory requirements (e.g., \gls{ocb}). Moreover, in the case of standalone operation in unlicensed spectrum, the \gls{harq} and initial access procedures need to be improved to mitigate the negative impact that \gls{lbt} could have on the latency performance.
In the next sections, we highlight the problems, review the available solutions, and propose new potential solutions, for each of these \gls{nru} procedures.
We would like to recall that all these procedures are subject to standardization.
\section{Channel Access Procedures for \gls{nru}}
\label{sec:channelaccess}
\gls{nru} is required to ensure fair coexistence with other incumbent \gls{rat}s according to the regulatory requirements in the corresponding bands.
An appropriate channel access design, including \gls{lbt}, is the key to allow a fair coexistence in all the \gls{nru} deployment scenarios shown in Fig.~\ref{fig_scenarios} (carrier aggregation, dual connectivity, and standalone), even when not mandated by the regulation~\cite[Sec. 7.6.4]{R1-180xxxxb},~\cite[Sec. 7.6.4]{R1-180xxxx}.
The \gls{lbt} aspects that need to be designed and/or improved for beam-based \gls{nru} beyond \gls{lbt} mechanisms in \gls{laa} and MulteFire, are:
\begin{itemize}
\item \textit{\gls{lbt} for beam-based transmissions}: \gls{lbt} is a spectrum sharing mechanism that works across different \gls{rat}s. As explained in Section~\ref{sec:back}, it suffers from the hidden node and exposed node problems, which become even more likely and pronounced in the case of beam-based transmissions. When an omnidirectional antenna pattern is used for carrier sense while a directional antenna pattern is used for (beam-based) transmission (as happens in WiGig), there is a higher chance of a node being exposed. If the direction of the communication is known, directional carrier sense may help in certain situations, but it may also lead to hidden node problems.
In this line, as highlighted by 3GPP, the effects of the directivity of the carrier sense for beam-based \gls{nru} should be thoroughly studied and improved to maximize the system performance~\cite{R1-1806761,R1-1719841}.
\item \textit{Receiver-assisted \gls{lbt} for beam-based transmissions}: \gls{lbt} has been widely adopted in LAA and MulteFire. However, as introduced in Section~\ref{sec:back}, in the case of beam-based transmissions, there are interference situations that can no longer be detected with carrier sense at the transmitting node (\gls{gnb}), because listening to the channel at the transmitter may not detect activity near the receivers. The receivers are in a better position to assess potential interference, and thus
the assistance from the \gls{ue} to the \gls{gnb} can help to better manage interference. Therefore, as agreed among 3GPP members, interference mitigation schemes that utilize information from the \gls{ue} need to be considered for beam-based \gls{nru}~\cite{R1-1806761,R1-1804870}.
\item \textit{Intra-\gls{rat} tight frequency reuse}: Modern cellular networks in licensed spectrum employ full frequency reuse along with interference management techniques to mitigate inter-cell interference. \gls{nru} channel access procedures could adopt similar principles within the same \gls{rat}, or at least within nodes of the same \gls{rat} that are deployed by the same operator. However, as \gls{lbt} operation based solely on \gls{ed} is inherently uncoordinated, it results in unnecessary blocking among different nodes of the same \gls{rat}, and thus it reduces the spatial reuse and efficiency as compared to full frequency reuse. Accordingly, new frequency reuse methods are needed to avoid \gls{lbt} blocking within \gls{nru} devices of the same operator, or among devices of different operators if coordination among them is permitted, as highlighted by 3GPP~\cite{R1-1806548,R1-1719841}.
\item \textit{\gls{cws} adjustment for beam-based transmissions}: The \gls{cws} is an \gls{lbt} parameter that controls the backoff period after collisions for Cat 4 \gls{lbt}, i.e., the \gls{lbt} category used for data transmissions (see Section~\ref{sec:NRU}.B). LAA-based technologies update the maximum \gls{cws} based on \gls{harq} feedback, and in particular based on the percentage of \gls{nack}s received. This procedure has some drawbacks, as \gls{nack}s do not necessarily reflect collisions and introduce delays into the \gls{cws} update procedure. Moreover, under beam-based transmission, the directionality also means that some collisions may not be related to the transmit beam for which the \gls{cws} is being updated, e.g., collisions due to interference coming from other directions. As such, from the authors' point of view, new procedures for \gls{cws} adjustment under beam-based transmissions should be defined for \gls{nru}.
\end{itemize}
Further in this section, we review the above challenges in more detail and discuss solutions to each of them.
\subsection{LBT for Beam-based Transmissions}
\label{sec:LBT}
Two \gls{lbt} sensing approaches are envisioned for \gls{nru} to ensure a fair multi-\gls{rat} coexistence in unlicensed bands with beam-based transmissions: \textbf{\gls{omnilbt}} and \textbf{\gls{dirlbt}}~\cite{R1-1713785}. \gls{omnilbt} senses omnidirectionally, while \gls{dirlbt} senses in a directional manner within the transmit beam towards the intended receiver. Wi-Fi and \gls{wigig} use \gls{omnilbt}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.93\textwidth]{figures_paired}
\caption{Behavior of (a) \gls{omnilbt}, (b) \gls{dirlbt}, (c) \gls{pairlbt}, and (d) \gls{lbtswitch} techniques, assuming beam-based transmissions and that LBT is implemented at gNB during an on-going AP-to-STA transmission, for fully-aligned (top), aligned transmitters (middle), and aligned receivers (bottom) deployment configurations.}
\label{fig_LBT}
\end{figure*}
Under directional transmissions, \gls{omnilbt} causes overprotection because a transmission is prevented even if a signal is detected from a direction that may not create harmful interference for the intended receiver. This is an exposed node problem: as shown in Fig.~\ref{fig_LBT}.(a)-middle, the \gls{gnb}-\gls{ue} pair could have reused the spectrum but is prevented from doing so by \gls{omnilbt} at the \gls{gnb}. \gls{omnilbt} is only correct when transmissions are aligned in space, see Fig.~\ref{fig_LBT}.(a)-top. In contrast, \gls{dirlbt} does not create overprotection because it only senses the spatial direction in which the transmission will be carried out (see Fig.~\ref{fig_LBT}.(b)-middle). However, in \gls{dirlbt}, on-going nearby transmissions might not be detected, and directional hidden node problems may cause interference, as shown in Fig.~\ref{fig_LBT}.(b)-top, because the transmission of the \gls{ap} lies within the antenna boresight of the \gls{ue}. As a result, \gls{omnilbt} is overprotective and prevents spatial reuse, while \gls{dirlbt} enables spatial reuse at the cost of some hidden node problems.
\begin{table*}[!t]
\small
\centering
\begin{tabular}{|m{5cm}||m{2.6cm}|m{2.6cm}|m{2.6cm}|m{2.6cm}|}
\hline
& \textbf{\gls{omnilbt}} & \textbf{\gls{dirlbt}} & \textbf{\gls{pairlbt}} & \textbf{\gls{lbtswitch}}\\ \hline \hline
Type of carrier sense & omnidirectional & directional & directional & omni/directional \\ \hline
Fixed or dynamic type of carrier sense & fixed & fixed & fixed & dynamic \\ \hline
UE-dependent carrier sense & no & yes & yes & yes \\ \hline
Number of carrier sense stages & 1 & 1 & 2 or more & 1 \\ \hline
Information from UE & - & - & optional at sync, to optimize \gls{pairlbt} parameters & on-line, to switch from \gls{omnilbt} to \gls{dirlbt}, and reverse \\ \hline
\end{tabular}
\caption{Comparison of channel access procedures that use carrier sense at gNB side.}
\label{table:LBT_comparison}
\end{table*}
To properly address the \gls{omnilbt}/\gls{dirlbt} trade-off, a distributed solution called \textbf{\gls{pairlbt}} is proposed in~\cite{lagen:18b}. The key idea of \gls{pairlbt} is to perform directional sensing in paired directions, i.e., in the transmitting direction (which is equivalent to performing legacy \gls{dirlbt}) and its opposite direction(s). The opposite directions can denote a single direction or a set of directions, depending on whether the beams used for carrier sense are reconfigurable or predefined from a set of previously configured beams.
In this line, in~\cite{lagen:18b}, analytic expressions are derived to optimize the parameters (beam shape and \gls{ed} threshold) for \gls{lbt} in the opposite direction(s) with the objective of reducing hidden node problems. Additional extensions to the \gls{pairlbt} are also proposed, which use the sensed power during the sensing phase in the opposite direction(s) to properly adjust the transmit/receive strategy. Fig.~\ref{fig_LBT}.(c) shows how the \gls{omnilbt}/\gls{dirlbt} trade-offs are addressed by \gls{pairlbt}.
It is shown in~\cite{lagen:18b} through simulations that the \gls{pairlbt} solution improves the carrier sensing capability by avoiding the hidden node problems that appear under \gls{dirlbt} and by stimulating the spatial reuse that is prevented under \gls{omnilbt} (see Fig.~\ref{fig_LBT}). All in all, \gls{pairlbt} is a simple and fully distributed technique that ensures a fair indoor coexistence of different \gls{rat}s in unlicensed spectrum, and it can be properly adjusted to the network density and beamwidth configurations by optimizing the \gls{lbt} parameters. Note, however, that the procedure, as defined in~\cite{lagen:18b}, applies only to indoor scenarios (i.e., the indoor \gls{mmwave} scenario shown in Fig.~\ref{fig_topologies}.(b)), since for outdoor scenarios a new dimension (the height) should be added to the definition and optimization.
Results in~\cite{lagen:18b} also demonstrate the trade-off between \gls{omnilbt} and \gls{dirlbt}. It is shown that for low network densities, \gls{dirlbt} performs significantly better than \gls{omnilbt}, while for high network densities, \gls{omnilbt} is a good technique. The trade-off is also observed based on the beamwidth configuration (narrow versus wide beams).
Based on that, another solution to deal with the \gls{omnilbt}/\gls{dirlbt} trade-off is to implement an \textbf{\gls{lbtswitch}} scheme~\cite{lagen:18d}. This scheme switches the type of carrier sense between omnidirectional and directional, based on the beamwidth configuration and the density of neighboring nodes.
Moreover, a dynamic switching method can also be implemented, where switching from \gls{dirlbt} to \gls{omnilbt} could be done based on indications like \gls{harq} feedback, \gls{ue} measurements, etc., to detect an excess of hidden node situations. To switch from \gls{omnilbt} to \gls{dirlbt}, a new procedure to measure the overprotective level of \gls{omnilbt} (i.e., an excess of exposed node situations) should be introduced, as detailed in~\cite{lagen:18d}.
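A minimal sketch of such a dynamic switching policy is given below; the event counters and the threshold are hypothetical quantities introduced only for illustration:
\begin{verbatim}
def lbt_switch(mode, hidden_node_events, exposed_node_events,
               threshold=5):
    # Sketch of dynamic omniLBT/dirLBT switching. The counters could
    # be derived from HARQ feedback and UE measurements (hidden nodes)
    # or from an overprotection metric (exposed nodes).
    if mode == "dir" and hidden_node_events > threshold:
        return "omni"  # too many hidden nodes: widen the sensing
    if mode == "omni" and exposed_node_events > threshold:
        return "dir"   # too overprotective: sense the transmit beam only
    return mode

print(lbt_switch("dir", hidden_node_events=7, exposed_node_events=0))
\end{verbatim}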
The \gls{omnilbt}-\gls{dirlbt} trade-off, as well as how the \gls{pairlbt} and \gls{lbtswitch} procedures address the trade-off, is shown in Fig.~\ref{fig_LBT} for three different deployment configurations in a \gls{dl} scenario with two pairs (gNB-UE and AP-STA):
\begin{itemize}
\item \textit{top}: fully-aligned (i.e., AP, gNB, STA, and UE are aligned in the same spatial line),
\item \textit{middle}: aligned transmitters (i.e., gNB is in the coverage area of AP),
\item \textit{bottom}: aligned receivers (i.e., UE is in the coverage area of AP).
\end{itemize}
\noindent For each configuration, we illustrate the behavior under different gNB channel access procedures and what happens when the sensing strategy fails (e.g., interference occurs or a transmission is unnecessarily prevented). In the case of the \gls{lbtswitch} technique, we depict the sensing strategy (\gls{dirlbt}, \gls{omnilbt}) that the gNB would use in each of the deployment configurations. The correct gNB behavior in each deployment configuration is: \textit{transmission prevented} for the fully-aligned configuration (which occurs with \gls{omnilbt}, \gls{pairlbt}, \gls{lbtswitch}), \textit{transmission allowed} for the aligned transmitters configuration (which occurs with \gls{dirlbt}, \gls{pairlbt}, \gls{lbtswitch}), and \textit{transmission prevented} for the aligned receivers configuration (which is not achieved with any of the methods).
In Table~\ref{table:LBT_comparison}, we provide a summary of the requirements of each LBT-based strategy to illustrate the differences in the implementation complexity.
Note that \gls{omnilbt}, \gls{dirlbt}, and \gls{pairlbt} are distributed procedures that can be implemented without the UE's assistance,
while \gls{lbtswitch} is also distributed but requires information from the UE to properly adapt the type of carrier sense based on the UE's observations (see Fig.~\ref{fig_LBT}.(d)). \gls{dirlbt}, \gls{pairlbt}, and \gls{lbtswitch} require knowledge of the intended beam's direction (towards the UE) to perform the carrier sense, while \gls{omnilbt} does not. Moreover, \gls{pairlbt} needs at least two sensing stages before every channel access, which could increase the sensing overhead if only a single radio-frequency chain can be used at a time.
None of the LBT schemes discussed so far can properly address the aligned receivers configuration (see Fig.~\ref{fig_LBT}-bottom). In this configuration, if the AP is transmitting towards the STA with its transmit beam (green beam) and the gNB then wants to access the channel to serve the UE by performing LBT (either \gls{dirlbt}, \gls{omnilbt}, \gls{pairlbt}, or \gls{lbtswitch}), the gNB will sense the channel as idle. This enables the gNB to proceed with a directional data transmission towards the UE (yellow beam), which will generate interference onto the STA, while the UE will also receive interference from the AP. In the next section, we discuss receiver-assisted LBT, which can help to prevent the transmission in this configuration.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.67\textwidth]{figures_lbr2}
\caption{Receiver-assisted LBT procedure to solve the incorrectness of sensing at gNB side under beam-based transmissions, through triggering carrier sense at the receivers (UE1 and UE2). The LBR trigger and LBR feedback messages can be sent over a licensed (carrier aggregation and dual connectivity scenarios) or unlicensed carrier (standalone scenario), while carrier sense (LBR) is performed in the unlicensed band.}
\label{fig_lbr}
\end{figure*}
\subsection{Receiver-Assisted LBT for Beam-based Transmissions}
\label{sec:recLBT}
As shown before, there are situations (e.g., aligned receivers) in which on-going nearby beam-based transmissions cannot be detected at the gNB through any of the LBT-based schemes, thus causing hidden node problems (see Fig.~\ref{fig_LBT}-bottom). In these cases, it is the receiver (UE) which has useful information that can be properly exploited for a successful, fair, and friendly channel access in unlicensed bands with beam-based transmissions.
To address these situations, a Listen-After-Talk (\textbf{\gls{lat}}) technique based on message exchange was proposed in~\cite[Sec. 8.2.2]{D41mmMagic}. \gls{lat} adopts the opposite logic to \gls{lbt}: the default mode for a transmitter is `to send data', and data is withheld only when it is confirmed that the channel is occupied by interfering transmissions. That is, the transmitter transmits when data packets arrive and then, in case a collision is detected by the receiver, coordination signaling is used to avoid future collisions. Therefore, \gls{lat} involves the receiver sensing the channel directly.
However, \gls{lat} does not use \gls{lbt}, and so it is not compliant with the regulatory requirements in the unlicensed spectrum at 5 GHz and 60 GHz bands in some regions~\cite{ETSI302567,ETSI301893}. Accordingly, it is a potential approach for the USA and China, as well as for the shared bands without the \gls{lbt} requirement, but not for Europe and Japan in 5 GHz and 60 GHz bands.
Wi-Fi and \gls{wigig} use an optional \textbf{RTS/CTS} mechanism to reduce intra-RAT collisions caused by hidden node problems. This mechanism involves physical carrier sense and virtual carrier sense but only solves intra-RAT interference problems, as IEEE 802.11 messages are not decodable by \gls{nr} devices. Note that the RTS/CTS protocol is not currently adopted in the LAA and MulteFire technologies. However, from the authors' point of view, it may be worth reconsidering it for \gls{nru} to deal with intra-RAT problems, since the hidden node and exposed node problems become more severe under beam-based transmissions.
Another potential solution, which relies only on the physical carrier sense part of RTS/CTS, is Listen-Before-Receive (\textbf{\gls{lbr}})~\cite{lagen:18}. In this mechanism, the \gls{gnb} triggers the \gls{ue} to perform carrier sense, and only if the \gls{ue} responds can the \gls{gnb} initiate the transmission. Carrier sense is used before sending the trigger and feedback messages over the unlicensed carrier, thus addressing the \gls{nru} standalone scenario. The solution is illustrated in Fig.~\ref{fig_lbr}, where the messages are referred to as LBR trigger and LBR feedback. In~\cite{lagen:18}, it is also shown how to implement LBR to complement \gls{lbt} in \gls{nr} by exploiting the \gls{nr} flexible slot structure.
Depending on the omnidirectional/directional sensing that is performed at the \gls{gnb} (\gls{dirlbt}/\gls{omnilbt}) and at the \gls{ue} (dirLBR/omniLBR), different \gls{lbt}-LBR combinations may arise. Among all of them, it is found that \gls{dirlbt}-dirLBR is the best technique and provides significant enhancements in interference management as compared to transmitter-only based sensing approaches~\cite{lagen:18}.
In line with the \gls{lbr} proposal, some solutions suggest sending the LBR trigger and LBR feedback (see Fig.~\ref{fig_lbr}) over the licensed carrier. This is the case of the so-called \textbf{closed-loop \gls{lbt}}, introduced in~\cite{R1-1802611}, which is useful for the carrier aggregation and dual connectivity scenarios. This way, by utilizing the licensed carrier, the closed-loop \gls{lbt} procedure can become more robust to the channel availability uncertainties of the unlicensed spectrum, thus resulting in lower latency.
\begin{figure*}[!t]
\centering
\hspace{2cm} \includegraphics[width=0.75\textwidth]{LBTcoord}
\caption{\gls{lbt} blocking for (a) nodes of different RATs and (b) nodes of the same RAT.}
\label{fig_LBTcoord}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.74\textwidth]{LBTcoord_sol} \hspace{1.4cm}
\caption{Solutions to avoid \gls{lbt} blocking among nodes of the same RAT/operator, through (a) self-defer approach or (b) \gls{lbt} coordination in frequency-domain.}
\label{fig_LBTcoord2}
\end{figure*}
RTS/CTS, \gls{lbr}, and closed-loop \gls{lbt} solutions are generally referred to as \textbf{receiver-assisted \gls{lbt}}, as illustrated in Fig.~\ref{fig_lbr}. It was agreed by 3GPP to analyze whether receiver-assisted \gls{lbt} approaches, as well as on-demand receiver-assisted \gls{lbt}, enable enhancing \gls{nru} performance beyond the baseline \gls{lbt} mechanism~\cite[Sec. 7.6.4]{R1-180xxxx}.
The sensing stages of the receiver-assisted \gls{lbt} procedure (i.e., \gls{lbt} at the \gls{gnb} and \gls{lbr} at the \gls{ue}) can use any of the sensing strategies discussed in Section~\ref{sec:LBT} (directional, omnidirectional, paired, or switching),
so that multiple \gls{lbt}-\gls{lbr} combinations can be formed. In Section~\ref{sec:eval}, we evaluate and compare different \gls{lbt}-\gls{lbr} techniques.
The efficiency of receiver-assisted \gls{lbt} is achieved at the cost of additional message exchange between gNB and UE before every channel access (see Fig.~\ref{fig_lbr}). For different \gls{nr} numerologies\footnote{Each numerology in \gls{nr} specifies a \gls{scs} and a slot length, therefore, it influences the \gls{dl}-\gls{ul} handshake timings, see~\cite[Sec. 5.1]{TS38300}.}, the overhead to implement receiver-assisted \gls{lbt} can be quantified in terms of percentage of the \gls{mcot} ($9$ ms as per ETSI for the 60 GHz band~\cite{ETSI302567}) that is used to perform the message exchange and sensing at UE side before every channel access. If we assume that one slot is required to perform a complete message handshake, which includes \gls{lbr} trigger transmission, sensing at UE side, and LBR feedback transmission, the percentage of the \gls{mcot} used for the handshake will be $11.11\%$ (\gls{scs}=$15$ kHz), $5.55\%$ (\gls{scs}=$30$ kHz), $2.77\%$ (\gls{scs}=$60$ kHz), $1.38\%$ (\gls{scs}=$120$ kHz), $0.69\%$ (\gls{scs}=$240$ kHz). This reflects the penalty in the spectral efficiency of the NR-U system.
It is observed that for the high numerologies (\gls{scs}=$60$, $120$, $240$ kHz), i.e., the ones used at mmWave bands, the overhead is below $3\%$.
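Up to rounding, these percentages follow directly from the slot duration of each numerology relative to the $9$ ms \gls{mcot}, as the following short computation reproduces:
\begin{verbatim}
MCOT_MS = 9.0  # ETSI MCOT in the 60 GHz band

for scs_khz in (15, 30, 60, 120, 240):
    slot_ms = 15.0 / scs_khz              # slot length scales as 1/SCS
    overhead = 100.0 * slot_ms / MCOT_MS  # one handshake slot per access
    print("SCS=%d kHz: slot=%.4f ms, overhead=%.2f%% of MCOT"
          % (scs_khz, slot_ms, overhead))
\end{verbatim}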
\subsection{Intra-RAT tight Frequency Reuse}
Apart from the \gls{lbt} sensing strategies analyzed in the previous sections, another problem that may arise due to the uncoordinated \gls{lbt} among different nodes of the same \gls{rat} is the unnecessary blocking of transmissions, which leads to a degradation in spatial reuse.
As previously described, cellular networks have been appropriately designed to allow full frequency reuse since they have effective interference management techniques (e.g., adaptive rate control, power control, coordinated multi-point (CoMP), enhanced inter-cell interference coordination (eICIC)) to mitigate inter-cell interference within the nodes of a single \gls{rat} (e.g., \gls{nr} from a specific operator). Let us note that the transmit coordination methods, e.g., CoMP and eICIC, basically coordinate the data transmissions, which in case of NR-U occur in the unlicensed band once the channel access is obtained. Therefore, there is no need to block a transmission through \gls{lbt} among devices of the same \gls{rat} that can be coordinated for transmission in the unlicensed spectrum.
An example of the \gls{lbt} blocking is shown in Fig.~\ref{fig_LBTcoord}, for (a) nodes of different \gls{rat}s and (b) nodes of the same \gls{rat}. In Fig.~\ref{fig_LBTcoord}.(a), the \gls{ap} has accessed the channel and then blocks transmission of the \gls{gnb}, since the \gls{gnb} senses the channel as busy with \gls{lbt}. In this case, the \gls{gnb} has to wait for the transmission of the \gls{ap} to finish and its own backoff procedure to access the channel. This behavior is correct. However, in Fig.~\ref{fig_LBTcoord}.(b), \gls{gnb}1 has accessed the channel and is blocking the transmission of \gls{gnb}2 (a node of the same \gls{rat} and operator), which detects the channel as busy.
In this case, \gls{gnb}2 must defer its transmission due to unnecessary \gls{lbt} blocking. Therefore, there is room for improvement in \gls{nru}.
One solution is presented in~\cite{R1-1719841,R1-1803679}, where a method for joint channel access using \textbf{self-defer} within a group of neighboring \gls{gnb}s/\gls{trp}s of the same operator is proposed. After a successful \gls{lbt} for joint channel access, the group members self-defer their transmissions simultaneously so that nodes within the group do not block each other.
The self-defer solution is shown in Fig.~\ref{fig_LBTcoord2}.(a). Therein, once \gls{gnb}1 gets a clear channel, it communicates with the neighboring \gls{gnb}s through the Xn interface\footnote{Xn is an \gls{nr} interface through which the \gls{gnb}s may communicate with one another, similar to the X2 interface in \gls{lte}.}, and if they are performing the \gls{cca} procedure, \gls{gnb}1 would self-defer to avoid blocking \gls{gnb}2. \gls{gnb}1 would self-defer until \gls{gnb}2 has completed the \gls{cca} check and backoff procedure. This solution addresses simultaneous accesses. However, it does not resolve the case in which a node has already accessed the channel and may block neighbor transmissions of the same RAT and/or operator that have not yet started the \gls{cca} check. Also, during the self-deferral period, there is a risk that nodes of other RATs and/or operators occupy the channel.
Another option that we hereby propose is to use \textbf{\gls{lbt} coordination} procedures among neighboring gNBs/TRPs of the same operator. \gls{lbt} coordination consists in coordinating the LBT processes before starting the data transmission. We foresee that \gls{lbt} coordination to finalize the backoff procedure can be performed either in the time or in the frequency domain. A possible procedure for frequency-domain \gls{lbt} coordination is illustrated in Fig.~\ref{fig_LBTcoord2}.(b). If a \gls{gnb} (\gls{gnb}2 in the figure) is able to detect that the node occupying the channel is a node of its own \gls{rat} and operator (\gls{gnb}1 in the figure), it could send a message over the Xn interface to request \gls{lbt} coordination from \gls{gnb}1. After receiving such a request, \gls{gnb}1 could release part of the channel bandwidth (frequency-domain \gls{lbt} coordination) and/or some slots (time-domain \gls{lbt} coordination) for \gls{gnb}2 to complete the backoff procedure. The part of the channel bandwidth and/or the slots to be released, as well as the starting point for transmit coordination, could be communicated through Xn so that both \gls{gnb}s, after the backoff procedure is completed, could start with transmit coordination, thus improving the spatial reuse.
Note that, in the case of time-domain \gls{lbt} coordination, the same problems as in the self-defer approach arise. That is, other nodes may occupy the channel during the request-enabled \gls{lbt} coordination process. Nevertheless, this does not happen in the case of frequency-domain \gls{lbt} coordination, since \gls{gnb}1 does not release the full spectrum bandwidth and other \gls{rat}s would still detect the channel as busy. In this case, \textbf{bandwidth part-based \gls{lbt}} is needed, i.e., \gls{gnb}2 should implement \gls{cca} only in the released bandwidth part (as illustrated in Fig.~\ref{fig_LBTcoord2}.(b), second \gls{cca} block for \gls{gnb}2), and then transmit in such bandwidth part.
To further improve the proposal and facilitate detecting that the channel is busy due to a \gls{gnb} of the same RAT/operator, once a \gls{gnb} gets access to the channel, it could inform nearby \gls{gnb}s over the Xn interface.
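A minimal sketch of the proposed frequency-domain \gls{lbt} coordination is given below; the \gls{gnb} objects, the message format, and the bandwidth part handling are our own illustrative assumptions:
\begin{verbatim}
def lbt_coordination(occupier, requester):
    # Sketch of frequency-domain LBT coordination between two gNBs of
    # the same RAT/operator.
    # 1) requester detects the occupier as a same-operator node and
    #    sends an LBT coordination request (conceptually over Xn).
    xn_request = {"type": "LBT_COORD_REQUEST", "from": requester["id"]}
    # 2) occupier releases part of its channel bandwidth so that the
    #    requester can complete its back-off procedure.
    released_bwp = occupier["bwps"].pop()
    # 3) bandwidth part-based LBT: the requester performs CCA only on
    #    the released bandwidth part and then transmits there.
    requester["bwps"].append(released_bwp)
    return xn_request, released_bwp

gnb1 = {"id": "gNB1", "bwps": ["BWP0", "BWP1"]}  # holds the channel
gnb2 = {"id": "gNB2", "bwps": []}                # blocked by gNB1
print(lbt_coordination(gnb1, gnb2))
\end{verbatim}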
\subsection{\gls{cws} Adjustment for Beam-based Transmissions}
In \gls{laa} Cat 4 \gls{lbt}, the \gls{cws} is updated based on \gls{harq} feedback.
If $80\%$ or more of the \gls{harq} feedbacks of one reference subframe are \gls{nack}, the maximum \gls{cws} is increased~\cite{TR36889}. Otherwise, it is reset. Note that this collision detection technique has some drawbacks. First, it is affected by the scheduler policies; e.g., collisions from different \gls{ue}s may affect the corrective actions of \gls{lbt} differently, since they depend on how many and which \gls{ue}s are simultaneously allocated in the reference subframe. Second, \gls{harq} feedback does not necessarily reflect collisions; e.g., a \gls{nack} may also occur due to a sudden signal blocking. Third, since \gls{harq} is based on soft combining techniques (i.e., incremental redundancy or chase combining), an unsuccessful transmission due to a collision may not result in a \gls{nack} in case of successful decoding thanks to the combination of multiple transmissions. Finally, it introduces delays in the \gls{cws} update. Since \gls{laa} uses the \gls{harq} feedback corresponding to the starting subframe of the most recent transmission burst, it may detect a collision only after at least $4$ ms, whereas Wi-Fi detects collisions after $16$ $\mu$s.
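The rule can be summarized in the following Python sketch; the contention window bounds and the doubling rule are illustrative simplifications (LAA steps the maximum \gls{cws} through a predefined set of values):
\begin{verbatim}
def laa_cws_update(cw, harq_feedback, cw_min=15, cw_max=1023):
    # Sketch of the LAA Cat 4 CWS update: if 80% or more of the HARQ
    # feedbacks of the reference subframe are NACK, the CWS is
    # increased (doubled here, up to cw_max); otherwise it is reset.
    if not harq_feedback:
        return cw
    nack_ratio = harq_feedback.count("NACK") / len(harq_feedback)
    if nack_ratio >= 0.8:
        return min(2 * cw + 1, cw_max)  # exponential increase
    return cw_min                       # reset on success

print(laa_cws_update(15, ["NACK"] * 4 + ["ACK"]))  # -> 31
\end{verbatim}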
We would like to remark that, at the time of writing, the problems that we highlight next in this section have not yet been identified in the literature and, consequently, no solutions are available. Also, the \gls{cws} adjustment criterion for Cat 4 \gls{lbt} in \gls{nru} has not been defined yet.
For \gls{nru}, the same issues listed above for \gls{laa} will also appear if \gls{harq} feedback is used for the \gls{cws} update, except that the \gls{harq} feedback delay may be reduced thanks to the flexible \gls{nr} slot structure. Moreover, for beam-based \gls{nru}, a collision reported by \gls{harq} feedback may not be correctly linked to the transmit beam. As mentioned in the previous section, \gls{lbt} (and the extended \gls{cca} procedure) only makes sense if the beams of neighboring \gls{gnb}s are aligned. If the \gls{gnb}s/\gls{ap}s saw each other (as shown in Fig.~\ref{fig_cws}.(b)), they would back off to each other and thus randomize their accesses, taking advantage of the \gls{cws} increase. However, if the beams of neighboring nodes are not aligned (see Fig.~\ref{fig_cws}.(a)), \gls{lbt} is not effective, even if the \gls{cws} is increased. The \gls{gnb}s/\gls{ap}s never enter the backoff phase, so the access randomization effect is not produced. In particular, in the scenario of Fig.~\ref{fig_cws}.(a), both the \gls{gnb} and the \gls{ap} listen to the channel and find it free; thus, they both access the channel and collide. Then, they increase the \gls{cws}, listen again, find the channel free, and collide again. Therefore, in those cases where \gls{lbt} does not work properly, increasing the \gls{cws} based on the \gls{harq} procedure is even counterproductive.
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{CWS}
\caption{Situations in which \gls{lbt} and the \gls{cws} update based on HARQ feedback (a) do not make sense, and (b) make sense.}
\label{fig_cws}
\end{figure}
To summarize, we have detected two problems that arise in beam-based transmissions when the \gls{cws} update is based on \gls{harq} feedback:
\begin{itemize}
\item \textit{The lack of correlation between a collision indicated by a \gls{nack} and the transmit beam}: \gls{harq} feedbacks may refer to collisions due to interference coming from another direction, while only collisions generated on the transmit direction line are of interest for the \gls{cws} update.
\item \textit{The inability to enter the backoff phase due to an incorrect sensing phase}: transmitters that do not see each other would never enter the backoff phase to randomize their accesses, although they increase the \gls{cws} based on \gls{harq} feedback.
\end{itemize}
Therefore, for transmitters that do not see each other, it would be beneficial for the \gls{ue} to trigger the backoff procedure at its \gls{gnb} in order to randomize the \gls{gnb}'s access to the channel, since the \gls{ue} is the only node with knowledge of the interfering nodes. In addition, it would be advisable to decouple the \gls{cws} update procedure from the \gls{harq} feedback, because the latter does not properly capture the directional (and non-directional) collisions.
To solve these problems, we propose a \gls{cws} update at the \gls{gnb} that is assisted by the \gls{ue}, i.e., \textbf{receiver-assisted \gls{cws} adjustment}, based in particular on paired sensing at the \gls{ue}. That is, the \gls{ue} could carry out a paired sensing over the \gls{gnb} transmit beam line (receive direction and opposite direction(s)) and, if the channel is sensed as busy during some period, it could:
\begin{itemize}
\item Trigger backoff at the \gls{gnb} if it is not aligned to the source of interference.
\item Suggest the most appropriate \gls{cws} over the \gls{gnb} transmit beam line, based on, e.g., the percentage of slots sensed as busy during the paired sensing phase.
\end{itemize}
Hence, we suggest updating the \gls{cws} associated with the transmit beam based on statistical paired sensing at the \gls{ue} within the direction of the \gls{gnb} transmit beam.
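A minimal sketch of this receiver-assisted \gls{cws} adjustment is given below; the thresholds and the mapping from busy ratio to \gls{cws} are hypothetical choices for illustration:
\begin{verbatim}
def receiver_assisted_cws(busy_slot_ratio, interferer_aligned,
                          cw_min=15, cw_max=1023):
    # busy_slot_ratio: fraction of slots the UE sensed busy during the
    #   paired sensing phase over the gNB transmit beam line.
    # interferer_aligned: True if the gNB is aligned to the source of
    #   interference (only then does its own LBT sense the interferer).
    trigger_backoff = busy_slot_ratio > 0 and not interferer_aligned
    # Suggested CWS grows with the busy ratio observed on the beam line.
    suggested_cws = min(cw_max,
                        int(cw_min + busy_slot_ratio * (cw_max - cw_min)))
    return trigger_backoff, suggested_cws

print(receiver_assisted_cws(0.6, interferer_aligned=False))
\end{verbatim}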
\section{COT Structure for NR-U}
\label{sec:frame}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.65\textwidth]{mcot}
\caption{COT structure with (a) single \gls{dl}/\gls{ul} switch, (b) multiple \gls{dl}/\gls{ul} switches.}
\label{fig_mcot}
\end{figure*}
After a successful \gls{lbt}, a device can access the channel at most for the duration of the \gls{mcot} ($9$ ms in the 60 GHz band). The \gls{nr} frame structure inherently allows \gls{nru} to transmit and receive in a more efficient manner than the LTE-based unlicensed technologies, thanks to the numerologies, mini-slots, and flexible slot structure~\cite{R1-1804275}. Indeed, the \gls{cot} can be shared between a \gls{gnb} and its \gls{ue}s to achieve a higher spectral efficiency and faster responses under bidirectional transmissions (see details in Section~\ref{sec:NRU}.B). In this section, we review how to define the \gls{dl} and \gls{ul} transmission periods within the \gls{cot}.
There are two options considered in 3GPP to define the structure of the \gls{cot} (as illustrated in Fig.~\ref{fig_mcot}):
\begin{itemize}
\item \gls{cot} with single \gls{dl}/\gls{ul} switch or
\item \gls{cot} with multiple \gls{dl}/\gls{ul} switches.
\end{itemize}
Note that the slot length in \gls{nr} is much shorter than the \gls{mcot}. For example, with \gls{scs}=$120$ kHz, $72$ slots fit within $9$ ms, so that multiple \gls{dl}/\gls{ul} switches could be implemented.
Recently, support for the multiple \gls{dl}/\gls{ul} switch option within the \gls{cot} has been agreed for \gls{nru}~\cite[Sec. 7.2.1.1]{TR38889}.
Still, as highlighted by 3GPP, the number of switch points per \gls{cot} should be further studied in \gls{nru}~\cite{R1-1803678}.
Both aforementioned options have advantages and disadvantages.
A \textbf{\gls{cot} with a single \gls{dl}/\gls{ul} switch} has the advantages that: \textit{i}) there is a low overhead due to only one guard period (shown in gray in Fig.~\ref{fig_mcot}) and \textit{ii}) it avoids multiple \gls{lbt}s for successive \gls{dl}-\gls{ul} periods (in case the gaps are larger than $16$ $\mu$s and so a new \gls{lbt} has to be performed at each gap\footnote{Whether \gls{lbt} before an \gls{ul} transmission that follows a \gls{dl} transmission is needed or not depends on the gap length, as detailed in Section~\ref{sec:NRU}.B.}). The disadvantages are that: \textit{i}) it increases the delay to get the \gls{harq} feedback, and \textit{ii}) the \gls{gnb} has to schedule \gls{ul} far away in time, by which point the channel may no longer be available in the \gls{ul} direction (in case a new \gls{lbt} has to be performed). Accordingly, this \gls{cot} configuration is suitable for high throughput situations with relaxed latency constraints, e.g., \gls{embb} traffic.
On the other hand, a \textbf{\gls{cot} with multiple \gls{dl}/\gls{ul} switches}: \textit{i}) simplifies the timings related to \gls{harq} feedback and \textit{ii}) ensures channel availability in \gls{ul} (in case a new \gls{lbt} has to be performed), but
\textit{i}) it has a high overhead due to multiple guard periods (see Fig.~\ref{fig_mcot}) and \textit{ii}) it involves multiple \gls{lbt}s for successive \gls{dl}-\gls{ul} periods at every direction switch (in case the gaps are larger than $16$ $\mu$s). This configuration is thus suitable for delay-sensitive traffic, such as \gls{urllc} and \gls{ev2x}, as well as for low-load traffic categories, like \gls{mmtc}. However, it may not be suitable for applications with high throughput requirements (like \gls{embb}), as it provides a lower spectral efficiency due to the existence of multiple guard periods and the potential need for multiple \gls{lbt}s.
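To make the overhead side of this trade-off concrete, the following sketch estimates the fraction of the \gls{cot} spent in guard periods as a function of the number of switch points; the guard duration is an illustrative assumption:
\begin{verbatim}
def cot_guard_overhead(num_switches, guard_us=16.0, mcot_ms=9.0):
    # Fraction of the COT spent in guard periods, assuming one guard
    # period per DL/UL switch point.
    return num_switches * guard_us / (mcot_ms * 1000.0)

for n in (1, 4, 8):
    print("%d switch point(s): %.2f%% guard overhead"
          % (n, 100.0 * cot_guard_overhead(n)))
\end{verbatim}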
Based on the above advantages/disadvantages of each option, from the authors' point of view, it would be appropriate to optimize the \gls{dl}/\gls{ul} structure within the \gls{cot} based on knowledge of the traffic status and patterns (e.g., \gls{bsr} and future \gls{bsr} pattern predictions), the throughput/latency requirements of the active data flows, their category type (or 5G \gls{qos} Identifier, 5QI), and the channel status at the \gls{ue}s (percentage of busy and idle slots). The \gls{gnb} could consider the information from all the active flows for the \gls{cot} period.
In addition, given these intrinsic trade-offs, it would be beneficial for the \gls{gnb} to notify the \gls{ue}s of the selected \gls{cot} structure, preferably at the beginning of the \gls{cot}.
This would help the \gls{ue}s to prepare for performing \gls{lbt} ahead of time, as well as to anticipate the preparation of any potential transmission in a \gls{pucch} or \gls{pusch} resource.
\section{Initial Access Procedures for NR-U}
\label{sec:initialaccess}
The basic structure of \gls{nr} initial access is similar to
the corresponding functionality of \gls{lte}~\cite{parkvall:17}: 1) there is a pair of \gls{dl} signals, the \gls{pss} and \gls{sss}, which are used by the \gls{ue} to find, synchronize to, and identify a network; 2) there is a \gls{dl} \gls{pbch} that carries a minimum amount of system information and is transmitted together with the \gls{pss}/\gls{sss}; and 3) there is a four-stage \gls{rach} procedure that starts with the \gls{ul} transmission
of a random access preamble~\cite{liu:18}. In \gls{nr}, the combination of \gls{pss}/\gls{sss} and \gls{pbch} is referred to as an \gls{ss} block, and these signals are always sent together with the same periodicity. This section reviews the problems and solutions for the key features of the \gls{nru} initial access, which include the \gls{ss} block design\footnote{In the \gls{nru} standardization, the \gls{ss} block is referred to as the \gls{nru} discovery reference signal~\cite{TR38889}.}, the \gls{rach} procedure, and paging.
\subsection{\gls{ss} Block Design}
\gls{ss} blocks are used in \gls{nr} to enable radio resource management measurements, synchronization, and initial access. Therefore, for \gls{nru} operation, \gls{ss} blocks should always be transmitted in all the deployment scenarios, i.e., carrier aggregation, dual connectivity, or standalone mode (see Section~\ref{sec:NRU}.A).
An \gls{ss} block spans 240 contiguous subcarriers and 4 contiguous \gls{ofdm} symbols (as shown in Fig.~\ref{fig_SS}). The frequency location is typically not at the center of the \gls{nr} carrier (as it is in \gls{lte}) but shifted according to a global synchronization raster that depends on the frequency band~\cite[Sec. 5.4.3]{TS38101}. The time locations of the \gls{ss} blocks are determined by the \gls{scs} and the frequency range~\cite[Sec. 4.1]{TS38213}. The maximum transmission bandwidth of an \gls{ss} block has been defined to be [$5, 10, 40, 80$] MHz with [$15, 30, 120, 240$] kHz \gls{scs}, respectively.
To support beam sweeping and \gls{ss} block repetitions, multiple \gls{ss} blocks from the same \gls{gnb} are organized in time into a burst (called \gls{ss} burst), and multiple \gls{ss} bursts further comprise an \gls{ss} burst set. The periodicity of an \gls{ss} burst set is configurable from the set of \{$5, 10, 20, 40, 80, 160$\} ms (default at $20$ ms), and each \gls{ss} burst set can contain up to $64$ \gls{ss} blocks. For more details on the \gls{pss}, \gls{sss}, and PBCH signals, see~\cite{TS38211}. The synchronization procedure for cell search is detailed in~\cite[Sec. 4.1]{TS38213}, and the time-frequency structure of the \gls{ss} block is shown in~\cite[Sec. 5.2.4]{TS38300}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{SS-U}
\caption{Location of \gls{ss} block in an \gls{nr} frame for SCS=240 kHz.}
\label{fig_SS}
\end{figure}
In the following, we discuss the different challenges that arise in the \gls{ss} block design and transmission principles for \gls{nru}.
Note that \gls{ss} blocks must be sent by the \gls{gnb} even if there is no data, to enable the \gls{ue}s to search for and detect cells, synchronize to the \gls{gnb}, perform beam measurements, implement handover if required, and decode broadcast messages.
The first problem is related to the transmission of the \gls{ss} blocks under the \gls{lbt} requirement. Since an \gls{ss} block transmission may be blocked due to channel occupancy, periodic \gls{ss} block transmission may not be possible~\cite[Sec. 7.6.4.2]{R1-180xxxxb}.
This can be solved by adopting the solution used for \gls{laa} discovery reference signals, which are transmitted within a periodically occurring time window, thus increasing the chance of signal transmission~\cite{kwon:17}. Additional occasions for \gls{ss} block transmissions, beyond the legacy periodic \gls{ss} block transmissions, are proposed in~\cite{R1-1803977}, where it is also shown how to enable multiple occasions by reusing the \gls{nr} \gls{ss} block patterns.
In addition, \gls{ss} block patterns may need to be redefined to include the \gls{lbt} resource overhead and enable \gls{ss} block transmissions through multiple beams, as discussed in~\cite{R1-1806761}. The technical contribution in~\cite{R1-1803856} describes solutions to reuse the \gls{nr} \gls{ss} block patterns while leaving enough space for \gls{lbt} as well as for switching antenna weights for beam sweeping in between the different blocks.
The second problem is related to the \gls{ss} block design in the 60 GHz band, which arises from the \gls{ocb} requirement and the large channel bandwidth (see Section~\ref{sec:regulation}.B). The main problem is that \gls{ss} blocks occupy only a part of the NCB, as shown in Fig.~\ref{fig_SS}. For illustrative purposes, in Fig.~\ref{fig_SS}, the \gls{ss} block starts at the first OFDM symbol and is located at the upper-left corner, although its exact location is defined in~\cite[Sec. 4.1]{TS38213}. If \gls{ss} blocks are multiplexed with data, then the \gls{ocb} requirement may be met. However, if \gls{ss} blocks are not multiplexed with data, then the \gls{ocb} requirement is not met with the current \gls{ss} block design in \gls{nr}.
Accordingly, to meet the \gls{ocb} requirement defined by ETSI, a new design of \gls{ss} blocks in frequency domain is required for \gls{nru} operation at the 60 GHz band.
In case \gls{ss} blocks are not multiplexed with data, or they are sent with data but do not fulfill the \gls{ocb} requirement, a basic solution is to send dummy (non-useful) data in the frequency domain to meet the \gls{ocb} requirement. However, this solution is energy-inefficient and does not add any benefit from the \gls{ue} perspective.
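The issue can be seen with a simple occupancy check; the $80\%$ threshold corresponds to the ETSI \gls{ocb} requirement discussed in Section~\ref{sec:regulation}.B, while the nominal channel bandwidth value is an illustrative assumption for the 60 GHz band:
\begin{verbatim}
def meets_ocb(occupied_bw_mhz, nominal_bw_mhz, min_fraction=0.8):
    # The occupied bandwidth must span at least min_fraction of the
    # nominal channel bandwidth.
    return occupied_bw_mhz >= min_fraction * nominal_bw_mhz

ss_block_bw = 240 * 240e-3  # 240 subcarriers at SCS=240 kHz -> 57.6 MHz
print(meets_ocb(ss_block_bw, 2160))  # False: a lone SS block fails OCB
\end{verbatim}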
Other solutions that we envision to meet the \gls{ocb} requirement are:
\begin{itemize}
\item Perform \textbf{frequency-domain \gls{ss} block repetitions}, by repeating the \gls{ss} block in multiple frequency locations within the channel bandwidth. This solution uses additional power but enhances the \gls{ue} performance, as it enables receiving the \gls{ss} block with a higher signal-to-noise ratio.
\item Redesign the \textbf{time-frequency structure} of the \gls{pss}/\gls{sss}/\gls{pbch} signals in the \gls{ss} block, by restructuring the signals placement. An example is to use a frequency-domain interlaced mapping for \gls{pss}/\gls{sss}/\gls{pbch} signals so that they span over the required channel bandwidth. This solution allows meeting the \gls{ocb} requirement without incurring additional power consumption.
\end{itemize}
\subsection{RACH Procedure}
The contention-based \gls{rach} procedure in \gls{nr} has four steps~\cite[Sec. 8]{TS38213},~\cite{liu:18}, \textit{step 1}: \gls{ue} transmits a \gls{prach} preamble to \gls{gnb}, \textit{step 2}: \gls{gnb} transmits the Random Access Response (RAR) to \gls{ue} with the \gls{pusch} resource allocation to send message 3, \textit{step 3}: \gls{ue} transmits message 3 over the allocated \gls{pusch} resource, and \textit{step 4}: \gls{gnb} transmits message 4 for contention resolution.
In \gls{nru}, \gls{rach} procedures are needed and must be improved at least for the dual connectivity and standalone deployment scenarios. Carrier sense must be performed at each step of the \gls{rach} procedure, which may delay its completion if the channel is busy at any step. Therefore, high-priority channel access with Cat 2 \gls{lbt} could be preferred for \gls{rach}. Indeed, the use of two-step \gls{rach} procedures would also be of high interest to reduce the initial access delay, as proposed in~\cite{R1-1806762,R1-1803856} and also identified by \gls{3gpp}~\cite[Sec. 7.6.4.2]{R1-180xxxxb}. In particular, two-step \gls{rach} procedures require fewer \gls{lbt}s than the four-step \gls{rach} procedure. Other enhancements may include increasing the transmission opportunities for each message~\cite{R1-1804405}, as also discussed above for \gls{ss} block transmissions.
In addition to that, the \gls{prach} preamble format needs to fulfill the regulatory requirement of OCB, which will exclude some of the agreed \gls{nr} \gls{prach} formats. In Rel-14 eLAA~\cite{TS36213}, several types of \gls{prach} waveforms were studied, such as frequency-domain repetition of a licensed band preamble, Demodulation Reference Signals (DMRS) repetition in time domain with frequency-domain interlacing, and frequency-domain interlaced mapping of a licensed band preamble. This study in eLAA may provide a baseline for the design of \gls{nru} \gls{prach} interlace waveforms.
\subsection{Paging}
Paging is a \gls{rrc} procedure to activate a \gls{ue} that is in idle mode. In the unlicensed context, it is needed at least for dual connectivity and standalone deployment scenarios. A paging cycle is defined to allow \gls{ue}s to wake up and listen at predefined time slots to receive possible paging messages. The paging message is scheduled through \gls{dci} and is transmitted in the associated \gls{pdsch}.
The uncertainty of channel availability in the unlicensed bands due to \gls{lbt} makes it hard to send the paging \gls{dci} at predefined time slots.
To solve this, a time interval composed of multiple slots for potential paging message transmission has been proposed in~\cite{US20170230933,WO2017145120}. It provides the \gls{gnb} with multiple opportunities (multiple slots) to send the paging \gls{dci} as soon as \gls{lbt} allows. On the other hand, the \gls{ue} needs to listen during all the possible opportunities. With such a solution, the probability of blocking due to channel occupancy is reduced at the cost of a higher energy consumption at the \gls{ue}.
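Assuming, for illustration, that the channel is busy independently in each slot, the blocking probability decreases geometrically with the number of paging opportunities:
\begin{verbatim}
def paging_blocking_prob(p_busy, num_slots):
    # Probability that LBT blocks the paging DCI in all of the
    # num_slots opportunities (independence is a simplification).
    return p_busy ** num_slots

for n in (1, 2, 4, 8):
    print("%d slot(s): blocking prob = %.4f"
          % (n, paging_blocking_prob(0.5, n)))
\end{verbatim}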
The current \gls{nr} specification already supports a paging occasion consisting of multiple slots~\cite[Sec. 9.2.5]{TS38300} to improve the reliability of the system. Also, \gls{nr} permits the network to transmit a paging message using a different set of transmit beams or repetitions. Thus, the reliability and channel availability issues of paging for \gls{nru} can be addressed by using the time- and spatial-domain mechanisms already supported for paging in \gls{nr}.
\section{HARQ Procedures for \gls{nru}}
\label{sec:harq}
In \gls{nr}, similar to \gls{lte}, after reception of data, a device has to respond with a \gls{harq} feedback to indicate whether the data transmission was successful or not. The time duration between the initial data transmission, \gls{harq} feedback, and re-transmission, as well as the way the transmitted and re-transmitted data are combined at the receiver for decoding, define the basics of the \gls{harq} procedure. \gls{harq} in \gls{nr} supports asynchronous incremental redundancy both for \gls{dl} and \gls{ul}. In \gls{dl}, the \gls{gnb} provides the \gls{harq} feedback timing configuration to \gls{ue} either dynamically using \gls{dci} or semi-statically using \gls{rrc}. In \gls{ul}, upon reception of the \gls{sr} or \gls{bsr} from \gls{ue}, the \gls{gnb} schedules each \gls{ul} transmission and re-transmission using \gls{dci}.
In \gls{nr}, the following terminologies\footnote{Let us note that, at the time of writing, only K0, K1, and K2 are included in \gls{3gpp} technical specification~\cite{TS38331}. K3 and K4 are not included, although they were mentioned in \gls{3gpp} technical discussions~\cite{R1-1719401}, and are included in this paper to illustrate the whole \gls{harq} time-line.} are defined in terms of scheduling and \gls{harq} time-line~\cite{TS38213,TS38214,TS38331,R1-1719401}:
\begin{itemize}
\item K0: Delay between \gls{dl} allocation (\gls{pdcch}) and corresponding \gls{dl} data (\gls{pdsch}) reception,~\cite[Sec. 5.1.2.1]{TS38214},
\item K1: Delay between \gls{dl} data (\gls{pdsch}) reception and corresponding \gls{harq} feedback transmission on \gls{ul} (\gls{pucch}),~\cite[Sec. 9.2.3]{TS38213},
\item K2: Delay between \gls{ul} grant reception in \gls{dl} (\gls{pdcch}) and \gls{ul} data (\gls{pusch}) transmission,~\cite[Sec. 6.1.2.1]{TS38214},
\item K3: Delay between \gls{harq} feedback reception in \gls{ul} (\gls{pucch}) and corresponding re-transmission of data (\gls{pdsch}) on \gls{dl},
\item K4: Delay between \gls{ul} data (\gls{pusch}) reception and corresponding \gls{harq} feedback transmission on \gls{dl} (\gls{pdcch}).
\end{itemize}
Fig.~\ref{fig_harq} shows an example of \gls{dl} and \gls{ul} data transmissions along with the associated \gls{harq} feedback allocation for K0=0, K1=1, K2=1, K3=1, K4=1 slots. If \gls{pdsch} is sent in slot $n$, \gls{pucch} with \gls{harq} feedback would be sent in slot $n{+}k$, where $k$ is indicated by the field \textit{\gls{pdsch}-to-HARQ-timing-indicator} (which provides the value of K1) in the \gls{dci} in \gls{pdcch}. Moreover, \gls{pucch} resources, i.e., physical \gls{rb}s to be used for \gls{harq} feedback, are also indicated by \gls{dci} in \gls{pdcch}~\cite[Sec. 9.2.3]{TS38213}. Similarly, in \gls{ul} transmissions, \gls{pusch} resources for \gls{ul} data transmissions and re-transmissions are configured by \gls{dci} in \gls{pdcch}, where the slot timing offset K2 is part of the \textit{Time-domain resource assignment} field in \gls{dci}~\cite[Sec. 6.1.2.1]{TS38214}.
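To make this time-line concrete, the following minimal Python sketch (with hypothetical helper names that are not part of any specification) walks a single \gls{dl} \gls{harq} cycle through the K0, K1, and K3 offsets:

\begin{verbatim}
def dl_harq_timeline(n_pdcch, k0, k1, k3):
    """Slots of a DL HARQ cycle, all offsets in slots.

    n_pdcch: slot carrying the DL assignment (PDCCH);
    k0: PDCCH-to-PDSCH, k1: PDSCH-to-PUCCH,
    k3: PUCCH-to-re-transmission delays."""
    n_pdsch = n_pdcch + k0   # DL data reception
    n_pucch = n_pdsch + k1   # HARQ feedback on UL
    n_retx = n_pucch + k3    # earliest DL re-transmission
    return n_pdsch, n_pucch, n_retx

# Values of the example above (K0=0, K1=1, K3=1):
print(dl_harq_timeline(0, 0, 1, 1))   # -> (0, 1, 2)
\end{verbatim}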
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{harq}
\caption{Problems related to scheduling and \gls{harq} due to \gls{lbt}. }
\label{fig_harq}
\end{figure}
Note that K3 and K4 need to consider the processing times at the \gls{gnb} side, while K1 and K2 have to take into account the \gls{ue} processing times. In \gls{nr}, the \gls{ue} processing time is expressed in terms of OFDM symbols rather than slots (unlike the K parameters), and the following terminologies are defined:
\begin{itemize}
\item N1: the number of OFDM symbols required for \gls{ue} processing from the end of \gls{pdsch} reception to the earliest possible start of the corresponding \gls{harq} feedback transmission,
\item N2: the number of OFDM symbols required for \gls{ue} processing from the end of \gls{pdcch} containing the \gls{ul} grant reception to the earliest possible start of the corresponding \gls{pusch} transmission.
\end{itemize}
More details and specific values of N1 and N2 for different configurations and numerologies can be found in~\cite{R1-1721515,R1-1719401}.
From the \gls{harq} procedure point of view, two \gls{nr} features are important: the flexible slot structure and the mini-slot-based transmissions. The flexible slot structure may reduce the \gls{harq} delay by allowing the transmission of \gls{harq} feedback in the same slot in which \gls{pdsch} was received (\textbf{self-contained \gls{harq} feedback})~\cite{R1-1806671}, and may enable re-transmissions in the subsequent slot, provided that the processing delays at \gls{ue} and \gls{gnb} are short enough to permit it. The mini-slot-based transmissions provide scheduling support with flexible transmission durations. It also reduces the delay between the time instant when the channel is found idle and the time instant when the transmission can be started. This way it reduces the need of using reservation signals to reserve the channel until the next allowed transmission time instant boundary, which was used in LAA to protect the channel until the next subframe boundary (see~\cite[Sec. 7.2.1.1]{TR36889}).
However, in case of standalone \gls{nru}, there are two important problems associated with the HARQ operation and \gls{lbt} requirement (see Fig.~\ref{fig_harq}):
\begin{itemize}
\item \textit{\gls{ul} data blocking of an \gls{ul} \gls{harq} process}: It may happen that the \gls{ul} grant is transmitted through \gls{pdcch} but the corresponding \gls{ul} data in \gls{pusch} is blocked by channel occupancy (even in case of Cat 2 \gls{lbt} within a shared \gls{cot}). In such a case, the \gls{gnb} would assume an incorrect reception (even if there was no transmission) and so would proceed to reallocate resources for the \gls{ue} to ``re-transmit''. The problem is further aggravated in case of multi-slot scheduling, for which multiple slots are assigned to the \gls{ue} to transmit through a single \gls{ul} grant.
\item \textit{\gls{harq} feedback blocking of a \gls{dl} \gls{harq} process}: It may happen that \gls{pdcch} and \gls{pdsch} are transmitted but \gls{harq} feedback in \gls{pucch} is blocked by channel occupancy (even in case of Cat 2 \gls{lbt}).
Due to the blocking of \gls{harq} feedback transmissions, the \gls{gnb} would assume a \gls{nack} and additional re-transmissions would occur at the \gls{gnb}.
The problem is further aggravated in case of multi-slot aggregation, for which multiple \gls{harq} feedbacks of different transport blocks are multiplexed together in a single \gls{pucch} transmission.
\end{itemize}
The problem of \gls{ul} data blocking of an \gls{ul} \gls{harq} process has already been addressed in eLAA using the triggered grant~\cite{TS36213}. The key idea is to use a two-step grant process instead of a one-step one. For an \gls{ul} grant, first, a subset of the configuration parameters (for example, \gls{mcs}, \gls{tbs}, and assigned \gls{rb}s) is sent; then, at a later point, a short triggered grant is sent on \gls{pdcch} to trigger the corresponding \gls{ul} transmission. The delay to process the triggered grant and to send the \gls{ul} transmission is minimal at the \gls{ue} side, because most of the processing has already been finished based on the configuration parameters sent before the triggered grant. This allows the \gls{ue} to transmit immediately after the triggered grant without \gls{lbt}, given that the \gls{ue} transmission can be done within $16$ $\mu$s from the transmission of the triggered grant, within the shared \gls{cot}. This solution can be reused for \gls{nru}.
The problem of \gls{harq} feedback blocking of a \gls{dl} HARQ process was not present in LAA technologies, because \gls{pucch} was always sent over the licensed carrier~\cite[Sec. 10]{TS36213}. In MulteFire, this problem was partially solved using new \gls{pucch} formats, i.e., an extended \gls{pucch} format (MF-ePUCCH) and a short \gls{pucch} format (MF-sPUCCH). MF-ePUCCH is sent with \gls{pusch} using interlaced configuration, while MF-sPUCCH is sent in the \gls{lte} special subframe~\cite{multefire}.
Based on that, in MulteFire, the transmission opportunity to send HARQ feedback is defined according to the availability of either MF-sPUCCH or MF-ePUCCH (\gls{pusch} resources) for the \gls{ue}.
In addition, in case of an MF-sPUCCH transmission (if available) after the \gls{dl} data transmission, the \gls{lbt} for it could be avoided according to the shared \gls{cot} rule. However, \gls{lbt} blocking of \gls{harq} feedback can still arise when the MF-sPUCCH cannot be placed immediately after its \gls{dl} transmission.
One solution to the HARQ feedback blocking of a \gls{dl} HARQ process in \gls{nru} is postponing the HARQ feedback transmission to the next available slot/symbols that are not blocked. Such a solution of postponing the HARQ feedback has also been considered in \gls{nr} for multi-slot aggregation and \gls{dl} semi-persistent scheduling. It occurs when there is a direction conflict due to the \gls{dl}-\gls{ul} semi-static configuration or the dynamic slot format indicator (SFI). However, in these cases, both the \gls{gnb} and the \gls{ue} know that there is a direction conflict, thus the \gls{gnb} postpones the reception and the \gls{ue} postpones the transmission of the HARQ feedback. In \gls{nru}, HARQ feedback can be postponed, but the \gls{gnb} would not know that it was blocked in \gls{ul} and would assume a \gls{nack} instead. Hence, postponing the HARQ feedback alone is not sufficient in \gls{nru}.
A potential solution to the above problem can be the allocation of multiple \gls{pucch} resources for sending the HARQ feedback corresponding to a \gls{pdsch} transmission within the \gls{cot} (\textbf{opportunistic HARQ feedback}). This solution has been highlighted in~\cite{R1-1806671} as a potential enhancement for \gls{nru}. The configuration of multiple \gls{pucch} resources can be given in the \gls{dci}, which requires the definition of a new \gls{dci} format for \gls{nru}. The multiple \gls{pucch} resource configuration for HARQ feedback may include multiple time resources as well as various beams/\gls{trp}s. Once the \gls{ue} receives \gls{pdsch} in slot $n$, the \gls{ue} will check whether the activated \gls{pucch} resources for HARQ feedback are valid. If any \gls{pucch} resource after $n{+}K1$ slots is not blocked, the HARQ feedback is transmitted. If all \gls{pucch} resources are blocked, then the HARQ feedback is discarded. The \gls{gnb} must wait and check whether the HARQ feedback can be decoded in any of the allocated \gls{pucch} resources. As soon as the \gls{gnb} decodes the HARQ feedback, it can proceed with either re-transmissions or new data transmissions without monitoring the remaining allocated \gls{pucch} resources.
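A minimal Python sketch of the \gls{ue}-side selection logic for the opportunistic HARQ feedback could look as follows; the \texttt{lbt\_idle} callable is a stand-in for the actual Cat 2 sensing outcome and is our assumption, not a specified interface:

\begin{verbatim}
def send_opportunistic_feedback(pucch_slots, n_pdsch, k1, lbt_idle):
    """Try the configured PUCCH occasions in order and transmit
    in the first one that is not blocked by channel occupancy.

    pucch_slots: allocated PUCCH occasions (slot indices);
    lbt_idle(slot) -> bool: sensing outcome before 'slot'.
    Returns the slot used, or None if all occasions were
    blocked (feedback discarded; the gNB falls back to
    assuming a failed reception)."""
    for slot in pucch_slots:
        if slot >= n_pdsch + k1 and lbt_idle(slot):
            return slot   # HARQ feedback transmitted here
    return None           # all occasions blocked
\end{verbatim}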
Another option to solve the problem is to use a \textbf{triggered HARQ feedback}~\cite{R1-1806671}. That is, to use a \gls{dl} triggered grant to trigger the transmission of HARQ feedback. This is similar to the solution adopted in eLAA that is used to resolve the \gls{ul} data blocking problem.
\section{Scheduling Methods for \gls{nru}}
\label{sec:sched}
In \gls{nr}, like in \gls{lte}, dynamic scheduled access is used for both \gls{dl} and \gls{ul}, for which the scheduling decisions are made at the \gls{gnb}. Each \gls{ue} monitors multiple \gls{pdcch}s and, upon the detection of a valid \gls{dci}, follows the given scheduling decision and receives (transmits) its \gls{dl} (\gls{ul}) data. In \gls{nru}, the dynamic scheduler design has some challenging issues to solve because of the regulatory requirements for accessing the unlicensed bands. One such issue arises due to \gls{mac} \& \gls{phy} processing delays and the \gls{lbt} requirement, which we discuss in detail in Section~\ref{sec_LBTsched}. In addition, the scheduler needs to take the OCB and \gls{mcot} requirements into account. At each transmission time interval, the \gls{gnb} needs to schedule the \gls{ue}s such that the OCB requirement is fulfilled. For example, multiple \gls{ue}s may be multiplexed in the frequency domain in such a way that the OCB requirement is satisfied, e.g., by scheduling \gls{ue}s that are associated to the same beam in a slot. Also, the \gls{gnb} should take the \gls{mcot} limitation into account while scheduling different data flows because the channel availability after the \gls{mcot} cannot be ensured.
Due to \gls{lbt} requirements, scheduling schemes other than dynamic scheduled access might be more suitable for \gls{nru}, and particularly for \gls{ul} access. For example, autonomous \gls{ul} introduced in FeLAA~\cite[Sec. 4.2]{TS37213}, grant-less \gls{ul} in MulteFire~\cite{multefire}, or the configured grant defined in \gls{nr} for \gls{ul} transmissions~\cite[Sec. 5.8.2]{TS38321} might be good candidates for \gls{nru} \gls{ul} access. We discuss them in detail in Section~\ref{sec_alt}.
\subsection{Impact of Processing Delays and \gls{lbt} on the Scheduler}
\label{sec_LBTsched}
As inherited from \gls{lte}, in \gls{laa} and MulteFire technologies there is 1 ms (one \gls{lte} subframe) of \gls{mac} processing delay and 1 ms of \gls{phy} processing delay for each transmission. For example, as shown in Fig.~\ref{fig_lteprocessing}, data scheduled in subframe number 0 (SF0) can be transmitted over the air after 2 ms, in subframe number 2 (SF2). This allows two ways to perform \gls{lbt}, which are shown in Fig.~\ref{fig_lteprocessing2}: (a) \gls{lbt} before \gls{mac} processing, (b) \gls{lbt} after \gls{mac} processing\footnote{Note that the selection of the \gls{lbt} scheme out of these two options is implementation-specific and, therefore, it is not defined either in the LAA releases or in MulteFire, but one of the options has to be implemented in chipsets.}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\textwidth]{procdelay}
\caption{Processing delays in \gls{lte}.}
\label{fig_lteprocessing}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.3\textwidth]{procdelayopt}
\caption{Options to perform \gls{lbt} in LAA. For \gls{nr}, the same options apply but the subframe would correspond to a slot or symbol (depending on the device processing capabilities) that has a numerology-dependent length.}
\label{fig_lteprocessing2}
\end{figure}
In the \textbf{\gls{lbt} before \gls{mac} processing} option, the delay to access the channel, given that the channel is clear, is larger than two subframes (see Fig.~\ref{fig_lteprocessing2}.(a)). In this solution, the \gls{mac}/\gls{phy} configuration of the current transmission can be modified based on the \gls{lbt} outcome (e.g., adjust the \gls{mcs} based on the sensed power during \gls{lbt}). In the \textbf{\gls{lbt} after \gls{mac} processing} option, if the channel is clear, then the delay to access the channel is lower than one subframe (see Fig.~\ref{fig_lteprocessing2}.(b)). If the channel is not clear within the duration of the \gls{phy} processing, then the \gls{mac} \gls{pdu} needs to be rescheduled, which will incur an access delay of more than three subframes to reschedule at \gls{rlc} and then reprocess at \gls{mac} and \gls{phy}. In addition, in this case, when the channel is clear, the \gls{mac}/\gls{phy} configuration of the current transmission cannot be modified based on the \gls{lbt} outcome. In both options, when the channel is clear, reservation signals may be needed to reserve the channel until the subframe boundary at which the data transmission starts. In line with the above, the \gls{lbt} before and after \gls{mac} processing solutions have clear trade-offs. \gls{lbt} before \gls{mac} processing provides more flexibility at the scheduler, but it requires the use of reservation signals during \gls{mac} and \gls{phy} processing for a long duration. On the other hand, \gls{lbt} after \gls{mac} processing reduces the duration of use of reservation signals, but requires handling rescheduling if \gls{lbt} fails, which complicates the scheduler operation.
In \gls{nr}, \gls{mac}/\gls{phy} processing delays are of the order of the OFDM symbol length, for which the specific values can be derived based on the device capability and the numerology~\cite{R1-1721515}. Although the processing delays are reduced in \gls{nr}, the same trade-offs of the \gls{lbt} before and after \gls{mac} processing options described above will still exist for \gls{nru}. However, for \gls{lbt} after \gls{mac} processing, due to the small delay in accessing the channel, i.e., less than one OFDM symbol (e.g., 8.93 $\mu$s for \gls{scs} = 120 kHz), there may not be any need for using reservation signals.
This is an important aspect, since there are some suggestions in \gls{3gpp} to eliminate the use of reservation signals, which may also be prohibited by \gls{etsi} regulation in the future~\cite{R1-1714479}.
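For reference, the symbol duration quoted above follows directly from the \gls{nr} numerology (14 OFDM symbols per slot, with the slot length scaling inversely with the \gls{scs}); a small Python sketch reproducing it:

\begin{verbatim}
def ofdm_symbol_duration_us(scs_khz):
    """Average OFDM symbol length (in microseconds) for a
    given subcarrier spacing: the 1 ms subframe contains
    scs/15 slots of 14 symbols each."""
    slot_us = 1000.0 / (scs_khz / 15.0)
    return slot_us / 14.0

for scs in (15, 30, 60, 120):
    print(scs, "kHz:", round(ofdm_symbol_duration_us(scs), 2), "us")
# 120 kHz gives 8.93 us, consistent with the value quoted above
\end{verbatim}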
In the case of scheduled \gls{ul} transmissions, \gls{lbt} after \gls{mac} processing is a better solution because the scheduling decision has already been made by the \gls{gnb} and it becomes important not to lose the allocated resource for \gls{ul} access. Losing the transmission opportunity in \gls{ul} may delay successful transmission. It may also affect the \gls{dl} performance, for example, in the case of \gls{tcp}, which requires the timely transmission of \gls{tcp} \gls{ack}s in the opposite direction.
One of the solutions that we propose here to increase the probability of channel access while performing \gls{lbt} for the beam-based transmissions is to use \textbf{multiple spatial replicas} of the same transmission. This is more suitable for the \gls{dl} transmissions, where multiple \gls{trp}s or multiple beams of the same \gls{trp} can be used to generate multiple spatial replicas for the same \gls{ue}.
However, it also applies to the \gls{ul} in case the \gls{ue} has connectivity with multiple \gls{trp}s/beams. In this solution, we propose:
\begin{itemize}
\item preparing multiple replicas of the same \gls{mac} \gls{pdu} scheduled for a certain slot/symbol of a specific \gls{ue} with different beam-pairs or \gls{trp}s for that \gls{ue},
\item performing simultaneous \gls{lbt} processes on different \gls{trp}s/beams after \gls{mac} processing, and
\item then proceeding with the best beam/\gls{trp} for which \gls{lbt} is successful (i.e., that finds the channel available on time). In case multiple \gls{trp}s/beams pass \gls{lbt} successfully, the final selection can be based on the channel conditions on those \gls{trp}s/beams.
\end{itemize}
This is illustrated in Fig.~\ref{fig_lteprocessing3} for two spatial replicas, and the selection step is sketched in the code example below. The proposed solution requires a process of selecting multiple beams/\gls{trp}s for each transmission link, as well as the capability of performing \gls{lbt} simultaneously on multiple beams/\gls{trp}s. In case different \gls{trp}s are used, the sensing for \gls{lbt} can be either directional or omnidirectional. If multiple beams of the same \gls{trp} are used, then this solution only applies in case the \gls{gnb} uses directional \gls{lbt}.
In any case, it also requires the \gls{ue} to listen simultaneously on the multiple configured beams for data reception.
This method would increase the reliability and reduce the impact of \gls{lbt} failure on latency. It would also reduce the access delay and improve performance in case the \gls{mac}/\gls{phy} processing delays are of the order of the slot length and/or the use of reservation signals is not allowed.
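A minimal Python sketch of the selection step follows; the per-beam \gls{lbt} outcomes and the channel-quality metric are assumed to be available as inputs:

\begin{verbatim}
def select_replica(beams, lbt_success, channel_quality):
    """Pick the transmission branch for a MAC PDU replicated
    over several beams/TRPs: among the branches whose LBT
    succeeded, choose the one with the best channel quality.

    beams: candidate beam/TRP identifiers;
    lbt_success: dict beam -> bool (parallel LBT outcomes);
    channel_quality: dict beam -> metric (e.g. measured SINR)."""
    idle = [b for b in beams if lbt_success[b]]
    if not idle:
        return None   # all branches blocked, PDU rescheduled
    return max(idle, key=lambda b: channel_quality[b])

# Two spatial replicas, both LBT processes succeed:
print(select_replica(["trp1", "trp2"],
                     {"trp1": True, "trp2": True},
                     {"trp1": 12.0, "trp2": 17.5}))  # -> trp2
\end{verbatim}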
\begin{figure}[!t]
\centering
\includegraphics[width=0.28\textwidth]{procdelayopt2}
\caption{\gls{lbt} after \gls{mac} processing with two spatial \gls{mac} \gls{pdu} replicas and two parallel \gls{lbt} processes.}
\label{fig_lteprocessing3}
\end{figure}
In FeLAA~\cite{TS36213}, a similar kind of solution was adopted by allowing multiple starting positions in the \gls{dl} and \gls{ul} special subframes, which basically uses multiple replicas of the \gls{mac} \gls{pdu} in the temporal domain. Similarly, in~\cite{WO2017074498A1,WO2016122786A1,WO2017078796A1}, it was proposed to use multiple \gls{pdcch}s to indicate different starting positions for the special subframes, whereas in~\cite{EP3}, it was suggested to adjust the \gls{mcs} according to the remaining time available for transmission, which depends on the time instant at which \gls{lbt} finds the channel available. For \gls{lbt} after \gls{mac} processing, these solutions involve preparing multiple replicas related to different temporal starting points~\cite{WO2017074498A1,WO2016122786A1,WO2017078796A1} and \gls{mcs}s~\cite{EP3}.
\subsection{Non-dynamic Scheduling Schemes}
\label{sec_alt}
In the case of \gls{ul} dynamic scheduling, a \gls{ue} first has to send a \gls{sr}/\gls{bsr} to request an \gls{ul} grant (\gls{dci} in \gls{pdcch}) from its \gls{gnb}. Then, after receiving the \gls{ul} grant, the \gls{ue} performs the data transmission in \gls{pusch}. In unlicensed spectrum, this process needs multiple \gls{lbt}s (in particular, 3 \gls{lbt}s for the \gls{nru} standalone scenario). This means that, if the channel is occupied at any step, long delays will be incurred for \gls{ul} data transmissions. Alternative (non-dynamic) scheduling schemes may be more suitable for \gls{nru} \gls{ul} to reduce the message exchange overhead of dynamically scheduled \gls{ul}.
In Rel-14 eLAA, it was found that scheduled \gls{ul} transmission has disadvantages in terms of throughput and latency, compared to contention-based transmissions used in other coexisting \gls{rat}s, such as Wi-Fi.
To compensate for that, Rel-15 FeLAA introduced \textbf{autonomous \gls{ul}} transmissions~\cite[Sec. 4.2]{TS37213} and MulteFire defined \textbf{grant-less \gls{ul}}~\cite{multefire}, which closely resemble each other. In both autonomous \gls{ul} and grant-less \gls{ul}, there is a predefined set of radio resources, which are
configured on a per-cell basis and are for contention-based access. A \gls{ue} is allowed, after a successful \gls{lbt}, to transmit its \gls{pusch} on such resources without an \gls{ul} grant.
Therefore, autonomous \gls{ul} and grant-less \gls{ul} eliminate the handshake of \gls{sr}, \gls{bsr}, and dynamic \gls{ul} grant for \gls{ul}
access~\cite{R1-1804313}.
However, losses due to collisions and blocking owing to channel occupancy may occur in autonomous \gls{ul} and grant-less \gls{ul}. To mitigate that, if a similar approach is followed for \gls{nru}, then the multi-\gls{trp} deployment and multi-beam operation could be exploited to configure \gls{ul} transmissions to multiple \gls{trp}s, following the same approach as in the spatial-replica-based solution that we described in Section~\ref{sec_LBTsched}.
Non-dynamic scheduling schemes have also been introduced in \gls{nr} to reduce the latency of dynamic scheduled \gls{ul}.
\gls{nr} defines a new non-dynamic scheduling for \gls{ul} data transmission~\cite[Sec. 5.8.2]{TS38321}, called \textbf{configured grant}. In configured grant, the \gls{ul} data transmissions follow a semi-statically configured resource allocation corresponding to a UE-specific configured grant. The configured grant may either be provided by \gls{rrc} (Type 1) or via \gls{dci} (Type 2).
Due to the semi-static and periodic configuration of resources, configured scheduling requires less control signaling as compared to dynamic scheduling.
This is convenient for \gls{nru} \gls{ul} to simplify the \gls{sr}/\gls{bsr}/\gls{ul} grant handshake and reduce the number of required \gls{lbt}s that are needed before a \gls{ue} can successfully access the unlicensed channel~\cite{TR38889}.
Therefore, it is a potential scheme to reduce the access delay in \gls{nru} \gls{ul}, provided that its parameters, i.e., size $\gamma$ (the amount of data, in bits, given by the number of assigned resources and the \gls{mcs}) and periodicity $p$ (in number of slots), are properly configured for the available traffic pattern. For example, consider a \gls{ue} that needs to download some data from a remote host; in that case, the configured grant can be used to reserve space for \gls{tcp} \gls{ack}s every $p$ slots for an amount of data $\gamma$. Moreover, to avoid blocking of \gls{ul} transmissions on configured resources due to \gls{lbt}, the \gls{gnb} can also use the triggered grants (described in Section~\ref{sec:harq}) to enable the \gls{ue} to transmit immediately after the triggered grant.
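As a rough numerical sketch of this \gls{tcp} example (all traffic figures below are illustrative assumptions, not values taken from any specification):

\begin{verbatim}
def grant_size_bits(dl_rate_bps, mss_bytes, ack_bytes,
                    period_slots, slot_ms, segs_per_ack=2):
    """Bits to reserve per configured-grant period so that
    the UL TCP ACK stream of a DL flow fits in the grant
    (delayed ACKs: one ACK every segs_per_ack segments)."""
    segments_per_s = dl_rate_bps / (mss_bytes * 8)
    acks_per_s = segments_per_s / segs_per_ack
    period_s = period_slots * slot_ms * 1e-3
    return acks_per_s * period_s * ack_bytes * 8

# 100 Mbps DL flow, 1500-byte segments, 40-byte ACKs,
# p = 8 slots of 0.125 ms (SCS = 120 kHz):
print(round(grant_size_bits(100e6, 1500, 40, 8, 0.125)))  # ~1333
\end{verbatim}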
\section{Evaluation}
\label{sec:eval}
In this section, by simulating an \textbf{NR-U/WiGig coexistence} scenario, we evaluate the performance of the different \gls{lbt}-based channel access procedures discussed in Section~\ref{sec:channelaccess}. The evaluation of the other open design improvements (like the \gls{cot} structure, initial access, HARQ, and scheduling aspects analyzed in Sections~\ref{sec:frame},~\ref{sec:initialaccess},~\ref{sec:harq}, and~\ref{sec:sched}, respectively) is left for future work.
The details of the deployment scenario and the simulation results are given in the following sections.
\subsection{Deployment Scenario}
\label{sec:scen}
A dense indoor network deployment, composed of $K$ pairs that are randomly deployed in a $25 {\times} 25$ m$^2$ area, is considered.
We consider an NR-U/WiGig coexistence scenario, for which half of the pairs ($K/2$) are NR-U pairs (gNB-UE) and the other half ($K/2$) are WiGig pairs (AP-STA). The minimum distance among gNBs/APs is set to $1$ meter, and UEs/STAs are deployed at a random distance between $3$ and $8$ meters from the serving gNB/AP.
The downlink transmission performance is assessed, assuming that gNBs/APs operate at a carrier frequency of $60$ GHz with $1$ GHz channel bandwidth and a transmit power of $10$ dBm. The channel models of IEEE 802.11ad are used. The noise power spectral density and the noise figure are set to ${-}174$ dBm/Hz and $7$ dB, respectively.
According to the WiGig specification, we assume that APs perform \gls{omnilbt}. For NR-U gNBs, the different channel access procedures described in Section~\ref{sec:LBT}, i.e., \gls{omnilbt}, \gls{dirlbt}, \gls{pairlbt}, and \gls{lbtswitch}, are considered. We also combine each of these strategies with LBR (i.e., receiver-assisted LBT, as detailed in Section~\ref{sec:recLBT}), which are denoted by \gls{omnilbt}-LBR, \gls{dirlbt}-LBR, \gls{pairlbt}-LBR, and \gls{lbtswitch}-LBR, respectively. In addition to these schemes, we introduce a dummy design in which gNBs do not perform any LBT before a transmission, denoted as no-LBT. The no-LBT option is not compliant with the ETSI regulation~\cite{ETSI302567}, but it is included as a benchmark in the simulations.
For the LBR-based options, the additional time required to perform LBR handshake given in Section~\ref{sec:recLBT} is taken into account. For NR-U, we consider \gls{scs}=$120$ kHz, since it is a common numerology in \gls{mmwave} bands.
Directional transmissions are assumed at gNBs/APs. The transmit beam gain at gNBs/APs is fixed to $10$ dB with a transmit main lobe beamwidth of $30^\circ$, and ideal antenna radiation efficiency is assumed. For data reception, two configurations for the UEs/STAs' antennas are considered:
\begin{itemize}
\item \textbf{Omnidirectional reception}: UEs/STAs receive data omnidirectionally. In this case, for the LBR scheme, sensing at UE side will also be performed omnidirectionally.
\item \textbf{Quasi-omnidirectional reception}: the receive beam gain at UEs/STAs is fixed to $7$ dB with a receive main lobe beamwidth of $90^\circ$, while assuming ideal antenna radiation efficiency. In this case, LBR will be implemented through directional sensing (in the receive beam) at the UE side.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.88\textwidth]{figures_resultsOmni}
\caption{Performance evaluation of different NR-U channel access procedures, for omnidirectional reception at UEs/STAs. The WiGig channel access is kept as per IEEE 802.11ad standard, i.e., \gls{omnilbt}. (a) Sum-rate (Gbps) vs $K$. (b) Mean-rate during channel access (Gbps) vs $K$. (c) Number of pairs that get access to the channel when $K{=}40$, for NR-U and WiGig, separately. (d) Mean-rate during channel access (Gbps) when $K{=}40$, for NR-U and WiGig, separately.}
\label{fig_res1}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.88\textwidth]{figures_resultsQOmni}
\caption{Performance evaluation of different NR-U channel access procedures, for quasi-omnidirectional reception at UEs/STAs. The WiGig channel access is kept as per IEEE 802.11ad standard, i.e., \gls{omnilbt}. (a) Sum-rate (Gbps) vs $K$. (b) Mean-rate during channel access (Gbps) vs $K$. (c) Number of pairs that get access to the channel when $K{=}40$, for NR-U and WiGig, separately. (d) Mean-rate during channel access (Gbps) when $K{=}40$, for NR-U and WiGig, separately.}
\label{fig_res2}
\end{figure*}
The \gls{ed} threshold for LBT, normalized by the maximum antenna gain for sensing, is set to ${-}74$ dBm\footnote{Note that directional transmissions are considered, but the sensing stage can be performed either directionally or omnidirectionally. Thus, a normalized \gls{ed} threshold of ${-}74$ dBm is considered by taking into account the receive gain used for sensing, which corresponds to an \gls{ed} threshold of ${-}74$ dBm for \gls{omnilbt} and ${-}64$ dBm for \gls{dirlbt}. Similarly, in case of LBR, it depends on the data reception configuration; for omnidirectional reception, the \gls{ed} threshold is ${-}74$ dBm, while it is ${-}67$ dBm for quasi-omnidirectional reception. Recall that the noise power for $W{=}1$ GHz with a noise power spectral density of ${-}174$ dBm/Hz results in ${-}84$ dBm, and thus we consider ${-}74$ dBm as the \gls{ed} threshold in \gls{omnilbt} to account for the noise figure.}.
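The normalization used in the footnote reduces to simple link-budget arithmetic; the following Python sketch reproduces the quoted numbers:

\begin{verbatim}
import math

def noise_floor_dbm(bw_hz, nf_db=0.0):
    """Thermal noise power over bandwidth bw_hz (kTB at
    290 K is -174 dBm/Hz), plus an optional noise figure."""
    return -174.0 + 10.0 * math.log10(bw_hz) + nf_db

def ed_threshold_dbm(normalized_ed_dbm, sensing_gain_db):
    """Absolute ED threshold for a given sensing beam gain,
    starting from the gain-normalized threshold."""
    return normalized_ed_dbm + sensing_gain_db

print(noise_floor_dbm(1e9))          # -84 dBm for W = 1 GHz
print(ed_threshold_dbm(-74.0, 0.0))  # omniLBT: -74 dBm
print(ed_threshold_dbm(-74.0, 10.0)) # dirLBT:  -64 dBm
print(ed_threshold_dbm(-74.0, 7.0))  # quasi-omni LBR: -67 dBm
\end{verbatim}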
We do not emulate the backoff processes of either the WiGig \gls{cca} or the \gls{nru} \gls{lbt}; we simply consider how many pairs (connections) can reuse the spectrum according to the different channel access procedures. Simulation results are averaged over $1000$ random deployments.
For the performance metrics, we collect the sum-rate and the mean-rate during channel access. The sum-rate is the sum of the data rates of all the pairs that can simultaneously access the channel. The mean-rate corresponds to the average of the rates over the connections that get access to the channel; this may be a useful metric to measure the \gls{qos} obtained by the different RATs. In addition, to account for fairness, we also evaluate the average number of connections that get access to the channel for both the NR-U and WiGig systems.
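These metrics can be computed per deployment as in the short Python sketch below, where the input holds the per-pair data rates and a zero entry means that the pair did not get access to the channel:

\begin{verbatim}
def access_metrics(rates_by_pair):
    """Sum-rate, mean-rate during channel access, and number
    of pairs that accessed the channel, for one deployment."""
    accessed = [r for r in rates_by_pair if r > 0]
    sum_rate = sum(accessed)
    mean_rate = sum_rate / len(accessed) if accessed else 0.0
    return sum_rate, mean_rate, len(accessed)

print(access_metrics([1.2, 0.0, 0.8, 2.0]))  # (4.0, 1.33.., 3)
\end{verbatim}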
\subsection{Results and Comparison}
\label{sec:res}
We categorize the results based on the reception type implemented at the UE and STA sides. For omnidirectional reception at UEs/STAs, the collected results are shown in Fig.~\ref{fig_res1}, and for quasi-omnidirectional reception, the results are shown in Fig.~\ref{fig_res2}.
Within the figures, subfigures (a) and (b) show the sum-rate and mean-rate with different number of total pairs ($K$), respectively. Subfigures (c) and (d) depict the number of pairs that get access to the channel and their attained mean-rate in each of the systems, i.e., NR-U and WiGig, respectively, with $K{=}40$.
For omnidirectional reception, we observe that:
\begin{itemize}
\item No-LBT provides the lowest mean-rate for all $K$. It is worse than \gls{omnilbt} for coexistence since it reduces the number of WiGig connections and their attained rate (see Fig.~\ref{fig_res1}.(c)-(d)). Also, as $K$ increases, the sum-rate gets saturated due to the interference (see Fig.~\ref{fig_res1}.(a)).
\item LBT strategies at gNB side (\gls{omnilbt}, \gls{dirlbt}, \gls{pairlbt}, \gls{lbtswitch}):
\begin{itemize}
\item The \gls{omnilbt}-\gls{dirlbt} trade-off is observed. OmniLBT is overprotective (a low number of NR-U connections get access), so it obtains a lower sum-rate but a higher mean-rate than \gls{dirlbt} (see Fig.~\ref{fig_res1}.(a)-(b)). DirLBT enables spatial reuse at gNBs (a high number of NR-U connections), but hidden nodes arise, which also impacts WiGig performance negatively since more NR-U nodes access the channel and interfere (see Fig.~\ref{fig_res1}.(d)).
\item PairLBT performs similarly to \gls{dirlbt} for omnidirectional reception. It is not effective for an omnidirectional reception configuration because the LBT in the opposite direction cannot properly detect all the hidden nodes that are interfering with the UE.
\item \gls{lbtswitch} improves the mean-rate compared to \gls{dirlbt}, \gls{omnilbt}, and \gls{pairlbt}, as shown in Fig.~\ref{fig_res1}.(d). It is able to enhance the fairness of NR-U pairs as compared to \gls{omnilbt}, since more NR-U connections get access to the channel while not negatively affecting the number of WiGig accesses and their average rate. In addition, compared to \gls{dirlbt} and \gls{pairlbt}, since \gls{lbtswitch} is able to properly adapt the type of carrier sense at every gNB as a function of the observed density and activity of neighboring gNBs/APs, it provides better performance in such a coexistence scenario.
\end{itemize}
\item Receiver-assisted LBT strategies with sensing at the gNB and UE sides (\gls{omnilbt}-LBR, \gls{dirlbt}-LBR, \gls{pairlbt}-LBR, \gls{lbtswitch}-LBR): In general, sensing at the UE side provides large benefits in unlicensed bands since it overcomes the deficiencies of LBT under beam-based transmissions. This can be observed in the number of connections accessing the channel, their attained mean-rate, and the system sum-rate. We observe, however, that \gls{omnilbt}-LBR is too conservative and cannot provide the same spatial reuse and sum-rate as \gls{dirlbt}-LBR, \gls{pairlbt}-LBR, and \gls{lbtswitch}-LBR. In general, LBR acts as a good neighbor for WiGig nodes, as it impacts neither the number of WiGig nodes that access the channel nor their attained rate, while at the same time NR-U pairs achieve a much larger rate during channel access. Recall that LBR-based techniques get the same access probability as \gls{omnilbt} but, since only the properly selected gNBs access the channel, they provide a higher mean-rate (see Fig.~\ref{fig_res1}.(c)-(d)).
\end{itemize}
For quasi-omnidirectional reception, as shown in Fig.~\ref{fig_res2}, similar trends are observed but with: 1) lower relative differences in the performance among the different schemes, and 2) larger rates because of the reduced interference levels due to directional reception.
However, a few differences are observed for this configuration:
\begin{itemize}
\item PairLBT with directional reception is able to address the \gls{omnilbt}-\gls{dirlbt} trade-off, since the sensing beam for the opposite direction can be properly adjusted.
\item Although the LBR-based procedures obtain the largest \gls{qos} (mean-rate), the largest system capacity is given by the no-LBT scheme, because excessive interference does not arise thanks to the directional receptions; thus, the larger the spatial reuse, the larger the system capacity.
\item \gls{lbtswitch} gets a mean-rate similar to LBR-based approaches.
\end{itemize}
We would like to remark that information from the UE side is shown to be significantly beneficial for improving the coexistence in unlicensed bands with beam-based transmissions, particularly in the case of omnidirectional reception. This is observed in the performance of \gls{lbtswitch}-LBR.
\gls{lbtswitch}-LBR performs better than the other strategies because it includes sensing at the UE as well as a recommendation from the UE side regarding the type of carrier sense to be performed at the gNB for LBT. On the other hand, for quasi-omnidirectional reception, one type of UE feedback (either to switch the LBT strategy or to allow/prevent the access through LBR) is sufficient to simultaneously improve the spatial reuse and the \gls{qos}.
\section{Lessons Learned}
\label{sec:learned}
The lessons that we have learned and discussed throughout this article are summarized as follows.
\begin{itemize}
\item \textbf{The usage of \gls{pairlbt} and \gls{lbtswitch} in \gls{nru} helps in reducing the exposed-node and hidden-node problems, as compared to \gls{omnilbt} and \gls{dirlbt}:} Multiple solutions are available to implement carrier sense at the transmitter side for \gls{lbt} under beam-based transmissions. The two trivial solutions, i.e., \gls{omnilbt} and \gls{dirlbt}, have different trade-offs in terms of system performance, fairness, and complexity. This is due to the different types of sensing, which accentuate exposed nodes in \gls{omnilbt} and hidden nodes in \gls{dirlbt}. These trade-offs can be addressed by using paired directional sensing at the transmitter side (\gls{pairlbt}), or by switching the type of carrier sense at the transmitter as a function of the density and activity of neighboring nodes observed from the receiver side (\gls{lbtswitch}).
\item \textbf{The efficiency of \gls{pairlbt} and \gls{lbtswitch} is demonstrated in \gls{nru}/WiGig coexistence scenarios:} Results have shown that \gls{pairlbt} is useful for scenarios in which data reception is directional. Otherwise, for omnidirectional data reception, there are hidden node problems that cannot be detected at the transmitter, even with multiple paired sensing beams. On the other hand, results have shown that \gls{lbtswitch} performs better than \gls{omnilbt}, \gls{dirlbt}, and \gls{pairlbt} because it includes a recommendation from the \gls{ue} side regarding the type of carrier sense (omni or dir) to be performed at the \gls{gnb}s based on the observed potential interferers. Hence, information from the \gls{ue} side is beneficial to improve coexistence in beam-based \gls{nru}.
\item \textbf{Receiver-assisted \gls{lbt} solutions help in overcoming the deficiencies of sensing only at the transmitter side:} For beam-based communications in the unlicensed band, due to the use of directional antenna arrays, the observed channel status at the transmitter may be different from the perceived interference at the receiver side. Therefore, performing carrier sense at the transmitter side (i.e., \gls{lbt}) may not be sufficient. This can be fixed by using receiver-assisted \gls{lbt} solutions, which provide the receiver (\gls{ue}) an opportunity to sense the shared channel using LBR and assist the transmitter for channel access using a feedback. Indeed, LBR can be combined with different types of sensing at the transmitter side.
\item \textbf{The effectiveness of receiver-assisted \gls{lbt} over \gls{lbt}-based strategies is demonstrated in \gls{nru}/WiGig coexistence scenarios:} Results have shown that sensing at the UE side (LBR) provides large fairness and QoS benefits in \gls{nru}/WiGig coexistence scenarios at mmWave bands. Results confirm that RTS/CTS-like mechanisms are beneficial to \gls{nru}. Moreover, among the LBT-LBR combinations, it is observed that \gls{lbtswitch}-LBR performs better than \gls{omnilbt}-LBR, \gls{dirlbt}-LBR, and \gls{pairlbt}-LBR. This is due to the fact that, in \gls{lbtswitch}-LBR, the feedback from the UE after performing LBR includes also a recommendation for the type of LBT to be used at the transmitter side.
\item \textbf{Coordination of \gls{lbt} processes improves \gls{nru} channel reuse:} Mechanisms to enable frequency reuse among \gls{nru} devices of the same operator are needed to improve the system performance and avoid LBT blocking between devices of the same operator. The potential mechanisms to support intra-\gls{rat} tight frequency reuse are: multi-\gls{ed} strategies, self-defer schemes, and a new mechanism proposed in this paper, i.e., \gls{lbt} coordination, which enables time/frequency coordination of the resource allocation as well as coordination among the \gls{lbt} procedures of different nodes.
\item \textbf{Sensing at the receiver node is useful to properly update the \gls{lbt} \gls{cws} in beam-based \gls{nru}:} Multiple issues arise when using \gls{harq} feedback to update the \gls{cws} (as done in \gls{laa}) in the case of beam-based transmissions. This is because of the lack of correlation between a collision indicated by a \gls{nack} and the transmit beam, as well as the inability to enter the backoff phase after an incorrect sensing phase. We have proposed a solution to fix these problems by using a receiver-assisted \gls{cws} adjustment that considers paired sensing at the receiver (\gls{ue}) for the \gls{cws} update, without using \gls{harq} feedback.
\item \textbf{Multiple DL/UL switches within the \gls{cot} are beneficial for \gls{nru}:} Two options are considered for the \gls{cot} structure in \gls{nru}, i.e., a single DL/UL switch and multiple DL/UL switches, each with their pros and cons, as considered in the current discussions for the \gls{nru} specification.
To reduce the end-to-end latency, a \gls{cot} with multiple DL/UL switches is preferred. It is identified that the number of switching points should be further optimized based on the traffic patterns and flow requirements.
\item \textbf{\gls{ss} block design improvements are needed for initial access in \gls{nru}:} Multiple challenges for the \gls{ss} block design in the unlicensed context arise due to the \gls{lbt} and \gls{ocb} requirements. To reduce the \gls{lbt} impact, multiple occasions for \gls{ss} block transmissions can be used to improve the channel access probability. Some \gls{nr} \gls{ss} block patterns need to be redesigned to leave enough time for the sensing phase in between two \gls{ss} block transmissions. To meet the \gls{ocb} requirement in the 60 GHz band, new design solutions for the \gls{ss} block resource mapping are proposed. These include frequency-domain \gls{ss} block repetitions, split and/or reordering of the \gls{ss} block time-frequency structure, and frequency-domain interlaced mapping of the signals that compose the \gls{ss} block.
\item \textbf{\gls{nr} and eLAA enhancements regarding the RACH procedure can be reused for \gls{nru}:} Enhancements to the current four-step RACH procedure are needed to reduce the delay associated with it. This can be addressed by increasing the transmit opportunities for each message of the RACH procedure, simplifying the overall RACH procedure (as already contemplated in \gls{nr}), and/or enhancing the \gls{lbt} design for random access. Also, to meet the \gls{ocb} requirements, adaptation of the \gls{nr} PRACH preamble formats is needed, as was done in \gls{elaa}.
\item \textbf{Paging solutions already defined in \gls{nr} are useful for \gls{nru}:} The uncertainty of channel availability in the unlicensed context complicates the paging procedure in \gls{nru} with standalone and dual-connectivity operations. Multiple opportunities for the paging procedures, for example, using paging message repetitions through the time and/or space domains, have been identified as beneficial for \gls{nru}. Some of such solutions are already being supported in \gls{nr} specification.
\item \textbf{HARQ procedures defined in eLAA could be reused for \gls{nru}:} Two problems related to the HARQ procedure in \gls{nru} with standalone operation, caused by the usage of the \gls{lbt} requirement, have been identified: HARQ feedback blocking of a DL HARQ process and UL data blocking of an UL HARQ process. To solve the former, the concept of a triggered grant, as per eLAA, can be used. To fix the latter, solutions based on opportunistic and triggered HARQ feedback could be beneficial.
\item \textbf{There are pros and cons regarding the \gls{lbt} placement in real implementations (\gls{lbt} after or before \gls{mac}):}
Two implementation-specific solutions for what regards the \gls{lbt} placement versus the scheduling operation are: \gls{lbt} before \gls{mac} processing and \gls{lbt} after \gls{mac} processing.
For the \gls{dl} access, the pros and cons of each solution are apparent. \gls{lbt} before \gls{mac} processing provides more flexibility at the scheduler and reduces the complexity of the scheduler implementation, but it increases the access delay and may require the use of reservation signals. On the other hand, \gls{lbt} after \gls{mac} processing reduces/avoids the need for reservation signals and reduces the access delay if \gls{lbt} succeeds, but requires handling of rescheduling if \gls{lbt} fails. Although this is not discussed in the standardization, and the impact in \gls{nr} may be lower than in \gls{lte}-based unlicensed technologies due to the lower \gls{nr} processing timings, the authors believe that practical implementations should carefully analyze these aspects.
\item \textbf{A possible scheduling solution including a specific \gls{lbt} placement is to use spatial replicas:} To address the issues in the \gls{dl} access mentioned in the previous bullet, in this paper we have proposed a new scheduling solution that uses multiple spatial replicas and \gls{lbt} after \gls{mac} processing for \gls{nru} \gls{dl} access. The proposed solution exploits the multi-beam and multi-\gls{trp} deployment in \gls{nr}, while meeting the \gls{lbt} requirement in \gls{dl}, as a way to increase the reliability, to reduce the impact of \gls{lbt} failure on latency, and to reduce the access delay.
\item \textbf{Alternative \gls{ul} scheduling methods defined in \gls{nr}, FeLAA, and MulteFire are beneficial for \gls{nru}:} \gls{ul} dynamic scheduling in the unlicensed context may incur long delays to UL data transmissions. Scheduling schemes with less dynamic nature, like autonomous \gls{ul} (defined in FeLAA), grant-less \gls{ul} (used in MulteFire), or configured grant (standardized in NR), can be more favorable for \gls{nru} \gls{ul} transmissions in reducing the message exchange overhead and the access delay.
\end{itemize}
\section{Future Perspectives}
\label{sec:future}
The future perspectives and opportunities for \gls{nru} related research that we envision are:
\begin{itemize}
\item \textbf{Integration of \gls{mmwave} and sub 7 GHz licensed/unlicensed bands}: Integration of \gls{mmwave} and sub 7 GHz bands has been studied in the \gls{nr} context with licensed bands~\cite{semiari:17,semiari:18}, as well as in the \gls{wigig} context with unlicensed bands~\cite{nitsche:15}. How to potentially reuse and extend them for \gls{nru} by combining licensed/unlicensed/shared paradigms under different operational modes (i.e., carrier aggregation and standalone) is an interesting area for further research~\cite{lu:19}. Also, multi-band and multi-channel selection algorithms in this context could be investigated.
\item \textbf{\gls{nru} for ultra-reliable and low-latency communications}: The impact of \gls{lbt} on the latency performance of MulteFire has been assessed in~\cite{maldonado:18}, both analytically and through simulations. Extensions of the analytic framework and system-level simulations for \gls{nru} are of high interest to understand whether \gls{nru} can meet strict low-latency and high-reliability requirements~\cite{TR38913} and, if not, what modifications are required (if any) to support the \gls{urllc} use case.
\item \textbf{\gls{nru} for future smart factories}: Industry 4.0 has emerged as an important application for \gls{nru} since it requires wireless-connected and privately-owned networks~\cite{8207346,TR22804}. A future research line is to develop theoretical foundations for licensed, unlicensed and shared spectrum paradigms to use \gls{nru} as the \gls{rat} for future smart factories. For example, to accommodate multiple devices with diverse requirements such as extended reality applications, \gls{urllc} devices, sensors, mobile robots, etc. simultaneously.
\item \textbf{Improved beam-training for unlicensed-based access}: The impact of \gls{lbt} on the beam training processes needs to be investigated. Recently, the authors in~\cite{li:18} proposed a joint directional receiver-assisted \gls{lbt} (i.e., \gls{dirlbt}/dirLBR) and beam training scheme. It identifies the best beam pair for \gls{nru} communication by taking both channel blocking and channel quality into account. Further research in this line, including the impact on the overall network efficiency, should be pursued.
\item \textbf{Beam reciprocity in unlicensed}: Even if \gls{tdd} is used and \gls{dl} and \gls{ul} transmissions are performed within the coherence time interval, it may happen that the best beam for \gls{dl} reception is not the best beam for \gls{ul} transmissions. This is due to \gls{lbt} blocking effects and the differences in the received interference at transmitter and receiver sides, which are accentuated at \gls{mmwave} bands. Therefore, the study of best beam-pair selection, independently for \gls{dl} and \gls{ul}, jointly with the unlicensed band access constraints, could be further investigated.
\item \textbf{Grant-less \gls{ul} in the unlicensed mmWave bands}: Grant-less \gls{ul} is useful to reduce the scheduling delays and get fast access to the channel at the cost of increased collisions. Therefore, pros and cons of grant-based and grant-free access schemes should be properly evaluated for NR-U beam-based access to unlicensed spectrum. Also, optimization of the access scheme and the number of repetitions for grant-less \gls{ul} to guarantee successful access and decoding while minimizing energy consumption at \gls{ue}s could be investigated.
\end{itemize}
\section{Conclusions}
\label{sec:conc}
In this paper, we highlight the challenges and analyze the potential solutions for \gls{nr}-based access to unlicensed spectrum with beam-based transmissions. We discuss different topics such as channel access, frame structure, initial access, HARQ, and scheduling in the context of \gls{nru}. For the channel access procedures, we review the solutions to support \textit{i}) \gls{lbt} under beam-based transmissions, \textit{ii}) receiver-assisted \gls{lbt} in beam-based transmissions, \textit{iii}) intra-RAT frequency reuse improvement, and \textit{iv}) \gls{cws} adjustment in beam-based transmissions. With the help of simulations, we show that feedback from the receiver significantly improves the performance of coexistence in terms of \gls{qos} and fairness.
In terms of \gls{cot} structures, slots with multiple \gls{dl}/\gls{ul} switching points within the \gls{cot} are shown to be more suitable for \gls{nru}. For \gls{nru} initial access, we discuss the design considerations for the \gls{ss} block, the \gls{rach} procedure, and the paging procedure to take the \gls{lbt} and \gls{ocb} requirements into account. At the \gls{mac} level, two problems related to the HARQ procedures are identified, for which we describe solutions based on self-contained, triggered, and opportunistic HARQ feedback. We also discuss the issues related to dynamic scheduling in \gls{nru}, for which we propose a solution based on multiple spatial replicas, and we also indicate that existing scheduling schemes with less control signaling for \gls{ul} access, such as grant-less \gls{ul} and configured grant, may be suitable for \gls{nru}. Finally, we provide a summary of our main findings as well as future research perspectives for \gls{nru} beam-based transmissions.
\printglossaries
\section{Acknowledgments}
This work was partially funded by Spanish MINECO grant
TEC2017-88373-R (5G-REFINE) and Generalitat de Catalunya grant 2017 SGR 1195. Also, it was supported by InterDigital Communications, Inc.
\section{Introduction}\label{sec:intro}
The Accelerator Test Facility 2 (ATF2)~\cite{atfweb},~\cite{atf2proposalVol1},~\cite{atf2proposalVol2} at KEK has been designed to prove the principle of the compact final focus beam optics design based on the local chromaticity correction~\cite{finalFocus} required for future Linear Colliders (LCs) such as the International Linear Collider (ILC)~\cite{ilc} and Compact Linear Collider (CLIC)~\cite{clic}. LCs require
precision transverse position measurements for a number of applications including, but not limited to, such crucial tasks as beam-based alignment and beam optics tuning.
Cavity beam position monitors (CBPMs) have been proposed as the primary technology for high resolution position measurements.
Resolutions of a few hundred nanometres have been routinely demonstrated in installations of tens of such devices~\cite{kim}, with 10--50~nm
achieved in smaller, limited-range, high-gain systems~\cite{kim},~\cite{honda},~\cite{kimthesis}.
The ATF2 is a test facility for various beam diagnostics systems, such as the beam position monitor (BPM)
system, the laser wire (LW) and the optical transition radiation (OTR) monitor. All data used for this paper were taken at the ATF2.
\subsection{Cavity Beam Position Monitor}\label{sec:cavity}
When a charged particle beam passes through a cavity, various electromagnetic modes are excited.
The excited electromagnetic fields are defined by the cavity shape and trajectory of the passing beam.
Some of the excited modes are dependent on the transverse position of the beam. Hence, the beam position can be determined by selecting and measuring the strength of
these modes. Usually, the first dipole mode is used for position measurements as it has the strongest beam coupling among the position dependent modes and also flips the phase of the oscillations by 180$^\circ$ when the offset changes its sign relative to the electric centre of the cavity, which can be detected using an external phase reference.
Figure~\ref{fig:figure1} shows the ATF2 beam line and Figure~\ref{fig:figure2} a schematic and photographs of the area
where the test system was installed. The system, located between quadrupoles QF21X and QM16FF of the ATF2 extraction beam line,
consisted of 2 blocks containing 2 CBPMs each, but only 3 of the 4 cavities were read out.
Additional instrumentation included a reference cavity tuned in frequency to the dipole mode.
\begin{figure*}[htb]
\includegraphics[width=150mm]{figure1.pdf}
\caption{\label{fig:figure1}The ATF2 beam line with a zoom in around the test location.}
\end{figure*}
\begin{figure*}\centering
\includegraphics[width=150mm]{figure2.pdf}
\caption{\label{fig:figure2} Detailed schematic of the CBPM test region and photographs of the installed system.}
\end{figure*}
The cavities constituting the test system used for analysis in this paper are rectangular in shape to
split the $x$ and $y$ dipole modes and separate them in frequency thus reducing the cross-coupling between them. These cavities are coupled via rectangular
slots into waveguides with coaxial adaptors. This arrangement allows
the extraction of the position sensitive dipole mode and suppresses the strong
monopole modes.
The geometry is symmetric with a pair of couplers for each transverse plane.
A cavity outputs an exponentially decaying sine wave with angular frequency $\omega$
and decay constant $\tau$ defined by the geometry and material of the cavity. Assuming the bunch length is fixed, the dipole output
voltage $V_{\rm d}(t)$ is given by
\begin{equation}
\label{eq:dipole}
V_{\rm d}(t;x,\alpha,\theta) = \left[S_x x - j S_{\alpha}\alpha+ j S_{\theta}\theta \right] q e^{-t/2\tau_{\rm d}} e^{j(\omega_{\rm d} t+\phi_{\rm d})} ,
\end{equation}
where $S_x$, $S_\alpha$ and $S_\theta$ are the sensitivities to the beam position
$x$, bunch tilt $\alpha$ and beam trajectory $\theta$ respectively, and $q$ is the bunch charge.
The phase of the signal $\phi_{\rm d}$ depends on the bunch arrival time, and so is arbitrary unless an additional phase reference is used.
This is usually provided by a reference cavity, operating in the monopole mode at the same frequency as the dipole cavities. It also serves as
an independent measurement of the bunch charge, required for the position determination. The reference output voltage is
\begin{equation}
\label{eq:reference}
V_{\rm r}(t) = S_{q} q e^{-t/2\tau_r} e^{j(\omega_{r}t+\phi_r)}.
\end{equation}
Note that the difference between $\phi_{\rm d}$ and $\phi_{\rm r}$ is fixed for any 2 points along the waveforms, even if the frequencies do not match precisely.
Table~\ref{tab:ipbpmParameter} shows the design parameters of the cavities used in the experiment: the resonant frequency of the
dipole mode $f_{\rm d}$, coupling strength $\beta$, loaded quality factor $Q_{\rm L}$,
internal quality factor $Q_0$, external quality factor $Q_{\rm ext}$, normalised shunt impedance $(R/Q)_0$ and decay time $\tau$.
\begin{table}[hbt]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline \hline
Parameters & $f_{\rm d}$ [GHz] & $\beta$ & $Q_{\rm L}$ & $Q_0$ & $Q_{\rm ext}$ & $(R/Q)_0$ [$\Omega$] at 1 mm offset & $\tau$ [ns] \\
\hline
$x$ dipole & 5.7086 & 1.63 & 2067 & 5424 & 3335 & 0.549 & 58 \\ \hline
$y$ dipole & 6.4336 & 3.32 & 1217 & 5459 & 1586 & 1.598 & 30 \\
\hline \hline
\end{tabular}
\caption{Simulated parameters of the CBPMs~\cite{honda}.}
\label{tab:ipbpmParameter}
\end{table}
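As an illustration, the waveforms of Equations~\ref{eq:dipole} and~\ref{eq:reference} can be synthesised numerically. The Python sketch below uses the $y$ dipole design values from Table~\ref{tab:ipbpmParameter} and, purely for illustration, sets the sensitivities, phases and bunch charge to arbitrary values; the tilt and trajectory terms are omitted.

\begin{verbatim}
import numpy as np

def dipole_waveform(t, x, q, s_x, f_d, tau_d, phi_d):
    """Complex dipole output for a pure position offset
    (bunch tilt and trajectory terms set to zero)."""
    return (s_x * x * q * np.exp(-t / (2 * tau_d))
            * np.exp(1j * (2 * np.pi * f_d * t + phi_d)))

def reference_waveform(t, q, s_q, f_r, tau_r, phi_r):
    """Complex monopole reference-cavity output."""
    return (s_q * q * np.exp(-t / (2 * tau_r))
            * np.exp(1j * (2 * np.pi * f_r * t + phi_r)))

t = np.arange(0.0, 400e-9, 1e-9)   # 400 ns in 1 ns steps
v_d = dipole_waveform(t, x=1e-6, q=1.0, s_x=1.0,
                      f_d=6.4336e9, tau_d=30e-9, phi_d=0.3)
v_r = reference_waveform(t, q=1.0, s_q=1.0,
                         f_r=6.4336e9, tau_r=30e-9, phi_r=0.1)
\end{verbatim}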
\subsection{Signal Processing and Calibration}\label{sec:signal}
The high frequency cavity output is most commonly down-converted to a more manageable intermediate frequency (IF) of a few tens of MHz, followed by an
additional digital down-conversion (DDC) stage, or directly to the ``baseband'' (``zero IF''). In either case, the target is to obtain the amplitude and phase envelope of the position signal normalised to the reference.
We used both methods as the bandwidth of the processing electronics allowed us to do so. The down-converted signals were digitised at 100~MS/s by 14-bit digitisers.
Figure~\ref{fig:figure3} shows example waveforms processed in both the zero and nonzero IF configurations.
\begin{figure}[htbp]
\begin{picture}(100.0, 190.5)
\put(100.0,190){\small a)}
\put(330.0,190){\small b)}
\put(-5.0,0)
{\includegraphics[width=7.5cm]{figure3a.pdf}}
\put(210.0,0)
{\includegraphics[width=8.6cm]{figure3b.pdf}}
\end{picture}
\caption{Example of a) nonzero IF digitised and digitally down-converted signals for a dipole cavity
and b) zero IF digitised $I$ and $Q$ signals.}
\label{fig:figure3}
\end{figure}
The top left plot in Figure~\ref{fig:figure3} (a) is a 25 MHz down-converted raw digitised signal.
The top right plot is the same signal with the pedestal subtracted.
The signal is then mixed with a digital local oscillator (LO) and filtered to give the amplitude and phase of the signal, shown
in the bottom left and right plots, respectively~\cite{kim}. The phase incursion along the waveform is minimised by adjusting the frequency of the digital LO to reduce the effects of the trigger jitter. The plots in Figure~\ref{fig:figure3} (b) show the real, ``in-phase'' ($I$), and imaginary, ``quadrature'' ($Q$), components of the extracted phasor. For position calculation, the phasor is sampled at a single time $t_{\rm s}$, roughly one filter length after the amplitude peak.
\begin{equation}
I_{\rm s}+jQ_{\rm s} = g\frac{V_{\rm d}(t=t_{\rm s,d})}{V_{\rm r}(t=t_{\rm s,r})} \, ,
\end{equation}
where the phasor $g$ accounts for any differences between the dipole and reference processing.
The phasor $I+jQ$ needs to be rotated by an angle $\theta_{IQ}$, $I^{\prime}+jQ^{\prime} = e^{j\theta_{IQ}}(I+jQ)$, so that its in-phase component $I^{\prime}$ is proportional to the position and the quadrature component $Q^{\prime}$ only contains the angle and tilt information. The required rotation of the
$IQ$ plane is measured during the calibration when a significant position variation is introduced to reduce the effect of the angular jitter.
The position scale $S$ for converting $I^{\prime}$ into position is measured by offsetting the CBPMs by a known amount using the mover or an orbit bump.
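The processing chain described above can be summarised in a short numerical sketch. The following Python fragment is an illustration only, not the actual ATF2 analysis code; the variable names, the single moving-average low-pass filter, and the folding of the reference normalisation into one complex gain \texttt{g} are our simplifying assumptions.
\begin{verbatim}
import numpy as np

def ddc_position(raw, fs, f_lo, t_s, g, theta_iq, scale, n_filt=64):
    """Digital down-conversion of one pedestal-subtracted waveform.

    fs       -- sampling frequency [Hz]
    f_lo     -- digital local-oscillator frequency [Hz]
    t_s      -- sample index at which the phasor is taken
    g        -- complex gain; here it also absorbs the reference
                phasor V_r(t_s) for brevity
    theta_iq -- IQ rotation angle from calibration [rad]
    scale    -- position scale S from calibration
    """
    t = np.arange(len(raw)) / fs
    # Mix with the digital LO so the dipole signal lands near DC.
    mixed = raw * np.exp(-2j * np.pi * f_lo * t)
    # Moving-average low-pass filter suppresses the sum-frequency term.
    kernel = np.ones(n_filt) / n_filt
    phasor = np.convolve(mixed, kernel, mode="same")
    # Sample the complex envelope once, roughly one filter length
    # after the amplitude peak, and normalise by the gain.
    iq = phasor[t_s] / g
    # Rotate so that the in-phase part I' carries the position.
    return scale * (np.exp(1j * theta_iq) * iq).real
\end{verbatim}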
\subsection{Principal Component Analysis}\label{sec:pca}
Methods of Model Independent Analysis (MIA) are used to extract relationships in noisy
data without assuming an underlying model. In accelerator physics, MIA has been used, for example, to analyse
complex beam dynamics~\cite{mia} and to extract beam position information~\cite{wang1}, and the number of applications is growing due to both the improving availability of numerical algorithms and the increasing complexity of accelerator systems and their requirements. A method of MIA that
does not require a specific machine model is therefore important for making sensitive
measurements at a future Linear Collider (LC), where the beam orbit changes with time~\cite{mia}.
So far, MIA has been applied to processed CBPM data to measure position
resolution~\cite{kim,walston} and to improve calibration parameter determination~\cite{frankie}, but not to raw CBPM waveform data.
Principal component analysis (PCA) is a method of MIA that is used to reduce the dimensionality of the data.
The basic idea of PCA is to transform the raw data into a basis which best explains the variation within the data.
If the data form a matrix {\bf d} with $N$ variables in columns and $M$ rows of repeated measurements,
it can be transformed using an orthogonal matrix $\bf W^{T}$ to ${\bf Y}$ given by
\begin{equation}
{\bf Y} = {\bf W^T} {\bf d}.
\label{eq:pcaTransform}
\end{equation}
The matrix ${\bf W^T}$ can be
considered as a rotation matrix that transforms the data into another linear vector space.
The vectors ${\bf W}_{i*}^{\bf T}$ form a set of $N$ basis vectors onto which the data are projected.
The PCA method determines the transformation matrix ${\bf W^T}$ in such a way that the variability of the original data is preserved and the covariance matrix of ${\bf Y}$ is diagonal.
The covariance matrix of the transformed data ${\bf Y}$ is calculated by
\begin{equation} \label{eq:pcaCov}
{\bf YY^T} = \left({\bf W^T} {\bf d}\right) \left({\bf W^T} {\bf d}\right)^{\bf T} = {\bf W^T} {\bf dd^T} {\bf W}.
\end{equation}
The data matrix ${\bf d}$ can be decomposed using singular values decomposition (SVD),
\begin{equation}\label{eq:svd}
{\bf d} = {\bf U}{\bf S}{\bf V^T},
\end{equation}
where ${\bf S}$ is a diagonal matrix, ${\bf U}$ and ${\bf V^T}$ are orthogonal matrices of size $M \times M$ and $N \times N$, respectively. Using Equations~\ref{eq:pcaCov} and~\ref{eq:svd} the covariance matrix of ${\bf d}$ is
\begin{equation}
{\bf d d^T} = {\bf U}{\bf S^2}{\bf U^T} = {\bf W}{\bf YY^T}{\bf W^T}.
\end{equation}
So the PCA transformation matrix ${\bf W}$ can be identified with the matrix ${\bf U}$ from the SVD of the data matrix ${\bf d}$, and the covariance matrix of the transformed data is diagonal and can be identified with the squared singular value matrix ${\bf S^2}$.
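This identification can be checked numerically. The following NumPy sketch (our illustration on random data, mirroring Equations~\ref{eq:pcaTransform}--\ref{eq:svd}) verifies that the covariance matrix of the transformed data is diagonal, with the squared singular values on the diagonal.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N = 32, 200
d = rng.standard_normal((M, N))          # data matrix d

# SVD of the data matrix: d = U S V^T
U, s, Vt = np.linalg.svd(d, full_matrices=False)

# Identify the PCA transformation matrix W with U, so Y = W^T d.
Y = U.T @ d

# The covariance of the transformed data is diagonal, equal to S^2.
assert np.allclose(Y @ Y.T, np.diag(s**2))
\end{verbatim}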
\section{Application of Principal Component Analysis to Cavity Beam Position Monitor data}\label{sec:pcabpm}
There are many software packages for calculating principal components; we used the Python implementation \texttt{scikit-learn}~\cite{scikit-learn}.
PCA was applied to calibration data, for which the dominant source of variability in the waveform data, and thus of variance, is the position-dependent signal. The reference cavity waveforms were processed in a similar manner to provide a beam charge measurement.
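A minimal usage sketch follows (the file name and array shapes are our assumptions): each row of the matrix passed to \texttt{fit} is one digitised calibration waveform, and the attributes \texttt{components\_} and \texttt{explained\_variance\_ratio\_} return the basis vectors and their EVR. Note that \texttt{scikit-learn} centres the data by subtracting the mean waveform before the decomposition.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# (n_pulses, n_samples) array of digitised calibration waveforms
waveforms = np.load("calibration_waveforms.npy")

pca = PCA(n_components=4)
pca.fit(waveforms)

components = pca.components_         # waveform-shaped basis vectors
evr = pca.explained_variance_ratio_  # fraction of variance explained

# Coefficients y_j of a pulse in the new basis (dot products).
y = pca.transform(waveforms[:1])
\end{verbatim}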
\subsection{Principal Component Analysis of Cavity Beam Position Monitor waveforms}
Data from a single CBPM for a single bunch machine pulse ${\bf d}$ is a vector of length $N$, where
$N$ is the number of digitiser samples.
The vector ${\bf d}$ may contain not only the wanted mode signal ${\bf d_{\rm d}}$, but also unwanted contributions
from other modes and various noise sources ${\bf d_{\rm u}}$, so
\begin{equation} \label{eq:sigSum}
{\bf d} = {\bf d_{\rm d}} + {\bf d_{\rm u}}.
\end{equation}
In a measurement of the amplitude of ${\bf d}$, the unwanted signals contribute a systematic offset or noise.
The vector ${\bf d}_{\rm d}$ varies depending on the beam position and charge, hence for calibration data it is expected to produce strong components in the PCA matrix, while noise sources will form higher-order components.
This is shown diagrammatically in Figure~\ref{fig:figure4}, which is an example where the signal has zero IF.
\begin{figure}[tbp]
\centering
\includegraphics[width=100mm]{figure4.pdf}
\caption{An example of PCA on a zero IF signal.}
\label{fig:figure4}
\end{figure}
Unless $I\/Q$ demodulation is applied to the cavity signals, each cavity direction has one output.
Each sampled signal is then expressed as a linear combination of basis vectors
\begin{equation}\label{eq:linearSum}
d(t_i) = \sum_{j} y_j {\bf W}_{ji}^{\bf T},
\end{equation}
\noindent where coefficient $y_j$ is the relative contribution of basis vector ${\bf W}_{j*}^{\bf T}$.
The coefficients can be found by taking the dot product of $d(t_i)$ with
${\bf W}_{j*}^{\bf T}$.
For CBPM data with nonzero IF, Equation~\ref{eq:linearSum} can be applied as follows:
\begin{equation}
V(t_i) = \sum_{j} y_j \hat{V}_{ji}^{\bf T},
\end{equation}
where $\hat{V}_{j*}^T$ are the principal components of the data matrix of $V(t_i)$ measurements.
For baseband-converted CBPM data, the analysis needs to be applied twice, once for the $I$ and once for $Q$ waveforms:
\begin{eqnarray}
I(t_i) & = & \sum_{j} y_j \hat{I}_{ji}^{\bf T}, \\
Q(t_i) & = & \sum_{j} y_j \hat{Q}_{ji}^{\bf T}.
\end{eqnarray}
\begin{figure}[htbp]
\centering
\includegraphics*[width=100mm]{figure5.pdf}
\caption{PCA components for nonzero IF data (left column) and their FFTs (right column).}
\label{fig:figure5}
\end{figure}
Figure~\ref{fig:figure5} shows the result of applying PCA to nonzero IF CBPM digitised data.
The components are sorted from top to bottom by the explained variance ratio (EVR)~\cite{scikit-learn}, a factor indicating what fraction of the variation in the source data a particular component accounts for. Along with each component, its fast Fourier transform FFT(${\bf W}_{j*}^{\bf T}$) is also plotted.
The first two
components match the expected shape of the dipole mode signal, and their frequency spectra peak at the expected frequency. The other components contain transients
and interference signals. The first two components are plotted again in Figure~\ref{fig:figure6}, where it can be clearly seen that, apart from the initial transient part, they are in a 90$^{\circ}$ phase relation. It should be noted that even though the components are orthogonal, they are not necessarily independent; hence the position information may still be contained in more than one component, and similar methods, such as Independent Component Analysis (ICA), may be considered for this application.
\begin{figure}[htbp]
\centering
\includegraphics*[width=90mm]{figure6.pdf}
\caption{The first two principal components of a CBPM signal plotted together.}
\label{fig:figure6}
\end{figure}
The EVR of the first two components depends on the interplay between the beam arrival time and the sampling clock and trigger, but the sum of their EVRs is close to 100\%.
This is also true for all of the triplet test CBPMs, as shown in Figure~\ref{fig:figure7}. The top row shows raw data for multiple beam passes, and the second and third rows show the first and second principal components, respectively, for all three CBPMs.
\begin{figure}[htb]\centering
\includegraphics*[width=120mm]{figure7.pdf}
\caption{PCA components for nonzero IF signals. Raw data (first row), the first component of the signal (second row), the second component
of the signal (last row). }
\label{fig:figure7}
\end{figure}
Since the first two components are dipole-like signals and are orthogonal, at least in the body of the waveform, they can be used as a basis for $I\/Q$ demodulation. Their dot products with the measured waveforms provide two values that can be turned into a position reading by applying the rotation and scale previously obtained by calibrating to a known offset, as in conventional processing.
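A sketch of this PCA-based demodulation (a minimal illustration with assumed names; \texttt{w1} and \texttt{w2} are the unit-norm principal components obtained from the calibration data):
\begin{verbatim}
import numpy as np

def pca_position(waveform, w1, w2, theta_iq, scale):
    """Position from one nonzero IF waveform via two PCA components."""
    # Dot products with the two orthogonal components act as an
    # I/Q demodulation of the dipole signal.
    i_val = np.dot(w1, waveform)
    q_val = np.dot(w2, waveform)
    # Rotate and scale exactly as in the conventional processing.
    return scale * (np.exp(1j * theta_iq) * (i_val + 1j * q_val)).real
\end{verbatim}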
\begin{figure}[htb]\centering
\includegraphics*[width=120mm]{figure8.pdf}
\caption{PCA components for $I$ and $Q$ signals. Raw $I$ data (first row), the first component of the $I$ signal (second row), raw $Q$ data (third row), the first component of the $Q$ signal (last row).}
\label{fig:figure8}
\end{figure}
Figure~\ref{fig:figure8} shows PCA applied to $I\/Q$ demodulated data.
In this case only one principal component for each of the $I$ and $Q$ signals is selected, as its EVR is already close to 100\%. The $I\/Q$ basis is already fixed by the LO, but the shape-based rejection of the unwanted signals still benefits the processing. Taking a dot product with the principal component essentially works as a convolution filter on the waveform, with the important difference that the shape of the filter is derived from the data itself and is known to capture the useful information -- in this case, the beam position.
\subsection{Calibration}
Figure~\ref{fig:figure9} shows an example calibration for all 3 CBPMs in the setup by applying a known offset using a mover and measuring the rotation $\theta_{IQ}$ of the position data in the $I\/Q$ plane and the position scale $S$.
The second row of Figure~\ref{fig:figure9} shows the $\theta_{IQ}$ measured by fitting $Q$ against $I$, and the third row shows a similar
measurement for the position scale $S$. The measured scales indicate a lower sensitivity of CBPM2 compared to the other two cavities.
This is reflected in the higher residual measured for this CBPM in the next section.
The results are very similar to those observed with conventional signal processing, and, in line with Equation~\ref{eq:dipole}, position data lies on a straight line in the $I\/Q$ space.
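The two calibration constants can be extracted with two straight-line fits. A sketch with assumed variable names, valid when the position variation dominates the angular term:
\begin{verbatim}
import numpy as np

def calibrate(i_vals, q_vals, positions):
    """theta_IQ and scale S from a mover (or orbit bump) scan."""
    # Fit Q against I; the slope tan(theta) fixes the IQ rotation.
    slope, _ = np.polyfit(i_vals, q_vals, 1)
    theta_iq = -np.arctan(slope)
    # Rotate so that I' carries the position information.
    i_rot = (np.exp(1j * theta_iq) * (i_vals + 1j * q_vals)).real
    # Fit I' against the known offsets; S converts I' into position.
    sensitivity, _ = np.polyfit(positions, i_rot, 1)
    return theta_iq, 1.0 / sensitivity
\end{verbatim}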
\begin{figure}[htb]
\centering
\includegraphics*[width=140mm]{figure9.pdf}
\caption{A mover calibration for $I\/Q$ data produced using the PCA method, vertical direction, zero-IF demodulation. $I\/Q$ data vs. pulse number (top row), $I\/Q$ data in the $I\/Q$ plane (second row), rotated data vs. mover position (third row), residual rotated $Q$ data (bottom row). The numbers
in the legends indicate the $\theta_{IQ}$ and $S$ values extracted from a linear fit.}
\label{fig:figure9}
\end{figure}
\section{Performance}\label{sec:results}
The performance of the PCA method was compared to more conventional methods such as
digital down-conversion (DDC) for nonzero IF signals, and simpler single-point readings with and without
additional filtering and waveform integration in the case of analogue demodulated $I\/Q$ signals~\cite{kimthesis}. The resolution
was taken as a basic performance indicator. It was measured as the root-mean-square (RMS) residual
between the measurement provided by one CBPM in the triplet and the prediction made by the two spectator CBPMs
as illustrated in Figure~\ref{fig:figure10}. Matrix inversion on the measured data using SVD
provided correlation coefficients for this prediction.
Figure~\ref{fig:figure10} shows the resolution data from the triplet CBPM system processed with the PCA based method.
Each column of the figure corresponds to a CBPM in the system. The first row of the plots shows the measured beam position data used for
the resolution measurement, the second row shows the predicted positions versus the measured ones, and the third row
shows the residuals between the measured and predicted positions.
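A sketch of this geometric resolution estimate (our notation): the probe reading is predicted as a linear combination of the two spectator readings plus a constant, with coefficients from a least-squares fit (\texttt{numpy.linalg.lstsq} uses an SVD internally), and the resolution is the RMS of the residual.
\begin{verbatim}
import numpy as np

def resolution(y_probe, y_spec1, y_spec2):
    """RMS residual of one CBPM against the other two."""
    A = np.column_stack([y_spec1, y_spec2, np.ones_like(y_spec1)])
    coeffs, *_ = np.linalg.lstsq(A, y_probe, rcond=None)
    residual = y_probe - A @ coeffs
    return np.sqrt(np.mean(residual**2))
\end{verbatim}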
\begin{figure}[htb]\centering
\includegraphics*[width=140mm]{figure10.pdf}
\caption{Plots for the resolution data processed using the PCA method.}
\label{fig:figure10}
\end{figure}
Attenuation had to be used during the calibration because of the high sensitivity, and hence inherently small offset range, of the system compared to the beam jitter at this location for the nominal gain.
Resolution data were taken both with and without 20~dB front-end attenuation (Table~\ref{tab:tab1}). The PCA-based processing gives results similar to conventional methods applied to the same data: about 100~nm and 4~nm with and without the attenuation, respectively. It should be noted that at maximum sensitivity, saturation of the processing electronics may cause a reduction of the position sensitivity at large offsets, resulting in an apparently better resolution measurement, so the absolute numbers should be treated as indicative. More detail can be found in~\cite{kimthesis}.
\begin{table}[hbt]
\centering
\caption{Summary of vertical resolution for zero IF signals with and without 20 dB attenuation.}
\begin{tabular}{l|l|l|l|l}
\hline \hline
& Single point & Filter & Integration & PCA \\ \hline
with 20 dB attenuation [nm] & 235 & 104 & 98 & 98 \\ \hline
without 20 dB attenuation [nm] & 5.5 & 3.6 & 3.7 & 4.3 \\
\hline \hline
\end{tabular}
\label{tab:tab1}
\end{table}
\section{Conclusions}\label{sec:conclusion}
A method of processing cavity beam position monitor data based on Principal Component Analysis (PCA) has been successfully tested on the example of a high resolution 3-cavity test system installed in the ATF2 beam line. System performance has been assessed by analysing the residuals between the prediction made by 2 spectator CBPMs and the actual measurement obtained by the third one. A similar performance compared to other more conventional processing methods has been observed.
The most flexible method among the ones tested is the digital down-conversion -- it allows one to compensate (to a certain degree) for timing and temperature induced changes in the system. However, it requires 2 additional calibration parameters per cavity and direction to determine the beam position compared to the PCA-based processing:
the digital LO frequency and the sampling point.
Even for a small test system of 3 cavities plus a reference this means 16 parameters, $4\,({\rm cavities}) \times 2\,({\rm parameters}) \times 2\,({\rm directions})$, which require additional beam data and effort. The PCA technique, on the other hand, simplifies the whole process of processing and calibration, which is an important feature in the context of large systems, such as future Linear Colliders. This comes with reduced flexibility and most probably a higher sensitivity to long-term effects, such as timing drifts. However, the long-term stability of this processing still needs to be understood, and some of the experience with conventional methods may be transferred to this new technique.
There are other methods similar to PCA for constructing a signal basis set that may be suitable for CBPM processing. These include independent
component analysis (ICA) and its variants. Also, a basis set can be generated including the known mover positions
or beam orbit offsets, so that instead of maximising the variance, a least squares problem is solved.
\acknowledgments
We would like to express our gratitude to all the operators, collaborators, and support staff in the ATF2 group.
\section{Introduction}
In this paper we give a topological viewpoint for the index and its localization phenomena of elliptic operators on certain fiber bundles using the notion of the joint spectral flow, which is a generalization of that of spectral flow introduced by Atiyah-Patodi-Singer~\cite{AtiyahPatodiSinger1976}. It has various generalizations, for example, higher spectral flow given by Dai-Zhang~\cite{DaiZhang1998}, and noncommutative spectral flow by Leichtnam-Piazza~\cite{LeichtnamPiazza2003} and Wahl~\cite{Wahl2007}. However, what we introduce here is a completely different, new generalization.
The spectral flow for a one-parameter family of self-adjoint operators is an integer counting the number of eigenvalues crossing over zero with multiplicity. In geometric situations, it is related to the index of some Fredholm operator as shown by Atiyah-Patodi-Singer~\cite{AtiyahPatodiSinger1976} as follows. For a one parameter family of self-adjoint elliptic differential operators $D_{t}$ of first order ($t\in S^{1}$) on $\Gamma (Y,E)$, where $Y$ is a closed manifold and $E$ is a hermitian vector bundle on $Y$, a first order differential operator $d/dt+D_{t}$ on $\Gamma (Y\times S^{1}, \pi^{*}E)$ is also elliptic and its index coincides with the spectral flow. Its proof is given essentially by the family's index theorem on the closed $1$-dimensional manifold $S^1$.
The joint spectral flow deals with an $n$-parameter family of $n$-tuples of mutually commuting self-adjoint operators and their joint spectra. We deal with continuous or smooth families of commuting Fredholm $n$-tuples, which are defined in Definition \ref{def:tuple}, and the ``Dirac operators'' associated with them. In the special case of $n=1$, it coincides with the usual spectral flow. We also relate it to the index of some elliptic operator, as in the case of the ordinary spectral flow.
\theoremstyle{plain}
\newtheorem*{fake1}{Theorem \ref{thm:jsftwisted}}
\begin{fake1}
Let $B$ be a closed $n$-dimensional $Spin^c$ manifold, $Z \to M \to B$ a smooth fiber bundle over $B$ such that the total space $M$ is also a $Spin^c$-manifold, $E$ a smooth complex vector bundle over $M$, $V$ an $n$-dimensional $Spin^c$ vector bundle over $B$. For a bundle map $\mbk{D_v(x)}$ from $V \setminus \mbk{0}$ to the bundle of fiberwise pseudodifferential operators $\Psi _f ^1 (M,E)$ satisfying Condition \ref{cond:commtwist}, the following formula holds.
\ma{\ind (\pi ^* \slashed{\mf{D}}_B + D(x))=\jsf (\mbk{D(x)}).}
\end{fake1}
The proof also works in a similar way to the original one. The crucial theorem introduced by Segal~\cite{Segal1977} is that the space of $n$-tuples of mutually commuting compact self-adjoint operators is a model for the spectrum of the connective $K$-group.
The joint spectral flow and its index formula implies some localization results. In \cite{Witten1982} E. Witten reinterpreted and reproved some localization formulas for the indices of Dirac operators from the viewpoint of supersymmetry. He deformed Dirac operators by adding potential terms coming from Morse functions or Killing vectors. Recently Fujita-Furuta-Yoshida~\cite{FujitaFurutaYoshida2010} used its infinite dimensional analogue to localize the Riemann-Roch numbers of certain completely integrable systems and their prequantum data on their Bohr-Sommerfeld fibers. In this case the indices of Dirac operators on fiber bundles localize on some special fibers instead of points. Here we relate them with our joint spectral flow and give a topological viewpoint for this analytic way of localization. A strong point of our method is that we give a precise way to compute the multiplicity at each point on which the index localizes. As a consequence we reprove and generalize theorems of Witten and Fujita-Furuta-Yoshida.
\newtheorem*{fake2}{Corollary \ref{cor:FFY}}
\begin{fake2}[Andersen~\cite{Andersen1997}, Fujita-Furuta-Yoshida~\cite{FujitaFurutaYoshida2010}]
Let $(X,\omega)$ be a symplectic manifold of dimension $2n$, $\mathbb{T}^n \to X \to B$ a Lagrangian fiber bundle, and $(L,\nabla ^L,h)$ its prequantum data. Then its Riemann-Roch number $RR(X,L)$ coincides with the number of Bohr-Sommerfeld fibers.
\end{fake2}
Finally we consider an operator-theoretic problem.
Unfortunately, there are not many examples of geometrically important operators (for example, Dirac operators) that can be represented as the Dirac operators associated with commuting Fredholm $n$-tuples coming from differential operators. Compared with the case in which only the principal symbols are ``decomposed'' as sums of commuting $n$-tuples (the easiest case, since it is realized whenever the tangent bundles are decomposed), the case in which the Dirac operators themselves are decomposed is much more difficult because it requires some integrability of the decompositions of the tangent bundles. However, the bounded operators $\slashed{D}(1+\slashed{D}^2)^{-1/2}$ associated with the Dirac operators $\slashed{D}$, and zeroth order pseudodifferential operators in general, are much easier to deal with than first order differential operators. We glue two commuting $n$-tuples of pseudodifferential operators by using topological methods to show that the family's indices are the complete obstructions to this decomposing property of families of Dirac operators. Here the theory of extensions of $C^*$-algebras and Cuntz's quasihomomorphisms play an important role.
\newtheorem*{fake3}{Theorem \ref{thm:decomp}}
\begin{fake3}
Let $Z \to M \to B$ be a fiber bundle. We assume that there are vector bundles $V_1,\ldots,V_l$ on $B$ and $E_1,\ldots,E_l$ on $M$ such that the vertical tangent bundle $T_VM$ is isomorphic to $\pi ^*V_1 \otimes E_1 \oplus \cdots \oplus \pi ^*V_l \otimes E_l$. Then its fiberwise Dirac operator $\slashed{D}_f^E$ is $n$-decomposable (in the sense of Definition \ref{def:decomp}) if and only if the family's index $\ind (\slashed{D}_f^E)$ is in the image of $K^n(B, B^{(n-1)}) \to K^n(B)$, or equivalently the image of $\tilde{k}^n(B) \to K^n(B)$.
\end{fake3}
This paper is organized as follows. In Section \ref{section:2}, we relate Segal's description of the connective $K$-theory with the theory of Fredholm operators. In Section \ref{section:3}, we introduce the notion of the joint spectral flow and prove its index formula. In Section \ref{section:4}, we apply the theory and reprove or generalize some classical facts. In Section \ref{section:5} we deal with a decomposing problem of Dirac operators and give an index theoretic complete obstruction.
\noindent {\bf Conventions.}
We use the following notations throughout this paper.
First, any topological space is assumed to be locally compact and Hausdorff unless otherwise noted (there are some exceptions, which are mentioned individually).
Second, we use some terms of topology as follows.
For a based space $(X,*)$, we denote by $\Sigma X$ the suspension $X \times S^1 /(X \times *_{S^1} \cup *_X \times S^1)$ and by $\Omega X$ the reduced loop space $\Map ((S^1,*),(X,*))$.
On the other hand, for an unbased space $X$ we denote by $\Sigma X$ (resp. $IX$) the product $X \times (0,1)$ (resp. $X \times [0,1]$). Similarly, for a $C^*$-algebra $A$ we denote by $\Sigma A$ (resp. $IA$) its suspension $A \otimes C_0(0,1)$ (resp. $A \otimes C[0,1]$). In particular, we simply write $\Sigma $ (resp. $I$) for the topological space $(0,1)$ or the $C^*$-algebra $C_0(0,1)$ (resp. $[0,1]$ or $C[0,1]$).
\noindent {\bf Acknowledgement.}
The author would like to thank his supervisor Professor Yasuyuki Kawahigashi for his support and encouragement. He also would like to thank his sub-supervisor Professor Mikio Furuta for suggesting the problem and several helpful comments. This work was supported by the Program for Leading Graduate Schools, MEXT, Japan.
\section{Fredholm picture of the connective K-theory}\label{section:2}
In this section we first summarize the notion of the connective $K$-theory and its relation to operator algebras according to \cite{Segal1977} and \cite{DadarlatNemethi1990}. Then we connect it with a model of the $K$-theory spectrum that is related to the space of Fredholm operators. Finally we generalize the theory for the twisted case.
It is fundamental in order to describe the notion of the joint spectral flow.
Let $\mbk{H^i}_{i \in \zahl}$ be a generalized cohomology theory. We say $\mbk{h_i}_{i \in \zahl}$ is the connective cohomology theory associated to $\mbk{H_i}$ if it is a generalized cohomology theory that satisfies the following two properties.
\begin{enumerate}
\item There is a canonical natural transformation $h^i \to H^i$ that induces an isomorphism $h^i(\pt) \to H^i(\pt)$ for $i \leq 0$.
\item We have $h^i(\pt)=0$ for $i > 0$.
\end{enumerate}
The (reduced) connective $K$-theory is the connective cohomology theory that is associated to the (reduced) $K$-theory.
Segal~\cite{Segal1977} gave an explicit realization of connective $K$-theory spectra by using methods of operator algebras.
For a pair of compact Hausdorff spaces $(X,A)$, we denote by $F(X,A)$ the configuration space with labels in finite dimensional subspaces of a fixed (separable infinite dimensional) Hilbert space. More precisely, an element of $F(X,A)$ is a pair $(S,\mbk{V_x}_{x \in S})$ where $S$ is a countable subset of $X \setminus A$ whose cluster points are all in $A$ and each $V_x$ is a nonzero finite dimensional subspace of a Hilbert space $\mc{H}$ such that $V_x$ and $V_y$ are orthogonal if $x \neq y$. It is a non-locally compact topological space with its canonical topology that satisfies the following.
\begin{enumerate}
\item When two sequences $\mbk{x_i}$, $\mbk{y_i}$ converge to the same point $z$ and $V_z$ is the limit of $\mbk{V_{i,x_i} \oplus V_{i,y_i}}$, the limit of $(\mbk{x_i,y_i}, \mbk{V_{i,x_i}, V_{i,y_i}})$ is $(\mbk{z}, \mbk{V_z})$.
\item When all cluster points of a sequence $\mbk{x_i}$ are in $A$, the limit of $(\mbk{x_i} , \mbk{V_{i,x_i}})$ is $(\emptyset , \emptyset)$.
\end{enumerate}
Then the following holds for this topological space.
\begin{prp}
Let $(X,A)$ be a pair of compact Hausdorff spaces. We assume that $X$ is connected, $A$ is path-connected, and $A$ is a neighborhood deformation retract in $X$. Then the space $F(X,A)$ is homotopy equivalent to its subspace $F_{\rm fin}(X,A):=\mbk{(S,\mbk{V_x}_{x \in S}) \in F(X,A); \# S<\infty}$ and a sequence $F_{\rm fin}(A,*) \to F_{\rm fin}(X,*) \to F_{\rm fin}(X,A)$ is a quasifibration. Here morphisms are induced by continuous maps $(A,*) \to (X,*) \to (X,A)$. Hence the map $F(X,A) \to \Omega F(SX,SA)$ induces a homotopy equivalence.
\end{prp}
\begin{proof}
See Proposition 1.3 of Segal~\cite{Segal1977} and Section 3.1 of D{\u{a}}d{\u{a}}rlat-N{\'e}methi~\cite{DadarlatNemethi1990}.
\end{proof}
This means that $\mbk{F(S^n,*)}_{n=1,2,\ldots}$ is an $\Omega$-spectrum and hence homotopy classes of continuous maps to it realize some cohomology theory.
Now we introduce two other non-locally compact spaces. First, let $F_n(\mc{H})$ be the space of $(n+1)$-tuples $\mbk{T_i}_{i=0,\ldots,n}$ of self-adjoint bounded operators on $\mc H$ that satisfy the following.
\begin{enumerate}
\item The operator $T^2:=\sum T_i^2$ is equal to the identity.
\item The operator $T_i$ commutes with $T_j$ for any $i$ and $j$.
\item The operators $T_i$ ($i=1,2,\ldots,n$) and $T_0-1$ are compact.
\end{enumerate}
Then there is a canonical one-to-one correspondence between $F_n(\mc{H})$ and $F(S^n,*)$. If we have an element $(S, \mbk{V_x})$ of $F(S^n,*)$, then we obtain a tuple $(T_0,\ldots,T_n)$ by setting $T_i:=\sum _{x \in S} x_i P_{V_x}$ on $\bigoplus _{x \in S} V_x$, and letting the tuple take the basepoint value $(1,0,\ldots,0)$ on the orthogonal complement; here $P_V$ is the orthogonal projection onto $V$ and $x_i$ the $i$-th coordinate of $x$ in $S^n \subset \real ^{n+1}$. Conversely, if we have an element $(T_0,\ldots,T_n)$ in $F_n(\mc{H})$, then we obtain data of joint spectra and the eigenspaces because the $T_i$ are simultaneously diagonalizable. Actually this correspondence is a homeomorphism.
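For instance (a one-point configuration, spelled out here for concreteness), the element $(\mbk{x},\mbk{V_x})$ of $F(S^n,*)$ with a single point $x \in S^n \setminus \mbk{*}$ and basepoint $*=(1,0,\ldots,0)$ corresponds to the tuple
$$T_0=x_0P_{V_x}+(1-P_{V_x}), \qquad T_i=x_iP_{V_x} \quad (i=1,\ldots,n).$$
One checks directly that the $T_i$ mutually commute, that $\sum T_i^2=|x|^2P_{V_x}+(1-P_{V_x})=1$, and that $T_0-1=(x_0-1)P_{V_x}$ and $T_i=x_iP_{V_x}$ ($i \geq 1$) are of finite rank, so all three conditions hold.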
On the other hand, if we have an element $(T_0,\ldots, T_n) \in F_n (\mc{H})$, then there is a canonical inclusion from the spectrum of the abelian $C^*$-algebra $C^*(T_0,\ldots,T_n)$ into the unit sphere of $\real ^{n+1}$ according to conditions 1 and 2. It gives a $*$-homomorphism $C(S^n) \to \mbb{B}(\mc{H})$ sending $x_i$ to $T_i$. Now by virtue of condition 3, the image of its restriction to $C_0(S^n \setminus \mbk{*})$ is in the compact operator algebra $\mbb{K}=\mbb{K}(\mc{H})$. Conversely, if we have a $*$-homomorphism $\varphi : C_0(\real ^n) \to \mbb{K}$, then we obtain an element $(\varphi (x_0), \varphi (x_1),\ldots, \varphi (x_n))$ in $F_n(\mc{H})$, where $\varphi(x_0)$ makes sense via the canonical extension of $\varphi$ to the unitization, since $x_0$ does not vanish at the basepoint. This gives a canonical one-to-one correspondence between $F_n(\mc{H})$ and $\Hom (C_0(\real ^n),\mbb{K})$. This correspondence is also a homeomorphism if we equip $\Hom (C_0(\real ^n) , \mbb{K})$ with the strong topology. Moreover, a continuous family of $*$-homomorphisms $\mbk{\varphi _x}_{x \in X}$ parametrized by a finite CW-complex $X$ is regarded as a $*$-homomorphism $C_0(\real ^n) \to C(X) \otimes \mbb{K} \cong C(X,\mbb{K})$.
\begin{prp}[\cite{Segal1977}, \cite{DadarlatNemethi1990}]
Let $X$ be a finite CW-complex and $n \in \zahl _{>0}$. The three sets
\begin{enumerate}
\item $[X, F(S^n,*)]$
\item $[X, F_n(\mc{H})]$
\item $[C_0(\real ^n) , C(X) \otimes \mbb{K}]$
\end{enumerate}
are canonically mutually isomorphic and form the $n$-th reduced connective $K$-group $\tilde{k}^n(X)$.
Here the first two are the sets of homotopy classes of continuous maps and the third is that of homotopy classes of $*$-homomorphisms.
\end{prp}
\begin{proof}
We have already seen that these three sets are canonically isomorphic and $\mbk{F(S^n,*)}_{n=1,2,\ldots}$ is an $\Omega$-spectrum. The desired canonical natural transform is a canonical map $\Phi$ from $[C_0(\real ^n), C(X) \otimes \mbb{K}]$ to $KK(C_0(\real^n),C(X) \otimes \mbb{K}) \cong K^n(X)$ that sends a homotopy class $[\varphi ]$ to $[\mc{H}\otimes C(X) , \varphi , 0]$. Hence we only have to compute $\pi _i (F(S^n,*))$. First for a general $C^*$-algebra $A$, $[C_0(\real) , A]$ is isomorphic to $K_1(A)$ because a $*$-homomorphism from $C_0(\real)$ to $A$ is determined by a unitary operator. Hence $[X,F(S^1,*)]$ is isomorphic to $K^1(X)$. In the case $i \geq n$ we have $\pi _i (F(S^n,*)) \cong \pi _{i-n+1} (F(S^1,*)) \cong K^1(\real ^{i-n+1})$ that is $\zahl$ when $i-n$ is even and $0$ when $i-n$ is odd. In the case $i < n $ we have $\pi _i(F(S^n,*)) \cong \pi _0 (F(S^{n-i},*)) \cong 0$ because $F(S^{n-i},*)$ is connected.
\end{proof}
Next we relate this picture to a realization of $K$-theory that uses the space of Fredholm operators.
Atiyah gave a realization of the $K$-theory spectrum in \cite{AtiyahSinger1969}. Let $\Cliff _n$ be the complex Clifford algebra associated to $\comp ^n$ and its canonical inner product, $e_1,\ldots,e_n$ its canonical self-adjoint generators with relations $e_ie_j+e_je_i =2\delta _{ij}$, and $\mc H$ a Hilbert space with a $\zahl /2$-grading and a $\zahl /2$-graded $\Cliff _n$-action $c$. Then the (non-locally compact) space of odd bounded self-adjoint Fredholm operators $T$ that commute with the $\Cliff _n$-action (and, if $n$ is odd, such that $c(e_1) \cdots c(e_n)T|_{\mc{H}^0}$ is neither positive nor negative definite modulo compact operators) represents the $K^{-n}$-functor.
Similarly, we represent the $K^n$-functor for $n > 0$ as some space of Fredholm operators. For an ungraded separable infinite dimensional Hilbert space $\mc{H}$, let $\mc{H}_{\Cliff _n}$ be a $\zahl /2$-graded Hilbert $\Cliff _n$-module $\mc{H} \hat{\otimes} \Cliff_n$. Now for $n>0$, let $\mc F_{\Cliff _n}(\mc H)$ be the (non-locally compact) space of odd bounded self-adjoint operators in $\mbb{B}(\mc{H}_{\Cliff _n})$ that is Fredholm, that is, invertible modulo $\mbb{K}(\mc{H}_{\Cliff _n})$. Moreover, if $n$ is odd, we additionally assume that $c(e_1) \cdots c(e_n)T|_{\mc{H}\otimes \Cliff_n^0}$ is neither positive nor negative definite. Then it represents the $K^n$-functor. It can be understood from the viewpoint of Kasparov's $KK$-theory (or bivariant $K$-theory) \cite{Kasparov1980}. As is well-known, the $KK$-theory has various formulations and the original one of Kasparov is deeply related to the theory of Fredholm operators and their indices (see also \cite{Blackadar1998}). For separable $\zahl /2$-graded $C^*$-algebras $A$ and $B$, a cycle in $KK(A,B)$ is of the form $[E,\varphi , F]$ where $E$ is a countably generated $\zahl /2$-graded Hilbert $B$-module, $\varphi $ is a $*$-homomorphism from $A$ to $\mbb{B}(E)$, and $F$ is an odd self-adjoint `Fredholm' operator on $E$ relative to $A$. More precisely, $F$ is an operator in $\mbb{B}(E)$ that satisfies $[\varphi(a) ,F]$, $\varphi(a)(F^2-1)$, and $\varphi(a)(F-F^*)$ are in $\mbb{K}(E)$ for any $a \in A$.
A continuous family (in the norm topology) of $\Cliff _n$-equivariant odd Fredholm operators $F(x)$ ($x \in X$) gives a cycle $[\mc{H}_{\Cliff _n} \hat{\otimes} C(X) , 1 , F]$ in $KK(\comp , C(X) \hat{\otimes } \Cliff _n )$ by regarding $F$ as an element in $\mbb{B}(\mc H_{\Cliff_n} \otimes C(X))$ by pointwise multiplication. Because this $KK$-cycle depends only on its homotopy class,
this correspondence gives a map from $[X,\mc F_{\Cliff _n}(\mc{H})]$ to $KK(\comp, C_0(X) \hat{\otimes} \Cliff _n)$. We can see that it is actually an isomorphism by using the equivalence relations called the operator homotopy \cite{Kasparov1980}. Here we do not have to care for addition of degenerate cycles by virtue of the Kasparov stabilization theorem \cite{Kasparov1980b}.
Now we have shown that there is some operator-theoretic description of the connective $K$-theory, but it is not consistent to the Fredholm picture of $KK$-theory and our construction of the $K$-theory spectrum. Next we see that these two are canonically related.
Both of the two groups $KK(C_0(\real ^n),C(X))$ and $KK(\comp , C(X) \hat{\otimes} \Cliff _n)$ are isomorphic to $K^n(X)$. The canonical isomorphism $KK(C_0(\real ^n),C(X)) \to KK(\comp , C(X) \hat{\otimes} \Cliff _n)$ is given by taking the Kasparov product \cite{Kasparov1980} with the canonical generator of $KK(\comp, C_0(\real ^n) \otimes \Cliff _n)$ from the left. This generator also has many descriptions, and here we use the one in \cite{Kasparov1980}. It is based on the Fredholm picture and is of the form $[C_0(\real ^n) \hat{\otimes} \Cliff _n , 1 , C]$ where $C:= \sum c_i x_i (1+|x|^2) ^{-1/2}$. Here $c_i:=c(e_i)$ is the left multiplication of $e_i$ on $\Cliff _n$, which is a $\Cliff _n$-module by the right multiplication.
Now we apply it for cycles that come from $\varphi \in \Hom (C_0(\real ^n),C(X) \otimes \mbb{K})$. We then have
\ma{&[C_0(\real ^n)\hat{\otimes} \Cliff _n , 1 , C] \otimes_{C_0(\real ^n)} [\mc{H} \hat{\otimes} C(X), \varphi , 0]\\
=&\lbk{C_0(\real ^n) \otimes _{\varphi} (\mc{H} \otimes C(X)) \hat{\otimes} \Cliff _n , 1 , C \otimes _{\varphi} \id }\\
=&\lbk{\mc{E}(\varphi) \hat{\otimes} \Cliff _n, 1 , \sum c_i T_i}.
}
Here we denote by $\mc{E}(\varphi)$ the Hilbert $C(X)$-module $\mbk{\overline{\varphi_x(C_0(\real ^n))\mc{H}}}_{x \in X}$ (more precisely, the subspace of $C(X) \otimes \mc{H}$ that consists of $\mc{H}$-valued functions on $X$ whose evaluations at $x$ are in $\overline{\varphi _x(C_0(\real ^n))\mc{H}}$). A $*$-homomorphism $\varphi : C_0(\real ^n) \to \mbb{B}(\mc{E}(\varphi))$ uniquely extends to $\tilde \varphi :C_b(\real ^n) \to \mbb{B}(\mc{E}(\varphi))$ because $\varphi $ is nondegenerate onto $\mbb{B}(\mc{E}(\varphi))$ (see Section 5 of \cite{Lance1995}). It is defined by the spectral measure and Borel functional calculus on each $\mc{H}_x$. We set $T_i := \tilde{\varphi} (x_i(1+|x|^2)^{-1/2})$.
This can be regarded as the Fredholm picture of connective $K$-theory. However, unfortunately it is not useful for our purpose because $\mc{E}(\varphi)$ may not be locally trivial and hence not a bundle of Hilbert spaces in general. Nonetheless, if a cycle in $\tilde{k}^n(X)$ has a good origin, then we have a better description for it. Actually cycles that arise in geometric contexts have that good origin and they are of our main interest.
\begin{defn}\label{def:tuple}
\item[$\bullet$] An $n$-tuple of bounded self-adjoint operators $(T_1, \ldots, T_n)$ on $\mc H$ is called a {\it bounded commuting Fredholm $n$-tuple} if it satisfies the following.
\begin{enumerate}
\item The operator $T^2:=\sum T_i^2$ is in $1+ \mbb{K}(\mc{H})$.
\item The operator $T_i$ commutes with $T_j$ for any $i$ and $j$.
\end{enumerate}
We denote by $\mc{F}_n(\mc{H})$ the set of bounded commuting Fredholm $n$-tuples equipped with the norm topology.
\item[$\bullet$] An $n$-tuple of unbounded self-adjoint operators $(D_1, \ldots, D_n)$ on $\mc H$ is an {\it unbounded commuting Fredholm $n$-tuple} if it satisfies the following.
\begin{enumerate}
\item The operator $D^2:=\sum D_i^2$ is densely defined, Fredholm, and has compact resolvents.
\item The operator $D_i$ commutes with $D_j$ for any $i$ and $j$ on $\dom (D^2)^2$.
\end{enumerate}
We denote the set of unbounded commuting Fredholm $n$-tuples by $\ms{F}_n(\mc{H})$. It is equipped with the strongest topology so that the map $(D^1,\ldots,D^n) \mapsto (D^1 (1+D^2)^{-1/2},\ldots,D^n(1+D^2)^{-1/2})$ is continuous. This definition is an analogue of the Riesz topology of the space of self-adjoint operators.
\item[$\bullet$] For a bounded (resp. unbounded) commuting Fredholm $n$-tuple $(T_1 , \ldots , T_n)$ (resp. $(D_1 ,\ldots, D_n)$), we say that an odd self-adjoint operator $T:=c_1 T_1 + \cdots +c_n T_n$ on $\mc{H} \hat \otimes \Cliff _n$ (resp. $D:=c_1 D_1 + \cdots +c_nD_n$ with domain $\dom ((D^2)^{1/2})$) is the {\it Dirac operator} associated with $(T_1,\ldots ,T_n)$. For simplicity of notation, hereafter we use the same letter $T$ (resp. $D$) for commuting Fredholm $n$-tuples and the Dirac operators associated with them.
\end{defn}
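We record the elementary computation behind this terminology. Since the $c_i$ mutually anticommute with $c_i^2=1$ and act on the $\Cliff _n$ factor, while the $T_i$ mutually commute,
\ma{D^2=\sum_i c_i^2 T_i^2 + \sum_{i<j} c_ic_j(T_iT_j-T_jT_i)=\sum_i T_i^2=T^2 \hat\otimes 1,}
so the Dirac operator associated with a bounded commuting Fredholm $n$-tuple is Fredholm precisely by condition 1, and similarly in the unbounded case, where $(1+D^2)^{-1}$ is compact.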
The continuous map $(\overline{\mathbb{D}^n},\partial \mathbb{D}^n) \to (S^n,*)$ that collapses the boundary, more precisely of the form
$$(T_1,\ldots,T_n) \mapsto (2T^2-1,2(1-T^2)^{1/2}T_1,\ldots,2(1-T^2)^{1/2}T_n),$$
which is the unique continuous extension of the composition of the canonical homeomorphism between $\mathbb{D}^n$ and $\real ^n$ with the inverse stereographic projection, induces, by functional calculus, a continuous map $\iota : \mc{F}_n(\mc{H}) \to F_n(\mc{H})$ (and likewise on $\ms{F}_n(\mc{H})$, by the definition of its topology). On the other hand, for $(T_1,\ldots,T_n) \in \mc{F}_n(\mc{H})$, the Dirac operator $T$ is in $\mc{F}_{\Cliff _n}(\mc{H})$. This correspondence gives a map from $[X,\mc{F}_n(\mc{H})]$ to $[X,\mc{F}_{\Cliff_n}(\mc{H})] \cong KK(\comp , C(X) \hat\otimes \Cliff _n)$, which means, in a geometric context, taking the index bundle with its $\Cliff _n$-module structure for the continuous family of Dirac operators associated with $(T_1 , \ldots , T_n)$. Hence we denote it by $\ind$.
\begin{thm}\label{thm:hom-op}
The following diagram commutes.
\[
\xymatrix{
[X,\mc{F}_n(\mc{H})] \ar[d]_\iota \ar[r]^{\ind \ \ \ \ \ \ \ } & KK(\comp , C(X)\hat \otimes \Cliff _n) \\
[X,F_n(\mc{H})] \ar[r]^{\Phi \ \ \ \ \ \ \ } & KK(C_0(\real ^n) ,C(X))\ar[u]^{\rotatebox{90}{$\sim$}}.
}
\]
\end{thm}
\begin{proof}
Let $\mbk{T(x)}_{x \in X}:=\mbk{(T_1(x),\ldots,T_n(x))}_{x \in X}$ be a continuous family of bounded commuting Fredholm $n$-tuples and $\varphi ^{T}$ be its image by $\iota$. Then $\Phi \circ \iota [\mbk{T(x)}]$ is of the form $\lbk{\mc{E}(\varphi ^{T}) \hat \otimes \Cliff_n, 1, T}$. Now we give a homotopy connecting $[\ind T] =\lbk{(\mc{H} \otimes C(X)) \hat{\otimes} \Cliff _n,1, T(x)}$ and $\lbk{\mc{E}(\varphi ^{T}) \hat{\otimes} \Cliff_n, 1, T(x)}$ directly. It is given by a Kasparov $\comp$-$IC(X)$-bimodule
$$\lbk{\mc{E}(\varphi ^{T} ) \oplus_{{\rm ev} _0} (\mc{H}_{C(X)} \otimes I), 1 , T}$$
where $\mc{E}(\varphi ^T ) \oplus_{{\rm ev} _0} (\mc{H}_{C(X)} \otimes I) :=\mbk{(x, f) \in \mc{E}(\varphi ^T ) \oplus (\mc{H}_{C(X)} \otimes I) \mid f(0)=x}$.
\end{proof}
\begin{remk}
For a general locally compact CW-complex we have an analogue of the $K$-theory with compact support. The $K$-group with compact support $K_{\rm cpt}^n(X)$ is defined as the kernel of the canonical morphism $K^n(X^+) \to K^n(x_0)$ where $X^+$ is the one-point compactification of $X$ and $\mbk{x_0}=X^+ \setminus X$. It coincides with the $K$-group of the nonunital $C^*$-algebra $C_0(X)$ by definition. Similarly we write $k^n_{\rm cpt}(X)$ for the kernel of $k^n(X^+) \to k^n(x_0)$. When $X^+$ has a relatively compact deformation retract of $\mbk{x_0}$, $\tilde{k}^n_{\rm cpt}(X)$ is isomorphic to the set of homotopy classes of compactly supported maps from $X$ to $F(S^n,*)$, $F_n(\mc{H})$, or $\Hom (C_0(\real ^n), \mbb{K})$. Hence it is also isomorphic to $\Hom(C_0(\real ^n),C_0(X) \otimes \mbb{K})$.
In terms of our Fredholm picture, a continuous family of Fredholm $n$-tuples on $X$ which is {\it bounded below} by some $\kappa >0$ (i.e. $D(x)^2 \geq \kappa$) outside some compact subset $K \subset X$ determines a $k^n$-cycle on $X$.
For simplicity we write $\tilde{k}(X)$ instead of $\tilde{k}_{\rm cpt}(X)$ in this paper.
\end{remk}
\begin{remk}
The formulation above is compatible with the product of cohomology theories. We define the product of continuous families of bounded commuting Fredholm $n$-tuples $T(x)=(T_1(x), \ldots , T_n(x))$ in $\Map (X, \mc{F}_n(\mc{H}))$ and $m$-tuples $S(x)=(S_1(x),\ldots,S_m(x))$ in $\Map (X, \mc{F}_m(\mc{H}'))$ as follows.
\ma{T(x)\times S(x)=&(T_1(x) , \ldots , T_n(x)) \times (S_1(x),\ldots,S_m(x)) \\
:= &(T_1(x) \otimes 1 ,\ldots, T_n(x) \otimes 1, 1 \otimes S_1(x),\ldots,1 \otimes S_m(x))\\
& \in \Map (X,\mc{F}_{n+m}(\mc{H} \otimes \mc{H}')).}
Then, up to homotopy, it depends only on the homotopy classes of $T(x)$ and $S(x)$. Consequently $[\mbk{T(x)}] \cup [\mbk{S(x)}] :=[\mbk{T(x) \times S(x)}]$ gives a well-defined product $[X, \mc{F}_n(\mc{H})] \times [X , \mc{F}_m(\mc{H})] \to [X,\mc{F}_{n+m}(\mc{H})]$ that is compatible with the product of connective $K$-groups, which is induced from the canonical map $(S^n,*) \times (S^m,*) \to (S^n ,*) \wedge (S^m,*) \cong (S^{n+m},*)$. By a similar argument we can define the product for unbounded commuting Fredholm $n$-tuples.
\end{remk}
\subsection*{Twisted case}
Next, we generalize the above theory for the twisted connective $K$-theory. In the above argument, we have used the action of the Clifford algebra $\Cliff _n$ as the coefficients to construct a Dirac operator associated with a family of commuting Fredholm $n$-tuples. Now we regard it as the Clifford algebra bundle $\Cliff (\underline{\comp ^n})$ associated with the trivial bundle. We generalize the notion of the commuting Fredholm $n$-tuple and apply the general Clifford algebra bundles $\Cliff (V _\comp)$ associated with $Spin^c$ vector bundles $V$ for the coefficients of the Dirac operators associated with them.
We consider the canonical actions of $GL(n;\real)$ on the spaces $F(S^{n},*)$, $F_{n}(\mc{H})$, and $\Hom (C_0(\real ^{n}),C(X) \otimes \mbb{K})$. For example, on $F_{n}(\mc{H})$ it is of the form
$$g \cdot (T_0,T_1,\ldots ,T_{n}):=\bk{T_0, \sum_j g_{1j}T_j,\ldots,\sum_j g_{nj}T_j}.$$
Let $V$ be a real vector bundle over $X$. We denote a fiber bundle $GL(V) \times _{GL(n;\real)} F(S^n,*)$ (resp. $F_n(\mc{H})$) by $F_V$ (resp. $F_{V}(\mc{H})$). Similarly, $GL(n,\real )$ acts on the space of bounded (resp. unbounded) commuting Fredholm $n$-tuples $\mc{F}_{n}(\mc{H})$ (resp. $\ms{F}_{n}(\mc{H})$) and we denote by $\mc{F}_{V}(\mc{H})$ (resp. $\ms{F}_{V}(\mc{H})$) the corresponding fiber bundle.
\begin{defn}
A $V$-twisted family of bounded (resp. unbounded) commuting Fredholm $n$-tuples is a continuous section $T=T(x) \in \Gamma (X, \mc{F}_{V}(\mc{H}))$ (resp. $\Gamma (X , \ms {F}_{V}(\mc{H}))$).
\end{defn}
In a similar way to the above argument, the space of continuous sections $\Gamma \Cliff (V) =\Gamma (X, \Cliff(V))$ is a $C^*$-algebra, and a continuous section $T \in \Gamma (X,\mc{F}_{V}(\mc{H}))$ defines a Kasparov $\comp$-$\Gamma \Cliff (V)$-bimodule
$$\lbk{\mc{H} \hat{\otimes} \Cliff (V), 1, c(e_1)T_{e_1}(x)+\cdots +c(e_n)T_{e_n}(x)},$$
which is independent of the choice of an orthonormal basis $\mbk{e_1,\ldots ,e_n}$ of $V_x$. Therefore we obtain a map $ \pi _0 (\Gamma (X,\mc{F}_{V}(\mc{H}))) \to KK(\comp , \Gamma \Cliff (V))$.
Moreover, the following hold.
\begin{prp}
Let $X$ be a finite CW-complex and $V$ a real vector bundle. The three sets
\begin{enumerate}
\item $\Gamma(X, F_V)$
\item $\Gamma(X, F_{V}(\mc{H}))$
\item $\Hom _{C(X)}(C_0(V) , C(X) \otimes \mbb{K})$
\end{enumerate}
are canonically mutually homeomorphic and their connected components form the twisted reduced connective $K$-group associated with the principal bundle $GL(V) \times _{GL(n,\real)}\mc{G}_k^{\rm mod}$, which we denote by $\tilde{k}^{V}(X)$ (see Section 3 of~\cite{AtiyahSegal2004}). Here $\Hom _{C(X)}(C_0(V) , C(X) \otimes \mbb{K})$ is the set of $C(X)$-homomorphisms, that is, $*$-homomorphisms that are compatible with the $C(X)$-module structures.
\end{prp}
\begin{thm}
Let $X$ be a finite CW-complex. Then the following diagram commutes.
\[
\xymatrix{
\pi _0( \Gamma (X,\mc{F}_{V}(\mc{H}))) \ar[d]_\iota \ar[r]^{\ind \ \ \ \ \ \ \ } & KK(\comp , \Gamma \Cliff(V)) \\
\pi _0 (\Gamma (X,F_{V}(\mc{H}))) \ar[r]^{\Phi \ \ \ \ \ \ \ } & \mc{R}KK(X;C_0(V) ,C(X))\ar[u]^{\rotatebox{90}{$\sim$}}.
}
\]
Here $\mc{R}KK(X;C_0(V),C(X))$ is the representable $KK$-group~\cite{Kasparov1988}.
\end{thm}
Just as in the case of $K$-theory, the Thom isomorphism holds for the twisted connective $K$-theory.
\begin{prp}
The following isomorphism holds.
$$k^{W} (X) \cong k^{\pi^*V \oplus \pi ^*W}(V)$$
\end{prp}
\begin{proof}
Let $F$ be a closed subspace of $X$ and denote by $V_F$ the restriction $V|_F$ of the vector bundle $V$. Then there is a morphism
\ma{\Hom _{C(F)} (C_0(W_F),C(F)\otimes \mbb{K}) &\ral \Hom_{C_0(V_F)} (C_0(\pi^*(V \oplus W)_{V_F}),C_0(V_F)\otimes \mbb{K})\\
\varphi &\longmapsto \id _{V} \otimes \varphi ,}
which is an isomorphism if $V$ is trivial on $F$, and functorial with respect to inclusions. The Mayer-Vietoris exact sequence implies the global isomorphism.
\end{proof}
In particular, combining with the Thom isomorphism of the connective $K$-theory, we obtain the fact that the twist associated with $V$ is trivial if $V$ has a $Spin^c$ structure.
\section{The joint spectral flow}\label{section:3}
Now we give the precise definition of the joint spectral flow by using the notions introduced in Section \ref{section:2}. Next we prove an index theorem that generalizes the spectral flow index theorem of Atiyah-Patodi-Singer~\cite{AtiyahPatodiSinger1976}. Finally we generalize it for the case in which coefficients $c_i$ are globally twisted by a $Spin^c$ vector bundle.
\subsection{Definitions and an index theorem}\label{section:3.1}
In the previous section we have seen that $F(S^n,*)$ represents the connective $K$-theory. Now we introduce another configuration space $P(X,A)$ with labels in positive integers on $X$ relative to $A$. More precisely, an element of $P(X,A)$ is a pair $(S,\mbk{n_x}_{x \in S})$ where $S$ is a countable subset of $X \setminus A$ whose cluster points are all in $A$ and each $n_x$ is a positive integer. Its topology is introduced in the same way as that of $F(X,A)$. Then $P(S^n,*)$ is canonically homotopy equivalent to the infinite symmetric product of $(S^n,*)$, which is a model of the Eilenberg-MacLane space $K(\zahl , n)$ by virtue of the Dold-Thom theorem~\cite{DoldThom1958}. There is a canonical continuous map $j$ from $F(S^n,*)$ to $P(S^n,*)$ ``forgetting'' data about vector spaces except for their dimensions, which is more precisely given by
$$(S,\mbk{V_x}_{x \in S}) \longmapsto (S,\mbk{\dim V_x}_{x \in S}).$$
In the viewpoint of commuting Fredholm $n$-tuples it forgets their eigenspaces and keeps only their joint spectra with multiplicity.
It induces a group homomorphism
$$j_*: \tilde{k}^n(X) \ral H^n(X; \zahl).$$
Now we introduce the notion of the joint spectral flow.
\begin{defn}\label{def:jsf}
Let $X$ be an oriented closed manifold of dimension $n$. For a continuous family $\mbk{T(x)}=\mbk{(T_0(x),\ldots ,T_n(x))}_{x \in X}$ of elements in $F_n(\mc{H})$ parametrized by $X$, we say that $\ebk{j_* [\mbk{T(x)}] , [X]} \in \zahl$ is its {\it joint spectral flow} and denote it by $\jsf(\mbk{T(x)})$. For a continuous family of bounded (resp. unbounded) commuting Fredholm $n$-tuples $\mbk{T_1,\ldots,T_n}$, we say $\jsf(\iota \mbk{T(x)})$ is its joint spectral flow and denote it simply by $\jsf(\mbk{T(x)})$.
\end{defn}
\begin{exmp}[the case of $n=1$]
According to Section 7 of \cite{AtiyahPatodiSinger1976}, the spectral flow is defined as the canonical group isomorphism ${\rm sf} : \pi _1 (F_1(\mc{H})) \to \zahl$ as follows. For a continuous map $T : S^1 \to F_1(\mc{H})$ whose essential spectrum is $\mbk{-1,1}$, there is a family of continuous functions $j_i:[0,1] \to [-1,1]$ such that $-1 =j_0 \leq j_1 \leq \cdots \leq j_m =1$ and $\sigma (T(t))=\mbk{j_0(t),\ldots,j_m(t)}$ for any $t \in [0,1]$. Then we obtain the integer $l$ such that $j_k(1)=j_{k+l}(0)$ for any $k$. This $l$ is called the spectral flow. Now let $\mbk{T(t)}$ be a continuous family of bounded self-adjoint Fredholm operators such that $\sigma (T(t))=\mbk{0,(t+1)/2,1}$ and the eigenspace $E_{(t+1)/2}$ is of dimension $1$. Then by definition its spectral flow ${\rm sf} (\mbk{T(t)})$ is equal to $1$. On the other hand, we obtain $j_*(\mbk{T(t)})=1 \in H^1(S^1 ;\zahl)$ since the canonical inclusion $S^1 \to \Sym ^\infty (S^1,*)$ gives a generator $1 \in H^1(S^1;\zahl) \cong [S^1, \Sym ^\infty (S^1,*)]$ (see \cite{DoldThom1958} or Proposition 5.2.23 of \cite{AguilarGitlerPrieto2002}). This means that the joint spectral flow coincides with the ordinary spectral flow in the case of $X=S^1$.
\end{exmp}
\begin{prp}\label{prp:nat_j}
The homomorphism $j_*$ is a natural transform of multiplicative cohomology theories.
\end{prp}
\begin{proof}
According to Section 3 of D{\u{a}}d{\u{a}}rlat-N{\'e}methi~\cite{DadarlatNemethi1990},
\ma{S: \Hom (C_0(\real ^n) , \mbb{K}) &\ral \Hom (C_0(\real ^{n+1}), C_0(\real) \otimes \mbb{K})\\
\varphi &\longmapsto \id _{\real} \otimes \varphi}
or equivalently
\ma{S: F(S^n,*) &\ral \Omega F(S^n \times I,S^n \times \mbk{0,1} \cup \mbk{*} \times I)\\
(S,\mbk{V_x}_{x \in S})& \longmapsto \mbk{t \mapsto ((x,t) ,\mbk{V_x}_{x \in S})}}
gives a homotopy inverse of $\Omega F(S^{n+1} ,*) \to F(S^n,*)$. By the same argument we obtain
\ma{S: P(S^n,*) &\ral \Omega P(S^n \times I,S^n \times \mbk{0,1} \cup \mbk{*} \times I)\\
(S,\mbk{n_x}_{x \in S})& \longmapsto \mbk{t \mapsto ((x,t) ,\mbk{n_x}_{x \in S})}}
gives a homotopy inverse of $\Omega P(S^{n+1}) \to P(S^n,*)$.
Now by definition the following diagram commutes
\[
\xymatrix{F(S^n,*) \ar[r]^{S \ \ } \ar[d]_j &\Omega F(S^{n+1},*) \ar[d]^j \\
P(S^n,*) \ar[r]^{S \ \ } & \Omega P(S^{n+1},*).}
\]
The multiplicativity of $j_*$ follows immediately since the multiplicative structures on $\mbk{F(S^n,*)}_{n=0,1,2,\ldots}$ and $\mbk{P(S^n,*)}_{n=0,1,2,\ldots}$ are induced from the map $(S^n,*) \times (S^m,*) \to (S^{n+m},*)$ coming from the smash product.
\end{proof}
To prove the generalization of the spectral flow index theorem, we will see the relation between the joint spectral flow and the Chern character. The Chern character is a natural transform from the $K$-functor to the rational cohomology functor. Here there is a generalization of the Chern character for a general cohomology theory, which was introduced by Dold~\cite{Dold1962} and is called the Chern-Dold character.
Now we identify $k^*(X)$ with $\tilde{k}^{*+1}(SX)$ to extend $j_*$ to a natural transform between unreduced cohomology theories $k^*(X) \to H^*(X)$. It is compatible with the original $j_*$ according to Proposition \ref{prp:nat_j}.
\begin{prp}\label{prp:ch}
The $n$-th Chern-Dold character $\ch_n : k^n(X)\otimes \quot \to H^n(X; \quot)$ coincides with $j_*$ rationally.
\end{prp}
\begin{proof}
The following diagram
\[
\xymatrix{k^n(X) \otimes \quot \ar[r]_{\ch \ \ \ } \ar[d]_{j_*} \ar@{}[rd]|\circlearrowleft & H^n(X;k^* (\pt) \otimes \quot) \ar[d]_{1 \otimes j_*} \\
H^n(X ; \quot) \ar[r] ^{\sim \ \ \ \ } _{\ch =\id \ \ \ \ \ \ } & H^n(X; H^* (\pt) \otimes \quot)}
\]
commutes by Proposition \ref{prp:nat_j} and naturality of the Chern-Dold character. In fact, Dold proved in \cite{Dold1962} that there is a one-to-one correspondence between natural transforms of multiplicative cohomology theories $h \to h'$ and graded ring homomorphisms $h(\pt) \to h'(\pt)$ if $h'(\pt)$ is a graded vector space over $\quot$. The Chern-Dold character is induced from the ring homomorphism $h^*(\pt) \to \quot \otimes _\zahl h^*(\pt)$. Its naturality follows from the uniqueness.
Now $k^*(\pt) \cong \zahl[\beta ]$ ($\beta $ is of degree $-2$), $H^*(\pt) \cong \zahl$, and the ring homomorphism $j_*$ from $\zahl[\beta]$ to $\zahl$ is given by $1 \mapsto 1$ and $\beta \mapsto 0$. Hence $(1 \otimes j_*) \circ \ch$ coincides with the $n$-th Chern-Dold character $\ch _n$. This implies that $j_*=\ch _n$.
\end{proof}
Let $X$ be a closed $Spin^c$ manifold, $\slashed{\mf{S}}_\comp(X)$ the $\Cliff_n$-module bundle associated to $Spin^c(X)$ by the left multiplication on $\Cliff _n$, regarded as a right $\Cliff_n$-module, and $\slashed{\mf{D}}_X$ the $\Cliff _n$-Dirac operator on $\slashed{\mf{S}}_\comp(X)$. Now $\slashed{\mf{S}}_\comp (X)$ is equipped with the canonical $\zahl /2$-grading and $\slashed{\mf{D}}_X$ is an odd operator. Then it gives an element of $K_n(X) \cong KK(C(X) \hat{\otimes} \Cliff _n , \comp)$
$$[\slashed{\mf{D}}_X]:=[L^2(X,\slashed{\mf{S}}_\comp(X)), m, \slashed{\mf{D}}_X(1+\slashed{\mf{D}}_X^2)^{-1/2}],$$
which is the fundamental class of $K$-theory. Here $m: C(X) \hat \otimes \Cliff _n \to B(L^2(\slashed{\mf{S}}_\comp(X)))$ is given by the Clifford multiplication.
\begin{lem}\label{lem:pair}
Let $\mbk{T(x)}_{x \in X}$ be a continuous family of commuting Fredholm $n$-tuples. Then
$$\ebk{ [\ind T ],[\slashed{\mf{D}}_X]}_n= \jsf \mbk{T(x)}.$$
Here $\ebk{\cdot , \cdot}_n$ in the left hand side is the canonical pairing between $K^n(X)$ and $K_n(X)$.
\end{lem}
\begin{proof}
First we prove it in the case that $n$ is even. In that case we have a unique irreducible representation $\Delta_n$ of $\Cliff _n$ and the Dirac operator $\slashed{D}_X$ on $\slashed{S}_\comp (X):=Spin ^c(X) \times _{\Cliff _n} \Delta_n$. Now $\Delta _n$ is equipped with a canonical $\zahl/2$-grading and $\slashed{D}$ is an odd operator. It defines a $KK$-cycle
$$[\slashed{D}_X]:=[L^2(X,\slashed{S}_\comp (X)), m, \slashed{D}_X (1+\slashed{D}_X^2)^{-1/2}] \in KK(C(X) , \comp).$$
We denote by $[\![ \ind T ]\!]$ a $KK$-cycle $[\mc{H} \otimes \Delta_n , 1, T] \in KK(\comp ,C(X))$. Since $\Cliff _n \cong \Delta_n \otimes \Delta_n ^*$ as $\Cliff_n$-$\Cliff _n$-bimodules, the equalities $[\slashed{\mf{D}}_X]=[\slashed{D}_X] \otimes \Delta_n$ and $[\ind T]=[\![ \ind T ]\!] \hat \otimes \Delta _n$ (in particular $\ch [\ind T]=\ch [\![ \ind T ]\!]$) hold. Here $\Delta_n ^*$ is a Hilbert $\Cliff_n$-module by the inner product $\ebk{x,y}:=x^*y$.
The pairing $\ebk{\cdot , \cdot}_n$ is given by the Kasparov product $KK(\comp , C(X) \otimes \Cliff _n) \otimes KK(C(X) \otimes \Cliff _n , \comp) \to \zahl$. Therefore
\ma{\ebk{[\ind T], [\slashed{\mf{D}}]}_n&=[\ind T] \otimes _{C(X) \otimes \Cliff _n}[\slashed{\mf{D}}_X] \\&=([\![ \ind T ]\!] \otimes _{C(X)} [\slashed{D}_X]) \otimes (\Delta_n ^* \otimes _{\Cliff _n} \Delta_n ) = [\![ \ind T ]\!] \otimes _{C(X)} [\slashed{D}_X].}
Now we use the Chern character for $K$-homology that is compatible with pairing. The Chern character of the $Spin ^c$ Dirac operator $\slashed{D}_X$ is given by its Todd class that is given by its $Spin^c$ structure. Hence
\ma{[\![ \ind T ]\!] \otimes _{C(X)} [\slashed{D}_X]&=\ebk{\ch ([\![ \ind T]\!]), \ch ([\slashed{D}_X])}\\
&=\ebk{\ch ([\ind T ]),\Td (X) \cap [X]}\\
&=\ebk{\ch _n ([ \ind T ] ) ,[X]}=\jsf \mbk{T(x)}.}
Here the third equality holds because $\ch ([ \ind T ])$ is in $\bigoplus _{k \geq 0} H^{n+2k}(X;\quot ) = H^n(X; \quot )$ (as $\dim X=n$) and the zeroth Todd class $\Td _0 (X)$ is equal to 1, and the last equality holds by Proposition \ref{prp:ch}.
Finally we prove it in the case that $n$ is odd. We can reduce the problem to the case $n=1$ because for a family of self-adjoint operators $S(t)$ parametrized by $S^1$ whose spectral flow is $1$ (hence $[\ind S]=1 \in K^1(S^1) \cong \zahl$), we have
\ma{\ebk{[\ind T],[\slashed{\mf{D}}]}_n&=\ebk{[\ind T] \cup [\ind S] , [\slashed{\mf{D}}_X] \otimes [\slashed{\mf{D}}_{S^1}]}_{n+1}\\
&=\jsf (\mbk{T(x)} \times \mbk{S(t)})=\jsf \mbk{T(x)}.
}
Here we use the fact that the joint spectral flow of the product family $\mbk{T(x)} \times \mbk{S(t)}$ coincides with the product $\jsf(\mbk{T(x)}) \cdot \jsf(\mbk{S(t)})$.
\end{proof}
Now we give an index theorem that is a generalization of the spectral flow index theorem in \cite{AtiyahPatodiSinger1976}.
Let $B$ be a closed $n$-dimensional $Spin ^c$ manifold, $Z \to M \to B$ a smooth fiber bundle over $B$, $E$ a smooth complex vector bundle over $M$. We fix a decomposition $TM=T_VM \oplus T_HM$ of the tangent bundle where $T_VM:=\mbk{v \in TM; \pi _*v=0}$ is the vertical tangent bundle. For a hermitian vector bundle $E$, we denote by $\pi^*\slashed{\mf{S}}^E_\comp (B)$ the $\Cliff _n$-module bundle $\pi ^* \slashed{\mf{S}}_\comp (B) \otimes E$ on $M$. Now we define the {\it pull-back} of the $\Cliff_n$-Dirac operator $\slashed{\mf{D}}_B$ on $B$ twisted by $E$ as
\ma{\pi^* \slashed{\mf{D}}_B : &\Gamma (M,\pi ^* \slashed{\mf{S}}_\comp ^E(B)) \xra{\nabla} \Gamma (M , \pi ^*\slashed{\mf{S}} _\comp ^E(B) \otimes T^*M) \\
& \hspace{3em} \xra{p_{T_H^*M}}\Gamma (M,\pi^*\slashed{\mf{S}} _\comp^E (B)\otimes T_H^*M) \xra{h}\Gamma (M,\pi ^*\slashed{\mf{S}}_\comp ^E(B)).}
Here, $h$ is the left Clifford action of $\Cliff (TB) \cong \Cliff (T_HM)$ on $\pi^*\slashed{\mf{S}}_\comp^E(B)$. We write it down by using an orthonormal basis $\mbk{e_1,\ldots,e_n }$ of $T_{\pi(x)}B \cong T_{\pi(x)}^*B$ as
$$\pi ^*\slashed{\mf{D}}_B= \sum h(\pi ^* e_i) \nabla ^{\pi ^*\slashed{\mf{S}}_\comp ^E(B)}_{\pi^* e_i}.$$
Now it satisfies
\ma{\pi ^* \slashed{\mf{D}}_B (\pi ^* \varphi )=\pi ^* (\slashed{\mf{D}}_B\varphi).}
Let $\mbk{D_1,\ldots,D_n}$ be an $n$-tuple of fiberwise first order pseudodifferential operators on $E$, that is, a smooth family $\mbk{D(x)}$ of pseudodifferential operators on $\Gamma (M_x , E|_{M_x})$. Moreover we assume the following two conditions.
\theoremstyle{definition}
\newtheorem{cond}[equation]{Condition}
\begin{cond}\label{cond:comm}
\begin{enumerate}
\item The operators $D_i$ and $D_j$ commute for any $i,j$.
\item The square sum $\sum _{i=1}^n D_i ^2$ is fiberwise elliptic, that is, its principal symbol is invertible on $S(T_VM)$.
\end{enumerate}
\end{cond}
Then, by taking a trivialization of the Hilbert bundle of fiberwise $L^2$-sections $\mc{L}^2_f(M,E \hat \otimes \Cliff _n):=\mbk{L^2(Z_x,E_x \hat \otimes \Cliff _n)}_{x \in B}$, it forms a continuous family of unbounded commuting Fredholm $n$-tuples $\mbk{D(x)}=\mbk{(D_1(x),\ldots,D_n(x))}$ parametrized by $B$. Indeed, according to Kuiper's theorem every Hilbert space bundle is trivial, and $[D(x)]$ is independent of the choice of a trivialization: a trivialization of a Hilbert bundle $\mc{V}$ gives a unitary $U \in \Hom _{C(X)} (C(X) \otimes \mc{H}, \Gamma (X, \mc{V}))$, hence two trivializations $U$, $U'$ give a norm continuous unitary-valued function $U^{-1}U'$, which is homotopic to the identity. Combined with a connection on $\pi^*\slashed{\mf{S}}_\comp (B)$, which is fiberwise flat, the Dirac operator $D(x)=c_1D_1(x) + \cdots +c_nD_n(x)$ associated with $\mbk{D(x)}$ (here we denote by $c$ the $\Cliff_n$-action on $\slashed{\mf{S}}_\comp (B)$ and $c_i:=c(e_i)$ for an orthonormal basis $\mbk{e_i}$) also defines a first order pseudodifferential operator on $\pi ^*\slashed{\mf{S}}^E_\comp (B)$.
Now we describe our main theorem.
\begin{thm}\label{thm:jsf}
Let $B$, $M$, $E$, and $\mbk{D(x)}$ be as above. Then the following formula holds.
\ma{\ind _0 (\pi ^*\slashed{\mf{D}}_B +D(x))=\jsf \mbk{D(x)}.}
Here, for an odd self-adjoint operator $D$ on $\mc{H} =\mc{H}^0 \oplus \mc{H}^1$, we denote by $\ind _0 D$ the Fredholm index of $D : \mc{H}^0 \to \mc{H}^1$.
\end{thm}
To prove this theorem we prepare a lemma about an operator inequality. Hereafter in this section we denote $D(x)$ and $\pi ^*\slashed{\mf{D}}_B$ simply by $D_f$ and $D_b$.
\begin{lem}\label{lem:ineq}
For any $\alpha > 0$ there is a constant $C>0$ such that for any $\xi \in \Gamma (M,\pi^*\slashed{\mf{S}}_\comp^E(B))$
\maa{\ebk{[D_b,D_f]\xi,\xi} \geq -\alpha \ssbk{D_f \xi}^2-C\ssbk{\xi}^2. \label{eq:ineq}}
\end{lem}
\begin{proof}
First we observe that $[D_b,D_f]$ is also a fiberwise first-order pseudodifferential operator. Let $(V, x_b^1,\ldots,x_b^n)$ be a local coordinate chart around $x \in B$ and $(U, x_b^1,\ldots, x_b^n,x_f^1,\ldots ,x_f^m)$ a local coordinate chart in $\pi^{-1}(V)$ such that the tangent vectors $\partial _{x_b^i}(p)$ are in $(T_HM)_p$ for any $p \in {\pi^{-1}(x)}$. We get such a coordinate by identifying a neighborhood of the zero section of $T_HM|_{\pi^{-1}(x)} \cong N\pi^{-1}(x)$ with a tubular neighborhood of $\pi^{-1}(x)$. We assume that $\pi^*\slashed{\mf{S}}_\comp^E(B)$ is trivial on $U$ and fix a trivialization. Then, for any fiberwise pseudodifferential operator $P$ supported in $U$, the operator $[\partial _{x_b^i},P]$ is also fiberwise pseudodifferential. Indeed, when we write down a fiberwise pseudodifferential operator $P$ on a bounded open subset of $\real^{n+m}=\real ^n_{x_b}\times \real ^m_{x_f}$ as
$$Pu(x_b,x_f)=\int _{(y_f,\xi_f) \in \real ^m \times \real ^m} e^{i\ebk{x_f-y_f,\xi_f}}a(x_b,x_f,y_f,\xi _f)u(x_b,y_f)dy_fd\xi_f,$$
we have
\ma{\lbk{\partial_{ x_b^{i}},P}u(x_b,x_f)=&\int \partial _{x_b^{i}}(e^{i\ebk{x_f-y_f,\xi_f}}a(x_b,x_f,y_f,\xi_f)u(x_b,y_f))dy_fd\xi_f \\
&-\int e^{i\ebk{x_f-y_f,\xi_f}}a(x_b,x_f,y_f,\xi_f)\partial _{x_b^i}u(x_b,y_f)dy_fd\xi_f\\
=&\int e^{i\ebk{x_f-y_f,\xi_f}}\bk{\partial _{x_b^i}a(x_b,x_f,y_f,\xi_f)}u(x_b,y_f)dy_fd\xi_f .}
Let $D'_b:=\sum g^{ij}h(\partial _{x_b^i})\nabla _{\partial _{x^j_b}}$. Since the Riemannian metric $g^{ij}$ on $T_HM$ depends only on the local coordinate of $B$ (i.e. is a function on $B$), the operator $[D'_b,P]=[\sum g^{ij}h(\partial _{x_b^i})(\partial _{x_b^j} +\omega(\partial_{x_b^j})),P]$ is also fiberwise pseudodifferential.
For any $\xi \in \Gamma (U,\pi^*\slashed{\mf{S}}_\comp^E(B)|_U)$, the section $[D_b,P]\xi|_{\pi^{-1}(x)}$ depends only on the restriction of $\xi$ to $\pi^{-1}(x)$ and its derivatives in the normal direction. Since the Dirac operator $D_b$ coincides with $D'_b$ on $U_0:=U \cap \pi^{-1}(x)$ and $[P,D'_b]$ is fiberwise pseudodifferential, $[D_b,P]\xi|_{\pi^{-1}(x)}=[D'_b,P]\xi|_{\pi^{-1}(x)}$ does not depend on the normal derivatives of $\xi$. The above argument is independent of the choice of $x \in B$, so $[D_b,P]$ is also fiberwise pseudodifferential. By using a partition of unity, we conclude that $[D_b,D_f]$ is a fiberwise pseudodifferential operator.
As a consequence, we obtain that $[D_b,D_f](1+D_f^2)^{-1/2}$ is a zeroth order pseudodifferential operator; in particular, it is bounded. Now, for any $\lambda >0$, we obtain an inequality
\ma{
\ebk{[D_b,D_f]\xi,\xi}&=\ebk{\lambda[D_b,D_f](1+D_f^2)^{-1/2}(1+D_f^2)^{1/2}\xi,\lambda^{-1}\xi} \\
&\geq -\frac{1}{2} \lambda^{2}\ssbk{[D_b,D_f](1+D_f^2)^{-1/2}(1+D_f^2)^{1/2}\xi}^{2}-\frac{1}{2}\lambda^{-2}\ssbk{\xi}^{2}\\
&\geq -\frac{1}{2}\lambda^{2}\ssbk{[D_b,D_f](1+D_f^2)^{-1/2}}^{2}\ebk{(1+D_f^2)\xi,\xi}- \frac{1}{2}\lambda^{-2}\ssbk{\xi}^{2}\\
&=-\frac{1}{2}\lambda^{2}\ssbk{[D_b,D_f](1+D_f^{2})^{-1/2}}^{2}\ssbk{D_f\xi}^{2}\\
&\quad - \frac{1}{2}\bk{\lambda ^2\ssbk{[D_b,D_f](1+D_f^{2})^{-1/2}}^{2}+\lambda^{-2}}\ssbk{\xi}^{2}}
as in Lemma 7.5 of Kaad-Lesch~\cite{KaadLesch2012}. Now, choosing $\lambda := \sqrt{2 \alpha}\ssbk{[D_b,D_f](1+D_f^{2})^{-1/2}}^{-1}$ and $C:=\alpha + \lambda ^{-2} /2$, this $C$ satisfies the desired inequality \eqref{eq:ineq}.
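To make this choice transparent, abbreviate for the moment $R:=\ssbk{[D_b,D_f](1+D_f^2)^{-1/2}}^2$. The estimate above reads
$$\ebk{[D_b,D_f]\xi,\xi} \geq -\frac{1}{2}\lambda^2R\ssbk{D_f\xi}^2-\frac{1}{2}(\lambda^2R+\lambda^{-2})\ssbk{\xi}^2,$$
and $\lambda^2=2\alpha /R$ gives $\frac{1}{2}\lambda^2R=\alpha$, so the coefficient of $-\ssbk{\xi}^2$ is $\frac{1}{2}(2\alpha +\lambda^{-2})=\alpha +\lambda^{-2}/2=C$.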
\end{proof}
Now we use the Connes-Skandalis type sufficient condition, introduced by Kucerovsky~\cite{Kucerovsky1997}, for an unbounded Kasparov bimodule to represent the Kasparov product.
\begin{thm}[Kucerovsky~\cite{Kucerovsky1997}]\label{thm:Kas}
Suppose that $(E_1,\varphi _1,D_1)$, $(E_2,\varphi _2,D_2)$, and $(E_1 \hat \otimes E_2,\varphi _1 \hat \otimes 1 , D)$ are unbounded Kasparov bimodules for $(A,B)$, $(B,C)$, and $(A,C)$ such that the following conditions hold.
\begin{enumerate}
\item For all $x$ in some dense subset of $\varphi _1(A)E_1$, the operator
$$\lbk{\pmx{D & 0 \\0 & D_2},\pmx{0 & T_x \\ T_x^* & 0}}$$
is bounded on $\dom (D \oplus D_2)$.
\item The resolvent of $D$ is compatible with $D_1 \hat{\otimes} 1$.
\item For all $x$ in the domain, $\ebk{D_1x,Dx} + \ebk{Dx,D_1x} \geq \kappa \ebk{x,x}$.
\end{enumerate}
Here $x \in E_1$ is homogeneous and $T_x : E_2 \to E_1 \hat \otimes E_2$ maps $e \mapsto x \hat \otimes e$. Then $[E_1 \hat \otimes E_2,\varphi _1 \hat \otimes 1 , D] \in KK(A,C)$ represents the Kasparov product of $[E_1,\varphi _1,D_1] \in KK(A,B)$ and $[E_2,\varphi _2,D_2] \in KK(B,C)$.
\end{thm}
Here the resolvent of $D$ is said to be {\it compatible} with $D'$ if there is a dense submodule $\mc{W} \subset E_1 \hat \otimes E_2$ such that $D'(i\mu +D)^{-1}(i\mu ' +D')^{-1}$ is defined on $\mc{W}$ for any $\mu , \mu ' \in \real \setminus \mbk{0}$. It holds for example in the case that $\dom D \subset \dom D'$.
\begin{proof}[Proof of Theorem \ref{thm:jsf}]
According to Lemma \ref{lem:pair}, what remains to prove is that the left hand side coincides with the pairing $\ebk{[\ind D], [\slashed{\mf{D}}_B]}_n$. Here this pairing is given by the Kasparov product $KK(\comp , C(B) \hat{\otimes } \Cliff _n) \otimes KK(C(B) \hat{\otimes} \Cliff _n, \comp) \to \zahl$. It is computed as follows.
\ma{
&\lbk{\mc{L}^2(M,E \hat{\otimes} \Cliff _n) , 1 , D } \otimes _{C(B) \hat{\otimes} \Cliff _n} \lbk{L^2(B, \slashed{\mf{S}}_\comp(B)) , m , \slashed{\mf{D}}_B}\\
&=\lbk{L^2(M, (E \hat \otimes \Cliff _n) \hat {\otimes} _{\Cliff _n} \pi ^*\slashed{\mf{S}}_\comp (B)) , 1 , \slashed{\mf{D}}_B \times D}\\
&=\lbk{L^2(M, \pi ^*\slashed{\mf{S}}^E_\comp (B)) , 1 , \slashed{\mf{D}}_B \times D}.
}
It remains to prove that $D_b+D_f$ satisfies conditions 1, 2, and 3 of Theorem \ref{thm:Kas}.
For any $\sigma \in C^\infty (M,E)$ and $\xi \in C^\infty (B,\slashed{\mf{S}}_\comp(B))$, the Leibniz rule of $\pi ^*\slashed{\mf{D}}_B$ implies that
\ma{(D_b+D_f) T_\sigma \xi &= (D_b +D_f)(\sigma \cdot \pi^*\xi)=(D_b+D_f) \sigma \cdot \pi^*\xi +\sigma \cdot D_b\pi ^*\xi \\
&=T_{(D_b+D_f)\sigma}\xi + \sigma \cdot \pi ^* (\slashed{\mf{D}}_B \xi).}
Therefore $(D_b+D_f)T_\sigma -T_\sigma \slashed{\mf{D}}_B=T_{(D_b+D_f)\sigma}$ is a bounded operator and hence condition 1 holds. Condition 2 holds since $\dom (D_b+D_f) \subset \dom D_f$. For any $\xi \in C^\infty(M,\pi ^*\slashed{\mf{S}}_\comp ^E(B))$, which is dense in the domain,
\ma{\ebk{D_f\xi, (D_b+D_f)\xi} + \ebk{(D_b+D_f)\xi , D_f\xi}&=\ebk{[D_b,D_f]\xi ,\xi} +\ssbk{D_f\xi}^2.}
Condition 3 follows from it and Lemma \ref{lem:ineq}.
\end{proof}
\begin{remk}\label{rem:findim}
The calculus above is motivated by that of Connes-Skandalis~\cite{ConnesSkandalis1984}, in which they dealt with principal symbols and zeroth order pseudodifferential operators. Here we use the unbounded operators directly in order to apply the argument in more general settings. For example, by the same argument we obtain a similar formula
$$\ind _0(D+A(x))= \jsf(\mbk{A(x)})$$
for a smooth family of mutually commuting self-adjoint complex coefficient matrices $A(x)=(A_1(x),\ldots ,A_n(x))$. Other examples are given in the next section.
\end{remk}
\subsection{A Callias type index theorem for open manifolds}\label{section:3.2}
Now we consider generalizing our index theorem to the case of noncompact base spaces. The pairing of homology and cohomology still works in the noncompact case if the cohomology is replaced with the one with compact support. We can deal with it in the context of an infinite dimensional analogue of Callias-type operators \cite{Callias1978}; here fiberwise elliptic operators play the role of the potential term in the original theory of Callias. First we define the admissibility of a connective $K$-cocycle (see also \cite{Bunke1995}).
\begin{defn}
A continuous family of commuting Fredholm $n$-tuples $\mbk{D_1,\ldots,D_n}$ parametrized by a complete Riemannian manifold $B$ is said to be {\it admissible} if there are a constant $\kappa >0$ and a compact subset $K \subset B$ that satisfy the following.
\begin{enumerate}
\item $D(x)^2 \geq \kappa >0 $ for $x \in B \setminus K$,
\item There are $C_1>0$ and $C_2>0$ such that $\ebk{([D_b,D_f]+D_f^2)\xi,\xi} \geq C_1\ssbk{D_f\xi}^2- C_2\ssbk{\xi}^2$ and $\kappa C_1>C_2$.
\end{enumerate}
\end{defn}
Actually the second condition is not essential.
\begin{lem}
For any continuous family of commuting Fredholm $n$-tuples $\mbk{D_1,\ldots,D_n}$ parametrized by a complete $n$-dimensional Riemannian manifold $B$ that satisfies condition 1 above, there is some $t >0$ such that $tD:=(tD_1,\ldots,tD_n)$ is admissible.
\end{lem}
\begin{proof}
By a similar calculation to the one in Lemma \ref{lem:ineq} (we replace $D_f$ in the commutator with $tD_f$ but do not replace the one that arises in $(1+D_f^2)$) we see that for any $\lambda >0$
\ma{
\ebk{[D_b,tD_f]\xi,\xi}&\geq -\frac{1}{2}\lambda^{2}t^2R\ebk{D_f\xi,D_f\xi}- \frac{1}{2}(\lambda ^2t^2R+\lambda^{-2})\ebk{\xi,\xi},}
where we denote $R:=\ssbk{[D_b,D_f](1+D_f^2)^{-1/2}}^2$. Now if we choose $\lambda =R^{-1/2}$, then
$$\ebk{([D_b,tD_f]+(tD_f)^2) \xi,\xi} \geq \frac{t^2}{2} \ssbk{D_f\xi}^2 - \bk{\frac{t^2}{2} + R} \ssbk{\xi}^2.$$
Now we can take the constant $\kappa$ in condition 1 for $tD_f$ to be $t^2\kappa$. When we set $C_1=\frac{t^2}{2}$ and $C_2=\frac{t^2}{2}+R$, for sufficiently large $t>0$ the inequality $(t^2\kappa) C_1 > C_2 $ holds and hence the constants $t^2\kappa$, $C_1$, and $C_2$ satisfy condition 2.
\end{proof}
Now we introduce a geometric setting and an index theorem for the noncompact case.
Let $B$ be a complete $n$-dimensional $Spin^c$ manifold, $Z \to M \to B$ a smooth fiber bundle over $B$ with a fixed decomposition of the tangent bundle $TM \cong T_VM \oplus T_HM$, $E$ a smooth complex vector bundle over $M$, and $\mbk{D_1,\ldots,D_n}$ an $n$-tuple of fiberwise first order pseudodifferential operators on $E$ that satisfies Condition \ref{cond:comm}. Moreover we assume that $\mbk{D_1,\ldots,D_n}$ is admissible.
\begin{thm}\label{thm:jsfopen}
In the above situation, the operator $\pi ^*\slashed{\mf{D}}_B +D(x)$ is Fredholm and the following formula holds.
\ma{\ind _0(\pi^*\slashed{\mf{D}}_B + D(x))=\jsf \mbk{D(x)}}
\end{thm}
\begin{proof}
The proof is essentially the same as for Theorem \ref{thm:jsf}; the remaining part is to show that $\pi ^*\slashed{\mf{D}}_B + D(x)$ is a Fredholm operator. We prove it by using an estimate motivated by Theorem 3.7 of Gromov-Lawson~\cite{GromovLawson1984}. Here we use the shorthand $D_b$ and $D_f$ again. Let $E_\lambda$ ($\lambda \in \real$) be the $\lambda$-eigenspace of the self-adjoint operator $D_b+D_f$. Now we fix an $\alpha >0$. Then for any $\sigma \in \bigoplus _{|\lambda |<\alpha} E_\lambda $,
\ma{ 0 &\leq \ssbk{D_b\sigma }^2 \leq \ssbk{(D_b+D_f)\sigma }^2 - \ebk{([D_b,D_f]+D_f^2)\sigma ,\sigma }\\
&\leq \alpha^2 \ssbk{\sigma }^2 - C_1 \ssbk{D_f\sigma }^2 +C_2 \ssbk{\sigma }^2\\
&\leq (\alpha^2 +C_2) \ssbk{\sigma }^2 - C_1 \ssbk{D_f \sigma }^2 _{B \setminus K}\\
&\leq (\alpha^2 -\kappa C_1 +C_2) \ssbk{\sigma }^2 + \kappa C_1 \ssbk{\sigma }_K^2.}
By assumption we can retake $\alpha >0$ such that $\kappa C_1-C_2 >\alpha^2$. Then there is a constant $C>0$ such that
$$\ssbk{\sigma } \leq C\ssbk{\sigma }_K.$$
Now we take a parametrix $Q$ of the elliptic operator $D_b+D_f$ and set $\mc{S}:=1-Q(D_b+D_f)$. Let $P$ be the projection from $L^2(M, \pi ^* \slashed{\mf{S}}_\comp^E (B))$ to the subspace $L^2(\pi ^{-1}(K), \pi ^* \slashed{\mf{S}}_\comp^E (B)|_{\pi ^{-1}(K)})$. Then $P\mc{S}$ is a compact operator and
$$\ssbk{P\mc{S}\sigma } \geq \ssbk{P\sigma}-\ssbk{PQ(D_b+D_f)\sigma} \geq C^{-1}\ssbk{\sigma} -\alpha \ssbk{PQ} \ssbk{\sigma } .$$
Choosing $\alpha >0$ sufficiently small, we see that $P\mc{S}$ is bounded below by $C^{-1}-\alpha \ssbk{PQ} >0$ on $\bigoplus _{|\lambda|<\alpha} E_\lambda$. It implies that $\bigoplus _{|\lambda|<\alpha} E_\lambda $ is finite dimensional, since a compact operator that is bounded below by a positive number exists only on a finite dimensional space.
\end{proof}
\begin{exmp}[the case of $B=\real$]
Let $\mbk{A_t}_{t \in \real}$ be a continuous family of self-adjoint matrices such that there are a $\lambda >0$ and two invertible self-adjoint matrices $A_+$, $A_-$ with $A_t=A_-$ for $t \leq -\lambda $ and $A_t=A_+$ for $ \lambda \leq t$. Now, as is noted in Remark \ref{rem:findim}, we have a finite dimensional analogue of Theorem \ref{thm:jsfopen}. In the $1$-dimensional case it is of the form
$$\ind (\frac{d}{dt} + A_t)={\rm sf} (\mbk{A_t}).$$
Now its right hand side is given by the difference
$$\# \mbk{ \text{negative eigenvalues of $A_-$}}- \# \mbk{\text{negative eigenvalues of $A_+$}}.$$
It is nonzero in general, whereas in the case that the parameter space is a circle we have to deal with operators on an infinite dimensional Hilbert space to obtain examples with nontrivial index.
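As a minimal sanity check in the case of $1 \times 1$ matrices, take $A_t:=\tanh (t)$, flattened outside a compact set so that $A_t=\mp 1$ for $\mp t \geq \lambda$. The solutions of $(\frac{d}{dt}+A_t)u=0$ are the multiples of $e^{-\int _0^t A_s ds}$, which decay like $e^{-|t|}$ at both ends and span a one dimensional kernel, whereas the solutions of the adjoint equation $(-\frac{d}{dt}+A_t)v=0$ grow like $e^{|t|}$ and contribute no cokernel. Hence $\ind (\frac{d}{dt}+A_t)=1$, in accordance with the difference $1-0=1$ of the numbers of negative eigenvalues of $A_-$ and $A_+$.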
\end{exmp}
\begin{exmp}
Let $B$ be a complete $Spin^c$ manifold, $Z_1, \ldots , Z_n$ be closed odd dimensional $Spin^c$ manifolds and $\mbk{g^1_x,\ldots ,g_x^n}_{x \in B}$ be a smooth family of metrics on $Z_1 ,\ldots, Z_n$ such that the scalar curvature of the product manifold $Z:=Z_1 \times \cdots \times Z_n$ is uniformly strictly positive outside a compact subset $K \subset B$. We denote by $\slashed{D}_{i,x}$ the Dirac operator on $Z_i$ with respect to the metric $g^i_x$. Then there is a constant $\lambda >0$ such that $(\lambda \slashed{D}_{1,x},\ldots,\lambda \slashed{D}_{n,x})$ is an admissible family of commuting Fredholm $n$-tuples and the Fredholm index of the $Spin^c$ Dirac operator on $M:=B \times Z$ with respect to the product metric coincides with its joint spectral flow. This gives a map
$$\ind : [(B^+,*),(\mc{R} (Z_1,\ldots,Z_n),\mc{R}(Z_1,\ldots,Z_n)_{\geq \lambda})] \to \zahl$$
where $\mc{R}(Z_1, \ldots , Z_n)$ is the product of spaces of Riemannian metrics $\mc{R}(Z_1) \times \cdots \times \mc{R}(Z_n)$ and $\mc{R}(Z_1,\ldots,Z_n)_{\geq \lambda}$ is the subspace of $\mc{R}(Z_1,\ldots,Z_n)$ such that the scalar curvature of the product metric $(Z_1,g_1) \times \cdots \times (Z_n,g_n)$ is larger than $\lambda >0$ (its homotopy type is independent of the choice of $\lambda$).
In particular when we choose $B$ as $\real ^n$ the left hand side is isomorphic to $\pi _{n-1} (\mc{R}(Z_1,\ldots,Z_n)_{\geq \lambda})$ because $\mc{R}(Z_1,\ldots,Z_n)$ is contractible.
\end{exmp}
\subsection{Families twisted by a vector bundle}
In this section we generalize the joint spectral flow and its index theorem to the case of $V$-twisted families of commuting Fredholm $n$-tuples introduced at the end of Section \ref{section:2}. This generalization is essential in Section \ref{section:4.1}.
Let $V$ be a real vector bundle. We denote by $P_{V}$ the fiber bundle $GL(V) \times _{GL(n,\real)} P(S^{n},*)$. The set of homotopy classes of continuous sections $\pi _0 \Gamma (X,P_{V})$ forms the twisted cohomology group $H^{V}(X;\zahl)$. Now, twists of ordinary cohomology are classified by $H^1(X,\zahl /2)$, and in our case the corresponding cohomology class is determined by the orientation bundle of $V$. As in Definition \ref{def:jsf}, there is a continuous map $j: F_{V}(\mc{H}) \to P_{V}$, which induces the natural transformation $j_*:k^{V} \to H^{V}$.
\begin{defn}\label{def:jsftwisted}
Let $X$ be an oriented closed manifold of dimension $n$ and $V$ an $n$-dimensional oriented vector bundle. For a continuous family $\mbk{T(x)}_{x \in X}$ of commuting Fredholm $n$-tuples twisted by $V$, we say that the integer $\ebk{j_* [\mbk{T(x)}] , [X]} \in \zahl$ is its {\it joint spectral flow} and denote it by $\jsf(\mbk{T(x)})$. Here we identify the two groups $H^V(X;\zahl)$ and $H^n(X;\zahl)$ in the canonical way. For a continuous family of bounded (resp. unbounded) commuting Fredholm $n$-tuples $\mbk{T(x)}$ twisted by $V$, we say $\jsf(\iota \mbk{T(x)})$ is its joint spectral flow and denote it simply by $\jsf(\mbk{T(x)})$.
\end{defn}
Now we introduce the corresponding geometric setting and prove a generalization of the joint spectral flow index theorem (Theorem \ref{thm:jsf}) for a family twisted by a $Spin^c$ vector bundle.
Let $B$ be a closed $n$-dimensional $Spin^c$ manifold, $Z \to M \to B$ a smooth fiber bundle over $B$ such that the total space $M$ is also a $Spin^c$ manifold, $V$ be an $n$-dimensional $Spin^c$ vector bundle over $B$, and $E$ a smooth complex vector bundle over $M$. We denote by $\Psi _f^1 (M,E)$ the fiber bundle over $B$ whose fiber on $x \in B$ is the space of first order pseudodifferential operators on $\Gamma (M_x,E|_{M_x})$. We consider a map of $B$-bundles $\mbk{D_v(x)}_{(x,v) \in V \setminus \mbk{0}}:V \setminus \mbk{0} \to \Psi ^1 _f (M,E)$ that satisfies the following conditions.
\begin{cond}\label{cond:commtwist}
\begin{enumerate}
\item The operators $D_v(x)$ and $D_w(x)$ commute for any $v,w \in V_x \setminus \mbk{0}$.
\item The equality $g \cdot (D_{v_1}(x),\ldots,D_{v_n}(x))=(D_{g \cdot v_1}(x) , \ldots , D_{g \cdot v_n}(x))$ holds for any $g \in GL(n;\real)$ and any basis $(v_1,\ldots,v_n) $ of $V_x$.
\item The square sum $\sum _{i=1}^n D_{v_i} ^2$ for an orthonormal basis $(v_1,\ldots,v_n)$ of $V_x$ is fiberwise elliptic, that is, its principal symbol is invertible on $S(T_VM)$.
\end{enumerate}
\end{cond}
Then it forms a continuous family of unbounded commuting Fredholm $n$-tuples $\mbk{D(x)}$ twisted by $V$.
Next, we replace the fundamental $KK$-class on $B$ with one that is compatible with $\mbk{D(x)}$. Instead of $\slashed{\mf{S}}_\comp (B)$, we consider the spinor bundle $\slashed{\mf{S}}_\comp(B;V):=\slashed{S}_\comp (TB \oplus V)$ for the even dimensional $Spin^c$ vector bundle $TB \oplus V$. It is equipped with an action of $\Cliff (TB) \hat \otimes \Cliff (V)$. Here we denote by $c $ and $h$ its restrictions to $1 \hat \otimes \Cliff(V)$ and $\Cliff (TB) \hat \otimes 1$ respectively. Now we define the pull-back $\pi^*\slashed{\mf{D}}_B^V$ of the Dirac operator twisted by $E$ in a similar way to the one in Section \ref{section:3.1}.
\begin{thm}\label{thm:jsftwisted}
Let $B$, $M$ and $D(x)$ be as above. Then the following formula holds.
\ma{\ind (\pi ^*\slashed{\mf{D}}_B^V + D(x))=\jsf \{ D(x) \} .}
\end{thm}
\begin{proof}
First we embed $V$ into a trivial real vector bundle $\underline{\real}^p$ linearly and denote its orthogonal complement by $W$.
We define the following $KK$-classes
\ma{[D_W]&:=\lbk{\mc{L}_f^2(W ,\Cliff (\pi ^*W)), m , D_W:=\sum h(e_i) \frac{\partial}{\partial w_i}} \in KK(\Gamma _0 \Cliff (\pi^*W) , C(B)),\\
[C_W]&:=\lbk{\Gamma _0 \Cliff (\pi^*W) , m , C_W:=\sum c(e_i) w_i} \in KK(C(B),\Gamma _0 \Cliff (\pi^*W)),}
where $\mbk{e_i}$ is an orthonormal basis on $W_x$ and $w_i=\ebk{w,e_i}$ the coordinate functions with respect to $\mbk{e_i}$. We mention that $D_W$ and $C_W$ are independent of the choice of $\mbk{e_i}$ and hence they are well-defined.
Then, the theory of harmonic oscillators (see for example Section 1.13 of \cite{HigsonGuentner2004}) shows that $[C_W] \otimes _{\Gamma _0 \Cliff (\pi ^*W)} [D_W]=[D_W+C_W] =1 \in KK(C(B) ,C(B))$ because the kernel of the harmonic oscillator is one dimensional and invariant under the orthogonal group.
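In the simplest case of a rank one fiber $W_x \cong \real$ this is the classical supersymmetric computation: $(D_W+C_W)^2=-\frac{d^2}{dw^2}+w^2+(D_WC_W+C_WD_W)$, where the cross term acts as $\pm 1$ on the two graded components, and the ground state of the harmonic oscillator $-\frac{d^2}{dw^2}+w^2$ (with eigenvalue $1$) is the Gaussian $e^{-w^2/2}$; hence the kernel of $D_W+C_W$ is spanned by $e^{-w^2/2}$ in the component on which the cross term acts as $-1$.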
Now
$$D \times C_W=(D_{v_1},\ldots,D_{v_n},c_{w_1},\ldots,c_{w_{p-n}})$$
is a smooth family of commuting Fredholm $n$-tuples twisted by $V \oplus W \cong \underline{\real}^p$. Moreover it is admissible on $W$ because $(D \times C_W)^2=D^2+\ssbk{w}^2$. According to Theorem \ref{thm:jsfopen},
\ma{\ind (D_b+D_f+D_W+C_W)=\jsf (\mbk{(D \times C_W)(x,w)})=\jsf (\mbk{D(x)}).}
On the other hand, by the associativity of the Kasparov product
\ma{\ind (D_b+D_f+D_W+C_W)&=[D_f+C_W]\otimes_{\Gamma_0 \Cliff(\pi^*W)} [D_W+D_b]\\
&=([D_f] \otimes _{C(B)} [C_W]) \otimes _{\Gamma_0 \Cliff(\pi^*W)}([D_W] \otimes _{C(B)} [D_b])\\
&=[D_f] \otimes _{C(B)}[D_b]=\ind (D_b+D_f).}
\end{proof}
Some examples of geometric situations to which this theorem applies are introduced in Section \ref{section:4.1}.
\section{Applications}\label{section:4}
In this section we introduce some applications of the joint spectral flow and its index theorem.
\subsection{Witten deformation and localization}\label{section:4.1}
It is easy to compute the joint spectral flow of a continuous family of commuting Fredholm $n$-tuples when its joint spectra intersect zero transversally. In such cases we often reduce the problem of computing the index (which usually requires solving some linear partial differential equations or integrating some characteristic classes) to that of counting points with multiplicity.
The most typical example is the classical Poincar\'e-Hopf theorem.
\begin{cor}[The Poincar\'e-Hopf theorem]
Let $M$ be a $Spin ^c$ manifold and $X$ a vector field on $M$ whose zeros $M^X:=\mbk{p \in M \mid X(p)=0}$ are isolated. Then
$$\chi (M)=\sum _{p \in M^X} \nu _p,$$
where $\nu _p$ denotes the local index of $X$ at $p$.
\end{cor}
The proof is essentially the same as that of Witten~\cite{Witten1982}. Here we restrict $M$ to $Spin^c$ manifolds, but this is not an essential assumption.
\begin{proof}
By the Hodge-Kodaira decomposition the Euler characteristic $\chi (M)$ can be computed as the index of the de Rham operator $D_{\rm dR}:=d+d^* : \Gamma (\bigwedge ^{\rm even/odd} TM) \to \Gamma (\bigwedge ^{\rm odd/even} TM)$. Now $\Cliff (TM)$ acts on $\Cliff(TM)$ in two ways, $c(v)\xi:=v \cdot \xi$ and $h(v)\xi:=\gamma(\xi)\cdot v$ (for $v \in TM$ and $\xi \in \Cliff (TM)$), where $\gamma $ is the grading operator on $\Cliff (TM) \cong \Cliff (TM)^0 \oplus \Cliff (TM)^1$. They induce the $\Cliff(TM) \hat \otimes \Cliff(TM)$-action on $\Cliff(TM)$ because $c(v)$ and $h(w)$ anticommute. Because $M$ is a $Spin^c$ manifold, this module bundle is the unique irreducible $\Cliff (TM \oplus TM)$-module bundle $\slashed{S}_\comp (TM \oplus TM)$. By Leibniz's rule,
\ma{D_{\rm dR}(\gamma (\xi) \cdot X)=-\gamma (D_{\rm dR} \xi) \cdot X + (-1)^{\partial \gamma (\xi )}\gamma (\xi) \cdot D_{\rm dR}(X)}
where we use the fact that $D_{\rm dR}$ is an odd operator. It means that $D_{\rm dR}$ and $h(X)$ anticommute modulo the bounded operator $(-1)^{\partial \xi+1} h(D_{\rm dR}(X))$. It shows that $D_{\rm dR}+th(X)$ is Fredholm for any $t>0$ because $(D_{\rm dR}+th(X))^2 =D_{\rm dR}^2 +t^2\ssbk{X}^2 + t[D_{\rm dR},h(X)]$ is a bounded perturbation of the Laplace operator $D_{\rm dR}^2$, which is positive and has compact resolvent. On the other hand, $h(X)=\sum \ebk{e_i,X}h(e_i)$ is a commuting $n$-tuple of Fredholm operators twisted by $TM$ (now we consider $\ebk{e_i,X}$ as Fredholm operators on the $1$-dimensional vector space $\underline{\comp}$). As a consequence of Theorem \ref{thm:jsftwisted} (and Remark \ref{rem:findim}), we have
\ma{\chi(M)&=\ind (D_{\rm dR})=\ind (D_{\rm dR}+h(X))\\
&=\jsf (\mbk{\ebk{e_i,X}})=\sum _{p \in M^X} \nu _p.}
The last equality follows from the definition of the joint spectral flow.
\end{proof}
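For instance, on $M=S^2$ the gradient vector field of the height function has exactly two zeros, the two poles, each of local index $\nu _p=+1$, and the formula recovers $\chi (S^2)=2$.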
Now we consider an infinite dimensional analogue of this approach for a localization problem of index.
Let $B$ be an $n$-dimensional closed $Spin^c$ manifold, $M_1,\ldots,M_n \to B$ fiber bundles such that each fiber $Z_1$, \ldots , $Z_n$ is an odd dimensional closed manifold and $T_VM_i$ are equipped with $Spin^c$ structures, and $E$ a complex vector bundle on $M:=M_1 \times _B \cdots \times _B M_n$. Now $T_VM \oplus \underline{\real}^n$ is an even dimensional vector bundle and hence there is a unique irreducible $\Cliff (T_VM \oplus \underline{\real}^n)$-module bundle $\slashed{S}_\comp (T_VM \oplus \underline{\real}^n)$. We denote by $\slashed{S}_\comp(T_VM_i) \cong \slashed{S}_\comp^0(T_VM_i) \oplus \slashed{S}_\comp^1(T_VM_i)$ the unique $\zahl/2$-graded $\Cliff (T_VM_i)$-module bundle, which is isomorphic to $\slashed{S}_\comp(T_VM_i \oplus \underline{\real})$. Then it decomposes as a tensor product as follows.
\ma{\slashed{S}_\comp (T_VM \oplus \underline{\real}^n) &\cong \slashed{S}_\comp (T_VM_1 \oplus \underline{\real}) \hat \otimes \cdots \hat \otimes \slashed{S}_\comp (T_VM_n \oplus \underline{\real})\\
&\cong (\slashed{S}^0_\comp (T_VM_1) \hat{\otimes} \Cliff _1) \hat \otimes \cdots \hat \otimes (\slashed{S}^0_\comp (T_VM_n) \hat \otimes \Cliff _1)\\
&\cong \bk{\slashed{S}^0_\comp (T_VM_1) \otimes \cdots \otimes \slashed{S}^0_\comp (T_VM_n)} \hat{\otimes} \Cliff _n .}
Hereafter we denote $\slashed{\mf{S}}_{\comp ,f}(M;\underline{\real}^n):= \slashed{S}_\comp (T_VM \oplus \underline{\real }^n)$ and $\slashed{S}^0_{\comp ,f} (M;\underline{\real}^n):=\slashed{S}^0_\comp (T_VM_1) \otimes \cdots \otimes \slashed{S}^0_\comp (T_VM_n)$. The inclusions $T_VM \subset T_VM \oplus \underline{\real}^n$ and $\underline{\real}^n \subset T_VM \oplus \underline{\real}^n$ induce the actions of $\Cliff(T_VM)$ and $\Cliff _n$ on $\slashed{\mf{S}}_{\comp ,f} (M ; \underline{\real}^n)$. Under the above identification, a vector $v=v_1 \oplus \cdots \oplus v_n \in T_VM$ acts as $(c(v_1) \otimes 1 \otimes \cdots \otimes 1) \otimes c_1 + \cdots + (1 \otimes \cdots \otimes 1 \otimes c(v_n)) \otimes c_n$ and $\Cliff _n$ acts as $1 \otimes h$ (here we denote the left and twisted right actions of $\Cliff _n$ on $\Cliff _n$ by $c$ and $h$). Hence the fiberwise Dirac operator $\slashed{D}_f$ is decomposed as
$$\slashed{D}_f=c_1\slashed{D}_1 + \cdots +c_n\slashed{D}_n ,$$
where $\slashed{D}_i$'s are Dirac operators for the $M_i$ direction
\ma{\slashed{D}_i &: \Gamma (M, \slashed{S}_\comp (T_VM \oplus \underline{\real}^n)) \xra{\nabla} \Gamma (M, \slashed{S}_\comp (T_VM \oplus \underline{\real}^n) \otimes T^*M)\\
& \hspace{5em} \xra{p_{T_V^*M_i}} \Gamma (M, \slashed{S}_\comp (T_VM \oplus \underline{\real}^n) \otimes T_V^*M_i) \xra{c} \Gamma (M,\slashed{S}_\comp (T_VM \oplus \underline{\real}^n)).}
Similarly, the twisted spinor bundle $\slashed{\mf{S}}_{\comp ,f} ^E (M; \underline{\real}^n):=\slashed{S}_\comp (T_VM \oplus \underline{\real}^n)\otimes E$ is isomorphic to $\slashed{S}^{0,E}_{\comp ,f}(M; \underline{\real}^n) \hat \otimes \Cliff _n$. Moreover if $E$ is equipped with a connection $\nabla ^E$ whose curvature $R^E$ satisfies $R^E(X,Y)=0$ for any $X \in T_VM_i$ and $Y \in T_VM_j $ ($i \neq j$), then the Dirac operator twisted by $E$ is decomposed as $\slashed{D}^E=c_1 \slashed{D}^E_1 + \cdots + c_n\slashed{D}^E_n$ such that $\slashed{D}^E_i$ commutes with $\slashed{D}^E_j$. Now $(\slashed{D}^E_1 , \cdots , \slashed{D}^E_n)$ forms a smooth family of unbounded commuting Fredholm $n$-tuples and $\slashed{D}_f^E$ is the smooth family of the Dirac operators associated with it.
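Let us sketch why the curvature assumption yields commutativity. For vector fields $X \in \Gamma (T_VM_i)$ and $Y \in \Gamma (T_VM_j)$ with $i \neq j$, the commutator of the covariant derivatives appearing in $\slashed{D}^E_i$ and $\slashed{D}^E_j$ is the curvature term
$$[\nabla _X , \nabla _Y]-\nabla _{[X,Y]}=R^{\slashed{S}^{0,E}_{\comp ,f}(M;\underline{\real}^n)}(X,Y),$$
and this vanishes: the fiberwise metric is a product, so the spinor part of the connection has no mixed curvature, and $R^E(X,Y)=0$ by assumption.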
More generally, we obtain some examples of twisted commuting Fredholm $n$-tuples. Let $V$ be a real vector bundle whose structure group is a discrete subgroup $G$ of $GL(n,\real)$ and $B':=G(V)$ the $G$-frame bundle of $V$, $M_1',\ldots,M_n'$ fiber bundles with a $G$-action on $M':=M_1' \times_{B'} \cdots \times_{B'} M_n'$ that is compatible with the projection $M' \to B'$, and $E$ a $G$-equivariant vector bundle on $M'$ whose connection $\nabla$ is $G$-equivariant and satisfies the above assumption. It induces a unitary representation $U_x$ of $G$ on $L^2(M'_x , \slashed{\mf{S}}_{\comp}^E(M'_x) )$ where $M'_x:={\pi'} ^{-1}(x)$ ($\pi'$ is the projection from $M'$ to $B'$). We assume that
$$U_x(g) \slashed{D}^E_i U_x(g)^* = \sum g_{ij}\slashed{D}_j^E.$$
Then the correspondence that assigns to a frame $g=(v_1,\ldots ,v_n) \in B'$ over $x$ the $n$-tuple $(\slashed{D}^E_{v_1}(x) , \ldots , \slashed{D}^E_{v_n}(x)) \in \mc{F}_n(\mc{H})$ is $G$-equivariant, and hence the map $v \mapsto \slashed{D}^E_v(x)$ defines a smooth family of commuting Fredholm $n$-tuples twisted by $V$.
There are two fundamental examples. The first is the $SL(n,\zahl)$-action on $\mathbb{T}^n=(S^1)^n$ or the product bundle $\mathbb{T}^n \times B$. The second is the $\mf{S}_n$-action on the bundle $M' \times _B \cdots \times _B M'$.
Then the Dirac operator on a fiber bundle $M:=M'/G \to B$ is that associated with $\{ \slashed{D}^E_v (x) \}_{x \in B}$.
\begin{thm}\label{thm:geom}
Let $B$, $M$, $V$, $E$, and $\nabla$ be as above. Then
$$\ind _0 ( \slashed{D}_M^E)=\jsf \{ \slashed{D}_v^E(x) \}.$$
\end{thm}
This theorem is a direct consequence of Theorem \ref{thm:jsftwisted} since the Dirac operator $\slashed{D}_M^E$ has the same principal symbol as $\pi ^* \slashed{\mf{D}}_B+\slashed{D}^E_f(x)$. As a special case we can show localization of the Riemann-Roch number for prequantum data on the Bohr-Sommerfeld fibers.
\begin{cor}\label{cor:FFY}
Let $(M,\omega)$ be a symplectic manifold of dimension $2n$, $\mathbb{T}^n \to M \to B$ a Lagrangian fiber bundle, and $(L,\nabla^L,h)$ its prequantum data, that is, $(L,h)$ is a hermitian line bundle over $M$ with a connection $\nabla^L$ compatible with $h$ whose curvature form coincides with $-2\pi i\omega$. Then its Riemann-Roch number $RR(M,L):=\ind _0 \slashed{D}_M^{\lambda ^{1/2}\otimes L}$ (where $\lambda$ is the determinant line bundle $\det T^{(1,0)}M$) coincides with the number of fibers $\mathbb{T}^n_x$ on which $\nabla^L$ is trivially flat, which are called the Bohr-Sommerfeld fibers.
\end{cor}
\begin{proof}
The structure of Lagrangian fiber bundles is studied in Section 2 of \cite{Duistermaat1980} as follows.
\begin{itemize}
\item[Fact 1.] There is a lattice bundle $P \subset TB$, which induces a flat metric on $TB$.
\item[Fact 2.] If $P$ is trivial, $M$ is actually a principal $\mathbb{T}^n$-bundle.
\end{itemize}
We denote the $GL(n,\zahl)$-frame bundle of $TB$ by $B'$ and by $M'$ the pull-back of $M$ by the quotient $B' \to B$. It has a canonical symplectic structure and $M' \to B'$ is also a Lagrangian fiber bundle. On account of Fact 2, $M'$ is a principal $\mathbb{T}^n$-bundle on $B'$. We identify the space of constant vector fields on a fiber $M'_x$ with the Lie algebra $\mf{t} ={\rm Lie}(\mathbb{T}^n)$.
The free $GL(n,\zahl)$-action on $B'$ extends to an action on $M'$ preserving its symplectic form and the affine structure on each fiber $M'_x$. Therefore it induces an action on $\mf{t}$ as $g \cdot X_i =\sum _j g_{ij}X_j$ for some fixed basis $X_1,\ldots,X_n$ of $\mf{t}$. Indeed, by considering the canonical trivialization of the tangent bundle $TB' \cong B' \times \underline{\real }^n$ that is compatible with the isomorphism $\mf{t} \cong T_xB'$ given by a fixed almost complex structure $J$, we obtain an isomorphism $\mf{t} \cong T_x B' \cong \real ^n$ that is independent of the choice of $x \in B'$. Under this identification, $g \cdot : \real ^n \cong T_xB' \to T_{g \cdot x}B' \cong \real ^n$ is represented by the matrix $(g_{ij})$. Hence $g$ also acts on $\mf{t}$ as $(g_{ij})$.
Next, we construct some flat connections. The isomorphism $T_VM \cong T_HM \cong \pi^*TB$ induced by $J$ implies the isomorphism $\slashed{\mf{S}}_{\comp,f}(M;\underline{\real}^n) \cong \slashed{S}_\comp(M) \cong \pi ^*\slashed{\mf{S}}_\comp (B)$. Moreover it induces a flat metric on $TM$ that is trivially flat on each fiber $\mbb{T}^n$, and so are the bundles associated with $TM$, in particular $\lambda ^{1/2}$ and $\slashed{S}_\comp^{\lambda ^{1/2}}(M)$. Since the curvature $R^L=-2\pi i \omega$ vanishes when restricted to each (Lagrangian) fiber, $\nabla ^L$ is also fiberwise flat, and the product connection $\nabla =\nabla ^{\slashed{S}_\comp^{\lambda ^{1/2} \otimes L}(M)}$ is trivially flat if and only if $\nabla ^L$ is trivially flat.
Finally we see that $B$, $M$, $V=TB$, $E=\lambda ^{1/2} \otimes L$, and $\nabla ^{\lambda ^{1/2} \otimes L}$ satisfy the assumptions of Theorem \ref{thm:geom}. Hence $\{\nabla _v (x) \}$ forms a family of commuting Fredholm $n$-tuples twisted by $TB$ and the index of the Dirac operator $\slashed{D}_M^{\lambda ^{1/2} \otimes L}$ coincides with its joint spectral flow.
The kernel of $\Delta _f:=\nabla _{e_1}^2 + \cdots + \nabla _{e_n}^2$ is nonzero if and only if $\nabla$, and hence $\nabla ^L$, is trivially flat. It means that the joint spectrum of $\mbk{\nabla (x)}$ crosses zero only on the Bohr-Sommerfeld fibers. It remains to show that the multiplicity of the eigenvalues crossing zero on each Bohr-Sommerfeld fiber is equal to $1$. It follows from the fact in symplectic geometry that a tubular neighborhood of a Lagrangian submanifold is symplectomorphic to a neighborhood of the zero section of its cotangent bundle, and that $T^*\mathbb{T}^n$ is actually the product space $(T^*S^1) ^n$. More details are given in Section 6.4 of \cite{FujitaFurutaYoshida2010}.
\end{proof}
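A minimal illustration is $M=T^2=S^1_t \times S^1_s \to B=S^1_t$ with $\omega =dt \wedge ds$ of total area $1$: for the prequantum line bundle with $\nabla ^L=d-2\pi i t\, ds$ (so that its curvature is $-2\pi i \omega$), the holonomy along the fiber over $t$ is $e^{2\pi it}$, so there is exactly one Bohr-Sommerfeld fiber ($t \in \zahl$); since $\lambda ^{1/2}$ is trivial on $T^2$, this matches $RR(T^2,L)=\int _{T^2}c_1(L)=1$.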
\subsection{Generalized Toeplitz index theorem}
In this section we introduce a generalization of a classical theorem relating the index of Toeplitz operators with the winding numbers.
\begin{defn}
Let $Y$ be an $n=2m-1$-dimensional closed $Spin^c$ manifold. For $\varphi : Y \to U(k)$ the generalized Toeplitz operator $T_\varphi$ is defined by
$$T_\varphi :=Pm_\varphi P : PL^2(Y,\slashed{S}_\comp(Y))^{\oplus k} \ral P L^2(Y,\slashed{S}_\comp (Y))^{\oplus k}$$
where $P$ is the orthogonal projection onto $\overline{\rm span}\mbk{\sigma \mid \slashed{D}\sigma =\lambda \sigma \text{ for some $\lambda \geq 0$}}$.
\end{defn}
\begin{exmp}[$Y=S^1$]
In the case of $Y=S^1=\real /2\pi \zahl$ (and hence $\slashed{S}_\comp(Y)$ associated with the canonical $Spin^c$-structure on it is a trivial bundle), we can identify its Dirac operator with $\frac{1}{i}\frac{d}{dt}$. Hence its spectrum coincides with $\zahl$ and the eigenspaces $E_n$ are the $1$-dimensional complex vector spaces $\comp \cdot e^{int}$. Therefore $PH=\overline{\rm span} \mbk{e^{int} ; n \in \zahl _{\geq 0}}$ and the corresponding generalized Toeplitz operators $T_\varphi$ are nothing but the ordinary ones. Their index is obtained from the winding number as $\ind T_\varphi =-{\rm winding \ } \varphi$.
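For instance, for $\varphi (t)=e^{it}$ the operator $T_\varphi$ is the unilateral shift $e^{int} \mapsto e^{i(n+1)t}$ ($n \geq 0$) on $PH$; it is injective with one dimensional cokernel $\comp \cdot 1$, so $\ind T_\varphi =-1=-{\rm winding \ }\varphi$.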
\end{exmp}
Now we generalize this index theorem for generalized Toeplitz operators in a special case.
Let $\Delta_n=\Delta _n^0 \oplus \Delta _n^1$ be the unique irreducible $\zahl /2$-graded $\Cliff _n$-module and $\gamma$ the grading operator on it. When we have a continuous map $\varphi =(\varphi _0,\ldots, \varphi _n): Y \to S^n$, we obtain an even unitary $\varphi _0 + \gamma c_1 \varphi _1 + \cdots + \gamma c_n \varphi _n$ where $c_i$ ($i=1,\ldots,n$) are the Clifford multiplications by an orthonormal basis $e_1,\ldots,e_n$. For simplicity of notation, we use the same letter $\varphi$ for its restriction to $\Delta_n^0$.
\begin{thm}\label{thm:Toep}
Let $Y$ and $\varphi $ be as above. Then
$$\ind T_\varphi =-\deg (\varphi : Y \to S^n).$$
\end{thm}
\begin{proof}
In \cite{BaumDouglas1981} Baum and Douglas proved a cohomological formula for this index which is analogous to the Atiyah-Singer formula. As a consequence, we have the following equality.
$$\ind T_\varphi =-\ebk{\ch (\varphi) \Td (Y) ,[Y]}.$$
Since $\ch (\varphi)$ is concentrated in degree $n$ (it is the pull-back by $\varphi$ of a generator of $H^n(S^n;\zahl)$), only $\Td _0(Y)=1$ contributes to the pairing, which therefore equals $\deg \varphi$ up to the sign conventions fixed above. Together with the description of the Chern character in Lemma \ref{lem:pair}, this proves Theorem~\ref{thm:Toep}.
\end{proof}
\subsection{Localization of family's APS index and eta-form}\label{section:4.3}
We can also apply our joint spectral flow index theorem for fiber bundles whose fibers are compact manifolds with boundary. A main reference for this section is Melrose-Piazza~\cite{MelrosePiazza1997}.
Let $B$ be a closed $n$-dimensional manifold, $Z \to M \to B$ a smooth fiber bundle over $B$ whose boundary also forms a fiber bundle $\partial Z \to \partial M \to B$. The Riemannian metric $g$ on $TM$ is given by the direct sum decomposition $g_f \oplus \pi ^*g_B$ on $T_VM \oplus T_HM$, where $g_B$ is a Riemannian metric on $TB \cong T_HM$ and $g_f$ is a smooth family of Riemannian metrics on fibers $Z_x$ that are exact $b$-metrics near the boundaries $\partial Z_x$. We assume that there are a $Spin ^c$-vector bundle $V$ on $B$ and a $\zahl/2$-graded complex vector bundle $S$ on $M$ such that the spinor bundle $\slashed{S}_\comp(T_VM \oplus V)$ is isomorphic to $\Cliff (\pi^*V) \hat \otimes S$ as $\Cliff(\pi^*V)$-module bundles. Moreover the fiberwise Dirac operator $\slashed{D}_f$ on it coincides with the Dirac operator $c(v_1)D_{v_1}+\cdots +c(v_n)D_{v_n}$ associated with some $V$-twisted $n$-tuple $\mbk{D_v}$ of fiberwise first order pseudodifferential operators on $E$ that satisfies Condition \ref{cond:commtwist}.
We denote by $H^{1,0}(M,E)$ the fiberwise Sobolev space, the completion of $C^\infty (M,E)$ with respect to the inner product $\ebk{\cdot, \cdot }_{L^2} + \langle \nabla _f^E \cdot , \nabla _f^E \cdot \rangle$ where $\nabla _f^E := p_{T_VM} \circ \nabla ^E$. Then an element in $H^{1,0}(M,E)$ admits a fiberwise boundary trace, and there is a bounded operator
$$\partial :H^{1,0}(M,E) \to L^2(\partial M,E|_{\partial M}); \ \sigma \mapsto \sigma |_{\partial M}.$$
Now we fix a spectral section $P \in C(B;\mbk{\Psi _0(\partial Z_x, E|_{\partial Z_x})}_{x \in B})$, that is, $P$ is a projection and there is a smooth function $R:B \to \real $ such that for any $x \in B$, the condition $D_\partial (x) \sigma =\lambda \sigma$ for the induced boundary operator $D_\partial (x)$ implies $P(x)\sigma =\sigma$ if $\lambda >R(x)$ and $P(x)\sigma =0$ if $\lambda < -R(x)$.
Then this $P$ determines an elliptic boundary condition on each fiber, and
\ma{\slashed{D}_f & : L^2 (M,E) \to L^2 (M,E)\\
\dom \slashed{D}_f &:= \mbk{\sigma \in H^{1,0}(M,E) \mid P ( \partial \sigma )=0}}
is a fiberwise Fredholm self-adjoint operator.
Hence it forms a $V$-twisted continuous family of unbounded commuting Fredholm $n$-tuples $\mbk{D_v(x)}$ parametrized by $B$.
\begin{thm}\label{thm:jsfbdry}
In the above situation, the following formula holds.
\ma{\ind _P(\slashed{D})=\jsf (\mbk{D(x)}) }
\end{thm}
The same proof as for Theorems \ref{thm:jsf} and \ref{thm:jsftwisted} works here, because we deal with the operators directly instead of the topology of their principal symbols. We only remark that in this situation $D_b$ and $D_f(1+D_f^2)^{-1/2}$ commute modulo bounded operators. Furthermore we obtain an analogue of Theorem \ref{thm:jsfopen}.
Now we introduce an application to a geometric problem.
Let $B$ be an $n$-dimensional closed manifold, $V \to B$ a real vector bundle of dimension $n$, and $Y \to N \to B$ a fiber bundle with $\dim Y=n-1$. We assume that $N$ can be embedded into $V$ orientedly as a fiber bundle. Then there is a fiber bundle $Z \to M \to B$ of manifolds with boundary whose fiberwise boundary is isomorphic to $Y \to N \to B$ as a fiber bundle. Now we define the eta-form \cite{BismutCheeger1989} for $N$ by
\ma{\hat{\eta} _P &= \int _0 ^\infty \hat{\eta}_P(t)dt \\
\hat {\eta}_P(t)&=\frac{1}{\sqrt{\pi}}{\rm Str} _{\Cliff _1}\bk{\frac{d\tilde{\mathbb{B}}_t}{dt}e^{-\tilde{\mathbb{B}}^2_t}}}
where $\tilde{\mathbb{B}}_t$ is a deformed $\Cliff_1$-superconnection.
This differential form is closed and is used in the Atiyah-Patodi-Singer index theorem for families.
On the other hand, the canonical metric on $V$ induces a smooth family of exact $b$-metrics on $T_VM$. Therefore the first order differential operators $\partial /\partial v_i$ on $Z_x \subset V_x$ (where $v_1 ,\ldots, v_n$ is a basis of $V_x$) form a $V$-twisted commuting Fredholm $n$-tuple once we fix a spectral section $P$.
\begin{thm}
Let $Z \to M \to B$ and $V$ be as above. If $N$ is orientedly embeddable into $V$, then the de Rham class of its eta-form $\hat{\eta}_P$ lies in the image of $H^n(B;\zahl) \to H^n(B;\real)$. Moreover in that case
$$\int _B \hat {\eta}_P =\ind _P (\slashed{D}_M)= \jsf \mbk{D(x)}$$
holds.
\end{thm}
\begin{proof}
From Theorem \ref{thm:jsfbdry} we have $j_*\mbk{D(x)}=\ch (\ind _P(\slashed{D}_f))$. Now the Atiyah-Patodi-Singer index theorem for families \cite{MelrosePiazza1997} says that $\ch (\ind _P(\slashed{D}_f))=\pi _!(\hat{A}(T_VM)) +\hat{\eta}_P$. In our case $T_VM$ is trivial and hence the first term of the above equality vanishes.
\end{proof}
In particular, in the case of $Y=S^{n-1}$, we get an obstruction for a sphere bundle to be isomorphic to the unit sphere bundle of some vector bundle. It is related to the comparison of the homotopy types of ${\rm Diff}_+(S^{n-1})$ and $SO(n)$, which is the subject of the (generalized) Smale conjecture.
\section{Decomposing Dirac operators}\label{section:5}
Now the converse problem arises: when are geometric Dirac operators ``decomposed'' as Dirac operators associated with commuting Fredholm $n$-tuples? In this section we deal with zeroth order pseudodifferential operators and obtain a complete obstruction in terms of the index, by using the theory of $C^*$-algebra extensions, which is related to the $KK^1$-theory of \cite{Kasparov1980} and to index theory.
We start with a folklore result. Let $T_\varphi $ be a Toeplitz operator associated with $\varphi \in C(S^1)^\times $. Then $T_\varphi $ is not a normal operator in general, and there is a compact perturbation of the pair $(\Re T_\varphi , \Im T_\varphi)$ consisting of mutually commuting self-adjoint operators if and only if $\ind T_\varphi $ is equal to $0$. That is, the index of the operator $\Re T_\varphi +i\Im T_\varphi$ gives a complete obstruction to the existence of mutually commuting self-adjoint operators $A$ and $B$ such that $A -\Re T_\varphi$ and $B-\Im T_\varphi$ are compact. Our purpose in this section is to give an analogue and a generalization of this for the bounded operators associated with Dirac operators.
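This is visible already for $\varphi (t)=e^{it}$, where $T_\varphi $ is the unilateral shift $S$: a direct computation gives $[\Re T_\varphi , \Im T_\varphi ]=\frac{i}{2}[S,S^*]=-\frac{i}{2}P_0$, with $P_0$ the rank one projection onto the constants, so the commutator is compact but, since $\ind S=-1 \neq 0$, no commuting compact perturbation of the pair $(\Re T_\varphi ,\Im T_\varphi)$ exists.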
Before we consider the case of families, we deal with a single Dirac operator. First of all, we assume that its principal symbol is decomposed; this is interpreted as a geometric condition as follows. Let $M$ be a closed $Spin ^c$ manifold and $H_1,\ldots,H_n$ mutually orthogonal odd dimensional subbundles of $TM$ whose direct sum is $TM$.
As is argued in Section \ref{section:4.1}, $\slashed{\mf{S}}_\comp (M ; \underline {\real}^n) := \slashed{S}_\comp (TM \oplus \underline{\real}^n)$ is decomposed as
\ma{\slashed{\mf{S}}_\comp (M ; \underline{\real}^n) \cong \bk{\slashed{S}^0_\comp (H_1) \otimes \cdots \otimes \slashed{S}^0_\comp (H_n)} \hat \otimes \Cliff _n.}
Hereafter we denote $\slashed{S}^0_\comp (M;\underline{\real} ^n):=\slashed{S}^0_\comp (H_1) \otimes \cdots \otimes \slashed{S}^0_\comp (H_n)$. Under this identification the principal symbol of the Dirac-type operator $\slashed{D}^E$ on $\slashed{S}_\comp^E(M;\underline{\real }^n)$ is interpreted as
\ma{\sigma (\slashed{D}^E)= \sum _{i=1}^n \bk{ \sum _{j=1}^{\dim H_i} 1 \otimes \cdots \otimes c(e_{i,j}) \xi _{i,j}\otimes \cdots \otimes 1 } \hat{\otimes} c_i }
where each $\mbk{e_{i,j}}_{j=1,\ldots,\dim H_i}$ is an orthonormal basis on $H_i$ and $\xi _{i,j}:=\ebk{\xi , e_{i,j}}$ are coordinate functions on each cotangent space.
Then we can construct a commuting $n$-tuple at the symbol level. This also works for the Dirac operator $\slashed{D}^E$ twisted by a complex vector bundle $E$.
The Dirac operator $\slashed{D}^E$ is said to be {\it $n$-decomposable} if there is a bounded commuting Fredholm $n$-tuple $(T_1,\ldots,T_n)$ such that each $T_i$ is a zeroth order pseudodifferential operator on $\Gamma (M, \slashed{S}^{E,0}_\comp (M;\underline{\real} ^n))$ whose principal symbol is of the form $\sigma (T_i)=\sum _j1 \otimes \cdots \otimes c(e_{i,j}) \xi _{i,j}\otimes \cdots \otimes 1$.
In that case the bounded operator $\slashed{D}^E(1+(\slashed{D}^E)^2)^{-1/2}$ associated with $\slashed{D}^E$ coincides modulo compact operators with the Dirac operator associated with the bounded commuting Fredholm $n$-tuple $T$.
In fact, $n$-decomposability is a $K$-theoretic property and determined by its index.
\begin{prp}\label{prp:onept}
Let $M$, $H_1,\ldots,H_n$, and $E$ be as above. Then the Dirac operator $\slashed{D}^E$ is $n$-decomposable if and only if $\ind (\slashed{D}^E)=0$.
\end{prp}
\begin{proof}
A decomposition of the principal symbol gives a $*$-homomorphism $\sigma (\slashed{D}^E): C(S^{n-1}) \to A:=\Gamma (S(TM),\End (\pi ^* \slashed{S}_\comp ^{E,0} (M ; \underline{\real} ^n)))$ that maps the coordinate function $x_i$ ($i=1,\ldots,n$) of $\real ^n$, which contains $S^{n-1}$ as the unit sphere, to an element $\sum _j c(e_{i,j}) \xi_{i,j}$. It is well-defined because the square sum $\sum _i ( \sum _j c(e_{i,j}) \xi_{i,j} )^2$ is equal to $1$. Hence we can replace the problem of obtaining a decomposition of $\slashed{D}^E$ with that of obtaining a lift, as is shown in the following diagram by the dotted arrow, of $\sigma (\slashed{D}^E)$.
\[
\xymatrix@C=1em{
&&&C(S^{n-1}) \ar[d]^\varphi \ar@{.>}[dl]& \\
0 \ar[r] & \Psi ^{-1}(\slashed{S}_\comp ^{E,0}(M;\underline{\real} ^n)) \ar@{=}[d] \ar[r] & \Psi ^0 (\slashed{S}_\comp^{E,0}(M ;\underline{\real} ^n)) \ar[r] \ar[d] & A \ar[r] \ar[d]^\tau& 0 \\
0 \ar[r] & \mbb{K}(\mc{H}) \ar[r] & \mbb{B}(\mc{H}) \ar[r] & Q(\mc{H}) \ar[r] & 0
}
\]
where $\mc{H} :=L^2 (M,\slashed{S}_\comp ^{E,0} (M; \underline{\real} ^n))$ and $\Psi ^0(\slashed{S}_\comp ^{E,0}(M;\underline{\real} ^n))$ (resp. $\Psi ^{-1}(\slashed{S}_\comp ^{E,0}(M;\underline{\real} ^n))$) is the norm closure of the space of pseudodifferential operators of order $0$ (resp. $-1$). In terms of extension theory, it means that the extension $\varphi ^*\tau = \tau \circ \varphi$ is trivial. Now, as mentioned above, the theory of $C^*$-algebra extensions is translated into $KK^1$-theory. In particular, a semisplit extension $\varphi$ has a lift after stabilizing by the trivial extension if and only if the $KK^1$-class $[\varphi]$ is zero. Moreover in our case we do not have to care about the stabilization because the Voiculescu theorem \cite{Voiculescu1976} ensures that $\varphi ^* \tau$ absorbs any trivial extension.
In the case that $n$ is odd, it is immediately $0$ because $KK^1(C(S^{n-1}),\mbb{K})$ itself is $0$. On the other hand, $\ind \slashed{D}$ is also $0$ because $\dim M$ is odd.
In the case that $n$ is even, we obtain an integer $\varphi ^* [\tau] \in KK^1(C(S^{n-1}),\mbb{K}) \cong \zahl$ as the Fredholm index of $\tau \circ \varphi (u) \in Q(\mc{H})$ by Theorem 18.10.2 of \cite{Blackadar1998}. Here $u$ is the canonical generator of $KK^1(\comp ,C(S^{n-1})) \cong K_1(C(S^{n-1}))$, whose additive inverse is represented by the family of unitary matrices $u':= \sum c_1c_i x_i \in C(S^{n-1},\End (\Delta _n^0))$ (this is a consequence of Theorem \ref{thm:Toep}). Now $\tau \circ \varphi (u')$ coincides with the principal symbol of the Dirac operator $c_1 \cdot (\slashed{D}^E)^0$ on $\Gamma (M, \slashed{S}^{E,0}(M))$ because $\slashed{S}^0_\comp(M) \cong \slashed{S}_\comp^0(M;\underline{\real}^n) \hat \otimes \Delta _n^0$. Hence the obstruction $\varphi ^*[\tau]$ coincides with $\ind \slashed{D}^E$ up to sign, and the assertion follows.
\end{proof}
We now turn to the case of the family of Dirac operators, which is of our main interest.
Let $Z \to M \to B$ be a fiber bundle and set $n:=\dim B$. We assume that there are $Spin^c$ vector bundles $V_1,\ldots,V_l$ on $B$ and $H_1,\ldots,H_l$ on $M$ such that $\pi^*V_i \otimes H_i$ are also $Spin^c$ and the vertical tangent bundle $T_VM$ is isomorphic to their direct sum $\pi ^*V_1 \otimes H_1 \oplus \cdots \oplus \pi ^*V_l \otimes H_l$. We denote the direct sum $V_1 \oplus \cdots \oplus V_l$ by $V$ and assume $\dim V=n$. Moreover we assume that each $H_i$ is odd dimensional and decomposed as $H_i \cong H_i ^0 \oplus \underline{\real}$. Now, as in Section \ref{section:4.1}, the spinor bundle $\slashed{\mf{S}}_{\comp ,f}(M;V):= \slashed{S}_\comp (T_VM \oplus V)$ is decomposed as
\ma{\slashed{\mf{S}}_{\comp ,f} (M ; V) \cong \bk{\slashed{S}_\comp (\pi ^*V_1 \otimes H_1^0) \otimes \cdots \otimes \slashed{S}_\comp (\pi ^*V_l \otimes H_l^0)} \hat \otimes \Cliff ( \pi ^* V).}
Hereafter we denote $\slashed{S}_{\comp ,f }^{E,0}(M;V):=\slashed{S}_\comp (\pi ^*V_1 \otimes H_1^0) \otimes \cdots \otimes \slashed{S}_\comp (\pi ^*V_l \otimes H_l^0) \otimes E$. The principal symbol of the fiberwise Dirac operator $\slashed{D}_f^E$ on the twisted fiberwise spinor bundle $\slashed{\mf{S}}_{\comp ,f} ^E(M;V):=\slashed{\mf{S}}_{\comp ,f} (M;V) \otimes E$ is also decomposed as a commuting $n$-tuple twisted by $V$. Indeed, for $v=v_1 \oplus \cdots \oplus v_l$, the correspondence
$$\sigma (\slashed{D}^E_f)_v= \sum_j \bk{c(v_1 \otimes e_{1,j})\xi_{ e_{1,j}}} +\cdots +\sum_j \bk{c(v_l \otimes e_{l,j})\xi_{ e_{l,j}}}$$
gives the explicit decomposition. It gives a $*$-homomorphism $\sigma (\slashed{D}^E_f)_v: C(S(V)) \to C(B) \otimes Q(\mc{H})$ that is compatible with $C(B) \subset C(S(V))$ and $C(B) \otimes 1 \subset C(B) \otimes Q(\mc{H})$. In particular, when $V$ is trivial it is reduced to a $*$-homomorphism $\sigma (\slashed{D}_f^E)_v: C(S^{n-1}) \to C(B) \otimes Q(\mc{H})$.
\begin{defn}\label{def:decomp}
The fiberwise Dirac operator $\slashed{D}_f^E$ is said to be {\it $n$-decomposable} if there is a bounded commuting Fredholm $n$-tuple $\mbk{T_v (x)}$ twisted by $V$ such that each $T_v$ is a zeroth order pseudodifferential operator on $\Gamma (\slashed{S}_{\comp ,f}^{E,0} (M;V))$ whose principal symbol is $\sigma (T_v)=\sum_j \bk{c(v_1 \otimes e_{1,j})\xi_{ e_{1,j}}} +\cdots +\sum_j \bk{c(v_l \otimes e_{l,j})\xi_{ e_{l,j}}}$.
\end{defn}
In that case $\slashed{D}^E_f (1+(\slashed{D}^E_f)^2)^{-1/2}$ coincides modulo compact operators with the smooth family of Dirac operators associated with the bounded commuting Fredholm $n$-tuples $\mbk{T_v(x)}$ twisted by $V$. Hence the $K$-class $[\ind \slashed{D}_f^E]$ is in the image of the canonical natural transformation from $\tilde{k}^n(B)$ to $K^n(B)$. Moreover, the index of the Dirac operator $\slashed{D}_M^E$ on $M$ twisted by $E$, which coincides with that of $\pi ^* \slashed{\mf{D}}_B + \slashed{D}^E_f$, can be obtained as the joint spectral flow $\jsf \mbk{T_v(x)}$.
\begin{thm}\label{thm:decomp}
Let $Z \to M \to B$, $V_1,\ldots,V_l$, $H_1,\ldots ,H_l$, and $E$ be as above. Then $\slashed{D}_f^E$ is $n$-decomposable if and only if $\ind (\slashed{D}^E_f)$ is in the image of $K^n(B, B^{(n-1)}) \to K^n(B)$, or equivalently the image of $\tilde{k}^n(B) \to K^n(B)$. In that case, the equality $\ind \slashed{D}^E_M=\jsf \{ \slashed{D}^E_f \}$ holds.
\end{thm}
Here $B^{(n-1)}$ is the $(n-1)$-skeleton of a cellular decomposition of $B$. The image of $K^n(B,B^{(n-1)}) \to K^n(B)$, which is the Atiyah-Hirzebruch filtered $K$-group $F^{n-1}K^n(B)$, is independent of the choice of decomposition and coincides with the image of $\tilde{k}^n(B) \to K^n(B)$ because of the functoriality of $\tilde{k}^* \to K^*$ and the fact that $\tilde{k}^n(B^{(n-1)})=0$.
\begin{remk}
In the proof, except for the last part, the condition that $B$ is an $n$-dimensional closed manifold is not necessary; it suffices that $B$ is a finite CW-complex. Moreover, if $B$ is an $n$-dimensional CW-complex, the last part also holds.
\end{remk}
The proof is divided into some steps. First, we show that $\slashed{D}^E_f$ is locally $n$-decomposable.
\begin{lem}\label{lem:triv}
Let $M=B \times Z$ and $TZ \cong H_1 \oplus \cdots \oplus H_n$. If the index of the fiberwise Dirac operator $\slashed{D}_f^E$ on $\slashed{S}_\comp ^E (M;\real ^n)$ is zero, then it is $n$-decomposable.
\end{lem}
\begin{proof}
As in Proposition \ref{prp:onept}, it suffices to find a lift of the extension $\sigma (\slashed{D}^E_f)_v: C(S^{n-1}) \to C(B) \otimes C(S(TZ)) \subset C(B) \otimes Q(\mc{H})$. It exists when the metrics on the fibers are constant because $\sigma (\slashed{D}_f^E)_v$ is trivial and absorbable by Kasparov's generalized Voiculescu theorem \cite{Kasparov1980}. In the general case, it exists because $\sigma (\slashed{D}_f^E)_v|_{M_y}=u_y(\sigma (\slashed{D}^E_f)_v|_{M_x})u_y^*$ where $u_y: \pi^*\slashed{S}_\comp ^E (M_x;\real ^n) \to \pi^*\slashed{S}_\comp ^E (M_y;\real ^n)$ is the isometry induced from the polar part of the identity map $\id : TM_x \to TM_y$.
\end{proof}
Next we introduce a technique for gluing two decompositions. We can deal with this problem cohomologically by using the notion of Cuntz's quasihomomorphism \cite{Cuntz1983}. The ``difference'' of two lifts $\varphi _0, \varphi _1 : C(S(V)) \to C(B) \otimes \mbb{B}(\mc{H})$ of $\sigma (\slashed{D}^E_v)$ gives an element of the representable $KK$-group \cite{Kasparov1988}
$$[\varphi _0, \varphi _1]:=\lbk{\hat{\mc{H}} \hat \otimes C(B), \pmx{\varphi _0 & 0 \\ 0 & \varphi _1} , \pmx{0 & 1 \\ 1 & 0}} \in \mc{R}KK(B;C(S(V)) , C(B) \otimes \mbb{K}). $$
In particular, in the case that $V$ is trivial, we can replace the representable $KK$-group $\mc{R}KK(B;C(S(V)),C(B))$ by $KK(C(S^{n-1}),C(B) \otimes \mbb{K})$. Then the split exact sequence $ 0 \to C_0(S^{n-1}\setminus \mbk{*}) \to C(S^{n-1}) \xra{p} \comp \to 0$ gives an isomorphism $KK(C(S^{n-1}),C(F)) \cong KK(C_0(S^{n-1} \setminus \mbk{*}) ,C(F)) \oplus KK(\comp , C(F))$. When both $\varphi _0$ and $\varphi _1$ are unital, $[\varphi _0 , \varphi _1] $ corresponds to $[\varphi _0,\varphi _1]|_{C_0(S^{n-1}\setminus \mbk{*})} \oplus 0$ under the above identification because $p^*[\varphi _0, \varphi _1]=[1,1]=0$.
\begin{lem}\label{lem:glue}
Let $F_0,F_1$ be closed subsets of $B$ such that $B=(F_0)^\circ \cup (F_1)^\circ$ and $F:= F_0 \cap F_1$. We assume that $M$ and $E$ are trivial on $F$ and $\sigma (\slashed{D}^E_f)$ has lifts $\varphi _0$ and $\varphi _1$ on $F_0$ and $F_1$. Then the image of $[\varphi _0 , \varphi _1] \in KK(C_0(S^{n-1}\setminus \mbk{*}),\mbb{K} \otimes C(F)) \cong K^{n-1}(F)$ under the boundary map of the Mayer-Vietoris sequence coincides with $[\ind \slashed{D}^E_f] \in K^n(B)$.
\end{lem}
\begin{proof}
From the diagram
\[
\xymatrix@C=1em{
0 \ar[r] &C_0(\mathbb{D} ^n \setminus \mbk{0}) \ar[r] \ar[d]^\iota & C_0(\overline{\mathbb{D}^n} \setminus \mbk{0}) \ar[r] \ar[d] & C(S^{n-1}) \ar[r] \ar@{=}[d] &0 \\
0 \ar[r] &C_0(\mathbb{D} ^n ) \ar[r] & C_0(\overline{\mathbb{D}^n}) \ar[r] & C(S^{n-1}) \ar[r] &0 \\
0 \ar[r] &C_0(\mathbb{D} ^n ) \ar[r] \ar@{=}[u] & C_0(\overline{\mathbb{D}^n} \setminus \mbk{*}) \ar[r] \ar[u] & C_0(S^{n-1} \setminus \mbk{*}) \ar[r] \ar[u] &0, \\
}
\]
we obtain a diagram of $KK$-groups
\[
\xymatrix@C=1em{
KK^1(C_0(\mathbb{D}^n \setminus \mbk{0}),C(F)) \ar[r]_{\partial_1} ^\sim & KK^0(C(S^{n-1}),C(F))\\
KK^1(C_0(\mathbb{D}^n ),C(F)) \ar[u]^{\iota^*} \ar@{=}[d] \ar[r]_{\partial_2} & KK^0(C(S^{n-1}),C(F)) \ar@{=}[u] \ar[d] \\
KK^1(C_0(\mathbb{D}^n),C(F)) \ar[r]_{\partial_3 \ \ \ } ^{\sim \ \ \ } & KK^0(C_0(S^{n-1} \setminus \mbk{*}) ,C(F)).\\
}
\]
Here, for a $C^*$-algebra $A$, the group $KK^1(A ,C(F))$ is canonically isomorphic to $KK(A,\Sigma C(F)) \cong KK(A , C_0(\Sigma F))$. One can see that the boundary maps $\partial _i$ coincide with taking products with $[\id _\Sigma ] \in KK(\Sigma , \Sigma)$.
As a consequence we obtain
$$\iota ^* \partial _3^{-1}[\varphi _0,\varphi _1]=\partial _1 ^{-1}[\varphi _0,\varphi _1] =[\varphi _0 \otimes \id _\Sigma, \varphi _1 \otimes \id _\Sigma].$$
Next we consider the isomorphism between $KK(C_0(\mathbb{D}^n) , C_0(\Sigma F))$ and $KK(\comp , C_0(\Sigma F) \hat \otimes \Cliff _n)$. As is in Section \ref{section:2}, this correspondence is given by taking a product with the canonical generator
$$[C_{\mathbb{D} ^n}]:=\lbk{C_0(\mathbb{D} ^n) \hat \otimes \Cliff _n , 1 , C_{\mathbb{D} ^n}:=\sum x_i \cdot c_i}$$
of $KK(\comp , C_0(\mathbb{D} ^n) \hat \otimes \Cliff _n)$. Restricting to $C_0(\mathbb{D}^n \setminus \mbk{0}) \hat \otimes \Cliff _n$, the operator $C_{\mathbb{D}^n}$ also defines an element $[C_{\mathbb{D}^n \setminus \mbk{0}}]$ in $KK(\comp , C_0(\mathbb{D}^n \setminus \mbk{0}) \hat \otimes \Cliff _n)$. When we regard the topological space $\mathbb{D}^n \setminus \mbk{0}$ as $\Sigma S^{n-1}$, the operator $C_{\mathbb{D}^n}$ is of the form $tC_{S^{n-1}}$ where $C_{S^{n-1}}:=\sum c_i \cdot x_i \in C(S^{n-1}) \hat \otimes \Cliff _n$ and $t$ is the identity function on $(0,1)$. Now the diagram
\[
\xymatrix@C=1.0em@R=1.0em{
KK(C_0(\mathbb{D}^n),C_0(\Sigma F)) \ar[dd]^{\iota ^*} \ar[rd]^{[C_{\mathbb{D}^n}]} &\\
&KK(\comp , C_0(\Sigma F) \hat \otimes \Cliff _n)\\
KK(C_0(\mathbb{D}^n \setminus \mbk{0}) , C_0(\Sigma F)) \ar[ru]^{[C_{\mathbb{D}^n \setminus \mbk{0}}]}& \\
}
\]
commutes. As a consequence, we can compute $[C_{\mathbb{D}^n}] \otimes _{C_0(\mathbb{D}^n)} \partial _3^{-1}[\varphi _0 , \varphi _1]$ by using Proposition 18.10.1 of \cite{Blackadar1998} as follows.
\ma{ &[C_{\mathbb{D}^n}] \otimes _{C_0(\mathbb{D} ^n)} \partial _3 ^{-1}[\varphi _0,\varphi _1]\\
&=[C_{\mathbb{D}^n \setminus \mbk{0}}] \otimes _{C_0(\mathbb{D}^n \setminus \mbk{0})} \iota ^* \partial _3^{-1}[\varphi _0,\varphi _1]\\
&=[tC_{S^{n-1}}] \otimes _{C_0(\Sigma S ^{n-1})} [\varphi _0 \otimes \id_{\Sigma },\varphi _1 \otimes \id _{\Sigma }]\\
&=\lbk{\hat{\mc{H}} _{C_0(\Sigma F)}\hat \otimes \Cliff _n , 1 , \pmx{\varphi _0 (tC_{S^{n-1}}) & 0 \\ 0 & \varphi _1 (tC_{S^{n-1}})} \right. \\
& \left. \ \ \ \ \ \ \ \ \ \ \ + \pmx{1-\varphi _0 (tC_{S^{n-1}})^2 & 0 \\ 0 & 1-\varphi _1 (tC_{S^{n-1}})^2}\pmx{0 & 1 \\ 1 & 0}}\\
&=\lbk{\hat{\mc{H}} _{C_0(\Sigma F)}\hat \otimes \Cliff _n , 1 , \pmx{\varphi _0 (tC_{S^{n-1}}) & 1-\varphi _0 (tC_{S^{n-1}})^2 \\ 1-\varphi _1 (tC_{S^{n-1}})^2 & \varphi _1 (tC_{S^{n-1}})}}\\
&=\lbk{\hat{\mc{H}}_{C_0(\Sigma F)}\hat \otimes \Cliff _n , 1 , T}.
}
Here
\[
T=\mbk{T_t}_{t \in [0,1]}:=
\begin{cases}
\pmx{\varphi _0 ((1-2t)C_{S^{n-1}}) & 1-(1-2t)^2 \\ 1-(1-2t)^2 & \varphi _0 ((1-2t)C_{S^{n-1}})} & \text{($0 \leq t \leq 1/2$),}\\
\pmx{\varphi _0 ((2t-1)C_{S^{n-1}}) & 1-(2t-1)^2 \\ 1-(2t-1)^2 & \varphi _1 ((2t-1)C_{S^{n-1}})} & \text{($1/2 \leq t \leq 1$).}
\end{cases}
\]
Now we claim that this $KK$-class coincides with the one arising from the cycle
$$\lbk{\hat{\mc{H}}_{C_0(\Sigma F)}\hat \otimes \Cliff _n , 1 , t\varphi_0 (C_{S^{n-1}})+(1-t)\varphi _1 (C_{S^{n-1}})}.$$
Indeed, because $T_t - T_{1-t}$ is compact for any $t \in [0,1/2]$, the following family of Fredholm operators, continuous modulo compact operators,
\[
\mf{T}_{s,t}:=
\begin{cases}
T_t & \text{($0 \leq t \leq s/2$),} \\
\frac{t-s/2}{1-s}T_{s/2}+ \frac{1-t-s/2}{1-s}T_{1-s/2} & \text{($s/2 \leq t \leq 1-s/2$),}\\
T_t & \text{($1-s/2 \leq t \leq 1$)}\\
\end{cases}
\]
connects $\mf{T}_1=T$ with
$$\mf{T}_0=\pmx{\varphi _0 (C_{S^{n-1}}) & 0 \\ 0 & t \varphi _0 (C_{S^{n-1}})+ (1-t) \varphi _1(C_{S^{n-1}})}.$$
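To check the endpoints: at $s=1$ the two outer branches cover $[0,1]$, so $\mf{T}_1=T$; at $s=0$ the middle branch covers $[0,1]$ and equals $tT_0+(1-t)T_1$, which is exactly the displayed operator since $T_0=\pmx{\varphi _0 (C_{S^{n-1}}) & 0 \\ 0 & \varphi _0 (C_{S^{n-1}})}$ and $T_1=\pmx{\varphi _0 (C_{S^{n-1}}) & 0 \\ 0 & \varphi _1 (C_{S^{n-1}})}$. At the interior breakpoints $t=s/2$ and $t=1-s/2$ the adjacent branches differ by $T_{s/2}-T_{1-s/2}$, which is compact as noted above.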
Finally we obtain that $[\varphi _0 ,\varphi _1]$ coincides with $[t\varphi _0(C_{S^{n-1}})+(1-t)\varphi _1(C_{S^{n-1}})]$ in $KK(C_0(S^{n-1} \setminus \mbk{*}) , C(F)) \cong K^n(\Sigma F)$. Next we apply to it the boundary map $\delta _{MV}$ of the Mayer-Vietoris exact sequence.
We denote by $I(F_0,F_1;F)$ the space $F_0 \sqcup IF \sqcup F_1$. The boundary map $\delta _{MV}$ is induced from the map $I(F_0,F_1;F) \to (I(F_0,F_1;F),F_0 \cup F_1)$ and excision. Therefore $\delta _{MV}[t\varphi _0(C_{S^{n-1}})+(1-t)\varphi _1(C_{S^{n-1}})]$ is of the form
\[
\begin{cases}
\varphi _0 (C_{S^{n-1}})_x & \text{($x \in F_0$)}\\
t\varphi _0 (C_{S^{n-1}})_x + (1-t)\varphi _1 (C_{S^{n-1}})_x & \text{($(x,t) \in IF$)}\\
\varphi _1 (C_{S^{n-1}})_x & \text{($x \in F_1$)} .\\
\end{cases}
\]
It is a lift of the pull-back of the principal symbol $\sigma (\slashed{D}^E_f)$ by the canonical projection $I(F_0,F_1;F) \to B$, which is a homotopy equivalence. As a consequence the above operator coincides with $\slashed{D}_f^E(1+(\slashed{D}^E_f)^2)^{-1/2}$ modulo compact operators and hence defines the same $KK$-class.
\end{proof}
\begin{lem}\label{lem:zero}
If $[\ind \slashed{D}^E_f] =0 \in K^n(B)$, then $\slashed{D}^E_f$ is $n$-decomposable.
\end{lem}
\begin{proof}
Let $U_1,\ldots,U_m$ be an open cover of $B$ trivializing the fiber bundle $M \to B$ and the vector bundles $V_1,\ldots ,V_l \to B$, such that $M$ is also trivial on $F_i:=\overline{U_i}$. By assumption and Lemma \ref{lem:triv}, $\slashed{D}^E_f$ is $n$-decomposable on each $F_i$.
We start with the case that $B=F_0 \cup F_1$ and set $F:=F_0 \cap F_1$. First, we fix a trivial and absorbable extension $\pi : C(S^{n-1}) \to Q(\mc{H}_\pi)$ of $C(S^{n-1})$ by $\mbb{K}$ and denote by $\pi _A$ the extension of $C(S^{n-1})$ by $A \otimes \mbb{K}$ given by the composition $C(S^{n-1}) \to Q(\mc{H}_\pi) \to Q(\mc{H}_\pi) \otimes A$, for a unital $C^*$-algebra $A$.
Now we choose lifts $\varphi _0$ and $\varphi _1$ of $\sigma (\slashed{D}^E_f)$ on $F_0$ and $F_1$. By Kasparov's generalized Voiculescu theorem, each $\varphi _i$ is approximately unitarily equivalent to $\varphi _i \oplus \pi_{C(F_i)}$. More precisely, there are continuous families of unitaries $u_i : \mc{L}_f^2 (\slashed{S}_\comp ^E(M;V)) \to \mc{L}_f^2 (\slashed{S}_\comp ^E(M;V)) \oplus \mc{H}_\pi \otimes C(B)$ such that $u_i (\varphi _i \oplus \pi_{C(F_i)} )u_i^* \equiv \varphi _i $ modulo compact operators.
According to Lemma \ref{lem:glue}, $\delta_{MV} ([\varphi _0,\varphi _1])=[\ind \slashed{D}^E_f]=0$. Hence, by exactness of the Mayer-Vietoris sequence, we have quasihomomorphisms $[\alpha _i , \beta _i]$ ($i=0,1$) such that $[\alpha _0,\beta _0]|_F-[\alpha _1,\beta _1]|_F=[\varphi _0,\varphi _1]$. Now there are unitaries $v_i$ such that $v_i (\pi_{C(F_i)} \oplus \alpha _i \oplus \alpha _i ^\perp) v_i^*\equiv \pi _{C(F_i)}$ modulo compact operators. We set
$$\psi _i:= u_i(\varphi _i \oplus v_i (\pi _{C(F_i)} \oplus \beta _i \oplus \alpha _i^\perp) v_i^*)u_i^* .$$
Then $[\varphi _i , \psi _i]$ are quasihomomorphisms and $[\varphi _i,\psi _i]=[\alpha _i,\beta _i]$ in $KK(C(S^{n-1}),C(F_i) )$.
Now $[\varphi _0,\psi _0]|_F - [\varphi _1 ,\psi _1]|_F =[\varphi _0, \varphi _1]$, which implies $[\psi _0,\psi _1]|_F=0$. As a consequence, there is a homotopy of quasihomomorphisms $[\Psi_0^t,\Psi _1^t]$ ($t \in [0,1]$) from $C(S^{n-1})$ to $C(F) \otimes \mbb{B}(\mc{H})$ connecting $[\psi _0 |_F , \psi _1|_F]$ and $[\theta , \theta]$ for some $\theta$. Here we use the fact that the extensions $\psi _i|_F$ contain $\pi _{C(F)}$ and hence are absorbable.
Finally we get a homotopy
\[
\tilde \Psi _t:=
\begin{cases}
\Psi _0^{2t} & \text{($0 \leq t \leq 1/2$)} \\
\Psi _1^{2-2t} & \text{($1/2 \leq t \leq 1$)}
\end{cases}
\]
of $*$-homomorphisms from $C(S^{n-1})$ to $C(F) \otimes \mbb{B}(\mc{H})$ connecting $\psi _0$ and $\psi _1$.
Now we denote by $D$ the fiber product of $C^*$-algebras
\[
\xymatrix@C=3.5em@R=1.5em{
D \ar[r] \ar[d] \ar@{}[rd]|\square & C(F) \otimes (\mbb{B}(\mc {H}) \oplus \mbb{B}(\mc{H})) \ar[d] _{\id \otimes (p \oplus p)}\\
C(IF) \otimes Q(\mc{H}) \ar[r]^{{\rm ev}_0 \oplus {\rm ev}_1 \hspace{1.5em}} & C(F) \otimes (Q(\mc{H}) \oplus Q(\mc{H}))
}
\]
and by $\tau$ the extension
$$0 \to C_0(SF) \otimes \mbb{K} \to C(IF) \otimes \mbb{B}(\mc{H}) \to D \to 0.$$
Then $\sigma (\slashed{D}_f^E)_v$ and $(\psi _0 \oplus \psi _1)$ determine a $*$-homomorphism $\sigma :C(S^{n-1}) \to D$. Because the $C^*$-algebra $C(S^{n-1})$ is nuclear, the Choi-Effros theorem~\cite{ChoiEffros1976} implies that the pull-back $\sigma^*\tau$ is an invertible extension and hence defines an element $[\sigma ^*\tau]$ in $KK^1(C(S^{n-1}), C_0(SF) \otimes \mbb{K})$. By construction of $\tilde \Psi$, $\sigma ^*\tau$ is homotopic to the trivial extension $\pi \circ \tilde \Psi $, which implies $[\sigma ^*\tau]=0$. Consequently, $\sigma $ itself has a lift $C(S^{n-1}) \to IC(F) \otimes \mbb{B}(\mc{H})$.
Finally, we obtain a lift $\varphi$ of $\sigma (\slashed{D}_f^E)_v$ on $I(F_0,F_1;F)$. Its pull-back by a continuous section $B \to I(F_0,F_1;F)$ given by a partition of unity is the desired lift of $\sigma (\slashed{D}^E_f)_v$.
In the general case we apply induction on the number of sets in the cover. We assume that $B=F_1 \cup \cdots \cup F_n \cup F_{n+1}$ and set $G_0:=F_1 \cup \cdots \cup F_n$, $G_1:=F_{n+1}$. By the induction hypothesis, we obtain lifts $\varphi _0$ and $\varphi _1$ on $G_0$ and $G_1$. First, we may assume that $V$ is trivial by restricting $\varphi _0$ to the closure of an open neighborhood of $G:=G_0 \cap G_1 \subset G_0$. Now each $\varphi _i$ contains $\pi _{C(G_i)}$ by its construction. Moreover, because $M$ and $V$ are trivial on $IG$ by assumption, we can take a lift of $\sigma$ containing $\pi _{C(IG)}$. Now, the precise assertion obtained from the above argument is that if (1) $M$ and $V$ are trivial on $G$, (2) there are lifts $\varphi _i$ on $C(G_i)$ ($i=0,1$), and (3) each $\varphi_i$ is absorbable (hence contains $\pi _{C(G_i)}$), then there is a lift $\varphi$ on $B$ containing $\pi _{C(B)}$. Hence the induction process works.
\end{proof}
Finally, we prove our main theorem. Here we mention that the above argument is restricted to the case in which the lifts can be taken to consist of invertible operators.
\begin{proof}[Proof of Theorem \ref{thm:decomp}]
We assume that $[\ind \slashed{D}^E_f]$ is in the image of $K^n(B,B^{(n-1)})$. Let $U \subset V$ be an inclusion of small open balls in $B$, $F_0:= U^c$, and $F_1:=\overline{V}$. Then $[\ind \slashed{D}^E_f|_{F_0}]$ and $[\ind \slashed{D}^E_f|_{F_1}]$ are $0$ by assumption, and hence according to Lemma \ref{lem:zero} $\slashed{D}^E_f$ is $n$-decomposable on $F_0$ and $F_1$. Now, because $F:=F_0 \cap F_1$ is homotopy equivalent to $S^{n-1}$, the group $KK(C_0(S^{n-1} \setminus \mbk{*}),C(F))$ is isomorphic to $\tilde{k}^{n-1}(F)=[C_0(\real ^{n-1}),C(F)]$. This implies that there is a $*$-homomorphism $\psi :C_0(S^{n-1} \setminus \mbk{*}) \to C(F) \otimes \mbb{K}$ such that $[\varphi _0,\varphi _1]=\Phi[\psi]$.
Since $\varphi _1$ is absorbable, there is a unitary $u$ from $\mc{H}_{C(F)}$ to $\mc{H}_{C(F)} \oplus \mc{H}_{C(F)}$ such that $u (\varphi _1 \oplus {\rm ev}_* \cdot 1)u^* \equiv \varphi_1$ modulo compact operators. Moreover, by an argument similar to that in the proof of Lemma \ref{lem:zero}, we obtain a lift of $\sigma (\slashed{D}^E_f)$ on $IF$ that coincides with $\varphi _0$ on $F \times \mbk{0}$ and with $u (\varphi _1 \oplus \tilde{\psi}) u^*$ on $F \times \mbk{1}$, where $\tilde \psi$ is a unital extension of $\psi$.
The remaining part is to construct a homotopy connecting $\varphi _1 \oplus {\rm ev}_* \cdot 1$ with $\varphi _1 \oplus \tilde \psi$. This is realized not as a family of $*$-homomorphisms on $C(S^{n-1})$ but as bounded commuting Fredholm $n$-tuples. Let $\iota ^*$ be the canonical $*$-homomorphism $C_0(\overline{\mathbb{D}^n} \setminus \mbk{*}) \to C_0(S^{n-1}\setminus \mbk{*})$. Then we can take a homotopy connecting $\psi \circ \iota ^*$ and $0$ since $\mathbb{D}^n$ is contractible.
Finally, in the same way as in the proof of Lemma \ref{lem:zero}, we obtain a $*$-homomorphism $T$ that makes the following diagram commute.
\[
\xymatrix@C=4em{C(\overline{\mbb{D}}(V)) \ar[r]^{T} \ar[d] & C(B) \otimes \mbb{B}(\mc{H}) \ar[d] \\
C(S(V)) \ar[r]^{\sigma (\slashed{D}_f^E)_v} &C(B) \otimes Q(\mc{H}).}
\]
Now the family $\mbk{T_v}$, $T_v(x):= T(x,v)$, gives a decomposition of $\slashed{D}_f^E$.
\end{proof}
As a concluding remark, we introduce a corollary of Theorem \ref{thm:decomp}.
\begin{cor}
If $\slashed{D}_f^E$ is $n$-decomposable, then $\slashed{D}_f^{E \otimes \pi ^*F}$ is also $n$-decomposable for any complex vector bundle $F$ on $B$. Moreover, in that case the following equality holds:
$$\jsf \{ \slashed{D}^{E \otimes \pi ^*F}_f \}= \dim F \cdot \jsf \{ \slashed{D}^E_f \}.$$
\end{cor}
\begin{proof}
This follows from the fact that connective $K$-theory gives a multiplicative filtration of the $K$-group.
\end{proof}
{\small
\bibliographystyle{jalpha}
\section{Introduction}
\label{intro}
Event causality identification (ECI) aims to identify causal relations between events in texts, which can provide crucial clues for NLP tasks, such as logical reasoning and question answering \cite{girju2003automatic,oh2013question,oh2017multi}. This task is usually modeled as a classification problem, i.e. determining whether there is a causal relation between two events in a sentence. For example in Figure \ref{fig1}, an ECI system should identify two causal relations in two sentences: (1) \textbf{attack} $ \stackrel{cause}{\longrightarrow}$ \textbf{killed} in S1; (2) \textbf{statement} $\stackrel{cause}{\longrightarrow}$ \textbf{protests} in S2.
Most existing methods for ECI heavily rely on annotated training data \cite{mirza2016catena,riaz2014recognizing,hashimoto2014toward,hu2017inferring,gao-etal-2019-modeling}. However, existing datasets are relatively small, which impedes the training of high-performance event causality reasoning models. According to our statistics, the largest widely used dataset, the EventStoryLine Corpus \cite{caselli2017event}, only contains 258 documents, 4316 sentences, and 1770 causal event pairs. Therefore, the lack of data is an essential problem that urgently needs to be addressed for ECI.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.48\textwidth,height=0.10\textheight]{fig1.pdf}
\caption{S1 and S2 are \emph{causal sentences} that contain \emph{causal events}. S3 is produced by EDA based on S1. The dotted line indicates the causal relation.} \label{fig1}
\end{figure}
Up to now, data augmentation is one of the most effective ways to alleviate the data lacking problem. However, most NLP-related augmentation methods are task-independent frameworks that produce new data in a single pass \cite{Zhang2015CharacterlevelCN,Guo2019AugmentingDW,Xie2019UnsupervisedDA}. In these frameworks, data augmentation and the target task are modeled independently. This often leads to a lack of task-related characteristics in the generated data, such as task-related linguistic expression and knowledge. For example, easy data augmentation (EDA) \cite{wei-zou-2019-eda} is the most representative method, relying on lexical substitution, deletion, swapping, and insertion to produce new data.
However, solely relying on such word operations often generates new data that lacks task-related qualities.
As shown in Figure \ref{fig1}, S3, produced by EDA, lacks a linguistic expression of the causal semantics between \emph{kill} and \emph{attack}. Therefore, how to interactively model data augmentation and the target task to generate new data with task-related characteristics is a challenging problem for ECI.
Specific to ECI, we argue that an ideal task-related generated causal sentence needs to possess the following two characteristics. (1)
The two events in the causal sentence need to have a causal relation. We call this property \textbf{\emph{Causality}}. For example, there is usually a causal relation between an \emph{attack} event and a \emph{kill} event, while there is nearly no causal relation between an \emph{attack} event and a \emph{born} event. (2) The linguistic expressions of the causal sentence need to be well-formed to express the causal semantics of the events. We call this property \textbf{\emph{Well-formedness}}, which consists of a) canonical sentence grammar, b) event-related entities with semantic roles (e.g. the \emph{attack} was carried out by the \emph{police} in S1), and c) cohesive words that express complete causal semantics (e.g. \emph{in a} and other words except for events and entities in S1).
To this end, we propose a learnable data augmentation framework for ECI, dubbed \textbf{Learn}able Knowledge-Guided \textbf{D}ata \textbf{A}ugmentation (LearnDA). This framework regards sentence-to-relation mapping (\emph{the target task}, ECI) and relation-to-sentence mapping (\emph{the augmentation task}, sentence generation) as dual tasks and models the mutual relation between them via dual learning. Specifically, LearnDA can use this duality to generate task-related new sentences by learning from identification, and to understand causal semantics more accurately by learning from generation. On the one hand, LearnDA is knowledge guided. It introduces diverse causal event pairs from KBs to initialize the dual generation, which could ensure the \textbf{\emph{causality}} of generated causal sentences. For example, the knowledge of \emph{judgment} $\stackrel{cause}{\longrightarrow}$ \emph{demonstration} from KBs can be used to construct a novel causal sentence, which is also helpful for understanding the causal semantics of \emph{statement} $\stackrel{cause}{\longrightarrow}$ \emph{protests}. On the other hand, LearnDA is learnable. It employs a constrained generative architecture to generate \textbf{\emph{well-formed}} linguistic expressions that express the causal semantics between given events, via iterative learning in the dual interaction. Methodologically, it gradually fills in the missing cohesive words of complete sentences under the constraint of the given events and related entities.
In experiments, we evaluate our model on two benchmarks. We first conduct the standard evaluation and show that our model achieves state-of-the-art performance on ECI. Then we evaluate the main components of LearnDA. Finally, our learnable augmentation framework demonstrates definite advantages over other augmentation methods in generating task-related data for ECI.
In summary, our contributions are as follows:
\begin{itemize}
\item We propose a new learnable data augmentation framework to solve the data lacking problem of ECI. Our framework can leverage the duality between identification and generation via dual learning which can learn to generate task-related sentences for ECI.
\item Our framework is knowledge guided and learnable. Specifically, we introduce causal event pairs from KBs to initialize the dual generation, which could ensure the causality of generated causal sentences. We also employ a constrained generative architecture to gradually generate well-formed causal linguistic expressions of generated causal sentences via iteratively learning in the dual interaction.
\item Experimental results on two benchmarks show that our model achieves the best performance on ECI. Moreover, it also shows definite advantages over previous data augmentation methods.
\end{itemize}
\section{Related Work}
To date, many researches attempt to identify the causality with linguistic patterns or statistical features. For example, some methods rely on syntactic and lexical features \cite{riaz2013toward,riaz2014recognizing}. Some focus on explicit causal textual patterns \cite{hashimoto2014toward,riaz2014depth,riaz2010another,do2011minimally,hidey-mckeown-2016-identifying}. And some others pay attention on statistical causal association and cues \cite{beamer2009using,hu2017inference,hu2017inferring}.
Recently, more attention has been paid to the causality between events. \citet{Mirza2014AnAO} annotated Causal-TimeBank of event-causal relations based on the TempEval-3 corpus. \citet{mirza2014annotating} and \citet{mirza2016catena} extracted event-causal relations with a rule-based multi-sieve approach and improved the performance by incorporating event temporal relations. \citet{mostafazadeh2016caters} annotated both temporal and causal relations in 320 short stories. \citet{caselli2017event} annotated the EventStoryLine Corpus for event causality identification. \citet{dunietz-etal-2017-corpus} presented BECauSE 2.0, a new version of the BECauSE corpus \cite{dunietz-etal-2015-annotating}, covering causal relations and seven other relations. \citet{gao-etal-2019-modeling} modeled document-level structures to identify causality. \citet{ijcai2020-499} identified event causality with mention masking generalization.
Unlike in computer vision, augmentation of text data in NLP is relatively rare \cite{chaudhary2020nlpaugment}. \citet{zuo-etal-2020-knowdis} addressed the data lacking problem of ECI with distantly supervised labeled training data. However, including distant supervision, most existing data augmentation methods for NLP tasks are task-independent frameworks (related work on data augmentation and dual learning is detailed in Appendix B). Inspired by generative methods which try to generate additional training data while preserving the class label \cite{AnabyTavor2019NotED,yang-etal-2019-exploring-pre,Papanikolaou2020DAREDA}, we introduce a new learnable framework for augmenting task-related training data for ECI via dual learning enhanced with external knowledge.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.41\textwidth,height=0.19\textheight]{fig2.pdf}
\caption{Overview of the learnable knowledge-guided dual data augmentation for ECI.} \label{fig2}
\end{figure}
\section{Methodology}
As shown in Figure \ref{fig2}, LearnDA jointly models a knowledge guided sentence generator (input: \emph{event pair and its causal/non-causal relation}, output: \emph{causal/non-causal sentence}) and an event causality identifier (input: \emph{event pair and its sentence}, output: \emph{causal/non-causal relation}) with dual learning. LearnDA iteratively optimizes the identifier and generator to generate task-related training data, and then utilizes the new data to further train the identifier. Therefore, we first present the main idea of dual learning, i.e. the architecture of learnable dual augmentation, including the states, actions, policies, and rewards. Then, we briefly introduce the knowledge guided sentence generator, especially the processes of knowledge guiding and constrained sentence generation. Finally, we describe the event causality identifier and the training processes of LearnDA.
\subsection{Architecture of Learnable Dual Augmentation}
\label{sec:LDAN}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.39\textwidth,height=0.19\textheight]{fig3.pdf}
\caption{The architecture of learnable dual augmentation. \emph{Causal} and \emph{NCausal} represent the causal and non-causal sentence generator respectively. Red parts are the process of \emph{$<$event pair, relation$>$ $\rightarrow$ sentence $\rightarrow$ relation} (primal cycle), while blue parts are the process of \emph{$<$event pair, sentence$>$ $\rightarrow$ relation $\rightarrow$ sentence} (dual cycle). Solid and dashed lines denote the main process and reward feedback direction respectively.} \label{fig3}
\end{figure}
The architecture of learnable dual augmentation is shown in Figure \ref{fig3}. Specifically, \emph{I} denotes the event causality identifier, and \emph{G} denotes the sentence generator, which consists of two independent generators that produce causal and non-causal sentences according to the relation $c$ of the input event pair $ep$.
Generally, \emph{G} generates a sentence $s'$ which expresses the causal or non-causal relation $c$ of the input event pair $ep$. Then it receives the reward $R$ that consists of a semantic alignment reward $R_s$ from itself and a causality reward $R_c$ from \emph{I} (primal cycle). Similarly, \emph{I} identifies the causal or non-causal relation $c'$ of the input event pair $ep$ with its sentence $s$. Then it receives the reward $R$ that consists of a causality reward $R_c$ from itself and a semantic alignment reward $R_s$ from \emph{G} (dual cycle).
\emph{I} and \emph{G} are optimized interactively with dual reinforcement learning. Specifically, for \emph{G}, an action is the generation from relation to sentence, a state is denoted by the representation of the input event pair and its relation, and a policy is defined by the parameters of the generator. For \emph{I}, an action is the identification from sentence to relation, a state is denoted by the representation of the input event pair and its sentence, and a policy is defined by the parameters of the identifier. Inspired by \citet{shen-feng-2020-cdl}, we utilize a probability distribution over actions given states to represent the policies, i.e., the probability distribution of the generation of \emph{G} and the identification of \emph{I}. As aforementioned, we introduce two rewards, the causality ($R_c$) and semantic alignment ($R_s$) rewards, which encourage \emph{G} to generate task-related sentences with feedback from the identifier, while further optimizing \emph{I} with feedback from the generator. The definitions are as follows:
\paragraph{Causality Reward ($R_c$)} If the relation of the input event pair can be clearly expressed by the generated sentence, it will be easier for the identifier to understand. Therefore, we use the causal relation classification accuracy as the causality reward to evaluate the causality of generated sentences, while also tuning and optimizing the identifier itself:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
R_{c}(ep, s) =
\begin{cases}
p(c'|s;\theta_{I})& \text{\emph{Correct classification}}\\
-p(c'|s;\theta_{I})& \text{\emph{Otherwise},}
\end{cases}
\end{equation}
where $\theta_{I}$ is the parameter of \emph{I}, $p(c'|s;\theta_{I})$ denotes the probability of relation classification, $s$ denotes the input sentence and $c'$ is the classified relation.
\paragraph{Semantic Alignment Reward ($R_s$)} We hope that the semantics of the generated sentence is consistent with the relation of the input event pair. Additionally, if the relation of the input event pair can be classified more accurately, the semantics of the newly generated sentence can be considered more consistent with it. Therefore, we measure the semantic alignment by the probability of constructing a sentence whose semantics is consistent with the input relation, and the reward is:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
R_{s}(ep, c) = p(s'|c;\theta_{G}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{G}),
\end{equation}
where $\theta_{G}$ is the parameter of \emph{G}, $c$ is the input relation, $t$ is one of the generated tokens $T_s$ of the generated sentence $s'$, and $p(t|c;\theta_{G})$ is the generation probability of $t$. Specifically, there are two independent generators with different parameters: $\theta_{G}^{c}$ is employed to generate a causal sentence when the input $c$ is a causal relation, and a non-causal sentence is generated via $\theta_{G}^{nc}$ when $c$ is a non-causal relation.
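To make the reward computation concrete, the following is a minimal PyTorch-style sketch; the tensor interfaces (names and shapes) are our illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def causality_reward(p_class, is_correct):
    # R_c (Eq. 1): +p(c'|s) for a correct classification, -p(c'|s) otherwise.
    sign = is_correct.float() * 2.0 - 1.0
    return sign * p_class

def semantic_alignment_reward(token_probs):
    # R_s (Eq. 2): mean generation probability p(t|c) over the tokens T_s of s'.
    return token_probs.mean(dim=-1)
\end{verbatim}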
\subsection{Knowledge Guided Sentence Generator}
As shown in Figure \ref{fig4}, knowledge guided sentence generator (KSG) first introduces diverse causal and non-causal event pairs from KBs for \emph{causality}. Then, given an event pair and its causal or non-causal relation, it employs a constrained generative architecture to generate new \emph{well-formed} causal/non-causal sentences that contain them.
\begin{table*}[h] \footnotesize
\centering
\scalebox{0.90}{
\begin{tabular}{|m{1.5cm}|m{9cm}|m{5.5cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Knowledge}} & \multicolumn{1}{c|}{\textbf{How to extract event pair}} & \multicolumn{1}{c|}{\textbf{Why causal or non-causal}} \\ \hline
\multicolumn{3}{|c|}{\textbf{Lexical knowledge expanding}} \\ \hline
\multicolumn{1}{|c|}{\textbf{WordNet}} & 1) Extracting the synonyms and hypernyms from WordNet of each event in $ep$. 2) Assembling the items from the two groups of two events to generate causal/non-causal event pairs. & Items in each group are the synonyms and hypernyms of the annotated causal/non-causal event pairs. \\ \hline
\multicolumn{1}{|c|}{\textbf{VerbNet}} & 1) Extracting the words from VerbNet under the same class as each event in $ep$. 2) Assembling the items from the two groups of two events to generate causal/non-causal event pairs. & Items in each group are in the same class of the annotated causal/non-causal event pairs. \\ \hline
\multicolumn{1}{|c|}{\textbf{e.g.}} & \multicolumn{2}{c|}{$<(killed,attack),causal>\Longrightarrow kill \stackrel{Synonyms}{\longrightarrow}hurt$, $attack \stackrel{Synonyms}{\longrightarrow}onrush\Longrightarrow <(hurt,onrush),causal>$} \\
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\emph{Original sentence}: Kimani Gray, a young man who likes football, was killed in a police attack shortly after a tight match.} \\ \hline
\multicolumn{3}{|c|}{\textbf{Connective knowledge introducing}} \\ \hline
\textbf{FrameNet PDTB2} & 1) Extracting causal/non-causal connectives from FrameNet\footnote{Frame with types of \emph{Reasoning}, \emph{Causation}, \emph{Causation\_scenario}, \emph{Reason}, \emph{Triggering} and \emph{Explaining\_the\_facts}.} and PDTB2. 2) Extracting any two events connected by causal/non-causal connectives on KBP corpus to obtain causal/non-causal event pairs and original sentences respectively. & Introduced event pairs are connected by causal/non-causal connectives. \\ \hline
\multicolumn{1}{|c|}{\textbf{e.g.}} & \multicolumn{2}{c|}{\textbf{Looting} \emph{because} someone \textbf{beat up} someone, like the Travon Martin case. $\stackrel{because}{\Longrightarrow}<(loot,beat\_up),causal>$} \\
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\emph{Original sentence}: Looting because someone beat up someone, like the Travon Martin case.} \\ \hline
\end{tabular}}
\caption{Extracting causal and non-causal event pairs from multiple knowledge bases.}
\label{tab1}
\end{table*}
\paragraph{Knowledge Guiding}
\label{sec:KG}
\label{sec:KSG}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.42\textwidth,height=0.17\textheight]{fig4.pdf}
\caption{Flow diagram of the knowledge guided sentence generator (KSG). We take causal sentence generation via lexical knowledge expanding as an example.} \label{fig4}
\end{figure}
KSG introduces event pairs that are probabilistically causal or non-causal from multiple knowledge bases in two ways: (1) \emph{Lexical knowledge expanding}: expanding annotated event pairs via external dictionaries, such as WordNet \cite{miller1995wordnet} and VerbNet \cite{schuler2005verbnet}; (2) \emph{Connective knowledge introducing}: introducing event pairs from external event-annotated documents (the KBP corpus) assisted with FrameNet \cite{Baker1998TheBF} and the Penn Discourse Treebank (PDTB2) \cite{pdtb2008pdtb}. Table \ref{tab1} illustrates how event pairs are extracted from these knowledge bases. Then, inspired by \citet{bordes2013translating}, we filter the extracted event pairs by converting them into triples $<$$e_i$, causal/non-causal, $e_j$$>$ and calculating the causal distance by maximizing $L$ in a causal representation space:
\begin{equation}\footnotesize
L=\sum_{(e_i,e_j) \in T} \sum_{(e'_i,e'_j) \in T'} [\lambda + d(\bm{e'_i},\bm{e'_j}) - d(\bm{e_i},\bm{e_j})]_{+},
\end{equation}
where $T$ and $T'$ are the causal and non-causal triple sets respectively, and $\bm{e}$ is the representation of an event. After training, the higher the probability of a causal relation, the shorter the distance between the two events; we therefore sort event pairs in ascending order of distance. Finally, we keep the top and bottom $\alpha$\% of the sorted event pairs to obtain the causal and non-causal event pair sets for generation.
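The filtering step can be sketched as follows; this is a simplified PyTorch illustration, and the variable names are ours.
\begin{verbatim}
import torch

def margin_objective(causal, non_causal, lam=1.0):
    # L (Eq. 3): hinge over all pairs of causal triples (from T) and
    # non-causal triples (from T'); embeddings have shape (N, 2, H).
    d_c = torch.norm(causal[:, 0] - causal[:, 1], dim=-1)           # d(e_i, e_j)
    d_nc = torch.norm(non_causal[:, 0] - non_causal[:, 1], dim=-1)  # d(e'_i, e'_j)
    return torch.clamp(lam + d_nc.unsqueeze(0) - d_c.unsqueeze(1), min=0).sum()

def keep_extremes(pairs, dists, alpha=0.3):
    # Sort by causal distance; the shortest alpha% form the causal set and
    # the longest alpha% the non-causal set (alpha = 30% in our setting).
    order = sorted(range(len(pairs)), key=lambda i: dists[i])
    k = int(alpha * len(pairs))
    return [pairs[i] for i in order[:k]], [pairs[i] for i in order[-k:]]
\end{verbatim}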
\paragraph{Constrained Sentence Generator} Given an event pair, constrained sentence generator produces a well-formed sentence that expresses its causal or non-causal relation in three stages: (1) \emph{assigning event-related entities} ensures the logic of the semantic roles of events, (2) \emph{completing sentences} ensures the completeness of causal or non-causal semantic expression, (3) \emph{filtering sentences} ensures the quality and diversity of generated sentences.
\emph{\textbf{Assigning Event-related Entities.}} Event-related entities play different semantic roles of events in sentences, which is an important part of event-semantic expression. Hence, as shown in Figure \ref{fig4}, given an event pair, we first assign logical entities to the input events to guarantee the logic of semantic roles in the new sentences; for example, \emph{gang} is a logical entity as the body of the event \emph{onrush}.
Logically, entities of the same type play the same semantic roles in similar events. Moreover, as shown in Table \ref{tab1}, there is a corresponding original sentence for each extracted event pair.
Therefore, in the new sentence, we assign the most similar entity of the same type from a candidate set\footnote{We collect entities from annotated data and the KBP corpus.} for each entity in the original sentence. For example, we assign \emph{gang} for \emph{onrush} in the new sentence, which is similar to the \emph{police} related to \emph{attack} in the original sentence. Specifically, we put the candidate entities in the same position in the original sentence to obtain their BERT embeddings. Then we select entities via the cosine similarity between their embeddings: $\mathcal{E}(ent) = \frac{1}{|ent|}\sum_{w \in ent}\mathcal{E}(w)$, where $ent$ is the entity and $\mathcal{E}(w)$ is the BERT embedding of the word $w$.
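A rough sketch of this selection is given below; for simplicity it assumes one subtoken per word (glossing over wordpiece alignment), and the function names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = BertModel.from_pretrained("bert-base-uncased")

def in_context_embedding(words, span):
    # Mean BERT embedding of the entity occupying `span` (word indices).
    ids = tok(" ".join(words), return_tensors="pt")
    with torch.no_grad():
        h = enc(**ids).last_hidden_state[0]
    return h[1 + span[0]:1 + span[1]].mean(dim=0)   # offset 1 for [CLS]

def assign_entity(orig_words, span, candidates):
    # Put each candidate in the original entity's position and pick the one
    # whose in-context embedding is closest (cosine) to the original entity.
    ref = in_context_embedding(orig_words, span)
    def sim(cand):
        w = cand.split()
        new = orig_words[:span[0]] + w + orig_words[span[1]:]
        emb = in_context_embedding(new, (span[0], span[0] + len(w)))
        return F.cosine_similarity(ref, emb, dim=0).item()
    return max(candidates, key=sim)
\end{verbatim}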
\emph{\textbf{Completing Sentences.}} A well-formed sentence requires a complete linguistic expression to express the causal or non-causal semantics. Therefore, we complete sentences by filling the cohesive words between given events and assigned entities with masked BERT \cite{devlin-etal-2019-bert}. All words except events and entities are regarded as cohesive words. Specifically, we insert a certain number of the special token [MASK] between events and entities, and then predict the [MASK]\footnote{The inserted [MASK] is 1.2 times the number of words between events and entities in the original sentence.} tokens as new words. As shown in Figure \ref{fig4}, we fill cohesive tokens via two independent generators to express causal and non-causal semantics according to the relation of the given events. For example, \emph{in a}, which guides a causal reading, is filled in by the causal generator.
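A minimal sketch of the filling step is shown below; it uses single-pass greedy prediction for brevity, whereas the actual generator fills the masks gradually, and the checkpoint path is hypothetical.
\begin{verbatim}
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
gen = BertForMaskedLM.from_pretrained("path/to/causal-generator")  # hypothetical

def complete(template):
    # Predict every [MASK] between the given events/entities as cohesive words.
    inputs = tok(template, return_tensors="pt")
    with torch.no_grad():
        logits = gen(**inputs).logits
    ids = inputs["input_ids"][0].clone()
    pos = (ids == tok.mask_token_id).nonzero(as_tuple=True)[0]
    ids[pos] = logits[0, pos].argmax(dim=-1)
    return tok.decode(ids[1:-1])   # drop [CLS] and [SEP]

# e.g. complete("Kimani Gray was hurt [MASK] [MASK] gang onrush .")
\end{verbatim}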
\emph{\textbf{Filtering Sentences.}} Inspired by \citet{yang-etal-2019-exploring-pre}, we design a filter to select new sentences that are balanced between high quality and high diversity with two key factors: 1) \textbf{Perplexity} (PPL): we take the average probability of the filled cohesive words in the new sentence $s'$ as its perplexity: $PPL(s') = \frac{1}{|T(s')|} \sum_{t \in T(s')} P(t)$, where $T$ is the set of filled cohesive words.
2) \textbf{Distance} (DIS): we calculate the cosine similarity between the generated sentence $s'$ and the annotated data $D_m$ as its distance:
$DIS(s',D_m)=\frac{1}{|D_m|} \sum_{s \in D_m} \frac{\mathcal{E}(s') \cdot \mathcal{E}(s)}{\lVert\mathcal{E}(s')\rVert \, \lVert\mathcal{E}(s)\rVert}$, where $D_m$ is a set of $m$ randomly selected annotated sentences and $\mathcal{E}$ is the BERT sentence representation of the [CLS] token. A new sentence should have both an appropriately high PPL, which indicates the quality of generation, and an appropriately high DIS, which indicates the difference from the original sentences. Therefore, we select the top $\beta$\% of the newly generated sentences according to $Score$ for the further training of the identifier as follows: $Score(s')=\mu PPL(s') + (1- \mu) DIS(s',D_m)$, where $\mu$ is a hyper-parameter.
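This selection can be sketched as follows, assuming the PPL values and [CLS] embeddings have been precomputed; the names are illustrative.
\begin{verbatim}
import random
import torch
import torch.nn.functional as F

def select(new_sents, ppl, cls_embs, labeled_embs, mu=0.2, beta=0.5, m=32):
    # Score = mu * PPL + (1 - mu) * DIS; keep the top beta% of the generated
    # sentences (mu = 0.2 and beta = 50% in our setting).
    ref = labeled_embs[random.sample(range(len(labeled_embs)), m)]  # D_m
    dis = F.cosine_similarity(cls_embs.unsqueeze(1),
                              ref.unsqueeze(0), dim=-1).mean(dim=1)
    score = mu * ppl + (1 - mu) * dis
    keep = torch.argsort(score, descending=True)[: int(beta * len(new_sents))]
    return [new_sents[i] for i in keep.tolist()]
\end{verbatim}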
\subsection{Training of LearnDA for ECI}
\label{sec:Training}
We briefly describe the training processes of LearnDA for ECI, including the pre-training of generator and identifier, the dual reinforcement training, and the further training of identifier.
\begin{algorithm}[t] \footnotesize
\caption{Dual Reinforcement Training of $\mathcal{G}$ and $\mathcal{I}$.}
\begin{algorithmic}[1]
\Require A set of knowledge guided event pairs \{($ep$,$s$,$c$)\}
A pre-trained generator $\mathcal{G}$ and identifier $\mathcal{I}$
\Ensure Early stop on the development set according to $\mathcal{I}$.
\Function {Primal Cycle}{}
\For{event pair $(ep_i,s_i,c_i)$ in batch}
\State Generator generates the sentence $s'_i$ of $ep_i$;
\State Identifier re-predicts the causality $c^*_i$ of $ep_i$;
\State Computing the reward as:
\State $\quad R_{primal}^s = \lambda R_s(ep_i,c_i)+(1-\lambda) R_c(ep_i,s'_i)$.
\State Computing the stochastic gradient of $\theta_{\mathcal{G}}$:
\State $\quad \nabla_{\mathcal{G}} += R_{primal}^s \cdot \nabla_{\theta_{\mathcal{G}}} L_{G}(ep_i, c_i)$.
\EndFor
\State Model batch updates: $\theta_{\mathcal{G}} \leftarrow \theta_{\mathcal{G}} + \eta \cdot \nabla_{\mathcal{G}}$
\EndFunction
\\
\Function {Dual Cycle}{}
\For{event pair $(ep_i,s_i,c_i)$ in batch}
\State Identifier predicts the causality $c'_i$ of $ep_i$;
\State Generator re-generates the sentence $s^*_i$ of $ep_i$;
\State Computing the reward as:
\State $\quad R_{dual}^s = \gamma R_c(ep_i,s_i)+(1-\gamma) R_s(ep_i,c'_i)$.
\State Computing the stochastic gradient of $\theta_{\mathcal{I}}$:
\State $\quad \nabla_{\mathcal{I}} += R_{dual}^s \cdot \nabla_{\theta_{\mathcal{I}}} L_{I}(ep_i, s_i)$.
\EndFor
\State Model batch updates: $\theta_{\mathcal{I}} \leftarrow \theta_{\mathcal{I}} + \eta \cdot \nabla_{\mathcal{I}}$
\EndFunction
\end{algorithmic}
\label{alg1}
\end{algorithm}
\paragraph{Event Causality Identifier}
\label{sec:detector} First of all, we formulate event causality identification as a sentence-level binary classification problem. Specifically, we design a classifier based on BERT \cite{devlin-etal-2019-bert} to build our identifier. The input of the identifier is the event pair $ep$ and its sentence $s$. Next, we take the concatenation of manually designed features (the same lexical, causal potential, and syntactic features as \citet{gao-etal-2019-modeling}) and the two event representations as the input of the top MLP classifier. Finally, the output is a binary vector that predicts the causal/non-causal relation of the input event pair $ep$.
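A schematic of the identifier is given below; the hand-crafted features are abstracted as a flat vector of size \texttt{n\_feat}, and the names are illustrative rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
from transformers import BertModel

class Identifier(nn.Module):
    # BERT encodes (ep, s); the two event representations plus the designed
    # features feed a top MLP that outputs causal / non-causal logits.
    def __init__(self, n_feat, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.mlp = nn.Sequential(nn.Linear(2 * hidden + n_feat, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, input_ids, attention_mask, e1_pos, e2_pos, feats):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        b = torch.arange(h.size(0))
        e1, e2 = h[b, e1_pos], h[b, e2_pos]   # event token representations
        return self.mlp(torch.cat([e1, e2, feats], dim=-1))
\end{verbatim}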
\paragraph{Pre-training} We pre-train the identifier and the generators on labeled data before dual reinforcement training. On the one hand, we train the identifier via the cross-entropy objective function of the relation classification. On the other hand, for the generators, we keep the events and entities in the input sentences, replace the remaining tokens with the special token [MASK], and then train them via the cross-entropy objective function to re-predict the masked tokens. Specifically, the causal generator and the non-causal generator are pre-trained on causal and non-causal labeled sentences respectively.
\paragraph{Dual Reinforcement Training} As shown in Algorithm \ref{alg1}, we interactively optimize the generator and identifier by dual reinforcement learning. Specifically, we maximize the following objective functions:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
L_{G}(ep, c) =
\begin{cases}
p(s'|c;\theta_{G}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{G}) \\
p(s'|c;\theta_{NG}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{NG}),
\end{cases}
\end{equation}
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
L_{I}(ep, s) = p(c'|s;\theta_{I}),
\end{equation}
where $\theta_{G}$ and $\theta_{NG}$ are the parameters of the causal and non-causal sentence generators respectively (written $\theta_{G}^{c}$ and $\theta_{G}^{nc}$ above), and $T_s$ is the set of masked tokens. Finally, after dual data augmentation, we utilize the generated sentences to further train the dual-trained identifier via the cross-entropy objective function of relation classification.
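A single primal-cycle update of Algorithm \ref{alg1} reduces to a REINFORCE-style step; the following minimal sketch assumes the rewards and token log-probabilities are computed as above.
\begin{verbatim}
import torch

def primal_step(log_p_tokens, r_s, r_c, optimizer, lam=0.5):
    # Generator update (Algorithm 1, primal cycle): gradient = R * grad log p,
    # with R = lambda * R_s + (1 - lambda) * R_c held fixed (detached).
    reward = (lam * r_s + (1.0 - lam) * r_c).detach()
    loss = -(reward * log_p_tokens.mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}
The dual cycle updates the identifier symmetrically, with $R = \gamma R_c + (1-\gamma) R_s$.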
\section{Experiments}
\subsection{Experimental Setup}
\paragraph{Dataset and Evaluation Metrics}
Our experiments are conducted on two main benchmark datasets: (1) \textbf{EventStoryLine} v0.9 (ESC) \cite{caselli2017event}, described above; and (2) \textbf{Causal-TimeBank} (Causal-TB) \cite{Mirza2014AnAO}, which contains 184 documents, 6813 events, and 318 causal event pairs. Following previous methods, we use the last two topics of ESC as the development set for both datasets. For evaluation, we adopt Precision (P), Recall (R), and F1-score (F1) as evaluation metrics. We conduct 5-fold and 10-fold cross-validation on ESC and Causal-TB respectively, the same as previous methods, to ensure comparability.
All the results are the average of three independent experiments.
\paragraph{Parameter Settings} In our implementation, both the identifier and the generators are built on the BERT-Base architecture\footnote{\url{https://github.com/google-research/bert}}, which has 12 layers, 768 hidden units, and 12 heads. We set the learning rates of generator pre-training, identifier pre-training/further training, and dual reinforcement training to 1e-5, 1e-5, and 1e-7 respectively. We set the ratio of the augmented data used for training to the labeled data, $\alpha$, $\beta$, $\mu$, $\lambda$, and $\gamma$ to 1:2, 30\%, 50\%, 0.2, 0.5, and 0.5 respectively, tuned on the development set. We apply early stopping and the SGD gradient strategy to optimize all models. We also adopt a negative sampling rate of 0.5 for training the identifier, owing to the sparseness of positive examples. (See Appendix D for more details.)
\paragraph{Compared Methods} We compare with previous state-of-the-art methods. For ESC: 1) \textbf{LSTM} \cite{cheng2017classifying}, a dependency-path-based sequential model that models the context between events to identify causality; 2) \textbf{Seq} \cite{choubey2017sequential}, a sequence model that explores complex human-designed features for ECI; 3) \textbf{LR+} and \textbf{ILP} \cite{gao-etal-2019-modeling}, document-level models that adopt document structures for ECI. For Causal-TB: 1) \textbf{RB}, a rule-based system; 2) \textbf{DD}, a data-driven machine-learning-based system; 3) \textbf{VR-C}, a verb-rule-based model with data filtering and gold causal-signal enhancement. These models are designed by \citet{Mirza2014AnAO,DBLP:journals/corr/Mirza16} for ECI.
Since our method is built on BERT, we also construct BERT-based methods: 1) \textbf{BERT}, a BERT-based baseline, our basic proposed event causality identifier. 2) \textbf{MM} \cite{ijcai2020-499}, the BERT-based SOTA method with mention masking generalization. 3) \textbf{MM+$Aug$}, MM further re-trained with our dual augmented data. 4) \textbf{KnowDis} \cite{zuo-etal-2020-knowdis}, which improved the performance of ECI with distantly labeled training data; we compare with it to illustrate the quality of our generated ECI-related training data. 5) \textbf{MM}+$ConceptAug$: to make a fair comparison, we introduce causal-related events from ConceptNet, as employed by MM, and generate new sentences via KnowDis and LearnDA to further re-train MM (see Appendix C for details). Finally, \textbf{LearnDA}$_{Full}$ denotes our full model: the dual-trained identifier further trained on dual augmented data.
\subsection{Our Method vs. State-of-the-art Methods}
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{lccc}
\textbf{Methods} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline
\multicolumn{4}{c}{\textbf{ESC}} \\ \hline
LSTM \cite{cheng2017classifying} & 34.0 & 41.5 & 37.4 \\
Seq \cite{choubey2017sequential} & 32.7 & 44.9 & 37.8 \\
LR+ \cite{gao-etal-2019-modeling} & 37.0 & 45.2 & 40.7 \\
ILP \cite{gao-etal-2019-modeling} & 37.4 & 55.8 & 44.7 \\
BERT & 36.1 & 56.0 & 43.9 \\
KnowDis \cite{zuo-etal-2020-knowdis} & 39.7 & 66.5 & 49.7 \\
MM \cite{ijcai2020-499} & 41.9 & 62.5 & 50.1 \\ \hline
MM+${ConceptAug}$ (\textbf{Ours}) & 41.2 & 66.5 & 50.9* \\
MM+${Aug}$ (\textbf{Ours}) & 41.0 & 69.3 & 51.5* \\
\textbf{LearnDA}$_{Full}$ (\textbf{Ours}) & \textbf{42.2} & \textbf{69.8} & \textbf{52.6*} \\ \hline
\multicolumn{4}{c}{\textbf{Causal-TB}} \\ \hline
RB \cite{Mirza2014AnAO} & 36.8 & 12.3 & 18.4 \\
DD \cite{Mirza2014AnAO} & 67.3 & 22.6 & 33.9 \\
VR-C \cite{DBLP:journals/corr/Mirza16} & \textbf{69.0} & 31.5 & 43.2 \\
BERT & 38.5 & 43.9 & 41.0 \\
MM \cite{ijcai2020-499} & 36.6 & 55.6 & 44.1 \\
KnowDis \cite{zuo-etal-2020-knowdis} & 42.3 & 60.5 & 49.8 \\ \hline
MM+${ConceptAug}$ (\textbf{Ours}) & 38.8 & 59.2 & 46.9* \\
MM+${Aug}$ (\textbf{Ours}) & 39.2 & 61.9 & 48.0* \\
\textbf{LearnDA}$_{Full}$ (\textbf{Ours}) & 41.9 & \textbf{68.0} & \textbf{51.9*} \\ \hline
\end{tabular}
\caption{Results on event causality identification. * denotes a significance test at the level of 0.05.}
\label{tab2}
\end{table}
Table \ref{tab2} shows the results of ECI on EventStoryLine and Causal-TimeBank. From the results:
1) Our LearnDA$_{Full}$ outperforms all baselines and achieves the best performance (52.6\%/51.9\% F1), outperforming the non-BERT (ILP/VR-C) and BERT-based (MM/KnowDis) state-of-the-art methods by margins of 7.9\%/8.7\% and 2.5\%/2.1\% respectively, which justifies its effectiveness. Moreover, BERT-based methods demonstrate high recall values, which benefit from more training data and their event-related guided knowledge.
2) Comparing KnowDis with LearnDA$_{Full}$, we note that training data generated by LearnDA is more helpful to ECI than distant supervision with external knowledge (+2.9\%/+2.1\%). This shows that LearnDA can generate more ECI-related data.
3) Comparing MM+$ConceptAug$ with MM, with the same knowledge base, our dual augmented data can further improve the performance (+0.8\%/+2.8\%), which illustrates that LearnDA can make more effective use of external knowledge by generating task-related training data.
4) Comparing MM+${Aug}$ with MM, we note that training with our dual augmented data can improve the performance by 1.4\%/3.9\%, even though MM is designed on BERT-Large (LearnDA is constructed on BERT-Base) and also introduces external knowledge. This indicates that the augmented data generated by our LearnDA can effectively alleviate the data lacking problem of ECI.
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{cccc}
\multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F}} \\ \hline
\multicolumn{1}{l}{BERT (Our basic identifier)} & 36.1 & 56.0 & 43.9 \\
\multicolumn{1}{l}{BERT$_{OrgAug}$} & 36.6 & 59.7 & 45.4* \\
\multicolumn{1}{l}{BERT$_{DualAug}$} & 37.8 & 65.6 & 48.0* \\
\multicolumn{1}{l}{LearnDA$_{Dual}$} & 36.8 & 63.0 & 46.5* \\
\multicolumn{1}{l}{LearnDA$_{DualAug-w/o.KB}$} & 37.5 & 67.0 & 48.1* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.intro}$} & 39.0 & 66.0 & 49.0* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.verbnet}$} & 39.4 & 66.7 & 49.5* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.wordnet}$} & 39.6 & 67.6 & 49.9* \\
\multicolumn{1}{l}{LearnDA$_{Full}$} & 42.2 & 69.8 & \textbf{52.6*} \\ \hline
\end{tabular}
\caption{Ablation results on event causality identification on ESC. * denotes a significance test at the level of 0.05. BERT$_{OrgAug}$ and BERT$_{DualAug}$ denote BERT further trained on non-dual and dual augmented data respectively; LearnDA$_{Dual}$ denotes our identifier trained only by dual learning without further training; LearnDA$_{DualAug-w/o.KB}$ denotes LearnDA$_{Dual}$ further trained on dual augmented data without knowledge guiding; LearnDA$_{DualAug-w/.<kb>}$ denotes LearnDA$_{Dual}$ further trained on dual augmented data guided by knowledge base $kb$.}
\label{tab3}
\end{table}
\subsection{Effect of Learnable Dual Augmentation}
We analyze the effect of the learnable dual augmentation for event causality identification. 1) \emph{For the identifier}. Comparing LearnDA$_{Dual}$ with BERT in Table \ref{tab3}, we note that the performance of the proposed identifier is improved (+2.6\%) after dual training alone with the same labeled data. This indicates that the identifier can learn more informative expressions of causal semantics from generation with dual learning. 2) \emph{For the generator}. Comparing BERT$_{DualAug}$ with BERT$_{OrgAug}$ in Table \ref{tab3}, we note that the dual augmented data is of higher quality and more helpful to ECI (+2.6\%). This indicates that the generator can generate more ECI task-related data by learning from the identifier with dual learning.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.35\textwidth,height=0.18\textheight]{fig5.pdf}
\caption{The impact of the training rounds of dual learning on event causality identification on ESC. In each round, we generate new training data by the generator at the current round. The performance is achieved by further training the identifier at the current round with the aforementioned newly generated data.} \label{fig5}
\end{figure}
Figure \ref{fig5} illustrates the learnability of our LearnDA. Specifically, as the number of training rounds of dual learning increases, the generated data gradually \emph{learns} task-related information, further improving the performance accordingly.
\subsection{Effect of Knowledge Guiding}
Table \ref{tab3} also illustrates the effect of knowledge guiding on ECI depending on different knowledge bases. 1) Comparing LearnDA$_{Full}$ with LearnDA$_{DualAug-w/o.KB}$, we note that the augmented data guided by external knowledge can further improve the performance of ECI. 2) Specifically, \emph{lexical expanding} and \emph{connective introducing} (Sec \ref{sec:KG}) can both make the representation of causal relations more generalized, further making it easier for the identifier to understand the causality. 3) Moreover, the expanding is more effective than the introducing, because the former brings a wider range of effective knowledge and thus provides better guidance from causal-related knowledge.
\subsection{Our Augmentation vs. Other NLP Augmentations}
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{cccc}
\multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F}} \\ \hline
\multicolumn{1}{l}{BERT (Our identifier)} & 36.1 & 56.0 & 43.9 \\
\multicolumn{1}{l}{TextSurface$_{BERT}$} & 37.0 & 57.5 & 45.0* \\
\multicolumn{1}{l}{BackTranslation$_{BERT}$} & 36.8 & 61.0 & 45.9* \\
\multicolumn{1}{l}{EDA$_{BERT}$} & 36.6 & 62.4 & 46.1* \\
\multicolumn{1}{l}{LearnDA$_{BERT}$} & 37.8 & 65.6 & \textbf{48.0*} \\ \hline
\end{tabular}
\caption{Results of different data augmentation methods on event causality identification on the ESC dataset. * denotes a significance test at the level of 0.05.}
\label{tab4}
\end{table}
In this section, we conduct a comparison between our augmentation framework and other NLP-related augmentation methods to further illustrate the effectiveness of LearnDA.
\paragraph{Effectiveness of Our Augmentation}
We train our identifier with augmented data produced by different NLP-related augmentation methods. As shown in Table \ref{tab4}, the augmented data generated by our LearnDA is more efficient for ECI, which is consistent with the previous analysis: LearnDA can generate well-formed task-related new sentences that contain more event-causal knowledge. Specifically, 1) \emph{text surface transformation} brings only a slight change to the labeled data, thus it has relatively little impact on ECI; 2) \emph{back translation} introduces limited new causal expressions by translation, thus it slightly increases the recall value on ECI; 3) \emph{EDA} can introduce new expressions via substitution, but the augmented data is not canonical and cannot accurately express the causality; therefore, its impact on ECI is also limited.
\begin{table}[t] \footnotesize
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{lcccc}
\textbf{} & \multicolumn{1}{c}{\textbf{Gold}} & \multicolumn{1}{c}{\textbf{EDA}} & \multicolumn{1}{c}{\textbf{BackTrans}} & \multicolumn{1}{c}{\textbf{LearnDA}} \\ \hline
\textbf{Causality} & 3.80 & 3.20 & 3.70 & 3.60 \\
\textbf{Well-formedness} & 3.95 & 2.75 & 3.83 & 3.64 \\
\textbf{Diversity} (Man/Auto) & 0.0/1.0 & 3.08/0.70 & 2.80/0.85 & 3.51/0.66 \\ \hline
\end{tabular}}
\caption{Manual (4-score rating (0, 1, 2, 3)) and automatic (BLEU score) evaluation of the generated sentences via different methods from causality, well-formedness and diversity. Causality and well-formedness are assessed manually, while diversity is assessed manually and automatically.}
\label{tab5}
\end{table}
\paragraph{Quantitative Evaluation of Task-relevance}
We select five Ph.D. students majoring in NLP to manually score 100 randomly selected augmented sentences given their corresponding original sentences as reference (Cohen's kappa = 0.85). Furthermore, we calculate the BLEU \cite{Papineni2002BleuAM} value to further evaluate the diversity. As aforementioned, the task-relevance of new sentences on ECI is manifested in causality and well-formedness, while the diversity indicates the degree of generalization. As shown in Table \ref{tab5}, we note that the sentences generated by LearnDA possess the above three properties at levels close to those of the labeled sentences. Specifically, the sentences produced by EDA have a certain degree of causality and diversity due to the lexical substitution assisted by external knowledge. However, they cannot express the causality well due to grammatical irregularities. Correspondingly, new sentences generated via back translation are very similar to the original sentences, while their diversity is poor.
\subsection{Case Study}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.44\textwidth,height=0.18\textheight]{fig6.pdf}
\caption{The modification of dual learning.} \label{fig6}
\end{figure}
We conduct a case study to further investigate the effectiveness of our LearnDA. Figure \ref{fig6} illustrates the modification process of dual learning. For example, in a), given two causal events, the generator is expected to generate a causal sentence. However, the generator without dual learning produces a non-causal sentence. Fortunately, with dual learning, the identifier judges the generated sentence as a non-causal one and guides the generator to produce a causal sentence with its feedback. Similarly, as shown in b), given a causal sentence, the identifier is expected to output a causal relation, but the identifier without dual training fails to do so. Correspondingly, the generator constructs feedback of low confidence to guide the identifier to output a causal relation.
\section{Conclusion}
This paper proposes a new learnable knowledge-guided data augmentation framework (LearnDA) to solve the data lacking problem on ECI. Our framework can leverage the duality between generation and identification via dual learning to generate task-related sentences for ECI. Moreover, our framework is knowledge guided and learnable.
Our method achieves state-of-the-art performance on EventStoryLine and Causal-TimeBank datasets.
\section*{Acknowledgments}
We thank anonymous reviewers for their insightful comments and suggestions. This work is supported by the National Key Research and Development Program of China (No.2018YFB1005100), the National Natural Science Foundation of China (No.U1936207, 61806201). This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the joint project with Beijing Baidu Netcom Science Technology Co., Ltd.
\bibliographystyle{acl_natbib}
\section{Introduction}
\label{intro}
Event causality identification (ECI) aims to identify causal relations between events in texts, which can provide crucial clues for NLP tasks, such as logical reasoning and question answering \cite{girju2003automatic,oh2013question,oh2017multi}. This task is usually modeled as a classification problem, i.e. determining whether there is a causal relation between two events in a sentence. For example in Figure \ref{fig1}, an ECI system should identify two causal relations in two sentences: (1) \textbf{attack} $ \stackrel{cause}{\longrightarrow}$ \textbf{killed} in S1; (2) \textbf{statement} $\stackrel{cause}{\longrightarrow}$ \textbf{protests} in S2.
Most existing methods for ECI heavily rely on annotated training data \cite{mirza2016catena,riaz2014recognizing,hashimoto2014toward,hu2017inferring,gao-etal-2019-modeling}. However, existing datasets are relatively small, which impede the training of the high-performance event causality reasoning model. According to our statistics, the largest widely used dataset EventStoryLine Corpus \cite{caselli2017event} only contains 258 documents, 4316 sentences, and 1770 causal event pairs. Therefore, data lacking is an essential problem that urgently needs to be addressed for ECI.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.48\textwidth,height=0.10\textheight]{fig1.pdf}
\caption{S1 and S2 are \emph{causal sentences} that contain \emph{causal events}. S3 is produced by EDA based on S1. The dotted line indicates the causal relation.} \label{fig1}
\end{figure}
Up to now, data augmentation is one of the most effective methods to solve the data lacking problem. However, most of the NLP-related augmentation methods are a task-independent framework that produces new data at one time \cite{Zhang2015CharacterlevelCN,Guo2019AugmentingDW,Xie2019UnsupervisedDA}. In these frameworks, data augmentation and target task are modeled independently. This often leads to a lack of task-related characteristics in the generated data, such as task-related linguistic expression and knowledge. For example, easy data augmentation (EDA) \cite{wei-zou-2019-eda} is the most representative method that relies on lexical substitution, deletion, swapping, and insertion to produce new data.
However, solely relying on such word operations often generates new data that dissatisfies task-related qualities.
As shown in Figure \ref{fig1}, S3 is produced by EDA, it lacks a linguistic expression that expresses the causal semantics between \emph{kill} and \emph{attack}. Therefore, how to interactively model data augmentation and target task to generate new data with task-related characteristics is a challenging problem on ECI.
Specific to ECI, we argue that an ideal task-related generated causal sentence needs to possess the following two characteristics. (1) The two events in the causal sentence need to have a causal relation. We call this property \textbf{\emph{Causality}}. For example, there is usually a causal relation between an \emph{attack} event and a \emph{kill} event, while there is almost never a causal relation between an \emph{attack} event and a \emph{born} event. (2) The linguistic expression of the causal sentence needs to be well-formed to express the causal semantics of the events. We call this property \textbf{\emph{Well-formedness}}; it consists of a) canonical sentence grammar, b) event-related entities with semantic roles (e.g. the \emph{attack} was carried out by the \emph{police} in S1), and c) cohesive words that express complete causal semantics (e.g. \emph{in a} and the other words except for events and entities in S1).
To this end, we propose a learnable data augmentation framework for ECI, dubbed \textbf{Learn}able Knowledge-Guided \textbf{D}ata \textbf{A}ugmentation (LearnDA). This framework regards sentence-to-relation mapping (\emph{the target task}, ECI) and relation-to-sentence mapping (\emph{the augmentation task}, sentence generation) as dual tasks and models the mutual relation between them via dual learning. Specifically, LearnDA uses the duality to generate task-related new sentences by learning from identification, and understands causal semantics more accurately by learning from generation. On the one hand, LearnDA is knowledge guided. It introduces diverse causal event pairs from KBs to initialize the dual generation, which ensures the \textbf{\emph{causality}} of generated causal sentences. For example, the knowledge of \emph{judgment} $\stackrel{cause}{\longrightarrow}$ \emph{demonstration} from KBs can be used to construct a novel causal sentence, which is also helpful for understanding the causal semantics of \emph{statement} $\stackrel{cause}{\longrightarrow}$ \emph{protests}. On the other hand, LearnDA is learnable. It employs a constrained generative architecture to generate \textbf{\emph{well-formed}} linguistic expressions of the causal semantics between given events via iterative learning in the dual interaction. Methodologically, it gradually fills in the missing cohesive words of the complete sentences under the constraint of the given events and related entities.
In experiments, we evaluate our model on two benchmarks. We first consider the standard evaluation and show that our model achieves state-of-the-art performance on ECI. Then we evaluate the main components of LearnDA. Finally, our learnable augmentation framework demonstrates definite advantages over other augmentation methods in generating task-related data for ECI.
In summary, our contributions are as follows:
\begin{itemize}
\item We propose a new learnable data augmentation framework to solve the data scarcity problem of ECI. Our framework leverages the duality between identification and generation via dual learning, which allows it to learn to generate task-related sentences for ECI.
\item Our framework is knowledge guided and learnable. Specifically, we introduce causal event pairs from KBs to initialize the dual generation, which ensures the causality of the generated causal sentences. We also employ a constrained generative architecture that gradually generates well-formed causal linguistic expressions via iterative learning in the dual interaction.
\item Experimental results on two benchmarks show that our model achieves the best performance on ECI. Moreover, it also shows definite advantages over previous data augmentation methods.
\end{itemize}
\section{Related Work}
To date, many studies have attempted to identify causality with linguistic patterns or statistical features. For example, some methods rely on syntactic and lexical features \cite{riaz2013toward,riaz2014recognizing}. Some focus on explicit causal textual patterns \cite{hashimoto2014toward,riaz2014depth,riaz2010another,do2011minimally,hidey-mckeown-2016-identifying}. And some others focus on statistical causal associations and cues \cite{beamer2009using,hu2017inference,hu2017inferring}.
Recently, more attention has been paid to the causality between events. \citet{Mirza2014AnAO} annotated Causal-TimeBank with event-causal relations based on the TempEval-3 corpus. \citet{mirza2014annotating} and \citet{mirza2016catena} extracted event-causal relations with a rule-based multi-sieve approach and improved the performance by incorporating event temporal relations. \citet{mostafazadeh2016caters} annotated both temporal and causal relations in 320 short stories. \citet{caselli2017event} annotated the EventStoryLine Corpus for event causality identification. \citet{dunietz-etal-2017-corpus} presented BECauSE 2.0, a new version of the BECauSE corpus \cite{dunietz-etal-2015-annotating} of causal relations and seven other relations. \citet{gao-etal-2019-modeling} modeled document-level structures to identify causality. \citet{ijcai2020-499} identified event causality with mention masking generalization.
Unlike in computer vision, the augmentation of text data in NLP is relatively rare \cite{chaudhary2020nlpaugment}. \citet{zuo-etal-2020-knowdis} addressed the data scarcity problem of ECI with distantly supervised labeled training data. However, including distant supervision, most existing data augmentation methods for NLP tasks are task-independent frameworks (related work on data augmentation and dual learning is detailed in Appendix B). Inspired by generative methods that try to generate additional training data while preserving the class label \cite{AnabyTavor2019NotED,yang-etal-2019-exploring-pre,Papanikolaou2020DAREDA}, we introduce a new learnable framework for augmenting task-related training data for ECI via dual learning enhanced with external knowledge.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.41\textwidth,height=0.19\textheight]{fig2.pdf}
\caption{Overview of the learnable knowledge-guided dual data augmentation for ECI.} \label{fig2}
\end{figure}
\section{Methodology}
As shown in Figure \ref{fig2}, LearnDA jointly models a knowledge guided sentence generator (input: \emph{an event pair and its causal/non-causal relation}; output: \emph{a causal/non-causal sentence}) and an event causality identifier (input: \emph{an event pair and its sentence}; output: \emph{a causal/non-causal relation}) with dual learning. LearnDA iteratively optimizes the identifier and the generator to generate task-related training data, and then utilizes the new data to further train the identifier. Therefore, we first present the main idea of dual learning, i.e.\ the architecture of the learnable dual augmentation, including its states, actions, policies, and rewards. Then, we briefly introduce the knowledge guided sentence generator, especially the processes of knowledge guiding and constrained sentence generation. Finally, we describe the event causality identifier and the training processes of LearnDA.
\subsection{Architecture of Learnable Dual Augmentation}
\label{sec:LDAN}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.39\textwidth,height=0.19\textheight]{fig3.pdf}
\caption{The architecture of learnable dual augmentation. \emph{Causal} and \emph{NCausal} represent the causal and non-causal sentence generator respectively. Red parts are the process of \emph{$<$event pair, relation$>$ $\rightarrow$ sentence $\rightarrow$ relation} (primal cycle), while blue parts are the process of \emph{$<$event pair, sentence$>$ $\rightarrow$ relation $\rightarrow$ sentence} (dual cycle). Solid and dashed lines denote the main process and reward feedback direction respectively.} \label{fig3}
\end{figure}
The architecture of the learnable dual augmentation is shown in Figure \ref{fig3}. Specifically, \emph{I} denotes the event causality identifier, and \emph{G} denotes the sentence generator, which consists of two independent generators that produce causal and non-causal sentences according to the relation $c$ of the input event pair $ep$.
Generally, \emph{G} generates a sentence $s'$ that expresses the causal or non-causal relation $c$ of the input event pair $ep$. It then receives a reward $R$ that consists of a semantic alignment reward $R_s$ from itself and a causality reward $R_c$ from \emph{I} (primal cycle). Similarly, \emph{I} identifies the causal or non-causal relation $c'$ of the input event pair $ep$ in its sentence $s$. It then receives a reward $R$ that consists of a causality reward $R_c$ from itself and a semantic alignment reward $R_s$ from \emph{G} (dual cycle).
\emph{I} and \emph{G} are optimized interactively with dual reinforcement learning. Specifically, for \emph{G}, an action is the generation from relation to sentence, a state is the representation of the input event pair and its relation, and a policy is defined by the parameters of the generator. For \emph{I}, an action is the identification from sentence to relation, a state is the representation of the input event pair and its sentence, and a policy is defined by the parameters of the identifier. Inspired by \citet{shen-feng-2020-cdl}, we represent the policies by probability distributions over actions given states, i.e., the probability distributions of the generation of \emph{G} and the identification of \emph{I}. As mentioned above, we introduce two rewards, the causality reward ($R_c$) and the semantic alignment reward ($R_s$), which encourage \emph{G} to generate task-related sentences with feedback from the identifier, while further optimizing \emph{I} with feedback from the generator. The definitions are as follows:
\paragraph{Causality Reward ($R_c$)} If the relation of the input event pair is clearly expressed by the generated sentence, it will be easier for the identifier to understand. Therefore, we use the causal relation classification accuracy as the causality reward to evaluate the causality of generated sentences, while tuning and optimizing the identifier itself:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
R_{c}(ep, s) =
\begin{cases}
p(c'|s;\theta_{I})& \text{\emph{Correct classification}}\\
-p(c'|s;\theta_{I})& \text{\emph{Otherwise},}
\end{cases}
\end{equation}
where $\theta_{I}$ is the parameter of \emph{I}, $p(c'|s;\theta_{I})$ denotes the probability of relation classification, $s$ denotes the input sentence and $c'$ is the classified relation.
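For concreteness, the following is a minimal Python sketch of this reward; the function and variable names are illustrative and not part of a released implementation:
\begin{verbatim}
# Sketch of the causality reward R_c.
# `probs`: identifier's probabilities over {0: non-causal, 1: causal};
# `gold`:  the relation of the input event pair.
def causality_reward(probs, gold):
    pred = max(range(len(probs)), key=lambda c: probs[c])
    p = probs[pred]
    # reward confident correct classification, penalize otherwise
    return p if pred == gold else -p
\end{verbatim}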
\paragraph{Semantic Alignment Reward ($R_s$)} We hope that the semantics of the generated sentence are consistent with the relation of the input event pair. Additionally, if the relation of the input event pair can be classified more accurately, the semantics of the newly generated sentence can be considered more consistent with it. Therefore, we measure semantic alignment by the probability of constructing a sentence whose semantics match the input relation, and the reward is:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
R_{s}(ep, c) = p(s'|c;\theta_{G}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{G}),
\end{equation}
where $\theta_{G}$ is the parameter of \emph{G}, $c$ is the input relation, $t$ is one of the generated tokens $T_s$ of the generated sentence $s'$, and $p(t|c;\theta_{G})$ is the generation probability of $t$. Specifically, there are two independent generators with different parameters: $\theta_{G}^{c}$ is employed to generate causal sentences when the input $c$ is a causal relation, and non-causal sentences are generated via $\theta_{G}^{nc}$ when $c$ is a non-causal relation.
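A corresponding sketch of $R_s$, under the same illustrative conventions; \texttt{token\_probs} collects $p(t|c;\theta_{G})$ for the filled tokens:
\begin{verbatim}
# Sketch of the semantic alignment reward R_s: the mean generation
# probability of the tokens filled by the relation-matched generator.
def semantic_alignment_reward(token_probs):
    return sum(token_probs) / len(token_probs)
\end{verbatim}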
\subsection{Knowledge Guided Sentence Generator}
As shown in Figure \ref{fig4}, the knowledge guided sentence generator (KSG) first introduces diverse causal and non-causal event pairs from KBs for \emph{causality}. Then, given an event pair and its causal or non-causal relation, it employs a constrained generative architecture to generate new \emph{well-formed} causal/non-causal sentences that contain the pair.
\begin{table*}[h] \footnotesize
\centering
\scalebox{0.90}{
\begin{tabular}{|m{1.5cm}|m{9cm}|m{5.5cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Knowledge}} & \multicolumn{1}{c|}{\textbf{How to extract event pair}} & \multicolumn{1}{c|}{\textbf{Why causal or non-causal}} \\ \hline
\multicolumn{3}{|c|}{\textbf{Lexical knowledge expanding}} \\ \hline
\multicolumn{1}{|c|}{\textbf{WordNet}} & 1) Extracting the synonyms and hypernyms from WordNet of each event in $ep$. 2) Assembling the items from the two groups of two events to generate causal/non-causal event pairs. & Items in each group are the synonyms and hypernyms of the annotated causal/non-causal event pairs. \\ \hline
\multicolumn{1}{|c|}{\textbf{VerbNet}} & 1) Extracting the words from VerbNet under the same class as each event in $ep$. 2) Assembling the items from the two groups of two events to generate causal/non-causal event pairs. & Items in each group are in the same class of the annotated causal/non-causal event pairs. \\ \hline
\multicolumn{1}{|c|}{\textbf{e.g.}} & \multicolumn{2}{c|}{$<(killed,attack),causal>\Longrightarrow kill \stackrel{Synonyms}{\longrightarrow}hurt$, $attack \stackrel{Synonyms}{\longrightarrow}onrush\Longrightarrow <(hurt,onrush),causal>$} \\
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\emph{Original sentence}: Kimani Gray, a young man who likes football, was killed in a police attack shortly after a tight match.} \\ \hline
\multicolumn{3}{|c|}{\textbf{Connective knowledge introducing}} \\ \hline
\textbf{FrameNet PDTB2} & 1) Extracting causal/non-causal connectives from FrameNet\footnote{Frame with types of \emph{Reasoning}, \emph{Causation}, \emph{Causation\_scenario}, \emph{Reason}, \emph{Triggering} and \emph{Explaining\_the\_facts}.} and PDTB2. 2) Extracting any two events connected by causal/non-causal connectives on KBP corpus to obtain causal/non-causal event pairs and original sentences respectively. & Introduced event pairs are connected by causal/non-causal connectives. \\ \hline
\multicolumn{1}{|c|}{\textbf{e.g.}} & \multicolumn{2}{c|}{\textbf{Looting} \emph{because} someone \textbf{beat up} someone, like the Travon Martin case. $\stackrel{because}{\Longrightarrow}<(loot,beat\_up),causal>$} \\
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\emph{Original sentence}: Looting because someone beat up someone, like the Travon Martin case.} \\ \hline
\end{tabular}}
\caption{Extracting causal and non-causal event pairs from multiple knowledge bases.}
\label{tab1}
\end{table*}
\paragraph{Knowledge Guiding}
\label{sec:KG}
\label{sec:KSG}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.42\textwidth,height=0.17\textheight]{fig4.pdf}
\caption{Flow diagram of the knowledge guided sentence generator (KSG). We take causal sentence generation via lexical knowledge expanding as an example.} \label{fig4}
\end{figure}
KSG introduces event pairs that are probabilistically causal or non-causal from multiple knowledge bases in two ways. (1) \emph{Lexical knowledge expanding}: expanding annotated event pairs via external dictionaries, such as WordNet \cite{miller1995wordnet} and VerbNet \cite{schuler2005verbnet}. (2) \emph{Connective knowledge introducing}: introducing event pairs from external event-annotated documents (the KBP corpus) assisted by FrameNet \cite{Baker1998TheBF} and the Penn Discourse Treebank (PDTB2) \cite{pdtb2008pdtb}. Table \ref{tab1} illustrates how we extract event pairs from the knowledge bases. Then, inspired by \citet{bordes2013translating}, we filter the extracted event pairs by converting them into triples $<$$e_i$, causal/non-causal, $e_j$$>$ and calculating the causal distance by maximizing $L$ in a causal representation space:
\begin{equation}\footnotesize
L=\sum_{(e_i,e_j) \in T} \sum_{(e'_i,e'_j) \in T'} [\lambda + d(\bm{e'_i},\bm{e'_j}) - d(\bm{e_i},\bm{e_j})]_{+},
\end{equation}
where $T$ and $T'$ are the causal and non-causal triple sets respectively, and $\bm{e}$ is the representation of an event. After that, the higher the probability of a causal relation, the shorter the distance between the two events, and we sort event pairs in ascending order of distance. Finally, we keep the top and bottom $\alpha$\% of the sorted event pairs to obtain the causal and non-causal event pair sets for generation.
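The following Python sketch illustrates this filtering step under our notation; \texttt{d} is a hypothetical distance function on event embeddings, and the form of the maximized objective is exactly that of the equation above:
\begin{verbatim}
# Sketch of the causal-distance objective and the alpha% filtering.
def margin_objective(causal_pairs, noncausal_pairs, d, lam=1.0):
    # maximized: causal pairs end up close, non-causal pairs far apart
    return sum(max(0.0, lam + d(fi, fj) - d(ei, ej))
               for (ei, ej) in causal_pairs
               for (fi, fj) in noncausal_pairs)

def keep_extremes(pairs, d, alpha=0.30):
    ranked = sorted(pairs, key=lambda p: d(*p))  # ascending distance
    k = int(alpha * len(ranked))
    return ranked[:k], ranked[-k:]  # causal set, non-causal set
\end{verbatim}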
\paragraph{Constrained Sentence Generator} Given an event pair, the constrained sentence generator produces a well-formed sentence that expresses its causal or non-causal relation in three stages: (1) \emph{assigning event-related entities} ensures the logic of the semantic roles of the events, (2) \emph{completing sentences} ensures the completeness of the causal or non-causal semantic expression, and (3) \emph{filtering sentences} ensures the quality and diversity of the generated sentences.
\emph{\textbf{Assigning Event-related Entities.}} Event-related entities play different semantic roles of events in sentences, which is an important part of event-semantic expression. Hence, as shown in Figure \ref{fig4}, given an event pair, we first assign logical entities to the input events to guarantee the logic of the semantic roles in the new sentences; for example, \emph{gang} is a logical entity as the subject of the event \emph{onrush}.
Logically, entities of the same type play the same semantic roles in similar events. Moreover, as shown in Table \ref{tab1}, there is a corresponding original sentence for each extracted event pair.
Therefore, in the new sentence, we assign the most similar entity of the same type from a candidate set\footnote{We collect entities from annotated data and the KBP corpus.} for each entity in the original sentence. For example, we assign \emph{gang} for \emph{onrush} in the new sentence, which is similar to the \emph{police} related to \emph{attack} in the original sentence. Specifically, we put the candidate entities in the same position in the original sentence to obtain their BERT embeddings. Then we select entities via the cosine similarity between their embeddings, where an entity embedding is $\mathcal{E}(ent) = \frac{1}{|ent|}\sum_{w \in ent}\mathcal{E}(w)$, with $ent$ the entity and $\mathcal{E}(w)$ the BERT embedding of word $w$.
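A minimal sketch of the entity-assignment step (embeddings are plain Python lists here; in practice they would be the BERT vectors obtained as described above):
\begin{verbatim}
# Sketch: pick, for each entity of the original sentence, the most
# similar same-type candidate by cosine similarity of embeddings.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def assign_entity(orig_embedding, candidates):
    # `candidates`: list of (entity, embedding) pairs of the same type
    best = max(candidates, key=lambda c: cosine(orig_embedding, c[1]))
    return best[0]
\end{verbatim}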
\emph{\textbf{Completing Sentences.}} A well-formed sentence requires a complete linguistic expression of the causal or non-causal semantics. Therefore, we complete sentences by filling in the cohesive words between the given events and assigned entities with masked BERT \cite{devlin-etal-2019-bert}. All words except events and entities are regarded as cohesive words. Specifically, we insert a certain number of the special token [MASK] between events and entities, and then predict the [MASK]\footnote{The number of inserted [MASK] tokens is 1.2 times the number of words between events and entities in the original sentence.} tokens as new words. As shown in Figure \ref{fig4}, we fill in cohesive tokens via two independent generators to express causal or non-causal semantics according to the relation of the given events. For example, \emph{in a} guides a causal semantic and is filled by the causal generator.
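The completion step can be sketched with an off-the-shelf masked language model; note that the snippet below uses the public \texttt{bert-base-uncased} checkpoint only as a stand-in for our pre-trained causal/non-causal generators, and fills the [MASK] slots greedily from left to right:
\begin{verbatim}
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")

def complete(text):
    # e.g. text = "people [MASK] [MASK] a gang onrush ."
    while tok.mask_token in text:
        ids = tok(text, return_tensors="pt").input_ids
        pos = (ids == tok.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            logits = mlm(ids).logits
        word = tok.convert_ids_to_tokens(int(logits[0, pos].argmax()))
        text = text.replace(tok.mask_token, word, 1)
    return text
\end{verbatim}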
\emph{\textbf{Filtering Sentences.}} Inspired by \citet{yang-etal-2019-exploring-pre}, we design a filter to select new sentences that balance high quality and high diversity using two key factors: 1) \textbf{Perplexity} (PPL): we take the average probability of the filled cohesive words in the new sentence $s'$ as its perplexity: $PPL(s') = \frac{1}{|T(s')|} \sum_{t \in T(s')} P(t)$, where $T(s')$ is the set of filled cohesive words.
2) \textbf{Distance} (DIS): we calculate the cosine similarity between the generated sentence $s'$ and annotated data $D_m$ as its distance:
$DIS(s',D_m)=\frac{1}{|D_m|} \sum_{s \in D_m} \frac{\mathcal{E}(s') \cdot \mathcal{E}(s)}{\lVert\mathcal{E}(s')\rVert \, \lVert\mathcal{E}(s)\rVert}$, where $D_m$ is a set of $m$ randomly selected annotated sentences and $\mathcal{E}$ is the BERT sentence representation of the [CLS] token. A new sentence should have both an appropriately high PPL, which indicates the quality of generation, and an appropriately high DIS, which indicates the difference from the original sentences. Therefore, we select the top $\beta$\% of the newly generated sentences according to $Score(s')=\mu PPL(s') + (1- \mu) DIS(s',D_m)$ for the further training of the identifier, where $\mu$ is a hyper-parameter.
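A sketch of the final selection, directly following the two factors above (names are illustrative):
\begin{verbatim}
# Sketch of sentence filtering by Score = mu*PPL + (1-mu)*DIS.
def score(ppl, dis, mu=0.2):
    return mu * ppl + (1 - mu) * dis

def select_sentences(candidates, beta=0.50, mu=0.2):
    # `candidates`: list of (sentence, ppl, dis) triples
    ranked = sorted(candidates, key=lambda c: score(c[1], c[2], mu),
                    reverse=True)
    return [s for s, _, _ in ranked[:int(beta * len(ranked))]]
\end{verbatim}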
\subsection{Training of LearnDA for ECI}
\label{sec:Training}
We briefly describe the training processes of LearnDA for ECI, including the pre-training of the generator and identifier, the dual reinforcement training, and the further training of the identifier.
\begin{algorithm}[t] \footnotesize
\caption{Dual Reinforcement Training of $\mathcal{G}$ and $\mathcal{I}$.}
\begin{algorithmic}[1]
\Require A set of knowledge guided event pairs \{($ep$,$s$,$c$)\}
A pre-trained generator $\mathcal{G}$ and identifier $\mathcal{I}$
\Ensure Early stop on the development set according to $\mathcal{I}$.
\Function {Primal Cycle}{}
\For{event pair $(ep_i,s_i,c_i)$ in batch}
\State Generator generates the sentence $s'_i$ of $ep_i$;
\State Identifier re-predicts the causality $c^*_i$ of $ep_i$;
\State Computing the reward as:
\State $\quad R_{primal}^s = \lambda R_s(ep_i,c_i)+(1-\lambda) R_c(ep_i,s'_i)$.
\State Computing the stochastic gradient of $\theta_{\mathcal{G}}$:
\State $\quad \nabla_{\mathcal{G}} += R_{primal}^s \cdot \nabla_{\theta_{\mathcal{G}}} L_{G}(ep_i, c_i)$.
\EndFor
\State Model batch updates: $\theta_{\mathcal{G}} \leftarrow \theta_{\mathcal{G}} + \eta \cdot \nabla_{\mathcal{G}}$
\EndFunction
\\
\Function {Dual Cycle}{}
\For{event pair $(ep_i,s_i,c_i)$ in batch}
\State Identifier predicts the causality $c'_i$ of $ep_i$;
\State Generator re-generates the sentence $s^*_i$ of $ep_i$;
\State Computing the reward as:
\State $\quad R_{dual}^s = \gamma R_c(ep_i,s_i)+(1-\gamma) R_s(ep_i,c'_i)$.
\State Computing the stochastic gradient of $\theta_{\mathcal{I}}$:
\State $\quad \nabla_{\mathcal{I}} += R_{dual}^s \cdot \nabla_{\theta_{\mathcal{I}}} L_{I}(ep_i, s_i)$.
\EndFor
\State Model batch updates: $\theta_{\mathcal{I}} \leftarrow \theta_{\mathcal{I}} + \eta \cdot \nabla_{\mathcal{I}}$
\EndFunction
\end{algorithmic}
\label{alg1}
\end{algorithm}
\paragraph{Event Causality Identifier}
\label{sec:detector} First of all, we formulate event causality identification as a sentence-level binary classification problem. Specifically, we build our identifier as a classifier based on BERT \cite{devlin-etal-2019-bert}. The input of the identifier is the event pair $ep$ and its sentence $s$. Next, we take the concatenation of manually designed features (the same lexical, causal potential, and syntactic features as \citet{gao-etal-2019-modeling}) and the two event representations as the input of a top MLP classifier. Finally, the output is a binary vector predicting the causal/non-causal relation of the input event pair $ep$.
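A schematic PyTorch sketch of this classifier follows; layer sizes and tensor names are illustrative, \texttt{bert} is any encoder exposing \texttt{last\_hidden\_state}, and \texttt{feats} stands for the hand-crafted feature vector:
\begin{verbatim}
import torch
import torch.nn as nn

class Identifier(nn.Module):
    # Event representations at the two trigger positions are
    # concatenated with hand-crafted features and fed to an MLP.
    def __init__(self, bert, feat_dim, hidden=768):
        super().__init__()
        self.bert = bert
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden + feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2))  # causal / non-causal

    def forward(self, ids, e1_pos, e2_pos, feats):
        h = self.bert(ids).last_hidden_state        # (B, L, hidden)
        b = torch.arange(ids.size(0))
        x = torch.cat([h[b, e1_pos], h[b, e2_pos], feats], dim=-1)
        return self.mlp(x)                          # relation logits
\end{verbatim}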
\paragraph{Pre-training} We pre-train the identifier and the generators on labeled data before dual reinforcement training. On the one hand, we train the identifier via the cross-entropy objective of relation classification. On the other hand, for the generators, we keep the events and entities in the input sentences, replace the remaining tokens with the special token [MASK], and then train them via the cross-entropy objective of re-predicting the masked tokens. Specifically, the causal generator and the non-causal generator are pre-trained on causal and non-causal labeled sentences respectively.
\paragraph{Dual Reinforcement Training} As shown in Algorithm \ref{alg1}, we interactively optimize the generator and identifier by dual reinforcement learning. Specifically, we maximize the following objective functions:
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
L_{G}(ep, c) =
\begin{cases}
p(s'|c;\theta_{G}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{G}) \\
p(s'|c;\theta_{NG}) = \frac{1}{|T_s|} \sum_{t \in T_s} p(t|c;\theta_{NG}),
\end{cases}
\end{equation}
\begin{equation}\footnotesize
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
L_{I}(ep, s) = p(c'|s;\theta_{I}),
\end{equation}
where $\theta_{G}$ and $\theta_{NG}$ are the parameters of the causal and non-causal sentence generators respectively, and $T_s$ is the set of masked tokens. Finally, after dual data augmentation, we utilize the generated sentences to further train the dual-trained identifier via the cross-entropy objective of relation classification.
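To make the interaction concrete, here is a REINFORCE-style sketch of one primal-cycle batch from Algorithm \ref{alg1}; \texttt{generator.generate}, \texttt{generator.token\_probs} and \texttt{identifier.probs} are hypothetical interfaces, and the reward functions are the sketches given earlier:
\begin{verbatim}
def primal_cycle(batch, generator, identifier, opt, lam=0.5):
    # one reward-weighted gradient step on the generator parameters
    opt.zero_grad()
    loss = 0.0
    for ep, rel in batch:
        sent, log_p = generator.generate(ep, rel)  # s' and log p(s'|c)
        r_s = semantic_alignment_reward(
            generator.token_probs(sent, rel))
        r_c = causality_reward(identifier.probs(ep, sent), rel)
        reward = lam * r_s + (1 - lam) * r_c
        # maximizing reward-weighted log-likelihood
        loss = loss - reward * log_p
    loss.backward()
    opt.step()
\end{verbatim}
The dual cycle is symmetric, with the identifier's parameters updated from $R_{dual}^s$ instead.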
\section{Experiments}
\subsection{Experimental Setup}
\paragraph{Dataset and Evaluation Metrics}
Our experiments are conducted on two main benchmark datasets: (1) \textbf{EventStoryLine} v0.9 (ESC) \cite{caselli2017event}, described above; and (2) \textbf{Causal-TimeBank} (Causal-TB) \cite{Mirza2014AnAO}, which contains 184 documents, 6813 events, and 318 causal event pairs. Following previous methods, we use the last two topics of ESC as the development set for both datasets. For evaluation, we adopt Precision (P), Recall (R), and F1-score (F1) as evaluation metrics. We conduct 5-fold and 10-fold cross-validation on ESC and Causal-TB respectively, as in previous methods, to ensure comparability.
All results are the average of three independent experiments.
\paragraph{Parameter Settings} In our implementation, both the identifier and the generators are built on the BERT-Base architecture\footnote{\url{https://github.com/google-research/bert}}, which has 12 layers, 768 hidden units, and 12 heads. We set the learning rates of generator pre-training, identifier pre-training/further training, and dual reinforcement training to 1e-5, 1e-5, and 1e-7 respectively. We set the ratio of augmented data used for training to labeled data, $\alpha$, $\beta$, $\mu$, $\lambda$ and $\gamma$ to 1:2, 30\%, 50\%, 0.2, 0.5 and 0.5 respectively, tuned on the development set. We apply early stopping and the SGD gradient strategy to optimize all models. We also adopt a negative sampling rate of 0.5 for training the identifier, owing to the sparseness of positive examples. (See Appendix D for more details.)
\paragraph{Compared Methods} We compare with previous state-of-the-art work. For ESC, we select 1) \textbf{LSTM} \cite{cheng2017classifying}, a dependency-path-based sequential model that models the context between events to identify causality; 2) \textbf{Seq} \cite{choubey2017sequential}, a sequence model that explores complex human-designed features for ECI; 3) \textbf{LR+} and \textbf{ILP} \cite{gao-etal-2019-modeling}, document-level models that adopt document structures for ECI. For Causal-TB, we select 1) \textbf{RB}, a rule-based system; 2) \textbf{DD}, a data-driven machine-learning-based system; 3) \textbf{VR-C}, a verb-rule-based model with data filtering and gold causal signal enhancement. These models were designed by \citet{Mirza2014AnAO,DBLP:journals/corr/Mirza16} for ECI.
Since our method is built on BERT, we also compare with BERT-based methods: 1) \textbf{BERT}, a BERT-based baseline, our basic proposed event causality identifier. 2) \textbf{MM} \cite{ijcai2020-499}, the BERT-based SOTA method with mention masking generalization. 3) \textbf{MM+$Aug$}, MM further re-trained with our dual augmented data. 4) \textbf{KnowDis} \cite{zuo-etal-2020-knowdis}, which improved the performance of ECI with distantly labeled training data; we compare with it to illustrate the quality of our generated ECI-related training data. 5) \textbf{MM}+$ConceptAug$: to make a fair comparison, we introduce causal-related events from ConceptNet, as employed by MM, and generate new sentences via KnowDis and LearnDA to further re-train MM (see Appendix C for details). Finally, \textbf{LearnDA}$_{Full}$ denotes our full model, i.e.\ the dual-trained identifier further trained on the dual augmented data.
\subsection{Our Method vs. State-of-the-art Methods}
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{lccc}
\textbf{Methods} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline
\multicolumn{4}{c}{\textbf{ESC}} \\ \hline
LSTM \cite{cheng2017classifying} & 34.0 & 41.5 & 37.4 \\
Seq \cite{choubey2017sequential} & 32.7 & 44.9 & 37.8 \\
LR+ \cite{gao-etal-2019-modeling} & 37.0 & 45.2 & 40.7 \\
ILP \cite{gao-etal-2019-modeling} & 37.4 & 55.8 & 44.7 \\
BERT & 36.1 & 56.0 & 43.9 \\
KnowDis \cite{zuo-etal-2020-knowdis} & 39.7 & 66.5 & 49.7 \\
MM \cite{ijcai2020-499} & 41.9 & 62.5 & 50.1 \\ \hline
MM+${ConceptAug}$ (\textbf{Ours}) & 41.2 & 66.5 & 50.9* \\
MM+${Aug}$ (\textbf{Ours}) & 41.0 & 69.3 & 51.5* \\
\textbf{LearnDA}$_{Full}$ (\textbf{Ours}) & \textbf{42.2} & \textbf{69.8} & \textbf{52.6*} \\ \hline
\multicolumn{4}{c}{\textbf{Causal-TB}} \\ \hline
RB \cite{Mirza2014AnAO} & 36.8 & 12.3 & 18.4 \\
DD \cite{Mirza2014AnAO} & 67.3 & 22.6 & 33.9 \\
VR-C \cite{DBLP:journals/corr/Mirza16} & \textbf{69.0} & 31.5 & 43.2 \\
BERT & 38.5 & 43.9 & 41.0 \\
MM \cite{ijcai2020-499} & 36.6 & 55.6 & 44.1 \\
KnowDis \cite{zuo-etal-2020-knowdis} & 42.3 & 60.5 & 49.8 \\ \hline
MM+${ConceptAug}$ (\textbf{Ours}) & 38.8 & 59.2 & 46.9* \\
MM+${Aug}$ (\textbf{Ours}) & 39.2 & 61.9 & 48.0* \\
\textbf{LearnDA}$_{Full}$ (\textbf{Ours}) & 41.9 & \textbf{68.0} & \textbf{51.9*} \\ \hline
\end{tabular}
\caption{Results on event causality identification. * denotes a significant test at the level of 0.05.}
\label{tab2}
\end{table}
Table \ref{tab2} shows the results of ECI on EventStoryLine and Causal-TimeBank. From the results:
1) Our LearnDA$_{Full}$ outperforms all baselines and achieves the best performance (52.6\%/51.9\% F1), outperforming the non-BERT (ILP/VR-C) and BERT-based (MM/KnowDis) state-of-the-art methods by margins of 7.9\%/8.7\% and 2.5\%/2.1\% respectively, which justifies its effectiveness. Moreover, the BERT-based methods show high recall, which benefits from more training data and their event-related guiding knowledge.
2) Comparing KnowDis with LearnDA$_{Full}$, we note that the training data generated by LearnDA is more helpful for ECI than distant supervision with external knowledge (+2.9\%/+2.1\%). This shows that LearnDA can generate more ECI-related data.
3) Comparing MM+$ConceptAug$ with MM, with the same knowledge base, our dual augmented data further improves the performance (+0.8\%/+2.8\%), which illustrates that LearnDA makes more effective use of external knowledge by generating task-related training data.
4) Comparing MM+${Aug}$ with MM, we note that training with our dual augmented data improves the performance by 1.4\%/3.9\%, even though MM is built on BERT-Large (LearnDA is built on BERT-Base) and also introduces external knowledge. This indicates that the augmented data generated by our LearnDA can effectively alleviate the data scarcity problem of ECI.
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{cccc}
\multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F}} \\ \hline
\multicolumn{1}{l}{BERT (Our basic identifier)} & 36.1 & 56.0 & 43.9 \\
\multicolumn{1}{l}{BERT$_{OrgAug}$} & 36.6 & 59.7 & 45.4* \\
\multicolumn{1}{l}{BERT$_{DualAug}$} & 37.8 & 65.6 & 48.0* \\
\multicolumn{1}{l}{LearnDA$_{Dual}$} & 36.8 & 63.0 & 46.5* \\
\multicolumn{1}{l}{LearnDA$_{DualAug-w/o.KB}$} & 37.5 & 67.0 & 48.1* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.intro}$} & 39.0 & 66.0 & 49.0* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.verbnet}$} & 39.4 & 66.7 & 49.5* \\
\multicolumn{1}{l}{$-$LearnDA$_{DualAug-w/.wordnet}$} & 39.6 & 67.6 & 49.9* \\
\multicolumn{1}{l}{LearnDA$_{Full}$} & 42.2 & 69.8 & \textbf{52.6*} \\ \hline
\end{tabular}
\caption{Ablation results on event causality identification on ESC. * denotes a significant test at the level of 0.05. BERT$_{OrgAug}$ and BERT$_{DualAug}$ denote the BERT is further trained on no-dual and dual augmented data respectively; LearnDA$_{Dual}$ denotes our identifier is only trained by dual learning without further training; LearnDA$_{DualAug-w/o.KB}$ denotes the LearnDA$_{Dual}$ is further trained by dual augmented data without knowledge guiding; LearnDA$_{DualAug-w/.<kb>}$ denotes LearnDA$_{Dual}$ is further trained by dual augmented data guided with knowledge base $kb$.}
\label{tab3}
\end{table}
\subsection{Effect of Learnable Dual Augmentation}
We analyze the effect of the learnable dual augmentation on event causality identification. 1) \emph{For the identifier}: comparing LearnDA$_{Dual}$ with BERT in Table \ref{tab3}, we note that the performance of the proposed identifier improves (+2.6\%) after dual training with only the same labeled data. This indicates that the identifier can learn more informative expressions of causal semantics from generation via dual learning. 2) \emph{For the generator}: comparing BERT$_{DualAug}$ with BERT$_{OrgAug}$ in Table \ref{tab3}, we note that the dual augmented data is of high quality and more helpful for ECI (+2.6\%). This indicates that the generator learns to generate more ECI task-related data from the identifier via dual learning.
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.35\textwidth,height=0.18\textheight]{fig5.pdf}
\caption{The impact of the training rounds of dual learning on event causality identification on ESC. In each round, we generate new training data by the generator at the current round. The performance is achieved by further training the identifier at the current round with the aforementioned newly generated data.} \label{fig5}
\end{figure}
Figure \ref{fig5} illustrates the learnability of our LearnDA. Specifically, as the number of training rounds of dual learning increases, the generated data gradually \emph{learns} task-related information, further improving the performance accordingly.
\subsection{Effect of Knowledge Guiding}
Table \ref{tab3} also illustrates the effect of knowledge guiding on ECI depending on different knowledge bases. 1) Comparing LearnDA$_{Full}$ with LearnDA$_{DualAug-w/o.KB}$, we note that the augmented data guided by external knowledge further improves the performance on ECI. 2) Specifically, \emph{lexical expanding} and \emph{connective introducing} (Sec \ref{sec:KG}) both make the representation of causal relations more generalized, which in turn makes it easier for the identifier to understand causality. 3) Moreover, expanding is more effective than introducing, because the former brings a wider range of effective knowledge and thus better guidance from causal-related knowledge.
\subsection{Our Augmentation vs. Other NLP Augmentations}
\begin{table}[t] \footnotesize
\centering
\begin{tabular}{cccc}
\multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & \multicolumn{1}{c}{\textbf{F}} \\ \hline
\multicolumn{1}{l}{BERT (Our identifier)} & 36.1 & 56.0 & 43.9 \\
\multicolumn{1}{l}{TextSurface$_{BERT}$} & 37.0 & 57.5 & 45.0* \\
\multicolumn{1}{l}{BackTranslation$_{BERT}$} & 36.8 & 61.0 & 45.9* \\
\multicolumn{1}{l}{EDA$_{BERT}$} & 36.6 & 62.4 & 46.1* \\
\multicolumn{1}{l}{LearnDA$_{BERT}$} & 37.8 & 65.6 & \textbf{48.0*} \\ \hline
\end{tabular}
\caption{Results of different data augmentation methods on event causality identification on ESC dataset. * denotes a significant test at the level of 0.05.}
\label{tab4}
\end{table}
In this section, we conduct a comparison between our augmentation framework and other NLP-related augmentation methods to further illustrate the effectiveness of LearnDA.
\paragraph{Effectiveness of Our Augmentation}
We train our identifier with augmented data produced by different NLP-related augmentation methods. As shown in Table \ref{tab4}, the augmented data generated by our LearnDA is the most effective for ECI, which is consistent with the previous analysis: LearnDA can generate well-formed task-related new sentences that contain more event causal knowledge. Specifically, 1) \emph{text surface transformation} brings only slight changes to the labeled data, and thus has relatively little impact on ECI; 2) \emph{back translation} introduces limited new causal expressions through translation, and thus only slightly increases the recall on ECI; 3) \emph{EDA} can introduce new expressions via substitution, but the augmented data is not canonical and cannot accurately express the causality; therefore, its impact on ECI is also limited.
\begin{table}[t] \footnotesize
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{lcccc}
\textbf{} & \multicolumn{1}{c}{\textbf{Gold}} & \multicolumn{1}{c}{\textbf{EDA}} & \multicolumn{1}{c}{\textbf{BackTrans}} & \multicolumn{1}{c}{\textbf{LearnDA}} \\ \hline
\textbf{Causality} & 3.80 & 3.20 & 3.70 & 3.60 \\
\textbf{Well-formedness} & 3.95 & 2.75 & 3.83 & 3.64 \\
\textbf{Diversity} (Man/Auto) & 0.0/1.0 & 3.08/0.70 & 2.80/0.85 & 3.51/0.66 \\ \hline
\end{tabular}}
\caption{Manual (4-score rating (0, 1, 2, 3)) and automatic (BLEU score) evaluation of the generated sentences via different methods from causality, well-formedness and diversity. Causality and well-formedness are assessed manually, while diversity is assessed manually and automatically.}
\label{tab5}
\end{table}
\paragraph{Quantitative Evaluation of Task-relevance}
We select five Ph.D. students majoring in NLP to manually score 100 randomly selected augmented sentences given their corresponding original sentences as references (Cohen's kappa = 0.85). Furthermore, we calculate the BLEU \cite{Papineni2002BleuAM} score to further evaluate diversity. As mentioned above, the task-relevance of new sentences for ECI is manifested in causality and well-formedness, while diversity indicates the degree of generalization. As shown in Table \ref{tab5}, the sentences generated by LearnDA exhibit all three properties at levels close to the labeled sentences. Specifically, the sentences produced by EDA have a certain degree of causality and diversity due to lexical substitution assisted by external knowledge; however, they cannot express the causality well due to grammatical irregularities. Correspondingly, new sentences generated via back translation are very similar to the original sentences, so their diversity is poor.
\subsection{Case Study}
\begin{figure}[t]
\centering
\includegraphics*[clip=true,width=0.44\textwidth,height=0.18\textheight]{fig6.pdf}
\caption{The modification of dual learning.} \label{fig6}
\end{figure}
We conduct a case study to further investigate the effectiveness of our LearnDA. Figure \ref{fig6} illustrates the modification process of dual learning. For example, in a), given two causal events, the generator is expected to generate a causal sentence. However, the generator without dual learning produces a non-causal sentence. With dual learning, the identifier judges the generated sentence to be non-causal and its feedback guides the generator to produce a causal sentence. Similarly, as shown in b), given a causal sentence, the identifier is expected to output a causal relation, but without dual training it fails to do so. Correspondingly, the generator provides low-confidence feedback that guides the identifier to output a causal relation.
\section{Conclusion}
This paper proposes a new learnable knowledge-guided data augmentation framework (LearnDA) to solve the data scarcity problem of ECI. Our framework leverages the duality between generation and identification via dual learning to generate task-related sentences for ECI. Moreover, our framework is knowledge guided and learnable.
Our method achieves state-of-the-art performance on the EventStoryLine and Causal-TimeBank datasets.
\section*{Acknowledgments}
We thank anonymous reviewers for their insightful comments and suggestions. This work is supported by the National Key Research and Development Program of China (No.2018YFB1005100), the National Natural Science Foundation of China (No.U1936207, 61806201). This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the joint project with Beijing Baidu Netcom Science Technology Co., Ltd.
\bibliographystyle{acl_natbib}
|
2,877,628,088,520 | arxiv | \section*{Introduction}
Let ${\mathfrak g}$ be a simple Lie algebra of type $ADE$ over ${\mathbb C}$ with the
index set $I$ of simple roots, $\mathbf L\g = {\mathfrak g} \otimes {\mathbb C}[z,z^{-1}]$ be its
loop algebra, and $\Ul$ be its quantum universal enveloping algebra,
or the quantum loop algebra for short. It is a subquotient of the
quantum affine algebra $\Ua$, i.e.\ without central extension and
degree operator. It contains the quantum enveloping algebra $\Uq$
associated with ${\mathfrak g}$ as a subalgebra.
By Drinfeld \cite{Dr} and Chari-Pressley \cite{CP-rep}, simple
$\Ul$-modules are parametrized by $I$-tuples of polynomials $P =
(P_i(u))_{i\in I}$ with normalization $P_i(0) = 1$. They are called
{\it Drinfeld polynomials}. Let us denote by $L(P)$ the simple module
with Drinfeld polynomial $P$. When $P$ is given by $P_i(u) =
(1-au)^{\delta_{iN}}$ for a given $N\in I$, we call corresponding
module an {\it $N^{\mathrm{th}}$ $l$--fundamental representation}.
(It has been called a {\it level $0$ fundamental module\/} or simply
{\it fundamental representation\/} in some literature.) We can assume
$a=1$ without the loss of generality as the general module is a
pullback of the module with $a=1$ by an algebra automorphism of $\Ul$.
Let $\chi_{q,t}(L(P))$ be the $t$--analog of $q$--character of a
simple module $L(P)$ defined by the author
\cite{Na-qchar,Na-qchar-main}. It is defined via the geometry of
graded quiver varieties.
It takes values in a certain Laurent polynomial ring with infinitely many
variables and integer coefficients. It is a $t$--analog of the
$q$--character $\chi_q(L(P))$ introduced earlier \cite{Kn,FR}, which
was a refinement of the ordinary character of the restriction of
$L(P)$ to a $\Uq$-module.
In \cite{Na-qchar,Na-qchar-main} we ``computed'' $\chi_{q,t}(L(P))$
for an arbitrary given $L(P)$, in the sense that we gave a purely combinatorial
algorithm to write down all monomials and coefficients in
$\chi_{q,t}(L(P))$ where the final expression involves only $+$, $\times$,
integers and variables.
In order to clarify in what sense our result is new compared with
earlier results, we {\it define\/} precisely what the word compute means.
When we write the word compute in quotation marks, it means that
we give a combinatorial algorithm to compute something in the above sense.
It does not necessarily mean that we actually compute it. We can
write a computer program in principle, but the question of whether we can
actually compute it or not depends on the size of computer memory.
(For example, it is clear that the rank $n$ of ${\mathfrak g}$ cannot be larger
than the size of the memory.)
On the other hand, when we write the word compute without
quotation marks, we mean to compute something in the strict sense, i.e.\
we express something so that it contains only finitely many $\pm$,
$\times$, integers and variables.
For example, if we write $x = \sum_{i=1}^{2^{(2^{100})}} a_i$ for some explicit
$a_i$, we ``compute'' $x$, but we do not compute $x$ unless we actually
compute the sum.
On the other hand, we do not require that the final expression can be
read by a human, as such a concept cannot be made precise.
The algorithm is separated into three steps:
\begin{enumerate}
\item ``Computation'' of $\chi_{q,t}$ for $l$--fundamental representations.
\item ``Computation'' of $\chi_{q,t}$ for standard modules, i.e.\ tensor
products of $l$--fundamental representations.
\item ``Computation'' of the $t$-analog of the composition factors of
simple modules in standard modules.
\end{enumerate}
The third step is analogous to the definition of Kazhdan-Lusztig
basis. If $M(P)$ denotes the standard module, we have
\begin{equation}\label{eq:L(P)}
\overline{\chi_{q,t}(L(P))} = \chi_{q,t}(L(P)), \qquad
\chi_{q,t}(L(P)) = \chi_{q,t}(M(P)) + \sum_{Q:Q<P} a_{PQ}(t)
\chi_{q,t}(M(Q))
\end{equation}
for some $a_{PQ}(t)\in t^{-1}{\mathbb Z}[t^{-1}]$, where `$<$' is a certain
explicitly defined ordering. Thus $a_{PQ}(t)$ is analogous to
Kazhdan-Lusztig polynomials. The above characterization allows us to
``compute'' $a_{PQ}(t)$, once $\chi_{q,t}(M(P))$ is ``computed''. (And
it is known that the actual computation of Kazhdan-Lusztig polynomials
is very hard.)
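The recursion determining $a_{PQ}(t)$ is the standard triangular one for bar-invariant bases; as an illustration (not part of the author's code), a Python sketch with Laurent polynomials encoded as exponent-to-coefficient dictionaries, assuming the bar-transition matrix $r$ on the standard basis (which in practice is extracted from the already ``computed'' $\chi_{q,t}(M(Q))$):
\begin{verbatim}
def bar(p):  # the involution t -> 1/t on a Laurent polynomial
    return {-e: c for e, c in p.items()}

def kl_coefficients(order, r):
    # `order`: labels Q listed downward from P;
    # `r[R][Q]`: coefficient of M(Q) in bar(M(R)), r[R][R] = {0: 1}.
    b = {order[0]: {0: 1}}           # a_{PP} = 1
    for Q in order[1:]:
        f = {}
        for R in b:                  # solved labels; only Q < R contribute
            for e1, c1 in bar(b[R]).items():
                for e2, c2 in r[R].get(Q, {}).items():
                    f[e1 + e2] = f.get(e1 + e2, 0) + c1 * c2
        # b[Q] - bar(b[Q]) = f with b[Q] in t^{-1}Z[t^{-1}], so b[Q]
        # is the strictly negative-degree part of f
        b[Q] = {e: c for e, c in f.items() if e < 0 and c}
    return b
\end{verbatim}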
In the second step, we express $\chi_{q,t}(M(P))$ as a twisted
multiplication of $\chi_{q,t}$ of $l$--fundamental representations. It
is almost the same as usual multiplication on the polynomials, but a
product of two monomials $m$, $m'$ is twisted as $t^{2d(m,m')} mm'$.
Therefore this step is very simple. It is clear that
$\chi_{q,t}(M(P))$ can be ``computed'' if $\chi_{q,t}$ of
$l$--fundamental representations are ``computed''.
This paper concerns the first step. Our ``computation'' in
\cite{Na-qchar,Na-qchar-main} was a $t$--analog of the ``computation'' by
Frenkel-Mukhin~\cite{FM}.
It is based on the observation that (a) $\chi_{q,t}$ satisfies a
certain analog of the Weyl group invariance of the ordinary
characters, and (b) the $l$--fundamental representation satisfies a
certain property analogous to that of minuscule representations of
${\mathfrak g}$. Recall that a simple finite dimensional representation of ${\mathfrak g}$
is called {\it minuscule\/} if all weights are conjugates of the
highest weight under the Weyl group each occurring with multiplicity
$1$.
When ${\mathfrak g}$ is of classical type, i.e.\ of type $A$, $D$, the author
gave a tableaux sum expression of $\chi_{q,t}$ of $l$--fundamental
representations \cite{Na-AD}. It means that we give another
``computation'' of $\chi_{q,t}$, which is more familiar to us than
the above one. It does not mean that we compute $\chi_{q,t}$ in our
strict sense. In fact, the comparison of the two methods does not make sense
unless we define what we mean by `familiar'.
In practice, it just means that we have a faster algorithm for the
actual computer calculation.
In this paper we report the actual computer computation of
$\chi_{q,t}$ of $l$--fundamental representations when ${\mathfrak g}$ is of type
$E_6^{(1)}$, $E_7^{(1)}$, $E_8^{(1)}$. Our algorithm is implemented in
the computer language {\bf C}. The source code is available at
\linebreak[3] \verb|http://www.math.kyoto-u.ac.jp/~nakajima/Qchar/|.
The author's personal computer (Dell Dimension 9100) can give the
answer up to the $6^{\mathrm{th}}$ $l$--fundamental representation of
$E_8$, where our numbering of $I$ is the following:
\begin{equation*}
\begin{array}{ccccccccccccc}
7 & - & 6 & - & 5 & - & 4 & - & 3 & - & 2 & - & 1
\\
&&&& | &&&&&&&&
\\
&&&& 8 &&&&&&&&
\end{array}
\end{equation*}
We need about 120Mbytes of memory for this calculation. For the
$4^{\mathrm{th}}$ and $5^{\mathrm{th}}$ $l$--fundamental
representations, the computation was done on a supercomputer FUJITSU
HPC 2500 at Kyoto University. The calculation required about 2.6Gbytes
(for $4^{\mathrm{th}}$) and 120Gbytes (for $5^{\mathrm{th}}$) of
memory, and it took 6 hours and 350 hours for the calculation
respectively. The final answers (stored in a compressed format as
explained below) are 3.2Gbytes and 180Gbytes respectively.
In fact, the calculation of the $4^{\mathrm{th}}$ one was done several
years ago and was mentioned in some of the author's papers. However,
we needed to wait for Kyoto University to renovate the
supercomputer so that we could use 120Gbytes of memory in a single
program, and then for the author to obtain a sufficient budget to use
the supercomputer.
As far as the author knows, the computation (in our strict sense) for
the $5^{\mathrm{th}}$ one was not known before.
Frenkel-Mukhin and Hernandez-Schedler told the author that they wrote
computer programs calculating $\chi_{q,t=1}$ and $\chi_{q,t}$ respectively,
but both ran into problems with computer memory.
In conclusion, we can now delete the quotation marks for computation in
the first step of the algorithm for type $E$ above.
As an application, we can compute the $t$--analogs of the ordinary
characters of the restrictions of $l$--fundamental representations to
$\Uq$-modules. The $l$--fundamental modules are examples of the
so-called Kirillov-Reshetikhin modules. Kirillov-Reshetikhin gave
a conjectural formula for the ordinary character of the restriction of a
Kirillov-Reshetikhin module \cite{KR}. Its graded version (i.e.\
$t$--analog) together with an interpretation in terms of the
conjectural crystal base was given by Hatayama, Kuniba, Okado, Takagi
and Yamada \cite{HKOTY}. Then Lusztig conjectured that their
conjectural grading is the same as the cohomological degree
\cite{Lu:ferm}, in a certain class of Kirillov-Reshetikhin modules
including $l$--fundamental representations. Therefore the formula in
\cite{HKOTY}, in this class, gives the generating function of
Poincar\'e polynomials of quiver varieties.
In general, the conjectural formula is expressed as a summation over
partitions, and is called a {\it fermionic formula}.
The author gave an expression for $t=1$ in \cite[Cor.~1.3]{Na-KR} (the
result was extended to type $BCFG$ in \cite{Her}). It is again given
as a summation over partitions, but the definition of the binomial
coefficients appearing in the coefficients is different. The
equivalence between the two expressions is not known so far; therefore
the original fermionic formula remains open.
For an $l$--fundamental representation, the original fermionic formula
can be given as an explicit polynomial by the so-called Kleber
algorithm \cite{Kl}. Here we do not make precise what we mean by
`explicit'.
For types $A$, $D$, it was shown in \cite{Na-AD} that this
`explicit' expression for an $l$--fundamental representation is equal
to the ``computation'' in \cite{Na-qchar}. For type $E$, the algorithm
can be used to compute the fermionic formula in our strict sense. The
result can then be checked in some special cases computed previously
(at least for $t=1$) (e.g.\ \cite{CP}), but most $l$--fundamental
representations had remained open.
Remark that Kleber's algorithm does not apply to the modified formula
in \cite{Na-KR}, so it is not known whether the modified formula gives
the computation in the strict sense.
Our computation of $\chi_{q,t}$ gives the explicit expression and we
find that it is the same as the one given in \cite{HKOTY}.
Therefore we prove Lusztig's conjecture for all
$l$--fundamental representations.
Also as another application, we determine all monomials appearing
in the monomial realization of the crystals corresponding to
fundamental representations of type $E$. For types $A$, $D$, they were
determined in \cite{Na-AD} as an application of the explicit
description of $\chi_{q,t}$ of $l$--fundamental representations.
For types $B$, $C$, they were determined in \cite{kks}. For types $F$,
$G$, they can be easily determined (cf.\ \cite{HN}). In conclusion, we
describe the monomial realization of the crystals of all
fundamental representations explicitly.
\subsection*{Acknowledgement}
A part of the computer program was written while the author stayed at
Centre for Advanced Study (CAS) at the Norwegian Academy of Science
and Letters in 2002. He would like to thank CAS for the hospitality.
\section{$t$--analogs of $q$--characters}
We shall not give the definition of quantum loop algebras, nor
their finite dimensional representations in this paper. (See
\cite{Na-qchar} for a survey.) We just review properties of
$\chi_{q,t}$, as axiomized in \cite{Na-qchar-main}.
Let
\(
\mathscr Y_t \overset{\operatorname{\scriptstyle def.}}{=}
{\mathbb Z}[t,t^{-1},Y_{i,a}, Y_{i,a}^{-1}]_{i\in I, a\in{\mathbb C}^*}
\)
be a Laurent polynomial ring in uncountably many variables $Y_{i,a}$
with coefficients in ${\mathbb Z}[t,t^{-1}]$. A {\it monomial\/} in $\mathscr
Y_t$ means a monomial only in $Y_{i,a}^\pm$, containing no
$t$'s. Therefore a polynomial is a sum of monomials multiplied by
Laurent polynomials in $t$, called coefficients as usual.
Let
\begin{equation*}
A_{i,a} \overset{\operatorname{\scriptstyle def.}}{=} Y_{i,a\varepsilon} Y_{i,a\varepsilon^{-1}}
\prod_{j:j\neq i} Y_{j,a}^{c_{ij}},
\end{equation*}
where $c_{ij}$ is the $(i,j)$-entry of the Cartan matrix.
Let $\mathcal M$ be the set of monomials in $\mathscr Y_t$.
\begin{Definition}
(1) For a monomial $m\in\mathcal M$, we define $u_{i,a}(m)\in{\mathbb Z}$ be the
degree in $Y_{i,a}$, i.e.\
\begin{equation*}
m = \prod_{i,a} Y_{i,a}^{u_{i,a}(m)}.
\end{equation*}
(2) A monomial $m\in\mathcal M$ is said {\it $i$--dominant\/} if
$u_{i,a}(m)\ge 0$ for all $a$. It is said {\it l--dominant\/} if it is
$i$--dominant for all $i$.
(3) Let $m, m'$ be monomials in $\mathcal M$. We say $m \le m'$ if
$m/m'$ is a monomial in $A_{i,a}^{-1}$ ($i\in I$, $a\in{\mathbb C}^*$).
Here a monomial in $A_{i,a}^{-1}$ means a product of nonnegative
powers of $A_{i,a}^{-1}$. It does not contain any factors
$A_{i,a}$. In such a case we define $v_{i,a}(m, m')\in{\mathbb Z}_{\ge 0}$ by
\begin{equation*}
m = m' \prod_{i,a} A_{i,a}^{-v_{i,a}(m,m')}.
\end{equation*}
This is well-defined since the $\varepsilon$-analog of the Cartan matrix is
invertible. We say $m < m'$ if $m\le m'$ and $m\neq m'$.
(4) For an $i$--dominant monomial $m\in\mathcal M$ we define
\begin{equation*}
E_i(m) \overset{\operatorname{\scriptstyle def.}}{=}
m\, \prod_a
\sum_{r_a=0}^{u_{i,a}(m)}
t^{r_a(u_{i,a}(m)-r_a)}
\begin{bmatrix}
u_{i,a}(m) \\ r_a
\end{bmatrix}_t A_{i,a\varepsilon}^{-r_a},
\end{equation*}
where
\(
\left[\begin{smallmatrix}
n \\ r
\end{smallmatrix}\right]_t
\)
is the $t$-binomial coefficient.
(5) We define a ring involution
$\setbox5=\hbox{A}\overline{\rule{0mm}{\ht5}\hspace*{\wd5}}$ on
${\mathscr Y}_t$ by $\overline{t} = t^{-1}$,
$\overline{Y_{i,a}^\pm} = Y_{i,a}^\pm$.
\end{Definition}
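In a computer implementation, the monomial calculus above reduces to integer bookkeeping. As an illustration (a Python sketch, not the author's {\bf C} code), for the simply-laced case we may write a monomial as a dictionary $\{(i,s)\mapsto u_{i,\varepsilon^s}\}$; for $l$--fundamental representations all spectral parameters are powers of $\varepsilon$:
\begin{verbatim}
# Monomials as dicts {(i, s): exponent of Y_{i, eps^s}}.
def u(m, i, s):
    return m.get((i, s), 0)

def is_dominant(m, i=None):
    return all(e >= 0 for (j, _), e in m.items() if i is None or j == i)

def mul_A_inv(m, i, s, neighbors):
    # multiply m by A_{i, eps^s}^{-1}; `neighbors[i]` lists the j
    # adjacent to i in the Dynkin diagram (where c_{ij} = -1)
    out = dict(m)
    deltas = [((i, s + 1), -1), ((i, s - 1), -1)]
    deltas += [((j, s), +1) for j in neighbors[i]]
    for key, d in deltas:
        out[key] = out.get(key, 0) + d
        if out[key] == 0:
            del out[key]
    return out
\end{verbatim}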
Suppose that {\it l\/}--dominant monomials $m_{P^1}$, $m_{P^2}$ and
monomials $m^1\le m_{P^1}$, $m^2\le m_{P^2}$ are given. We define an
integer $d(m^1, m_{P^1}; m^2, m_{P^2})$ by
\begin{multline}\label{eq:d}
d(m^1, m_{P^1}; m^2, m_{P^2})
\\
\overset{\operatorname{\scriptstyle def.}}{=}
\sum_{i,a} \left( v_{i,a\varepsilon}(m^1, m_{P^1}) u_{i,a}(m^2)
+ u_{i,a\varepsilon}(m_{P^1}) v_{i,a}(m^2, m_{P^2})\right).
\end{multline}
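In an implementation where every spectral parameter is a power of $\varepsilon$, the pairing \eqref{eq:d} becomes a finite sum over pairs $(i,s)$; a sketch, with the four ingredients $v_{i,a}(m^1,m_{P^1})$, $u_{i,a}(m^2)$, $u_{i,a}(m_{P^1})$, $v_{i,a}(m^2,m_{P^2})$ passed as dictionaries $\{(i,s)\mapsto\text{value}\}$:
\begin{verbatim}
def pairing_d(v1, u2, uP1, v2):
    # d(m^1, m_{P^1}; m^2, m_{P^2}); a key (i, s) stands for a = eps^s,
    # so v_{i, a.eps} at (i, s) pairs with u_{i, a} at (i, s - 1).
    total = 0
    for (i, s), v in v1.items():
        total += v * u2.get((i, s - 1), 0)
    for (i, s), uu in uP1.items():
        total += uu * v2.get((i, s - 1), 0)
    return total
\end{verbatim}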
For an $I$-tuple of rational functions $Q/R = (Q_i(u)/R_i(u))_{i\in
I}$ with $Q_i(0) = R_i(0) = 1$, we set
\begin{equation*}
m_{Q/R} \overset{\operatorname{\scriptstyle def.}}{=}
\prod_{i\in I} \prod_{\alpha} \prod_{\beta}
Y_{i,\alpha} Y_{i,\beta}^{-1},
\end{equation*}
where $\alpha$ (resp.\ $\beta$) runs over the roots of $Q_i(1/u) = 0$
(resp.\ $R_i(1/u) = 0$), i.e.\
$Q_i(u) = \prod_\alpha ( 1 - \alpha u)$ (resp.\ $R_i(u) = \prod_\beta
(1 - \beta u)$). As a special case, an $I$-tuple of polynomials $P =
(P_i(u))_{i\in I}$ defines $m_P = m_{P/1}$. The $l$--dominant monomial
$m_{P^\alpha}$ appearing above is associated to an $I$-tuple of
polynomials $P = (P_i(u))_{i\in I}$.
In this way, the set $\mathcal M$ of monomials is identified with the
set of $I$-tuples of rational functions, and the set of {\it
l\/}--dominant monomials is identified with the set of $I$-tuples of
polynomials.
The $t$--analog of the Grothendieck ring $\mathbf R_t$
is a free ${\mathbb Z}[t,t^{-1}]$-module with basis $\{ M(P) \}$, where $P =
(P_i(u))_{i\in I}$ is the Drinfeld polynomial.
(We do not recall the definition of the standard modules $M(P)$ here;
the reader may safely consider them as formal variables.)
The $t$--analog of the $\varepsilon$--character homomorphism is a
${\mathbb Z}[t,t^{-1}]$-linear homomorphism
\(
\chi_{q,t}\colon \mathbf R_t \to \mathscr Y_t.
\)
It is defined as the generating function of Poincar\'e polynomials of
graded quiver varieties, or the generating function of graded dimensions of
$l$--weight spaces of a $\Ul$-module \cite{VV2}, and will not be
reviewed in this paper.
We also need a slightly modified version:
\begin{equation*}
\qch{q,t}(M(P)) = \sum_m t^{d(m,m_P;m,m_P)} a_m(t) m
\qquad \text{if $\chi_{q,t}(M(P)) = \sum_m a_m(t) m$}.
\end{equation*}
If we know one of $\chi_{q,t}$ and $\qch{q,t}$, we know the other.
The following was proved in \cite{Na-qchar,Na-qchar-main}:
\begin{Fact}\label{thm:ind}
\textup{(1)}
The $\chi_{q,t}$ of a standard module $M(P)$ has a form
\begin{equation*}
\chi_{q,t}(M(P)) = m_P + \sum a_m(t) m,
\end{equation*}
where the summation runs over monomials $m < m_P$.
\textup{(2)}
For each $i\in I$, $\qch{q,t}(M(P))$ can be expressed as a linear
combination \textup(over ${\mathbb Z}[t,t^{-1}]$\textup) of $E_i(m)$ with
$i$--dominant monomials $m$.
\textup{(3)}
Suppose that two $I$-tuples of polynomials $P^1 = (P^1_i)$, $P^2 =
(P^2_i)$ satisfy the following condition:
\begin{equation}
\label{eq:Z}
\begin{minipage}[m]{0.75\textwidth}
\noindent
$a/b\notin\{ \varepsilon^n \mid n\in{\mathbb Z}, n \ge 2\}$ for any
pair $a$, $b$ with $P^1_i(1/a) = 0$, $P^2_j(1/b) =
0$ \textup($i,j\in I$\textup).
\end{minipage}
\end{equation}
Then we have
\begin{equation*}
\qch{q,t}(M(P^1P^2)) =
\sum_{m^1, m^2} t^{2d(m^1, m_{P^1}; m^2, m_{P^2})}
a_{m^1}(t) a_{m^2}(t) m^1 m^2
,
\end{equation*}
where
\(
\qch{q,t}(M(P^a)) = \sum_{m^a} a_{m^a}(t) m^a
\)
with $a=1,2$.
Moreover, properties \textup{(1),(2),(3)} uniquely determine
$\chi_{q,t}(M(P))$.
\textup{(4)} The $\chi_{q,t}$ of the simple module $L(P)$ is given by
\eqref{eq:L(P)}.
\end{Fact}
Apart from the existence problem, one can consider the above
properties (1), (2), (3) as the definition of $\chi_{q,t}$ (an
axiomatic definition). We only use the above properties, and the
reader can safely forget the original definition. Note that we will
prove the existence of $\chi_{q,t}$ by our computer calculation.
By the property (1) we call the monomial $m_P$ corresponding to the
Drinfeld polynomial $P$ the {\it $l$--highest weight monomial}.
\section{Algorithm}\label{sec:algorithm}
In this section we shall explain our algorithm to determine
$\qch{q,t}(L(P))$ recursively starting from the $l$--highest weight
monomial $m_P$. It is a slight modification of the one in \cite{FM}.
We shall also explain why we require a large amount of memory to compute
$\chi_{q,t}$ of the $5^{\mathrm{th}}$ $l$--fundamental representation
of $\Ul$ with ${\mathfrak g} = E_8$. The problem does not exist for the other
$l$--fundamental representations.
We take the Drinfeld
polynomial $P = (P_i(u))$ with $P_i(u) = (1-u)^{\delta_{iN}}$, corresponding
to the $N^{\mathrm{th}}$ $l$--fundamental representation.
One of the key properties of $\chi_{q,t}$ of an $l$--fundamental
representation is that no monomial appearing in $\chi_{q,t}$ is
$l$--dominant except the $l$--highest one. This was proved in
\cite[Cor.~4.5]{FM} and \cite[4.13]{Na-qchar-main}.
For each monomial $m$ in $\qch{q,t}(L(P))$ we determine the
coefficient $a_m(t)\in{\mathbb Z}[t]$ and the $I$-tuple of polynomials
$(a_{m,i}(t))_{i\in I}$ with $a_{m,i}(t)\in {\mathbb Z}[t]$ (called the
{\it coloring\/}) recursively. Let us introduce several concepts.
We say $m$ is {\it admissible\/} if the
$a_{m,i}(t)$ are the same for all $i$ such that $m$ is not
$i$--dominant. We say {\it the algorithm fails at $m$\/} if $m$ is not
admissible.
We say {\it the algorithm stops at $m$} if $m$ is
$l$--dominant.
Now we explain the algorithm.
At the first stage we set $a_{m_P}(t) = 1$ and $a_{m_P,i}(t) = 0$ for
all $i\in I$ for the $l$--highest weight monomial $m_P$.
Next take a monomial $m$ such that $a_m(t)$ and $a_{m,i}(t)$ are
determined. If $m$ is not $i$--dominant for any $i$ (this will happen
if $m$ is the $l$--lowest weight monomial), we do nothing on $m$ and go to
the next monomial.
If $m$ is $i$--dominant, we compute $(a_m(t) - a_{m,i}(t))
E_i(m)$. We call this procedure the {\it $i$-expansion at $m$}.
We add each monomial $m'$ appearing there to the list.
For a monomial $m'$ in the list, we set $a_{m',i}(t)$ to be the sum
of the contributions to $m'$ in the $i$-expansions at the various
$i$--dominant monomials $m < m'$.
As there are only finitely many $m < m'$, $a_{m',i}(t)$ will
eventually be determined.
After all $a_{m',i}(t)$ are determined in this way, we can ask whether
$m'$ is admissible or not.
If $m'$ is not admissible (i.e.\ the algorithm fails at $m'$), we stop.
If $m'$ is $l$--dominant (i.e.\ the algorithm stops at $m'$), we stop.
If $m'$ is admissible and not $l$--dominant, we set $a_{m'}(t) =
a_{m',i}(t)$ for some (and, by admissibility, any) $i$ such that $m'$ is
not $i$--dominant.
We continue this procedure until all $a_{m}(t)$ and
$a_{m,i}(t)$ are determined, and all $(a_m(t) - a_{m,i}(t)) E_i(m)$
are expanded, or we stop at some $m$.
Now we apply the algorithm starting from the $l$--highest weight
monomial $m_P$.
As $\qch{q,t}(L(P))$ satisfies the properties (1),(2) in
\factref{thm:ind}, the algorithm cannot fail. As $\qch{q,t}(L(P))$ does
not contain $l$--dominant monomials other than the $l$--highest one, the
algorithm cannot stop.
Finally, as $L(P)$ is finite dimensional, $\qch{q,t}(L(P))$ contains
only finitely many monomials. Therefore we eventually determine
all $a_{m}(t)$ and $a_{m,i}(t)$.
\begin{Remark}
If we apply the same algorithm in case ${\mathfrak g}$ is a Kac-Moody Lie algebra
(say an affine Lie algebra), the algorithm does not fail, does not
stop, but we always get a new monomial in the expansion. Therefore the
procedure never ends.
\end{Remark}
Now we consider the $5^{\mathrm{th}}$ $l$--fundamental representation
of $\Ul$ with ${\mathfrak g} = E_8$, and we will explain why we need
various tricks to reduce the size of the data.
Because of these tricks, we could not know in advance how big the total
size would be, so we used the following estimate:
We know that the dimension of the $4^{\mathrm{th}}$ fundamental
representation of ${\mathfrak g}$ is $146325270$, while the $5^{\mathrm{th}}$ one is
$6899079264$. Therefore we expect the corresponding
$\chi_{q,t}$'s to have a similar size ratio. We first computed the
$4^{\mathrm{th}}$ $l$--fundamental representation and expected that the
total size of the $5^{\mathrm{th}}$ one would be about $50$ times as much.
This turned out to be approximately correct, as can be seen from the
data in the Introduction.
By \cite[Prop.~3.4]{Na-AD} the set of monomials appearing in the
$q$--character of an $l$--fundamental representation has a
$\Uq$-crystal structure, which is isomorphic to the corresponding
fundamental representation of $\Uq$. In particular, the number of the
monomials appearing in the $5^{\mathrm{th}}$ $l$--fundamental
representation is equal to the dimension of the $5^{\mathrm{th}}$
fundamental representation of ${\mathfrak g} = E_8$, i.e.\ $6899079264 \approx
6.4 \times 2^{30} = 6.4\mathrm{Giga}$.
For each monomial $m$, we must remember (a) the expression of the
monomial and (b) the coloring, i.e.\ an $I$-tuple of polynomials in
$t$.
Let us first consider how we can express the monomial.
It is known that the $l$--lowest weight monomial, i.e.\ the unique
monomial with (ordinary) weight $-\varpi_5$, is $Y_{5,q^{30}}^{-1}$
(see e.g.\ \cite[6.8]{FM}). We have
{\allowdisplaybreaks
\begin{equation*}
\begin{split}
Y_{5,q^{30}}^{-1} = Y_{5,1}
& \times A_{1,q^{5}}^{-1} A_{1,q^{7}}^{-1} A_{1,q^{9}}^{-1} A_{1,q^{11}}^{-1} A_{1,q^{13}}^{-1} A_{1,q^{15}}^{-2} A_{1,q^{17}}^{-1} A_{1,q^{19}}^{-1}
A_{1,q^{21}}^{-1} A_{1,q^{23}}^{-1} A_{1,q^{25}}^{-1}
\\ &\times
A_{2,q^{4}}^{-1} A_{2,q^{6}}^{-2} A_{2,q^{8}}^{-2} A_{2,q^{10}}^{-2} A_{2,q^{12}}^{-2} A_{2,q^{14}}^{-3} A_{2,q^{16}}^{-3} A_{2,q^{18}}^{-2}
A_{2,q^{20}}^{-2} A_{2,q^{22}}^{-2} A_{2,q^{24}}^{-2} A_{2,q^{26}}^{-1}
\\ &\times
A_{3,q^{3}}^{-1} A_{3,q^{5}}^{-2} A_{3,q^{7}}^{-3} A_{3,q^{9}}^{-3} A_{3,q^{11}}^{-3} A_{3,q^{13}}^{-4} A_{3,q^{15}}^{-4} A_{3,q^{17}}^{-4} A_{3,q^{19}}^{-3}
A_{3,q^{21}}^{-3} A_{3,q^{23}}^{-3} A_{3,q^{25}}^{-2} A_{3,q^{27}}^{-1}
\\ &\times
A_{4,q^{2}}^{-1} A_{4,q^{4}}^{-2} A_{4,q^{6}}^{-3} A_{4,q^{8}}^{-4} A_{4,q^{10}}^{-4} A_{4,q^{12}}^{-5} A_{4,q^{14}}^{-5} A_{4,q^{16}}^{-5} A_{4,q^{18}}^{-5}
A_{4,q^{20}}^{-4} A_{4,q^{22}}^{-4} A_{4,q^{24}}^{-3} A_{4,q^{26}}^{-2} A_{4,q^{28}}^{-1}
\\ &\times
A_{5,q^{1}}^{-1} A_{5,q^{3}}^{-2} A_{5,q^{5}}^{-3} A_{5,q^{7}}^{-4} A_{5,q^{9}}^{-5} A_{5,q^{11}}^{-6} A_{5,q^{13}}^{-6} A_{5,q^{15}}^{-6} A_{5,q^{17}}^{-6}
A_{5,q^{19}}^{-6} A_{5,q^{21}}^{-5} A_{5,q^{23}}^{-4} A_{5,q^{25}}^{-3} A_{5,q^{27}}^{-2} A_{5,q^{29}}^{-1}
\\ &\times
A_{6,q^{2}}^{-1} A_{6,q^{4}}^{-2} A_{6,q^{6}}^{-2} A_{6,q^{8}}^{-3} A_{6,q^{10}}^{-4} A_{6,q^{12}}^{-4} A_{6,q^{14}}^{-4} A_{6,q^{16}}^{-4} A_{6,q^{18}}^{-4}
A_{6,q^{20}}^{-4} A_{6,q^{22}}^{-3} A_{6,q^{24}}^{-2} A_{6,q^{26}}^{-2} A_{6,q^{28}}^{-1}
\\ &\times
A_{7,q^{3}}^{-1} A_{7,q^{5}}^{-1} A_{7,q^{7}}^{-1} A_{7,q^{9}}^{-2} A_{7,q^{11}}^{-2} A_{7,q^{13}}^{-2} A_{7,q^{15}}^{-2} A_{7,q^{17}}^{-2} A_{7,q^{19}}^{-2}
A_{7,q^{21}}^{-2} A_{7,q^{23}}^{-1} A_{7,q^{25}}^{-1} A_{7,q^{27}}^{-1}
\\ &\times
A_{8,q^{2}}^{-1} A_{8,q^{4}}^{-1} A_{8,q^{6}}^{-2} A_{8,q^{8}}^{-2} A_{8,q^{10}}^{-3} A_{8,q^{12}}^{-3} A_{8,q^{14}}^{-3} A_{8,q^{16}}^{-3} A_{8,q^{18}}^{-3}
A_{8,q^{20}}^{-3} A_{8,q^{22}}^{-2} A_{8,q^{24}}^{-2} A_{8,q^{26}}^{-1} A_{8,q^{28}}^{-1}.
\end{split}
\end{equation*}
Any} other monomial is equal to $Y_{5,1}$ multiplied by a part of the
$A_{i,q^k}^{-1}$'s appearing above. We record the monomial as a sequence of
$A_{i,q^k}^{-m}$'s, where $i$ runs from $1$ to $8$, $k$ runs from $1$ to $29$,
and $m$ runs from $1$ to $6$. We can store the triple $(i, k, m)$ in a
single \verb+short int+, i.e.\ $16$ bits of memory.
The length of the sequence is at most $106$, which is the length for
$Y_{5,q^{30}}^{-1}$. A naive count gives
\(
6899079264 \times 106 \times 16\mathrm{bit} > 1300\mathrm{Gbyte}.
\)
This is too large. Therefore we use the following trick:
Noticing that many monomials share the same sequences of
$A_{i,q^k}^{-m}$'s, we store the data in a tree so that we do not need
to repeat the common part.
By this trick, however, it becomes impossible to know in advance how much
space we need, as we mentioned above.
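For concreteness, one possible packing of the triple $(i,k,m)$ into a
single $16$-bit word is as follows: $3$ bits suffice for $i-1 \in
\{0,\dots,7\}$, $5$ bits for $k-1 \in \{0,\dots,28\}$, and $3$ bits for
$m-1 \in \{0,\dots,5\}$, i.e.\ $11$ bits in total. The following
Mathematica sketch illustrates such a layout (it is only an
illustration; any layout with these bit widths would do):
\begin{verbatim}
(* pack (i,k,m) into bits 8-10, 3-7 and 0-2 of one word *)
pack[i_, k_, m_] := BitOr[BitShiftLeft[i - 1, 8],
                          BitShiftLeft[k - 1, 3], m - 1]
(* recover the triple from a packed word w *)
unpack[w_] := {BitShiftRight[w, 8] + 1,
               Mod[BitShiftRight[w, 3], 32] + 1, Mod[w, 8] + 1}
\end{verbatim}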
Next let us turn to coloring.
By \cite{Na-qchar-main}, $\chi_{q,t}(L(P)) = \sum_m a_m(t) m$ is given
by the Poincar\'e polynomials of various graded quiver varieties
corresponding to $m$. Therefore the degree of the
coefficient $a_m(t)$ is equal to the (real) dimension of the variety
corresponding to $m$. On the other hand, the dimension of the graded
quiver variety is bounded by the half of the ordinary quiver variety
containing it. For the $5^{\mathrm{th}}$ fundamental representaion,
the maximum (among various connected components) of the dimension is
equal to $60$. Therefore the maximum of the degree is $30$. As
$a_{m,i}(t)$ is given by a virtual Hodge polynomial of a certain
stratum of the graded quiver variety, the degree is also less than or
equal to $30$. As $a_m(t)$, $a_{m,i}(t)$ are polynomials in $t^2$, we
have $30/2 + 1 = 16$ coefficients. Therefore we must record
$16\times 8$ integers for each monomial. We did not know in advance how
large these integers would be. As a result of our calculation, it
turns out that each one fits into a \verb+short int+. Then we would need
$16\times 8 \times 16\,\mathrm{bit} = 256\,\mathrm{byte}$ for each
monomial. This is a huge size, though it could probably have been
handled by our computer.
However we note that many monomials $m$ have coefficient $a_m(t) = 1$. We
store $a_{m,i}(t)$ for those monomials in a special format to save
space. As we do not need the $a_{m,i}(t)$ for the final result,
they are not included in it.
(As a result of our calculation we find $4639565354$ among
$6899079264$ monomials have this property.)
We have explained the total size of the data so far. In practice, it
is more important to know how much memory is required in the course of
the calculation.
For simplicity of the program, we replace the ordering $<$ among
monomials by a more manageable ordering given by
\begin{equation*}
\operatorname{depth}m \overset{\operatorname{\scriptstyle def.}}{=} \sum_{i,a} v_{i,a}(m,m_P).
\end{equation*}
Therefore the $l$--highest weight monomial has depth $0$,
$Y_{5,1}A_{5,q}^{-1}$ has depth $1$, etc. We expand the monomial of
depth $0$, then monomials with depth $1$, monomials with depth $2$,
and so on. When we expand all monomials of a given depth, we store all
obtained monomials together with their colorings in memory. As a single
monomial appears many times in the expansions at various monomials, it
is not practical to save the data on the hard disk.
Therefore the most crucial point is to save the size of data so that
the program requires, at a fixed depth, at most $200\,\mathrm{Gbyte}$ of
memory, which is the limit of the supercomputer. We estimated
the memory requirement from that for the $4^{\mathrm{th}}$ $l$--fundamental
representation as above, and we guessed that the calculation was
possible. Fortunately, this turned out to be true.
\section{Results}\label{sec:results}
We only consider the $5^{\mathrm{th}}$ $l$--fundamental representation
of $\Ul$ with ${\mathfrak g} = E_8$.
As the final result is a huge polynomial, we cannot give it here. So
we only give a part of the information. The monomial whose coefficient
has the highest degree $t^{30}$ is, together with its coefficient,
\begin{multline*}
(1 + 4t^2 + 10t^4 + 20t^6 + 33t^8 + 47t^{10} + 59t^{12} + 66t^{14}
\\
+ 66t^{16} + 59t^{18} + 47t^{20} + 33t^{22} + 20t^{24} + 10t^{26} +
4t^{28} + t^{30})
\\
\times Y_{1,q^{14}} Y_{1,q^{16}}^{-1} Y_{3,q^{14}}^2
Y_{3,q^{16}}^{-2} Y_{5,q^{14}}^3 Y_{5,q^{16}}^{-3} Y_{7,q^{14}} Y_{7,q^{16}}^{-1}.
\end{multline*}
The coefficient is the Poincar\'e polynomial of a certain graded
quiver variety.
We define the $t$--graded character by
\begin{equation*}
\operatorname{ch}_t(L(P))
= \left.\qch{q,t}(L(P))\right|_{Y_{i,a}\to y_i}.
\end{equation*}
If we put $t=1$, it becomes the ordinary character of the restriction
of $L(P)$ to $\Uq$. It is also equal to the generating function of the
Poincar\'e polynomials of the quiver varieties, where the degree $0$
corresponds to the middle degree. For example, the coefficient of the
weight $0$ is
\begin{multline*}
1357104 + 2232771{}t^2 + 2002423{}t^4 + 1317308{}t^6 + 716312{}t^8 +
342421{}t^{10} + 148512{}t^{12}
\\
+ 59490{}t^{14} + 22162{}t^{16} + 7687{}t^{18} +
2463{}t^{20} + 726{}t^{22} + 192{}t^{24} + 44{}t^{26} + 8{}t^{28} + t^{30}.
\end{multline*}
Let $V(\lambda)$ denote the irreducible highest weight representation
of $\Uq$ with the highest weight $\lambda$. Let $\operatorname{ch}
V(\lambda)$ be its character. If we write
\begin{equation*}
\operatorname{ch}_t L(P)
= \sum_\lambda M(P,\lambda,t) \operatorname{ch} V(\lambda),
\end{equation*}
the coefficient $M(P,\lambda,t)$ specializes at $t=1$ to the multiplicity of
$V(\lambda)$ in the restriction of $L(P)$. The fermionic
formula mentioned in the Introduction is a conjectural expression of
$M(P,\lambda,t)$ (for $P$ corresponding to the Kirillov-Reshetikhin
modules).
As we have computed $\qch{q,t}(L(P))$, $M(P,\lambda,t)$ can be given
if we compute $\operatorname{ch} V(\lambda)$. Let us compute it by the method in
\cite[7.1.1]{Na-qchar}, i.e.\
\begin{equation*}
\operatorname{ch} V(\lambda) = \left.\qch{q,t}(L(Q))\right|_{Y_{i,a}\to y_i, t\to 0},
\end{equation*}
where the $Q$ corresponding to $\lambda$ is given as follows: We choose an
orientation for each edge of the Dynkin diagram and choose a function
$m\colon I\to {\mathbb Z}$ such that $m(i) - m(j) = 1$ for an oriented
edge $i\to j$. Then we take
\begin{equation*}
Q_i(u) = (1 - uq^{m(i)})^{\langle\lambda, h_i\rangle}.
\end{equation*}
For this choice of $Q$, it is known that
\(
\operatorname{ch}_t(L(Q)) =
\qch{q,t}(L(Q))|_{Y_{i,a}\to y_i}
\)
is equal to the generating function of the shifted Poincar\'e polynomials
of the quiver varieties as above. In particular, it is independent of
the choice of the orientation. For each dominant weight $\lambda$
appearing in $\operatorname{ch}_t L(P)$, we choose $Q = Q_\lambda$ as
above and define matrices
\(
P(t) = (P_{\lambda\mu}(t))
\)
and
\(
IC(t) = (IC_{\lambda\mu}(t))
\)
by
\begin{gather*}
\operatorname{ch}_t L(Q_\lambda)
= \sum_\mu P_{\lambda\mu}(t) e^{\mu} + \text{non dominant terms},
\\
\operatorname{ch}_t L(Q_\lambda)
= \sum_\mu IC_{\lambda\mu}(t) \operatorname{ch} V(\mu).
\end{gather*}
Then we have
\begin{equation*}
IC(t) = P(t) P(0)^{-1}.
\end{equation*}
By \cite{Na-qchar,Na-qchar-main} $IC_{\lambda\mu}(t)$ is the
Poincar\'e polynomial of the stalk of the intersection cohomology
sheaf of a stratum of the quiver variety corresponding to $\lambda$ at
a point in the stratum corresponding to $\mu$. In our case it is given
by
\begin{equation*}
IC(t) =
\left(
\begin{tabular}{c|c|c}
Table~\ref{tab:table1} & &
\\\cline{1-1}
& \raisebox{1.5ex}[0cm][0cm]{Table~\ref{tab:table2}} &
\raisebox{1.5ex}[0cm][0cm]{Table~\ref{tab:table3}}
\\\cline{2-3}
\raisebox{1.5ex}[0cm][0cm]{0} & 0 & Table~\ref{tab:table4}
\end{tabular}
\right),
\end{equation*}
where $y_i = e^{\varpi_i}$.
The first row gives $\operatorname{ch}_t(L(P))$ for the
$5^{\mathrm{th}}$ $l$--fundamental representation $L(P)$. We see that
it coincides with the conjectural formula in \cite{HKOTY}. The same
assertion for other $l$--fundamental representations can be proved by
invoking other rows. The same can be proved for types $E_6$, $E_7$ in
the same manner.
\begin{table}
\begin{sideways}
\newcolumntype{L}{>{$\scriptstyle}l<{$}}
\begin{tabular}{>{$}c<{$}|LLLLLLLLLL}
&
\multicolumn{1}{c}{$\varpi_5$} &
\multicolumn{1}{c}{$\varpi_3+\varpi_7$} &
\multicolumn{1}{c}{$\varpi_2+\varpi_8$} &
\multicolumn{1}{c}{$\varpi_1+2\varpi_7$} &
\multicolumn{1}{c}{$\varpi_1+\varpi_6$} &
\multicolumn{1}{c}{$2\varpi_2$} &
\multicolumn{1}{c}{$\varpi_7+\varpi_8$} &
\multicolumn{1}{c}{$\varpi_1+\varpi_3$} &
\multicolumn{1}{c}{$\varpi_4$} &
\multicolumn{1}{c}{$2\varpi_1+\varpi_7$}
\\
\hline
\varpi_5 &
1&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{4}&{
t}^{2}+{t}^{4}+{t}^{6}&{t}^{6}&{t}^{2}+2\,{t}^{4}+2\,{t}^{6}+{t}^{8}&2
\,{t}^{4}+{t}^{6}+{t}^{8}&{t}^{2}+2\,{t}^{4}+2\,{t}^{6}+{t}^{8}+{t}^{
10}&2\,{t}^{6}+{t}^{8}+{t}^{10}\\
\varpi_3+\varpi_7 &
0&1&{t}^{2}&{t}^{2}
&{t}^{2}+{t}^{4}&{t}^{4}&{t}^{2}+2\,{t}^{4}+{t}^{6}&{t}^{2}+{t}^{4}+{t
}^{6}&{t}^{2}+2\,{t}^{4}+{t}^{6}+{t}^{8}&2\,{t}^{4}+{t}^{6}+{t}^{8}
\\
\varpi_2+\varpi_8 &
0&0&1&0&{t}^{2}&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{2}+{t
}^{4}&{t}^{2}+{t}^{4}+{t}^{6}&{t}^{4}+{t}^{6}\\
\varpi_1+2\varpi_7 &
0&0&0
&1&{t}^{2}&0&{t}^{2}+{t}^{4}&{t}^{4}&{t}^{4}+{t}^{6}&{t}^{2}+{t}^{4}+{
t}^{6}\\
\varpi_1+\varpi_6 &
0&0&0&0&1&0&{t}^{2}&{t}^{2}&{t}^{2}+{t}^{4}&
{t}^{2}+{t}^{4}\\
2\varpi_2 &
0&0&0&0&0&1&0&{t}^{2}&{t}^{4}&{t}^{
4}\\
\varpi_7+\varpi_8 &
0&0&0&0&0&0&1&0&{t}^{2}&0\\
\varpi_1+\varpi_3 &
0&0
&0&0&0&0&0&1&{t}^{2}&{t}^{2}\\
\varpi_4 &
0&0&0&0&0&0&0&0&1&0
\\
2\varpi_1+\varpi_7 &
0&0&0&0&0&0&0&0&0&1
\end{tabular}
\end{sideways}
\caption{A block of the matrix $IC(t)$ (see the text).}
\label{tab:table1}
\end{table}
\begin{table}
\begin{sideways}
\newcolumntype{L}{>{$\scriptstyle}p{2.3cm}<{$}}
\begin{tabular}{>{$}c<{$}|LLLLLLLL}
&
\multicolumn{1}{c}{$\varpi_2+\varpi_7$} &
\multicolumn{1}{c}{$\varpi_1+\varpi_8$} &
\multicolumn{1}{c}{$2\varpi_7$} &
\multicolumn{1}{c}{$\varpi_6$} &
\multicolumn{1}{c}{$3\varpi_1$} &
\multicolumn{1}{c}{$\varpi_1+\varpi_2$} &
\multicolumn{1}{c}{$\varpi_3$} &
\multicolumn{1}{c}{$\varpi_1+\varpi_7$}
\\
\hline
\varpi_5 &
3\,{t}^{4}+4\,{t}^{6}+4\,{t}^{8}+2\,{
t}^{10}+{t}^{12}&2\,{t}^{4}+5\,{t}^{6}+5\,{t}^{8}+3\,{t}^{10}+2\,{t}^{
12}+{t}^{14}&3\,{t}^{6}+2\,{t}^{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}&2\,{t
}^{4}+4\,{t}^{6}+6\,{t}^{8}+4\,{t}^{10}+3\,{t}^{12}+{t}^{14}+{t}^{16}&
{t}^{8}+{t}^{12}&2\,{t}^{6}+5\,{t}^{8}+5\,{t}^{10}+3\,{t}^{12}+2\,{t}^
{14}+{t}^{16}&5\,{t}^{6}+5\,{t}^{8}+7\,{t}^{10}+4\,{t}^{12}+3\,{t}^{14
}+{t}^{16}+{t}^{18}& 2\,{t}^{6}+9\,{t}^{8}+10\,{t}^{10}+10\,{t}^{12}
+6\,{t}^{14}+4\,{t}^{16}+2\,{t}^{18}+{t}^{20}
\\
\varpi_3+\varpi_7 &
{t}^{2}
+3\,{t}^{4}+4\,{t}^{6}+2\,{t}^{8}+{t}^{10}&3\,{t}^{4}+5\,{t}^{6}+3\,{t
}^{8}+2\,{t}^{10}+{t}^{12}&2\,{t}^{4}+2\,{t}^{6}+3\,{t}^{8}+{t}^{10}+{
t}^{12}&2\,{t}^{4}+5\,{t}^{6}+4\,{t}^{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}
&{t}^{6}+{t}^{10}&{t}^{4}+4\,{t}^{6}+5\,{t}^{8}+3\,{t}^{10}+2\,{t}^{12
}+{t}^{14}&2\,{t}^{4}+4\,{t}^{6}+7\,{t}^{8}+4\,{t}^{10}+3\,{t}^{12}+{t
}^{14}+{t}^{16}&6\,{t}^{6}+9\,{t}^{8}+10\,{t}^{10}+6\,{t}^{12}+4\,{t}^
{14}+2\,{t}^{16}+{t}^{18}\\
\varpi_2+\varpi_8 &
{t}^{2}+3\,{t}^{4}+2\,{t}
^{6}+{t}^{8}&{t}^{2}+3\,{t}^{4}+3\,{t}^{6}+2\,{t}^{8}+{t}^{10}&{t}^{4}
+2\,{t}^{6}+{t}^{8}+{t}^{10}&3\,{t}^{4}+3\,{t}^{6}+3\,{t}^{8}+{t}^{10}
+{t}^{12}&{t}^{8}&2\,{t}^{4}+4\,{t}^{6}+3\,{t}^{8}+2\,{t}^{10}+{t}^{12
}&2\,{t}^{4}+5\,{t}^{6}+4\,{t}^{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{
4}+6\,{t}^{6}+8\,{t}^{8}+6\,{t}^{10}+4\,{t}^{12}+2\,{t}^{14}+{t}^{16}
\\
\varpi_1+2\varpi_7 &
{t}^{2}+2\,{t}^{4}+2\,{t}^{6}+{t}^{8}&2\,{t}^{4}+3
\,{t}^{6}+2\,{t}^{8}+{t}^{10}&{t}^{2}+{t}^{4}+2\,{t}^{6}+{t}^{8}+{t}^{
10}&2\,{t}^{4}+3\,{t}^{6}+3\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{4}+{t}^{8}
&{t}^{4}+3\,{t}^{6}+3\,{t}^{8}+2\,{t}^{10}+{t}^{12}&4\,{t}^{6}+4\,{t}^
{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}&2\,{t}^{4}+4\,{t}^{6}+8\,{t}^{8}+6\,
{t}^{10}+4\,{t}^{12}+2\,{t}^{14}+{t}^{16}\\
\varpi_1+\varpi_6 &
{t}^{2}+2
\,{t}^{4}+{t}^{6}&{t}^{2}+3\,{t}^{4}+2\,{t}^{6}+{t}^{8}&{t}^{4}+{t}^{6
}+{t}^{8}&{t}^{2}+2\,{t}^{4}+3\,{t}^{6}+{t}^{8}+{t}^{10}&{t}^{6}&2\,{t
}^{4}+3\,{t}^{6}+2\,{t}^{8}+{t}^{10}&3\,{t}^{4}+4\,{t}^{6}+3\,{t}^{8}+
{t}^{10}+{t}^{12}&2\,{t}^{4}+6\,{t}^{6}+6\,{t}^{8}+4\,{t}^{10}+2\,{t}^
{12}+{t}^{14}\\
2\varpi_2 &
{t}^{2}+{t}^{4}+{t}^{6}&{t}^{4}+2\,{t
}^{6}+{t}^{8}&{t}^{4}+{t}^{8}&2\,{t}^{6}+{t}^{8}+{t}^{10}&{t}^{6}&{t}^
{2}+2\,{t}^{4}+2\,{t}^{6}+2\,{t}^{8}+{t}^{10}&2\,{t}^{4}+2\,{t}^{6}+3
\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{4}+4\,{t}^{6}+4\,{t}^{8}+4\,{t}^{10}+
2\,{t}^{12}+{t}^{14}\\
\varpi_7+\varpi_8 &
{t}^{2}+{t}^{4}&{t}^{2}+{t}^{4
}+{t}^{6}&{t}^{2}+{t}^{4}+{t}^{6}&{t}^{2}+2\,{t}^{4}+{t}^{6}+{t}^{8}&0
&{t}^{4}+{t}^{6}+{t}^{8}&2\,{t}^{4}+2\,{t}^{6}+{t}^{8}+{t}^{10}&2\,{t}
^{4}+4\,{t}^{6}+3\,{t}^{8}+2\,{t}^{10}+{t}^{12}
\\
\varpi_1+\varpi_3 &
{t}^{2}+{t}^{4}&{t}^{2}+2\,{t}^{4}+{t}^{6}&{t}^{6}&2\,{t}^{4}+{t}^{6}+{t}
^{8}&{t}^{4}&{t}^{2}+2\,{t}^{4}+2\,{t}^{6}+{t}^{8}&{t}^{2}+2\,{t}^{4}+
3\,{t}^{6}+{t}^{8}+{t}^{10}&3\,{t}^{4}+4\,{t}^{6}+4\,{t}^{8}+2\,{t}^{
10}+{t}^{12}
\\
\varpi_4 &
{t}^{2}&{t}^{2}+{t}^{4}&{t}^{4}&{t}^{2
}+{t}^{4}+{t}^{6}&0&{t}^{4}+{t}^{6}&{t}^{2}+2\,{t}^{4}+{t}^{6}+{t}^{8}
&2\,{t}^{4}+3\,{t}^{6}+2\,{t}^{8}+{t}^{10}\\
2\varpi_1+\varpi_7 &
{t}^{2}&
{t}^{2}+{t}^{4}&{t}^{4}&{t}^{4}+{t}^{6}&{t}^{2}&{t}^{2}+2\,{t}^{4}+{t}
^{6}&2\,{t}^{4}+{t}^{6}+{t}^{8}&{t}^{2}+2\,{t}^{4}+4\,{t}^{6}+2\,{t}^{
8}+{t}^{10}
\\
\varpi_2+\varpi_7 &
1&{t}^{2}&{t}^{2}&{t}^{2}+{t}^{4}&0&{t}
^{2}+{t}^{4}&{t}^{2}+{t}^{4}+{t}^{6}&{t}^{2}+3\,{t}^{4}+2\,{t}^{6}+{t}
^{8}\\
\varpi_1+\varpi_8 &
0&1&0&{t}^{2}&0&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{2
}+2\,{t}^{4}+{t}^{6}\\
2\varpi_7 &
0&0&1&{t}^{2}&0&0&{t}^{4}&{t}^
{2}+{t}^{4}+{t}^{6}\\
\varpi_6 &
0&0&0&1&0&0&{t}^{2}&{t}^{2}+{t}
^{4}\\
3\varpi_1 &
0&0&0&0&1&{t}^{2}+{t}^{4}&{t}^{6}&{t}^{4}+{t}^
{6}+{t}^{8}\\
\varpi_1+\varpi_2 &
0&0&0&0&0&1&{t}^{2}&{t}^{2}+{t}^{4}
\\
\varpi_3 &
0&0&0&0&0&0&1&{t}^{2}\\
\varpi_1+\varpi_7&
0&0&0&0&0&0&0&1
\end{tabular}
\end{sideways}
\caption{A block of the matrix $IC(t)$ (see the text).}
\label{tab:table2}
\end{table}
\begin{table}
\begin{sideways}
\newcolumntype{L}{>{$\scriptstyle}p{3.3cm}<{$}}
\begin{tabular}{>{$}c<{$}|LLLLLL}
&
\multicolumn{1}{c}{$\varpi_8$} &
\multicolumn{1}{c}{$2\varpi_1$} &
\multicolumn{1}{c}{$\varpi_2$} &
\multicolumn{1}{c}{$\varpi_7$} &
\multicolumn{1}{c}{$\varpi_1$} &
\multicolumn{1}{c}{$0$}
\\
\hline
\varpi_5 &
{t}^{6}+5\,{t}^{8}+8\,{t}^{10}+7\,{t}^{
12}+6\,{t}^{14}+4\,{t}^{16}+2\,{t}^{18}+{t}^{20}+{t}^{22}&5\,{t}^{10}+
4\,{t}^{12}+6\,{t}^{14}+3\,{t}^{16}+3\,{t}^{18}+{t}^{20}+{t}^{22}&3\,{
t}^{8}+6\,{t}^{10}+11\,{t}^{12}+8\,{t}^{14}+7\,{t}^{16}+4\,{t}^{18}+3
\,{t}^{20}+{t}^{22}+{t}^{24}&5\,{t}^{10}+6\,{t}^{12}+9\,{t}^{14}+6\,{t
}^{16}+6\,{t}^{18}+3\,{t}^{20}+2\,{t}^{22}+{t}^{24}+{t}^{26}&4\,{t}^{
12}+5\,{t}^{14}+8\,{t}^{16}+5\,{t}^{18}+5\,{t}^{20}+3\,{t}^{22}+2\,{t}
^{24}+{t}^{26}+{t}^{28}&{t}^{14}+3\,{t}^{18}+{t}^{20}+2\,{t}^{22}+{t}^
{24}+{t}^{26}+{t}^{30}
\\
\varpi_3+\varpi_7 &
2\,{t}^{6}+7\,{t}^{8}+7\,{t}
^{10}+6\,{t}^{12}+4\,{t}^{14}+2\,{t}^{16}+{t}^{18}+{t}^{20}&4\,{t}^{8}
+4\,{t}^{10}+6\,{t}^{12}+3\,{t}^{14}+3\,{t}^{16}+{t}^{18}+{t}^{20}&{t}
^{6}+4\,{t}^{8}+10\,{t}^{10}+8\,{t}^{12}+7\,{t}^{14}+4\,{t}^{16}+3\,{t
}^{18}+{t}^{20}+{t}^{22}&3\,{t}^{8}+5\,{t}^{10}+9\,{t}^{12}+6\,{t}^{14
}+6\,{t}^{16}+3\,{t}^{18}+2\,{t}^{20}+{t}^{22}+{t}^{24}&3\,{t}^{10}+4
\,{t}^{12}+8\,{t}^{14}+5\,{t}^{16}+5\,{t}^{18}+3\,{t}^{20}+2\,{t}^{22}
+{t}^{24}+{t}^{26}&{t}^{12}+3\,{t}^{16}+{t}^{18}+2\,{t}^{20}+{t}^{22}+
{t}^{24}+{t}^{28}
\\
\varpi_2+\varpi_8 &
4\,{t}^{6}+5\,{t}^{8}+6\,{t}^{10}
+4\,{t}^{12}+2\,{t}^{14}+{t}^{16}+{t}^{18}&{t}^{6}+3\,{t}^{8}+5\,{t}^{
10}+3\,{t}^{12}+3\,{t}^{14}+{t}^{16}+{t}^{18}&{t}^{6}+7\,{t}^{8}+7\,{t
}^{10}+7\,{t}^{12}+4\,{t}^{14}+3\,{t}^{16}+{t}^{18}+{t}^{20}&3\,{t}^{8
}+6\,{t}^{10}+6\,{t}^{12}+6\,{t}^{14}+3\,{t}^{16}+2\,{t}^{18}+{t}^{20}
+{t}^{22}&3\,{t}^{10}+6\,{t}^{12}+5\,{t}^{14}+5\,{t}^{16}+3\,{t}^{18}+
2\,{t}^{20}+{t}^{22}+{t}^{24}&2\,{t}^{14}+{t}^{16}+2\,{t}^{18}+{t}^{20
}+{t}^{22}+{t}^{26}
\\
\varpi_1+2\varpi_7 &
2\,{t}^{6}+5\,{t}^{8}+6\,{t}^{
10}+4\,{t}^{12}+2\,{t}^{14}+{t}^{16}+{t}^{18}&2\,{t}^{6}+2\,{t}^{8}+5
\,{t}^{10}+3\,{t}^{12}+3\,{t}^{14}+{t}^{16}+{t}^{18}&{t}^{6}+5\,{t}^{8
}+7\,{t}^{10}+7\,{t}^{12}+4\,{t}^{14}+3\,{t}^{16}+{t}^{18}+{t}^{20}&{t
}^{6}+2\,{t}^{8}+6\,{t}^{10}+6\,{t}^{12}+6\,{t}^{14}+3\,{t}^{16}+2\,{t
}^{18}+{t}^{20}+{t}^{22}&2\,{t}^{8}+2\,{t}^{10}+6\,{t}^{12}+5\,{t}^{14
}+5\,{t}^{16}+3\,{t}^{18}+2\,{t}^{20}+{t}^{22}+{t}^{24}&{t}^{10}+2\,{t
}^{14}+{t}^{16}+2\,{t}^{18}+{t}^{20}+{t}^{22}+{t}^{26}
\\
\varpi_1+\varpi_6 &
{t}^{4}+4\,{t}^{6}+6\,{t}^{8}+4\,{t}^{10}+2\,{t}^{
12}+{t}^{14}+{t}^{16}&{t}^{6}+4\,{t}^{8}+3\,{t}^{10}+3\,{t}^{12}+{t}^{
14}+{t}^{16}&3\,{t}^{6}+6\,{t}^{8}+7\,{t}^{10}+4\,{t}^{12}+3\,{t}^{14}
+{t}^{16}+{t}^{18}&{t}^{6}+4\,{t}^{8}+6\,{t}^{10}+6\,{t}^{12}+3\,{t}^{
14}+2\,{t}^{16}+{t}^{18}+{t}^{20}&{t}^{8}+4\,{t}^{10}+5\,{t}^{12}+5\,{
t}^{14}+3\,{t}^{16}+2\,{t}^{18}+{t}^{20}+{t}^{22}&{t}^{12}+{t}^{14}+2
\,{t}^{16}+{t}^{18}+{t}^{20}+{t}^{24}
\\
2\varpi_2 &
{t}^{6}+4\,{t
}^{8}+4\,{t}^{10}+2\,{t}^{12}+{t}^{14}+{t}^{16}&{t}^{4}+{t}^{6}+4\,{t}
^{8}+2\,{t}^{10}+3\,{t}^{12}+{t}^{14}+{t}^{16}&3\,{t}^{6}+3\,{t}^{8}+6
\,{t}^{10}+4\,{t}^{12}+3\,{t}^{14}+{t}^{16}+{t}^{18}&3\,{t}^{8}+3\,{t}
^{10}+6\,{t}^{12}+3\,{t}^{14}+2\,{t}^{16}+{t}^{18}+{t}^{20}&{t}^{8}+4
\,{t}^{10}+3\,{t}^{12}+5\,{t}^{14}+3\,{t}^{16}+2\,{t}^{18}+{t}^{20}+{t
}^{22}&2\,{t}^{12}+2\,{t}^{16}+{t}^{18}+{t}^{20}+{t}^{24}
\\
\varpi_7+\varpi_8 &
{t}^{4}+3\,{t}^{6}+3\,{t}^{8}+2\,{t}^{10}+{t}^{12}
+{t}^{14}&{t}^{6}+2\,{t}^{8}+2\,{t}^{10}+{t}^{12}+{t}^{14}&2\,{t}^{6}+
4\,{t}^{8}+3\,{t}^{10}+3\,{t}^{12}+{t}^{14}+{t}^{16}&{t}^{6}+3\,{t}^{8
}+4\,{t}^{10}+3\,{t}^{12}+2\,{t}^{14}+{t}^{16}+{t}^{18}&{t}^{8}+3\,{t}
^{10}+3\,{t}^{12}+3\,{t}^{14}+2\,{t}^{16}+{t}^{18}+{t}^{20}&{t}^{12}+{
t}^{14}+{t}^{16}+{t}^{18}+{t}^{22}
\\
\varpi_1+\varpi_3 &
{t}^{4}+4\,{t}^{
6}+4\,{t}^{8}+2\,{t}^{10}+{t}^{12}+{t}^{14}&3\,{t}^{6}+2\,{t}^{8}+3\,{
t}^{10}+{t}^{12}+{t}^{14}&{t}^{4}+3\,{t}^{6}+6\,{t}^{8}+4\,{t}^{10}+3
\,{t}^{12}+{t}^{14}+{t}^{16}&2\,{t}^{6}+3\,{t}^{8}+6\,{t}^{10}+3\,{t}^
{12}+2\,{t}^{14}+{t}^{16}+{t}^{18}&3\,{t}^{8}+3\,{t}^{10}+5\,{t}^{12}+
3\,{t}^{14}+2\,{t}^{16}+{t}^{18}+{t}^{20}&{t}^{10}+2\,{t}^{14}+{t}^{16
}+{t}^{18}+{t}^{22}
\\
\varpi_4 &
2\,{t}^{4}+3\,{t}^{6}+2\,{t}^{8
}+{t}^{10}+{t}^{12}&{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&3\,{t}^{6}+3
\,{t}^{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{6}+4\,{t}^{8}+3\,{t}^{10}
+2\,{t}^{12}+{t}^{14}+{t}^{16}&{t}^{8}+3\,{t}^{10}+3\,{t}^{12}+2\,{t}^
{14}+{t}^{16}+{t}^{18}&{t}^{12}+{t}^{14}+{t}^{16}+{t}^{20}
\\
2\varpi_1+\varpi_7 &
{t}^{4}+3\,{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&2
\,{t}^{4}+2\,{t}^{6}+3\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{4}+4\,{t}^{6}+4
\,{t}^{8}+3\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{4}+{t}^{6}+5\,{t}^{8}+3\,
{t}^{10}+2\,{t}^{12}+{t}^{14}+{t}^{16}&2\,{t}^{6}+2\,{t}^{8}+5\,{t}^{
10}+3\,{t}^{12}+2\,{t}^{14}+{t}^{16}+{t}^{18}&{t}^{8}+2\,{t}^{12}+{t}^
{14}+{t}^{16}+{t}^{20}
\\
\varpi_2+\varpi_7 &
2\,{t}^{4}+2\,{t}^{6}+{t}^{8
}+{t}^{10}&{t}^{4}+2\,{t}^{6}+{t}^{8}+{t}^{10}&2\,{t}^{4}+3\,{t}^{6}+3
\,{t}^{8}+{t}^{10}+{t}^{12}&3\,{t}^{6}+3\,{t}^{8}+2\,{t}^{10}+{t}^{12}
+{t}^{14}&{t}^{6}+3\,{t}^{8}+3\,{t}^{10}+2\,{t}^{12}+{t}^{14}+{t}^{16}
&{t}^{10}+{t}^{12}+{t}^{14}+{t}^{18}
\\
\varpi_1+\varpi_8 &
{t}^{2}+2\,{t}
^{4}+{t}^{6}+{t}^{8}&{t}^{4}+{t}^{6}+{t}^{8}&2\,{t}^{4}+3\,{t}^{6}+{t}
^{8}+{t}^{10}&{t}^{4}+3\,{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{6}+
3\,{t}^{8}+2\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{10}+{t}^{12}+{t}^{16}
\\
2\varpi_7 &
{t}^{4}+{t}^{6}+{t}^{8}&{t}^{4}+{t}^{8}&2\,{t}^{6}
+{t}^{8}+{t}^{10}&{t}^{4}+{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{6}
+{t}^{8}+2\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{8}+{t}^{12}+{t}^{16}
\\
\varpi_6 &
{t}^{2}+{t}^{4}+{t}^{6}&{t}^{6}&2\,{t}^{4}+{t}^{6}
+{t}^{8}&{t}^{4}+2\,{t}^{6}+{t}^{8}+{t}^{10}&{t}^{6}+2\,{t}^{8}+{t}^{
10}+{t}^{12}&{t}^{10}+{t}^{14}
\\
3\varpi_1 &
{t}^{8}+{t}^{10}&{t}^{2}+{t}^{4}+2\,{t}
^{6}+{t}^{8}+{t}^{10}&{t}^{4}+2\,{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&
{t}^{6}+{t}^{8}+2\,{t}^{10}+{t}^{12}+{t}^{14}&{t}^{4}+{t}^{6}+3\,{t}^{
8}+2\,{t}^{10}+2\,{t}^{12}+{t}^{14}+{t}^{16}&{t}^{6}+{t}^{10}+{t}^{12}
+{t}^{14}+{t}^{18}
\\
\varpi_1+\varpi_2 &
{t}^{4}+{t}^{6}&{t}^{2}+{t}^{4}+
{t}^{6}&{t}^{2}+2\,{t}^{4}+{t}^{6}+{t}^{8}&{t}^{4}+2\,{t}^{6}+{t}^{8}+
{t}^{10}&{t}^{4}+2\,{t}^{6}+2\,{t}^{8}+{t}^{10}+{t}^{12}&{t}^{8}+{t}^{
10}+{t}^{14}
\\
\varpi_3 &
{t}^{2}+{t}^{4}&{t}^{4}&{t}^{2}+{t}^{4
}+{t}^{6}&2\,{t}^{4}+{t}^{6}+{t}^{8}&2\,{t}^{6}+{t}^{8}+{t}^{10}&{t}^{
8}+{t}^{12}
\\
\varpi_1+\varpi_7&
{t}^{2}&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{2}
+{t}^{4}+{t}^{6}&2\,{t}^{4}+{t}^{6}+{t}^{8}&{t}^{6}+{t}^{10}
\end{tabular}
\end{sideways}
\caption{A block of the matrix $IC(t)$ (see the text).}
\label{tab:table3}
\end{table}
\begin{table}
\newcolumntype{L}{>{$\scriptstyle}l<{$}}
\begin{tabular}{>{$}c<{$}|LLLLLL}
&
\multicolumn{1}{c}{$\varpi_8$} &
\multicolumn{1}{c}{$2\varpi_1$} &
\multicolumn{1}{c}{$\varpi_2$} &
\multicolumn{1}{c}{$\varpi_7$} &
\multicolumn{1}{c}{$\varpi_1$} &
\multicolumn{1}{c}{$0$}
\\
\hline
\varpi_8&
1&0&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{4}+{t}^{6}&{t}^{8
}
\\
2\varpi_1&
0&1&{t}^{2}&{t}^{4}&{t}^{2}+{t}^{4}+{t}^{6}&{t}^{
4}+{t}^{8}
\\
\varpi_2&
0&0&1&{t}^{2}&{t}^{2}+{t}^{4}&{t}^{6}
\\
\varpi_7&
0&0&0&1&{t}^{2}&{t}^{4}
\\
\varpi_1&
0&0&0&0&1&{t}^{2}
\\
0&
0&0&0&0&0&1
\end{tabular}
\caption{A block of the matrix $IC(t)$ (see the text).}
\label{tab:table4}
\end{table}
\section{Introduction}
The intention of this article is to make it clear to theorists how to use available experimental data to carry out standard Bayesian inference, by which they can estimate and set limits on parameters of interest of their own theoretical models or, depending on the mood, of their friends' models.
\paragraph*{Disclaimer:} The method described in this article is not, currently, an official recommendation of any experimental collaboration. The author is a High Energy experimentalist who writes on his own behalf. The strengths and limitations of the method will be explained, so, the readers should use their own judgement, as always.
\subsection{Warning}
One should never think it's possible to claim a discovery without consulting with the experimental collaboration which produced the data. If one suspects something significant is seen in some data, it is essential to investigate possible detector effects, or other experimental factors that could explain it, before attributing it to new physics.
Interpretations should be discussed with the experimentalists who produced the data you use! {\em The goal of this article is to strengthen the collaboration between theorists and experimentalists, not to let theorists run off with potentially wrong conclusions.}
\subsection{Why Bayesian inference?}
Bayesian inference dates back to the 18th century, and is based on solid theoretical ground: standard probability theory, which underpins Bayes' theorem. So, although in recent years it has emerged as the cutting edge of Statistics, it is actually very old.
Luckily, Bayesian inference is very easy to carry out, which makes it possible to propose here a practical procedure that theorists can actually use without complicated software or large computing power.
The fundamental advantage of Bayesian inference, compared to Frequentist methods, is that it makes statements directly about the {\bf parameter of interest (POI)}. Namely, in the end one finds the {\bf probability density function (PDF)} of his POI. This is not true for any of the Frequentist constructions, where confidence intervals are obtained. There, no statement is made about the probability of the POI to be within any interval. The coverage of a Frequentist confidence interval is a statistical property of the confidence interval itself, {\em not} the probability for the POI to be within that interval, as many wrongly think. The Frequentist approach is to assume various values for the POI, and compute how likely the data would be according to each value, and then set the limit at the value which, if assumed, starts making the observed data very unlikely. But the fact that $P(\rm data | hypothesis)$ is small doesn't entail that $P(\rm hypothesis | data)$ is also small, and the latter is what one really asks about\footnote{Think of the classic example: The obvious difference between $P(\rm pregnant|female)$ and $P(\rm female|pregnant)$.}. The Bayesian approach is to assume the data, and find how likely each hypothesis is, thus compute $P(\rm hypothesis | data)$ directly.
\subsection{The prior}
\label{sec:prior}
A necessary ingredient of inference is the prior, which is the PDF assumed for the POI before seeing the data. This prior PDF could be the posterior of a previous experiment, but the latter too would depend on some prior used to interpret the data of the previous experiment. In the end it is impossible to avoid the dependence on some prior\footnote{This very first prior, which is used to make an inference with the very first data, is sometimes called "uberprior". As more data is accumulated, the uberprior keeps being updated to give newer posteriors. It makes no difference if the uberprior is updated with all experimental observations at once, or if the updates are done incrementally using the posterior of the last inference as a prior for the next inference.}.
Some people, the so-called ``subjective Bayesians'', embrace the prior as a means to express the mathematical fact that the conclusion (i.e.\ the posterior PDF) doesn't depend only on the evidence (i.e.\ the data), but also on the initial assumptions under which this evidence is interpreted (i.e.\ the prior PDF). Not surprisingly, different people will arrive at different conclusions, or, the same person will arrive at different conclusions, if he starts from different assumptions. These assumptions don't have to reflect anyone's subjective distribution of probability, since nobody prohibits asking what the result would be if the prior were different, regardless of personal preferences.
To draw an analogy, the posterior is like a function ($f$) of the prior ($x$). Just like the function $f(x)$ could be evaluated at any $x$, a posterior can be computed for any prior. In the case of a function mapping $\mathbb{R} \to \mathbb{R}$, it is easy to plot $f(x)$ versus $x$ to visualize the function. Unfortunately, this can not be done on a piece of paper when $x$ is a prior PDF and $f(x)$ is a posterior PDF. Still, it should be possible to plug in a prior and easily evaluate the corresponding posterior. This convenience is offered by the method presented here.
It is not surprising that the posterior depends on the prior, and it doesn't mean that the results are not well-defined. For each prior, the posterior is unique, and determined by the data.
Ultimately, with more data, all priors asymptotically result in the same posterior, except for very extreme priors, like the Dirac $\delta$ function, which represents an unshakeable prior conviction.
The prior allows one to interpret the same data from various starting points, including even the interpretation of someone with an unshakeable prior conviction. In this sense, any prior is legitimate; even a Dirac $\delta$, although most wouldn't find interesting the inferences of someone who was committed to not letting data change his mind. That's why every Bayesian result must be accompanied by a statement of the assumed prior, so that one knows how to judge it.
Other people, the so-called ``objective Bayesians'', view the prior as something undesirable. Since it is impossible to eliminate, they try at least to prescribe its definition. There are prescriptions which offer the posterior specific properties that some consider important, such as independence of the result under re-parametrization of the POI. Other prescriptions try to achieve the opposite effect of a Dirac $\delta$, namely, maximal susceptibility to the data.
To use the previous analogy, these efforts are like prescribing a value of $x$ with some special property; for example the $x$ which maximizes $\frac{df(x)}{dx}$. Some would argue that, if we can't plot $f(x)$ for all values of $x$, let's at least compute $f(x)$ at that special $x$ that has some (subjectively?) interesting property. A couple of criteria for this were mentioned already.
For a ``subjective Bayesian'', since all priors are fine, so are these special priors, which are known as ``non-informative'' priors. It should be mentioned, though, that if one follows the prescription to compute a non-informative prior (which can be quite cumbersome), he may not be satisfied with the result, because it is highly unlikely to reflect any intuitive guess anyone would have made for the POI. Such priors lose their meaning as distributions of prior belief, and become {\em devices} used to {\em tune} the properties of the posterior. For example, non-informative priors often depend on the expected background. To see how paradoxical this is, consider that, if an experimental device registered more background noise for any instrumental reason, we would have to change our prior PDF of the Higgs mass accordingly, or that of some other fundamental POI that the device would be supposed to measure. One would think that our prior PDF for the Higgs mass should have nothing to do with how much background is registered by some instrument. But again, if that is the prior an ``objective Bayesian'' wants to try, a ``subjective Bayesian'' has no reason to object. The method presented here allows the readers to plug in any prior they wish, including even non-informative priors.
\subsection{Systematic uncertainties}
This article shows how to set an {\em exact} limit on your own signal, ignoring systematic uncertainties.
The basic principles of including systematic uncertainties, with some examples, will be given in Section~\ref{sec:systematics}. It is not possible, however, to provide a complete general prescription for this, because not all systematic uncertainties are the same. The reader will have to generalize the examples provided here a little. Theorists are equipped to evaluate theoretical systematic uncertainties, but experimental uncertainties are the expertise of experimenters. Collaboration is necessary for a complete result.
Most theorists would be satisfied with limits which ignore systematic uncertainty, since they are typically only a few per-cent different from the limits with full treatment of systematic uncertainty. Given that it is practically impossible for the experimentalist to compute limits for all possible theories of the present and the future, it is important for theorists to be able to easily set limits, even with the approximation of ignoring some systematic uncertainties. An approximation is better than nothing.
Furthermore, if an experiment uses a benchmark model to demonstrate the impact of systematic uncertainties, that can be used as a guideline to estimate the impact of the same uncertainties on another model, though for some uncertainties the impact may depend on the signal.
If someone has a reliable model of systematic uncertainties, Section~\ref{sec:systematics}, will allow him, in principle, to convolute these uncertainties.
\subsection{Modeling the detector response}
\label{sec:detectorSimulation}
This article is not trying to address the issue of detector simulation. It is assumed that the theorist can approximate the distribution of his signal after reconstruction. Many theorists do this with tools like PGS \cite{PGS}. Experiments also provide their acceptance to objects (jets, leptons) as a function of quantities accessible to theorists, such as transverse momentum ($p_T$) and pseudo-rapidity ($\eta$). In some cases\footnote{For example, in \cite{ATLASdijetPLB}, the amount of detector smearing in dijet mass is given, so, a hadron-level dijet mass value can be smeared, stochastically, to model the detector-level dijet mass.} the detector resolution is also parametrized, so, a theorist can approximately smear the energy of jets and leptons.
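As a minimal sketch of such an approximate smearing, assuming a purely hypothetical Gaussian relative mass resolution of 5\% (the actual resolution must be taken from the experiment's published parametrization), one could write:
\begin{lstlisting}
(* smear a hadron-level mass value m (in GeV), assuming a Gaussian
   relative resolution of 5% -- a made-up number for illustration *)
smearMass[m_] := m*(1 + RandomVariate[NormalDistribution[0, 0.05]])
\end{lstlisting}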
It is often claimed that unfolding \cite{glenUnfolding} the experimental data allows theorists to test their theories without needing detector simulation. This is an idealization of the actual situation \cite{myUnfolding}. There are many ways to do unfolding; it is not as unique and well-defined as the data. Regularization, which plays a central role in unfolding, depends on some arbitrary choices. The root of all problems with unfolding is that it is impossible to recover information that is lost during detector smearing. The unbiased estimator of the true spectrum has enormous variance, which makes it useless, so during regularization one introduces some bias, on purpose, to reduce the variance. In practice, unfolding may introduce more difficulties than it solves, so, it's advisable to avoid it unless nothing else is possible. Unfolded spectra are estimators that follow complicated probability distributions; it is no longer correct to treat each bin as an independent observation, or to assume that its contents follow a Poisson or Gaussian distribution. So, simple tests like $\chi^2$/(degrees of freedom) are no longer correct. There are bin-to-bin correlations which are usually not published. Even if a correlation matrix is provided, it assumes that the multidimensional PDF of the estimator is Gaussian, but in reality its shape is irregular, especially when low statistics appear in some bins. The bias that is introduced by regularization is typically larger in parts of the spectrum where statistics are lower, which is precisely where exotic effects might be. In reality it is impossible to estimate the actual bias of unfolding, unless we know the actual spectrum of the data before smearing, which is obviously unknown, and if we look for new physics it can not be assumed to be given by Standard Model (SM) simulation prior to smearing, or else we wouldn't be looking for new physics. So, searches for new physics are an unfavorable environment for unfolding.
If an experimentalist has a matrix of migrations, which is the main ingredient of all unfolding methods, it is better to publish the matrix to allow theorists to fold the detector smearing into their theoretical signal, instead of using the matrix to unfold the data. This works without problems because, while it is impossible to recover lost information, it is totally possible to reduce existing information. The benefits of smearing, or folding, compared to unfolding, are the following: (a) Unlike data, the theoretical prediction before smearing does not have statistical fluctuations, so there is no need for regularization. A simple multiplication of the folding matrix with the spectrum prior to smearing returns the expected spectrum after detector smearing. (b) Since the data are not unfolded, they follow a well-known PDF, e.g.\ Poisson, Binomial, or Gaussian. Each bin can be used as an independent observation, so, there is no need to consider complicated (and inevitably approximate) correlations among data in different bins. It is simple to compare the data to the folded theoretical spectrum using simple methods such as a $\chi^2$ or a likelihood test.
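In Mathematica, for instance, this single multiplication reads as follows (with a made-up $2\times 2$ folding matrix and a made-up hadron-level spectrum, purely for illustration):
\begin{lstlisting}
foldingMatrix = {{0.9, 0.1}, {0.1, 0.9}};  (* hypothetical migrations *)
trueSpectrum = {100., 40.};                (* hypothetical true bins *)
foldedSpectrum = foldingMatrix.trueSpectrum
\end{lstlisting}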
While folding solves some of the problems of unfolding, it faces a difficulty: Different theoretical models would require different folding matrices to be folded correctly\footnote{The same problem exists to some degree in unfolding. The elements of the migration matrix, which are usually derived by passing the standard model prediction through detector simulation, are not realistic if there is new physics.}. To see why, imagine new particles of different spin, whose decay products would be distributed differently in $\eta$, thus measured by different parts of the detector, thus suffering different amounts of smearing. That is why it is difficult to provide folding matrices that would work equally well with all theories. Probably the best strategy is to model detector effects at the level of measurable objects (jets and leptons). By smearing each object separately, we can smear any signal that decays into such objects.
This article allows a theorist to easily assume different signal distributions, therefore, if there is some doubt about the exact signal shape after detector smearing, it is easy to try different possibilities. The data, however, have to always be the observed data, {\em not} the output of any unfolding. If an experimental analysis chooses to use unfolding, always ask also for the original data, because {\em there} one can see reality; unfolding offers mere interpretations.
\subsection{Analysis event selection}
\label{sec:selection}
To model the signal that makes it to the final plot, it is necessary to apply the event selection of the analysis whose data are used.
Analyses always publish the event selection they apply. Typically, the selection applies to single objects, or combinations of objects. For example, ``cuts'' are made in transverse momentum ($p_T$), pseudorapidity ($\eta$) or rapidity ($y$), differences in azimuthal angles $(\Delta \phi)$, differences in rapidity or pseudorapidity, scalar or vectorial sums of transverse momenta, and missing transverse energy (MET).
A theorist has easy access to the 4-vectors of quarks, gluons, leptons, and to exotic particles escaping detection (e.g.\ stable or long-lived neutralinos). With generators like {\sc Pythia} \cite{Pythia}, it is also possible to have access to hadronic showers resulting from emitted quarks and gluons, and using jet clustering algorithms such as those implemented in {\tt FastJet} \cite{fastjet} it is possible to define hadron-level jets.
For leptons it is simple to apply kinematic cuts. For jets it is a little less simple. If a theorist has hadron-level jets, their energy corresponds (on average) to the energy of calibrated reconstructed jets on which experimentalists typically apply $p_T$ cuts. So, it is acceptable to apply the same $p_T$ cut to hadron-level jets. The situation is a little more tricky when one has access only to partons, before showering and hadronization. In that case, one needs to consider that the hadron-level $p_T$ differs from the parton $p_T$, due to out-of-cone losses. To effectively impose a given $p_T$ threshold at hadron level, a slightly higher threshold is necessary at parton level. This becomes less of an issue when a large jet size parameter is used, or if the partons (and jets) are at higher $p_T$, thus more boosted, thus having smaller out-of-cone losses. For anti-$k_T$ jets with $R=0.4$, the out-of-cone energy fraction is roughly 10\% at hadron-level $p_T \simeq 30$~GeV, it reduces to about 4\% at 100 GeV, and is less than 2\% at $p_T>200$~GeV. For anti-$k_T$ jets reconstructed with $R=0.6$, the out-of-cone fraction is less than 2\% even at $p_T \simeq 30$~GeV. Usually jets produced by new physics carry large momentum, well above such $p_T$ thresholds, so such differences wouldn't be important.
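For illustration, such kinematic cuts on a list of hadron-level jets can be applied with a one-liner; the jet list and the thresholds below are made up for this sketch:
\begin{lstlisting}
(* each jet is {pT (GeV), eta}; keep jets with pT > 30 GeV and |eta| < 2.8 *)
jets = {{120., 0.3}, {80., -1.9}, {25., 3.1}};
selectedJets = Select[jets, #[[1]] > 30 && Abs[#[[2]]] < 2.8 &]
\end{lstlisting}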
When a minimum MET is required, this translates into a minimum $p_T$ carried by the vectorial sum of neutrinos, gravitons, neutralinos etc.\ in the signal. Similar to charged leptons, one has the momenta of these objects, so it is simple to add vectorially their transverse momenta and apply the same threshold. In reality, there is some MET even if at parton level all objects balance perfectly. That (fake) MET comes mostly from the finite energy resolution of the calorimeter, from detector cracks, beam remnants, etc. Detector simulation reproduces this MET. It can be approximated by smearing, according to detector resolution, the transverse momenta of any jets or other objects in each event. However, fake MET is typically much less than the real MET produced by exotic particles that escape detection, and it is also well below the MET thresholds required in searches for such particles. So, it should be safe to ignore fake MET when genuine MET is part of a new physics signature.
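As a sketch, the MET of an event can be computed from the transverse momentum vectors of its invisible particles (the numbers below are made up):
\begin{lstlisting}
(* {px, py} (GeV) of each invisible particle in the event *)
invisiblePts = {{30.2, -12.5}, {-5.0, 44.1}};
met = Norm[Total[invisiblePts]] (* magnitude of their vectorial sum *)
\end{lstlisting}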
\subsection{Where to find data}
\label{sec:dataSources}
A source of data is the HepData project, hosted at the University of Durham \cite{HepData}. From there, anyone can retrieve the observed and expected spectrum in bins of specified delimiters, for an increasing number of analyses from various experiments.
Other disciplines, such as observational astrophysics, have a culture of open access to data. Since the author is a High Energy experimentalist, the focus of this article is in High Energy physics, but it must be obvious how the methods shown here can be used in other disciplines.
As many theorists evidently know, there is software which enhances one's ability to read numbers off of published figures.
An example is {\tt DataThief} \cite{DataThief}. Don't let the name of this software fill you with guilt; there is nothing unethical in reading values off of a published and peer-reviewed experimental plot. Just beware that, due to limited resolution, it may be hard to ``see'' exactly the content and the delimiters of each bin. It can't hurt to ask an experimental collaboration to publish the exact values, to avoid approximations.
\subsection{Software used}
Since most High Energy theorists are familiar with Mathematica\footnote{Mathematica$^\copyright$ is proprietary software, developed by the Wolfram Research company.}, we will use it here for demonstration. Very elementary use of Mathematica is made, so, anyone should be able to read the code shown here and understand what it does.
The computation is so simple that, with some perseverance, it can be carried out even by hand. No generation of Monte Carlo events is required for Bayesian inference. Mathematica provides an intuitive, interpreted language, which also makes the visualization of results easy.
Of course, once the computation method is understood, it can be ported to any programming environment.
\subsection{The numbers used}
For the purposes of this article, it doesn't matter where the data come from. They can be considered fictitious.
\section{Poisson-distributed data}
\label{sec:poisson}
Let's start with the very common case of a spectrum where events are counted in defined intervals (bins) of some observable. As a representative example, consider the distribution of events in some measured mass, as was done in \cite{ATLASdijetPLB, CMSdijet} and numerous other analyses.
Let $d_i \in \mathbb{N}$ denote the number of events observed in bin $i$, and $b_i \in \mathbb{R}^\ast$ the number of events expected in bin $i$ if there is no new physics. The index $i$ runs from 1 to $N$, which is the total number of bins. Let $s\in \mathbb{R}$ be the expected number of produced signal events, {\em which is our POI}. One would simply divide $s$ by the integrated luminosity to transform it to the cross-section of the new physics process. Let $f_i \in \mathbb{R}$ be the fraction of the produced signal that ends up in bin $i$ after detector reconstruction and event selection. By definition, the total signal acceptance (times reconstruction efficiency) is $A$:
\begin{equation}
\sum_{i=1}^N f_i = A.
\label{eq:acceptance}
\end{equation}
If $A = 1$, then all signal makes it into the $N$ bins of the spectrum after event selection. If $A<1$, then some of the signal doesn't make it to the final spectrum, either due to detector inefficiency, or because it fails some of the analysis cuts. In either case, the array of $f_i$ values completely determines the expected signal distribution after detector smearing and event selection. One uses $f_i$ to specify his model. To do so one needs to calculate the theoretically predicted signal distribution in $N$ bins, and model the effect of detector smearing (Section~\ref{sec:detectorSimulation}) and event selection (Section~\ref{sec:selection}).
If $s$ signal events are produced, then the expected events in bin $i$ are $b_i + s\cdot f_i$. Assuming that the data are indeed the data, and not the product of some unfolding (see Section \ref{sec:detectorSimulation}), the likelihood of the data under the assumption of $s$ produced events, which are distributed according to $f_i$, is:
\begin{equation}
L({\rm data}|s) = \prod_{i=1}^N {\rm Poisson}(d_i | b_i + s\cdot f_i) = \prod_{i=1}^N \frac{(b_i + s\cdot f_i)^{d_i}}{d_i!} e^{-(b_i+s\cdot f_i)}.
\label{eq:likelihood}
\end{equation}
Applying Bayes' theorem, the posterior PDF\ of our POI ($s$) is
\begin{equation}
p(s|{\rm data}) = L({\rm data}|s)\frac{\pi(s)}{\mathcal N},
\label{eq:posteriorPoisson}
\end{equation}
where $\pi(s)$ is the prior PDF\ (see Section \ref{sec:prior}), and ${\mathcal N}$ is a constant which normalizes the posterior to 1:
\begin{equation}
{\mathcal N} = p({\rm data}) = \int L({\rm data}|s) \pi(s) ds.
\label{eq:N}
\end{equation}
Let's use some numbers. The data are represented by the array $d$, and the background by the array $b$, and the signal distribution by $f$, each with $N=30$ elements:
\begin{lstlisting}
d = {20839, 14404, 10285, 7094, 4841, 3440, 2338, 1555, 1059, 706, 515, 367, 214, 155, 112, 73, 45, 31, 23, 14, 2, 9, 2, 1, 1, 0, 2, 1, 0, 0};
b = {21000., 14000., 10000, 7100., 4800., 3400., 2300., 1600., 1100., 740., 500, 350., 230., 160., 100, 70., 46., 30., 20., 13., 8.2, 5.2, 3.2, 2.0, 1.2, 0.71, 0.42, 0.24, 0.13, 0.074};
f = {0, 0.0000105, 0.000335, 0.000485, 0.00015, 0.0008, 0.00115, 0.00425, 0.0022, 0.0034, 0.00495, 0.0055, 0.0095, 0.018, 0.0185, 0.028, 0.085, 0.21, 0.085, 0.0125, 0.0044, 0, 0.0000105, 0, 0, 0.000335, 0, 0, 0, 0};
\end{lstlisting}
The above inputs are plotted in Fig.~\ref{fig:data1}.
\begin{figure}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/allInputs1_lin.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/allInputs1_log.pdf}}
\caption{An example of data (markers), background (blue bars), and assumed signal distribution for $s=500$ (green bars stacked on top of background), in linear (left) and logarithmic scale (right). The numbers have been chosen to resemble a mass spectrum like those studied in resonance searches. \label{fig:data1}}
\end{figure}
The following lines compute the posterior of eq.~\ref{eq:posteriorPoisson}:
\begin{lstlisting}
nBins = Length[b] (*d,b & f should all have the same length*)
A = Total[f] (*The acceptance*)
L[s_] := Exp[Sum[d[[i]]*Log[b[[i]]+s*f[[i]]],{i,1,nBins}]-s*A]
Prior[s_] := UnitStep[s]
NormConst = NIntegrate[L[s]*Prior[s],{s,-Infinity,Infinity}]
Posterior[s_] := L[s]*Prior[s]/NormConst
\end{lstlisting}
\begin{description}
\item[Line 1:] simply identifies the number of bins from the size of the $b$ array. The variable {\tt nBins} is 30 in this example.
\item[Line 2:] The acceptance is computed, as the sum of the elements of the $f$ array. In this example, $A\simeq 0.49$.
\item[Line 3:] Definition of $L({\rm data}|s)$, up to a constant. The syntax {\tt d[[i]]} represents the element $d_i$. The expression is not identical to eq.~\ref{eq:likelihood}; it has been manipulated algebraically to avoid the factorial term, which consumes CPU. The manipulation starts by computing the logarithm of the likelihood:
\begin{eqnarray}
\nonumber \log L({\rm data}|s) &=& \sum_{i=1}^N \{d_i \log(b_i+s\cdot f_i)-\log(d_i!)-(b_i+s\cdot f_i)\} \\
\nonumber &=& \sum\{d_i \log(b_i+s\cdot f_i)\}-\sum \{\log(d_i!)+b_i\} - s\cdot A \\
&=& \sum\{d_i \log(b_i+s\cdot f_i)\} - s\cdot A + {\rm const.}
\end{eqnarray}
The factorial is hidden in the last term, which is constant in $s$.
The likelihood is:
\begin{eqnarray}
\label{eq:logL}
\nonumber L({\rm data}|s) &=& \exp\{\log L({\rm data}|s)\} \\
&=& {\rm const.} \cdot \exp \left( \sum\{d_i \log(b_i+s\cdot f_i)\} - s\cdot A\right),
\end{eqnarray}
which is exactly what is written in line 3, except for the constant factor which is ignored, because it is absorbed in the normalization constant that is going to be computed in line 5.
\item[Line 4:] The prior is defined here (see Section \ref{sec:prior}). It doesn't have to be normalized, because the posterior will be eventually normalized in one step, using the normalization constant that is going to be computed in line 5. In this example, the prior is a unit step function, which is 0 for $s<0$ and 1 for $s \ge 0$. This is a ``flat'' prior, except for excluding a-priori negative values for $s$, assuming that this would not make physical sense. In cases where negative $s$ would make physical sense, it is possible to replace the above line 4 with
\begin{lstlisting}
Prior[s_] := 1
\end{lstlisting}
However, if that is done, it will be seen in the next steps that the posterior will be impossible to normalize (it will be ``improper''). This can be avoided by cutting off the prior at some extreme values of $s$. For example, to define a flat prior between -2000 and 1000, one can replace line 4 with
\begin{lstlisting}
Prior[s_]:= UnitStep[s-(-2000)] * UnitStep[1000 - s]
\end{lstlisting}
When $L({\rm data}|s)$ converges to 0 quickly for large $|s|$, it doesn't matter whether the prior is cut off at $\pm$1000 or $\pm$2000 or any other large number. The cut off is introduced for purely numerical reasons, to avoid infinities; the result does not depend practically on the exact cut off value.
Another detail is that, if $s$ can be negative, then there is a danger of some $(b_i+s\cdot f_i)$ assuming negative values, for which the likelihood is not defined, because the Poisson probability is not defined for a negative mean. To avoid such problems, line 3 should be re-written, inserting a {\tt Max} function that never allows any of these terms to become negative:
\begin{lstlisting}
L[s_] := Exp[Sum[d[[i]]*Log[Max[0, b[[i]] + s*f[[i]]]], {i, 1, nBins}] - s*A]
\end{lstlisting}
It is very easy to manipulate line 4 to encode any prior; not only flat priors. Since this is possible, it is also possible to demonstrate the sensitivity of the result to the prior, by trying out various priors. If one accidentally defines a prior which makes the posterior improper, that will become obvious in the next step where the normalization constant is computed for the posterior, and one can then go back to line 4 and cut off the prior at some high value of $s$ to avoid the divergence of the normalization constant. It is easy to try different cut off values, to confirm that the posterior does not depend on this cut off. (A short example of a non-flat prior is given right after this list.)
\item[Line 5:] By numerical integration, the normalization constant $\mathcal N$ of eq.~\ref{eq:N} is computed, and stored as {\tt NormConst}. If the posterior is improper, this command will fail, and the measures mentioned above can be taken.
\item[Line 6:] Here the posterior is finally defined, according to eq.~\ref{eq:posteriorPoisson}.
\end{description}
It is now easy to visualize the posterior and define credibility intervals from it. For example:
\begin{lstlisting}
Plot[Posterior[s],{s, -50, 100},AxesLabel->{"s", "p(s|data)"}]
NIntegrate[Posterior[s], {s, -Infinity, Infinity}]
FindRoot[NIntegrate[Posterior[s], {s, 0, x}] - 0.95, {x, 10}]
\end{lstlisting}
Line 1 produces a plot similar to those in Fig.~\ref{fig:posterior1}, and line 2 confirms that $\int_{-\infty}^{+\infty} p(s|\text{data})ds = 1$. Line 3 computes numerically the 95\% quantile of $p(s|\text{data})$, namely the 95\% credibility level upper limit on $s$, which in this example is 55.7 events.\footnote{The number 10 which appears in the command is the initial value of {\tt x} that is used to find numerically the root of the equation $\int_0^x p(s|\text{data}) ds = 0.95$. It can be different, obviously.}
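Other summaries of the posterior are equally easy to extract with the same ingredients. For example, the following sketch computes the posterior mean and a central 95\% credibility interval, leaving 2.5\% of posterior probability on each side (the starting values passed to {\tt FindRoot} are arbitrary):
\begin{lstlisting}
PosteriorMean = NIntegrate[s*Posterior[s], {s, -Infinity, Infinity}]
lo = x /. FindRoot[NIntegrate[Posterior[s], {s, -Infinity, x}] - 0.025, {x, 1}]
hi = x /. FindRoot[NIntegrate[Posterior[s], {s, -Infinity, x}] - 0.975, {x, 50}]
\end{lstlisting}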
\begin{figure}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/prior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/posterior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/prior3.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/posterior3.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/prior4.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/posterior4.pdf}}
\caption{Three pairs of priors (not normalized) on the left and their corresponding posteriors on the right. The quantity of interest ($s$) which is estimated, and the data on which its inference is based, are explained in Section~\ref{sec:poisson}.\label{fig:posterior1}}
\end{figure}
\section{Binomial-distributed data}
\label{sec:binomial}
The measured quantity is not always a Poisson-distributed variable. For example, consider the dijet angular distribution analysis by ATLAS \cite{ATLASdijetAndAngular}. The observable, $F_\chi$, is the fraction of events which are central in each dijet mass bin. The exact definition of ``central'' is of no importance here. In general, there is some criterion, and each event represents a Bernoulli trial, leading to success if the criterion is satisfied.
The probability of having $t$ successes in $T$ trials, when each success has probability $\epsilon$, is given by the Binomial distribution:
\begin{equation}
P(t|\epsilon,T) = \binom{T}{t} \epsilon^t (1-\epsilon)^{T-t}
\end{equation}
In each bin $i$ of the total $N$ dijet mass bins, let the observed number of events be $T_i$, of which $t_i$ are central. Let the probability of a background (i.e.\ non-signal) event to be central be $\epsilon_{\text{bkg},i}$.
A theorist knows the probability of a signal event to be central when it belongs in bin $i$, which is denoted $\epsilon_{\text{sig},i}$. He also knows, like in Section~\ref{sec:poisson}, the {\em total} (not only central) number of signal events in bin $i$, which is $s\cdot f_i$.
As before, $s$ is the POI. The likelihood of the observed data, assuming some value for $s$, is
\begin{eqnarray}
L(T_i,t_i|s) &=& \prod_{i=1}^{N} \binom{T_i}{t_i} (\epsilon_i)^{t_i} (1-\epsilon_i)^{T_i - t_i}, \\
\text{where } \epsilon_i &=& \frac{1}{T_i}(\epsilon_{\text{bkg},i}(T_i-s\cdot f_i) + \epsilon_{\text{sig},i} \cdot s\cdot f_i)
\end{eqnarray}
The computation in this case is a little more complicated than in the simple Poisson case, because there are more inputs. The data are not one array, but two: $T_i$ and $t_i$. The signal is also described by two arrays: $f_i$ and $\epsilon_{\text{sig},i}$.
We can use, like in eq.~\ref{eq:logL}, the logarithm of $L(T_i,t_i|s)$, to simplify and speed up the computation:
\begin{eqnarray}
\nonumber \log L(T_i,t_i|s) &=& \sum_{i=1}^{N} \left\{ \log\binom{T_i}{t_i} + t_i\log\epsilon_i + (T_i-t_i)\log(1-\epsilon_i) \right\} \\
&=& \text{const.} + \sum t_i \log(\epsilon_i) + (T_i-t_i)\log(1-\epsilon_i) \Rightarrow\\
L(T_i,t_i|s) &=& \text{const.}\cdot \exp\left\{\sum t_i \log(\epsilon_i) + (T_i-t_i) \log(1-\epsilon_i)\right\} \label{eq:logLbinom}
\end{eqnarray}
Let's create an example, where for simplicity $\epsilon_{\text{sig},i}$ is constant in all bins $i$, and equal to $\epsilon_{\text{sig}} = 0.5$. Let's use for $f_i$ the same values that we used in Section~\ref{sec:poisson}. For the background we will assume that $\epsilon_{\text{bkg},i} = \epsilon_{\text{bkg}} = 0.1$ for all bins $i$. We will use as $t_i$ the same array that we called $d_i$ in the example of Section~\ref{sec:poisson}, and we will add a new array $T_i$. To make the example look like a scenario without new physics, we will define $T_i$ by sampling a Poisson distribution with mean $t_i / \epsilon_{\text{bkg}}$. Let's put the above in code:
\begin{lstlisting}
T = {208837, 144387, 102698, 70993, 48508, 34414, 23578, 15452, 10461, 7127, 5078, 3767, 2160, 1591, 1098, 739, 437, 284, 244, 148, 13, 78, 21, 13, 7, 10, 18, 8, 5, 5};
t = {20839, 14404, 10285, 7094, 4841, 3440, 2338, 1555, 1059, 706, 515, 367, 214, 155, 112, 73, 45, 31, 23, 14, 2, 9, 2, 1, 1, 0, 2, 1, 0, 0};
nBins = Length[t];
epsilonSig = Table[0.5, {i, 1, nBins}];
epsilonBkg = Table[0.1, {i, 1, nBins}];
f = {0, 2.1*10^-05, 0.00067, 0.00097, 0.0003, 0.0016, 0.0023, 0.0085, 0.0044, 0.0068, 0.0099, 0.011, 0.019, 0.036, 0.037, 0.056, 0.17, 0.42, 0.17, 0.025, 0.0088, 0, 2.1*10^-05, 0, 0, 0.00067, 0, 0, 0, 0}/2;
\end{lstlisting}
The above choice of numbers is depicted in Fig.~\ref{fig:dataBinom}.
\begin{figure}
\includegraphics[width=\textwidth]{figures/binom/data.pdf}
\caption{An example of data (markers), background (blue bars), and assumed signal distribution for $s=500$ (green bars). Details about the chosen example are given in Section~\ref{sec:binomial}. The vertical axis shows the fraction of events which satisfy a criterion, e.g.\ being central. The error bars are, for simplicity, just the Pearson interval that spans $\pm \sqrt{\frac{t_i}{T_i}(1-\frac{t_i}{T_i})/T_i}$. \label{fig:dataBinom}}
\end{figure}
Now that the inputs are defined, let's write the computational part:
\begin{lstlisting}
epsilon[i_,s_]:=(epsilonBkg[[i]]*(T[[i]] - s*f[[i]]) + epsilonSig[[i]]*s*f[[i]])/T[[i]]
L[s_] := Exp[Sum[t[[i]]*Log[epsilon[i, s]] + (T[[i]] - t[[i]])*Log[1 - epsilon[i, s]], {i, 1, nBins}]]
Prior[s_] := UnitStep[s]
NormConst = NIntegrate[L[s]*Prior[s], {s, -Infinity, Infinity}]
\end{lstlisting}
Above, line 1 obviously defines $\epsilon_i$ as a function of $s$ and $i$, which is then used in line 2 that reproduces eq.~\ref{eq:logLbinom}, except for the constant term which is omitted on purpose, to be included in the overall normalization constant that is computed in line 4. Line 3 defines the prior $\pi(s)$ of our choice (not normalized), which here is uniform in $s>0$, and line 4 computes the normalization constant $\mathcal{N} = \int L(s)\pi(s) ds$.
At this point, in the example we use, a computational difficulty appeared. This is an opportunity to explain how to overcome it, and how to investigate such cases.
When we executed line 4, a complaint was returned that the numerical integration didn't converge, together with a half-done result of $0.*10^{276465283}$, which looked like an approximation of $0\times\infty$. To understand what was going on, we tried to plot $L(s)$ directly, using the command {\tt Plot[L[s],{s,0,100}]}; that failed too, so no wonder the integral was failing. Then, instead of plotting $L(s)$, we tried something more humble: to just compute it for $s=10$ and for $s=20$. The result was two very small numbers of similar order of magnitude: $3.9\times 10^{-96226}$ and $4.0\times 10^{-96226}$. This is a sign that the likelihood function is computable (it had no reason not to be anyway), but its extremely small numerical value makes its plotting and integration problematic. Solution: remember that we can multiply the likelihood by any constant, since in the end the posterior will be normalized to 1 anyway. It makes computation easier to divide the likelihood by a constant of the same order of magnitude, to transform $3.9\times 10^{-96226}$ into 3.9, i.e.\ a much easier number to treat. One way to do this is to divide $L(s)$ by $L(0)$. This is indeed done in the following code:
\begin{lstlisting}
NormConstDividedByL0 = NIntegrate[L[s]/L[0]*Prior[s], {s, -Infinity, Infinity}]
NormConst = NormConstDividedByL0 * L[0]
Posterior[s_] := L[s]*Prior[s]/NormConst
\end{lstlisting}
The trick we played was to add the division by {\tt L[0]} in the integrand of line 1, defining this auxiliary variable {\tt NormConstDividedByL0}, and then we defined {\tt NormConst} in line 2 based on {\tt NormConstDividedByL0}. Line 3 is nothing but the definition of the normalized posterior $p(s|\text{data})$, according to Bayes' theorem, as was done in Section~\ref{sec:poisson}. It may seem strange that dividing and multiplying by the same number makes any difference, but in computation such things can matter.\footnote{It is not necessarily the only way to treat this case. There may be ways, by specifying a different numerical integration method for the {\tt NIntegrate} command, to make it converge without tricks. However, showing exactly how an effective solution was worked out is more educative and can prove useful in different cases.}
It is simple to plot the posterior PDF, to compute limits, and to verify that the integral of the posterior is indeed 1, using the same commands given in Section \ref{sec:poisson}. E.g., Fig.~\ref{fig:posteriorsBinom} shows the posteriors corresponding to a variety of priors.
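In practice, only the plotting range and the {\tt FindRoot} starting value need to be adapted to the scale of $s$ in this example; the numbers below are indicative:
\begin{lstlisting}
Plot[Posterior[s], {s, 0, 2000}, AxesLabel -> {"s", "p(s|data)"}]
FindRoot[NIntegrate[Posterior[s], {s, 0, x}] - 0.95, {x, 500}]
\end{lstlisting}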
\begin{figure}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/prior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/posterior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/prior2.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/posterior2.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/prior3.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/binom/posterior3.pdf}}
\caption{Three pairs of priors (not normalized) on the left and their corresponding posteriors on the right. The quantity of interest ($s$) which is estimated, and the data on which its inference is based, are explained in Section~\ref{sec:binomial}.\label{fig:posteriorsBinom}}
\end{figure}
\section{Models where the signal is not simply additive}
\label{sec:nonAdd}
In some theoretical models, the signal is not just added to a fixed Standard Model background, but interferes with it. As a result, the likelihood of the data, assuming some value for the POI, may not be as simple to express analytically as in eq.~\ref{eq:likelihood}.
It is still possible to set limits to such models, and to compute the posterior PDF\ of their parameter(s) of interest, as long as there is a way to map each value of the POI into a shape for the expected distribution. This needs to be done in a continuous way, if the POI is continuous.
To use the nomenclature of Section~\ref{sec:poisson}, one needs a function $b_i(s)$ to express the expected content of bin $i$, if the POI is $s$. Then, the likelihood function would in general be
\begin{equation}
L({\rm data}|s) = \prod_{i=1}^N {\rm Poisson}(d_i | b_i(s)) = \prod_{i=1}^N \frac{b_i(s)^{d_i}}{d_i!} e^{-b_i(s)}.
\end{equation}
If it is not possible to have an analytical function $b_i(s)$, one can compute the expected spectrum ($b_i$) for several discrete values of $s$, and interpolate to intermediate values of $s$ by using a {\em morphing} technique, as described in \cite{Read}. An example of morphing would lie beyond the scope of the current document.
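Just to illustrate the idea, without reproducing the machinery of \cite{Read}: suppose the expected spectrum has been computed at two values of the POI, say $s=0$ and $s=100$, and stored in two (hypothetical) arrays {\tt bAt0} and {\tt bAt100}. A crude linear interpolation in $s$, followed by the usual Poisson likelihood (with the factorial term absorbed in the normalization constant, as before), could look like:
\begin{lstlisting}
bi[s_] := bAt0 + (s/100)*(bAt100 - bAt0) (* element-wise interpolation between the two templates *)
L[s_] := With[{bs = bi[s]}, Exp[Sum[d[[i]]*Log[Max[0, bs[[i]]]] - bs[[i]], {i, 1, nBins}]]]
\end{lstlisting}
Proper morphing schemes interpolate more carefully, but the principle of mapping each value of $s$ to a spectrum $b_i(s)$ is the same.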
\section{Combining data}
\label{sec:combo}
Previously we talked about data in bins of an observable quantity. Nothing, however, would change if the index $i$ enumerated bins of different observables, or even different experiments. All one would do is expand the arrays to contain all independent observations.
Combining two, or more, sets of data proceeds by writing down the joint likelihood of all observations, as a function of the POI. In this respect, Section~\ref{sec:poisson} \emph{was already} a combination of datasets, if we view the 30 bins as 30 independent observations, which, in that case, originated from the same experiment. We will also construct an example where we combine observations from different experiments, which is usually what people refer to by ``combination''.
Let's keep, as input from the first experiment, the numbers used in Section~\ref{sec:poisson}. We add the suffix {\tt 1} in the variable names, to remind us that they come from the first experiment.
\begin{lstlisting}
d1 = {20839, 14404, 10285, 7094, 4841, 3440, 2338, 1555, 1059, 706, 515, 367, 214, 155, 112, 73, 45, 31, 23, 14, 2, 9, 2, 1, 1, 0, 2, 1, 0, 0};
b1 = {21000., 14000., 10000, 7100., 4800., 3400., 2300., 1600., 1100., 740., 500, 350., 230., 160., 100, 70., 46., 30., 20., 13., 8.2, 5.2, 3.2, 2.0, 1.2, 0.71, 0.42, 0.24, 0.13, 0.074};
f1 = {0, 0.0000105, 0.000335, 0.000485, 0.00015, 0.0008, 0.00115, 0.00425, 0.0022, 0.0034, 0.00495, 0.0055, 0.0095, 0.018, 0.0185, 0.028, 0.085, 0.21, 0.085, 0.0125, 0.0044, 0, 0.0000105, 0, 0, 0.000335, 0, 0, 0, 0};
A1 = Total[f1] (*which returns 0.494476*)
nBins1 = Length[b1] (*returns 30*)
\end{lstlisting}
Let's consider a second experiment, where a different observable is used, and we have it distributed in 20 bins (instead of 30 bins that we had in the first experiment). That observable is affected by the same new physics, but the detector is different, the background level is different, the shapes of background and signal are different from the first experiment. For example,
\begin{lstlisting}
d2 = {496, 1007, 1495, 1937, 2392, 2785, 3022, 3279, 3733, 3848, 4046, 4177, 4413, 4178, 3960, 3834, 3711, 3598, 3247, 2934};
b2 = {498, 990, 1466, 1921, 2346, 2738, 3088, 3393, 3648, 3850, 3997, 4087, 4119, 4094, 4015, 3883, 3702, 3477, 3214, 2919};
f2 = {0.0016, 0.0023, 0.0085, 0.0044, 0.0068, 0.0099, 0.011, 0.019, 0.036, 0.037, 0.056, 0.17, 0.42, 0.17, 0, 0, 0, 0, 0, 0};
A2 = Total[f2] (* which returns 0.9525 *)
nBins2 = Length[b2] (* returns 20 *)
\end{lstlisting}
To make the example more interesting, the data of the second experiment ({\tt d2}) correspond to the background ({\tt b2}) plus 600 signal events produced in the second experiment, which are distributed according to {\tt f2}. The elements of {\tt f2} have sum ${\tt A2} \simeq 0.95$, so we have assumed that in the second experiment most of the signal gets reconstructed. Figure~\ref{fig:b2andD2} shows the above inputs from the second experiment.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/combo/b2plusSignal.pdf}
\caption{The data (markers), expected background (blue histogram), and expected distribution of signal (green) for 600 signal events produced in the second experiment of Section~\ref{sec:combo}.\label{fig:b2andD2}}
\end{center}
\end{figure}
Now that we have the data, background, and signal distribution in both experiments, we need to compute their joint likelihood, as a function of a POI, which may be some quantity proportional to the (unknown) cross-section of new physics.
For example, the POI could be the number of produced events in the first experiment, or in the second experiment, or in both experiments together, or it could be an expression of the coupling constant itself. Let's make a choice that will spare us one proportionality constant in our expressions, and define as POI the number of signal events produced in the first experiment, which we denoted already with $s$ in Section~\ref{sec:poisson}. Here, to remember the definition of our POI, we will denote it with {\tt s1}. If we infer the true value of {\tt s1}, it will be easy to divide it by the integrated luminosity of the first experiment, to convert it to the cross-section of the new physics process in the conditions of the first experiment. When that is known, the coupling strength of the new physics can be extracted, which is a universal characteristic of the new physics and doesn't depend on the experiment.
The number of signal events produced in the second experiment ({\tt s2}) is proportional to the signal events produced in the first experiment ({\tt s1}). The proportionality constant depends on the respective integrated luminosities, and on the cross-section of the new physics process in the two experiments\footnote{For example, the first experiment may be at the Tevatron, and the second at the LHC. Different initial states, different energies, different cross-section. This difference has nothing to do with the differences between detectors, because we are talking about {\em produced} events, not reconstructed. All reconstruction effects, including detector smearing and inefficiencies, are encoded by the arrays {\tt f1} and {\tt f2}, which are independent from {\tt s1} and {\tt s2}.}.
Let's assume that the second experiment recorded 2 times larger integrated luminosity than the first experiment, and the signal cross-section in the second experiment is 3 times larger than in the first experiment. That means that
\begin{equation}
{\tt s2} = 2\cdot3\cdot{\tt s1} \equiv r \cdot {\tt s1}
\end{equation}
We will need this proportionality constant ($r = 2\cdot3=6$) when we write the joint likelihood of the two experiments as a function of {\tt s1}. Obviously, a theorist can calculate $r$, if he can compute the cross-section of his model in the conditions of the two experiments, and if he knows how the two integrated luminosities compare.
Time to write the joint likelihood analytically, assuming that the two experiments are statistically independent:
\begin{equation}
L({\tt d1},{\tt d2}|{\tt s1}) = \prod_{i=1}^{\tt nBins1} {\rm Poisson}({\tt d1}_i | {\tt b1}_i + {\tt s1}\cdot {\tt f1}_i) \prod_{j=1}^{\tt nBins2} {\rm Poisson}({\tt d2}_j | {\tt b2}_j + {\tt s1}\cdot r \cdot {\tt f2}_j)
\label{eq:jointL}
\end{equation}
Now, let's implement this in Mathematica. We will first merge the arrays {\tt d1} and {\tt d2} into the \emph{joint} data {\tt d}, and the arrays {\tt b1} and {\tt b2} into the \emph{joint} background {\tt b}:
\begin{lstlisting}
d = Join[d1, d2] (* this concatenates d1 and d2 *)
b = Join[b1, b2] (* this concatenates b1 and b2 *)
\end{lstlisting}
Then we will define a new array {\tt f} using {\tt f1} and {\tt f2}. Note that, in eq.~\ref{eq:jointL}, all elements of {\tt f2} are multiplied by $r$. We can simplify our expressions by letting {\tt f2} \emph{absorb} $r$. This is done by writing {\tt f} as:
\begin{lstlisting}
r = 2*3 (*this is what we assume in this example*)
f = Join[f1,r*f2] (*first f1 elements, then r*f2 elements*)
\end{lstlisting}
Figure~\ref{fig:jointData} shows the contents of {\tt b} and {\tt d}, and how the new physics would appear in this joint dataset (according to {\tt f}) if we assumed ${\tt s1} = 100$ events, i.e.\ ${\tt s2} = r\cdot{\tt s1} = 600$ events.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/combo/jointData_lin.pdf}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/combo/jointData_log.pdf}}
\caption{The data (markers), expected background (blue histogram), and expected distribution of signal (green) for 100 signal events produced in the first experiment, which correspond to 600 signal events produced in the second experiment. The plot on the left uses linear scale, which makes it easier to see the expected signal in the second experiment, and the right plot uses logarithmic scale to make visible the smaller signal expected in the first experiment. See Section~\ref{sec:combo}.\label{fig:jointData}}
\end{center}
\end{figure}
Using the same computational trick as in eq.~\ref{eq:logL}, we write the joint likelihood (up to a constant that will be absorbed by the normalization constant) of eq.~\ref{eq:jointL} as:
\begin{lstlisting}
L[s1_] := Exp[Sum[d1[[i]]*Log[Max[0, b1[[i]] + s1*f1[[i]]]], {i, 1, nBins1}] - s1*A1 + Sum[d2[[i]]*Log[Max[0, b2[[i]] + s1*r*f2[[i]]]], {i, 1, nBins2}] - s1*r*A2]
\end{lstlisting}
The above expression uses explicitly the arrays of the two experiments and the constant $r$, but since we have also defined the joint arrays {\tt d}, {\tt b} and {\tt f} which absorbs $r$ in its second part, we can write the {\em totally equivalent} expression:
\begin{lstlisting}
nBins = nBins1+nBins2; (*total bins: 30+20 = 50*)
A = Total[f]; (* sum of elements of f *)
L[s1_] := Exp[Sum[d[[i]]*Log[Max[0, b[[i]] + s1*f[[i]]]], {i, 1, nBins}] - s1*A]
\end{lstlisting}
The rest is just like before. We define a prior PDF\ as a function of {\tt s1}, we find the normalization constant, and we get a posterior PDF:
\begin{lstlisting}
Prior[s1_] := UnitStep[s1] (* constant for s1>0 *)
NormConst = NIntegrate[L[s1]*Prior[s1], {s1, -Infinity, Infinity}]
Posterior[s1_] := L[s1]*Prior[s1]/NormConst
\end{lstlisting}
\begin{figure}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/prior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/posterior1.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/prior2.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/posterior2.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/prior3.pdf}}
\subfigure[]{\includegraphics[width=0.5\textwidth]{figures/combo/posterior3.pdf}}
\caption{Three pairs of priors (not normalized) on the left and their corresponding posteriors on the right. The red PDF\ corresponds to the posterior inferred only from the first experiment, the blue to the PDF\ only from the second experiment, and the result of the combination is shown by the black dashed PDF. The quantity of interest ({\tt s1}) which is estimated, and the data on which its inference is based, are explained in Section~\ref{sec:combo}.\label{fig:combinationPosterior}}
\end{figure}
Figure~\ref{fig:combinationPosterior} shows the results we get from the current numerical example, with the three following indicative prior assumptions for {\tt s1}:
\begin{lstlisting}
Prior[s1_] := UnitStep[s1]
Prior[s1_] := UnitStep[s1]*(Exp[-0.02 s1])
Prior[s1_] := UnitStep[s1]*(0.1 + Exp[-(s1 - 80)^2/100])
\end{lstlisting}
Figure~\ref{fig:combinationPosterior} includes the posteriors inferred using only the first or only the second experiment. These are computed as follows (the suffixes {\tt 1} and {\tt 2} distinguish the first from the second experiment):
\begin{lstlisting}
L1[s1_] := Exp[Sum[d1[[i]]*Log[Max[0, b1[[i]] + s1*f1[[i]]]], {i, 1, nBins1}] - s1*A1]
NormConst1 = NIntegrate[L1[s1]*Prior[s1], {s1, -Infinity, Infinity}]
Posterior1[s1_] := L1[s1]*Prior[s1]/NormConst1
L2[s1_] := Exp[Sum[d2[[i]]*Log[Max[0, b2[[i]] + s1*r*f2[[i]]]], {i, 1, nBins2}] - s1*r*A2]
NormConst2 = NIntegrate[L2[s1]*Prior[s1], {s1, -Infinity, Infinity}]
Posterior2[s1_] := L2[s1]*Prior[s1]/NormConst2
\end{lstlisting}
\section{Multiple parameters of interest}
\label{sec:multidim}
The POI in the above examples was always a single variable ($s$), but it can be multidimensional ($\vec{s}$). A characteristic class of models with multiple POIs is SUSY models. Here is an example where the data of Section~\ref{sec:poisson} are interpreted to find a posterior in a 2-dimensional parameter space.
Let's consider a quite generic model, where the signal is Gaussian-distributed, its mean depends on an unknown POI {\tt s1}, and its amplitude on another unknown POI {\tt s2}. Its width could be given by a third parameter {\tt s3}, but for visualization purposes it is better to keep the space of unknown parameters 2-dimensional, so we will assume that the width is constant. Here is such a model:
\begin{lstlisting}
f[s1_] := Table[Exp[-(s1 - i)^2/10], {i, 1, nBins}]
\end{lstlisting}
where {\tt nBins} is the length of the {\tt b} array of Section \ref{sec:poisson}, namely ${\tt nBins} = 30$. Figure~\ref{fig:signals} shows some examples of {\tt f[s1]} for various values of {\tt s1}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/multidim/signals.pdf}
\caption{Examples of {\tt f[s1]} defined in Section~\ref{sec:multidim}, for {\tt s1} equal to 1, 8, 15, 22, and 29.\label{fig:signals}}
\end{center}
\end{figure}
In this example we will reuse the background array {\tt b} of Section~\ref{sec:poisson}.
\begin{lstlisting}
b = {21000., 14000., 10000, 7100., 4800., 3400., 2300., 1600., 1100., 740., 500, 350., 230., 160., 100, 70., 46., 30., 20., 13., 8.2, 5.2, 3.2, 2.0, 1.2, 0.71, 0.42, 0.24, 0.13, 0.074};
\end{lstlisting}
To make the case more interesting, we will use data which are generated after injecting some signal on top of this background. Specifically, the injected signal will be distributed according to $50\cdot e^{-\frac{(i-10)^2}{5}}$, where $i$ is the bin index. This injected signal is on purpose narrower than {\tt f[10]}, to show what happens when the actual signal shape is not exactly like the hypothesis one uses to interpret the data. So, here are the data of this example:
\begin{lstlisting}
d = {20985, 13927, 9899, 7139, 4821, 3398, 2348, 1617, 1079, 798, 555, 365, 224, 163, 88, 75, 52, 31, 21, 11, 8, 2, 5, 3, 0, 1, 0, 0, 0, 0}
\end{lstlisting}
It is simple to write the likelihood of the data, as a function of the two POIs ({\tt s1}, {\tt s2}):
\begin{lstlisting}
L0 = Exp[ Sum[d[[i]]*Log[b[[i]]], {i, 1, nBins}] ]
L[s1_,s2_] := Exp[ Sum[d[[i]]*Log[Max[0, b[[i]] + s2*f[s1][[i]]]], {i, 1, nBins}] - s2*Total[f[s1]] ] / L0
\end{lstlisting}
For computational reasons, to avoid enormous numbers, we divide {\tt L[s1,s2]} by {\tt L0}, which is proportional to the likelihood of the data when no signal is assumed. Notice that {\tt s1} is passed as an argument to {\tt f[s1]}, to determine the signal shape, and then the shape is scaled by {\tt s2}.
The prior of course needs to be defined in the same 2-dimensional space. For example, it could represent the presumption that {\tt s2} (the produced signal amount) has to be non-negative, while all values of {\tt s1} are considered equally likely:
\begin{lstlisting}
Prior[s1_,s2_] := UnitStep[s2]
\end{lstlisting}
It is interesting to demonstrate, in the 2-dimensional case, what would happen if we introduced some non-trivial presumption in the prior. Let's presume that {\tt s1} is more likely to be around 15 (which is at the middle of the spectrum), as expressed by the following prior:
\begin{lstlisting}
Prior[s1_, s2_] := UnitStep[s2]*Exp[-(s1-15)^2/5]
\end{lstlisting}
Figure~\ref{fig:2dposteriors1} shows the shape of the posterior (ignoring the normalization constant) for both priors. The posterior, up to a normalization constant, is:
\begin{lstlisting}
Posterior[s1_, s2_] := L[s1,s2]*Prior[s1,s2]
\end{lstlisting}
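The (unnormalized) posterior can then be visualized with a contour plot; the ranges and resolution below are indicative, and the evaluation can take a while, since every sampled point involves the sum over all bins:
\begin{lstlisting}
ContourPlot[Posterior[s1, s2], {s1, 5, 20}, {s2, 0, 120}, PlotPoints -> 40, FrameLabel -> {"s1", "s2"}]
\end{lstlisting}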
The posterior with uniform prior, which has the same shape as the likelihood function, does not have its maximum exactly at {\tt (s1,s2)=(10,50)}, and the reason is twofold:
\begin{itemize}
\item The data ({\tt d}) are {\em consistent} with injected signal of {\tt (s1,s2)=(10,50)}, but they are ultimately the result of Poisson random fluctuations in each bin, so it is to be expected that the best-fitting {\tt (s1,s2)} will be close to that point, but not exactly there.
\item The signal shape {\tt f[s1]} that is used to compute the likelihood is wider than the actual signal that has been injected, on purpose, to demonstrate this quite plausible scenario: Nature may produce some signal whose shape we do not know exactly, so we may be interpreting the data to infer the parameters of a somewhat different signal.
\end{itemize}
For comparison, Fig.~\ref{fig:2dposteriors2} shows the same result, with exactly the same {\tt b} and {\tt d}, when {\tt f[s1]} has been modified to have the same width as the injected signal:
\begin{lstlisting}
f[s1_] := Table[Exp[-(s1-i)^2 / 5], {i, 1, nBins}]
\end{lstlisting}
The difference is that, when the prior is uniform (red contours in Fig.~\ref{fig:2dposteriors2}), the posterior is narrower in {\tt s1}. This makes sense; it is clearer where the signal is when we have gotten the signal width right. As a result, the effect of the non-uniform prior is quite different in Fig.~\ref{fig:2dposteriors2} than in \ref{fig:2dposteriors1}: The prior ``pulls'' the posterior towards {\tt s1}=15, but the likelihood is larger around ${\tt s1}\simeq 10$, so the resulting posterior has two local maxima, of which the one near {\tt s1}=15 prevails with greater probability density.
It is also worth noting that, in both Fig.~\ref{fig:2dposteriors1} and \ref{fig:2dposteriors2}, the non-uniform prior in {\tt s1} not only pulls {\tt s1} towards 15, but also changes the most likely value of {\tt s2}. This happens because {\tt s1} and {\tt s2} are correlated, as one can see from the asymmetric shape of the contours. This can only be appreciated in a multidimensional space, where there is room for correlations: The prior may be factorized to one part that depends only on {\tt s1} and another that depends only on {\tt s2}, but its effect on the posterior is not factorized in a similar way; a change in the prior with respect to {\tt s1} will modify the posterior in all dimensions.
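If one wishes to quote a result for {\tt s2} alone, irrespective of {\tt s1}, the 2-dimensional posterior can be marginalized numerically. A minimal sketch, where {\tt s1} is restricted to the range of the spectrum to keep the integrals finite even when the prior is uniform in {\tt s1} (the ranges are illustrative):
\begin{lstlisting}
norm2d = NIntegrate[L[s1, s2]*Prior[s1, s2], {s1, 1, 30}, {s2, 0, Infinity}];
PosteriorS2[s2_] := NIntegrate[L[s1, s2]*Prior[s1, s2], {s1, 1, 30}]/norm2d
\end{lstlisting}
{\tt PosteriorS2} can then be plotted and integrated to extract quantiles, exactly like the one-dimensional posteriors of the previous sections.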
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/multidim/posteriors1.pdf} \label{fig:2dposteriors1}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/multidim/posteriors2.pdf} \label{fig:2dposteriors2}}
\caption{Red: Contour plot of the posterior PDF corresponding to a prior that is uniform in {\tt s2}. Blue: The posterior corresponding to a prior where {\tt s1} is distributed around 15, according to $e^{-({\tt s1}-15)^2/5}$. Left: The signal shape ({\tt f[s1]}) assumed to compute the likelihood ({\tt L[s1,s2]}) of the data is wider than the injected signal. Right: {\tt f[s1]} has been modified to have the same width as the signal that is actually injected in the data. See discussion in Section~\ref{sec:multidim}. \label{fig:2dposteriors}}
\end{center}
\end{figure}
\section{Inclusion of systematic uncertainty}
\label{sec:systematics}
Systematic uncertainties are uncertainties about assumptions which affect the measurement. If these assumptions were slightly different, within their own uncertainty, that would have an effect on the measurement. To quantify this effect, we first need to use parameters to quantify the assumptions. These parameters are called ``nuisance parameters''.
The procedure to take these uncertainties into account starts by treating the nuisance parameters as if they were POIs, alongside the actual POIs. This leads to a multi-dimensional space of parameters, where a prior needs to be defined, and a posterior is computed based on the data. The posterior PDF\ can then be integrated along the dimension(s) of the nuisance parameter(s), leaving only the actual POIs as free variables in the posterior.
Let's write this analytically, denoting the nuisance parameter(s) with $n$, and the actual POI with $s$. From Bayes' theorem,
\begin{equation}
p(s,n|{\rm data}) = \frac{L({\rm data}|s,n) \pi(s,n) }{ \mathcal{N}} ,
\end{equation}
where
\begin{equation}
\mathcal{N} \equiv \iint L({\rm data}|s,n)\pi(s,n)\;ds\;dn.
\end{equation}
Then, since we are not interested in the actual value of $n$, but only of $s$, the posterior we actually care about is
\begin{equation}
p(s|{\rm data}) = \int p(s,n|{\rm data})\;dn = \frac{1}{{\mathcal{N}}} \int L({\rm data}|s,n) \pi(s,n)\;dn. \label{eq:posteriorNuis1}
\end{equation}
If the prior $\pi(s,n)$ is factorable as:
\begin{equation}
\pi(s,n) = \pi_s(s) \; \pi_n(n), \label{eq:priorFactor}
\end{equation}
then
\begin{equation}
p(s|{\rm data}) = \frac{\pi_s(s)}{\mathcal{N}} \int L({\rm data}|s,n) \pi_n(n)\;dn
\end{equation}
This integral can be read as ``the expected likelihood function, over all possible values of the nuisance parameter $n$'', which can be denoted:
\begin{equation}
p(s|{\rm data}) = \frac{\pi_s(s)}{\mathcal{N}} \langle L({\rm data}|s,n) \rangle_{n} \label{eq:posteriorNuis2}
\end{equation}
Notice the similarity between eq.~\ref{eq:posteriorNuis2} and eq.~\ref{eq:posteriorPoisson}. The only difference is that the likelihood at $s$ is replaced by the average likelihood. If one wishes to try a different prior for $s$ he can do it by just changing $\pi_s(s)$, without having to recalculate the average likelihood. This can be a great advantage in practical applications, where calculating the average likelihood (namely, performing the ``convolution'' of the nuisance parameters) is time-consuming.
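To exploit this in practice, one can define the average likelihood once and then reuse it with any $\pi_s(s)$. A minimal sketch, assuming a two-argument likelihood {\tt L[s,n]} like the one that will be defined in Section~\ref{sec:convMean}, and approximating the integral over $n$ by a simple sum:
\begin{lstlisting}
PriorN[n_] := Exp[-(n - 1)^2/(2*0.1^2)] (* example pi_n, not normalized *)
AvgL[s_] := Sum[L[s, n]*PriorN[n], {n, 0.5, 1.5, 0.01}] (* <L>_n up to a constant *)
PriorS[s_] := UnitStep[s]
NormConst = NIntegrate[PriorS[s]*AvgL[s], {s, -Infinity, Infinity}];
Posterior[s_] := PriorS[s]*AvgL[s]/NormConst
\end{lstlisting}
Trying a different $\pi_s(s)$ then only requires redefining {\tt PriorS} and recomputing {\tt NormConst}; the expensive {\tt AvgL} remains unchanged.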
Equation~\ref{eq:posteriorNuis2} is based on the condition that the prior is factorable as in eq.~\ref{eq:priorFactor}. This condition is easy to satisfy, and actually most intuitive prior choices would satisfy it. Usually the nuisance parameters express some uncertainty about the experimental conditions, like the actual detector response etc. There is no reason to correlate, in the prior, the true cross-section of a process with the nuisances of the detector\footnote{The posterior $p(s,n|{\rm data})$ may indicate that $s$ and $n$ are correlated, but don't confuse the prior with the posterior; eq.~\ref{eq:priorFactor} concerns just the prior.}. However, this may not be the case for theoretical nuisance parameters, which may be intimately related to $s$ even a-priori. After all, eq.~\ref{eq:priorFactor} refers to the prior, so someone may wish to assume some $\pi(s,n)$ that isn't factorable, just because that's what he finds interesting. We cannot prevent that, so the numerical examples below do not rely on the assumption of eq.~\ref{eq:priorFactor}, but make use of eq.~\ref{eq:posteriorNuis1}, which is generally true.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/signalsMean.pdf} \label{fig:convSig1}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/signalsRMS.pdf} \label{fig:convSig2}}
\caption{Left: Examples of the signal {\tt f[n]} defined in Section~\ref{sec:convMean}, for {\tt n} equal to 0.9, 1 and 1.1.
Right: Examples of the {\tt f[n]} of Section~\ref{sec:convRMS}, for {\tt n} from 0.5 to 1.5. \label{fig:convSig}}
\end{center}
\end{figure}
\subsection{Example of uncertainty in signal position}
\label{sec:convMean}
For example, let's take the background and data of the example in Section~\ref{sec:poisson}. The goal now is to infer the amount of produced signal ($s$), where the signal is known to be Gaussian-distributed around bin 15, with standard deviation equal to 3 bins. However, there is some doubt about the actual position of the signal; maybe the mean is not exactly 15. This could reflect, for example, an uncertainty about the {\em actual} detector energy response, if the bins are defined in an observable which depends on energy.
Let's parametrize this uncertainty using a nuisance parameter $n$, such that the signal peaks at $15\cdot n$. The following lines implement this parametrization. The array {\tt f} is a function of {\tt n}, and is normalized to sum {\tt A = 0.49}, simply to keep the same acceptance as in Section~\ref{sec:poisson}.
\begin{lstlisting}
A = 0.49 (* some arbitrary acceptance *)
f[n_] := A * Table[Exp[-(15*n - i)^2/(2*3^2)], {i, 1, nBins}] / Sum[Exp[-(15*n - i)^2/(2*3^2)], {i, 1, nBins}]
\end{lstlisting}
Figure~\ref{fig:convSig1} demonstrates this {\tt f[n]}.
Then we compute, up to a constant term which will be absorbed in the final normalization, the likelihood function {\tt L[s,n]}, which corresponds to $L(\text{data}|s,n)$ of eq.~\ref{eq:posteriorNuis1}. To avoid problematically large numbers, we divide by the constant {\tt L0}, which corresponds to assuming no signal:
\begin{lstlisting}
L0 = Exp[Sum[d[[i]]*Log[b[[i]]], {i, 1, nBins}]];
L[s_, n_] := Exp[Sum[d[[i]]*Log[Max[0, b[[i]] + s*f[n][[i]]]], {i, 1, nBins}] - s*A]/L0
\end{lstlisting}
Note that {\tt f[n]} in this example is constructed to have $\sum_{i=1}^{\tt nBins} f_i(n) = {\tt A} = 0.49$, for any choice of $n$. In a more general case, where the acceptance depends on $n$, one would replace {\tt A} by {\tt Total[f[n]]}, to compute the acceptance at the same time with {\tt L[s,n]}. This would make computation slightly slower, which is why it is avoided here.\footnote{If one uses compiled code for these computations, he will probably not notice any difference in performance, but Mathematica, in its simplest version, is an ``interpreted'' language, not compiled, which makes it considerably slower.}
We need to define a prior PDF (up to a constant), which will be assumed to be uniform in {\tt s}, allowing only positive values of {\tt s}, and Gaussian in {\tt n}, with maximum probability density at {\tt n = 1} and standard deviation equal to 0.1:
\begin{lstlisting}
Prior[s_, n_] := UnitStep[s] * Exp[-(n - 1)^2/(2*0.1^2)]
\end{lstlisting}
The posterior $p(s,n|\text{data})$, before integration along $n$, is given (up to a constant) by the product {\tt L[s,n] * Prior[s,n]}, which is shown in Fig.~\ref{fig:convPosteriorMeanSN}.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/posteriorMeanSN.pdf} \label{fig:convPosteriorMeanSN}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/posteriorRMSSN.pdf} \label{fig:convPosteriorRMSSN}}
\caption{Left: The contours of $p(s,n|\text{data})$ from the example in Section~\ref{sec:convMean}.
Right: The contours of $p(s,n|\text{data})$ from the example in Section~\ref{sec:convRMS}.}
\end{center}
\end{figure}
The next step is to ``integrate out'' $n$, in which we are not really interested. Here, this is done using a simple approximation of the integral, where we break the interval $n\in[0.5,1.5]$ into 100 steps of size 0.01, and we approximate
\begin{eqnarray}
\int_{-\infty}^\infty L(\text{data}|s,n) \pi(s,n)\; dn &\simeq& \int_{0.5}^{1.5} L(\text{data}|s,n) \pi(s,n) \;dn \\
&\simeq& \sum_{i=1}^{100} L(\text{data}|s, n_i) \pi(s, n_i) \cdot 0.01, \label{eq:trapezoid} \\
\text{where } n_i &=& 0.5 + i\cdot 0.01.
\end{eqnarray}
This approximation is justified by $\pi(s,n)$ being almost zero for $n>1.5$ or $n<0.5$, and eq.~\ref{eq:trapezoid} being a simple numerical integration method, admittedly not the most advanced, but simple enough to implement in the following few lines:
\begin{lstlisting}
integ[s_] := Sum[L[s, n]*Prior[s, n], {n,0.5,1.5,0.01}]
normConst = NIntegrate[integ[s], {s, -Infinity, Infinity}]
Posterior[s_] := integ[s] / normConst
\end{lstlisting}
The function {\tt integ[s]} corresponds to the result of eq.~\ref{eq:trapezoid}. The constant 0.01 has been omitted, or rather absorbed by the {\tt normConst} normalization constant computed in line 2.
Finally, the (normalized) posterior PDF\ of $s$ is given in line 3, which can now be plotted, or used to compute its 95\% or any other quantile, as shown in Section~\ref{sec:poisson}.
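For example, the 95\% credibility level upper limit after the convolution is obtained with the same {\tt FindRoot} recipe as before (the starting value 100 is arbitrary):
\begin{lstlisting}
FindRoot[NIntegrate[Posterior[s], {s, 0, x}] - 0.95, {x, 100}]
\end{lstlisting}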
Figure~\ref{fig:convPosteriorMeanS} shows the resulting posterior, after this convolution of the nuisance parameter $n$, and compares it to the posterior one would get if there were no uncertainty in $n$, namely, if $\pi(s,n)$ were $$\pi(s,n)=\Theta(s)\cdot \delta(n-1),$$ where $\Theta(s)$ is the step function represented in Mathematica by {\tt UnitStep[s]}, and $\delta(n-1)$ is the Dirac $\delta$, pinning $n$ to 1. The latter posterior, which is unaffected by systematic uncertainty, is given simply by:
\begin{lstlisting}
normConst = NIntegrate[ L[s,1]*Prior[s,1] , {s,-Infinity,Infinity}]
Posterior[s_] := L[s,1]*Prior[s,1] / normConst
\end{lstlisting}
This comparison shows that the convolution of $n$ makes the posterior wider, and the upper limit worse (looser). Specifically, the upper limit at 95\% credibility level without systematic uncertainty is 131.315, and with this uncertainty it becomes 153. However, it is not always true that inclusion of systematic uncertainty loosens the upper limit. We will see such an example in Section~\ref{sec:convRMS}.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/posteriorsMean.pdf} \label{fig:convPosteriorMeanS}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{figures/conv/posteriorsRMS.pdf} \label{fig:convPosteriorRMSS}}
\caption{Dashed red: The posterior $p(s|\text{data})$ without any systematic uncertainty. Solid blue: The same posterior after convoluting systematic uncertainty. Left: Using the systematic uncertainty of the example in Section~\ref{sec:convMean}. Right: Using the systematic uncertainty of Section~\ref{sec:convRMS}.}
\end{center}
\end{figure}
\subsection{Example of uncertainty in signal width}
\label{sec:convRMS}
In this example we follow the same steps as in Section~\ref{sec:convMean}, except that we formulate {\tt f[n]} at the beginning in a different way. Here we wish $n$ to parametrize some uncertainty in the width of signal which is known to be Gaussian with mean equal to 15 and width somewhere near 3. Here is the only different command:
\begin{lstlisting}
f[n_] := A*Table[Exp[-(15 - i)^2/(2*(3*n)^2)], {i, 1, nBins}]/
   Sum[Exp[-(15 - i)^2/(2*(3*n)^2)], {i, 1, nBins}]
\end{lstlisting}
Figure~\ref{fig:convSig2} shows some examples of this {\tt f[n]}.
The prior is assumed to be the same as in the previous example. The resulting $p(s,n|\text{data})$ is shown in Fig.~\ref{fig:convPosteriorRMSSN}, and the final $p(s|\text{data})$ in Fig.~\ref{fig:convPosteriorRMSS}.
Interestingly, the effect of this uncertainty is much smaller than that of the uncertainty of Section~\ref{sec:convMean}. Not only is it much smaller, but it goes in the opposite direction: it makes the upper limit slightly stricter than if we had no uncertainty at all. Specifically, the 95\% upper limit moves from 131.315 to 130.825. This is admittedly a minuscule improvement, but it is possible to find an example where the improvement is noticeable. For example, if instead of the prior of Section~\ref{sec:convMean} we use a ``box'' prior in $n$:
\begin{lstlisting}
Prior[s_, n_] := UnitStep[s] * UnitStep[n-0.5]*UnitStep[1.5-n]
\end{lstlisting}
then the effect of this width systematic uncertainty is more visible (Fig.~\ref{fig:convFlatRMS}), and it changes the 95\% upper limit to 126.7, which is a more clear improvement.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/conv/posteriorsRMS_flat.pdf}
\caption{Same as Fig.~\ref{fig:convPosteriorRMSS}, except that this time the systematic uncertainty of Section~\ref{sec:convRMS} is convoluted using a prior which does not constrain $n$ to be Gaussian-distributed around 1$\pm$0.1, but gives $n$ equal probability to be anywhere between 0.5 and 1.5. \label{fig:convFlatRMS}}
\end{center}
\end{figure}
Many people are under the impression that systematic uncertainties have to always make limits worse, because ``less information has to make things worse'', or something along these lines. This is a verbal over-simplification of the actual mathematical procedure. Systematic uncertainty is not only ``less information''; it is also ``more possibilities''. Upper limits get worse (looser) when the data show an excess, and better (stricter) when there is a deficit. If we have an excess, and no uncertainty whatsoever, we are in a situation that disfavors the limit, and there is no chance the situation is any different. But if some systematic uncertainty is introduced, it might allow some scenarios where the situation is more favorable. If we average out all scenarios, which is what the convolution of eq.~\ref{eq:posteriorNuis1} does, then the limit might improve. Empirically, this doesn't happen very often, but it has been observed several times, in numerous analyses, including the numerical example above.
\section{Computing the coverage of limits}
\label{sec:coverage}
This section will please readers who like Frequentist limits. The ``holy grail'' in Frequentist limits is {\em coverage}. Frequentist constructions provide (or should provide at least) intervals of specific coverage, a typical choice in High Energy Physics being 95\%. Intervals of coverage 95\% are called ``95\% confidence intervals'' (CIs).
\paragraph{What is coverage?} To understand that, one needs first to realize that, even if the laws of Nature don't change, a different observation of Nature would result in different data; there are random fluctuations. Both Bayesian inference and Frequentist constructions use data as input, so, their outputs (PDFs and CIs) are also subject to statistical fluctuation. If the POI has some value ($v$), and we collect many (in principle infinite) independent datasets, and we compute an interval using each one of these datasets, we will find $v$ within our interval with frequency $c$. This number ($c$) is the coverage of the interval. It is a statistical property of the interval, and the procedure used to determine it. Obviously, the coverage may depend on $v$, and on the procedure used to find the interval.
Coverage is not only a property of Frequentist intervals; any interval, however it is defined, has some coverage.
To compute the coverage of a Bayesian credibility interval, we can write a loop which repeatedly creates pseudo-data consistent with some assumed value of the POI, repeats the Bayesian limit-setting procedure, and in the end counts how many times the assumed value of the POI was within the interval.
The following lines compute the coverage of an upper limit with 95\% credibility:
\begin{lstlisting}
b = {21000., 14000., 10000, 7100., 4800., 3400., 2300., 1600., 1100., 740., 500, 350., 230., 160., 100, 70., 46., 30., 20., 13., 8.2, 5.2, 3.2, 2.0, 1.2, 0.71, 0.42, 0.24, 0.13, 0.074};
f = {0, 0.0000105, 0.000335, 0.000485, 0.00015, 0.0008, 0.00115, 0.00425, 0.0022, 0.0034, 0.00495, 0.0055, 0.0095, 0.018, 0.0185, 0.028, 0.085, 0.21, 0.085, 0.0125, 0.0044, 0, 0.0000105, 0, 0, 0.000335, 0, 0, 0, 0};
nBins = Length[b]
A = Total[f]
L[s_] := Exp[Sum[d[[i]]*Log[Max[0, b[[i]] + s*f[[i]]]], {i, 1, nBins}] - s*A]/L0
Prior[s_] := UnitStep[s]
Posterior[s_] := L[s]*Prior[s]/NormConst;
v = 100;
nPseudo = 1000;
cred = 0.95;
answers = Table[0, {i, 1, nPseudo}];
Do[
d = Table[Random[PoissonDistribution[b[[i]] + v*f[[i]]]], {i, 1, nBins}];
L0 = Exp[Sum[d[[i]]*Log[b[[i]]], {i, 1, nBins}]];
NormConst = NIntegrate[L[s]*Prior[s], {s,-Infinity,Infinity}];
integ = NIntegrate[Posterior[s], {s, -Infinity, v}];
If[integ < cred, answers[[i]] = 1],
{i, 1, nPseudo}]
N[Total[answers]/nPseudo]
\end{lstlisting}
\begin{description}
\item[Line 1:] Define the background in each bin. Same as in Section~\ref{sec:poisson}.
\item[Line 2:] Define the signal distribution. Same as in Section~\ref{sec:poisson}.
\item[Line 3:] Number of bins. {\tt nBins} is 30 in this example.
\item[Line 4:] The acceptance to the signal, given by eq.~\ref{eq:acceptance}. In this example $A \simeq 0.49$.
\item[Line 5:] Define the likelihood function.
\item[Line 6:] Define a flat prior for $s>0$. It can obviously change, and the coverage will have some dependence on the prior, since the prior is part of the procedure that defines the Bayesian credibility interval.
\item[Line 7:] Define the formula for the posterior.
\item[Line 8:] The assumed amount of produced signal. Variable {\tt v} corresponds to the $v$ used above, in the definition of coverage. This is the amount of signal that will be added to the background to generate each set of pseudo-data, in line 13.
\item[Line 9:] Define how many iterations to make in the loop which starts in line 12 and ends in line 18. A large number of iterations will lead to a more precise estimation of the actual coverage.
\item[Line 10:] Define the credibility level of the upper limit whose coverage will be estimated. 0.95 means 95\% credibility level.
\item[Line 11:] Initialize an array of {\tt nPseudo} answers. The elements initially are all 0, and some of them will turn into 1 inside the loop, in line 17. Each element represents the output of a set of pseudo-data. If the element is 0, it means that the interval failed to contain the true POI value ($v$). If it is 1, it means that the interval succeeded to contain $v$, namely the upper limit is a number greater than $v$.
\item[Line 12:] Starting the loop.
\item[Line 13:] Define the data, which consist of Poisson fluctuations of the content of each bin, with mean equal to the background of the bin, plus the signal events that would end up in the bin if $v$ signal events were produced. Clearly, {\tt d} are data consistent with the hypothesis that $v$ signal events are produced.
\item[Line 14:] Calculate the constant {\tt L0} which is introduced in line 5 to make {\tt L[s]} easier to handle numerically.
\item[Line 15:] Compute the normalization constant which normalizes the posterior defined in line 7.
\item[Line 16:] Compute $\int_{-\infty}^v p(s|\text{data})\;ds$, and store it in variable {\tt integ}.
\item[Line 17:] If {\tt integ} is less than {\tt cred}, then register the value 1 in the {\tt answers} array, in the position that corresponds to the current pseudo-data set. The logic is that, if {\tt integ} is less than {\tt cred}, then the upper limit with credibility {\tt cred} must be some number greater than $v$. That's obvious, since the upper limit is defined as the $x$ which satisfies $\int_{-\infty}^x p(s|\text{data})\;ds = {\tt cred}$. This trick allows us to know if the interval covers $v$, without really computing the interval, which would be a more CPU-expensive computation.
\item[Line 18:] The loop closes, after {\tt nPseudo} iterations.
\item[Line 19:] Out of the {\tt nPseudo} trials, some have succeeded, in the sense that the interval covered the actual POI value ($v$). We can count these successes by summing the elements of the {\tt answers} array. Dividing by {\tt nPseudo}, we get an estimator of the success rate, which is, by definition, the coverage.
\end{description}
Running the above code, with the numbers given, returned coverage 0.960.
Smaller values of $v$ result in larger coverage, and as $v$ increases the coverage asymptotically becomes equal to the credibility, namely 0.95. This is true for any prior one may assume, and there are some special non-informative priors which make the convergence faster.
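To map the coverage as a function of $v$, the above procedure can be wrapped in a function and tabulated. A sketch (computationally heavy, since each value of $v$ requires {\tt nPseudo} pseudo-experiments; the grid of $v$ values is arbitrary):
\begin{lstlisting}
coverage[v0_] := Module[{answers = Table[0, {i, 1, nPseudo}]},
 Do[
  d = Table[Random[PoissonDistribution[b[[i]] + v0*f[[i]]]], {i, 1, nBins}]; (* d, L0, NormConst stay global, because L and Posterior read them *)
  L0 = Exp[Sum[d[[i]]*Log[b[[i]]], {i, 1, nBins}]];
  NormConst = NIntegrate[L[s]*Prior[s], {s, -Infinity, Infinity}];
  If[NIntegrate[Posterior[s], {s, -Infinity, v0}] < cred, answers[[i]] = 1],
  {i, 1, nPseudo}];
 N[Total[answers]/nPseudo]]
Table[{v0, coverage[v0]}, {v0, 50, 200, 50}]
\end{lstlisting}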
\section{Summary}
It has been shown how to compute posterior PDFs and limits to any arbitrary signal in the most common case of Poisson-distributed data (Section~\ref{sec:poisson}) and in the case of binomially distributed data (Section~\ref{sec:binomial}).
The treatment is described for signals that are not simply additive to the background, but interfere with it (Section~\ref{sec:nonAdd}).
It was then shown how to combine datasets in the most general case where the datasets are coming from dissimilar experiments and dissimilar observables (Section~\ref{sec:combo}).
Then the case of simultaneous estimation of multiple POIs was shown in Section~\ref{sec:multidim}.
All the above computations assumed no systematic uncertainties, until Section~\ref{sec:systematics}, where the principle was laid out to perform convolution of systematic uncertainties, and two complete examples were shown.
Finally, Section~\ref{sec:coverage} shows the way to compute the coverage of a Bayesian upper limit, which can be interesting to someone who, being used to Frequentist limits, may appreciate coverage.
Emphasis has been given to the practical implementation of all computations, and remarks have been made to gain some insight in the results.
\section{Acknowledgements}
G.C.\ thanks the theorists Michele Redi and Mads Frandsen for their encouragement to proceed with this work, expecting it to be welcomed with interest by many theorists and experimentalists.
He also thanks his ATLAS collaborators, Glen Cowan, Alex Read, Eilam Gross, David Adams, and Diego Casadei, for all our discussions\footnote{Thanking people for these discussions does not imply that they necessarily endorse everything written here. Responsibility lies with the author.}.
If you spot errors, thank you in advance for informing the author (\url{[email protected]}).
Finally, if you learned something useful that you can apply in your phenomenological or experimental research, please cite this document, to acknowledge the author for his effort. Thank you.
\section{Introduction}\label{introduction}
The binary system \object{PSR~B1259$-$63}/\object{LS~2883}\footnote{Star 2883 in the catalog of Luminous Stars in the Southern Milky Way \citep{stephenson71}. The use of SS~2883 should be avoided, see \cite{negueruela11}.} is formed by a young 48~ms radio pulsar in an eccentric orbit of 3.4~years around a massive main-sequence star \citep{johnston92, johnston94}. The parameters of the system are shown in Table~\ref{table:system}. The spectral type of the massive star, O9.5\,Ve, and some of the binary parameters have been recently updated by \cite{negueruela11}, who obtained a distance to the system of $2.3\pm0.4$~kpc. Close to the periastron passage the system displays non-thermal unpulsed emission that has been detected in radio \citep{johnston05}, X-rays \citep{cominsky94, uchiyama09, chernyakova09}, hard X-rays up to 200~keV \citep{grove95}, and very high energy (VHE; 0.1--100~TeV) $\gamma$-rays above $380$~GeV \citep{aharonian05, aharonian09}. In the range $\sim$0.1--100~GeV strict upper limits were obtained by EGRET close to the 1994 periastron passage \citep{tavani96}, and the source has not yet been observed by \textit{AGILE} and \textit{Fermi} close to periastron. The VHE emission is variable on orbital timescales, and is interpreted as the result of inverse Compton upscattering of stellar UV photons by relativistic electrons, which are accelerated in the shock between the relativistic wind of the young non-accreting pulsar and the wind of the stellar companion (see \citealt{maraschi81, tavani97, kirk99, dubus06, bogovalov08}, and references therein).
The high-mass binaries \object{LS~5039} and \object{LS~I~+61~303} have also been detected at VHE, and show a broadband spectral energy distribution (SED) similar to that of \object{PSR~B1259$-$63}/\object{LS~2883} \citep{dubus06}. In contrast to X-ray binaries, all three sources have the peak of the SED at MeV-GeV energies. For these reasons, they can be considered gamma-ray binaries. However, the nature of the compact objects in \object{LS~5039} and \object{LS~I~+61~303} is unknown because their masses are not well constrained by the system mass functions \citep{casares05a, casares05b, aragona09}, and no pulsations have been found. These systems have been extensively observed at VHE during several orbital cycles, while the observations of \object{PSR~B1259$-$63}, the only system with a confirmed pulsar, are scarce due to the long orbital period. Any observational link between the three gamma-ray binaries would shed light on the understanding of this kind of system.
\begin{deluxetable*}{l c c c }
\tablecaption{Parameters of the Pulsar, the Massive Star and the Binary System.
\label{table:system}}
\tablehead{
\colhead{Parameter} & \colhead{Symbol} & \colhead{Value} & \colhead{Reference}}
\startdata
Pulsar period & $P$ & $47.762506780(2)$~ms & 1 \\
Period derivative & $\dot{P}$ & $2.276554(2)\times 10^{-15}$ & 1 \\
Characteristic age & $\tau_{c}$ & $3.3\times 10^5$~yr & 2 \\
Surface magnetic field & $B$ & 3.3$\times 10^{11}$~G & 2 \\
Spindown luminosity & $\dot{E}_{\rm sp}$& $8\times 10^{35}$~erg~s$^{-1}$ & 3 \\
Spectral type & \nodata & O9.5\,Ve & 4 \\
Effective temperature & $T_{\rm eff}$ & $27\,500$--$34\,000$~K & 4 \\
Surface gravity & $\log~g$ & $3.7$--$4.1$ & 4 \\
Radius & $R_{1}$ & $8.1$--$9.7~R_\odot$ & 4 \\
Optical luminosity & $L_{\rm opt}$ & $2.4\times10^{38}$~erg~s$^{-1}$ & 4 \\
Mass & $M_{1}$ & $31~M_\odot$ & 4 \\
Distance & $d$ & $2.3\pm0.4$~kpc & 4 \\
Mass function & $f(M_{2})$ & 1.53~$M_\odot$ & 5 \\
Terminal wind velocity & $v_{\infty}$ & $1350\pm200$~km~s$^{-1}$ & 6 \\
Orbital period & $P_{\rm orb}$ & 1236.72432(2)~days & 1 \\
Reference epoch & $T_{0}$ & MJD~48124.34911(9) & 1 \\
Semimajor axis & $a_{\rm 2}$ & $7.2\pm0.4$~AU & 4\tablenotemark{a} \\
Orbit inclination & $i$ & $22\fdg2{\pm1.4}$ & 4\tablenotemark{a} \\
Eccentricity & $e$ & 0.8698872(9) & 1 \\
Argument of periastron & $\omega_{\rm 2}$ & $138\fdg6659(1)$ & 1 \\
Longitude of ascending node & $\Omega$ & $-40^{\circ}$ & See the text \\
Proper motion (right ascension) & $\mu_{\alpha}\cos\delta$ & $-1.4\pm2.7$~mas~yr$^{-1}$ & 7 \\
Proper motion (declination) & $\mu_{\delta}$ & $-3.2\pm1.9$~mas~yr$^{-1}$ & 7 \\
\enddata
\tablecomments{The values in parentheses refer to the uncertainty in the last digit at 1$\sigma$ level.}
\tablenotetext{a}{Derived from (4).}
\tablerefs{(1) \citealt{wang04}; (2) \citealt{wex98}; (3) \citealt{manchester95}; (4) \citealt{negueruela11}; (5) \citealt{johnston94}; (6) \citealt{mccollum93}; (7) \citealt{zacharias09}.}
\end{deluxetable*}
The radio emission from the \object{PSR~B1259$-$63} system is described most recently in \cite{johnston05}, which includes multiwavelength data from ATCA observations obtained during the periastron passages of 1994, 1997, 2000, and 2004 (hereafter we will refer to the epoch of each periastron passage as $\tau$). The emission has two non-thermal components. The first one is pulsed emission with a flux density of $\sim$2--5~mJy at 2.5~GHz and a nearly flat spectral index, which disappears approximately from 16~days prior to periastron ($\tau-$16) to 15~days after periastron ($\tau+$15). This has been interpreted as an eclipse of the pulsar when it crosses behind the equatorial circumstellar disk present around the massive star \citep{melatos95}. The second component is transient unpulsed synchrotron emission that appears at $\tau-$20~days and shows two peaks centered around $\tau-$10 and $\tau+$20~days, with flux densities at 2.5~GHz up to $\sim$15--20~mJy and $\sim$30--50~mJy, respectively. After the post-periastron peak, the flux density of the unpulsed emission decreases continuously, and it has been detected up to $\sim\tau+$100~days. This transient emission remains optically thin during the outbursts.
The broadband transient emission of \object{PSR~B1259$-$63}/ \object{LS~2883} is produced around periastron, when the stellar and pulsar winds interact strongly. The shocked material is contained by the stellar wind behind the pulsar, producing a nebula extending away from the stellar companion. Along this adiabatically expanding flow, the accelerated particles produce synchrotron emission from radio to X-rays \citep{tavani97, kirk99, dubus06, takata09}. The expected morphology depends on the magnetization parameter of the pulsar wind, $\sigma$, defined as the upstream ratio of magnetic to kinetic energy. These models predict that the radio emission extends up to several AU (corresponding to several milliarcseconds, mas, at 2.3~kpc), and that its structure should be variable on orbital timescales. This radio behavior has not been tested in \object{PSR~B1259$-$63}, which only displays unpulsed emission during a few months every 3.4~years.
Here we present the first VLBI radio images of \object{PSR~B1259$-$63}, obtained close to its 2007 periastron passage. The high-resolution images at three different orbital phases provide a direct view of the small-scale morphology of the source, which is comparable to the morphologies previously observed in \object{LS~5039} and \object{LS~I~+61~303}.
\section{Observations and data reduction}\label{observations}
\begin{figure*}[]
\resizebox{1.0\hsize}{!}{\includegraphics[angle=0]{fig1.eps}}
\caption{LBA images of PSR~B1259$-$63 at 2.3~GHz. North is up and east is to the left. The dates and the days after the periastron passage ($\tau$) are quoted at the top of each panel. The synthesized beam is displayed in the rectangle on the bottom-right corner of each image. The red crosses mark the region where the pulsar should be contained in each run (see the text). As a reference, the size of the major axis of the orbit of PSR~B1259$-$63/LS~2883 is shown in the first panel. For each image, the displayed contours start at 3$\sigma$ and increase by factors of $2^{1/2}$; the 1$\sigma$ rms close to the source in each image from left to right is 0.30, 0.66, and 0.15~mJy~beam$^{-1}$.
\label{fig:f1}}
\end{figure*}
\begin{deluxetable*}{c c c c c c c c }
\tabletypesize{\scriptsize}
\tablecaption{Observational Parameters for Each Run. \label{table:observations}}
\tablewidth{0pt}
\tablehead{
\colhead{Run} & \colhead{MJD} & \colhead{On-source Time} & \colhead{No. of Antennas} & \colhead{$\tau+$days\tablenotemark{a}} & \colhead{Orbital Phase\tablenotemark{a} ($^\circ$)} & \colhead{True Anomaly ($^\circ$)\tablenotemark{a}} & \colhead{$\theta$($^\circ$) \tablenotemark{a}}
}
\startdata
A & 54309.04--54309.46 & 4.65 & 5 & 1.1(2)--1.5(2) & 0.00087(14)--0.00120(14) & 9.1(1.4)--12.5(1.4) & 191 \\
B & 54328.97--54329.40 & 3.75 & 5 & 21.0(2)--21.4(2) & 0.01698(14)--0.01733(14) & 98.3(3)--99.1(3) & 279 \\
C & 54623.29--54623.67 & 4.20 & 4 & 315.3(2)--315.7(2) & 0.25496(14)--0.25527(14) & 165.97(1)--165.99(1) & 346 \\
\enddata
\tablenotetext{a}{Periastron passage at $\tau=$ MJD 54307.9710(1), from the fourth orbital solution obtained in \cite{wang04}.}
\tablecomments{$\theta$ is the mean true anomaly plus 180$^{\circ}$.}
\end{deluxetable*}
\object{PSR~B1259$-$63} was observed with the Australian Long Baseline Array (LBA) at 2.3~GHz (13~cm) on three epochs: 2007 July 28 (run~A), 2007 August 17 (run~B), and 2008 June 6 (run~C). The LBA observations were performed with five antennas of the array: Parkes, ATCA, Mopra, Hobart (not present in run~C), and Ceduna. The observational parameters of each of the $\sim10$~hr runs are shown in Table~\ref{table:observations}. The small number of antennas provides a rather poor $uv$-coverage, which makes the data calibration and imaging more difficult than for arrays with more elements because of the reduced baseline redundancy. The data were recorded at a bit rate of 512~Mbps per telescope, distributed in eight sub-bands (four for each of the right- and left-handed polarizations) with a bandwidth of 16~MHz each, and correlated using 64~frequency channels per sub-band, two-bit sampling, and 2~s of integration time. Hobart and Ceduna recorded at 256~Mbps (only two sub-bands per polarization). The data were correlated at Swinburne University using the DiFX software correlator \citep{deller07}, without applying pulsar gating or binning.
The observations were performed using phase referencing on the calibrator J1337$-$6509 (B1334$-$649), which has an angular separation of $4\fdg0$ from \object{PSR~B1259$-$63} and was correlated at $\alpha_{\rm J2000.0}=13^{\rm h} 37^{\rm m} 52\fs4443$ and $\delta_{\rm J2000.0}=-65\degr 09\arcmin 24\farcs900$. This reference position from the global VLBI solution 2006d\_astro\footnote{http://lacerta.gsfc.nasa.gov/vlbi/solutions/2006d/2006d\_apr.src} has an uncertainty of 13~mas. The cycle time was 6~minutes, spending half of the time alternately on the phase calibrator and the target source. The total flux density of the phase calibrator was $457\pm11$, $367\pm20$, and $491\pm6$~mJy for runs~A, B, and C, respectively. The source J0538$-$4405 (B0537$-$441) was used as a fringe finder for runs~A and B, and J1337$-$6509 was used for run~C. No astrometric check source was observed during the runs.
The data reduction was performed in AIPS\footnote{The NRAO Astronomical Image Processing System. http://www.aips.nrao.edu/}. Total electron content models based on GPS data obtained from the CDDIS data archive\footnote{The Crustal Dynamics Data Information System http://cddis.nasa.gov/} were used to correct phase variations due to the ionosphere. Standard instrumental corrections were applied (parallactic angle, instrumental offsets, and slopes between and within bands and bandpasses). Fringe fitting on the phase calibrator was performed with the AIPS task FRING, and the solutions were applied to the target source. A self-calibrated model of the calibrator was used as an input model for CALIB, which was used to reduce the amplitude instabilities on timescales longer than 10~minutes. The data were averaged in frequency and time, and CLEAN images were produced. For run B only, a single round of phase self-calibration was applied, to mitigate the residual atmospheric phase instabilities, which were more noticeable at this epoch. The final images were produced using a natural weighting scheme (robust 5 within the AIPS task IMAGR). For run~B, robust 2 and tapering were applied to avoid the presence of possible unreliable high-resolution features due to sidelobes of the synthesized beam. Self-calibration slightly affects measured properties such as extent and position angle (P.A.), while it preserves the morphology.
\section{Results}\label{results}
The resulting VLBI images at 2.3~GHz are shown in Figure~\ref{fig:f1}. Extended emission is detected at distances up to 50--55~mas ($120$--$130\pm20$~AU at $2.3\pm0.4$~kpc) during the two runs shortly after the periastron passage. The emission becomes gradually fainter from the peak toward the northwest, and no individual components have been found. The P.A. of the extended emission with respect to the peak is $\sim-67^{\circ}$ for run~A and $\sim-50^{\circ}$ for run~B. The emission in run~C, 315~days after the periastron passage, is dominated by a point-like source of a few mJy.
\begin{figure*}[]
\begin{center}
\epsfxsize=14cm
\epsffile{fig2.eps}
\caption{Computed trajectory of the nebular flow in the past. The contour plots are the same as in Figure~\ref{fig:f1} (left and center). The green ellipse represents the counterclockwise orbit. The longitude of the ascending node, $\Omega$, is set to $-40^{\circ}$. The different magnetizations used are displayed in the bottom-left panels. The axes units are AU ($100$~AU $\simeq43$~mas).}
\label{fig:f2}
\end{center}
\end{figure*}
\begin{deluxetable*}{cccccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Source Parameters for the Images Shown in Figure~\ref{fig:f1}.
\label{table:results}}
\tablewidth{0pt}
\tablehead{
\colhead{Run} & \colhead{$S_{\rm total}$} &\colhead{$S_{\rm peak}$} & \colhead{$\theta_{\rm HPBW}$}& \colhead{P.A.$_{\rm HPBW}$} & \multicolumn{2}{c}{Size} & \colhead{P.A.} & \colhead{$\Delta\alpha$} & \colhead{$\Delta\delta$} & \multicolumn{2}{c}{Separation} \\
\cline{6-7} \cline{11-12} \\
\colhead{} & \colhead{(mJy)} &\colhead{(mJy~beam$^{-1}$)} & \colhead{(mas)}& \colhead{($^{\circ}$)} & \colhead{(mas)} & \colhead{(AU)} & \colhead{($^{\circ}$)} & \colhead{(mas)} & \colhead{(mas)} & \colhead{(mas)}& \colhead{(AU)}
}
\startdata
A & 19.9 $\pm$ 1.4 & 10.4 $\pm$ 0.2 & 28.9 $\times$ 26.1 & $-49$ & 50 & 120 $\pm$ 20 & $-$67 & 11.3 $\pm$ 0.4 & 14.0 $\pm$ 0.5 & (10--20) $\pm$ 3 & (24--46) $\pm$ 7 \\
B & 46.7 $\pm$ 1.0 & 32.7 $\pm$ 0.4 & 31.0 $\times$ 25.1 & $-78$ & 55 & 132 $\pm$ 22 & $-$50 & 4.2 $\pm$ 0.1 & 11.3 $\pm$ 0.1 & (5--14) $\pm$ 3 & (12--31) $\pm$ 7 \\
C & 3.0 $\pm$ 0.4 & 2.8 $\pm$ 0.4 & 50.3 $\times$ 25.1 & $-14$ & $<$2.8 & $<$6.7 $\pm$ 1.1 &\nodata& 0.0 $\pm$ 0.6 & 0.0 $\pm$ 1.1 & \nodata & \nodata \\
\enddata
\tablecomments{The columns are run label, total and peak flux density at 2.3~GHz, synthesized beam (HPBW) parameters, size and P.A. of the extended emission, the position offset of the peak of the emission from the reference position (position of run~C), and the range of possible separations between the peak and the pulsar position.}
\end{deluxetable*}
As for all VLBI arrays, the absolute flux calibration of the LBA relies on injected noise calibration signals, and thus the absolute flux values reported in Table~\ref{table:results} should be taken as uncertain at the $\sim10\%$ level. The flux densities of runs~A and B are compatible with previous ATCA observations at the corresponding orbital phases (the ATCA data from the current observations were not correlated as an independent array). The flux density in run~C is compatible with the flux density of the pulsar. This is expected considering the lack of unpulsed emission at $\tau+150$ and $\tau+180$ in previous ATCA observations \citep{johnston05}.
The phase-referenced observations allow us to obtain relative astrometry between runs. Since no astrometric check source was observed, we do not have direct measurements of the astrometric uncertainties, and we will use the formal errors of a Gaussian fit obtained with JMFIT within AIPS. Given the relatively large calibrator--target separation of $\sim4^{\circ}$, we expect an additional systematic component to the position error of 1--5~mas due to the unmodeled ionosphere \citep{brisken00}. As a reference position for the plots we use the peak position of run~C, $\alpha_{\rm J2000.0}=13^{\rm h} 02^{\rm m} 47\fs6435(1)$ and $\delta_{\rm J2000.0}=-63\degr 50\arcmin 08\farcs636(1)$, which we consider to represent the pulsar position at MJD~54623.48. Assuming a neutron star mass of 1.4~$M_\odot$ and a stellar mass of 31$\pm5$~$M_\odot$ (see Table~\ref{table:system} for the system parameters), the mass function provides $i=22\fdg2$, and the semimajor axis of the pulsar orbit is $7.2$~AU, or $3.1$~mas for a distance to the system of $2.3$~kpc. The red crosses in Figure~\ref{fig:f1} mark the region where the pulsar should be located in each epoch. Their centers are placed at the position of run~C corrected for proper motion at the corresponding epochs (MJD 54309.25 for run A and MJD 54329.18 for run B). Since we do not know the orientation of the orbit in the sky, we plot as error bars the projected orbital separation of the pulsar with respect to run~C. The error bars also include the 1$\sigma$ uncertainties on the mass of the star (and hence the distance uncertainty), on the astrometry of run~C, and on the offset due to the proper motion. Finally, we have used the astrometry of runs~A and B to compute the range of possible separations between the peak of the emission and the pulsar, as shown in Table~\ref{table:results}. We consider all possible values of the longitude of the ascending node ($\Omega$), which determines the orientation of the orbit in the plane of the sky.
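As a consistency check on these numbers, the quoted angular scale follows directly from the small-angle relation, using the values of Table~\ref{table:system}:
\[
\theta \simeq \frac{a_{2}\,[{\rm AU}]}{d\,[{\rm pc}]}~{\rm arcsec} = \frac{7.2}{2300}~{\rm arcsec} \simeq 3.1~{\rm mas}.
\]
Conversely, the $\sim$50--55~mas extent of the detected emission corresponds to $\simeq$115--127~AU at 2.3~kpc, consistent with the $120$--$130\pm20$~AU quoted above.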
\section{Kinematical interpretation}\label{model}
The radio morphology at a given epoch depends on the spatial distribution of the synchrotron-emitting particles and their emission processes. Given the limitations of our data (only two images showing extended emission, and without accurate astrometry), we have used a simple kinematical model to check whether it can reproduce the extended structures detected.
We have considered the shock between the relativistic pulsar wind and a spherical stellar wind. The shock is produced at the standoff distance, the region where the pulsar and stellar wind pressures balance, as described in \cite{dubus06}. The evolution of the nebular flow after the shock is described in \cite{kennel84}. This should be considered as a first approximation to the much more complex hydrodynamic behavior of a shocked flow in a binary system, as shown in \cite{bogovalov08}. In the Kennel \& Coroniti approximation of a non-turbulent adiabatically expanding flow, the flow speed depends only on the magnetization parameter $\sigma$ when assuming $\sigma\ll1$. This allows us to compute the past trajectory of the flow produced behind the standoff distance, which depends on the separation of the components along the orbit, the mass-loss rate of the star ($0.6\times10^{-7}~M_\odot$~yr$^{-1}$ for a typical O9 star; \citealt{vink00}), the terminal wind velocity, $v_{\infty}$, and the spin-down luminosity of the pulsar, $\dot{E}_{\rm sp}$. With these restrictions, the only free parameters are the longitude of the ascending node, $\Omega$, which describes the orientation of the orbit, and the magnetization parameter, $\sigma$. Projecting the past trajectory of the flow on our VLBI images of runs~A and B, we found that the best match with the detected morphologies is obtained for $\Omega\simeq-40^{\circ}$ and a magnetization parameter of $\sigma\simeq0.005$, assuming an orbital inclination of $i=22\fdg2$. The obtained trajectories are shown in Figure~\ref{fig:f2} for three different values of $\sigma$. We note that the uncertainty in the distance to the system scales the size of the contours, but not the orbit and the trajectories, which are computed in AU. The range of magnetizations in the plots has been chosen to approximately reproduce the effect of keeping $\sigma=0.005$ and changing either the distance from 1.9 to 2.7~kpc, or $\Omega$ from $-35^{\circ}$ to $-45^{\circ}$, or varying the product $\dot{M}~v_{\infty}$ by two orders of magnitude.
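For reference, the standoff distance follows from ram-pressure balance between the two winds (e.g., \citealt{dubus06}). Defining the wind momentum-flux ratio and inserting the values of Table~\ref{table:system} (with $\dot{M}=0.6\times10^{-7}~M_\odot$~yr$^{-1}$ converted to cgs units),
\[
\eta \equiv \frac{\dot{E}_{\rm sp}}{\dot{M}\,v_{\infty}\,c} \simeq \frac{8\times10^{35}}{(3.8\times10^{18})(1.35\times10^{8})(3\times10^{10})} \simeq 0.05,
\]
the shock apex lies at $r_{\rm p} = D\sqrt{\eta}/(1+\sqrt{\eta}) \simeq 0.19\,D$ from the pulsar, where $D$ is the instantaneous orbital separation. This estimate considers the spherical wind only and serves merely as a consistency check on the trajectories shown in Figure~\ref{fig:f2}.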
The parameters of the circumstellar equatorial disk of the star are uncertain, but considering $\dot{M}_{\rm e}=5\times10^{-8}~M_\odot$~yr$^{-1}$ \citep{johnston96} and a wind velocity of $\sim$10~km~s$^{-1}$, the product $\dot{M}_{\rm e}~v_{\infty, \rm e}$ is $\sim$100 times smaller than the spherical wind contribution. If this equatorial component dominates during certain orbital phases, the flow trajectory would be closer to the orbit than the dashed models in Figure~\ref{fig:f2} for some specific regions of the more recent part of the trajectory. However, we do not attempt to accurately model this circumstellar disk, as its density, velocity, and crossing time are unknown.
We note that the astrometric errors can be of the order of $\sim$10~AU (see above). We also emphasize that this simple model cannot account for the complex magnetohydrodynamical turbulence of the real flow and should be considered as a first approximation to constrain $\sigma$. Previous studies have assumed a magnetization parameter $\sigma$ of around 0.01--0.02 for this pulsar (see \citealt{tavani97}; \citealt{dubus06}), considerably higher than our best-fit value, although comparably low values of $\sigma$ have been suggested for other systems \citep[e.g., the Crab pulsar;][]{kennel84}.
\section{Discussion and conclusions}\label{discussion}
The results presented in this Letter show that the particle accelerator within \object{PSR~B1259$-$63}/\object{LS~2883} can produce a flow of synchrotron-emitting particles that travels more than a hundred AU in projection. The total projected extent of the nebula is $\sim50$~mas, or $120\pm20$~AU, and the peak of the emission is clearly displaced from the binary system orbit (see Figure~\ref{fig:f1} and Table~\ref{table:results}). Similar morphologies and displacements have been found in the other two known gamma-ray binaries, \object{LS~5039} and \object{LS~I~+61~303}, although with smaller sizes and on shorter timescales \citep{ribo08,moldon10,massi04,dhawan06,albert08}.
There is an ongoing debate on the nature of the compact object and particle acceleration mechanisms in \object{LS~5039} and \object{LS~I~+61~303}. Although initially suggested to be accreting/ejecting microquasar systems \citep{paredes00,massi04}, they are now thought to contain young non-accreting pulsars \citep{maraschi81,dubus06,dhawan06,ribo08,moldon08}, which can explain their multiwavelength emission \citep{sierpowska08, cerutti08, bogovalov08, zdziarski10}. However, there are three basic issues that are not well understood for \object{LS~5039} and \object{LS~I~+61~303}, and comparisons with \object{PSR~B1259$-$63}/\object{LS~2883} can help to clarify the situation. First, the putative pulsar properties of these two sources are unknown, as no pulsations have been detected for these systems. The lack of pulsations can be explained by the intense stellar wind, which produces extremely high absorption for these small orbits: a 3.9~day orbit with separations between 0.1 and 0.2~AU for \object{LS~5039}, and a 26.5~day orbit with separations of 0.1--0.7~AU for \object{LS~I~+61~303} \citep{casares05a,casares05b,aragona09}. As a reference, the separation for \object{PSR~B1259$-$63} is in the range 0.9--13.4~AU, and the pulsations disappear for distances below $\sim1.6$~AU. Second, it is not clear if the massive stellar wind can confine the pulsar wind \citep{romero07}. VLBI images, like the ones presented here, can shed light on the shock conditions and geometry. Third, the observed SED and variability at GeV energies is not well understood \citep{abdo09_ls5039,abdo09_lsi,torres10}. \textit{Fermi} and \textit{AGILE} are observing \object{PSR~B1259$-$63} for the first time during the 2010 periastron passage, and are providing GeV data that can be compared with those already obtained for \object{LS~5039} and \object{LS~I~+61~303}. In this context, the high-resolution VLBI radio observations presented here establish a common link to test the similarities between the three systems.
In conclusion, our results provide the first observational evidence that non-accreting pulsars orbiting massive stars can produce variable extended radio emission at AU scales. Similar structures are also seen in \object{LS~5039} and \object{LS~I~+61~303}, in which the nature of the compact object is unknown because the detection of pulsations is challenging. The discovery presented here for the young non-accreting pulsar \object{PSR~B1259$-$63} reinforces the link with these two sources and supports the presence of pulsars in these systems as well. Planned LBA observations of \object{PSR~B1259$-$63} during the 2010 periastron passage will allow us to provide a full comparison with the behavior observed in these sources, which have been extensively monitored during several orbital cycles. We have also shown that the orientation of the orbit and the magnetization of the pulsar wind can be inferred from VLBI observations of the source. Several images at different orbital phases covering a wider range of true anomalies will allow for a complete modeling of the orbital changes of the extended emission. Finally, accurate VLBI observations of the pulsed emission during several orbits can provide the pulsar trajectory, from which we can directly obtain the proper motion of the binary system, the inclination and $\Omega$ of the orbit, and the distance to the system.
\acknowledgments
J.M., M.R., and J.M.P. acknowledge support by DGI of the Spanish Ministerio de Ciencia e Innovaci\'on (MICINN) under grants AYA2010-21782-C03-01 and FPA2010-22056-C06-02.
This work has been supported by the Consejer\'{\i}a de Innovaci\'on, Ciencia y Empresa of Junta de Andaluc\'{\i}a as research group FQM-322, and excellence grant FQM-5418.
J.M. acknowledges support by MICINN under grant BES-2008-004564.
M.R. acknowledges financial support from MICINN and European Social Funds through a \emph{Ram\'on y Cajal} fellowship.
The Australian Long Baseline Array is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility by CSIRO. A.T.D. is a Jansky Fellow of the National Radio Astronomy Observatory (NRAO). The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\section{Introduction}
Accurate clip-level video classification, utilising a rich vocabulary of sophisticated terms, remains a challenging problem.
One of the contributing factors is the complexity and ambiguity of the interrelations between linguistic terms and the actual audio-visual content of the video. For example, while a ``travel'' video can depict any location with any accompanying sound, it is the {\em intent of the producer} or even the {\em perception of the viewer} that makes it a ``travel'' video, as opposed to a ``news'' or ``real estate'' clip. Hence true {\em understanding} of the video's meaning is called for, and not mere {\em recognition} of the constituent locations, objects or sounds.
The recent Kaggle competition entitled ``Google Cloud \& YouTube-8M Video Understanding Challenge'' provides a unique platform to benchmark existing methods and to develop new approaches to video analysis and classification. It is based around the {\bf YouTube-8M (v.2)} dataset, which contains approximately 7 million individual videos, corresponding to almost half a million hours (50 years!), annotated with a rich vocabulary of 4716 labels \cite{45619}. The challenge for participants was to develop classification algorithms which accurately assign video-level labels.
Given the complexity of the video understanding task, where humans are known to use diverse clues, we hypothesise that a successful solution must efficiently combine different expert models. We pose two important questions: (i) how do we construct such diverse models, and how do we combine them? (ii) Do we need to individually train and combine discrete models, or can we simply train a very large/flexible DNN to obtain a fully trained end-to-end solution? The first question clearly links to ensemble-based classifiers, where a significant body of prior work demonstrates that diversity is important. However, do we know all the different ways to promote diversity in DNN architectures? On the second question, our analysis shows that training a single network results in sub-optimal solutions compared to an ensemble.
In the following section we briefly review the state of the art in video labelling and ensemble-based classifiers. We then introduce the Kaggle competition, including the dataset, performance measures and the additional features engineered and evaluated by the Yeti team. Next, in Section \ref{sec:dnns}, we describe the different forms of DNNs that were employed and quote the baseline performance of individual DNNs trained on different features. Section \ref{sec:exp} demonstrates that further gains in performance can be achieved by promoting diversification of DNNs during training through adjusted dropout rates, different architectures and, surprisingly, the use of over-fitted DNNs. We then analyse the link between the diversity of the DNNs in the final {\em Yeti} ensemble and the performance gains in Section \ref{sec:diverse_analysis}, and conclude in Section \ref{sec:conclusions}.
\section{Related Work}
We first overview some existing approaches to video classification before discussing ensemble-based classifiers. Ng et al. \cite{7299101} introduced two methods which aggregate frame-level features into video-level predictions: Long short-term memory (LSTM) and Feature pooling. Fernando et al. \cite{7458903} proposed a novel rank-based pooling method that captures the latent structure of the video sequence data.
Karpathy et al. \cite{6909619} investigated several methods for fusing information across temporal domain and introduced Multiresolution CNNs for efficient video-classification.
Wu et al. \cite{Wu} developed a multi-stream architecture to model short-term motion, spatial and audio information respectively. LSTMs are then used to capture long-term temporal dynamics.
DNNs are known to provide significant improvement in performance over traditional classifiers across a wide range of datasets. However, it was also found that further significant gains can be achieved by constructing ensembles of DNNs. One example is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) \cite{ILSVRC15}. Here, improvements up to 5\% were achieved over individual DNN performance (e.g. GoogLeNet\cite{googlenet}) by using ensembles of existing networks. Furthermore, all the top entries in this challenge employed ensembles of some form.
One of the key reasons for such a large improvement was found to be the diversity present across the different base classifiers (i.e. different classifiers specialise to different data or label subsets) \cite{Hansen,Krogh95}. An increase in the diversity of classifiers of equal performance will usually increase the ensemble performance. There are numerous methods for achieving this: random initialisation of the same models, or data modification using Bagging \cite{Breiman} or Boosting \cite{Boosting} processes. Recently, work was carried out on end-to-end training of an ensemble based on diversity-aware loss functions. Chen et al. \cite{NCL} proposed to use Negative Correlation Learning for promoting diversity in an ensemble of DNNs, where a penalty term based on the covariance of classifier outputs is added to an existing loss function. An alternative was proposed by Lee et al. \cite{Mheads} based on the approach of Multiple Choice Learning (MCL) \cite{MCL}. Here, DNNs are trained based on a loss function that uses the final prediction chosen from the individual DNN with the lowest independent loss value.
\section{Youtube-8M Kaggle competition}
\label{sec:data}
The complete Youtube-8M dataset consists of approximately 7 million Youtube videos, each approximately 2-5 minutes in length, with at least 1000 views each. There are 4716 possible classes for each video, given in a multi-label form. For the Kaggle challenge, we were provided with 6.3 million labelled videos (i.e. each video was associated with a 4716 binary vector for labels). For test purposes, approximately 700K unlabelled videos were provided. The resulting class test predictions from our trained models were uploaded to the Kaggle website for evaluation.
The evaluation measure used is called `GAP20'. This is essentially the mean average precision of the top-20 ranked predictions across all examples. To calculate its value, the top-20 predictions (and their corresponding ground-truth labels) are extracted for each test video. The sets of top-20 predictions and corresponding ground-truth labels for all videos are each concatenated into a long list. Both lists are then sorted according to the prediction confidence values, and the mean average precision is calculated on the resulting ranked list.
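For concreteness, a minimal NumPy sketch of this metric is given below. The variable names are ours, and we normalise by the number of positive labels retained in the top-20 lists; the official evaluation normalises by the total number of positive labels, which agrees with this whenever the top-20 lists capture all positives.
\begin{verbatim}
import numpy as np

def gap20(preds, labels, top_k=20):
    # preds, labels: (n_videos, n_classes); labels are binary
    confs, hits = [], []
    for p, l in zip(preds, labels):
        top = np.argsort(p)[-top_k:]        # top-20 classes per video
        confs.extend(p[top]); hits.extend(l[top])
    order = np.argsort(confs)[::-1]         # global sort by confidence
    hits = np.asarray(hits, dtype=float)[order]
    precision = np.cumsum(hits) / (np.arange(hits.size) + 1.0)
    return float((precision * hits).sum() / max(hits.sum(), 1.0))
\end{verbatim}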
Below we present the different features used for classification; the first two (FL, MF) were provided by Google, the remaining ones were computed by our team. For some features, we also quote the performance as a rough guide of the usefulness of the individual feature: this was computed by a 4k-4k-4k DNN with dropout $0.4$.
\begin{itemize}
\item{\bf Frame-level Deep Features (FL)}\\
In the Kaggle challenge, the raw frames (image data) of the videos were not provided. Instead, each video in the dataset was decoded at 1 frame-per-second up to first 300 seconds and then passed through an Inception-v3 network \cite{44903}. The ReLu activation values of the last hidden layer formed a frame-level representation (2048 dimensions), which was subsequently reduced to 1024 dimensions using a PCA transformation with whitening. Similar processing was performed on the audio stream, resulting in an additional 128-dimensional audio feature vector. Video and audio features are concatenated to yield a frame feature vector of 1152 dimensions. The set of frame-level deep features extracted for a video $I$ is denoted as $\mathcal X^{I}=\{x_{t}\in \mathbb{R}^{d}, t=1...T\}$.
\end{itemize}
The extracted features are then aggregated using state-of-the-art aggregation methods: mean aggregation, mean + standard deviation aggregation, ROI-pooling \cite{Radenovic2016CNNIR}, VLAD, Fisher Vectors \cite{Jegou12PAMI}, RVD \cite{RVD} \cite{BRVD} \cite{RVD_PAMI} and BoW.
\begin{itemize}
\item{\bf Video-Level Mean Features (MF)}\\
Google also provided the mean feature $\mu^I$ for each video, which was obtained by averaging frame-level features across the time dimension.
Our reference performance for the MF feature is $81.94\%$, but it can peak at $82.55\%$ with a 12k-12k-12k network and a dropout of $0.4$.
\item{\bf Video-Level Mean Features + Standard Deviation (MF+STD)}\\
We extract the standard deviation feature $\sigma^I$ from each video. The signature $\sigma^I$ is L2-normalised and concatenated with mean feature $\mu^I$ to form a 2304-Dim representation $\phi^I=[\mu^I;\sigma^I]$.
\item{\bf Region of Interest pooling (ROI)}\\
The ROI-pooling based descriptor, proposed by Tolias et al \cite{Radenovic2016CNNIR}, is a global image representation that achieves state-of-the-art performance in image retrieval and classification. We compute a new video-level representation using the ROI-pooling approach. More precisely, the frame-level features are max-pooled across 10 temporal-scale overlapping regions, obtained from a rigid grid covering the frame-level features, producing a single signature per region. These region-level signatures are independently L2-normalised, PCA transformed and subsequently whitened. The transformed vectors are then sum-aggregated and finally L2-normalised. The dimensionality of the final video-level representation is 1152. The ROI-based training architecture is presented in Fig. \ref{GDES}(b); it achieves $82.34\%$ with the 12k-12k-12k net.
\item{\bf Fisher Vectors, RVD and VLAD pooling (FV, RVD, VLAD)}\\
We encode the frame-level features using the classical Fisher Vector, RVD and VLAD approaches. Fisher Vector encoding aggregates local features based on the Fisher kernel framework, while VLAD is a simplified version of Fisher Vectors. The detailed experimental results show that mean pooling achieves significantly better classification accuracy than the FV, RVD and VLAD approaches (81.94\% vs. 81.3\%, 80.8\% and 80.4\%) [4k-4k-4k network].
\item{\bf BoW pooling (BoW)}\\
We compute BoW representations of the frame-level video features with vocabularies of 2k and 10k words. The BoW features are obtained by first applying K-means clustering across the frame-level deep features with either 2k or 10k clusters, and then counting the number of frames assigned to each cluster for each video (a brief sketch is given after this list). Finally, we L1-normalize this BoW vector to remove the effect of video length on the features. The base BoW performance is $78.1\%$ with the 4k-4k-4k net.
\end{itemize}
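The BoW pipeline above can be sketched as follows. We use scikit-learn's MiniBatchKMeans purely as an illustrative implementation choice, and \texttt{all\_frame\_features} is a placeholder for the matrix of sampled frame-level features used to learn the vocabulary.
\begin{verbatim}
import numpy as np
from sklearn.cluster import MiniBatchKMeans

K = 2048                       # or 10240 for the 10k vocabulary
kmeans = MiniBatchKMeans(n_clusters=K).fit(all_frame_features)

def bow(video_frames):         # (T, 1152) frame-level features
    ids = kmeans.predict(video_frames)
    hist = np.bincount(ids, minlength=K).astype(float)
    return hist / hist.sum()   # L1 norm removes video-length effect
\end{verbatim}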
\section{DNN-based Multi-Label Classifiers}
\label{sec:dnns}
This section describes the base neural network architectures that we used for multi-label predictions on this dataset.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{ROI.pdf}
\caption[\bf{DNN architectures}]{(a) Mean-features DNN. (b) ROI-features DNN. (c) Audio-visual fusion DNN.}
\label{GDES}
\end{figure}
\subsection{Fully Connected NN Architecture}
For our work, we use a 3-hidden layer fully connected neural network, with layers FC6, FC7 and FC8. The size of the input layer is dependent on the feature vectors chosen. These will be described in more detail in the sections below.
The activation function for all hidden units is ReLU. Additionally, we also employ dropout on each hidden layer.
The number of hidden units and dropout value will be detailed in Section \ref{sec:exp}.
The output layer is again a fully connected layer, with a total of 4716 output units, one for each class. In order to provide class-prediction probability values, each output unit uses a sigmoid activation function.
Since this challenge is a multi-label problem, we used the binary cross entropy loss function for training the DNNs. We have chosen the Adam optimization algorithm \cite{adam} for training, with a learning rate of $10^{-4}$. Using a learning rate of $10^{-3}$, as in the original paper, led to NNs getting stuck in local minima very early in training.
All the DNNs were trained to convergence, and the number of epochs required to achieve this ranged from 15 to 150 depending on the hyper-parameter settings chosen, as detailed in Section \ref{sec:exp}.
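A minimal sketch of this architecture is given below. We use PyTorch purely for illustration (the framework is not prescribed by the method itself), and the hyper-parameter values shown are one of the settings explored in Section \ref{sec:exp}.
\begin{verbatim}
import torch
import torch.nn as nn

class VideoLevelDNN(nn.Module):
    # three hidden layers (FC6--FC8) + sigmoid output layer
    def __init__(self, in_dim=1152, hidden=4096,
                 n_classes=4716, p_drop=0.4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(3):
            layers += [nn.Linear(d, hidden), nn.ReLU(),
                       nn.Dropout(p_drop)]
            d = hidden
        layers.append(nn.Linear(d, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.sigmoid(self.net(x))

model = VideoLevelDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCELoss()       # binary cross entropy per class
\end{verbatim}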
\subsection{Audio Visual Fusion}
This method comprises two stages: first, the audio and visual feature networks are trained separately to minimise the classification loss, and then the two networks are integrated in a fusion network consisting of two fully-connected layers.
1) Training audio and video networks: We first train the audio and video networks individually. This is conducted by connecting their features to three fully connected layers similar to FC6, FC7 and FC8, respectively. The size of all FC layers is 4096. Each FC layer is followed by a ReLu and a dropout layer. The output of the FC8 layer is passed through another fully connected layer FC9 which computes the predictions and finally updates the network parameters to minimise the cross entropy loss over the training data.
2) Training fusion networks: After training the audio and video networks, we discard their FC9 layers and connect their FC8 layers to the fusion network shown in Fig. \ref{GDES}(c). In this way, 4096-Dim audio and 4096-Dim video features are concatenated to form a 8192-Dim representation as an input to the fusion network. This fusion network contains two fully connected layers of size 8192, followed by a fully connected prediction layer and cross-entropy optimisation.
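A sketch of the fusion stage, in the same illustrative PyTorch style as above, is given below; \texttt{audio\_fc8} and \texttt{video\_fc8} denote the 4096-dimensional FC8 activations of the pre-trained audio and video networks.
\begin{verbatim}
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, dim=4096, n_classes=4716):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * dim, 8192), nn.ReLU(),
            nn.Linear(8192, 8192), nn.ReLU(),
            nn.Linear(8192, n_classes))

    def forward(self, audio_fc8, video_fc8):
        x = torch.cat([audio_fc8, video_fc8], dim=1)
        return torch.sigmoid(self.fc(x))
\end{verbatim}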
The model based on Audio-Visual Fusion achieved $82.28\%$, but added significant diversity to our ensemble.
\subsection{Ensemble of DNNs}
The test predictions from multiple DNNs that were trained separately with different architectures, input features and hyper-parameters can be combined together by averaging them. We have found such ensembles often provide significant improvements over the performance of individual DNNs. The details of the diversification process and ensemble construction is presented in Section \ref{sec:exp}.
\section{Diversification of DNNs}
\label{sec:exp}
It was found that the performance of individual models (architectures and input features) could be significantly improved when they were combined together into a DNN ensemble. However, in order to achieve these gains, it was necessary to build diverse DNNs. In this section, we describe a number of approaches that we have attempted, and that were mostly successful in achieving the aim of diversification of DNNs. These range from using different dropout rates and hidden-unit counts to the use of overfitted models and of different training subsets.
\subsection{Sizes of Hidden Layers}
For our experiments, we have considered the use of the following number of units for each hidden layer: 4096, 8192, 10240, 12288 and 16384. All the hidden layers within a model were set to have the same number of hidden units, as we did not see substantial gains by varying the hidden layer size within a model.
\subsection{Dropout Sizes}
In the process of training different DNNs, a number of different dropout values were used: 0.0 (no dropout), 0.25, 0.3, 0.4 and 0.5. As expected, we have found that the higher the dropout, the larger the number of epochs required for convergence to be achieved.
\subsection{Use of Overfitted DNNs}
We have also made use of overfitted models. We have found that individual DNNs have a validation GAP20 score that peaks after a certain number of training epochs (usually around 40--50 for large networks of ${>}$8K units). If training continues, the validation GAP20 score steadily decreases, implying that the model is overfitting to the training data. Existing practice typically discards these models and uses the model with the best validation score.
Counter-intuitively, using large network models that have overfitted was found to give a {\em larger} performance improvement to the ensemble of classifiers. This is despite their individual validation GAP scores being lower than the peak GAP scores reached many epochs earlier.
\subsection{Using Different Training Subsets}
Finally, we have explored how using different training subsets for building similar architectures can influence the final performance of the ensemble. To this end, we first trained a DNN ensemble using DNNs with the above different hidden units, dropouts and feature vectors (ROI, video mean, video mean + std. dev.) using the training dataset and validation set (excluding the last 100K validation examples) provided by Google.
We then produced another training set and 100K validation set that were split differently to the above. Next, DNNs with 4K-4K-4K, 4K-8K-8K, 8K-8K-8K, 10K-10K-10K, 12K-12K-12K, 14K-14K-14K and 16K-16K-16K architectures were trained on this separate training set using the video-mean features. These were used to form a separate DNN ensemble. We have found that this also provides an improvement when the outputs of both ensembles are linearly combined.
\subsection{Diversification-based Loss Function}
\label{sec:divbased}
Recently, there has been work on performing end-to-end learning of multiple DNN output layers that promote diversity \cite{Opitz2017} for the task of multi-class classification. Here, a multi-output layer DNN was proposed. The final output label is the class with the maximum votes from all the output layers. In order to learn this DNN, a ``diversity-aware'' loss function was proposed. This was a linear combination of the MSE error with the sum of cross-entropy of outputs for the different layers. The aim was not only to have each output layer minimise classification error, but to also provide classification outputs that are different from other output layers.
We have attempted to use a similar approach to sequentially train diversity-aware DNNs. In order to achieve this, we first train a single 3-layer fully connected DNN as described above. Our ensemble is initialised using this single DNN. The outputs for the training data of this ensemble is then recorded for subsequent use.
In order to add new DNNs into our ensemble, we wish to learn DNNs that minimise labelling errors {\em and} produce outputs that are different from those of the ensemble.
Learning the next DNN was performed by proposing a loss function that accounts for multi-label classification accuracy and is also diversity-aware. For the multi-label classification term, we have used the binary cross entropy. For the diversity-awareness term, we use the negative of the cross entropy between the current DNN and the ensemble output. The final loss function is a linear combination of the above two losses, with a combination parameter of $\lambda = 0.3$ for diversity and $0.7$ for multi-label accuracy. The new DNN is then added into the ensemble set and this step is repeated until a pre-defined ensemble size is reached (here we chose 4).
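One plausible realisation of this loss, again sketched in PyTorch for illustration, is given below; \texttt{ens\_pred} denotes the frozen, precomputed ensemble outputs for the batch.
\begin{verbatim}
import torch
import torch.nn.functional as F

def diversity_aware_loss(pred, labels, ens_pred,
                         lam=0.3, eps=1e-7):
    pred = pred.clamp(eps, 1 - eps)
    bce = F.binary_cross_entropy(pred, labels)
    # cross entropy w.r.t. the ensemble output; subtracting it
    # rewards predictions that disagree with the ensemble
    ce_ens = -(ens_pred * pred.log()
               + (1 - ens_pred) * (1 - pred).log()).mean()
    return (1 - lam) * bce - lam * ce_ens
\end{verbatim}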
\section{Experimental Results}
In this section, we shall provide results from the individual models. We also show how a significantly improved GAP20 score can be achieved by combining the individual classifiers into an ensemble. We further achieve improvement by performing linear combination on ensembles trained using different training sets.
\subsection{Performance of Individual Features}
In this section, we detail the baseline performances of DNNs that were trained on the different features described above.
\begin{table}[h!]
\centering
\begin{tabular}{|c c c c c|}
\hline
Features & Architecture & DropOut & GAP20 & Peak \\
\hline \hline
Mean & 4k-4k-4k & 0.25 & 81.94 & 81.94 \\
\hline
Mean & 4k-4k-4k & 0.3 & 82.01 & 82.01 \\
\hline
Mean & 4k-4k-4k & 0.4 & 82.08 & 82.08\\
\hline
Mean & 8k-8k-8k & 0.4 & 82.48 & 82.48\\
\hline
Mean & 12k-12k-12k & 0.4 & 82.31 & 82.55\\
\hline
ROI & 8k-8k-8k & 0.4 & 82.25 & 82.25\\
\hline
ROI & 12k-12k-12k & 0.4 & 82.12 & 82.34\\
\hline
Fusion & 8k-8k-8k & 0.4 & 82.28 & 82.28\\
\hline
Mean+sd & 8k-8k-8k & 0.3 & 82.1 & 82.1\\
\hline
Mean+sd & 10k-10k-10k & 0.4 & 82.54 & 82.54\\
\hline
Mean+sd & 12k-12k-12k & 0.4 & 82.41 & 82.6\\
\hline
\end{tabular}
\bigskip
\caption{Table showing the GAP performance of different architectures for the first ensemble, features and dropout settings. Shown are two GAP scores, one at the last epoch (GAP20) and another at the epoch where the peak GAP score was achieved. }
\label{table:indiv_perf}
\end{table}
It can be observed from Table \ref{table:indiv_perf} that the GAP20 score increases as the dropout rate is increased, keeping the rest of the network hyperparameters the same. The larger 8k-8k-8k architecture also performs significantly better than 4k-4k-4k with dropout 0.4. Furthermore, adding second-order statistics (standard deviation features) to the mean features increases the GAP20 from 82.0\% to 82.1\%. The ROI and Fusion DNNs perform marginally worse than the Mean DNNs. However, all the architectures presented add value to the overall performance.
\begin{table}[h!]
\centering
\begin{tabular}{|c c c c c|}
\hline
Features & Architecture & DropOut & GAP20 & Peak \\
\hline \hline
Mean & 4k-4k-4k & 0.25 & 81.87 & 81.87 \\
\hline
Mean & 4k-4k-4k & 0.3 & 81.92 & 81.92 \\
\hline
Mean & 4k-4k-4k & 0.4 & 82.01 & 82.01\\
\hline
Mean & 8k-8k-8k & 0.4 & 82.15 & 82.48\\
\hline
Mean & 10k-10k-10k & 0.4 & 82.18 & 82.28\\
\hline
Mean & 12k-12k-12k & 0.4 & 82.11 & 82.38\\
\hline
Mean & 14k-14k-14k & 0.4 & 82.16 & 82.24\\
\hline
Mean & 16k-16k-16k & 0.4 & 82.20 & 82.41\\
\hline
Mean+sd & 10k-10k-10k & 0.4 & 82.39 & 82.43\\
\hline
Mean+sd & 12k-12k-12k & 0.4 & 82.37 & 82.45\\
\hline
\end{tabular}
\bigskip
\caption{Table showing the GAP performance of different architectures for the second ensemble, trained with a different train-validation split, input features and dropout settings. Shown are two GAP scores, one at the last epoch (GAP20) and another at the epoch where the peak GAP score was achieved. }
\label{table:indiv_perf2}
\end{table}
\subsection{Performance of DNN Ensemble}
We have found that the overall GAP20 performance of the ensemble $E1$ formed in Table \ref{table:indiv_perf} was 83.884\% and that of ensemble $E2$ from Table \ref{table:indiv_perf2} was 83.634\% on the Kaggle leaderboard. Further improvement was obtained by linearly weighting the predictions of the two ensembles, with a weight $\alpha \in (0,1)$ for one ensemble and $1-\alpha$ for the other. The results can be seen in Fig. \ref{fig:weight_res}. The optimal GAP20 score achieved is 83.96\% on the Kaggle leaderboard using the combination $0.65\times E1+0.35\times E2$.
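The weight search itself is straightforward. With \texttt{preds\_e1} and \texttt{preds\_e2} placeholders for the averaged prediction matrices of the two ensembles on a validation split, and \texttt{gap20} as sketched in Section \ref{sec:data}:
\begin{verbatim}
import numpy as np

alphas = np.linspace(0.0, 1.0, 21)
scores = [gap20(a * preds_e1 + (1 - a) * preds_e2, labels)
          for a in alphas]
alpha_best = alphas[int(np.argmax(scores))]   # ~0.65 here
\end{verbatim}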
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{ This figure shows different linear combination values for combining the two ensembles trained with different training-validation splits.}
\label{fig:weight_res}
\end{figure}
An interesting discovery was that the use of overfitted DNNs can improve the generalisation performance when incorporated into an ensemble. We have found that for large DNNs, (8K and above hidden units), when models trained up to later epochs (100+) were used, the validation error of the ensemble further decreases. This is despite the {\em increase} of validation error in the individual models. We have found that the use of overfitted models resulted in an average of 0.671\% improvement in the ensemble GAP20 score, compared with 0.579\% when using peak-validation GAP models. One hypothesis is that the overfitted models are overfitting to different video and label subsets. This in turn promotes diversity across different DNNs used, which results in better generalisation of the ensemble.
The ensemble that was trained using sequential addition with the diversity-aware loss function (Section \ref{sec:divbased}) did not yield any improvement over a simple average of randomly initialised DNNs with different architectures. We found that a 4-DNN ensemble (8K-8K-8K DNNs) learnt this way yielded a GAP score of 82.15\%, and this did not improve by adding more DNNs.
\section{DNN Ensemble Diversity Analysis}
\label{sec:diverse_analysis}
It is generally agreed that greater output diversity of the member classifiers in an ensemble results in improved performance. Unfortunately, the measurement of diversity is not straightforward, and at present a generally accepted formulation does not exist \cite{Kuncheva}; at least ten different measures of diversity are described there.
For our purposes, suppose there are $M$ classifiers in our ensemble. Two diversity measures are relevant to our analysis: the first is based on the Pearson correlation coefficient and the second on the Generalised Diversity Measure.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{gap_scatter.pdf} \\
(a)\\
\includegraphics[width=0.8\columnwidth]{gap_heatmap_3.pdf} \\
(b)\\
\includegraphics[width=0.8\columnwidth]{divergence_heatmap.pdf} \\
(c)
\caption[]{a) A scatter plot showing the improvement in GAP score as a function of the models' diversity. b) shows the GAP improvement (in \%) for different DNN pairs and c) shows the corresponding diversity score. In b),c) the type of DNN is identified as $\langle $feature$\rangle$ $\langle$ hid units$\rangle$ $\langle$ dropout$\rangle$, where for feature: M is mean, S is std. dev. and R means ROI. }
\label{fig:gap_corr}
\end{figure}
\subsection{Correlation-based Diversity Analysis}
The first analysis is based on the Pearson correlation coefficient, defined as:
\[
R_{ij} = \frac{C_{ij}}{\sigma_i\sigma_j }
\]
where $i,j \in \{1,2,...,M\}$ and, for the classifiers indexed by $i$ and $j$, $R_{ij}$ is their correlation coefficient, $C_{ij}$ represents the covariance between these two classifiers and $\sigma_i,\sigma_j$ their respective output prediction standard deviations. A natural measure of diversity is then $1 - R_{ij}$: when the correlation is minimal (i.e. 0), diversity is maximal, and vice versa.
This method has the advantage of not requiring the classifier outputs to be binary, as is the case here.
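Computing this measure from the stored prediction matrices is direct; a minimal sketch:
\begin{verbatim}
import numpy as np

def diversity(p_i, p_j):
    # 1 - Pearson correlation of two prediction matrices
    r = np.corrcoef(p_i.ravel(), p_j.ravel())[0, 1]
    return 1.0 - r
\end{verbatim}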
When applied to the output predictions of the different classifiers in our ensembles, we find that a lower correlation is indicative of a greater improvement in the GAP20 score. This can be seen in Fig. \ref{fig:gap_corr}(a): a higher diversity score is strongly correlated with the magnitude of improvement in the ensemble GAP score. Additional detail on the diversity scores and the corresponding GAP20 improvement between pairs of DNNs is given in the respective heatmaps of Figs. \ref{fig:gap_corr}(b) and (c).
Additionally, we can use the diversity score to analyse the performance of overfitted models, as shown in Table \ref{table:overfit}. We observe that allowing a model to overfit past its highest validation score leads to an increase in its diversity with respect to the other models. Thus, whilst overfitting is detrimental to a single model's performance, ensembling overfitted models with lower individual scores provides a larger improvement to the ensemble.
\begin{table}[h!]
\centering
\begin{tabular}{|c| c | c|}
\hline
& GAP Improvement & Diversity Score \\
\hline \hline
Peak Model & 0.579\% & 0.0322 \\
\hline
Overfitted Model& 0.671\% & 0.0340 \\ \hline
\end{tabular}
\bigskip
\caption{ Table showing the GAP improvement and diversity score for ensembles that use models with peak validation GAP20 or overfitted models with suboptimal GAP20 validation scores. }
\label{table:overfit}
\end{table}
\subsection{Generalised Diversity Measure-based Analysis}
Our second analysis is inspired by the Generalised Diversity Measure proposed by Partridge et al. \cite{Partridge}. In this measure, the authors propose that maximum diversity exists between two classifiers if, given an example, an error made by one classifier is accompanied by a correct classification of another classifier.
In order to obtain more insight into the improvements of classifier addition into an ensemble, we propose to analyse the performance of classifiers using ``wrong example sets''.
Consider that each class has two sets of video examples, $N^+$ number of positive videos (label 1) and $N^-$ number of negative videos (label 0). Let these sets of videos be defined as $X^+ = \{x^+_1, x^+_2,...,x^+_{N^+}\}$ and $X^-= \{x^-_1, x^-_2,...,x^-_{N^-}\}$ respectively. Now, suppose we are given a classifier $h$, which can be an ensemble or single DNN. Correspondingly, the predictions given by $h$ on the different video sets are: $Y_h^+ = \{y^+_{h,1}, y^+_{h,2},...,y^+_{h,N^+}\}$ and $Y_h^- = \{y^-_{h,1}, y^-_{h,2},...,y^-_{h,N^-}\}$.
We can now extract the set of videos that are considered ``wrong'' with respect to some threshold $\theta \in (0,1)$:
\begin{eqnarray*}
\varepsilon^+_{h,\theta} & = & \{i \in \{1,2,..., N^+\} : 1 - y^+_{h,i} \geq \theta\}\\
\varepsilon^-_{h,\theta} & = & \{i \in \{1,2,..., N^-\} : y^-_{h,i} \geq \theta\}
\end{eqnarray*}
The final set of ``wrong examples'' for classifier $h$ is:
\begin{equation}
\varepsilon_{h,\theta} = \varepsilon^+_{h,\theta} \cup \varepsilon^-_{h,\theta}
\label{eq:wrong_ex}
\end{equation}
We can now use the Eq. \ref{eq:wrong_ex} to analyse the effect of combining all of these classifiers together into an ensemble. In particular, we would like to discover if individual classifiers produce errors for {\em different} videos. If this were the case, when the classifiers are combined together, the erroneous predictions of individual classifiers can potentially be diluted by correct predictions from other classifiers.
To achieve this, first suppose we have an ensemble of $M$ classifiers: $H = h_1, h_2, ..., h_M$, and we assume that the error rates of the classifiers are approximately equal. Next, the wrong-example sets are extracted using Eq. \ref{eq:wrong_ex} for each classifier, giving: $\varepsilon_{H,\theta} = \{\varepsilon_{h_1,\theta}, \varepsilon_{h_2,\theta}, ..., \varepsilon_{h_M,\theta}\}$.
Now, consider the intersection of the sets in $\varepsilon_{H,\theta}$:
\[
\Upsilon_{H,\theta} = \bigcap^M_{i=1} \varepsilon_{h_i,\theta}
\]
The set $\Upsilon_{H,\theta}$ consists of those examples for which {\em all} the classifiers in the ensemble gave wrong predictions (w.r.t. $\theta$). As such, the ensemble will not improve the predictions for any example in $\Upsilon_{H,\theta}$. Nonetheless, we find that the size of the set $\Upsilon_{H,\theta}$ either decreases or remains unchanged as we add new classifiers into the ensemble $H$.
Additionally, we find that the union of sets in $\varepsilon_{H,\theta}$
\[
\Upsilon'_{H,\theta} = \bigcup^M_{i=1} \varepsilon_{h_i,\theta}
\]
represents the total set of unique videos that were wrongly classified by at least one classifier. However, examples in $\Upsilon'_{H,\theta}$ that are {\em not} in $\Upsilon_{H,\theta}$ will have an overall improved prediction in the ensemble.
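The curves of Fig. \ref{fig:union_int} can be reproduced from the stored prediction matrices with elementary set operations; a sketch, with \texttt{all\_model\_preds} a placeholder list of per-model prediction matrices:
\begin{verbatim}
import numpy as np

def wrong_set(preds, labels, theta=0.9):
    # (video, class) pairs wrong by at least theta
    err = np.where(labels == 1, 1.0 - preds, preds)
    return set(zip(*np.where(err >= theta)))

sets = [wrong_set(p, labels) for p in all_model_preds]
for m in range(1, len(sets) + 1):
    inter = set.intersection(*sets[:m])
    union = set.union(*sets[:m])
    print(m, len(inter), len(union))
\end{verbatim}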
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{union_intersection.pdf}
\caption{ This graph shows the size of the sets representing the intersection of ``extremely wrong'' examples ($\theta = 0.9$) of individual classifiers in an ensemble, as more classifiers are added. Also shown is the size of the union of wrong examples that at least one classifier in the ensemble got significantly wrong. }
\label{fig:union_int}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{1theta0_9_hist.pdf}\\
(a)\\
\includegraphics[width=\columnwidth]{2theta0_9_hist.pdf}\\
(b)
\caption{ Shown here is how adding additional classifiers that are diverse into an ensemble diffuses the severity of wrong predictions. For clarity we have provided a zoomed-in view of prediction value histograms for wrong examples associated with very low predictions (+ve examples) (a) or very high predictions (-ve) examples (b). Shown are the histograms of examples with wrong predictions for two ensembles, one with 6 DNNs, and later when 5 more DNNs have been added. }
\label{fig:diffuse}
\end{figure}
Fig. \ref{fig:union_int} shows the size of $\Upsilon_{H,\theta}$ and $\Upsilon'_{H,\theta}$ as more classifiers are added to the ensemble used for this challenge, with $\theta = 0.9$. These sets represent the most severely mislabelled videos, and such examples have the greatest impact on decreasing the final GAP20 score. If this extreme mislabelling is due only to a small number of classifiers, then the ensemble should improve on their predictions (by means of accurate labelling from other classifiers).
Furthermore, if the above phenomenon is occurring, we expect to see the intersection of ``wrong examples'' sets from individual classifiers decrease in size as we add more classifiers into the ensemble. This can indeed be seen in Fig. \ref{fig:union_int}. Here, we find that the number of examples that are wrongly labelled by all the individual classifiers in the ensemble steadily decreases as the ensemble size increases. This indicates that the individual classifiers each label different subsets of videos wrongly, suggesting that diversity is present. This in turn results in a steady increase in the GAP score.
An additional confirmation of the diversity is that the size of the union of wrong example sets is increasing. The classifiers in the ensemble all have approximately the same accuracy. That means their wrong example sets will be approximately the same size. Thus, their union will only expand in size if these examples are different.
Finally, we present results where we ``track'' the movement of extremely wrong predictions as we expand the ensemble. We start by identifying the example videos that have prediction errors greater than $\theta = 0.9$. A histogram of their prediction scores is then built. We then obtain their predictions after a number of DNNs have been added, and construct an updated histogram. The result is shown in Fig. \ref{fig:diffuse}. Here, the baseline ensemble of 6 DNNs misclassified many examples and classes (approx. 42K), as can be observed in the blue histograms. However, after adding 5 further DNNs that were found to be diverse, many examples have smaller error scores, as shown in the green histograms. The entries corresponding to these wrong predictions therefore migrate further down the final sorted GAP list, improving the final GAP20 score.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have investigated factors controlling DNN diversity in the context of the ``Google Cloud and YouTube-8M Video Understanding Challenge''. We have shown that diversity can be cultivated by using different DNN architectures. Surprisingly, we have also discovered that diversity can be achieved through some unexpected means, such as model over-fitting and dropout variations. We have presented details of our overall solution to the video understanding problem, which ranked \#7 in the Kaggle competition (Yeti team - gold medal).
\section*{Acknowledgements}
The work in this paper was partially funded by Innovate UK under the iTravel project (Ref: 102811).
\section{Introduction}
The study of collective states of anyonic excitations is an exciting and yet relatively unexplored area of condensed matter physics. The nontrivial exchange behaviour of nonabelian anyons may be exploited for universal quantum computation,\cite{kitaev2003,nayak2008}
with the simplest suitable model being that of Fibonacci anyons. It has been suggested that, as the non-Abelian component of the $k=3$ $Z_k$-parafermion Read-Rezayi state,\cite{read1999} they may appear in the fractional quantum Hall state with filling fraction $\nu=12/5$.\cite{xia2004}
These systems are therefore presently of intense theoretical and experimental interest.
\citeauthor{feiguin2007} recently initiated the study of interacting non-Abelian anyons with the analysis of nearest neighbour interactions in \rcite{feiguin2007}, in which it was shown that a chain of Fibonacci anyons on the torus could be analysed by mapping it to an equivalent spin chain. This work was later extended to next-to-nearest neighbour interactions and $SU(2)_k$ anyons in Refs.~\olcite{trebst2008} and \olcite{trebst2008a}. These papers identified numerous critical phases, and scaling dimensions of the local scaling operators were extracted using exact diagonalisation by shifting and rescaling the resulting energy spectra of the Hamiltonians.\cite{cardy1996,difrancesco1997}
Local scaling operators are of interest as they may appear as perturbations of the critical Hamiltonian, and may be classified by whether or not they respect the topological symmetry of the system. For Fibonacci anyons undergoing an antiferromagnetic (AFM) interaction, the authors of \rcite{feiguin2007} show that this topological symmetry protects the critical system against all conformally relevant translation-invariant perturbations.
In this paper we investigate the relationship between periodic chains of anyons on the torus and on the disc, and their mappings to equivalent spin chains.
We show that while the natural definition of translation for a ring of anyons on the torus is closely related to that on the spin chain,
the same cannot be said for the natural definition of translation on the disc,
and a Hamiltonian which is translation invariant on the disc will only be translation invariant up to a defect on the corresponding spin chain.
The spectra of the same local Hamiltonian acting on two periodic chains of anyons, one on a torus and one on a disc, will therefore not in general coincide. We further show that
the energy spectrum obtained on the disc always constitutes a subset of the spectrum obtained on the torus, and that for a critical theory, the local scaling operators associated with this subset are precisely those operators which
respect the topological symmetry defined in \rcite{feiguin2007}.
We also show that similar considerations apply to open chains, where the spectrum of the theory, and for critical theories the inferred local scaling operator content, is once again affected by the topology of the surface on which the anyons are found.
\section{Anyonic states and operators\label{sec:ASO}}
Although many papers have been published which study the behaviour of anyonic systems on surfaces of various topologies,\cite{wen1989,einarsson1990,wen1990,wen1990a,wen1990b,kitaev2003,feiguin2007,trebst2008,trebst2008a,bais2009,gils2009,gils2009a,buerschaper2010,pfeifer2010,konig2010} little attention has been paid to
how the diagrammatic formalism may be used to explicitly develop the relationship between states on surfaces of various genus. In this section we address this deficit, beginning by reviewing the origin and formulation of the diagrammatic representation of states and operators for systems of anyons on surfaces of genus 0 (sphere, finite disc, and infinite disc) in \sref{sec:genus0}. This material may be familiar to many readers. However, we present it here in a manner intended to emphasise the relationship between anyon models and topological quantum field theories (TQFTs),\cite{witten1989,blok1990,blok1990a,wen1990a,wen1990d,read1992,bais2009}{} as we will exploit this relationship to generalise the formalism to surfaces of higher genus in \sref{sec:genus1+}. We will also explicitly examine the construction of the translation operator on surfaces of genus 0 and 1, as this will prove important to the study of translation invariant local Hamiltonians on the disc and the torus in Secs~\ref{sec:PBC}-\ref{sec:OBC}.
\subsection{Anyons on surfaces of genus 0\\(disc, sphere)\label{sec:genus0}}
\subsubsection{Diagrammatic representation of states\label{sec:diagstates_sphere}}
A system of anyons may be considered to consist of a collection of localised quasiparticle excitations in a two-dimensional medium, for example the topological liquid of a Fractional Quantum Hall (FQH) state.\cite{laughlin1983,arovas1984,girvin1987,read1989a,wilczek1990,read1990,prange1990,moore1991,read1992,das-sarma1997,read1999,xia2004,nayak2008} In general, a system may be considered anyonic if its particles may be described in terms of a Unitary Braided Tensor Category (UBTC).
In this paper we will concern ourselves only with anyon models which may be defined on the torus, known as \emph{modular} anyon models,\cite{kitaev2006,bonderson2007,bonderson2008} for which the properties of the quasiparticles admit description in terms of both a Unitary Braided Modular Tensor Category (UBMTC) and a 2+1D Topological Quantum Field Theory (TQFT)\cite{witten1989,blok1990,blok1990a,wen1990a,wen1990d,read1992,bais2009}{} of the Schwarz type.\cite{schwarz1978,kaul2005} Each of the quasiparticle excitations, or anyons, may then be characterised by a label, or charge, which corresponds to a label of the UBMTC.
However, providing a full description of such a system is in general more complicated than simply cataloguing the value and location of each non-trivial charge. This is because specifying the individual charges of two anyons, $a$ and $b$, does not necessarily uniquely determine the total charge of the pair ($a\times b$). These total charges are constrained by the fusion rules of the UBMTC, which may be written in terms of the multiplicity tensor $N^c_{ab}$ as
\begin{equation}
a\times b\rightarrow \sum_c N_{ab}^c c,\label{eq:fusionrules}
\end{equation}
but when there exist nonzero entries in $N_{ab}^c$ such that multiple terms appear on the right-hand side of this equation, the total charge of $a$ and $b$ may correspond to any of these values $c$ such that $N_{ab}^c\not=0$.
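As a concrete illustration, consider the Fibonacci anyon model, which will serve as a recurring example in this paper (the data quoted here are the standard properties of this model). It possesses only two charges, the vacuum $\mbb{I}$ and the Fibonacci anyon $\tau$, and its only non-trivial fusion rule is
\begin{equation}
\tau\times\tau\rightarrow\mbb{I}+\tau,
\end{equation}
so that $N^{\mbb{I}}_{\tau\tau}=N^{\tau}_{\tau\tau}=1$ and the total charge of a pair of $\tau$ anyons may be either $\mbb{I}$ or $\tau$.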
To specify these products, we represent the state of a system of anyons by means of a \emph{fusion tree} (\fref{fig:anyonstates_sphere}).
\begin{figure}[tp]
\includegraphics[width=246.0pt]{anyonstates_sphere}
\caption{Some possible fusion trees for a chain of six anyons with charges $a_1$ to $a_6$ on the disc or sphere. Labels $x_i$ denote intermediate fusion products which may not be uniquely determined by the fusion rules, and labels $u_i$ are associated with vertices and serve to enumerate multiple copies of a given charge for anyon models having some $N^c_{ab}>1$. Note that no vertex index is required for fusion to the vacuum state. Tree (ii) is constructed from tree (i) by means of an $F$ move [\protect{\fref{fig:basischange}}(i)], and tree (iii) is constructed from tree (i) by recognising, from \protect{\Eref{eq:fusionrules}}, that fusion with the vacuum state $\mbb{I}$ is trivial.
Diagram (iv) specifies a state $|\psi\rangle$ of $n$ anyons on the disc or sphere.\label{fig:anyonstates_sphere}}
\end{figure}%
Labels on the interior edges of the fusion tree graph correspond to the total charge of multiple anyons. For example in \fref{fig:anyonstates_sphere}(i), $x_1$ is the total charge of anyons $a_1$ and $a_2$ together, $x_2$ is the total charge of anyons $a_1$, $a_2$, and $a_3$ together, and so on. The set of valid labellings of a single fusion tree constitutes an orthogonal basis for the Hilbert space of a system of fixed anyons, and a labelling is deemed valid if all fusion vertices correspond to processes associated with non-zero entries in the multiplicity tensor $N_{ab}^c$. In this paper we will normalise all fusion tree bases using the diagrammatic isotopy convention given in Refs.~\onlinecite{bonderson2007} and \onlinecite{bonderson2008}. For free anyons, the co-ordinates of each anyon must be specified in addition to the fusion tree.
Although a single fusion tree does not explicitly state the outcome of all possible measurements, it is possible to convert between different fusion trees using procedures known as $F$ moves and braiding (\fref{fig:basischange}).
\begin{figure}[tp]
\includegraphics[width=246.0pt]{changeofbasis}
\caption{Manipulations capable of performing a change of basis on a fusion tree: (i) $F$ move. (ii) Braiding.\label{fig:basischange}}
\end{figure}%
In constructing a fusion tree, we have imposed a (possibly arbitrary) linear ordering on the anyons of the system. An $F$ move [\fref{fig:basischange}(i)] alters the structure of the fusion tree while preserving that linear ordering, permitting the computation of additional fusion products [e.g. $\tilde x_2$ in \fref{fig:anyonstates_sphere}(ii) is the combined charge of $a_3$ and $a_4$], while braiding [\fref{fig:basischange}(ii)] permits conversion between different linear orderings.\footnote{Note that in this instance, the process of braiding represents a passive transformation between equivalent bases representing the same physical state. We will subsequently also encounter the use of braiding to denote the active process of particle exchange (\protect{\sref{sec:discoperators}}).} Using these two operations it is possible to determine the probability amplitudes of different outcomes when measuring the total charge of any group of anyons regardless of the fusion tree structure on which the state is initially described.
The tensors $\left(F^{a_1a_2a_3}_{a_4}\right)_{(a_5u_1u_2)(a_6u_3u_4)}$ and $R^{a_1a_2}_{a_3}$ are specified by the UBMTC to which the system of anyons corresponds.
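For illustration, we record the standard solution for the Fibonacci model in a commonly used gauge choice. The only $F$ move which is not fixed to be trivial by the fusion rules is
\begin{equation}
\left(F^{\tau\tau\tau}_{\tau}\right)_{xy}=\begin{pmatrix}\phi^{-1}&\phi^{-1/2}\\\phi^{-1/2}&-\phi^{-1}\end{pmatrix}_{xy},\quad x,y\in\{\mbb{I},\tau\},
\end{equation}
where $\phi=(1+\sqrt{5})/2$ is the golden ratio, and vertex indices have been suppressed as all multiplicities satisfy $N^c_{ab}\leq 1$. The non-trivial braidings are $R^{\tau\tau}_{\mbb{I}}=e^{4\pi\mathrm{i}/5}$ and $R^{\tau\tau}_{\tau}=-e^{2\pi\mathrm{i}/5}$, in the chirality convention which we adopt in \sref{sec:spintranslate}.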
While the associated UBMTC describes a system of anyons in terms of individual quasiparticles, an equivalent description may also be made in terms of the diffeomorphism-invariant fields of a Schwarz-type 2+1D TQFT. Here, the 2D manifold on which the anyons exist becomes the spatial manifold of the TQFT, with individual anyons corresponding to punctures in this manifold (\fref{fig:punctures}).
\begin{figure}
\includegraphics[width=246.0pt]{punctures}
\caption{Anyonic quasiparticles carrying labels from the UBMTC (denoted $\times$) map to punctures in the manifold when the system is represented by a TQFT.\label{fig:punctures}}
\end{figure}%
The state of the TQFT may be specified in terms of the outcome of a complete set of commuting Wilson loop operators, whose expectation values may be identified with the labels of the UBMTC. In a TQFT, a pair of Wilson loop operators which are topologically equivalent
necessarily constitute a measurement of the same observable. Furthermore, the outcome of a Wilson loop measurement which may be contracted to a point is necessarily trivial. Consequently, we may identify the expectation value of an appropriate Wilson loop operator (of specified orientation, to allow for charges which are not self-dual) with measurement of the total charge on the anyons, or punctures, which it encloses. Where degeneracies exist (i.e. $N_{ab}^c>1$ for some $a,b,c$), the different copies of a particular charge label in the UBMTC can be associated with different expectation values of the Wilson loop operator. We therefore see that we may map between the TQFT and the UBMTC fusion tree as follows:
First, perform a pairs-of-pants decomposition of the punctured 2D spatial manifold of the TQFT.
Then, take one specific pair of pants (or 3-punctured 2-sphere), declare that this 2-sphere has an inside and an outside, and specify which is which. Extend this definition of inside and outside consistently over all pairs of pants (this is always possible for a non-self-intersecting orientable 2-manifold embedded in $\mbb{R}^3$). Having done this, construct the fusion tree by drawing lines inside each pair of pants as shown in \fref{fig:pantsconstruction}.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{pantsconstruction}
\caption{Construction of a fusion tree graph from a pair of pants.\label{fig:pantsconstruction}}
\end{figure}%
Now associate a Wilson loop operator which measures charge with each opening of each pair of pants, up to topological equivalence. Specifically, where two pairs of pants connect together, we find two charge measurement operators which are topologically equivalent and so only one of these need be retained. Each line of the fusion tree graph now passes through exactly one Wilson loop, and we label the lines of the graph with the outcome of these charge measurements.
Finally, if there exist entries in the multiplicity tensor $N^c_{ab}$
which are greater than 1, then it is also necessary to associate a degeneracy index with the fusion vertices to enumerate these outcomes, which are also specified by the outcomes of the Wilson loop measurements.
So far, this fusion tree has been constructed in the space $\mathbb{R}^3$ in which the 2D spatial manifold is embedded. When representing this three-dimensional construction on paper, it is customary to employ a diagrammatic convention whereby the 2D manifold on which the punctures exist is mapped onto a plane perpendicular to the page, and whose projection onto that page forms a horizontal line at the top of the fusion tree diagram (for systems of anyons on the sphere, this is achieved by mapping that sphere first to the Riemann sphere, and then to the infinite plane). The vertical axis of fusion trees drawn in this way (e.g. \fref{fig:anyonstates_sphere}) may then be interpreted as a possible history whereby the present physical state may be obtained from the vacuum (i.e. a state with no punctures) in the 2+1D TQFT, with the lines of the fusion tree corresponding to world lines of the quasiparticles presently observed on the manifold. For this reason, a charge label $\mathbb{I}$ is typically placed at the bottom of the fusion tree diagram, representing the initial vacuum state.\footnote{We are free to do this because for any UBMTC there exists an identity charge, denoted $\mbb{I}$, such that fusion with this charge does not modify either the space of labels of a given fusion tree, or an individual labelling of this tree. We may therefore, on any diagram, freely insert and delete lines carrying only the trivial charge $\mbb{I}$ without modifying the state which this diagram represents.}
By adopting different pairs-of-pants decompositions of the spatial manifold of the TQFT, it is possible to recover all different fusion tree bases of the UBMTC. It is also possible to interchange the definitions of ``inside'' and ``outside'' when constructing the fusion tree from the pairs-of-pants decomposition, but for surfaces of genus 0 this has no effect on the basis obtained.
In \fref{fig:manifoldandtree} we give a simple example of the pairs-of-pants construction, showing the decomposition of a 6-punctured finite disc with trivial charge on the boundary which corresponds to the fusion tree of \fref{fig:anyonstates_sphere}(ii).
\begin{figure}
\includegraphics[width=246.0pt]{samplepants}
\caption{A sample pairs-of-pants decomposition for a 6-punctured finite disc: the manifold is decomposed into pairs of pants by cutting along
the dotted lines. When the charge associated with the boundary is $\mbb{I}$, the construction described in \protect{\sref{sec:diagstates_sphere}} yields the fusion tree of \protect{\fref{fig:anyonstates_sphere}}(i).\label{fig:manifoldandtree}}
\end{figure}%
We conclude this section with a few remarks about specific systems of genus 0:
First, to extend the pairs-of-pants construction to surfaces having fewer than three punctures, such as the 2-punctured 2-sphere, we note that fusion with the identity label $\mbb{I}$ is trivial. For such a system we may therefore freely introduce additional trivial punctures to obtain a single 3-punctured 2-sphere from which we construct the fusion tree. Similarly, lines carrying trivial charge may be freely added to or removed from any fusion tree diagram [e.g. to obtain \fref{fig:anyonstates_sphere}(iii) from \fref{fig:anyonstates_sphere}(i)].
Second, we note that there exists an important relationship between the sphere and the finite disc. While the infinite disc is topologically equivalent to the Riemann sphere, the finite disc may be treated as the Riemann sphere with a puncture at infinity. The edges of this puncture then constitute the edges of the disc. The charge associated with the edge of the disc is measured by a Wilson loop of the usual orientation enclosing this puncture on the Riemann sphere, or equivalently, one of reversed orientation enclosing all other punctures on the disc. When the charge associated with this puncture on the Riemann sphere is (and remains) trivial, we may delete the associated line from the fusion tree diagram, and therefore ignore the existence of the boundary when studying anyon behaviour on the finite disc. Third, we note that in the study of lattice models with $n$ sites, we may treat the system as always containing $n$ anyons at fixed locations, even if some of these anyons have trivial charge. The states of these systems can therefore always be represented by a fusion tree with $n$ leaves. The enumeration of the leaves of the fusion tree then corresponds to an enumeration of the lattice sites, and consequently for such a system it is not necessary to separately state the co-ordinates of the individual anyons.
\subsubsection{Inner product\label{sec:discinnerproduct}}
Next, we introduce the diagrammatic representation of the dual space and the inner product. In the diagrammatic representation of the space of states, the conjugation operation $^\dagger$ is implemented by vertically reflecting a fusion tree to obtain a \emph{splitting tree}, taking the complex conjugate of all fusion tree coefficients $c^{a_1\ldots a_{2n}}$, and reversing the direction of all arrows on the tree. In this paper, we will prefer lower indices for the coefficients of a splitting tree, e.g. $c'_{a_1\ldots a_{2n}}$.
The inner product of two diagrams is then performed by connecting the leaves of the fusion and the splitting tree, subject to the requirement that leaves which are connected represent anyons (or punctures) at the same location on the manifold, and that the charges of the connected leaves coincide. Where these conditions do not hold, the inner product of two diagrams is zero.
Recall that the fusion tree is a 2+1-dimensional structure projected onto a two-dimensional page, and thus when performing this connection, both trees must be represented in equivalent projections. Conversion between projections may be achieved by a sequence of appropriately oriented braids.
Assuming that the inner product has not yet been found to be zero, then once the trees have been connected, $F$ moves are performed, loops are eliminated according to the rule given in \fref{fig:innerprod_sphere}, and trivial punctures are removed, until the resulting diagram has been reduced to a number. This number is then the value of the inner product.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{innerprod_sphere}
\caption{Elimination of loops during evaluation of the inner product. The numerical factor given is appropriate to the diagrammatic isotopy convention.\label{fig:innerprod_sphere}}
\end{figure}%
Extension to states represented by a weighted sum over multiple labelled diagrams follows from bilinearity.
\subsubsection{Diagrammatic representation of operators\label{sec:discoperators}}
Now that we have presented the diagrammatic formulation for anyonic states and for the inner product, we are in a position to construct anyonic operators. Where these operators act on the entire system, the construction is trivial: the operator is constructed in the usual manner, as a sum over bras and kets:
\begin{equation}
\hat O = \sum_{i,j} O_{ij} |\psi_i\rangle\langle\psi_j|.\label{eq:braketoperator}
\end{equation}
For anyons the bra is replaced by a splitting tree, the ket by a fusion tree, and the coefficient bears indices corresponding to all labels on the splitting and fusion trees [e.g. \fref{fig:discoperators}(i)].
\begin{figure}[tp]
\includegraphics[width=246.0pt]{discoperators}
\caption{Examples of anyonic operators on the disc with $n$ punctures. (i)~A global operator, acting on the total Hilbert space of the system. (ii)~A local operator, acting on $r$ adjacent anyons.
(iii)~The braid operator.
(iv)~The periodic translation operator $\hat T^\mrm{D}$ for a ring of anyons on fixed lattice sites on the disc, closing away from the observer.\label{fig:discoperators}}
\end{figure}%
However, we may also wish to define operators which act only on a finite subregion of the disc. In the same way that the fusion tree specifies how all the anyons in the system may be obtained starting from the vacuum state, in an appropriate basis we may interpret a portion of the fusion tree as specifying how all the anyons within a physically localised subregion may be obtained from a single initial charge. For example, in \fref{fig:anyonstates_sphere}(i), we see that charges $a_1$, $a_2$ and $a_3$ are obtained by splitting an initial charge of $x_2$, and in diagram (ii), charges $a_3$ and $a_4$ are obtained from $\tilde x_2$. We require that our operators respect superselection rules associated with
the charge labels of the UBMTC, and consequently they cannot change this total charge, but their action within this region is otherwise unconstrained. A completely general local operator acting on $r$ adjacent sites on the disc may therefore be written in the form of \fref{fig:discoperators}(ii). As with the state of a system, the choice of fusion and splitting trees employed in this figure merely represents a choice of basis in which to represent the operator, and any alternative choice would have been equally valid. We also note that this construction includes the definition of a global operator on the disc, as the special case $r=n$.
Finally, we note that while any operator on the disc may be represented in the form of \fref{fig:discoperators}(ii), it may frequently be advantageous to represent certain special operators in other forms. Thus, for example, while the braid operator corresponding to the oriented exchange of a pair of anyons may be represented in the form of \fref{fig:discoperators}(ii) for $r=2$, it is usually more convenient to represent it in the form of diagram (iii), from which its unitarity is obvious by diagrammatic isotopy. Similarly, consider a ring of anyons occupying fixed lattice sites on the disc. Exploiting topological invariance, we may construct our fusion tree such that these lattice sites lie in a line at the top of the diagram, and the closure of the ring is implicit, being either towards or away from the observer. If, for definiteness, we assume that the ring closes away from the observer, then we may expediently represent the operator corresponding to periodic translation by one site using the diagram of \fref{fig:discoperators}(iv). Note that this operator may be constructed by composing a series of braids [\fref{fig:discoperators}(iii)], and also that it
respects the interpretation of the vertical axis as a fictional timeline for the creation of the state, as
the motions of the anyons under the action of this operator are strictly monotonic in time. When this operator is applied to a state, the resulting diagram then describes a process whereby particles are created, migrate to their initial lattice sites, and then
all move one site periodically around the lattice (\fref{fig:translatedisc}).
\begin{figure}
\includegraphics[width=246.0pt]{translatedisc}
\caption{Application of the translation operator $\hat T^\mrm{D}$ to a state of $n$ anyons on the disc. The untranslated state $|\psi\rangle$ is given in \fref{fig:anyonstates_sphere}(iv).\label{fig:translatedisc}}
\end{figure}%
\subsection{Anyons on surfaces of higher genus (\lowercase{e.g.} torus)\label{sec:genus1+}}
Having reviewed the diagrammatic formulation for systems of anyons on the disc or sphere, we will now extend this formulation to surfaces of higher genus, by exploiting the association between modular anyon models and 2+1D TQFTs.
\subsubsection{Diagrammatic representation of states\label{sec:torusfusiontree}}
Extension to surfaces of higher genus is achieved by means of manifold surgery, performed on the punctured manifold inhabited by the 2+1D TQFT. We will be particularly interested in a specific example, the $n$-punctured torus, but the techniques which we will develop are entirely general and thus may be applied to construct diagrammatic representations for states of anyonic systems on surfaces of arbitrary genus.
We begin by noting that the
torus may be constructed from the
sphere by introducing punctures at the north and south poles, then distorting the sphere so that the puncture at the north pole descends vertically, and the puncture at the south pole rises vertically. When these punctures come into contact, they are sutured (\fref{fig:maketorus}).
\begin{figure}
\includegraphics[width=246.0pt]{maketorus}
\caption{Construction of the torus from the sphere by introducing two punctures, deforming the resulting punctured sphere by migrating the punctures towards the centre, and suturing.\label{fig:maketorus}}
\end{figure}%
Now, we wish to repeat this process for a manifold on which there exists a TQFT. We recognise that
through the use of Wilson loop operators, charge labels may be
associated with the punctures $a_\mrm{N}$ and $a_\mrm{S}$ at the north and south poles of the sphere respectively. On the sphere, prior to performing the suturing, these observables are independent. On the torus, after suturing, they are topologically equivalent up to a reversal in orientation. Importantly, these observables may be computed purely from the fields on the path of the loop itself, and thus their calculation proceeds identically whether or not the punctures are sutured. From this we infer two important results. First, suturing of these punctures only yields a consistent TQFT on the torus if the values of all Wilson loop observables on the north puncture are the duals of the same observables evaluated on the south puncture. Second, the space of states for the TQFT on the torus is isomorphic to the space of states on the 2-punctured sphere subject to this constraint. In \fref{fig:Pt} we see the operator $\hat P_\mrm{T}$ which projects from the Hilbert space of the \textit{n}+2-punctured sphere to a reduced Hilbert space isomorphic to the Hilbert space of the $n$-punctured torus.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{PT}
\caption{Operator $\hat P_\mrm{T}$ (in the diagrammatic isotopy convention).\label{fig:Pt}}
\end{figure}%
If we now describe the Hilbert space on the \textit{n}+2-punctured sphere in terms of a fusion tree in the region of $\mbb{R}^3$ colloquially described as ``outside'' the sphere (i.e. extending from the surface of the unpunctured sphere to infinity), then we may use the surgical procedure described to construct a fusion tree for the $n$-punctured torus. Bringing together and suturing the punctures at the north and south poles corresponds to bringing together the equivalent branches of the fusion tree to form a loop.
It is important to note that when constructing the torus from the sphere by means of the surgery procedure described, one necessarily obtains a fusion tree in the region which is again ``outside'' the torus. This is because the branches of the fusion tree on the sphere which terminate in the north pole and south pole punctures must close to form a non-trivial cycle around the torus, and this can only occur if the fusion tree on the sphere inhabits the ``outside'' space. A fusion tree ``inside'' the torus may be obtained by the alternative procedure of first constructing a fusion tree ``inside'' the sphere, lengthening the sphere into an open cylinder with the polar punctures at its ends, and then bending this cylinder around into a loop and suturing (\fref{fig:spheretotorus2}).
\begin{figure}
\includegraphics[width=246.0pt]{maketorus2}
\caption{Alternative procedure to construct a torus from the 2-punctured sphere. The procedure presented here is
compatible with a fusion tree constructed ``inside'' the torus, whereas the procedure presented in \protect{\fref{fig:maketorus}} is
compatible with a fusion tree constructed ``outside'' the torus.\label{fig:spheretotorus2}}
\end{figure}%
Given the existence of this relationship between anyon models on surfaces of higher genus and anyon models on the sphere, we see that on surfaces of higher genus we may employ the pairs-of-pants decomposition approach to construct a fusion tree basis in precisely the same way as we did for the sphere.
Some example fusion trees for the $n$-punctured torus are shown in \fref{fig:torusstates}(i)-(ii).
\begin{figure}[tp]
\includegraphics[width=246.0pt]{torusstates}
\caption{(Color online.) (i)-(ii)~Fusion trees for systems of $n$ anyons on the torus. The corresponding bases are related by means of a series of $F$ moves, and will be denoted $B_1$ and $B_2$ respectively.
In fusion tree diagrams on the torus, we will often place a $*$ inside loops which correspond to a non-trivial cycle on the torus, to remind the reader that these loops constitute an important part of the description of the state and cannot be eliminated using \protect{\fref{fig:innerprod_sphere}}. That is, the $*$ signifies the presence of a topological obstruction. Note that for a given state, the value of $x_n$ is unaffected by changing between bases $B_1$ and $B_2$, and this is reflected in the labelling of the diagram.
(iii)~Fusion tree for the unpunctured torus. (iv)~Measurements $\hat W_a$ and $\hat W_b$ are associated with non-trivial cycles on the torus.
\label{fig:torusstates}}
\end{figure}%
The fusion tree for the unpunctured torus is given in \fref{fig:torusstates}(iii), and may be obtained using the pairs-of-pants approach by introducing a trivial puncture on the torus, constructing the fusion tree (where the puncture with the inward arrow in \fref{fig:pantsconstruction} is sutured to one of the punctures with an outward arrow), and then deleting the line carrying charge $\mbb{I}$ which is associated with the trivial puncture.
Again in these examples, each line on the fusion tree diagram may be associated with a particular Wilson loop operator in the TQFT. However, on this occasion in addition to the measurements associated with punctures on the surface of the torus, there are also two measurements associated with the non-trivial cycles of the torus [\fref{fig:torusstates}(iv)], and in a given basis, only one of these will encircle a line of the fusion tree. For example, consider the torus with no punctures. The fusion tree may be constructed either ``outside'' the torus (in the region of $\mbb{R}^3$ which extends to infinity), or ``inside'' the torus (in the region of $\mbb{R}^3$ which does not extend to infinity).
The labellings of these two fusion trees both constitute a basis of states, and they are related by means of the topological $S$ matrix,
\begin{align}
S_{ab}&=\frac{1}{\mc{D}}~\raisebox{-13.5pt}{\includegraphics{Smatrixloops}}\label{eq:Smatrix}\\
\rule{0pt}{22pt}\mc{D}&=\sqrt{\sum_a d_a^2},
\end{align}
according to
\begin{equation}
\raisebox{-12pt}{\includegraphics[width=220.0pt]{Srelate}.}\label{eq:Srelate}
\end{equation}
(Note that an anyon model can consequently be consistently defined on the torus iff the topological $S$ matrix is unitary. This property is the defining characteristic of a modular anyon model.)
In one of these bases, the fusion tree is encircled by Wilson loop operator $\hat W_A$ of \fref{fig:torusstates}(iv), and in the other basis, by operator $\hat W_B$. Thus by describing a state in one of these bases, we specify the probability amplitudes for the outcomes of measurements around both non-trivial cycles of the torus.
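For illustration, the Fibonacci model has total quantum dimension $\mc{D}=\sqrt{1+\phi^2}=\sqrt{2+\phi}$ and topological $S$ matrix
\begin{equation}
S=\frac{1}{\sqrt{2+\phi}}\begin{pmatrix}1&\phi\\\phi&-1\end{pmatrix}
\end{equation}
in the basis $\{\mbb{I},\tau\}$. This matrix is real and symmetric and squares to the identity (using $\phi^2=\phi+1$), and is therefore unitary; the Fibonacci model is thus modular, and may be consistently defined on the torus.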
We note that for the torus without punctures, the fusion tree admits as many different labellings as there are species of anyons in the model.
When the fusion rules of the UBMTC correspond to a group $\mc{G}$, the number of labels corresponds to the number of elements in the group, $|\mc{G}|$, and we shall employ this notation even when the fusion rules do not form a group.
The Hilbert space of the unpunctured torus is thus $|\mc{G}|$-dimensional.
As an example consider Kitaev's toric code, which is commonly understood as exhibiting independent electric and magnetic charges, and has fusion rules corresponding to the group $\mbb{Z}_2\otimes\mbb{Z}_2$.
In the present terminology, we identify each element of $\mbb{Z}_2\otimes\mbb{Z}_2$ as a separate charge, which we will denote $\mbb{I}$, $e$, $m$, and $em$. In the language of electric and magnetic charges, $\mbb{I}$ is the uncharged vacuum state, $e$ corresponds to the presence of an electric charge, $m$ to a magnetic charge, and $em$ corresponds to the presence of both.
For the toric code, all states without punctures are ground states, and thus on the torus the ground state subspace has dimension 4, or equivalently we may say that the ground state on the torus is 4-fold degenerate. Similarly, for the toric code the dimension of the Hilbert space of states on an unpunctured manifold of genus $g$ can easily be seen to be $|\mc{G}|^g$, reproducing the well-known ground state degeneracy of $4^g$ on a surface of genus $g$.
Finally, we draw attention to charge $\tilde x_{n-1}$ in \fref{fig:torusstates}(ii). Due to the presence of the topological obstruction denoted by $*$, the loop in this fusion tree is not subject to the delta-function constraints of \fref{fig:innerprod_sphere} which prohibit the existence of tadpole diagrams on the disc. On the torus, charge $\tilde x_{n-1}$ is not constrained to be $\mbb{I}$.
\subsubsection{Inner product\label{sec:torusinnerprod}}
We now introduce a process for computing the inner product on the torus, which is derived from the inner product on the sphere by means of the process of manifold surgery described in \sref{sec:torusfusiontree}.
This construction generalises immediately to all orientable non-self-intersecting surfaces of higher genus.
Consider the inner product $\langle\psi'^\mrm{T}|\psi^\mrm{T}\rangle$ between two states $|\psi^\mrm{T}\rangle$ and $|\psi'^\mrm{T}\rangle$ on the torus. For each state in turn we
reverse
the construction given in \sref{sec:torusfusiontree}, cutting the
torus so that it is transformed into a surface isomorphic to the sphere with punctures at north and south poles, and then mapping each state $|\psi^\mrm{T}\rangle$, $|\psi'^\mrm{T}\rangle$ on the torus to an equivalent state $|\psi^\mrm{D}\rangle$, $|\psi'^\mrm{D}\rangle$ lying within the support of $\hat P_\mrm{T}$ on the disc.
As a notation convention, superscripts of T and D in this paper will be used to indicate that a particular state or operator lives on the torus or sphere/disc respectively. In contrast the $_\mrm{T}$ on $\hat P_\mrm{T}$ is written in subscript, and so is just part of the name we have chosen for this operator and does not denote the topology of the manifold on which the operator exists.
The inner product between two states on the torus is now simply taken to be the inner product between the two equivalent states on the disc,
\begin{equation}
\langle\psi'^\mrm{T}|\psi^\mrm{T}\rangle = \langle\psi'^\mrm{D}|\psi^\mrm{D}\rangle.\label{eq:torusIP}
\end{equation}
We may summarise the computation of the inner product on the torus as follows: First, the fusion and splitting trees are connected at their leaves, as described for the sphere, and any mismatch between charges results in an inner product of zero. If the inner product has not yet been found to be zero, then $F$ moves and \fref{fig:innerprod_sphere} are applied repeatedly until the diagram is reduced to a sum of terms having the form shown in \fref{fig:innerprod_torus}.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{innerprod_torus}
\caption{Evaluation of the inner product on the singly-punctured torus, in the diagrammatic isotopy convention. (i)~Opposition of torus fusion and splitting trees. (ii)~Equivalent diagram on the sphere. (iii)~Numerical value. Note that in the diagrammatic isotopy convention, mapping states on the torus to states on the sphere effectively amounts to the removal of two vertices, along with their associated numerical factors. This introduces a factor of $\sqrt{d_{x_1}d_{x'_1}}$ in step (ii).
\label{fig:innerprod_torus}}
\end{figure}%
These are then evaluated as shown, to obtain the
value of
the inner product.
It is instructive to compare this formulation of the inner product with that presented in Appendix~A of \rcite{konig2010}. The formulation of the inner product introduced by \citeauthor{konig2010} similarly guarantees that the physically permissible unique labellings of the fusion tree of the punctured torus yield an orthogonal basis for the Hilbert space, and differs only in the normalisation factors which must be associated with some of the diagrams
(see \tref{tab:torusnorms}).
\begin{table}[bp]
\caption{Inner products of unnormalised diagrams on the 1-punctured torus, for Fibonacci anyon statistics. Labels $a_1$, $x_1$, and $x'_1$ refer to diagram~(i) of \protect{\fref{fig:innerprod_torus}}, and $a'_1$ is set equal to $a_1$.
All inner products not listed below are zero in both conventions.\label{tab:torusnorms}}~\\
\begin{tabular}{|c|c|c|}
\hline
$x_1$, $a_1$, $x'_1$ & Convention of \protect{\sref{sec:torusinnerprod}} & Convention of \protect{\rcite{konig2010}} \\
\hline
$\mbb{I}$, $\mbb{I}$, $\mbb{I}$ & 1 & 1 \\
$\tau$, $\mbb{I}$, $\tau$ & $\phi^2$ & 1 \\
$\tau$, $\tau$, $\tau$ & $\phi^{5/2}$ & $\sqrt{\phi}$\\\hline
\end{tabular}
\end{table}%
\subsubsection{Operators on surfaces of higher genus\label{sec:torusoperators}}
As with the sphere, we will now address the diagrammatic representation of operators on surfaces of higher genus. We will begin with a general discussion, and once again will examine explicit examples on the torus.
On a surface of genus $g$, operators may correspond to physical processes acting either on the entire manifold, or on a finite subregion of the manifold. Where operators act on the entire manifold, a completely general construction may once again be achieved by replacing the bras and kets of \Eref{eq:braketoperator} with fusion and splitting tree diagrams for states of the appropriate genus.
However, for operators acting on a finite subregion of the manifold the situation may be simplified somewhat. Momentarily neglecting the existence of punctures on the physical manifold, we examine the topology of the area of support of the operator. If this area of support now lies entirely within a submanifold of genus $g'<g$,
then
the operator may be represented using fusion and splitting trees of genus $g'$.
For example, if we consider an operator on the torus whose support lies within a region which is (again momentarily ignoring any anyons within it) topologically the unpunctured disc, then by locality we need only consider the portion of the fusion tree corresponding to any anyons which do lie within that disc. We then choose a basis where this portion of the tree connects to the rest of the fusion tree via only a single line, such that if we excised this disc from the manifold as a whole, that line would describe the charge on the boundary of the disc. We may now represent the operator in the form of an operator on the disc as described in \sref{sec:discoperators}, and apply it by connecting it with the relevant portion of the fusion tree, as shown in the example of \fref{fig:discoperatorontorus}. We will, however, offer one word of warning: If the operator shown in \fref{fig:discoperatorontorus} were to be applied to anyons $a_6$ and $a_1$, then transformation from the basis shown into one in which $a_6$ and $a_1$ were adjacent would require the application of the periodic translation operator, which is a non-local operator and will be discussed in \sref{sec:torustranslate}.
Extension of this approach to surfaces and operators of higher genus is straightforward.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{discoperatorontorus}
\caption{An operator $\hat O^\mrm{T}$ acts on a region of the 6-punctured torus which contains two anyons, and which, if these anyons were not present, would be topologically an unpunctured disc. The above diagrams represent an expression of the form $|\psi^{\prime\mrm{T}}\rangle=\hat O^\mrm{T}|\psi^\mrm{T}\rangle$, and the basis on the torus has been chosen for convenience.\label{fig:discoperatorontorus}}
\end{figure}%
There exists one further observation to be made with respect to surfaces of higher genus. Much as the torus admits operators of genus 0 and genus 1, a surface of genus $g$ will admit operators whose support is a region of genus $g'$, for any $g'\leq g$.
We are not aware of any notation convention for the description of such operators, and on surfaces having genus higher than 1, there is the potential for ambiguity as a given operator diagram of genus $g'$ may refer to any of $\left(\begin{array}{c}g\\g'\end{array}\right)$ different physical processes. We suggest it may be appropriate to distinguish the different non-trivial cycles using different symbols in the manner of the $*$ in Figs.~\ref{fig:torusstates}--\ref{fig:discoperatorontorus}, but we do not develop such a system formally at this time, and note that ambiguity can always be avoided by writing such an operator in terms of full fusion and splitting trees on the surface of genus $g$.
\subsubsection{Periodic translation on the torus\label{sec:torustranslate}}
We now consider a specific example system which will be of interest in \sref{sec:PBC}.
Suppose we have a system of anyons on a torus, arranged on a periodic lattice such that this lattice encircles either the large or the small non-trivial cycle of the torus.\footnote{In describing the non-trivial cycles as ``large'' and ``small'', we assume the torus to have a circular cross-section with respect to a radial cut. More accurately, we term a non-trivial cycle ``large'' if the torus may be smoothly deformed, without self-intersections and without extending to infinity, to be radially symmetric about an axis which is encircled by this non-trivial cycle.}
As these two situations are topologically equivalent, we
choose it to encircle specifically the large non-trivial cycle with no loss of generality.\footnote{We do not consider rings which twist around both cycles of the torus.
} How can we, in the diagrammatic notation, most efficiently represent the process of simultaneously translating each anyon around the torus by one site?
If we construct our fusion tree ``inside'' the torus, advance each anyon one site around the torus, and project onto the page, then the periodic translation operator is seen to act on the torus as shown in \fref{fig:torustranslate}.
\begin{figure*}
\includegraphics[width=492.0pt]{torustranslation}
\caption{The translation operator $\hat T^\mrm{T}$ on the torus, (i)~represented as an operator in the bra-ket form of \protect{\Eref{eq:braketoperator}}, and (ii)~represented as a mapping between a state $|\psi\rangle$ and the translated state $|\psi'\rangle=\hat T^\mathrm{T}|\psi\rangle$.
The presence of the star in the braided portion of the diagram indicates that the periodic translation passes around the same non-trivial cycle on the torus as the loop labelled $x_n$.
\label{fig:torustranslate}}
\end{figure*}%
We may therefore write the torus translation operator simply as
\begin{equation}
\hat T^\mrm{T} =~\raisebox{-12pt}{\includegraphics{translopstar}}{}\,.\label{eq:T^T}
\end{equation}
Note the presence of the star in the diagrammatic representation of the translation operator, which indicates that the periodic translation passes around a non-trivial cycle on the torus, as opposed to merely cyclically permuting the anyons locally on a disc-like region of the torus-shaped manifold.
Implementation of cyclic permutation on the torus in the ``inside'' basis poses an interesting challenge. In contrast with the cyclic translation operator on the disc [$\hat T^\mrm{D}$, \fref{fig:discoperators}(iv)], the operator $\hat T^\mrm{T}$ cannot be constructed from
local operations by composing a series of braids. Instead we must introduce another new operator, given in \fref{fig:spintranslate}, which we will call the \emph{modified} translation operator, $\hat T^\mrm{T}_\mrm{M}$.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{spintranslate}
\caption{Definition of an operator $\hat T^T_\mathrm{M}$ on the torus, which cyclically permutes the degrees of freedom $a_1,\ldots,a_n$ and $x_1,\ldots,x_n$ in basis $B_1$.
\label{fig:spintranslate}}
\end{figure}%
By diagrammatic isotopy, we see that the action of this operator $\hat T^\mrm{T}_\mrm{M}$ in basis $B_1$ is to cyclically permute the degrees of freedom $a_1,\ldots,a_n$ and $x_1,\ldots,x_n$. We may further
use diagrammatic isotopy to redraw $\hat T^\mrm{T}_\mrm{M}$ in the form
\begin{equation}
\hat T^\mrm{T}_\mrm{M}=\,\raisebox{-12pt}{\includegraphics{magtranslopstar}}{}\,,\label{eq:magtransdiag}
\end{equation}
and this
may be rewritten using \fref{fig:basischange}(ii) as
\begin{equation}
\hat T^\mrm{T}_\mrm{M}=\left(R^{a_n\overline{a_n}}_{\mbb{I}}\right)^{-1}~\raisebox{-12pt}{\includegraphics{translopstar}}{}\,.\label{eq:magtrans}
\end{equation}
Comparing with \Eref{eq:T^T}, we see that $\hat T^\mrm{T}$ and $\hat T^\mrm{T}_\mrm{M}$ differ only by the charge-dependent phase $R^{a_n\overline{a_n}}_{\mbb{I}}$. We therefore conclude that the application of the periodic translation operator on the torus to a labelled fusion tree in basis $B_1$ constructed ``inside'' the torus is equivalent to multiplication by $R^{a_n\overline{a_n}}_{\mbb{I}}$ followed by cyclic permutation of all anyon indices $a_1,\ldots,a_n$ and internal indices $x_1,\ldots,x_n$.
To construct a diagrammatic representation of the periodic translation operator in an ``outside'' basis, we will proceed somewhat differently. This time, let us begin with a state written in basis $B_2$ and constructed in the region ``outside'' the torus.
First, we map this state
to the \textit{n}+2-punctured sphere. We then perform a further mapping of this sphere to the infinite plane, to obtain the situation depicted in \fref{fig:planetopview}(i) where the arrangement of punctures is shown on the plane of the page.
\begin{figure}
\includegraphics[width=246.0pt]{planetopview}
\caption{(i)~Aerial view of $n+2$ punctures on the infinite plane equivalent to $n$ punctures in a non-trivial ring on the torus. The fusion tree on the plane imposes a linearisation on these punctures, and we may choose this to be as given by the black line. (ii)~The corresponding fusion tree. We may assume this fusion tree to inhabit the curved plane obtained by extending the black line of diagram~(i) into the plane of the page. The grey arrows in (i) indicate the process of periodic translation of the anyons on the ring. Note that during the process of translation, one anyon crosses the plane of the fusion tree while passing between the punctures $a_N$ and $a_S$. This is reflected in the periodic translation operator, labelled (iii). Application of the operator (iii) to states in the form of fusion tree (ii) yields states expressed in the fusion tree basis of diagram (iv).
\label{fig:planetopview}}
\end{figure}%
Introducing a fusion tree for this arrangement of punctures,
as shown in \fref{fig:planetopview}(ii), it is easy to construct the appropriate translation operator on the infinite plane [\fref{fig:planetopview}(iii)]. This operator maps states in the basis of \fref{fig:planetopview}(ii) to states in the basis of \fref{fig:planetopview}(iv), in which
the anyon $a_n$ is explicitly braided around the south polar puncture. The equivalent operator on the torus in basis $B_2$ is given in \fref{fig:torustranslation_throughloop}, where anyon $a_n$ is seen to braid \emph{through} the loop which carries the flux through the torus.
\begin{figure}
\includegraphics[width=246.0pt]{torustranslation_throughloop}
\caption{Periodic translation operator around the larger non-trivial cycle of the torus, expressed in basis $B_2$ constructed in the ``outside'' space, and represented as a mapping between a state $|\psi\rangle$ and the translated state $|\psi'\rangle=\hat T^\mathrm{T}|\psi\rangle$.
\label{fig:torustranslation_throughloop}}
\end{figure}%
This may also be intuitively understood by explicitly constructing the fusion tree in the ``outside'' space, as shown in \fref{fig:torustranslation_intuit}, and observing that during periodic translation of the punctures, one anyon is necessarily threaded through the loop of the fusion tree.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{torustranslation_intuit}
\caption{In this diagram we see the process of periodic translation represented schematically on the actual toroidal manifold, with the fusion tree visible in the ``outside'' region of $\mbb{R}^3$. It is seen that, in bases constructed in the ``outside'' space, periodic translation on the torus threads an anyon through the non-trivial loop of the fusion tree.\label{fig:torustranslation_intuit}}
\end{figure}%
Interestingly, and in contrast with bases constructed ``inside'' the torus, the translation operator for a basis ``outside'' the torus can be implemented entirely in terms of local operations once the state has been mapped to the equivalent sphere. This approach cannot be applied to ``inside'' bases, as reversing the surgery process given in \fref{fig:spheretotorus2}
involves cutting the ring of anyons, and this leaves the process of periodic translation on the equivalent sphere undefined.
Once again, notice that in either basis, the translation operator on the torus respects the arrow of time of the associated 2+1D TQFT: In Figs.~\ref{fig:torustranslate} and \ref{fig:torustranslation_throughloop} the trajectories of the punctures are all monotonic in the vertical direction.
\subsubsection{Topological symmetry operators\label{sec:Yoperators}}
Finally, we will
find it useful to introduce one more class of operator on the torus which admits a special graphical representation. Consider now a torus with a ring of punctures around the large non-trivial cycle. These operators, which we will denote $\hat Y_b^\mrm{T}$, describe a process whereby a pair of anyons carrying charges $b$ and $\bar b$ are created from the vacuum, travel around opposite sides of a non-trivial cycle on the torus coplanar with the ring of punctures and without braiding, and then annihilate back to the vacuum.
Expressed as a map from a state $|\psi\rangle$ to a state $|\psi'\rangle$ where $|\psi\rangle$ and $|\psi'\rangle$ are written in a fusion tree basis in the ``inside'' space, an operator
$\hat Y^\mrm{T}_b$ may be written as shown in \fref{fig:Yb}(i).
\begin{figure*}
\includegraphics[width=492.0pt]{Yb}
\caption{Operator $\hat Y^\mrm{T}_b$ acts on a state on the torus. In (i), the anyon pair $b$ and $\bar b$ travel around the large non-trivial cycle, and the fusion tree is constructed in the ``inside'' space. In (ii), the anyon pair still travel around the large non-trivial cycle, but the fusion tree is constructed in the ``outside'' space.\label{fig:Yb}}
\end{figure*}%
If we now re-express the state $|\psi\rangle$ in a basis constructed ``outside'' the torus using \Eref{eq:Srelate}, operator $\hat Y^\mrm{T}_b$ then takes the form shown in \fref{fig:Yb}(ii).
Using the identity
\begin{equation}
\raisebox{-20pt}{\includegraphics{Sunlink}}\label{eq:Sunlink}
\end{equation}
we can see from \fref{fig:Yb}(ii) that $\hat Y^\mrm{T}_b$ will have eigenvalues $S_{bx_n}/S_{b\mbb{I}}$ in the physical portion of the Hilbert space, and a state will be an eigenvector of $\hat Y^\mrm{T}_b$ iff it is not in a superposition over label $x_n$.
In the present notation, the $\hat Y$ operator employed in \rcite{feiguin2007} would be denoted $\hat Y^\mrm{T}_\tau$ and is constructed in the
``inside'' basis, as per \fref{fig:Yb}(i).
\section{Periodic boundary conditions\label{sec:PBC}}
In this section we will consider periodic chains of anyons first on the torus (\sref{sec:PBCtorus}) and then on the disc (\sref{sec:PBCdisc}). We further specialise to models in which every site in the chain carries a fixed, identical charge, and consequently in this section, and also in the next,
we set $a_1=a_2=\ldots=a_n$. For each topology (torus and disc) we will introduce a translation-invariant local Hamiltonian written as a sum of local terms, and take as a specific example the AFM nearest neighbour interaction for a chain of Fibonacci anyons.
We assume that an operator which is local acts only on a disc-like subregion of the manifold (i.e. if it is an operator acting on the torus, it does not include any non-trivial cycles). Consequently such an operator may be written in terms of
a fusion tree
defined on the disc, as per \fref{fig:discoperators}(ii).
To express the states of our system, we shall use the basis given in \fref{fig:anyonstates_sphere}(iv) on the disc, and basis $B_1$ of \fref{fig:torusstates}(i) on the torus. The ring of punctures will be taken as encircling the large non-trivial cycle of the torus, and the fusion tree is constructed in the ``inside'' space.
For the Fibonacci AFM interaction, which is a nearest neighbour interaction, all terms of the Hamiltonian take the form of \fref{fig:discoperators}(ii) for $r=2$.
For clarity, from this point forwards we shall only provide explicit treatments for nearest neighbour Hamiltonians, though most of the arguments and techniques presented readily generalise to $r>2$.
\subsection{Mapping to spins\label{sec:spinmapping}}
To study these systems, we will employ the technique described in Ref.~\olcite{feiguin2007}, whereby the degrees of freedom for a one-dimensional system of anyons with fixed charges and nondegenerate fusion rules may be
mapped to a spin chain. In the bases of \fref{fig:anyonstates_sphere}(iv) and \fref{fig:torusstates}(i) (basis $B_1$), for a system of $n$ anyons the Hilbert space of the system is spanned by the $p$ free parameters of the fusion tree, $x_1\ldots x_p$, where $p=n-3$ on the disc and $p=n$
on the torus, and these $p$ free parameters may be identified with a spin chain of local dimension $|\mc{G}|$.
Because the Hilbert space of the spin chain is larger than the Hilbert space of the anyon chain, the Hilbert space of the spin chain is restricted to admit only those states which correspond to valid fusion trees under the anyonic fusion rules.
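As a concrete example, consider the golden chain of \rcite{feiguin2007}: a ring of $n$ anyons on the torus with $a_i=\tau$ for all $i$. Since $\mbb{I}\times\tau\rightarrow\tau$, a label $x_i=\mbb{I}$ forces $x_{i+1}=\tau$, and so the valid labellings in basis $B_1$ are precisely the cyclic strings $x_1\ldots x_n\in\{\mbb{I},\tau\}^n$ containing no two adjacent $\mbb{I}$ labels. Counting these labellings with a transfer matrix gives
\begin{equation}
\dim\mc{H}=\mrm{Tr}\begin{pmatrix}0&1\\1&1\end{pmatrix}^{n}=\phi^{n}+(-\phi)^{-n},
\end{equation}
which grows as $\phi^n$, in keeping with the quantum dimension $d_\tau=\phi$ of the Fibonacci anyon.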
\subsection{Periodic translation on a chain of spins\label{sec:spintranslate}}
We note that there exists a special relationship between the process of periodic translation on a chain of spins and periodic translation on a ring of anyons encircling a non-trivial cycle of the torus. Under the mapping of \sref{sec:spinmapping}, each degree of freedom $x_1,\ldots,x_n$ on the torus is mapped to a site on the spin chain, and periodic translation on the system of spins, which we will denote $\hat T^\mrm{S}$, cyclically permutes these labels by one place.
For a state satisfying $a_1=a_2=\ldots=a_n$,
an equivalent effect may be obtained for the fusion diagram of the torus by applying the operator $\hat T^\mrm{T}_\mrm{M}$ discussed in \sref{sec:torustranslate}.
Furthermore, for fixed anyon charges $a_1,\ldots,a_n$,
the factor $\left(R^{a_n\overline{a_n}}_\mbb{I}\right)^{-1}$ in \Eref{eq:magtrans} is a constant.
Consequently,
for a ring of fixed, identical anyons on the torus,
periodic translation
of the ring of anyons is equivalent up to a phase to periodic translation on the associated spin chain, and
for Fibonacci anyons this phase is given\footnote{There are in fact four different UBMTC exhibiting Fibonacci statistics, of which two satisfy $R^{\tau\tau}_\mbb{I} = e^{4\pi\mathrm{i}/5}$ and the other two satisfy $R^{\tau\tau}_\mbb{I} = e^{-4\pi\mathrm{i}/5}$. In this paper, for definiteness, we shall take $R^{\tau\tau}_\mbb{I} = e^{4\pi\mathrm{i}/5}$. In studying a physical system described by one single Fibonacci UBMTC
we introduce an implicit breaking of symmetry,
and this permits the nondegenerate ground state to take on a non-zero momentum in \protect{\sref{sec:PBCtorus}}. See also Appendix~\protect{\ref{apdx:chiral}}.} by $R^{\tau\tau}_\mbb{I}=e^{4\pi\mathrm{i}/5}$.
For any operator $\hat O$ on such a chain, we therefore have the identity
\begin{equation}
\hat T^\mrm{T}\hat O\,\hat T^{\mrm{T}\dagger} = \hat T^\mrm{T}_\mrm{M}\,\hat O\,\hat T^{\mrm{T}\dagger}_\mrm{M} = \hat T^\mrm{S}\hat O\,\hat T^{\mrm{S}\dagger}.\label{eq:equivtrans}
\end{equation}
Some care
is still required in the computation of momenta of translation-covariant states, as
when translation by one site on the torus introduces a phase of $e^{\mathrm{i}\theta^\mrm{T}}$, translation of the equivalent state by one site on the spin chain will introduce a phase of $e^{\mathrm{i}\theta^\mrm{S}}=\left(R^{a_n\overline{a_n}}_\mbb{I}\right)^{-1}e^{\mathrm{i}\theta^\mrm{T}}$.
In previous work\cite{feiguin2007,trebst2008} this complication has been avoided by following a convention
whereby momenta are computed relative to the ground state.
In the present paper, however, we will retain the full phase
shift for didactic clarity.
\subsection{Hamiltonian with periodic boundary conditions on the torus\label{sec:PBCtorus}}
We now introduce a translation-invariant anyonic Hamiltonian, $\hat H^\mrm{A,P,T}$. The superscripts $\mrm{A}$, $\mrm{P}$, and $\mrm{T}$ indicate that the Hamiltonian is anyonic, periodic, and constructed on
the torus respectively.
For $r=2$, we may write
\begin{equation}
\begin{split}
\hat H^\mrm{A,P,T}
&= \sum_{i=0}^{n-1} \left(\hat T^\mrm{T}\right)^i\left(\hat h^\mrm{A}_{1,2}\right)\left(\hat T^{\mrm{T}\dagger}\right)^i\\
&= \sum_{i=1}^n \hat h^\mrm{A}_{i,i+1}
\end{split}\label{eq:H^APT}
\end{equation}
where local operator $\hat h^\mrm{A}_{i,i+1}$ acts on lattice sites $i$ and $i+1$, and takes the form of \fref{fig:localoperators}. Unless otherwise stated, the evaluation of position indices such as $i+1$ is assumed to be periodic in the range $1\ldots n$, so (for example) site $n+1$ is identified with site 1.
\begin{figure}[tp]
\includegraphics[width=246.0pt]{localoperators}
\caption{Form of the two-site local operator used as a term in the local Hamiltonians $\hat H^\mrm{A,P,T}$ \protect{\eref{eq:H^APT} and $\hat H^\mrm{A,P,D}$ \eref{eq:H^APD}}. Charges $a_i$, $a'_i$, $a_{i+1}$, and $a'_{i+1}$ are assumed to be fixed.
\label{fig:localoperators}}
\end{figure}%
As a specific example, we will consider the AFM interaction on the golden chain, for which all anyons $a_i$ on the lattice are constrained to have charge $\tau$.
Because the charges $a_i$, $a_{i+1}$, $a'_i$, and $a'_{i+1}$
are fixed and there are no degeneracy indices, we may denote the elements of $\hat h^\mrm{A}_{i,i+1}$ by $\left(h^\mrm{A}_{i,i+1}\right)_x$ where $x$ corresponds to the fusion product of the Fibonacci anyons on sites $i$ and $i+1$. The AFM Hamiltonian favours the fusion path $\tau\times\tau\rightarrow 1$, and we therefore assign $\left(h^\mrm{A}_{i,i+1}\right)_1=-1$ and $\left(h^\mrm{A}_{i,i+1}\right)_\tau=0$.
As demonstrated by \citeauthor{feiguin2007},\cite{feiguin2007} for the golden chain we may construct a three-body operator $\hat h^\mrm{S}$ on the spin chain whose action is locally equivalent to the two-body operator $\hat h^\mrm{A}$ on the system of anyons. To do so we introduce the spin chain equivalent of applying an $F$ move at sites $i$ and $i+1$ of basis $B_1$, for $i<n$:
\begin{equation}
\begin{split}
&\hat F^\mrm{S}_{i-1,i,i+1}|x_{i-1}x_{i}x_{i+1}\rangle\\
&\quad=\sum_{\tilde x_{i}} \left(F^{a_{i}a_{i+1}x_{i+1}}_{x_{i-1}}\right)_{x_{i}\tilde x_{i}} |x_{i-1}\tilde x_{i} x_{i+1}\rangle\label{eq:defineF},
\end{split}
\end{equation}
where $^\mrm{S}$ indicates an operator acting on the spin chain. We then write
\begin{equation}
\hat h^\mrm{S}_{i-1,i,i+1} = \left(\hat F^\mrm{S}_{i-1,i,i+1}\right)^\dagger\,\hat h^{\prime\mrm{S}}_{\p{\prime}i}\,\hat F^\mrm{S}_{i-1,i,i+1},
\end{equation}
where
\begin{equation}
\hat h^{\prime\mrm{S}}_{\p{\prime}i}|\ldots,x_{i},\ldots\rangle = \left(h^\mrm{A}_{i,i+1}\right)_{x_{i}}|\ldots,x_{i},\ldots\rangle.
\end{equation}
For a chain of Fibonacci anyons, $a_i=\tau$ for all values of $i$, and the above construction yields $\hat h^\mrm{S}_{i-1,i,i+1}$ corresponding to $\hat h^\mrm{A}_{i,i+1}$ for all $i<n$.
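The construction above can be made fully explicit for the golden chain. Using the Fibonacci $F$-matrix $F^{\tau\tau\tau}_\tau$ in the basis $(\mbb{I},\tau)$, the only non-trivial block of $\hat h^\mrm{S}_{i-1,i,i+1}$ arises when $x_{i-1}=x_{i+1}=\tau$; when either neighbouring label is $\mbb{I}$, the fusion channel of anyons $i$ and $i+1$ is forced to be $\tau$ and the AFM term contributes zero. A minimal numerical sketch (Python; labels $0=\mbb{I}$, $1=\tau$) of this block reads:
\begin{verbatim}
import numpy as np

phi = (1 + 5**0.5) / 2                       # golden ratio
# Fibonacci F-matrix F^{tau tau tau}_tau in the basis (I, tau); it is
# real and symmetric, so its adjoint coincides with its transpose.
F = np.array([[1/phi,       1/phi**0.5],
              [1/phi**0.5, -1/phi     ]])

# AFM coupling in the rotated basis: energy -1 for fusion channel I.
h_rot = np.diag([-1.0, 0.0])

# Non-trivial 2x2 block of h^S when x_{i-1} = x_{i+1} = tau:
block = F.T @ h_rot @ F
# [[-1/phi**2,   -1/phi**1.5],
#  [-1/phi**1.5, -1/phi     ]]
\end{verbatim}
Embedded in the restricted basis of \sref{sec:spinmapping}, these blocks assemble into the three-site terms $\hat h^\mrm{S}_{i-1,i,i+1}$ of the golden chain.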
To construct the final term we will exercise some caution, as $a_n$ and $a_1$ are presently located at opposite ends of the fusion diagram. We therefore begin with the expression
\begin{equation}
\hat h^\mrm{A}_{n,1} = \hat T^\mrm{T}\hat h^\mrm{A}_{n-1,n}\hat T^{\mrm{T}\dagger}.
\end{equation}
Since we have fixed the charges of all punctures $a_i$ to be equivalently $\tau$, we may apply \Eref{eq:equivtrans} to obtain
\begin{equation}
\hat h^\mrm{A}_{n,1} = \hat T^\mrm{T}_\mrm{M}\hat h^\mrm{A}_{n-1,n}\hat T^{\mrm{T}\dagger}_\mrm{M}.
\end{equation}
As $\hat T^\mrm{T}_\mrm{M}$ is equivalent to translation on the chain of spins, $\hat T^\mrm{S}$, we see that
\begin{equation}
\hat h^\mrm{S}_{n-1,n,1} = \hat T^\mrm{S}\hat h^\mrm{S}_{n-2,n-1,n}\hat T^{\mrm{S}\dagger}
\end{equation}
as might have been expected.
The total spin chain Hamiltonian may therefore be written
\begin{equation}
\begin{split}
\hat H^\mrm{S,P,T} &= \sum_{i=0}^{n-1} \left(\hat T^\mrm{S}\right)^i\left(\hat h^\mrm{S}_{1,2,3}\right)\left(\hat T^{\mrm{S}\dagger}\right)^i\\
&= \sum_{i=1}^n \hat h^\mrm{S}_{i-1,i,i+1}
\end{split}\label{eq:H^SPT}
\end{equation}
on a periodic spin chain of length $n$.
For Fibonacci anyons with nearest neighbour interactions, Hamiltonians \eref{eq:H^APT} and \eref{eq:H^SPT} are both quantum critical.
The low energy properties of such systems are described by a conformal field theory (CFT), and the
scaling dimensions
of the local primary fields
can be extracted from the low energy spectrum of the translation invariant critical model on a finite lattice with periodic boundary conditions.\cite{cardy1996,difrancesco1997}
The results from exactly diagonalising $\hat H^\mrm{S,P,T}$ for AFM Fibonacci chains of lengths 24 and 25 are presented in \tref{tab:torus1}.
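For reference, the shift-and-rescale step is an affine map of the spectrum. A minimal sketch (Python), assuming as in \tref{tab:torus1}(i) that the two lowest levels correspond to scaling dimensions $0$ and $\frac{3}{40}$, reads:
\begin{verbatim}
import numpy as np

# Affine shift-and-rescale of an exact-diagonalisation spectrum onto
# CFT scaling dimensions.  The two reference dimensions below are an
# assumption appropriate to the 24-anyon data; other sectors require
# other anchors.
def scaling_dimensions(energies, x0=0.0, x1=3/40):
    E = np.sort(np.asarray(energies))
    scale = (x1 - x0) / (E[1] - E[0])
    return x0 + scale * (E - E[0])
\end{verbatim}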
\begin{table}[bp]
\caption{Energy spectra for rings of (i)~24 and (ii)~25 Fibonacci anyons interacting via an AFM nearest neighbour interaction on the torus, shifted and rescaled to yield scaling dimensions for the associated conformal field theory. For even numbers of anyons, this is the minimal model $\mc{M}(4,3)$, associated with the tricritical Ising model. For an odd number of anyons, we obtain the spectrum of $\mc{M}(4,3)$ with a $Z_2$ twist.\protect{\cite{mossa2008}} The scaling dimensions for odd numbers of anyons may be obtained from those for even numbers of anyons by fusing the corresponding scaling fields with the generator of this internal $Z_2$ symmetry, the field $\varepsilon''$, as noted in \protect{\rcite{feiguin2007}}.\label{tab:torus1}}
~\\
\begin{tabular}{|c|cc|c|}
\hline
\multicolumn{4}{|c|}{(i) 24 anyons}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{Prediction from CFT}&Flux through torus\\
\hline
0.0000 & 0 & & $\mbb{I}$\\
0.0750 & ~0.0750~ & ($\frac{3}{40}$) & $\tau$\\
0.1989 & 0.2000 & ($\frac{1}{5}$) & $\tau$\\
0.8826 & 0.8750 & ($\frac{7}{8}$) & $\mbb{I}$\\
\p{$^*$}1.0622$^*$ & 1.0750 & ($\frac{3}{40}+1$) & $\tau$\\
\p{$^*$}1.1784$^*$ & 1.2000 & ($\frac{1}{5}+1$) & $\tau$\\
1.1841 & 1.2000 & ($\frac{6}{5}$) & $\tau$\\
\p{$^*$}1.8540$^*$ & 1.8750 & ($\frac{7}{8}+1$) & $\mbb{I}$\\
\p{$^*$}1.9469$^*$ & 2.0000 & ($0+2$) & $\mbb{I}$\\
\p{$^*$}1.9843$^*$ & 2.0750 & ($\frac{3}{40}+2$) & $\tau$\\
\p{$^*$}2.0180$^*$ & 2.0750 & ($\frac{3}{40}+2$) & $\tau$\\
\hline
\end{tabular}\\~\\~\\
\begin{tabular}{|c|cc|c|}
\hline
\multicolumn{4}{|c|}{(ii) 25 anyons}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{Prediction from CFT}&Flux through torus\\
\hline
0.0750 & 0.0750 & ($\frac{3}{40}$) & $\tau$\\
\p{$^*$}0.7000$^*$ & ~0.7000~ & ($\frac{7}{10}$) & $\tau$\\
0.8587 & 0.8750 & ($\frac{7}{8}$) & $\mbb{I}$\\
\p{$^*$}1.0662$^*$ & 1.0750 & ($\frac{3}{40}+1$) & $\tau$\\
\p{$^*$}1.4887$^*$ & 1.5000 & ($\frac{3}{2}$) & $\mbb{I}$\\
\p{$^*$}1.6641$^*$ & 1.7000 & ($\frac{7}{10}+1$) & $\tau$\\
\p{$^*$}1.6863$^*$ & 1.7000 & ($\frac{7}{10}+1$) & $\tau$\\
\p{$^*$}1.8163$^*$ & 1.8750 & ($\frac{7}{8}+1$) & $\mbb{I}$\\
\p{$^*$}2.0008$^*$ & 2.0000 & ($0+2$) & $\mbb{I}$\\
\p{$^*$}2.0176$^*$ & 2.0750 & ($\frac{3}{40}+2$) & $\tau$\\
\p{$^*$}2.0516$^*$ & 2.0750 & ($\frac{3}{40}+2$) & $\tau$\\
\hline
\end{tabular}
\\~\\~\\
$0\equiv(\mbb{I},\mbb{I})$, $\frac{3}{40}\equiv(\sigma,\sigma)$, $\frac{1}{5}\equiv(\varepsilon,\varepsilon)$, $\frac{7}{10}\equiv(\varepsilon,\varepsilon')$ or $(\varepsilon',\varepsilon)$, $\frac{7}{8}\equiv(\sigma',\sigma')$, $\frac{6}{5}\equiv(\varepsilon',\varepsilon')$, $\frac{3}{2}\equiv(\varepsilon'',\mbb{I})$ or $(\mbb{I},\varepsilon'')$\\~\\
$^*$ Eigenvalue is twofold degenerate
\end{table}%
The energy eigenvalues have been shifted and rescaled to give the scaling dimensions of the corresponding CFT, which for this Hamiltonian is the minimal model associated with the tricritical Ising model, $\mc{M}(4,3)$.
For each scaling dimension, \tref{tab:torus1} also gives a parameter
referred to as the ``flux through the torus''.
If we write our
states
on the torus in basis $B_2$ constructed in the ``outside'' region of $\mbb{R}^3$, then this is simply the value of charge label $x_n$, which may be measured using the family of operators $\hat Y^\mrm{T}_b$. It is not difficult to see from \Eref{eq:T^T} and Figs.~\ref{fig:Yb} and \ref{fig:localoperators} that any operator $\hat Y^\mrm{T}_b$ will commute with a Hamiltonian of local terms $\hat H^\mrm{A,P,T}$ on the torus, and thus for Fibonacci anyons we may simultaneously diagonalise $\hat H^\mrm{A,P,T}$ and $\hat Y^\mrm{T}_\tau$. It therefore follows that we may associate every energy eigenstate with a corresponding eigenvalue of $\hat Y^\mrm{T}_\tau$, which in turn corresponds to the measurement of a well-defined charge $x_n$.
For a translation-invariant Hamiltonian such as we have here, we may also assign a momentum to each state as shown in the dispersion diagram of \fref{fig:dispersion}.
This diagram clearly shows the distinction between (i)~periodic translation on sites of the spin chain and (ii)~periodic translation of anyons on the torus, with the difference
between $\hat T^\mrm{S}$ and $\hat T^\mrm{T}$
resulting in a relative phase shift of
$R^{\tau\tau}_\mbb{I}=e^{{4\pi\mathrm{i}}/{5}}$ as described in \sref{sec:spintranslate}.\cite{Note5}
\begin{figure}[tp]
\includegraphics[width=246.0pt]{dispersion}
\caption{COLOR ONLINE. Energy vs. phase diagram, where $e^{\mrm{i}\theta}$ is the phase acquired by energy eigenstates on translation by one site, for the Hamiltonian of the golden chain with AFM interaction (i)~made translation-invariant on the corresponding periodic spin chain with $p=24$, and (ii)~naturally translation-invariant on a non-trivial ring of anyons on the torus with $n=24$. Squares mark theoretical values for primary fields, and circles show selected descendants. Solid lines bound the energies from below.
The system of spins yields a dispersion diagram identical to that of anyons on the torus, except that the momenta are shifted by an offset of
$-\mathrm{i}\ln\left[\left(R^{\tau\tau}_\mbb{I}\right)^{-1}\right]=-\frac{4\pi}{5}$ as described in \sref{sec:spintranslate}.
\label{fig:dispersion}}
\end{figure}%
Interestingly, the non-zero momentum of the ground state in \fref{fig:dispersion}(ii) implies that it is possible to distinguish between clockwise and anti-clockwise translations of the ground state of the AFM golden chain.
This observation is discussed
in Appendix~\ref{apdx:chiral}.
\subsection{Hamiltonian with periodic boundary conditions on the disc\label{sec:PBCdisc}}
To construct a periodic translation operator on the disc, we recognise that the anyon sites in \fref{fig:anyonstates_sphere}(iv) lie on a circle, which must be assumed to close either towards or away from the reader. Opting for the latter, we may define the periodic translation operator on the disc according to \fref{fig:discoperators}(iv). By inspection we see that translation may be implemented by means of repeated application of the braiding operator of \fref{fig:discoperators}(iii), which we will denote $\hat B^\mrm{A}_{i,i+1}$. Denoting the anyonic periodic translation operator on the disc of \fref{fig:discoperators}(iv) by $\hat T^\mrm{D}$, we have
\begin{equation}
\hat T^\mrm{D}=\prod_{i=1}^{n-1}\hat B^\mrm{A}_{i,i+1}.\label{eq:TD}
\end{equation}
We may now introduce a translation-invariant Hamiltonian
\begin{equation}
\begin{split}
\hat H^\mrm{A,P,D} &= \sum_{i=0}^{n-1} \left(\hat T^\mrm{D}\right)^i\left(\hat h^\mrm{A}_{1,2}\right)\left(\hat T^{\mrm{D}\dagger}\right)^i\\
&= \sum_{i=1}^n \hat h^\mrm{A}_{i,i+1}
\end{split}\label{eq:H^APD}
\end{equation}
where $^\mrm{D}$ denotes that this Hamiltonian acts on the disc.
As before, we also introduce a spin chain whose sites correspond to the degrees of freedom of the anyonic fusion tree. However, on this occasion we use the fusion tree of \fref{fig:anyonstates_sphere}(iv) as a basis of states, and the corresponding spin chain is of length $p=n-3$.
Near the centre of the chain we may construct the spin equivalent of an $F$ move in the same way as for the torus \eref{eq:defineF}, but we see that operators near the end of the chain will act on a reduced number of sites. For example, an $F$ move acting on sites $a_2$ and $a_3$
acts only on spin variables $x_1$ and $x_2$,
mapping $|x_1x_2\rangle$ into $\sum_{\tilde x_1} (F^{a_1a_2a_3}_{x_2})_{x_1\tilde x_1} |\tilde x_1 x_2\rangle$.
We will continue to denote the spin chain counterparts of these operators by $\hat F^\mrm{S}_{i,i+1,i+2}$, with the understanding that when evaluating \Eref{eq:defineF} on the disc, any indices $x_0$ or $x_{p+1}$ are to be replaced by charges $a_1$ and $\overline{a_n}$ respectively, and any indices $x_{-1}$ or $x_{p+2}$ are to be replaced by the vacuum charge $\mbb{I}$. We do not modify the spin chain, which continues to run from $x_1$ to $x_p$. This behaviour manifestly breaks translation invariance on the spin chain.
For values of $i$ sufficiently distant from 1 or $n$ we may also map $\hat h^\mrm{A}_{i,i+1}$ onto a three-site spin operator as before, although this is now denoted $\hat h^\mrm{S}_{i-2,i-1,i}$ as it acts on spin sites $x_{i-2}$, $x_{i-1}$, and $x_i$.
By using the extended definition of $\hat F^\mrm{S}_{i,i+1,i+2}$ we may even write down spin operators equivalent to $\hat h^\mrm{A}_{1,2}$, $\hat h^\mrm{A}_{2,3}$, $\hat h^\mrm{A}_{n-2,n-1}$, and $\hat h^\mrm{A}_{n-1,n}$. However, for $\hat h^\mrm{A}_{n,1}$ we must introduce the spin chain equivalent of the anyonic periodic translation operator on the disc, $\hat T^\mrm{D}$.
To do this, we first construct the spin chain counterpart to the anyonic braiding operator
given in \fref{fig:discoperators}(iii). This is achieved by introducing a unitary operator $\hat R_i^\mrm{S}$ derived from the tensor $R$ in \fref{fig:basischange}(ii), which operator multiplies a state $|x_i\rangle$ by a phase $R^{a_{i+1}a_{i+2}}_{x_i}$. Using this we may write the
spin chain equivalent of \fref{fig:discoperators}(iii) as
\begin{equation}
\hat B^\mrm{S}_{i,i+1,i+2} = (\hat F^\mrm{S}_{i,i+1,i+2})^\dagger \hat R^\mrm{S}_{i+1} \hat F^\mrm{S}_{i,i+1,i+2}.
\end{equation}
As with $\hat F^\mrm{S}$, the same special identifications for $x_{-1}$, $x_0$, $x_{p+1}$, and $x_{p+2}$ must be made when applying
either $\hat R^\mrm{S}$ or $\hat B^\mrm{S}$
to a state. Using $\hat{B}^\mrm{S}$ we can define an operator $\hat T^{\prime\mrm{S}}$ on the spin chain which is equivalent to periodic translation on the lattice of anyons,
\begin{equation}
\hat T^{\prime\mrm{S}} = \prod_{i=0}^{p+1} \hat B^\mrm{S}_{\spinsite{i-1},\spinsite{i},\spinsite{i+1}},\label{eq:anyontrans}
\end{equation}
and thus compute the spin chain Hamiltonian which is equivalent to $\hat H^\mrm{A,P,D}$:
\begin{equation}
\hat H^\mrm{S,P,D} = \sum_{i=0}^{p+2} \left(\hat T^{\prime\mrm{S}}\right)^i \left(\hat h^\mrm{S}_{\spinsite{1},\spinsite{2},\spinsite{3}}\right) \left(\hat T^{\prime\mrm{S}\dagger}\right)^i.\label{eq:H^SPD}
\end{equation}
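For completeness we note that the braid and translation operators above admit a direct numerical implementation. A sketch of the non-trivial braid block (Python; the phase $R^{\tau\tau}_\tau=e^{-3\pi\mathrm{i}/5}$ is the standard companion of the convention $R^{\tau\tau}_\mbb{I}=e^{4\pi\mathrm{i}/5}$ adopted earlier) reads:
\begin{verbatim}
import numpy as np

phi = (1 + 5**0.5) / 2
F = np.array([[1/phi,       1/phi**0.5],
              [1/phi**0.5, -1/phi     ]])
# Fibonacci R-phases in the convention R^{tau tau}_I = exp(4 pi i / 5).
R = np.diag([np.exp(4j*np.pi/5), np.exp(-3j*np.pi/5)])

# Non-trivial block of the three-site braid B^S = (F^S)^dagger R^S F^S,
# acting on the middle label x_i when x_{i-1} = x_{i+1} = tau; for other
# neighbouring labels the F move is trivial and B^S reduces to a single
# R-phase.  T'^S is the ordered product of such braids along the chain,
# with the boundary identifications for x_0, x_{p+1} described above.
B_block = F.T @ R @ F
\end{verbatim}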
The Hamiltonian $\hat H^\mrm{S,P,D}$ is clearly not translation invariant under the natural definition of translation on a periodic spin chain. However, it does exhibit
translation invariance under the action of the anyon-derived translation superoperator $\hat T^{\prime\mrm{S}}(\cdot)\hat T^{\prime\mrm{S}\dagger}$.
Away from the edges of the fusion tree, the action of
$\hat T^{\prime\mrm{S}}(\cdot)\hat T^{\prime\mrm{S}\dagger}$ is equivalent to translation on the system of spins, such that (for example) $\hat T^{\prime\mrm{S}}(\hat h^\mrm{S}_{1,2,3})\hat T^{\prime\mrm{S}\dagger}=\hat h^\mrm{S}_{2,3,4}$. However, this does not hold where the translation would yield an operator crossing between sites 1 and $p$. Instead, $\hat T^{\prime\mrm{S}}(\hat h^\mrm{S}_{p-2,p-1,p})\hat T^{\prime\mrm{S}\dagger}$ yields a two-site operator acting on spin sites $p-1$ and $p$, and it is necessary to apply $\hat T^{\prime\mrm{S}}(\cdot)\hat T^{\prime\mrm{S}\dagger}$
six times to map $\hat h^\mrm{S}_{p-2,p-1,p}$ into $\hat h^\mrm{S}_{1,2,3}$, with none of the intermediate terms resembling a translation on the spin system of the original operator $\hat h^\mrm{S}_{p-2,p-1,p}$. Nevertheless, the complete Hamiltonian satisfies $\hat T^{\prime\mrm{S}}(\hat H^\mrm{S,P,D})\hat T^{\prime\mrm{S}\dagger} = \hat H^\mrm{S,P,D}$.
The results of exactly diagonalising $\hat H^\mrm{S,P,D}$ are given in \tref{tab:disc1}, and a dispersion diagram is plotted in \fref{fig:dispersion_disc} for comparison with the torus [\fref{fig:dispersion}(ii)].
\begin{table}[bp]
\caption{Energy spectra for rings of (i)~24 and (ii)~25 Fibonacci anyons interacting via an AFM nearest neighbour interaction on the disc, shifted and rescaled to yield scaling dimensions for the associated conformal field theory. For even numbers of anyons, this is the minimal model $\mc{M}(4,3)$, associated with the tricritical Ising model. For an odd number of anyons, we obtain operators from the spectrum of $\mc{M}(4,3)$ with a $Z_2$ twist.\protect{\cite{mossa2008}} The scaling dimensions for odd numbers of anyons may be obtained from those for even numbers of anyons by fusing the corresponding scaling fields with the generator of this internal $Z_2$ symmetry, the field $\varepsilon''$, as noted in \protect{\rcite{feiguin2007}}.\label{tab:disc1}}
~\\
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(i) 24 anyons}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
0.0000 & 0 &\\
0.8750 & ~0.8750~ & ($\frac{7}{8}$)\\
\p{$^*$}1.8380$^*$ & 1.8750 & ($\frac{7}{8}+1$)\\
\p{$^*$}1.9301$^*$ & 2.0000 & ($0+2$)\\
\p{$^*$}2.7012$^*$ & 2.8750 & ($\frac{7}{8}+2$)\\
\p{$^*$}2.7771$^*$ & 2.8750 & ($\frac{7}{8}+2$)\\
\hline
\end{tabular}~~~
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(ii) 25 anyons}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
0.8750 & 0.8750 & ($\frac{7}{8}$)\\
\p{$^*$}1.5000$^*$ & ~1.5000~ & ($\frac{3}{2}$)\\
\p{$^*$}1.8250$^*$ & 1.8750 & ($\frac{7}{8}+1$)\\
\p{$^*$}2.4107$^*$ & 2.5000 & ($\frac{3}{2}+1$)\\
\p{$^*$}2.6939$^*$ & 2.8750 & ($\frac{7}{8}+2$)\\
2.7598 & 2.8750 & ($\frac{7}{8}+2$)\\
\hline
\end{tabular}
\\~\\~\\$0\equiv(\mbb{I},\mbb{I})$, $\frac{7}{8}\equiv(\sigma',\sigma')$, $\frac{3}{2}\equiv(\varepsilon'',\mbb{I})$ or $(\mbb{I},\varepsilon'')$\\~\\$^*$ Eigenvalue is twofold degenerate
\end{table}%
\begin{figure}[tp]
\includegraphics[width=246.0pt]{dispersion2}
\caption{COLOR ONLINE. Energy vs. phase diagram, where $e^{\mrm{i}\theta}$ is the phase acquired by energy eigenstates on translation by one site, for the golden chain with AFM interaction on the disc with $n=24$. Squares mark theoretical values for primary fields, and circles show selected descendants. Solid lines bound the energies from below. Note that
phase shifts
are in agreement with those computed on the torus [\protect{\fref{fig:dispersion}(ii)}], and are offset by $\frac{4\pi}{5}$ relative to the corresponding values on the spin chain [\protect{\fref{fig:dispersion}(i)}].
\label{fig:dispersion_disc}}
\end{figure}%
As with the torus the energy spectrum has been shifted and rescaled to give the scaling dimensions of local scaling operators in $\mc{M}(4,3)$.
This behaviour, where translation invariance exists relative to an operator which is not the natural translation operator on the spin chain,
has previously been observed for certain $SU(2)_k$-invariant spin chain Hamiltonians by \textcite{grosse1994}. It is now known that a
relationship exists between $SU(2)_k$-invariant spin chains and chains of $SU(2)_k$ anyons, and although present research has concentrated on anyons on the torus,\cite{trebst2008} it is nevertheless likely that the models of \citeauthor{grosse1994} may similarly
be
mapped into interactions of $SU(2)_k$ anyons on the disc.
The form of the anyonic translation operator also has practical implications for the restriction of the Hilbert space of the spin chain mentioned in \sref{sec:spinmapping}, and these technical details are discussed in
Appendix~\ref{apdx:hilbertspace}.
\subsection{Relationship between the torus and the disc}
\subsubsection{Mapping between disc and torus states\label{sec:mapdisctorus}}
By comparing Tables~\ref{tab:torus1} and \ref{tab:disc1} we see that on the disc, we compute scaling dimensions which correspond to those obtained for trivial flux through the torus. The reason for this may be seen by comparing Figs.~\ref{fig:anyonstates_sphere}(iv) and \fref{fig:torusstates}(ii). By restricting the flux through the torus $\tilde{x}_n$ in basis $B_2$ to be $\mbb{I}$, we obtain a fusion tree identical to that of \fref{fig:anyonstates_sphere}(iv).
The action of the translation-invariant Hamiltonian
\begin{equation}
\hat H^\mrm{A,P,X} = \sum_{i=0}^{n-1} \left(\hat T^\mrm{X}\right)^i\left(\hat h^\mrm{A}_{1,2}\right)\left(\hat T^{\mrm{X}\dagger}\right)^i
\end{equation}
where $\mrm{X}$ stands for $\mrm{T}$ on the torus and $\mrm{D}$ on the disc, is therefore equivalent in both cases, and we obtain the observed correspondences between the energy spectrum of the torus and the disc.
Away from criticality, the Hamiltonian is insensitive to non-local properties of the system. In the thermodynamic limit each energy level is therefore $|\mc{G}|$-fold degenerate on the torus,
with the degeneracy enumerated by the different values which may be taken by the flux through the torus. As the disc corresponds to the torus with the flux constrained to be $\mbb{I}$, the eigenvalues on the disc in the thermodynamic limit are the same, but nondegenerate.
At criticality, the energy spectrum is no longer necessarily independent of the flux through the torus, and the degeneracy of energy levels on the torus may be broken, with different flux sectors exhibiting different energy spectra.
Nevertheless, the identification between the disc and the torus with trivial flux persists, and for critical local Hamiltonians applied both on the torus and on the disc,
the disc necessarily exhibits the same energy spectrum
as obtained on the torus
when the flux through the torus is constrained to be $\mbb{I}$.
Recall now that each scaling dimension obtained from the energy spectrum is associated with a local scaling operator.\cite{cardy1996,difrancesco1997} In \rcite{feiguin2007}, it is argued that the classification of scaling dimensions on the torus by the value of the flux through the torus translates into an equivalent classification of local scaling operators, and that when in the ground state,
only local scaling operators associated with a flux of $\mbb{I}$ may cause local perturbations.
\subsubsection{Conformal field theory with a defect\label{sec:CFTwdefect}}
In Secs.~\ref{sec:PBCtorus} and \ref{sec:PBCdisc} we have seen that a spin chain of length $p$ may be used, via appropriate mappings, to represent states of a system of either $p$ anyons on the torus or $p+3$ anyons on the disc. We also observed that each of these systems comes with its own definition of translation invariance. For the torus this corresponds (up to a state-dependent phase) to the natural definition of translation on a periodic spin chain, whereas for the disc this is given by operator $\hat T^{\prime\mrm{S}}$ of Eq.~\eref{eq:anyontrans} but nevertheless corresponds (when applied to an operator using the adjoint action) to the natural definition of translation invariance on the spin chain on sites sufficiently far from $x_1$ and $x_p$. It is therefore natural to interpret the difference between these two models as being equivalent to the introduction of a defect in translation invariance.
We will show by construction that this defect has the unusual property of being invertible. That is, there exists a second defect which, when introduced manually, will annihilate the original defect and restore the full spectrum for a system of anyons on the torus, albeit a torus of length $p-3$.
We now construct a Hamiltonian on a disc of Fibonacci anyons which reproduces the spectrum of the AFM Hamiltonian on the torus. This Hamiltonian satisfies translation invariance on the disc---i.e. it is invariant under the adjoint action of $\hat T^\mrm{D}$ \eref{eq:TD}---except for two local terms.
These terms define a defect $\mc{D}$. The spectrum of the resulting Hamiltonian for a ring of $n$ anyons on the disc is equivalent to that of $n-3$ anyons on the torus. If $n$ is even, then $n-3$ is odd, and as noted below \tref{tab:torus1}, the spectrum of a ring of an odd number of anyons on the torus is equivalent to that of an even number of anyons with a $Z_2$ twist.\cite{mossa2008} Thus the fusion of defect $\mc{D}$ with the $Z_2$ twist constitutes the inverse of the defect which arises from application of the translation superoperator $\hat T^\mrm{D}(\cdot)\hat T^{\mrm{D}\dagger}$.
The Hamiltonian exhibiting defect $\mc{D}$ takes the form
\begin{equation}
\hat H^\mrm{A,P,D\rightarrow T} = \sum_{i=3}^{n-3} \hat h^\mrm{A}_{i,i+1} + \hat h^\mrm{A}_{n-2,n-1,n,1,2} + \hat h^\mrm{A}_{n-1,n,1,2,3}\label{eq:Hdefect_Disc}
\end{equation}
for a ring of $n$ anyons on the disc, and details of its construction for the AFM or FM Fibonacci chain
are given in Appendix~\ref{apdx:anyondefect}.
As a consequence of the invertibility of the defect in translation, it follows that we may compute the spectrum of a system of anyons on either the torus or the disc using a system of anyons on either surface.
In this section we have shown how to extract the spectrum of a system of anyons on the torus from a system of anyons on the disc, by means of a local modification to the Hamiltonian.
In \sref{sec:mapdisctorus} we noted that one can extract the spectrum of a system of anyons on the disc from a system of anyons on the torus by means of a global operator restricting the flux through the torus to be $\mbb{I}$, and
for completeness we now note that this may be achieved by adding to the Hamiltonian an appropriate 1-site local operator acting on site $x_n$ of the spin chain. This 1-site operator applies an arbitrarily large energy penalty to states for which $x_n\not=\mbb{I}$.
When this term is introduced, the resulting Hamiltonian has
a spectrum equivalent to a system of $n$ anyons on the disc.
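A minimal sketch of such a penalty term in the spin-chain representation (Python; the magnitude \texttt{lam} is arbitrary provided it exceeds the spectral window of interest) is:
\begin{verbatim}
import numpy as np

# Diagonal flux penalty on the last fusion-tree label: states with
# x_n != I (label 0) are pushed up by an arbitrarily large energy lam.
def flux_penalty(basis_states, lam=1e3):
    # basis_states: tuples (x_1, ..., x_n) spanning the restricted basis
    return np.diag([lam if s[-1] != 0 else 0.0 for s in basis_states])
\end{verbatim}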
Understanding that systems of anyons on the disc and on the torus each admit their own natural notion of translation invariance, and that these are equivalent up to a specific defect, provides another interpretation of the different operator spectra of Tables~\ref{tab:torus1} and \ref{tab:disc1}. The study of defects and boundaries in conformal field theory, both in the continuum\cite{difrancesco1997,cardy2006} and on the lattice,\cite{evenbly2010a} has shown that the presence of a local defect in a critical system may globally affect the scaling operator spectrum. In the present example, we have shown that the AFM golden chain on the disc is equivalent to the AFM golden chain on the torus with a defect, and that this defect eliminates the scaling operators whose scaling dimensions exhibit a flux of $\tau$ in \tref{tab:torus1}, and likewise eliminates their corresponding eigenvalues from the spectrum of the Hamiltonian.
\subsection{Comparison with models on a spin chain\label{sec:heisenberg}}
In this section we have dealt with models such as the AFM golden chain, which are primarily defined on a ring of anyons. It is instructive to compare these with models such as the Heisenberg model, which is defined on a spin chain but which can also be expressed in the diagrammatic notation used in this paper. As an example, we will now consider the spin-$\frac{1}{2}$ AFM Heisenberg model with periodic boundary conditions, assumed to be constructed on a manifold which is topologically the disc. Because this model possesses SU(2) symmetry, we may represent it in the diagrammatic notation using a UBMTC based on the fusion rules of SU(2). States can be represented in the form of \fref{fig:heisenberg}(i), and the Hamiltonian takes the form of \fref{fig:heisenberg}(ii).
\begin{figure}
\includegraphics[width=246.0pt]{heisenbergbasis2}
\caption{(i)~Fusion tree used as a basis of states for the $n$-site Heisenberg spin chain. Note that in contrast to \protect{\fref{fig:anyonstates_sphere}}(iv), the total charge (or spin) is not constrained to be zero, and
a tree with total charge $x_{n-1}$ therefore represents a vector space of
dimension $d_{x_{n-1}}$.
(ii)~Diagrammatic representation of the Hamiltonian for the AFM spin-$\frac{1}{2}$ Heisenberg model.\label{fig:heisenberg}}
\end{figure}%
We may now
analyse this system in two different ways. Either we may obtain the energy spectrum by exactly diagonalising the original spin chain, and compute momenta using the natural definition of translation on that spin chain, or we may write it in the diagrammatic notation, and map this to a different spin chain as
described in \sref{sec:spinmapping}. We would then compute the energy on the chain of fusion tree variables $x_1,\ldots,x_{n-1}$, and the momenta using translation operator $\hat T^\mrm{D}$ from \fref{fig:anyonstates_sphere}(i).
As is to be expected, the results obtained using these different methods agree to the limits of numerical precision.
This
observation
has bearing upon the definition of the translation operator. If we use the
definitions for $\hat T^\mrm{D}$ and $\hat T^\mrm{T}$ given in \fref{fig:anyonstates_sphere}(i), \Eref{eq:T^T}, and \fref{fig:torustranslation_throughloop}, then we find that the ground state of the AFM golden chain has non-trivial momentum. Suppose that, instead, we assume
a momentum of zero for the ground state of the golden chain, and adopt
\begin{equation}
\hat T^\mrm{D}_\mrm{M}=\left(R^{a_n\overline{a_n}}_\mbb{I}\right)^{-1}\,\raisebox{-12pt}{\includegraphics{translop}}{}\,\label{eq:magtransdisc}
\end{equation}
as the translation operator
on the disc and
\begin{equation}
\hat T^\mrm{T}_\mrm{M}=\left(R^{a_n\overline{a_n}}_\mbb{I}\right)^{-1}\,\raisebox{-12pt}{\includegraphics{translop}}{}\,\tag{\ref{eq:magtrans}}
\end{equation}
on the torus (noting that $\hat T^\mrm{T}_\mrm{M}$ is the modified translation operator originally introduced in \sref{sec:torustranslate}, corresponding to cycling of the fusion tree variables $x_1,\ldots,x_{n-1}$).
For consistency we would then also have to use \Eref{eq:magtransdisc} when working with the diagrammatic representation of the AFM Heisenberg chain, with $R^{a_n\overline{a_n}}_\mbb{I}
=R^{\frac{1}{2}\frac{1}{2}}_0=-1$. However, the momenta obtained using this operator are
inconsistent with results obtained by exactly diagonalising the original spin chain, indicating that $\hat T^\mrm{D}$, and not $\hat T^\mrm{D}_\mrm{M}$, is the correct definition for the periodic translation operator on the disc.
As we require that
the torus with trivial flux be consistent with the disc, we also
obtain that $\hat T^\mrm{T}$, and not $\hat T^\mrm{T}_\mrm{M}$, is the correct periodic translation operator on the torus. Thus
study of the AFM Heisenberg spin chain supports our claim that the ground state of the AFM golden chain has non-zero momentum, as observed in \fref{fig:dispersion}(ii) for the torus and \fref{fig:dispersion_disc} for the disc.
\section{Open boundary conditions\label{sec:OBC}}
On 1-D systems with open boundary conditions the situation is somewhat simpler, but
some care must be taken as the fusion tree basis will again depend upon the topology of the quantum liquid.
For example, as in \rcite{feiguin2007}, one might choose to study the Hamiltonian corresponding to free boundary conditions on the torus,
\begin{equation}
\hat H^{\mrm{A,F,T}} = \sum_{i=1}^{n-1} \hat h^\mrm{A}_{i,i+1},
\end{equation}
where F denotes free boundary conditions, which maps to the spin chain as
\begin{equation}
\hat H^\mrm{S,F,T} = \sum_{i=1}^{n-1} \hat h^\mrm{S}_{\spinsite{i-1},\spinsite{i},\spinsite{i+1}}.\label{eq:HF_torus}
\end{equation}
(This may be contrasted with \Eref{eq:H^SPT}.)
Similarly, one could place the same Hamiltonian on the disc:
\begin{align}
\hat H^{\mrm{A,F,D}} &= \sum_{i=1}^{n-1} \hat h^\mrm{A}_{i,i+1},\\
\hat H^\mrm{S,F,D} &= \sum_{i=1}^{n-1} \hat h^\mrm{S}_{\spinsite{i-2},\spinsite{i-1},\spinsite{i}}.\label{eq:HF_disc}\end{align}
Once again the spectrum for the Hamiltonian on the disc is seen to be a subset of that on the torus (\tref{tab:OBC_fixed}), and once again by means of appropriate modifications of the Hamiltonians, corresponding to alternative choices of boundary conditions,
we may obtain either set of scaling dimensions on either topology.
\begin{table}[bp]
\caption{Numerical results and CFT assignments for the smallest scaling dimensions on open chains of Fibonacci anyons of length $n$ with AFM coupling and free boundary conditions: (i)-(ii)~On the torus, Hamiltonian $\hat H^\mrm{S,F,T}$ \protect{\eref{eq:HF_torus}}. (iii)-(iv) On the disc, Hamiltonian $\hat H^\mrm{S,F,D}$ \protect{\eref{eq:HF_disc}}.
\label{tab:OBC_fixed}}
~\\
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(i) 24 anyons, torus}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
\p{$^*$}0.0000$^*$ & 0 &\\
0.6000 & ~0.6000~ & ($\frac{3}{5}$)\\
1.6009 & 1.6000 & ($\frac{3}{5}+1$)\\
\p{$^*$}2.0186$^*$ & 2.0000 & ($0+2$)\\
2.5765 & 2.6000 & ($\frac{3}{5}+2$)\\
2.5808 & 2.6000 & ($\frac{3}{5}+2$)\\
\hline
\end{tabular}~~~
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(ii) 25 anyons, torus}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
0.1000 & 0.1000 & ($\frac{1}{10}$)\\
1.1000 & ~1.1000~ & ($\frac{1}{10}+1$)\\
\p{$^*$}1.4845$^*$ & 1.5000 & ($\frac{3}{2}$)\\
2.0901 & 2.1000 & ($\frac{1}{10}+2$)\\
\p{$^*$}2.4670$^*$ & 2.5000 & ($\frac{3}{2}+1$)\\
3.0524 & 3.1000 & ($\frac{1}{10}+3$)\\
\hline
\end{tabular}
\\~\\~\\
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(iii) 24 anyons, disc}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
0.0000 & 0 &\\
2.0000 & ~2.0000~ & ($0+2$)\\
2.9762 & 3.0000 & ($0+3$)\\
3.9137 & 4.0000 & ($0+4$)\\
3.9820 & 4.0000 & ($0+4$)\\
4.7976 & 5.0000 & ($0+5$)\\
\hline
\end{tabular}~~~
\begin{tabular}{|c|cc|}
\hline
\multicolumn{3}{|c|}{(iv) 25 anyons, disc}\\
\hline
\hline
Numerics&\multicolumn{2}{|c|}{CFT prediction}\\
\hline
1.5000 & 1.5000 & ($\frac{3}{2}$)\\
2.5000 & ~2.5000~ & ($\frac{3}{2}+1$)\\
3.4726 & 3.5000 & ($\frac{3}{2}+2$)\\
3.4956 & 3.5000 & ($\frac{3}{2}+2$)\\
4.4025 & 4.5000 & ($\frac{3}{2}+3$)\\
4.4621 & 4.5000 & ($\frac{3}{2}+3$)\\
\hline
\end{tabular}
\\~\\~\\
$0\equiv\mbb{I}$, $\frac{1}{10}\equiv\varepsilon$, $\frac{3}{5}\equiv\varepsilon'$, $\frac{3}{2}\equiv\varepsilon''$
\\~\\
$^*$ Eigenvalue is twofold degenerate
\end{table}
\section{Conclusion}
In summary, this paper may be divided into two parts. In the first (\sref{sec:ASO}) we have drawn attention to the importance of manifold topology in the study of anyonic systems. By appeal to the underlying TQFT, we have explained how to construct diagrammatic representations of anyonic states, operators, and the inner product, for surfaces of arbitrary genus, and we have done so explicitly for the torus, sphere, and disc. Applying these constructions to systems of fixed Fibonacci anyons (the golden chain), we have been able to construct explicit relationships between the energy spectra of equivalent models of interacting anyons on the torus, sphere, and disc.
For critical systems, this implies an equivalent relationship between the scaling operators of the system.
In the second part of the paper (Secs.~\ref{sec:PBC}--\ref{sec:OBC}) we used these results to study the behaviour of an example system, consisting of a ring or chain of interacting Fibonacci anyons on either the torus or the disc.
It has been shown that this chain is described by the same CFT as the tricritical Ising model, and that on the torus its criticality is topologically protected.\cite{feiguin2007} We have shown that this
criticality is similarly protected on the disc. We further demonstrated that protection of criticality on the disc may
be understood in terms of conformal field theory, where the system on the disc maps into a system on the torus with a defect in translation, and presence of that defect modifies the scaling operator spectrum which is exhibited (\sref{sec:CFTwdefect}).
As a whole, this paper therefore presents the means to relate systems of interacting anyons on manifolds of differing topology, and applies this to examples using Fibonacci anyons. Insight is gained into the topological protection of criticality of these systems and into the robustness of this protection across surfaces of different genus, with equivalent protection
shown to be exhibited on both the torus and the disc.
\begin{acknowledgments}
The authors would like to thank Miguel Aguado, Andreas Ludwig, Simon Trebst, and Matthias Troyer for insightful discussions. The authors acknowledge the support of the Australian Research Council (FF0668731, DP0878830, DP1092513, APA). This research was supported in part by the Perimeter Institute for Theoretical Physics.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Realistic systems are always coupled to environments. Small effects
of the environment on the system can nicely be described using random
perturbations (noise). In Hamiltonian systems, noise induces dissipation,
can destroy the regular dynamics, and affects transport, to mention a
few examples. The presence of noise can drastically change the dynamics and
some regions of the phase space, inaccessible in the conservative case,
can be reached when noise is considered. This occurs in typical mixed
phase spaces of two-dimensional (2D) Hamiltonian systems, where the KAM tori can
be treated as barriers in the phase space that cannot be transposed
\cite{Ott02}. In such cases the presence of noise allows chaotic trajectories
to penetrate tori leading to new features and behaviors. In general, the
presence of noise can modify the volume of invariant sets, which scales with
the magnitude of the noise \cite{Mills-06}, can enhance trapping effects in
chaotic scattering \cite{AltmannPRL}, and can change the escape rate from algebraic
to exponential decay in scattering regions \cite{Sanjuan-08, Sanjuan-09} and from
trajectories leaving from inside KAM curves \cite{Rodrigues-10}. In addition,
noise affects anomalous transport phenomena such as negative mobility and
multiple current reversals \cite{YangLong-2012}, enhances the creation and
annihilation rates of topological defects \cite{chaos15} and postpones the onset
of turbulence and stabilizes the three-dimensional waves which would otherwise
undergo gradient-induced collapse \cite{chaos11}. In systems with spatiotemporal
chaos, noise can delay or advance the collapse of chaos \cite{PhysRevE.75.066209}.
The effect of noise in one-dimensional systems has already been studied
\cite{JPSJ-1998}, as has its influence on the transition to chaos in systems
which undergo period-doubling cascades \cite{crutchPRL,shraimanPRL}. For this
class of systems it was demonstrated that noise can induce escape from
bifurcating attractors \cite{PhysRevE.80.031147}.
In this contribution we study the effects of noise on the dynamics of the
standard map with mixed phase space, adding a sequence of independent
random variables drawn from three different distributions: Gaussian,
uniform and a power-law correlated (PLC) distribution. The motivation to
choose such distributions is related to the context of open systems: the
Gaussian distribution is connected to thermal baths, the uniform distribution
is chosen for its simplicity, and the PLC distribution is related to a very
active research area of finite and non-Markovian environments \cite{bao17,samyr14,
rosa08}, to mention a few. Our results show that, using uncorrelated noise, the
resulting dynamics does not depend significantly on the choice of the
distribution (Gaussian or uniform), as expected \cite{CRUTCHFIELD-82}. For
PLC noise, algebraic decays of the recurrence time statistics (RTS) curves were
found, even for larger noise intensities. The standard deviations of the
distributions are the relevant quantities for changing the dynamics. We also show
that strong noise intensities induce an ergodic-like behaviour with exponential
decays of the RTS; however, a remnant of the regular islands is still visible in
the value of the Lyapunov exponent when compared to the noiseless case.
This work is organized as follows: In Sec. \ref{model} we present the
model as well as the distributions used to generate the noise. Analytical
results for the stability of central points are also presented. In Sec.
\ref{psd} the changes in the phase space are investigated. In Secs.
\ref{recur} and \ref{les} the dynamics of the standard map with noise is
treated using the RTS and the Lyapunov exponent, respectively. The occupation of
the phase space as a function of time for each case is presented in Sec.
\ref{rate}, and in Sec. \ref{conc} we summarize our main results.
\section{Standard map with noise}
\label{model}
The model used in this work is the paradigmatic Chirikov-Taylor standard
map with additive independent random variable at each time step described
by \cite{Karney-82}
\renewcommand{\arraystretch}{1.4}
\begin{equation}
\label{stand-map}
\begin{array}{rcll}
p_{n+1}&=& p_{n} + \dfrac{K}{2\pi}\sin(2\pi \hspace{0.02cm} x_{n}) +
\dfrac{D\xi_n}{2\pi} \hspace{0.04cm} & \hspace{0.4cm} [\mathrm{mod} \hspace{0.3cm} 1], \\
x_{n+1}&=& x_{n} + p_{n+1} & \hspace{0.4cm} [\mathrm{mod}
\hspace{0.3cm} 1], \\
\end{array}
\end{equation}
where $x_n$ is the position at the iteration $n=0,1,2,\ldots$, and $p_n$ its
conjugated momentum. $K$ is the nonlinear positive parameter, $\xi_n$ is the
random variable and $D$, also a positive parameter, controls the intensity of
$\xi_n$. The random variable was included in the above map in a distinct
way from that proposed in \cite{Karney-82}. The parameter $K$
is responsible for the changes in the nonlinear dynamics, so that for larger
values of $K$ stochasticity is obtained. The map (\ref{stand-map}) has fixed
points at $x_1=0$, $p_1=0$ and at $x_1=1/2$, $p_1=0$. Applying the stability
condition $|\mathrm{Tr}(\mathbf{J})| < 2$ for the trace of the Jacobian matrix
\cite{Lichtenberg}, we find $|2\pm K| < 2$, where the upper sign corresponds
to $x_1=0$ and the lower one to $x_1=1/2$. Solving the inequality, the point
at $x_1=0$ is always unstable since $K$ is positive. Considering $x_1=1/2$,
we see that for $K<4$ the fixed point is elliptic and for $K>4$ it is hyperbolic.
These two cases are shown in Fig.~\ref{ps-sm}, using $K=3.28$ in (a) and
$K=4.23$ in (b). For $K=3.28$ the fixed point is stable, while for $K=4.23$
trajectories trace two hyperbolic branches inside the main KAM torus. For the
values of $K$ used in this work the destroyed KAM curves form Cantor sets that
eventually trap trajectories for a long time. This is called the sticky effect
and is characterized by a power-law decay for the RTS curves
\cite{Chir-Shep, Artuso, Zaslavsky2002, PhysRevLett.100.184101}.
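For concreteness, a minimal sketch of the iteration of map (\ref{stand-map}) with truncated Gaussian noise (Python; the truncation to $[-1,1]$ is our assumption about how the bound stated below is enforced) reads:
\begin{verbatim}
import numpy as np

def iterate_noisy_sm(x0, p0, K, D, steps, rng=np.random.default_rng(1)):
    x, p = x0, p0
    traj = np.empty((steps, 2))
    for n in range(steps):
        # Gaussian xi_n with zero mean, variance 0.22, clipped to [-1, 1]
        xi = np.clip(rng.normal(0.0, np.sqrt(0.22)), -1.0, 1.0)
        p = p + (K/(2*np.pi))*np.sin(2*np.pi*x) + D*xi/(2*np.pi)
        p = (p + 0.5) % 1.0 - 0.5    # mod 1, represented in [-1/2, 1/2)
        x = (x + p) % 1.0            # mod 1
        traj[n] = x, p
    return traj

traj = iterate_noisy_sm(0.1, 0.0, K=3.28, D=1e-3, steps=10**5)
\end{verbatim}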
\begin{figure}[!b]
\centering
\includegraphics*[width=0.97\columnwidth]{Fig1.pdf}
\caption{(Color online) Phase-space dynamics for (a) $K=3.28$, and (b)
$K=4.23$ using 100 initial conditions and $2\times 10^5$ iterations.
The fixed points (red circles) localized at $[x,p]=[1/2,0]$ in
the center of the figures are (a) elliptic and (b) hyperbolic.
{Red lines define the border of the chaotic recurrence region
to determine the RTS, {\it i.e.} above the upper line and below the lower
line.}}
\label{ps-sm}
\end{figure}
To generate an ensemble of uncorrelated random variables $\xi_n$ we choose:
(i) the Gaussian (G) distribution [see green plot in Fig.~\ref{dist}(a)] with
$\langle \xi_n\rangle=0$ and variance $0.22$, truncated to guarantee that
$-1\le\xi_n \le1$, and (ii) the uniform (U) distribution (see blue plot) for
the same range, for which all values of $\xi_n$ have the same probability of
being picked up. To obtain a correlated noise for $\xi_n$ we consider the
standard map defined by
\renewcommand{\arraystretch}{1.4}
\begin{equation}
\label{sm-aux}
\begin{array}{rcll}
I_{n+1}&=& I_{n} +
\dfrac{K}{2\pi}\sin(2\pi \hspace{0.02cm} \theta_{n}), \\
\theta_{n+1}&=& \theta_{n} + I_{n+1}, \\
\end{array}
\end{equation}
\noindent where we define the momentum $I_n$ in the interval $[-1:1]$
and $\theta_n$ in $[0:1]$. Using $K=2.6$, a case already studied before
\cite{MANCHEIN2013, RMS92}, we obtain a mixed phase space in which KAM curves
and a huge stochastic region coexist. For a given initial condition,
the sequence of $I_n$ and $\theta_n$ near homoclinic points generates a
sample of values that behaves as a time-correlated random variable
\cite{Lichtenberg}. Setting $\xi_n = I_n$, this correlated variable is used
here to perturb the map (\ref{stand-map}). In such cases, the time
correlation can be determined from $C(t)=\langle \xi(t'+t)\,\xi(t') \rangle$.
For a fully chaotic phase space an exponential decay is expected,
$C(t)\propto e^{-b\,t}$. However, for a mixed
phase space the correlation $C(t)$ presents a power-law tail \cite{MEISS83,
KARNEY83, Chir-Shep, Artuso, Manchein2009}, as showed in Fig.~\ref{dist}(b)
for $10^7$ iterations of map (\ref{sm-aux}). While
$C(t) \propto t^{-\delta}$, the RTS curve for this mixed phase space follows
$P(\tau) \propto \tau^{-\gamma}$, with $\gamma \sim 1.60$ \cite{RMS92},
where $\delta$ and $\gamma$ are related by $\delta = \gamma - 1$
\cite{KARNEY83,Chir-Shep,Manchein2009}. The distribution of $\xi_n = I_n$,
which is PLC, is displayed in Fig.~\ref{dist}(a) by the yellow plot.
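A sketch of the generation of the PLC noise via map (\ref{sm-aux}), together with an empirical estimate of $C(t)$ (Python; the mod operations implement the intervals stated above), is:
\begin{verbatim}
import numpy as np

def plc_noise(steps, K=2.6, theta0=0.3, I0=0.0):
    # xi_n = I_n from the auxiliary standard map; I_n is kept in [-1, 1]
    # and theta_n in [0, 1], as stated in the text.
    theta, I = theta0, I0
    xi = np.empty(steps)
    for n in range(steps):
        I = I + (K/(2*np.pi))*np.sin(2*np.pi*theta)
        I = (I + 1.0) % 2.0 - 1.0
        theta = (theta + I) % 1.0
        xi[n] = I
    return xi

# Empirical autocorrelation C(t), expected to develop a power-law tail
# for the mixed phase space at K = 2.6.
xi = plc_noise(10**6)
xi = xi - xi.mean()
C = np.array([np.mean(xi[:-t]*xi[t:]) for t in range(1, 1000)])
\end{verbatim}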
\begin{figure}[!t]
\centering
\includegraphics*[width=0.97\columnwidth]{Fig2.pdf}
\caption{(Color online) (a) Probability distributions used to generate the
random values $\xi$. Gaussian (green), uniform (blue) and PLC noise
(yellow), all cases using values inside the interval $[-1:1]$. In (b) we
show in logarithm scale the correlation for the variable $I_n$ of a
standard map for $10^7$ time iterations using $K=2.6$ that follows a
power-law relation $C(t)\propto t^{-\delta}$.}
\label{dist}
\end{figure}
\subsection{Stability condition for the central point}
\label{analitic}
Using $D\neq 0$ in map (\ref{stand-map}), no periodic orbits exist anymore.
However, for {\it one} iteration it is possible to analyze the stability of
the ``fixed point'' under the influence of $D$. The ``fixed point'' at
$[x_1,p_1]=[1/2,0]$ is called the {\it central point}, and note that it is not a
fixed point anymore since the noise changes its location at each iteration. Even
so, a one-step stability analysis allows us to demonstrate that the presence of
small noise $D\xi_n/(2\pi)$ does not change the stability condition of the
central point. {In fact, for each iteration we are analyzing the one-step
stability of a ``fixed point'', the central point, whose location changes
at every iteration.} The {one-step} Jacobian matrix $\mathbf{J}_p$ of the
standard map (\ref{stand-map}) is given by:
\renewcommand{\arraystretch}{1.4}
\begin{equation}
\mathbf{J}_p =
\begin{bmatrix}
1 & K\cos(2\pi x_n) \\
1 & 1+K\cos(2\pi x_n)\\
\end{bmatrix}.
\label{jac1}
\end{equation}
The position of the central point is now
$p_1=0$, $x_1=-\frac{1}{(2\pi)}\arcsin[(D\xi_n)/K]$, and
using $\cos[\arcsin(x)] = \sqrt{1-x^2}$, the Jacobian $\mathbf{J}_p$
becomes
\renewcommand{\arraystretch}{1.6}
\begin{equation}
\mathbf{J}_p =
\begin{bmatrix}
1 & \pm K\sqrt{1-(D\xi_n)^2/K^2} \\
1 & 1\pm K\sqrt{1-(D\xi_n)^2/K^2} \\
\end{bmatrix}.
\label{jac2}
\end{equation}
with {eigenvalues
$h_{\pm}=\mathrm{Tr}(\mathbf{J}_p)/2\pm \sqrt{(\mathrm{Tr}(\mathbf{J}_p)^2-4)}/2$,
where} the trace {is given by}
\begin{equation}
\mathrm{Tr}(\mathbf{J}_p) = 2 \pm \sqrt{K^2-(D\xi_n)^2}.
\label{trace}
\end{equation}
{Requiring the eigenvalues of the Jacobian matrix to lie on the unit
circle, $|h_{\pm}|=1$, implies the stability condition}
$|\mathrm{Tr} (\mathbf{J}_p)|<2$. Applying this condition to the upper
sign, again we have only unstable points for any value of $(D\xi_n)^2$.
Considering the lower sign and $K=3.28$ and $K=4.23$, the stability
condition for each case remains unaltered for $|D\xi_n| \le 1$, values
that will be used in this work. {Therefore, all considered noise
intensities are not strong enough to change the stability condition
for the values of $K$ used here.}
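This conclusion can be checked numerically in a few lines (Python):
\begin{verbatim}
import numpy as np

# One-step trace of the Jacobian at the central point, lower sign.
def trace_central(K, D_xi):
    return 2.0 - np.sqrt(K**2 - D_xi**2)

for K in (3.28, 4.23):
    d = np.linspace(-1.0, 1.0, 2001)
    Tr = trace_central(K, d)
    print(K, np.abs(Tr).min(), np.abs(Tr).max())
# K = 3.28: |Tr| stays in [1.12, 1.28] < 2 -> central point elliptic
# K = 4.23: |Tr| stays in [2.11, 2.23] > 2 -> central point hyperbolic
\end{verbatim}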
\section{Phase-space dynamics}
\label{psd}
Plotting trajectories in phase space allows us to identify regions of chaotic
and regular motion for the standard map. When noise is included, initial
conditions chosen inside the stochastic sea can transpose the barrier of
tori and penetrate them. In Fig.~\ref{ps-noise} the phase-space dynamics
are shown for $K=3.28$ in (a)-(i) and $K=4.23$ in (j)-(r). The case with
Gaussian noise is displayed in the first line, (a)-(c) and (j)-(l), with
uniform noise in the second line, (d)-(f) and (m)-(o), and with PLC noise
in the third line, (g)-(i) and (p)-(r). Compared to Fig.~\ref{ps-sm}(a),
for $D=10^{-5}$ only some regular trajectories inside the main torus are
affected, as we can see in Fig.~\ref{ps-noise}(a), (d) and (g) for $K=3.28$.
The most emblematic case is $D=10^{-3}$, for which we have a mixture of
completely penetrated tori and other regions still inaccessible. The
increased density of points inside the island, compared to the case $D=0$, indicates
that stronger sticky motion is expected (this will be shown later). For these
cases we can observe that the portion of phase space accessible for the
trajectory depends on the distribution used. Using the uniform distribution
with $D=10^{-3}$, the trajectories can access most of the phase space, while
for the Gaussian distribution there are a lot of regions not visited for
same noise intensity. This means that, using distributions for which extreme
values of $|\xi_n|$ are most likely to occur, it is possible to access
a larger portion of the phase space in the same time interval.
\begin{widetext}
$\quad$
\begin{figure}[!t]
\centering
\includegraphics*[width=1.0\columnwidth]{Fig3.pdf}
\caption{(Color online) Phase space of the map (\ref{stand-map}) for the
interval $(x_{\mbox{\tiny min}}, x_{\mbox{\tiny max}})=(0.25,0.75)$,
$(p_{\mbox{\tiny min}}, p_{\mbox{\tiny max}})=(-0.35,0.35)$ with $K=3.28$
[(a)-(i)] and $K=4.23$ [(j)-(r)] using different intensities of noise
indicated above each column. The first line displays the results for
Gaussian noise, the second line for uniform noise and the third one for PLC
noise. In all simulations we used 100 initial conditions
and iterated the map $3\times 10^5$ times.}
\label{ps-noise}
\end{figure}
\end{widetext}
For $D=10^{-1}$, an apparently fully chaotic motion is observed, at least
from the phase-space dynamics analysis. However, from the analytical
results, we know that the stable central point is still there so that
some reminiscent of regular motion is expected. It is worthwhile mentioning
here that for a better visualization of the phase-space dynamics we use
shorter time iterations when compared to results from Secs.~\ref{recur},
\ref{les} and \ref{rate}. However, conclusions made above about the
penetration of island should not be substantially changed for longer
iterations. Besides, results from the next sections corroborate these
findings.
\section{Recurrence Time Statistics}
\label{recur}
In this section we analyze the RTS for the system (\ref{stand-map}) using
different intensities $D$ for each distribution. The RTS is determined
numerically by counting the iteration times $\tau$ that the trajectory stays
outside of the recurrence region (defined inside the chaotic region). The
existence (or not) of the sticky motion is
recognized by the cumulative distribution $P_{\text{cum}}(\tau)$ defined by
\begin{equation}
P_{\text{cum}}(\tau) \equiv \displaystyle \sum_{\tau'=\tau}^{\infty} P(\tau').
\end{equation}
\noindent The quantity $P_{\text{cum}}(\tau)$ is a traditional method to
quantify stickiness in Hamiltonian \cite{PhysRevLett.100.184101,Artuso,
AltmannKantz,Shep2010,RMS91} and conservative three-dimensional systems
\cite{RMS92}, since events with long times $\tau$ in the RTS are associated with
times for which the trajectory was trapped to the nonhyperbolic components of
the phase space. {$P_{\text{cum}}(\tau)$ can be directly related with escape
time distributions by applying the ergodic theory of transient chaos in systems
with leaks \cite{TelPRL100, TelPRE79}}.
Although there is no general rule, algebraic decays of
$P_{\text{cum}}(\tau)$ for at least two decades indicate the existence of sticky
motion. When noise is added in Hamiltonian systems with mixed phase space, a
slow additional algebraic decay of RTS curves \cite{AltmannPRL, Chir-Shep}
and survival probability inside domains near the fixed point
\cite{Ketzmerick2012} was observed, which means that the trapping around
regular islands is enhanced due to trajectories that wander inside the
islands. This result was also found in a two-dimensional conservative map coupled
to an extra dimension without noise. In this case, trajectories remain trapped to
the extra dimensional action variable, and for very long times no recurrence occurs,
resulting in plateaus in the RTS curves \cite{RMS92}. In this section, our focus is
to study the relation between the enhanced trapping due to $D$ and the kind of distribution
used, as well as the influence of the stability condition of the central point of the
standard map.
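A minimal sketch of this procedure (Python; the border value \texttt{p\_border} is illustrative, standing in for the red lines of Fig.~\ref{ps-sm}) is:
\begin{verbatim}
import numpy as np

def rts(traj_p, p_border=0.3):
    # A recurrence is counted each time the trajectory re-enters the
    # recurrence region |p| > p_border; tau is the number of iterations
    # spent outside it since the last visit.
    inside = np.abs(traj_p) > p_border
    taus, t_out = [], 0
    for flag in inside:
        if flag:
            if t_out > 0:
                taus.append(t_out)
            t_out = 0
        else:
            t_out += 1
    taus = np.sort(np.asarray(taus))
    tau_axis = np.unique(taus)
    # P_cum(tau) = fraction of recurrence times tau' >= tau
    P_cum = 1.0 - np.searchsorted(taus, tau_axis, side='left')/taus.size
    return tau_axis, P_cum
\end{verbatim}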
The RTS plots for $K=3.28$ are presented in Fig. \ref{rts1}(a)-(c).
{For the recurrence box we use the chaotic region, displayed in
Fig.~\ref{ps-sm}. It can be shown that our results are essentially
independent on the choice of the recurrence region, as long it is located
inside the chaotic region \cite{SalaArtusoManchein}.}
For $D=0$ we observe the usual algebraic decay {$P_{\text{cum}}(\tau) \propto
\tau^{-\gamma}$} with $\gamma=1.55$, {indicating the well known sticky motion.}
For $D \ge 10^{-2}$ no events with long recurrence times exist anymore, and the
characteristic long tail of the RTS gives way to an exponential decay, a characteristic
of ergodic systems. The enhanced trapping, characterized by a slower algebraic decay
($\varepsilon=0.65$), is present for $D$ inside the interval $[10^{-5}:10^{-3}]$
and for all distributions. This is a consequence of the trapped motion inside the
island inherited from the $D=0$ case, as observed from the larger density of points in Fig.
\ref{ps-noise}(b), (e) and (h). Since the trajectory is inside the island, there
is a probability that a sequence of $D\xi_n$ occurs which keeps the trajectory
trapped, so that long recurrence times are reached. The slower decay of the
RTS curves means a decrease in the number of recurrences in this interval.
Looking at Fig.~\ref{rts1}(c), the case for which a PLC noise was used, even
for $D=1$ a power-law regime is obtained for the RTS curve, which does not occur
for other distributions. The decay follows $P_{\text{cum}}(\tau) \propto
\tau^{-\beta}$, with $\beta = 2.0$, which characterizes a superdiffusive motion
on phase space. It is also interesting to note that for $D=10^{-3}$ and $D=10^{-2}$
in Fig.~\ref{rts1}(c) there is an exponential decay for long times and, increasing
the intensity $D$, the algebraic decay is recovered. This suggests that the
dynamics of the auxiliary map (\ref{sm-aux}) has great influence on the dynamics of
map (\ref{stand-map}) due to the relation $D\xi_n = DI_n$.
The case $K=4.23$ is displayed in Figs.~\ref{rts1}(d)-(f). By comparison with
$K=3.28$, the enhanced trapping is not that efficient. The reason for this is
that the region of sticky motion is smaller, as observed in Fig.
\ref{ps-noise}(k), (n) and (q). Besides that, there is a hyperbolic fixed point
inside the main KAM torus forcing the trajectory to stay away from the center.
In this case we do not find algebraic decay for RTS curves when using a PLC
noise for larger values of $D$. Thus the sticky motion inherited from the PLC noise is
not able to sustain significant sticky motion in the map (\ref{stand-map})
when the central point is unstable. Another important conclusion is that the
RTS curves obtained for $K=3.28$, as well as for $K=4.23$, do not present
relevant changes using Gaussian distribution or uniform distribution.
\begin{widetext}
$\quad$
\begin{figure}[!t]
\centering
\includegraphics*[width=0.99\columnwidth]{Fig4.pdf}
\caption{(Color online) Cumulative distribution $P_{\text{cum}}(\tau)$ of
recurrence times $\tau$ to a region in the chaotic component of the phase
space using Gaussian, uniform and PLC noise distributions, as indicated
by the title at the top of each panel, with $K=3.28$ and $K=4.23$ for the
first and second rows of the figure, respectively. {The small algebraic
decay $\varepsilon=0.65$ is related to the trapped motion, or enhanced
trapping inside the island, and $\beta=2.0$ to the superdiffusive motion.}}
\label{rts1}
\end{figure}
\end{widetext}
\section{Lyapunov exponent}
\label{les}
The quantity which measures the average divergence of nearby trajectories
is the Lyapunov exponent (LE) $\lambda$, which provides a computable measure
of the degree of stochasticity of a trajectory. A numerical method for
computing all $2N$ LEs (namely, the Lyapunov spectrum) of a system with $N$
degrees of freedom can be found in \cite{bggs80,wolf85}. This method includes
the Gram-Schmidt reorthonormalization procedure. For a randomly perturbed
system the technique to compute the Lyapunov spectrum is similar: we
just replace the deterministic trajectory $\boldsymbol{x}$ by the perturbed
sequence $\boldsymbol{x}^{(p)}$ \cite{CRUTCHFIELD-82,Mayer-Kress1981}.
Considering the system studied in this work, since the noise
$\xi_n$ is independent of $x_n$ and $p_n$, the fluctuations will not
affect the angles between the expansion and contraction directions in the
tangent space, known as the angles between Lyapunov vectors
\cite{beims-gallas16-1,beims-gallas16-2}, but only the probability
distributions of the variables.
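A minimal Python sketch of this procedure for the present two-dimensional map is given below. Since the noise is additive and state-independent, the Jacobian is that of the deterministic map evaluated along the perturbed trajectory; the map form is the same illustrative one assumed in the previous listings.
\begin{verbatim}
import numpy as np

def lyapunov_spectrum(K, D, n_iter, rng, x0=0.1, p0=0.0):
    # Both LEs of the noisy standard map via the tangent map with
    # QR (Gram-Schmidt) reorthonormalization at every step.
    x, p = x0, p0
    Q = np.eye(2)
    logs = np.zeros(2)
    for _ in range(n_iter):
        c = K * np.cos(2 * np.pi * x)   # derivative of the kick term
        p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)
             + D * rng.standard_normal() + 0.5) % 1.0 - 0.5
        x = (x + p) % 1.0
        J = np.array([[1.0 + c, 1.0],   # ordering (x, p); det J = 1
                      [c,       1.0]])
        Q, R = np.linalg.qr(J @ Q)
        logs += np.log(np.abs(np.diag(R)))
    return logs / n_iter                # (lambda_1, lambda_2)
\end{verbatim}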
\begin{figure}[!b]
\centering
\includegraphics*[width=0.98\columnwidth]{Fig5.pdf}
\caption{(Color online) The largest LE $\lambda_1$ for the map
(\ref{stand-map}) with $D=0$ using a grid of $10^3 \times 10^3$ initial
conditions for (a) $K=3.28$ and (b) $K=4.23$. For each trajectory,
$\lambda_1$ was calculated using $2\times 10^6$ time iterations.}
\label{ps0-lyap}
\end{figure}
To identify changes in the dynamics of the standard map with noise, we
divide the phase space of map (\ref{stand-map}) for $K=3.28$ and $K=4.23$ into a
grid with $10^3 \times 10^3$ points. Each point is an initial condition
$[x_0,p_0]$. For trajectories starting at each combination of $[x_0,p_0]$, the
largest LE $\lambda_1$ was determined using $2 \times 10^6$ time iterations and
is encoded by a gradient of colors in Fig. \ref{ps0-lyap} (see the color bar).
Clearly, we observe that initial conditions inside the regular islands have
$\lambda_1\sim 0.0$ (yellow points), with the exception of the unstable point in Fig.
\ref{ps0-lyap}(b) with small positive values of $\lambda_1$ (red points).
Initial conditions related to the chaotic trajectory have larger values of
$\lambda_1$ (blue and cyan points).
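A coarse version of this map of $\lambda_1$ can be sketched as follows (a $10^2\times10^2$ grid and fewer iterations than in the text, for illustration only); for the noiseless case a single tangent vector with stepwise renormalization suffices.
\begin{verbatim}
import numpy as np

def lambda1(K, x0, p0, n_iter=200000):
    # Largest LE of the noiseless standard map from a single
    # tangent vector, renormalized at every step.
    x, p = x0, p0
    v = np.array([1.0, 0.0])
    acc = 0.0
    for _ in range(n_iter):
        c = K * np.cos(2 * np.pi * x)
        p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)
             + 0.5) % 1.0 - 0.5
        x = (x + p) % 1.0
        v = np.array([(1.0 + c) * v[0] + v[1], c * v[0] + v[1]])
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm
    return acc / n_iter

grid = np.array([[lambda1(3.28, x0, p0)
                  for x0 in np.linspace(0.0, 1.0, 100)]
                 for p0 in np.linspace(-0.5, 0.5, 100)])
\end{verbatim}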
\begin{widetext}
$\quad$
\begin{figure}[!t]
\centering
\includegraphics*[width=0.97\columnwidth]{Fig6.pdf}
\caption{(Color online) The largest LE $\lambda_1$ for the map
(\ref{stand-map}) with $K=3.28$ [(a)-(i)], encoded by the same color gradient
used in Fig. \ref{ps0-lyap}(a), and $K=4.23$ [(j)-(r)], encoded by the
same color gradient used in Fig. \ref{ps0-lyap}(b). The first line was
obtained using the Gaussian noise, the second line using the uniform noise and the
third line using the PLC noise. The value of $D$ for each case is indicated
above the corresponding column.}
\label{ps1-lyap}
\end{figure}
\end{widetext}
If a perturbed trajectory $\boldsymbol{x}^{(p)}$ is considered, as the intensity
$D$ of the noise increases, sensitive changes can be observed in the value of
$\lambda_1$; these are displayed in Fig. \ref{ps1-lyap}(a)-(i) for $K=3.28$ and
(j)-(r) for $K=4.23$, using the Gaussian noise in the first line, the uniform noise
in the second line, and the PLC noise in the third. The first observation is that
by increasing the values of $D$ the islands are penetrated and destroyed in all
cases. For $D=10^{-1}$ (Figs. \ref{ps1-lyap}(c), (f) and (i) for $K=3.28$ and
(l), (o) and (r) for $K=4.23$), the phase space becomes totally chaotic and the
same value of $\lambda_1$ is obtained for all initial conditions. In other
words, the phase space becomes ergodic-like and exponential decays are
expected for the RTS curves, as observed in Section \ref{recur}. The relevant
point here is to analyze how the dynamics becomes ergodic-like using
distinct distributions. The Gaussian distribution does not considerably affect
the trajectory for small values of $D$ when compared to the other distributions.
This becomes evident when comparing the yellow (or red) regions in Figs.
\ref{ps1-lyap}(a), (d), (g) [the same for (j), (m) and (p)], which display the
case $D=10^{-5}$ for the Gaussian, uniform, and PLC noise, respectively. The
amount of yellow (red) points is larger (smaller) in Figs. \ref{ps1-lyap}(a) and
(j). Besides, it is interesting to observe that when trajectories penetrate the
islands due to noise, they tend to stay close to the hyperbolic points of the tori,
making the dynamics more unstable (yellow $\to$ red).
Looking at the case $D=10^{-3}$, Figs. \ref{ps1-lyap} (b), (e) and (h) for
$K=3.28$ and (k), (n) and (q) for $K=4.23$, it is possible to note that there
are no more initial conditions that lead to stable trajectories (yellow points).
Using the uniform distribution [Figs. \ref{ps1-lyap}(e) and (n)], higher values
of $\lambda_1$ ($\ge 0.5$) are obtained inside the regular islands of the
noiseless case. In addition, looking at the case $K=4.23$, an important result
is the fast increase of $\lambda_1$ for initial conditions around the central
point, while for the stable case $K=3.28$ the neighbourhood of the central point is
kept regular for reasonable values of $D$.
To finish this section, we would like to mention that for the ergodic-like case
$D=10^{-1}$, already discussed above, the values of $\lambda_1$ are {\it
smaller} than those obtained for the chaotic trajectory for $D=0$, which is
represented in cyan. In other words, instead of increasing the values
of $\lambda_1$ of the chaotic trajectory, the random distributions allow the
total penetration inside the islands, and traces of the regular motion are
still visible in the asymptotic values of $\lambda_1$. Thus, the phase space of
the ergodic-like case, which has an exponential decay of the RTS and is totally
chaotic, is still influenced by some properties of the destroyed islands. This
is true for correlated and uncorrelated distributions and independent of the
presence of a stable or unstable central point.
\begin{widetext}
$\quad$
\begin{figure}[!t]
\centering
\includegraphics*[width=0.99\columnwidth]{Fig7.pdf}
\caption{(Color online) {Percentage of the phase-space occupied area
$A(\%)$
as a function of time for the three distributions and (a)-(c) $K=3.28$ and
(d)-(f) $K=4.23$ for some values of $D$. The inset of each case shows the
region visited by the trajectory. Blue points represent the visited region
for $D=0$, blue$+$yellow points represent the area visited for $D=10^{-5}$
and blue$+$yellow$+$red is the region visited by the trajectory for
$D=10^{-1}$, all cases after $10^8$ time iterations.}}
\label{ocp}
\end{figure}
\end{widetext}
\section{Phase-space occupation}
\label{rate}
The last analysis presented in this work is the occupation rate of the phase
space as the intensity $D$ increases. In Fig. \ref{ocp}, the percentage of
visited area $A(\%)$ of the phase space is displayed as a function of the number
of iterations $n$ for some values of $D$. Using $D=10^{-1}$, it is possible to
access $100\%$ of the phase space after $n \approx 7 \times 10^6$ time
iterations for any distribution (see the red curves in all panels of
Fig.~\ref{ocp}).
For $D=10^{-3}$, the whole phase space is occupied only for $K=4.23$ [Fig.
\ref{ocp}(d)-(f)], while for $K=3.28$ [Fig. \ref{ocp}(a)-(c)] this is possible
only when the uniform distribution is used, and the whole phase space is visited
after $n \approx 2.8\times 10^7$ iterations [see the green curve in Fig.
\ref{ocp}(b)]. In this case, the abrupt increase of $A(\%)$ means that the
penetration inside the island is also almost abrupt, not asymptotic.
When the cases $D=0$ and $D=10^{-5}$ are compared, small differences are
observed, and we need to look at the insets that display the visited area of the
phase space for different values of $D$. In all insets, blue points represent
the region visited by the trajectory for $D=0$ and blue$+$yellow points
represent the region visited for $D=10^{-5}$, both cases after $10^8$
iterations. Therefore, the case $D=10^{-5}$ allows trajectories to access the
high-order resonances located around the main torus, which is prohibited for
$D=0$, resulting in a small difference between the areas occupied in each case.
For some intervals of time, the trajectory can be trapped in these small islands,
and the visited area for $D=10^{-5}$ (yellow curves) can be smaller than for the
case $D=0$ (blue curves), as we can see in Figs. \ref{ocp}(d) for $n \approx
1.6\times 10^{6}$ and (f) for $n \approx 1.8\times 10^7$. {It is important
to emphasize that these results are obtained using the initial condition
$x_0=0.159146$, $p_0=-0.470110$, localized in the chaotic sea. If other
values are used, the curves may change slightly but the main conclusions
remain unaltered.}
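A sketch of this measurement is given below; the grid resolution used to define ``visited'' cells is an assumption of the sketch, while the initial condition is the one quoted above.
\begin{verbatim}
import numpy as np

def occupied_area(K, D, n_iter, rng, n_cells=1000,
                  x0=0.159146, p0=-0.470110):
    # A(%) vs time: fraction of cells of an n_cells x n_cells grid
    # visited by a single noisy trajectory.
    visited = np.zeros((n_cells, n_cells), dtype=bool)
    A = np.empty(n_iter)
    count = 0
    x, p = x0, p0
    for n in range(n_iter):
        p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)
             + D * rng.standard_normal() + 0.5) % 1.0 - 0.5
        x = (x + p) % 1.0
        i = int((p + 0.5) * n_cells) % n_cells
        j = int(x * n_cells) % n_cells
        if not visited[i, j]:
            visited[i, j] = True
            count += 1
        A[n] = 100.0 * count / visited.size
    return A
\end{verbatim}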
\section{Conclusions}
\label{conc}
To summarize, we study the effects of perturbing the standard map randomly using
an additive variable $\xi_n$ that can follow a Gaussian, a uniform or a PLC
distribution. The last one was generated using the deterministic standard map
with mixed dynamics. For all distributions, the RTS demonstrates that sticky
motion is enhanced for small values of the noise intensity, namely $10^{-5}\le
D\le 10^{-4}$. The power-law exponent characterizing this decay is $\varepsilon=
0.65$. Here the noise tends to drive trajectories close to the hyperbolic points
of the rational tori of the noiseless case. For intermediate values, $10^{-3}\le D\le 10^{-2}$,
power-law decays with the same $\varepsilon$ are observed at earlier times, but
followed asymptotically by exponential decays. This reflects the fact that larger
noise intensities allow an earlier penetration of the island
and the time correlations decay faster, {\it i.e.}, the system
becomes ergodic-like at earlier times when compared to smaller noise
intensities.
For $D=10^{-1}$, all (with one exception) RTS curves decay exponentially and an
ergodic-like motion is expected. However, the largest Lyapunov exponent is
smaller when compared to the Lyapunov exponent of the noiseless case. This
means that reminiscences of the sticky motion due to the destroyed islands are
still affecting the chaotic dynamics. The mentioned exception occurs when the PLC
noise is used and $K=3.28$. In this case we found an algebraic decay with
$\beta=2.0$, which represents a superdiffusive motion through the island.
In fact, these results make clear that the most relevant
quantity allowing penetration of the island is the standard deviation of the
distributions.
Another issue considered in this work was the stability condition of the
central point of the phase space. To compare the two possible conditions, we
study the influence of noise on the standard map for two different values of
the nonlinearity parameter: $K=3.28$, for which the central point is stable,
and $K=4.23$, for which the central point is unstable. The RTS curves from
Section \ref{recur} demonstrate that, in the presence of noise, the two cases
behave similarly, but the transition to stochasticity, when increasing $D$, is
faster for the unstable case.
To finish, we would like to relate our results to higher-dimensional systems.
Noise can be interpreted as the net effect of extra dimensions. If the dynamics
of the extra dimensions is chaotic, the Gaussian distribution is a good
description of such dynamics. If the dynamics of the extra dimensions behaves
like a conservative system with mixed phase space, then the PLC distribution
should be adequate to describe the net effect. In this context, the global
structure of regular tori in a generic $4$D symplectic map was analyzed
\cite{baecker14} and, recently, the decay of the RTS was studied to give a nice
explanation of the island penetration through one extra dimension
\cite{RMS92}.
\acknowledgments{R.M.S. thanks CAPES (Brazil) and C.M. and M.W.B. thank CNPq
(Brazil) for financial support. The authors also acknowledge computational
support from Carlos M.~de Carvalho at LFTC-DFis-UFPR.}
\section{Introduction}
\label{sec:intro}
Young star clusters are usually found embedded in the molecular clouds from which they were recently born. The surrounding gas is expelled by feedback, in the form of ultraviolet radiation, massive stellar winds from OB stars, and/or supernovae (SNe) explosions. The star clusters then lose gravitational potential, which is the most important factor in determining their dissolution into the field \citep[see e.g.,][]{1978A&A....70...57T, 1980ApJ...235..986H,1984BAAS...16..409M, 1997MNRAS.284..785G, 2000ApJ...542..964A, 2001MNRAS.323..988G, 2003MNRAS.338..665B, 2003MNRAS.338..673B, 2005ApJ...630..879F, 2006MNRAS.369L...9B, 2007MNRAS.380.1589B, 2011MNRAS.414.3036S, 2016MNRAS.460.2997L, 2017A&A...600A..49B, 2017ApJ...838..116F, 2018MNRAS.476.5341F, 2017A&A...605A.119S, 2018ApJ...863..171S, 2020IAUS..351..507S}.
In this scenario, the amount of feedback is usually assumed to be strong enough to completely disrupt the molecular cloud and, as a consequence, prevent any further star formation \citep{2011ApJ...729..133M,2010ApJ...709...27W}. Another possible scenario is positive feedback: as the injected energy and momentum push the cloud out into a shell-like structure, the corresponding density increase might locally trigger collapse and thus another star-formation event \citep{2012ApJ...744..130K}.
\citet{2017MNRAS.470.4453R} introduced the code \textsc{warpfield} (\textbf{W}inds \textbf{A}nd \textbf{R}adiation
\textbf{P}ressure: \textbf{F}eedback \textbf{I}nduced \textbf{E}xpansion, col\textbf{L}apse and \textbf{D}issolution), which models another scenario, called failed feedback, where the molecular cloud can (re)-collapse. This semianalytic 1D model for isolated massive clouds with masses $\geq10^5 \text{M}_\odot$ describes the dynamics and structure of the expanding or contracting shell due to winds, SNe, radiation pressure, and gravity. This approach allows us to explore a large range of parameters in star formation efficiency (\ensuremath{\epsilon_{SF}}\xspace), density ($n_0$), and metallicity in a reasonable amount of CPU time. A new version of the code was introduced in \citet{2019MNRAS.483.2547R}, where the treatment of the thermal evolution of the gas was improved.
30 Doradus, located in the Large Magellanic Cloud (LMC), is a massive star forming region. In its centre, the cluster NGC~2070 hosts a younger massive subcluster, R136. It appears that the older stellar population in NGC~2070 did not produce enough feedback to tear apart its parental molecular cloud, which could retain or re-accrete part of its mass and form R136 as a massive second-generation cluster. This scenario has been supported by simulations; e.g., \citet{2017MNRAS.465.1375S} showed that under dense conditions ($n \gtrsim 10^5 \ \text{cm}^{-3}$ in a cloud of $10^6$ M$_\odot$), the feedback produced by stellar winds may not be as strong as needed to disperse the cloud. There are other massive young clusters which show evidence for multiple generations of stars, e.g.\ Sandage-96 exhibits a bimodal age separation of at least 10 Myr \citep{2005ApJ...626L..49P,2007ApJ...659L..41P,2009ApJ...695..619V}, or the Orion nebula cluster with an even smaller age spread of less than 1 Myr \citep{2017A&A...604A..22B}.
The best evidence for multiple stellar generations in compact star clusters comes from observations of globular clusters \citep[see e.g.,][]{2009A&A...505..117C}. These could be the result of the (re)-collapse of gas ejected from older-generation asymptotic giant branch stars \citep{2008MNRAS.391..825D}, fast-rotating massive stars \citep{2007A&A...464.1029D} or interacting massive binaries \citep{2009A&A...507L...1D}. However, these scenarios typically predict that the second generation of stars is much less massive than the older generation, especially for a small age difference, and so they are not applicable to 30 Doradus, where both stellar populations have roughly equal mass.
We structure the paper as follows. First, in Section 2, we explain the method and the parameter space in which we develop our study. In Section 3, we investigate the 30 Doradus scenario and match the observables with a range of \textsc{warpfield} models and $N$-body simulations. In Section 4, we discuss the results, and we conclude in Section 5.
\section{Method and Initial Conditions}
\subsection{Properties of 30 Doradus}
The main cluster in the 30 Doradus region, NGC~2070, contains two stellar generations. The older population has an age of $\sim$ 3-7 Myr \citep{Brandl1996,Walborn1997,Selman1999,Sabbi2012,Cignoni2015}, while the younger population, with an age of $\sim$ 0.5-2 Myr, appears to be more concentrated towards the centre \citep{Massey1998,Selman1999,Sabbi2012,Cignoni2015,2020MNRAS.499.1918B,2022arXiv220211080B} and is called R136, formally known as RMC 136. The masses of the clusters are poorly constrained: R136 has a mass in the range $2.2\times10^4$-$1\times10^5$ M$_\odot$ \citep{Hunter1995,Andersen2009,Cignoni2015} and the whole cluster NGC~2070 $6.8\times10^4$-$5\times10^5$ M$_\odot$ \citep{Selman1999,Bosch2001,Bosch2009,Cignoni2015}. Ionized gas is observed in this zone, forming bubbles that contain hot, X-ray-emitting gas \citep{2006AJ....131.2164T}. \citet{Pellegrini2011}, using [SII]/H$\alpha$ observations, showed that the H II region around NGC~2070 has the shape of a hemispherical bowl. The whole sphere has a radius of 40-60 pc and R136 is offset by approximately 12 pc from its centre. Measured with R136 as the centre, the shell radius is $\sim$ 30-70 pc. We summarize these values in Tab. \ref{tab:tabpar}.
\subsection{Modeling approach}
Our goal is to find the cloud-cluster parameter space capable of reproducing those observables of 30 Doradus that are sensitive to cluster evolution. To address this problem, we study a range of molecular cloud and cluster masses, resulting in different \ensuremath{\epsilon_{SF}}\xspace. The evolution of the clouds is followed using the code \textsc{warpfield} 2.1 \citep{2017MNRAS.470.4453R}. As the clouds expand, the gravitational potential changes, which is introduced into the $N$-body calculation of the stellar dynamics as a time-evolving external potential. The dynamics of the star clusters is followed using the code \textsc{Nbody6++GPU} \citep{2015MNRAS.450.4070W}, modified for our purpose in order to read in information from the \textsc{warpfield} code. We note that \textsc{warpfield} calculates the overall feedback produced by a star cluster located in the centre of the cloud. The energy injected by the stars into the cloud produces its expansion, resulting in one of the following outcomes:
\begin{enumerate}[left= 0pt]
\item The cluster injects enough feedback to disperse the cloud.
\item The cluster does not inject enough feedback and, after an initial period of expansion, gravity takes over, the cloud collapses again, and a new stellar generation is born.
\item The subsequent evolution can follow (i) or (ii), which means the process could be repeated multiple times leading to the formation of multiple stellar populations until the cloud is finally dispersed.
\end{enumerate}
Using \textsc{warpfield}, we create clouds with masses of $3.16~\times~10^{5}$~ M$_\odot$ following uniform profiles, which host star clusters of different masses. From the observational data, R136 appears to be more massive than the old cluster. We fix the new cluster to have a value of \ensuremath{\epsilon_{SF}}\xspace = 0.20 and, for the older stellar component, we try \ensuremath{\epsilon_{SF}}\xspace between 0.01 and 0.10. Our star clusters follow Plummer density profiles with $R_\text{pl}$ = 1 pc, and the stellar mass is adjusted to achieve the required \ensuremath{\epsilon_{SF}}\xspace. We randomly create 10 different Plummer distributions using \textsc{mcluster} \citep{2011MNRAS.417.2300K}, following a \citet{2001MNRAS.322..231K} initial mass function (IMF). The masses are randomly located along the different Plummer distributions to obtain two samples: 10 clusters with mass segregation and 10 non-segregated ones. To be consistent with the \textsc{warpfield} calculations, we do not use the stellar evolution features of \textsc{Nbody6++GPU}; instead, massive stars are evolved until they reach their maximum ages according to \citet{2012A&A...537A.146E}, as \textsc{warpfield} does. The size of the cluster is a free parameter for \textsc{warpfield}, which determines the radial 1D feedback. We use $R_\text{pl}$ = 1 pc, as is commonly assumed for young clusters in the mass range used in this work \citep{2016A&A...586A..68P}.
\begin{table}
\caption{Summary of observational parameters to match with our models.}
\label{tab:tabpar}
\centering
\begin{tabular}{|c|c|}
\hline
Observable & Value \\ \hline\hline
Age first stellar generation & 3-7 Myr \\
Age second stellar generation & 0.5-2 Myr \\
Mass second stellar generation & $2.2\times10^4$-$1\times10^5$ M$_\odot$ \\
Total mass NGC~2070 & $6.8\times10^4$-$5\times10^5$ M$_\odot$\\
Shell radius & 30 - 70~pc \\
\hline\hline
\end{tabular}
\end{table}
From the \textsc{warpfield} outputs, we obtain the cloud boundary at different times. These boundary conditions set the initial conditions at the photoionized/photodissociation region/cloud interface. Using {\sc cloudy} \citep{2017cloudy}, these initial conditions result in a radial density profile for every time step. To calculate the potential and forces, we use Poisson's equation for each of the snapshots:
\begin{equation}
\nabla^2 \phi(r) = 4\pi G \rho(r),
\end{equation}
where $\phi(r)$ is the radial gravitational potential produced by the cloud, $G$ the gravitational constant and $\rho(r)$ the radial density profile obtained from the {\sc cloudy} reduction. From the potential calculation, we obtain the radial force $F(r)$ as:
\begin{equation}
F(r)=-\frac{d}{dr}\phi(r).
\end{equation}
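In practice, for a tabulated, spherically symmetric density profile, both quantities follow from the enclosed mass. A minimal Python sketch is given below (with $G$ in units of pc\,(km/s)$^2$\,M$_\odot^{-1}$; the quadrature scheme is an illustrative choice, not the exact implementation used here):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 4.300917e-3   # pc (km/s)^2 / Msun

def radial_potential_force(r, rho):
    # Spherically symmetric solution of Poisson's equation for a
    # tabulated density profile rho(r) (e.g. the CLOUDY output).
    m_enc = 4.0 * np.pi * cumulative_trapezoid(rho * r**2, r,
                                               initial=0.0)
    outer = cumulative_trapezoid(rho * r, r, initial=0.0)
    r_safe = np.maximum(r, r[1])         # avoid division by r = 0
    # phi(r) = -G M(<r)/r - 4 pi G int_r^R rho(r') r' dr'
    phi = -G * m_enc / r_safe - 4.0 * np.pi * G * (outer[-1] - outer)
    force = -G * m_enc / r_safe**2       # F(r) = -dphi/dr
    return m_enc, phi, force
\end{verbatim}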
For each case, we use 10 different statistical realisations of the star cluster following a Plummer density distribution, all starting in virial equilibrium, including the gas.
For embedded star clusters the virial ratio $\alpha$ is used, which is defined as:
\begin{equation}
\alpha=\frac{T}{|\Omega|},
\end{equation}
where $T$ refers to the total kinetic energy and $\Omega$ is the total gravitational potential. A value of $\alpha=0.5$ means the embedded star cluster is in virial equilibrium, $\alpha<0.5$ means contraction and $\alpha>0.5$ means expansion. After an exploration of different $\alpha$ states for the second generation of stars, we report results for $\alpha=0.3$, which closely reproduces the observations of R136 presented by \citet{2021A&A...649L...8K} (hereafter K2021).
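A minimal sketch of the virial-ratio computation and of the velocity rescaling applied to the clusters is given below; for simplicity it includes only the star-star potential, whereas in our setup $\Omega$ also contains the contribution of the gas potential.
\begin{verbatim}
import numpy as np

G = 4.300917e-3   # pc (km/s)^2 / Msun

def virial_ratio(m, pos, vel):
    # alpha = T/|Omega| for an N-body snapshot (star-star terms only).
    T = 0.5 * np.sum(m * np.sum(vel**2, axis=1))
    Omega = 0.0
    for i in range(len(m) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        Omega -= G * m[i] * np.sum(m[i + 1:] / d)
    return T / abs(Omega)

def scale_to_alpha(m, pos, vel, alpha_target):
    # Rescale all velocities to the requested virial ratio,
    # e.g. alpha_target = 0.3 for the 2GEN cluster.
    return vel * np.sqrt(alpha_target / virial_ratio(m, pos, vel))
\end{verbatim}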
\subsection{Analysis}
One key parameter characterizing the dynamical state of a star cluster is the level of mass segregation, which we quantify using the ``mass segregation ratio'' parameter (\ensuremath{\Lambda_\text{MSR}}\xspace) introduced by \citet{2009MNRAS.395.1449A}. It is defined as:
\begin{equation}
\ensuremath{\Lambda_\text{MSR}}\xspace=\frac{\left<l_{\text{norm}}\right>}{l_{\text{massive}}} \pm \frac{\sigma_{\text{norm}}}{l_{\text{massive}}}.
\end{equation}
Here, $l_{\text{massive}}$ is the length of the minimum spanning tree (MST) of the massive stars, while $\left<l_{\text{norm}}\right>$ and $\sigma_{\text{norm}}$ are the mean and standard deviation of the MST lengths of sets of the same number of randomly chosen stars \citep{2009MNRAS.395.1449A}. For this parameter, a value of \ensuremath{\Lambda_\text{MSR}}\xspace $ \sim $ 1 indicates no mass segregation, i.e., low and high mass stars are similarly distributed; \ensuremath{\Lambda_\text{MSR}}\xspace $ \gg $ 1 indicates strong mass segregation, i.e., massive stars are located close to each other; and \ensuremath{\Lambda_\text{MSR}}\xspace $ < $ 1 means inverse mass segregation, i.e., high mass stars are more dispersed than the rest of the cluster. We compute {\ensuremath{\Lambda_\text{MSR}}\xspace} for all stars in the system and for the first and second stellar generations separately.
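A minimal Python sketch of this estimator is shown below (the number of random sets is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(xy):
    # Total edge length of the minimum spanning tree of a set
    # of (projected) positions.
    return minimum_spanning_tree(squareform(pdist(xy))).sum()

def lambda_msr(xy_all, idx_massive, n_sets=100, rng=None):
    # Mass segregation ratio of Allison et al. (2009): mean MST
    # length of random sets of the same size, normalized by the
    # MST length of the massive stars.
    rng = rng or np.random.default_rng()
    l_massive = mst_length(xy_all[idx_massive])
    l_norm = np.array([
        mst_length(xy_all[rng.choice(len(xy_all), len(idx_massive),
                                     replace=False)])
        for _ in range(n_sets)])
    return l_norm.mean() / l_massive, l_norm.std() / l_massive
\end{verbatim}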
The procedure to match with the observable of NGC~2070 followed in this work is summarized as:
\begin{enumerate}[left= 0pt]
\item We evolve an initial $N$-body cluster (1GEN), in equilibrium with the gas (\ensuremath{\alpha_\text{1GEN}}\xspace $=0.5$), until the moment when {\textsc{warpfield}} indicates that there is a second starburst ((re)-collapse).
\item We stop the simulation and we add a second $N$-body cluster (2GEN).
\item We scale the velocities of the stars for 2GEN to get \ensuremath{\alpha_\text{2GEN}}\xspace~$=0.3$.
\item We continue the simulation until reaching 8 Myr, which is already 1 Myr older than the current age of NGC~2070.
\end{enumerate}
We use 5 different Plummer distributions to represent the 1GEN and another 5 for the 2GEN. In this study we consider two cases: both stellar generations start either with or without mass segregation. We compare the central distance of the massive stars and the Lagrangian radii for each generation. We also study the \ensuremath{\Lambda_\text{MSR}}\xspace parameter as a function of simulation time. We are looking for simulations that evolve to produce the observable values in Tab. \ref{tab:tabpar}, and \ensuremath{\Lambda_\text{MSR}}\xspace $ < 1$ when the massive stars of 1GEN and 2GEN are compared, i.e., the older massive stars being more dispersed than the younger ones, as NGC~2070 exhibits. For all parameters, we show the average of 5 different realizations.
\begin{table}
\caption{Summary of the parameter space explored in this work. The first column shows the initial density of the clouds, which initially have a mass of $3.16\times10^5$ M$_\odot$. The \ensuremath{\epsilon_{SF}}\xspace values for the embedded star clusters are shown in the second column, and the times when the clouds (re)-collapse are shown in the third column. The temporal duration over which the models match the observations is shown in the fourth column. The average separation of the central distances of massive stars between generations at the moment of the match is shown in the fifth and sixth columns for simulations with and without mass segregation, respectively.}
\label{tab:tabsims1}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n_0$&\ensuremath{\epsilon_{SF}}\xspace$_1$ & (re)-collapse & $\Delta$ time &\multicolumn{2}{|c|}{$D$[1GEN] - $D$[2GEN] (pc)}\\
($\text{cm}^{-3}$) &\ensuremath{\epsilon_{SF}}\xspace$_2$ & time (Myr) & (Myr)& SEG & NOSEG\\
\hline
6000& 0.04-0.20 & 2.63 & 0.60 & 2.93 $\pm$ 0.61& 3.37 $\pm$ 0.58\\
6000& 0.05-0.20 & 3.35 & 0.70 & 3.28 $\pm$ 0.73 & 3.60 $\pm$ 0.92\\[4 pt]
7000& 0.05-0.20 & 2.62 & 0.50 & 3.60 $\pm$ 0.39& 3.61 $\pm$ 0.73\\
7000& 0.06-0.20 & 3.38 & 0.70 & 2.78 $\pm$ 0.44& 3.76 $\pm$ 0.64\\[4 pt]
8000& 0.05-0.20 & 2.21 & 0.50 & 3.69 $\pm$ 0.46& 3.72 $\pm$ 0.65\\
8000& 0.06-0.20 & 2.62 & 0.50 & 3.09 $\pm$ 0.52& 3.19 $\pm$ 0.41\\
8000& 0.07-0.20 & 3.40 & 0.70 & 2.59 $\pm$ 0.46& 2.48 $\pm$ 0.93\\[4 pt]
9000& 0.04-0.20 & 1.76 & 0.40 &3.79 $\pm$ 0.46& 3.72 $\pm$ 0.62\\
9000& 0.05-0.20 & 1.95 & 0.40 &3.85 $\pm$ 0.44& 3.91 $\pm$ 0.45\\
9000& 0.06-0.20 & 2.22 & 0.50 & 3.13 $\pm$ 0.36& 3.10 $\pm$ 0.38\\
9000& 0.07-0.20 & 2.63 & 0.50 & 2.53 $\pm$ 0.63& 2.73 $\pm$ 0.60\\
9000& 0.08-0.20 & 3.48 & 0.70 & 1.94 $\pm$ 0.31& 3.12 $\pm$ 0.70\\[4 pt]
10000& 0.06-0.20 & 1.94 & 0.40 & 3.45 $\pm$ 0.45& 3.36 $\pm$ 0.44\\
10000& 0.07-0.20 & 2.22 & 0.40 & 2.98 $\pm$ 0.48& 3.53 $\pm$ 0.53\\
10000& 0.08-0.20 & 2.65 & 0.50 & 2.17 $\pm$ 0.59& 2.95 $\pm$ 0.51\\
10000& 0.09-0.20 & 3.61 & 0.70 & 1.97 $\pm$ 0.59& 2.86 $\pm$ 0.71\\
\hline
\end{tabular}}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth]{{WFshell_M5.5_vert}.pdf}
\caption{Shell radius evolving in time from \textsc{warpfield} models. Different colours and symbols represent the respective \ensuremath{\epsilon_{SF}}\xspace. The ranges of time when the models match with the observable are highlighted by a green thick line.}
\label{fig:rWF55}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{WARPFIELD clouds}
\label{sec:WFclouds}
We first constrain our parameter space by finding \textsc{warpfield} clouds which can reproduce the ages of the two stellar populations and the shell radius. We explore values of \ensuremath{\epsilon_{SF}}\xspace from 0.01 to 0.10 for the first cluster and a fixed \ensuremath{\epsilon_{SF}}\xspace = 0.20 for the second star cluster. These choices allow us to match the observed mass of R136. We summarize the successful \textsc{warpfield} models in Tab. \ref{tab:tabsims1}, where the first and second columns indicate the initial density and the \ensuremath{\epsilon_{SF}}\xspace pair, respectively. The shell radius evolution for each of the cases in our parameter space obtained from the \textsc{warpfield} simulations is shown in Fig. \ref{fig:rWF55}. Every panel shows a cloud with a different initial density. For every initial density we have more than one \ensuremath{\epsilon_{SF}}\xspace pair, represented with different line styles. Initially, the clouds expand due to the stellar feedback exerted by the central cluster. After that, depending on the initial density and the cluster mass, the shell radii reach a maximum followed by a (re)-collapse. The (re)-collapse times are shown in Tab. \ref{tab:tabsims1}, third column. The (re)-collapse time increases as we use larger \ensuremath{\epsilon_{SF}}\xspace for each cloud. This is expected, as a more massive cluster sustains the cloud expansion for a longer time due to higher feedback. After the second starburst, the shells expand again and, for all cases, the expansion continues until we stop the simulation. We highlight with a thicker green line the zone where the stellar ages and the shell radius match the observables (see Tab. \ref{tab:tabpar}). For all cases, the left side of the matching zone starts when the minimum shell radius is found ($\sim 30$ pc) and the right limit when the maximum 2GEN age is reached. The temporal durations ($\Delta$) of these zones are summarized in Tab. \ref{tab:tabsims1}, fourth column, with values between 0.4 and 0.7 Myr. Two clusters together produce a faster expansion of the shell, as a larger amount of feedback is added to the cloud. If less massive 2GEN clusters (\ensuremath{\epsilon_{SF}}\xspace < 0.20) are taken into account, these zones are much shorter, as the shells need more time to reach the minimum size, approaching or even passing the maximum 2GEN age. The inclusion of mass segregation does not change the \textsc{warpfield} cloud evolution, as the 1D model simply assumes all feedback is injected from the cluster centre. This is an appropriate assumption, as the shell radius exceeds the cluster radius during most evolutionary phases, except at the very end of the (re)-collapse, when a new stellar generation is formed (Domínguez et al. 2022, submitted to MNRAS).
\begin{figure*}
\includegraphics[width=0.99\textwidth]{{wradmass_bygen_M5.5_RH1.0_Q0.3_R136_SEG_NOSEG_mod}.pdf}
\caption{Central distance of massive stars ($M > 20$ $\text{M}_\odot$) vs time. The first generation (1GEN) is denoted by red filled symbols and a red solid line. The second generation (2GEN) is denoted by blue empty symbols and dashed blue lines. If the simulations start with mass segregation (SEG) or not (NOSEG) is represented by circles or squares, respectively. The Black dashed line is the shell radius. The green zone indicates where the ages of 1GEN and 2GEN match with the shell radius. The times when SNe start for each generation are denoted by orange and cyan vertical dashed lines, respectively. The information of the initial cloud density ($n_0$) and star formation pairs (\ensuremath{\epsilon_{SF}}\xspace) are given in every panel.}
\label{fig:rmasm55SEGNOSEG}
\end{figure*}
\subsection{Massive stars}
\label{sec:masstars}
In Fig. \ref{fig:rmasm55SEGNOSEG}, we compare the central distance evolving in time for massive stars ($M>20 \text{M}_\odot$) by stellar generation. For each case, the central distance is weighted by luminosity according to \citet{2011A&A...535A..56G}, to obtain more specific information about the location of the most massive stars, which dominate in brightness and in the amount of feedback. Simulations starting with mass segregation (SEG) are shown with circles and those with no mass segregation (NOSEG) are denoted by squares. The evolution of 1GEN is denoted by red filled symbols and that of 2GEN by blue empty symbols. The green zone is the matching zone described in Sec. \ref{sec:WFclouds} and the shell radius is represented by a black dashed line. We also indicate when the first SN occurs for each generation with orange and cyan dashed lines, respectively. This is $t \sim t_0 + 3.5$ Myr, where $t_0 = 0$ Myr for 1GEN, while for 2GEN $t_0$ is the time when the cloud (re)-collapses. The clusters cover a range of 1GEN masses of $1.30 \times 10^4 \ \text{M}_\odot \leq M_\text{1GEN} \leq 2.85 \times 10^4 \ \text{M}_\odot$. At the low-mass limit, the small number of stars is not enough to cover the whole IMF mass range, which is only complete for \ensuremath{\epsilon_{SF}}\xspace $\geq 0.06$. On the other hand, for 2GEN we have $M_\text{2GEN} \approx 5.90 \times 10^4$ M$_\odot$, which completes the IMF sample.
For SEG simulations, we observe that 1GEN massive stars (solid line, filled red circles) reach outer positions as the cloud expansion affects them. At the moment of the (re)-collapse, a strong gravitational potential at the centre is produced due to the high density of the cloud, and the 1GEN expansion is reversed. After the starburst and with the second cloud expansion, the massive stars are found travelling inward toward the centre. A small contraction of the distribution of the older stars is observed, which is followed by a steady expansion until the end of the simulation. We do not observe a clear effect of the SNe, as they mostly start with the clusters already in expansion. On the other hand, 2GEN massive stars (dashed line, empty blue circles) start more concentrated, as described by their initial mass configuration. For 2GEN clusters born with $\alpha=0.3$, the stellar distributions contract and stabilize in a more concentrated state compared to 1GEN, until the moment when SNe start. At 8 Myr, we observe that the expansion of these younger stars is smaller than that of the older ones: they practically ignore the change in gravitational potential and always remain more concentrated than 1GEN.
For NOSEG simulations, 1GEN massive stars (solid line, filled red squares) begin, as expected, less concentrated than in SEG clusters. We note, however, that they show a dynamical evolution similar to that of clusters with initial segregation. The same description applies to 2GEN (dashed line, empty blue squares), with central distances always larger than for each SEG pair. The effect of the SNe is less visible, and after this point the SEG and NOSEG curves approach common values at late times. As before, we observe mass segregation between the populations of different ages.
At the moment when the different curves cross the green zone, the older massive stars are more expanded than the younger massive stars. The separations between the generations differ from case to case, and their values are summarized in Tab. \ref{tab:tabsims1}, in the fifth and sixth columns for the SEG and NOSEG cases, respectively. The separations cover values between 1.68 and 4.10 pc. No trend is observed for the different initial density clouds and \ensuremath{\epsilon_{SF}}\xspace pairs. For the SEG sample, our best and worst models correspond to $n_0 = 9000$ cm$^{-3}$ with \ensuremath{\epsilon_{SF}}\xspace = 0.04-0.20 and $n_0 = 10000$ cm$^{-3}$ with \ensuremath{\epsilon_{SF}}\xspace = 0.09-0.20, respectively. For the NOSEG sample, the best and worst models are $n_0 = 6000$ cm$^{-3}$ with \ensuremath{\epsilon_{SF}}\xspace = 0.05-0.20 and $n_0 = 10000$ cm$^{-3}$ with \ensuremath{\epsilon_{SF}}\xspace = 0.09-0.20, respectively. Even the models we refer to as ``worst'' still show mass segregation between the two ages. We also find a similarly low value in the SEG sample for $n_0 = 8000$ cm$^{-3}$ with \ensuremath{\epsilon_{SF}}\xspace = 0.06-0.20.
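For reference, the luminosity-weighted central distance can be sketched as follows; the simple $L \propto M^{3.5}$ scaling and the mass-weighted centre are placeholders standing in for the mass-luminosity relation of \citet{2011A&A...535A..56G} and for the actual centre determination.
\begin{verbatim}
import numpy as np

def weighted_central_distance(m, pos, mass_cut=20.0, ml_exp=3.5):
    # Luminosity-weighted mean distance of massive stars
    # (M > 20 Msun) from the cluster centre.
    sel = m > mass_cut
    centre = np.average(pos, axis=0, weights=m)  # placeholder centre
    d = np.linalg.norm(pos[sel] - centre, axis=1)
    L = m[sel]**ml_exp                           # placeholder L(M)
    return np.sum(L * d) / np.sum(L)
\end{verbatim}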
\begin{figure*}
\includegraphics[width=\textwidth]{{lagradii_bygen_M5.5_RH1.0_R136_SEG_NOSEG_less}.pdf}
\caption{Lagrangian radii ($R_f$) vs time. The first generation (1GEN) is denoted by filled symbols and the second generation (2GEN) is denoted by empty symbols. Whether the simulations start with mass segregation (SEG) or not (NOSEG) is represented by solid or dashed lines, respectively. The shell radius, matching zone and SNe times are as in Fig. \ref{fig:rmasm55SEGNOSEG}. Each $R_f$ colour is given in the legend.}
\label{fig:rlagm55SEGNOSEG}
\end{figure*}
\subsection{All stars distribution}
\label{sec:allstarsdist}
We also study the spatial location of the whole stellar distribution using different Lagrangian radii ($R_f$). Specifically, we use $R_f =$ 0.1, 0.3, 0.5 and 0.7, separated by generation. We show the results for $R_f$ in Fig. \ref{fig:rlagm55SEGNOSEG}, where 1GEN is represented by filled symbols and 2GEN by empty symbols. Each $R_f$ has a different symbol and colour described in the legend. SEG and NOSEG simulations are represented by a solid and a dashed line, respectively. The shell radius, the matching zone and the beginning of the SNe are represented as before. In order not to overplot the symbols, we shift the SEG and NOSEG results by $\pm 0.10$ Myr, with the SEG results shown first.
As we work with Plummer distributions, the SEG and NOSEG mass distributions initially show the same values for the different $R_f$. After this point, we observe that the global evolution followed by both stellar distributions is very similar, making it difficult to distinguish them without the shift applied to every snapshot. At the moment of the (re)-collapse, the 1GEN stars do not immediately travel inwards, as also observed when only massive stars are analyzed, because of the strong gravitational potential produced by the high gas density towards the centre. The different $R_f$ of the older stellar component only shrink shortly after the new starburst, as the stars need time to change their outward-heading velocities. After a small contraction, the expansion resumes until the end of the simulation. The effect of SNe on the old stellar generation is not appreciable due to the larger expansion produced by the cloud dispersal. For 2GEN, the cloud expansion does not produce big changes in the stellar distribution, which shows roughly constant values after the initial contraction due to our initial virial state, until the point when SNe start, when in most cases a new rate of expansion is observed. The behaviour described above is valid for our whole sample. Regarding the final stellar locations, only $R_f = 0.1$ of 1GEN (pink filled squares) and in some cases $R_f = 0.3$ (purple right triangles) are comparable with $R_f = 0.7$ of 2GEN (green empty diamonds). The remaining 1GEN $R_f$ are always further away from the main concentration of stars. The $R_f = 0.3$ of 1GEN is only comparable with the 2GEN $R_f$ when \ensuremath{\epsilon_{SF}}\xspace $\geq 0.08$, penetrating deeper as the 1GEN stellar mass increases. The shell radius (black dashed line) is always larger than the largest $R_f$; it is only smaller when the (re)-collapse phase is reached, but it quickly exceeds it again after the second starburst. Every panel shows that the new star cluster representing R136 is more concentrated than the old stellar component during the whole simulation, so during the green zone, when \textsc{warpfield} matches the other observables, the $N$-body simulations also match the observed stellar distribution.
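The Lagrangian radii themselves follow from a simple cumulative-mass construction, sketched below for one stellar generation:
\begin{verbatim}
import numpy as np

def lagrangian_radii(m, pos, centre,
                     fractions=(0.1, 0.3, 0.5, 0.7)):
    # Radii enclosing the given fractions of the total stellar
    # mass of one generation, measured from the supplied centre.
    r = np.linalg.norm(pos - centre, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(m[order]) / m.sum()
    return [r[order][np.searchsorted(cum, f)] for f in fractions]
\end{verbatim}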
\begin{figure*}
\includegraphics[width=0.98\textwidth]{{lambda_bygen_M5.5_RH1.0_Q0.3_R136_SEG_mod}.pdf}
\caption{Mass segregation (\ensuremath{\Lambda_\text{MSR}}\xspace) evolution vs time for simulations starting with mass segregation (SEG), with panels ordered as before. Massive stars from the first generation (1GEN) or second generation (2GEN) are denoted as 1GEN$_{mas}$ and 2GEN$_{mas}$, respectively. Low mass stars are referred to as 1GEN$_{low}$ or 2GEN$_{low}$. The rest of the stars, excluding the comparison sample, are referred to as ALL. The solid black line shows a value of \ensuremath{\Lambda_\text{MSR}}\xspace $= 1$ and the dashed black line a value of \ensuremath{\Lambda_\text{MSR}}\xspace $= 0.5$. The vertical orange and blue lines indicate when the first SN event takes place for 1GEN and 2GEN, respectively. The symbols and colours for each comparison sample are shown in the legend.}
\label{fig:lambdam55SEG}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{{lambda_bygen_M5.5_RH1.0_Q0.3_R136_NOSEG_mod}.pdf}
\caption{Same as in Fig. \ref{fig:lambdam55SEG} but now for simulations starting without mass segregation (NOSEG).}
\label{fig:lambdam55NOSEG}
\end{figure*}
\subsection{Mass segregation}
We use the \ensuremath{\Lambda_\text{MSR}}\xspace to compare different combinations of star samples from 1GEN, 2GEN or mixed. We consider SEG and NOSEG models separately as we find large differences compared to the analysis above. We measure the mass segregation ratio for the following six combinations:
\begin{itemize}[left= 0pt]
\item Comparison of first generation massive stars (1GEN$_{mas}$) with the first generation low mass stars (1GEN$_{low}$) (red down filled triangles).
\item Comparison of 1GEN$_{mas}$ with the rest of the stars (ALL) (pink filled squares).
\item Comparison of 1GEN$_{mas}$ together with the second generation massive stars (2GEN$_{mas}$) with ALL (purple filled right triangles).
\item Comparison of 2GEN$_{mas}$ with second generation low mass stars (2GEN$_{low}$) (blue down empty triangles).
\item Comparison of 2GEN$_{mas}$ with ALL (light blue empty squares).
\item Comparison of 1GEN$_{mas}$ with 2GEN$_{mas}$ (orange empty right triangles).
\end{itemize}
For the SEG sample, we show in Fig. \ref{fig:lambdam55SEG} the previously described combinations of \ensuremath{\Lambda_\text{MSR}}\xspace evolving in time, with the panels following the same order as in the previous plots. As before, the moments when the first SN takes place for 1GEN and 2GEN are indicated by orange and blue vertical dashed lines, respectively. The highest level of mass segregation introduced in this sample is detected for the initial conditions at 0 Myr. After this, the gas expulsion occurs and the clusters expand, which produces a reduction in the level of mass segregation. We observe that shortly before the second starburst, when the cloud is collapsing and slowly bringing stars in from outer locations, the level of mass segregation increases slightly, and it stabilizes around this value and remains there until the end of the simulation. The rest of the combinations can only start to be measured after the (re)-collapse, as they include stars from 2GEN, and our observations are based on comparisons with the sample just described. Taking again the 1GEN massive stars but now compared to the rest of the stars (1GEN$_{mas}$ vs ALL), shown as pink filled squares, a lower \ensuremath{\Lambda_\text{MSR}}\xspace is measured, with values close to one or at least always below the comparison sample. The results for all massive stars compared to the rest of the stars (1GEN$_{mas}$ - 2GEN$_{mas}$ vs ALL) are shown with purple right triangles. In this sample, the starting level of mass segregation is higher (\ensuremath{\Lambda_\text{MSR}}\xspace $\gtrsim$ 2), which is followed by a decrease, but the values always remain higher than for the comparison sample. The youngest massive stars compared to their respective low mass stars (2GEN$_{mas}$ vs 2GEN$_{low}$) are shown with empty blue down triangles. \ensuremath{\Lambda_\text{MSR}}\xspace starts at the level of mass segregation introduced as an initial parameter, which is reduced as before, but not as much; then it oscillates and finishes at the end of the simulation with a value close to the initial one. The massive stars of 2GEN are compared with the rest of the stars (2GEN$_{mas}$ vs ALL) and the results are shown with light blue empty squares. \ensuremath{\Lambda_\text{MSR}}\xspace shows higher values, as the sample includes stars from 1GEN, which are more expanded than the youngest low mass stars; it then oscillates like the previous sample, finishing with values higher than the initial ones. In some of the cases, the final \ensuremath{\Lambda_\text{MSR}}\xspace can be similar to the initial one if the 1$\sigma$ error is taken into account. The last combination is the most important for the goal of this work, as it describes whether the older massive stars are more expanded than those of the second generation. We denote this comparison as 1GEN$_{mas}$ vs 2GEN$_{mas}$ and show it as orange empty right triangles. We observe a very stable value of inverse mass segregation for all the cases, remaining close to the level of mass segregation at the moment when the second stellar generation is introduced into the simulation. At the end of the simulation, we measure values of \ensuremath{\Lambda_\text{MSR}}\xspace comparable to or even lower than the initial ones. The moments of the first SNe for each generation are shown with dashed vertical orange and blue lines, respectively.
The SNe of the 1GEN do not produce a big change in the value of \ensuremath{\Lambda_\text{MSR}}\xspace, but for the cases which include 2GEN alone as the comparison sample, the SNe produce instabilities in the \ensuremath{\Lambda_\text{MSR}}\xspace parameter, reducing its rate of increase and, in some cases, when more SNe events are possible, even producing a decreasing trend towards the end. At the moment when the other observational parameters match the values in the literature (green zone), for all cases the youngest stellar generation, which represents R136, is more concentrated than the older stars.
For the NOSEG sample, we show the results for the same combinations described before in Fig. \ref{fig:lambdam55NOSEG}. Initially, we find \ensuremath{\Lambda_\text{MSR}}\xspace = 1, as we set up the simulation. Some small decreases or increases are observed, but the value quickly returns to one. After the (re)-collapse, small increases in \ensuremath{\Lambda_\text{MSR}}\xspace are observed, with the simulations ending at values slightly above one and maximum values during the simulation of typically \ensuremath{\Lambda_\text{MSR}}\xspace $\sim$ 1.5.
\ensuremath{\Lambda_\text{MSR}}\xspace remains close to 1.0 as a result of the fairly uniform global expansion of the cluster (see Sec. \ref{sec:allstarsdist}). From this, we may infer that massive stars are scattered almost at the same level as the low mass stars. When we start to measure 1GEN$_{mas}$ vs ALL after the (re)-collapse, we observe \ensuremath{\Lambda_\text{MSR}}\xspace < 1: the old stellar component, having already expanded, encounters a new cluster more concentrated in the centre and, in consequence, \ensuremath{\Lambda_\text{MSR}}\xspace detects inverse mass segregation. The level of mass segregation remains stable until the end of the simulation, with a value of $\sim$ 0.5. The massive stars of both generations compared with the low mass component, shown as 1GEN$_{mas}$ - 2GEN$_{mas}$ vs ALL, start with \ensuremath{\Lambda_\text{MSR}}\xspace $\sim$ 1.0 but continuously increase, with values closely above the \ensuremath{\Lambda_\text{MSR}}\xspace for 1GEN$_{mas}$ vs 1GEN$_{low}$. At the end of the simulation, we detect a minimum mass segregation of 1.5 and, taking into account the 1$\sigma$ error, a few maximum values of 2. The 2GEN$_{mas}$ vs 2GEN$_{low}$ comparison initially shows \ensuremath{\Lambda_\text{MSR}}\xspace = 1.0, as set up, followed by a slow increase until the end of the simulation, closely following the previous sample, overlapping in the average level of mass segregation and finishing with similar values. Comparing all the stars with the massive stars of the new cluster, as 2GEN$_{mas}$ vs ALL, an initial level of mass segregation of at least 1.5 is detected, followed by a continuous increase until the end of the simulation. The final \ensuremath{\Lambda_\text{MSR}}\xspace values at 8 Myr are always larger than 2, with most of the cases showing \ensuremath{\Lambda_\text{MSR}}\xspace $\sim$ 2.5 and a few maximum average values of 3. The comparison of the old and young massive components, defined as 1GEN$_{mas}$ vs 2GEN$_{mas}$, shows inverse mass segregation with \ensuremath{\Lambda_\text{MSR}}\xspace < 0.5 during the whole time of measurement. This value slowly decreases, reaching a level of mass segregation even smaller than the initial one. The SNe effect follows the same description as in the SEG case, also showing instabilities from the moment the SNe of the 2GEN start to take place. At the moment when this last comparison sample is inside the green zone, all the cases show inverse mass segregation, as we aim to achieve.
\begin{figure}
\includegraphics[width=\columnwidth]{{Lambda_central_RH1.0_Q0.3_2GEN_ALL_BOTH_2D_M10Mo_M5.5_R136_SEG_NOSEG}.pdf}
\includegraphics[width=\columnwidth]{{RLambda_central_RH1.0_Q0.3_2GEN_ALL_BOTH_2D_M10Mo_M5.5_R136_SEG_NOSEG}.pdf}
\caption{The top panels show the level of mass segregation (\ensuremath{\Lambda_\text{MSR}}\xspace) measured for different sample sizes of randomly chosen stars ($N_\text{MST}$). The bottom panels show \ensuremath{\Lambda_\text{MSR}}\xspace for different radii. The left and right panels show the results excluding (2GEN) or including (ALL) the old stellar component, respectively. The cyan zones are the observational values from K2021. The different time snapshots and initial mass segregation are indicated by different symbols, as the legend denotes. The initial conditions for this case are a cloud of $n_0$ = 10000 cm$^{-3}$ and star clusters with $\epsilon_{SF}$= 0.07-0.20.}
\label{fig:seg_comp_obs}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{{Rdens_central_RH1.0_Q0.3_2GEN_ALL_BOTH_M10Mo_M5.5_R136_SEG_NOSEG}.pdf}
\includegraphics[width=\columnwidth]{{RsumM_central_RH1.0_Q0.3_2GEN_ALL_BOTH_M10Mo_M5.5_R136_SEG_NOSEG}.pdf}
\includegraphics[width=\columnwidth]{{RMtot_central_RH1.0_Q0.3_2GEN_ALL_BOTH_M10Mo_M5.5_R136_SEG_NOSEG}.pdf}
\caption{The top panels show the projected mass density ($\rho$) of the central zone. The different curves show the fitting lines from the observational study (K2021). The central panels show the surface density ($\Sigma$) within a given radius. The bottom panels show the total stellar mass ($M_\text{tot}$) within a given radius. The left and right panels show the results excluding (2GEN) or including (ALL) the old stellar component. The cyan lines in the central and bottom panels are the observational values from K2021, and the red line and orange circles are the respective values for simulations starting with and without mass segregation. The different time snapshots and initial mass segregation are indicated by different symbols, as the legend denotes. The initial conditions for this case are $n_0$ = 10000 cm$^{-3}$ and $\epsilon_{SF}$= 0.07-0.20.}
\label{fig:R_rho_surf_Msum}
\end{figure}
\section{Observational measurements}
We compute the \ensuremath{\Lambda_\text{MSR}}\xspace and the density profile of the central zone of our simulations for snapshots at 1.0, 1.5 and 2.0 Myr, trying to match the observational measurements of K2021. In our simulations, we have accurate positions, masses and ages for all the particles, which is not achievable in an observational study. K2021 presented a detection mass sensitivity which is low in the central zone and increases towards the outer parts. We blind our sample according to the probability of detection presented by them (see Figure 11, bottom panel, in K2021). In our simulations, we have two stellar components which would be difficult to distinguish directly in an observation, especially in the small and concentrated central zone. In order to study both cases, we proceed both excluding and including the old stars. We find no significant differences between our models; hence, we present the best match, which corresponds to the simulations starting with a cloud of $n_0$ = 10000 cm$^{-3}$ and an initial cluster with $\epsilon_{SF}$= 0.07 followed, after the (re)-collapse, by our representation of R136 with $\epsilon_{SF}$= 0.20. We present our results using the same plots and units as K2021 for a direct comparison.
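The blinding step can be sketched as follows, where \texttt{p\_detect} is a hypothetical stand-in for the mass- and radius-dependent completeness curve of K2021 (their Figure 11):
\begin{verbatim}
import numpy as np

def blind_sample(m, xy, p_detect, rng=None):
    # Keep each star with its detection probability p_detect(m, R),
    # emulating the observational incompleteness.
    rng = rng or np.random.default_rng()
    R = np.linalg.norm(xy, axis=1)
    keep = rng.random(len(m)) < p_detect(m, R)
    return m[keep], xy[keep]
\end{verbatim}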
\subsection{Central mass segregation}
We measure the \ensuremath{\Lambda_\text{MSR}}\xspace parameter following the methodology of K2021, including their completeness limitation and observational biases. In Fig. \ref{fig:seg_comp_obs}, top panels, we show the \ensuremath{\Lambda_\text{MSR}}\xspace calculated for different sample sizes of randomly chosen stars ($N_\text{MST}$). The cyan zone represents the 1$\sigma$ range from K2021. In the left panel, where the old stars are excluded, our simulations show a flatter trend than the observational results. The central values of \ensuremath{\Lambda_\text{MSR}}\xspace match the central zone in some cases only for $N_\text{MST} \leq 100$, and for larger $N_\text{MST}$ we can only reach the cyan zone through the 1$\sigma$ error. The right panel, where we include the stars from the old component, shows an even flatter curve, with central values below and above the cyan zone. For this case, some of the central \ensuremath{\Lambda_\text{MSR}}\xspace values match for $N_\text{MST} \leq 150$. The 1$\sigma$ ranges also reach the observational zone for most of the cases, but with less spread as we increase $N_\text{MST}$. The differences between the different 2GEN ages or initial mass segregation are small.
In the bottom panels, we measure the level of mass segregation for different radii. We can only match the cyan zone for $R \geq 0.4$ pc, taking into account the 1$\sigma$ error. At $R = 0.2$ pc, our results show \ensuremath{\Lambda_\text{MSR}}\xspace close to 1, whereas K2021 show a larger value of mass segregation; this is the only radius where we measure large differences between our studies, as the cyan zone is never reached whether the older component is excluded or not. The different time snapshots show similar average values or at least intersect within the 1$\sigma$ error.
\subsection{Central density profile}
We measure the 2D mass density profiles for a given radius as done by K2021 and summarize the results in Fig. \ref{fig:R_rho_surf_Msum}, top panels. The different curves are the mass density profiles shown in K2021 at different estimated ages. Our results match the curves close to the centre ($R < 0.2$~pc), stay slightly above them until $R \sim 1$~pc, and there again match the solid lines. The same behaviour is shown whether the old component is excluded or not. We do not observe big differences for any time snapshot or initial mass segregation. The results shown in K2021 do not match the curves perfectly either, as we can see from the cyan zone, which denotes the spread of the results measured by the authors.
In the central panels, we show the surface density for different given radii. We find that our results follow similar curves, but the final values show differences. The final surface density found by K2021 is $\Sigma = 2.7 \times 10^3$ M$_\odot/\text{pc}^2$, and our best match is given in the left panel, with small differences of $-0.1 \times 10^3$ M$_\odot/\text{pc}^2$ and $+0.2 \times 10^3$ M$_\odot/\text{pc}^2$ for the NOSEG and SEG simulations, respectively. In the right panel, where all stars are included, higher surface densities of the order of $\geq +0.4 \times 10^3$ M$_\odot/\text{pc}^2$ are found. The results are in both cases inside the cyan zone, but when the old component is included the values approach the top limit.
In the bottom panels, we show the stellar mass within given radii. As before, the closest values are observed in the left panel. K2021 estimated a total mass of M$_\text{tot}=1.5 \times 10^4$ M$_\odot$; our results match this value for the SEG simulations and differ by less than 10\% ($-0.1 \times 10^4$ M$_\odot$) for the NOSEG simulations. In the right panel, the NOSEG simulations also find a close value, with a difference of less than 10\% ($+0.1 \times 10^4$ M$_\odot$). Simulations starting with mass segregation enclose more mass, resulting in a value above the observational measurement by $+0.3 \times 10^4$ M$_\odot$. We also observe that in the right panel our results are closer to the top limit of the cyan zone.
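The profiles of this section reduce to simple projected sums; a sketch with an illustrative radial binning (the radii must start above zero for $\Sigma$) is:
\begin{verbatim}
import numpy as np

def projected_profiles(m, xy, radii):
    # Projected density in annuli, cumulative surface density
    # Sigma(<R) = M(<R)/(pi R^2) and enclosed mass M(<R).
    radii = np.asarray(radii, dtype=float)   # require radii > 0
    R = np.linalg.norm(xy, axis=1)
    rho = np.array([m[(R >= r1) & (R < r2)].sum()
                    / (np.pi * (r2**2 - r1**2))
                    for r1, r2 in zip(radii[:-1], radii[1:])])
    M_enc = np.array([m[R < r].sum() for r in radii])
    Sigma = M_enc / (np.pi * radii**2)
    return rho, Sigma, M_enc
\end{verbatim}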
\section{Discussion}
In this work we demonstrate that the stellar distribution observed in NGC~2070 is consistent with an older, dynamically relaxed stellar cluster hosting in its centre a younger, more massive star cluster known as R136. We achieve this through $N$-body simulations coupled with a semi-analytic 1D model for the evolution of cloud/cluster systems.
We evolve a molecular cloud with initial mass $M_\text{cloud}~=~3.16~\times~10^{5}$~M$_\odot$, exploring initial uniform densities ($n_0$) between 6000 and 10000~cm$^{-3}$ and hosting different star clusters, leading to \ensuremath{\epsilon_{SF}}\xspace between 0.04 and 0.09. We scale the velocities in order to obtain dynamical equilibrium ($\alpha=0.5$). All the combinations shown in this work include clusters which produce insufficient feedback to dissolve the cloud, despite being massive and young. As a consequence, the cloud (re)-collapses and a second starburst occurs. We fix this second star cluster to have \ensuremath{\epsilon_{SF}}\xspace = 0.20. This choice is made in order to have more time to match the ages of both stellar generations and the shell radius. After exploring the parameter space to reproduce the observables of R136, we find that a second star cluster starting in dynamical equilibrium is not able to match the observations. We explore different $\alpha$, and we find that the best dynamical state for the new stellar component is $\alpha=0.3$, i.e., the re-expanding cloud hosts a new cluster that is initially contracting. The dynamics of the second cluster are affected by the dynamics of the expanded, older, less massive stellar component and by the expanding cloud, which removes gravitational potential. We also explore whether our results vary if both star clusters start with mass segregation or not. In this paper, we only include a summary of the results for the successful $\alpha$.
We study NGC~2070 as a whole, measuring the average distances to the centre of only the massive stars ($M > 20$ M$_\odot$), weighted by their luminosity. In all cases we find that the new massive stars remain close to the centre while the remaining older massive stars lie at larger distances, as is observed in NGC~2070. It is important to mention that this does not mean there are no old massive stars close to the centre; in fact, we do detect them, but they are too few to reduce the average central distance. We observe that the different stellar components are more easily distinguished if the star clusters start with mass segregation, as the new massive stars are then found much more concentrated than their counterparts. At later stages, the expansion of the new stellar component is larger, a consequence of the SNe, which have had more time to occur.
The massive stars belonging to the clusters that start with mass segregation are found, on average, closer to the centre due to their imposed initial configurations; this difference is more visible for the newer massive stars.
We also study the Lagrangian radii of both stellar components. We observe that, independent of the initial level of mass segregation, the old stellar component is always more extended than the new cluster. The initial contraction of the second stellar generation, caused by our imposed virial ratio, is visible in all the layers. Its expansion strengthens when the new SNe start to explode at later stages of our simulations, but this does not influence the matching scenarios, as it occurs after the moment when all the observables intersect. The clusters which start with mass segregation are slightly more concentrated than their counterparts, and this holds across our complete sample.
To quantify our model in a physical way, we measure the level of mass segregation for the different cases using the \ensuremath{\Lambda_\text{MSR}}\xspace parameter. As the new stellar component is more concentrated than the old star cluster, we expect to find \ensuremath{\Lambda_\text{MSR}}\xspace $< 1$ when we compare the massive stars from the older generation with the new massive stars. We show the results in separate plots this time, as the initial \ensuremath{\Lambda_\text{MSR}}\xspace values differ strongly depending on whether we start with mass segregation (\ensuremath{\Lambda_\text{MSR}}\xspace $\gg 1$) or not (\ensuremath{\Lambda_\text{MSR}}\xspace $\sim 1$). The old cluster which starts with mass segregation loses this configuration due to the cloud expansion, and by the moment the new star cluster is included its \ensuremath{\Lambda_\text{MSR}}\xspace $\gtrsim 1$, i.e., it is effectively a cluster without mass segregation, like its counterparts, which at the same moment exhibit the same distribution. After the inclusion of the new clusters, the comparison between the different samples shows similar behaviours. At the moment when the observables in \textsc{warpfield} are matched, every combination of initial conditions shows \ensuremath{\Lambda_\text{MSR}}\xspace $\sim 0.5$ for the cases without initial mass segregation, or even lower values when we start with segregated clusters. Across our whole parameter space, the NGC~2070 stellar configuration is recovered regardless of the initial conditions used here.
We also compare to the study of K2021, who present observations of the central region of NGC~2070, where the younger cluster R136 is located, and discuss the resulting radial mass segregation and density profiles. We follow their approach as closely as possible and find a good match with the observations. Unlike in the mass segregation results presented above, we exclude any star with a central distance larger than 1.4 pc, as the observational study has done. We can closely match the descending trend of the \ensuremath{\Lambda_\text{MSR}}\xspace parameter as we increase the size of the sample (N$_\text{MST}$). Using only the new stellar component, which in our work represents R136, we do not match the exact central values in every case, but our 1 sigma error bars are always close to the observational values. Adding the remaining stars from the old stellar component in this zone, we can match the central values. This suggests that some older stars may also be contaminating the observational study. We measure \ensuremath{\Lambda_\text{MSR}}\xspace at different central distances and find an increasing trend. The increasing trend is also found by K2021, with only one exception at a central distance of 0.2 pc. This discrepancy is found with or without the old stellar component. Our central values are slightly below the observational results but always intersected by the 1 sigma error bars. A star with a mass of $\sim$ 300 M$_\odot$ has been detected very close to the R136 centre, but in this work we did not extend our initial mass function beyond $\sim$ 120 M$_\odot$, since expecting to find such a massive star also very close to the centre in our simulations would be questionable given the stochastic dynamical interactions. This central star, taken into account by the observers, raises the measured mass segregation at R $\leq 0.2$ pc, producing the biggest difference between our works. In the referenced study, the values of mass segregation cover the range 1.0 $\leq$ \ensuremath{\Lambda_\text{MSR}}\xspace $\leq 1.28$, which is a very small range for this parameter, and it can vary easily depending on the random star selection \citep{2009MNRAS.395.1449A}. For the radial density profiles, we find good agreement between the observations and our simulations: our central values match the observational values very accurately. We can only match these radial profiles if, at the moment of the introduction of the new star cluster, it is contracting instead of being in equilibrium. We try different virial ratios ($\alpha \leq 0.5$); the best agreement with the observations is obtained for $\alpha=0.3$, which is the case presented in this work. This finding is independent of the initial mass segregation and of the initial cloud density and star formation efficiency pairs.
\section{Conclusions}
We conclude that an evolving molecular cloud with an initial mass of 3.16$\times 10^5$ M$_\odot$ giving birth to two stellar generations can reproduce the observational characteristics of the central region of 30 Doradus in the Large Magellanic Cloud. Our model of an older first-generation star cluster with a mass between 1.26$\times 10^4$ M$_\odot$ and 2.85$\times 10^4$ M$_\odot$, starting in virial equilibrium, followed by a younger second-generation cluster of $\approx 6.32\times 10^4$ M$_\odot$, starting in contraction with a virial ratio of 0.3, can match the stellar configuration observed in NGC~2070, consisting of an old expanded cluster hosting in its centre a younger, more massive star cluster known as R136. The resulting new stellar component shows close agreement with the mass segregation observations of R136, except in the very central zone ($R < 0.2$ pc), where a $\sim$ 300 M$_\odot$ star is located that has not been included in this work. Whether we include remnants from the old component or not, our simulations match the density profile of the central zone of NGC~2070. Therefore, this result is independent of the probable contamination by old stars in K2021.
We caution that there may be other configurations that lead to an equally good match to the observational constraints.
The approach presented here is kept simple in order to allow for the investigation of a large parameter space. Subsequent studies based on complex and computationally more expensive 3D radiation-hydrodynamic simulations can use our best-fit model as a starting point.
We observe that the second stellar generation, representing R136, remains more concentrated than the first generation, which can be well understood as a natural outcome of the stellar dynamical evolution in the time-varying potential. We note that the \textsc{warpfield} model could in principle produce more massive star clusters that also match the ages and shell radius of NGC~2070; however, in these cases the spatial distribution of stars is typically too extended to be compatible with the observational constraints.
\section*{Acknowledgments} We acknowledge support from ANID (CONICYT-PFCHA/Doctorado acuerdo bilateral DAAD/62170008) and from the German Academic Exchange Service (DAAD) in funding program 57395809. The authors acknowledge support by the state of Baden-W\"urttemberg through bwHPC and the German Research Foundation (DFG) through grant INST 35/1134-1 FUGG providing the computing power to conduct the simulation suite presented here. We also thank the DFG for financial aid via the Collaborative Research Center SFB881 (Project-ID 138713538) {\em The Milky Way System} in subprojects A1, B1, B2, and B8, and we acknowledge encouragement from the Heidelberg Cluster of Excellence EXC 2181 (Project-ID 390900948) {\em STRUCTURES: A unifying approach to emergent phenomena in the physical world, mathematics, and complex data} funded by the German Excellence Strategy.
\section*{DATA AVAILABILITY STATEMENT}
The data of the full set of simulations (see Tab. \ref{tab:tabsims1}) presented in this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:intro}
For many years it was thought that the expansion of the universe should be slowing due to the gravitational attraction of matter, but measurements of Type Ia supernovae and other observations have shown that the expansion rate is in fact accelerating \citep{1998AJ....116.1009R,1999ApJ...517..565P}. This accelerating expansion is generally attributed to an unknown component of the energy density of the universe commonly referred to as ``dark energy.'' One of the goals of future cosmological probes (e.g. LSST, JDEM, and Euclid) \citep{2001ASPC..232..347T, 2005ASPC..339...95T,2009arXiv0901.0721A, 2010arXiv1001.3349B} is to determine constraints on dark energy equation of state parameters, e.g. $w \equiv P/\rho$ and $\mathrm{w_a} \equiv dw/da$ \citep{2007IJMPD..16.1581J}, where $P$ is the pressure from dark energy, $\rho$ is its mass density, and $a$ is the scale factor of the universe (normalized to be 1 today).
In order for these experiments to be successful, we require information about the redshift of all objects used to make measurements. However, it is impractical to measure spectroscopic redshifts for hundreds of millions of galaxies, especially extremely faint ones. We can measure the redshift of many more objects from photometric information, e.g. by using a large set of spectroscopic redshifts to create templates of how color varies with redshift \citep{1995AJ....110.2655C}. However current and future spectroscopic surveys will be highly incomplete due to selection biases dependent on redshift and galaxy properties \citep{2006MNRAS.370..198C}. Because of this, along with the catastrophic photometric errors\footnote{such as contamination from overlapping or unresolved objects; this is a frequent problem in deep surveys, particularly at high redshifts, cf. Newman et al. 2010} that can occur at a significant ($\sim 1\%$) rate \citep{2009ApJ...699..958S, 2010MNRAS.401.1399B}, photometric redshifts are not as well understood as redshifts determined spectroscopically. If future dark energy experiments are to reach their goals, it is necessary to develop a method of calibrating photometric redshifts with high precision \citep{2006astro.ph..9591A, 2006MNRAS.366..101H, 2006ApJ...636...21M}. Current projections for LSST cosmic shear measurements estimate that the true mean redshift of objects in each photo-z bin must be known to better than $\sim0.002(1+z)$ \citep{2006ApJ...644..663Z, 2006JCAP...08..008Z, 2006ApJ...652..857K, 2006AIPC..870...44T} with stringent requirements on the fraction of unconstrained catastrophic outliers \citep{2010arXiv1002.3383H}, while the width of the bin must be known to $\sim0.003(1+z)$ \citep{2009arXiv0912.0201L}.
In this paper we test a new technique for calibrating photometric redshifts measured by other algorithms, which exploits the fact that objects at similar redshifts tend to cluster with each other. If we have two galaxy samples, one with only photometric information and the other consisting of objects with known spectroscopic redshifts, we can measure the angular cross\hyp{}correlation between objects in the photometric sample and the spectroscopic sample as a function of spectroscopic $z$. This clustering will depend on both the intrinsic clustering of the samples with each other and the degree to which the samples overlap in redshift. Autocorrelation measurements for each sample give information about their intrinsic clustering, which can be used to break the degeneracy between these two contributions. The principal advantage of this technique is that, while the two sets of objects should overlap in redshift and on the sky, it is not necessary for the spectroscopic sample to be complete at any given redshift. Therefore it is possible to use only the brightest objects at a given $z$, from which it is much easier to obtain secure redshift measurements, to calibrate photometric redshifts. Even systematic incompleteness (e.g. failing to obtain redshifts for galaxies of specific types) in the spectroscopic sample is not a problem, so long as the full redshift range is sampled. This method is effective even when the two samples do not have similar properties (e.g. differing luminosity and bias).
We here describe a complete end-to-end implementation of cross\hyp{}correlation methods for calibrating photometric redshifts and present the results of applying these algorithms to realistic mock catalogs. Throughout the paper we assume a flat $\Lambda$CDM cosmology with $\Omega_m$=0.3, $\Omega_{\Lambda}$=0.7, and Hubble parameter $H_0=100h$ km s$^{-1}$ Mpc$^{-1}$; where $h$ is not explicitly included in formulae, we assume $h$=0.72, matching the Millennium simulations. In \S \ref{sec:datasets} we describe the catalog and data sets used to test cross\hyp{}correlation methods. In \S \ref{sec:method} we describe the reconstruction techniques used in detail, and in \S \ref{sec:results} we provide the results of the calculation. In \S \ref{sec:conclusion} we conclude, and also give a more concise description of the steps taken, providing a recipe for cross\hyp{}correlation photometric redshift calibration.
\section{Data Sets}
\label{sec:datasets}
To test this method, it is necessary to construct two samples of galaxies, one with known redshift (``spectroscopic'') and the other unknown (``photometric''). We have done this using mock DEEP2 Redshift Survey light cones produced by Darren Croton. A total of 24 light cones were constructed by taking lines-of-sight through the Millennium Simulation halo catalog \citep{2006astro.ph..8019L} with the redshift of the simulation cube used increasing with distance from the observer \citep{2007MNRAS.376....2K}. The light cones were then populated with galaxies using a semi-analytic model whose parameters were chosen to reproduce local galaxy properties \citep{2006MNRAS.365...11C}. Each light cone covers the range $0.10<z<1.5$ and corresponds to a $0.5\times2.0$ degree region of sky. The galaxies in this mock catalog will have properties (including color, luminosity, and large-scale structure bias) which vary with redshift due to the same factors believed to affect real galaxy evolution. The semi-analytic model used is certainly imperfect, but yields samples of galaxies that pose the same difficulties (e.g. bias evolution and differences in clustering between bright and faint objects) as real surveys will exhibit; they therefore provide a realistic test of our ability to reconstruct redshift distributions of faint samples using spectroscopy of only a brighter subset.
The spectroscopic sample is generated by selecting 60\% of objects with observed $R$-band magnitude $R<24.1$, which gives a sample whose characteristics resemble the DEEP2 Galaxy Redshift survey (Newman et al. 2010, in prep.). The mean number of spectroscopic objects over the 24 light cones is $35,574$. The size of this sample is comparable to the number of objects predicted to be needed for calibration using template-based methods ($\sim10^5$ \citep{2009arXiv0912.0201L, 2008ApJ...682...39M}). However, this sample differs greatly in what it contains: it consists only of relatively bright objects, rather than having to be a statistically complete sample extending as faint as the objects to which photometric redshifts will be applied (a necessity for accurate training or template development, as the spectral energy distributions of faint galaxies are observed to lie outside the range luminous galaxies cover, both at $z\sim 0$ and $z\sim 1$ \citep{2006ApJ...647..853W, 2010PASP..122..485M}). Studies such as \citet{2010MNRAS.401.1399B} have assumed for such projections that 99.9\% redshift success can be achieved for faint galaxy samples (e.g. of photometric-redshift outliers); however, that is a failure rate more than two orders of magnitude lower than that actually achieved by current large surveys on 10-meter class telescopes such as VVDS \citep{2005A&A...439..845L}, ZCOSMOS \citep{2007ApJS..172...70L}, or DEEP2 (Newman et al. 2010, in prep.), surveys which are 1.5-5 magnitudes shallower than the limits of Stage III and Stage IV surveys such as DES and LSST. In contrast, as noted in \S \ref{sec:intro}, the cross\hyp{}correlation techniques we focus on in this paper do not require a complete spectroscopic sample, and hence do not require improvements in redshift success over existing projects to provide an accurate calibration.
The other sample, referred to hereafter as the photometric sample, is constructed by selecting objects in the mock catalog down to the faintest magnitudes available, with the probability of inclusion a Gaussian with $\langle z \rangle = 0.75$ and $\sigma_z = 0.20$. This emulates choosing a set of objects which have been placed in a single photometric redshift bin by some algorithm with Gaussian errors. It should be noted that, since the redshift distribution of the mock catalog we select from is not uniform, the resulting redshift distribution of the photometric sample is not a pure Gaussian. The overall redshift distribution of all objects in the catalog is fit well using a 5th degree polynomial, so the net distribution of the photometric sample can be well represented by the product of this polynomial and a Gaussian. After applying this Gaussian selection to the mock catalog, we then randomly throw out half of the selected objects in order to cut down on calculation time. The mean number of objects in the final photometric sample over the 24 light cones is $44,053$.
The mock catalog includes both the cosmological redshift as well as the observed redshift for each object. The observed redshift shows the effects of redshift-space distortions \citep{1998ASSL..231..185H}, and is the redshift value used for objects in the spectroscopic sample. When plotting the redshift distribution of the photometric sample we use the cosmological redshifts for each object (differences are small). Fig. \ref{fig:ndist} shows the number of galaxies as a function of redshift for each sample, as well as the entire catalog. While there is complete information on the actual redshift distributions for both samples in the catalog, only the distribution of the spectroscopic sample is assumed to be known in our calculations. We assume no information is known about the redshift distribution of the photometric sample, and attempt to recover it using only correlation measurements.
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f1}
\caption{The total number of galaxies in each sample as a function of redshift, summed over the 24 fields, binned with $\Delta z = 0.04$. The solid line is the overall redshift distribution for all galaxies in the mock catalogs, the dashed line is the distribution for our photometric sample (selected from the overall sample via a Gaussian in $z$, emulating objects placed in a single photometric redshift bin), while the dot-dashed line is the redshift distribution for our spectroscopic sample, selected to have magnitude $R<24.1$.}
\label{fig:ndist}
\end{figure}
\section{Method}
\label{sec:method}
After constructing the two samples of objects from each mock catalog, we can use standard correlation measurements and exploit the clustering of galaxies to recover the redshift distribution of the photometric sample. From here on, the spectroscopic sample, with known observed redshifts, will be labeled '$s$', and the photometric sample, with redshifts assumed unknown, will be labeled '$p$'.
The most fundamental correlation measurements we use are the real space two-point correlation function and the angular two-point correlation function. The real space two-point correlation function $\xi(r)$ is a measure of the excess probability $dP$ (above that for a random distribution) of finding a galaxy in a volume $dV$, at a separation $r$ from another galaxy \citep{1980lssu.book.....P}:
\begin{equation}
dP=n[1+\xi(r)]dV ,
\end{equation}
where $n$ is the mean number density of the sample. The angular two-point correlation function $w(\theta)$ is a measure of the excess probability $dP$ of finding a galaxy in a solid angle $d\Omega$, at a separation $\theta$ on the sky from another galaxy \citep{1980lssu.book.....P} :
\begin{equation}
dP=\Sigma[1+w(\theta)]d\Omega ,
\end{equation}
where $\Sigma$ is the mean number of galaxies per steradian (i.e., the surface density). From the spectroscopic sample we measure the real space two-point autocorrelation function, $\xi_{ss}(r,z)$, and from the photometric sample we measure the angular two-point autocorrelation function, $w_{pp}(\theta)$. These measurements give information about the intrinsic clustering of the samples. We also measure the angular cross\hyp{}correlation function between the spectroscopic and photometric sample, $w_{sp}(\theta,z)$, as a function of redshift. This is a measure of the excess probability of finding a photometric object at an angular separation $\theta$ from a spectroscopic object, completely analogous to $w_{pp}$.
Modeling $\xi(r)$ as a power law, $\xi(r)=(r/r_0)^{-\gamma}$, which is an accurate assumption from $\sim 0.5$ to $\sim 20 h^{-1}$ comoving Mpc for both observed samples and those in the mock catalogs, we can determine a relation between the angular cross\hyp{}correlation function $w_{sp}(\theta,z)$ and the redshift distribution. Following the derivation in \cite{2008ApJ...684...88N} (cf. eq. 4),
\begin{equation}
\label{eq:wsp}
w_{sp}(\theta,z) = \frac{\phi_p(z)H(\gamma_{sp})r^{\gamma_{sp}}_{0,sp}\theta^{1-\gamma_{sp}}D(z)^{1-\gamma_{sp}}}{dl/dz} ,
\end{equation}
where $H(\gamma)=\Gamma(1/2)\Gamma((\gamma-1)/2)/\Gamma(\gamma/2)$ (where $\Gamma(x)$ is the standard Gamma function), $\phi_p(z)$ is the probability distribution function of the redshift of an object in the photometric sample, $D(z)$ is the angular size distance, and $l(z)$ is the comoving distance to redshift $z$. Hence, to recover $\phi_p(z)$ from $w_{sp}$, we also must know the basic cosmology (to determine $D(z)$ and $dl/dz$), as well as the cross\hyp{}correlation parameters, $r_{0,sp}$ and $\gamma_{sp}$. It has been shown that uncertainties in cosmological parameters have minimal effect on the recovery of $\phi_p(z)$ \citep{2008ApJ...684...88N}. To determine the cross\hyp{}correlation parameters, we use the assumption of linear biasing, under which the cross\hyp{}correlation is given by the geometric mean of the autocorrelations of the two samples, $\xi_{sp}(r)=(\xi_{ss}\xi_{pp})^{1/2}$. Thus we need to measure the autocorrelation functions for each sample and determine their parameters, $r_0$ and $\gamma$.
\subsection{Autocorrelation of the Spectroscopic Sample}
\label{sec:autospec}
We first need to determine how the real space autocorrelation function of the spectroscopic sample, $\xi_{ss}$, evolves with redshift. To do this we bin the spectroscopic objects in redshift and measure the two-point correlation function as a function of projected separation, $r_p$, and line-of-sight separation, $\pi$, for the objects in each bin. Since it is affected by redshift-space distortions in the line-of-sight direction, it is difficult to measure the evolution of $\xi_{ss}(r)$ accurately directly from the observed $\xi(r_p,\pi)$. However, as we describe later, we can use $\xi(r_p,\pi)$ to derive the projected correlation function, $w_p(r_p)$, which is not significantly affected by redshift-space distortions. The evolution of the projected correlation function with redshift can be related to the evolution of $\xi(r)$.
To begin we measure $\xi_{ss}$ in bins of $r_p$ and $\pi$, using the Landy \& Szalay estimator \citep{1993ApJ...412...64L}:
\begin{equation}
\label{eq:xi}
\xi=\frac{1}{RR}\!\left[DD\left(\!\frac{N_R}{N_D}\!\right)^{\!\!2}\!-2DR\left(\!\frac{N_R}{N_D}\!\right)+RR\right] ,
\end{equation}
where DD, DR, and RR are the number of object pairs in each bin of $r_p$ and $\pi$ -- i.e., the number of cases where an object of type B is located a separation of $r_p$ and $\pi$ away from an object of type A -- considering pairs between objects in the data catalog and other objects in the data catalog, between the data catalog and a random catalog, or within the random catalog, respectively; we will describe these catalogs in more detail shortly. Here $N_D$ and $N_R$ are the total numbers of objects in the data and random catalogs. For each object pair, we calculated the projected separation, $r_p$, and the line-of-sight separation, $\pi$, using the equations:
\begin{align}
\label{eq:rp}
r_p&=D(z_{mean})\Delta \theta \\
\label{eq:pi}
{\rm and~~~}\pi&= |z_1-z_2|\ \frac{dl}{dz} \bigg|_{z_{mean}} ,
\end{align}
where $z_1$ and $z_2$ are the redshifts of the two objects in a pair, $\Delta \theta$ is their angular separation on the sky, and $z_{mean}=(z_1+z_2)/2$.
We calculate DD by measuring the transverse and line-of-sight distance between every pair of objects in the data sample and binning those distances to find the number of pairs as a function of $r_p$ and $\pi$. In this case the data sample is all of the objects in the chosen spectroscopic $z$-bin. In turn, RR is the pair count amongst objects in a ``random'' catalog, and DR is the cross pair count calculated using pairs between data objects and random catalog objects. We construct the random catalog to have the same shape on the sky as the data catalog, but its objects are randomly distributed with constant number of objects per solid angle (taking into account the spherical geometry).
To measure the real space correlation function, the random catalog must also have the same redshift distribution as the data catalog. To produce this, we first determine a smooth function that fits the overall redshift distribution of the spectroscopic sample and construct the random catalog to match. We had difficulty finding a single function that fit the entire distribution of $R<24.1$ galaxies in the Millennium mock from $z=0.1$ to $z=1.5$, so we used different functional forms over different redshift ranges. The best fit resulted from using $\phi_s(z) \sim z^2\exp(-z/z_o)$ for $0<z<1.03$ and $\phi_s(z) \sim A (1+z)^{\beta}$ for $z>1.03$. We bin the objects in each field into bins of $\Delta z=0.04$. Combining the distributions of all 24 fields and fitting via least-squares gave values of $z_o=0.232\pm0.003$ and $\beta=-2.74\pm0.18$. We then used these values, choosing a value of $A$ to force continuity at $z=1.03$, to define the redshift distribution used to generate the random catalogs. The random catalog for each field contained $\sim10$ times the number of objects as its corresponding data catalog.
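Concretely, the random-catalog redshifts can be drawn from this fitted distribution by inverse-CDF sampling. A minimal Python sketch (with the normalization constant $A$ fixed by continuity at $z=1.03$, as described above; names are illustrative and this is not the exact code used here):
\begin{verbatim}
import numpy as np

def phi_s(z, z_o=0.232, beta=-2.74, z_break=1.03):
    # Fitted dN/dz of the spectroscopic sample (arbitrary norm);
    # A enforces continuity of the two pieces at z_break.
    A = z_break**2 * np.exp(-z_break / z_o) / (1.0 + z_break)**beta
    return np.where(z < z_break,
                    z**2 * np.exp(-z / z_o),
                    A * (1.0 + z)**beta)

def draw_random_z(n, z_min=0.10, z_max=1.5, seed=None):
    # Inverse-CDF sampling of phi_s(z) for the random catalog.
    rng = np.random.default_rng(seed)
    z_grid = np.linspace(z_min, z_max, 2000)
    cdf = np.cumsum(phi_s(z_grid))
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n), cdf, z_grid)
\end{verbatim}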
After constructing the random catalogs, we calculate the pair counts in each redshift bin. For each field, both the data and random catalogs are divided into subsamples ("z-bins") according to their redshift, and DD, DR, and RR are calculated for each bin of $r_p$ and $\pi$ using only objects within a given z-bin. In the $r_p$ direction we binned the separations in $\log(r_p)$ over the range $-3 <\log(r_p)< 2.5$ with $\Delta\log(r_p)=0.1$, where $r_p$ is in $h^{-1}$Mpc. In the $\pi$ direction we binned the separations over the range $0 <\pi< 30\ h^{-1}$Mpc, with $\Delta\pi=1.0\ h^{-1}$Mpc. We calculated the pair counts in 10 z-bins covering the range $0.11<z<1.4$, where the size and location of each z-bin was selected so that there were approximately the same number of objects in each one.
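To make the procedure explicit, the pair counts and the estimator of equation \ref{eq:xi} can be sketched as follows. This brute-force Python illustration uses a flat-sky angular separation together with equations \ref{eq:rp} and \ref{eq:pi}; the functions $D(z)$ and $dl/dz$ are assumed to be supplied externally (e.g. from a cosmology package), and it is not the exact code used here:
\begin{verbatim}
import numpy as np

def pair_counts(ra1, dec1, z1, ra2, dec2, z2, D, dl_dz,
                rp_edges, pi_edges):
    # Histogram of pair separations in (r_p, pi); angles in radians.
    # For an autocorrelation, self-pairs and double counting must
    # additionally be removed (omitted here for brevity).
    dtheta = np.hypot(
        np.subtract.outer(ra1, ra2) * np.cos(dec1)[:, None],
        np.subtract.outer(dec1, dec2))
    z_mean = 0.5 * np.add.outer(z1, z2)
    rp = D(z_mean) * dtheta
    pi = np.abs(np.subtract.outer(z1, z2)) * dl_dz(z_mean)
    counts, _, _ = np.histogram2d(rp.ravel(), pi.ravel(),
                                  bins=[rp_edges, pi_edges])
    return counts

def landy_szalay(DD, DR, RR, n_data, n_rand):
    # xi = [DD (N_R/N_D)^2 - 2 DR (N_R/N_D) + RR] / RR
    f = n_rand / n_data
    return (DD * f**2 - 2.0 * DR * f + RR) / RR
\end{verbatim}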
When interpreting correlation measurements for the spectroscopic sample, we must take into account the effects of redshift-space distortions \citep{1998ASSL..231..185H}. Since these only affect distance measurements along the line of sight, we integrate $\xi(r_p,\pi)$ in the $\pi$ direction, which gives the projected correlation function, $w_p(r_p)$. Modeling $\xi(r_p,\pi)$ as a power law and solving for $w_p(r_p)$ analytically gives
\begin{align}
\label{eq:wp}
w_p(r_p)&=2\int^\infty_0\xi[(r^2_p+\pi^2)^{1/2}]d\pi \\
\label{eq:wpa}
&=r_p\left(\frac{r_0}{r_p}\right)^{\!\!\gamma} H(\gamma) ,
\end{align}
where $H(\gamma)$ is defined following equation \ref{eq:wsp}. We thus can recover $\gamma_{ss}(z)$ and $r_{0,ss}(z)$ by fitting a power-law model to $w_p(r_p)$ in each z-bin, allowing us to measure how the correlation function evolves with redshift. Because signal-to-noise is poor at large scales for our field geometry, we fit $w_p(r_p)$ only up to $r_p=10\ h^{-1}$Mpc. The lower limit of $r_p$ used for the fit varied with redshift. We found that in the highest redshift bins the behavior of $w_p(r_p)$ diverged from a power law, likely because the semi-analytic model does not populate group-mass halos with enough blue galaxies compared to DEEP2 data \citep{2008ApJ...672..153C}. Hence, for $z<0.8$ we fit over the range $0.1<r_p<10\ h^{-1}$Mpc, while for $z>0.8$ we fit over $1.0<r_p<10\ h^{-1}$Mpc.
We cannot measure $\xi(r_p,\pi)$ to infinite line-of-sight separations, so to calculate $w_p(r_p)$ we must integrate $\xi(r_p,\pi)$ out to $\pi_{max}=30\ h^{-1}$Mpc and then apply a correction for the fraction of the integral missed. In fact, in measuring $w_p(r_p)$, instead of evaluating $\xi(r_p,\pi)$ and then integrating, we simply summed the paircounts in the $\pi$ direction so DD, DR, and RR are functions of $r_p$ only; this method yielded more robust results. From equation \ref{eq:wp} (integrating to $\pi_{max}$ instead of infinity) we find
\begin{equation}
\label{eq:wp2}
w_p(r_p)= 2\left(\!\frac{1}{RR}\!\left[DD\!\left(\!\frac{N_R}{N_D}\!\right)^{\!\!2}\!-2DR\left(\!\frac{N_R}{N_D}\!\right)+RR\right]\right)\pi_{max},
\end{equation}
where DD, DR, and RR are the paircounts summed over the $\pi$ direction. For the correction, we first calculate $w_p(r_p)$ by summing the pair counts out to $\pi_{max}$, and then fit for $r_0$ and $\gamma$ using the analytic solution given in equation \ref{eq:wpa}. Using those parameters, we calculate $\int^{\pi_{max}}_0\xi(r_p,\pi)d\pi/\int^{\infty}_0\xi(r_p,\pi)d\pi$. We divide the observed $w_p(r_p)$ by this quantity and refit for $r_0$ and $\gamma$. This process is repeated until convergence is reached.
\subsection{Autocorrelation of the Photometric Sample}
\label{sec:autophot}
Since we assume the photometric sample contains no redshift information (or, more realistically, that any available redshift information was already exploited by placing objects into a redshift bin), we determine its autocorrelation parameters by measuring the angular autocorrelation function, $w_{pp}(\theta)$, and relating it to $r_{0,pp}$ using Limber's equation \citep{1980lssu.book.....P}:
\begin{equation}
\label{eq:wpp}
w_{pp}(\theta)=H(\gamma_{pp})\theta^{1-\gamma_{pp}}\!\!\int^\infty_0\!\!\phi^2_p(z)r^{\gamma_{pp}}_{0,pp}\frac{D(z)^{1-\gamma_{pp}}}{dl/dz}dz,
\end{equation}
where $\gamma_{pp}$ may be measured directly from the shape of $w_{pp}(\theta)$. We again measure the angular autocorrelation of the photometric sample using a Landy \& Szalay estimator:
\begin{equation}
w_{pp}(\theta)=\frac{1}{RR}\!\left[DD\left(\!\frac{N_R}{N_D}\!\right)^{\!\!2}\!-2DR\left(\!\frac{N_R}{N_D}\!\right)+RR\right] ,
\end{equation}
where DD, DR, and RR are the paircounts as a function of separation, $\theta$, and $N_D$ and $N_R$ are the number of objects in the data and random catalogs for the field. For angular correlation measurements the random catalog consists of objects randomly distributed on the sky in the same shape as the data catalog. Again, the random catalog is $\sim10$ times larger than the data catalog. For each sample, we calculated the $\theta$ separation of every pair and binned them in $\log(\theta)$ over the range $-3 <\log(\theta)< 0.4$ with $\Delta\log(\theta)=0.1$, where $\theta$ is measured in degrees.
The angular correlation function can be related to the spatial correlation function: $w_{pp}(\theta)=A_{pp}\theta^{1-\gamma_{pp}}$, where $A_{pp} \sim r^{\gamma_{pp}}_{0,pp}$ \citep{1980lssu.book.....P}. However, since the observed mean galaxy density in a field is not necessarily representative of the global mean density, our measurements of $w_{pp}(\theta)$ need to be corrected by an additive factor known as the integral constraint. To estimate this, we fit $w_{pp}(\theta)$ using a power law minus a constant, e.g. $w_{pp}(\theta)=A_{pp}\theta^{1-\gamma_{pp}}-C_{pp}$, where $C_{pp}$ is the integral constraint. For measuring the parameters we fit over the range $0.001^{\circ} <\theta< 0.1^{\circ}$. We found that fitting over this smaller range reduced the error in the amplitude measurements, although the error in the integral constraint (which is essentially a nuisance parameter) increases. For autocorrelation measurements this has little impact. We use the measured $\gamma_{pp}$, along with the parameters of the spectroscopic sample ($\gamma_{ss}(z)$ and $r_{0,ss}(z)$) and an initial guess of $r_{0,pp}$ to determine an initial guess of $r^{\gamma_{sp}}_{0,sp}$, employing the linear biasing assumption that $r^{\gamma_{sp}}_{0,sp}=(r^{\gamma_{ss}}_{0,ss}r^{\gamma_{pp}}_{0,pp})^{1/2}$.
We expect the correlation length of the photometric sample, $r_{0,pp}$, to be a function of redshift, as both the underlying dark matter correlation function and the large-scale structure bias of the sample will evolve with $z$, both in the real universe and in our mock catalogs. To account for this, we assume the redshift dependence of the scale length, $r_0$, will be similar for both the photometric and spectroscopic samples (we considered several alternatives, but this yielded the best results); for our calculations we set $r_{0,pp}(z) \propto r_{0,ss}(z)$, with an initial guess of $r_{0,pp}(z)=r_{0,ss}(z)$. We then refine our initial guess for $r^{\gamma_{sp}}_{0,sp}$ by measuring the angular cross\hyp{}correlation function in each redshift bin.
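The fit for $(A_{pp}, \gamma_{pp}, C_{pp})$ and the inversion of equation \ref{eq:wpp} for the proportionality constant in $r_{0,pp}(z) \propto r_{0,ss}(z)$ can be sketched as follows (illustrative Python, reusing $H(\gamma)$ from the previous sketch; $\theta$, and hence $A_{pp}$, must be expressed in radians here, and $D(z)$, $dl/dz$, and $r_{0,ss}(z)$ are user-supplied functions):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_w_theta(theta, w):
    # Fit w(theta) = A theta^(1-gamma) - C, with C the
    # integral constraint.
    model = lambda t, A, g, C: A * t**(1.0 - g) - C
    (A, g, C), _ = curve_fit(model, theta, w, p0=[1e-3, 1.6, 1e-3])
    return A, g, C

def limber_ratio(A_pp, g_pp, z, phi_p, r0_ss, D, dl_dz):
    # Solve Limber's equation for c in r_0pp(z) = c * r_0ss(z).
    integrand = (phi_p**2 * r0_ss(z)**g_pp
                 * D(z)**(1.0 - g_pp) / dl_dz(z))
    return (A_pp / (H(g_pp) * np.trapz(integrand, z)))**(1.0 / g_pp)
\end{verbatim}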
\subsection{Cross\hyp{}correlation and $\phi_p(z)$}
\label{sec:crossphi}
To find $w_{sp}(\theta,z)$, we measure the cross\hyp{}correlation between objects in spectroscopic z-bins with all objects in the photometric sample. We bin the spectroscopic sample over the range $0.19<z<1.39$ with a bin size of $\Delta z=0.04$ and measure $w_{sp}(\theta)$ for each bin using the estimator
\begin{align}
w_{sp}(\theta)=&\frac{1}{R_sR_p}\!\left[D_sD_p\!\left(\!\frac{N_{R_s}N_{R_p}}{N_{D_s}N_{D_p}}\!\right)\!-D_sR_p\!\left(\!\frac{N_{R_s}}{N_{D_s}}\!\right) \right. \nonumber \\
&\left. -R_sD_p\!\left(\!\frac{N_{R_p}}{N_{D_p}}\!\right)+R_sR_p\right] ,
\end{align}
where $D_sD_p$, $D_sR_p$, $R_sD_p$, and $R_sR_p$ are the cross pair counts between samples as a function of $\theta$ separation, and $N$ is the number of objects in each sample. The cross pair counts are calculated by measuring the observed number of objects from one sample around each object in another sample. For example, $D_sD_p$ is the number of objects in the photometric sample around each spectroscopic object as a function of separation. For this measurement, each sample (the objects in the spec-z bin and the photometric sample) has their own random catalog that is $\sim10$ times bigger than their corresponding data catalog. These are once again constructed by randomly distributing objects on the sky in the same shape as the data catalog.
For each z-bin we measured $w_{sp}(\theta)$ in logarithmic bins of 0.1 in $\log(\theta)$ over the range $-3 <\log(\theta)< 0.4$, with $\theta$ measured in degrees. As with the autocorrelation function, we fit $w_{sp}(\theta)=A_{sp}\theta^{1-\gamma_{sp}}-C_{sp}$; the integral constraint is nonnegligible in these measurements. Again we fit over the range $0.001^{\circ} <\theta< 0.1^{\circ}$ to reduce the error in the amplitude measurements. In some z-bins, particularly where the amplitude, $A_{sp}$, is small, we found a significant degeneracy between $A_{sp}$ and $\gamma_{sp}$ when fitting. One can understand this as there being a pivot scale at which clustering is best constrained; one can simultaneously vary $A_{sp}$ and $\gamma_{sp}$ and still match $w_{sp}$ at that scale. To remove this degeneracy, we fixed $\gamma_{sp}$ in each bin, and only fit for the amplitude and integral constraint. Since the clustering of the samples with each other is expected to be intermediate to the intrinsic clustering of each sample, we estimated $\gamma_{sp}$ with the arithmetic mean of $\gamma_{pp}$ and $\gamma_{ss}$. Using $A_{sp}$ and $\gamma_{sp}$, as well as the initial guess for $r^{\gamma_{sp}}_{0,sp}$, we determine an initial guess of the redshift distribution $\phi_p(z)$. Rewriting equation \ref{eq:wsp} gives
\begin{equation}
\label{eq:phi}
\phi_p(z) = \frac{dl/dz}{D(z)^{1-\gamma_{sp}}H(\gamma_{sp})r^{\gamma_{sp}}_{0,sp}}A_{sp}(z) .
\end{equation}
We then use the resulting $\phi_p(z)$, along with $A_{pp}$ and $\gamma_{pp}$, to redetermine $r_{0,pp}$ using Equation \ref{eq:wpp}, which we use to redetermine $r^{\gamma_{sp}}_{0,sp}$ and thus $\phi_p(z)$. This process is repeated until convergence is reached.
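Schematically, this full iteration combines equation \ref{eq:phi} with the Limber inversion above. The sketch below reuses the helper functions from the previous sketches and is again an illustration rather than the exact code used here:
\begin{verbatim}
import numpy as np

def recover_phi(z, A_sp, g_sp, r0_ss, g_ss, A_pp, g_pp,
                D, dl_dz, n_iter=10):
    c = 1.0                        # initial guess: r_0pp = r_0ss
    for _ in range(n_iter):
        # Linear biasing: r_0sp^g_sp = (r_0ss^g_ss r_0pp^g_pp)^(1/2).
        r0sp_gsp = np.sqrt(r0_ss(z)**g_ss(z) * (c * r0_ss(z))**g_pp)
        phi = (A_sp * dl_dz(z)
               / (D(z)**(1.0 - g_sp) * H(g_sp) * r0sp_gsp))
        phi = np.clip(phi, 0.0, None)
        phi /= np.trapz(phi, z)    # normalize to unit integral
        c = limber_ratio(A_pp, g_pp, z, phi, r0_ss, D, dl_dz)
    return phi
\end{verbatim}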
\begin{figure*}[t]
\centering
\includegraphics[totalheight=5.9in]{f2}
\caption{The median value of $10^4$ measurements of the projected two-point correlation function of the spectroscopic sample, $w_p(r_p)$, in each redshift bin. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 total fields. Error bars show the standard deviation of the measurements; i.e., they indicate the expected errors from a spectroscopic survey of four 1 square degree fields. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. At high redshift $w_p(r_p)$ deviates from a power law, whereas observed samples do not, due to the semi-analytic model not containing enough blue galaxies in group-mass halos. The solid line depicts a power-law model for $w_p(r_p)$, using the median values of the fit parameters $r_{0,ss}$ and $\gamma_{ss}$ across the $10^4$ measurements. The dashed line is the same in all panels; it is included to help make changes in the slope (i.e., $\gamma_{ss}$) and the amplitude (i.e., $r_{0,ss}$) with redshift clearer. We can see that changes in the amplitude with redshift are much more significant than changes in the slope.}
\label{fig:wprp}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f3}
\caption{The correlation function parameters resulting from power-law fits to $w_p(r_p)$, $r_{0,ss}$ and $\gamma_{ss}$, as a function of redshift. The points are the median values of $10^4$ measurements, and hence correspond to the parameters used to generate the lines in Fig. \ref{fig:wprp}; the error bars are the standard deviation of each parameter amongst the measurements. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 total fields. While both parameters decrease with redshift, we see that changes in $r_{0,ss}$ are substantially greater than changes in $\gamma_{ss}$.}
\label{fig:rogm}
\end{figure}
\section{Results}
\label{sec:results}
For the remainder of the paper, we will frequently refer to making a ``measurement'' of the correlation functions and $\phi_p(z)$. Each measurement is done by selecting four fields at random out of the 24 mock catalogs, summing their pair counts, and calculating all necessary quantities; no information on 'universal' mean values of any measured quantity is used, but rather only that available from the chosen four fields. We select four fields in order to emulate redshift surveys like DEEP2 and VVDS, in which data is typically obtained from of order four separate fields; hence a 'measurement' in our parlance is roughly equivalent to utilizing the information coming from a single survey. To obtain the following results, we made $10^4$ measurements; we used the median values to evaluate statistical biases in a given quantity and the standard deviation to evaluate random uncertainties. In each of the following plots, the points are the median values and the error bars are the standard deviations, which give the error on a single measurement. Because (given the large number of measurements) these medians should closely match the mean of the 24 fields, the standard error in a plotted point should be smaller than the plotted error bars by a factor of $\sqrt{6}$.
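As an illustration of this procedure, a set of measurements can be emulated by resampling the per-field results (hypothetical Python; \texttt{per\_field} stands for any per-field array of pair counts or derived quantities to be combined):
\begin{verbatim}
import numpy as np

def measurements(per_field, n_fields=4, n_meas=10000, seed=None):
    # per_field: per-field results, shape (24, ...). Each
    # "measurement" sums a random subset of n_fields fields.
    rng = np.random.default_rng(seed)
    sums = np.array([
        per_field[rng.choice(len(per_field), n_fields,
                             replace=False)].sum(axis=0)
        for _ in range(n_meas)])
    return np.median(sums, axis=0), sums.std(axis=0)
\end{verbatim}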
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f4}
\caption{The median value of $10^4$ measurements of the two-point correlation function of the photometric sample, $w_{pp}(\theta)$, corrected for the integral constraint. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 mock catalogs. Error bars show the standard deviation of the measurements. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. The solid line is the fit to $w_{pp}(\theta)$ using the median values of the fit parameters $A_{pp}$ and $\gamma_{pp}$; a power-law model provides an excellent fit to the data.}
\label{fig:wpp}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[totalheight=5.9in]{f5}
\caption{The median value of $10^4$ measurements of the cross\hyp{}correlation between the photometric and spectroscopic samples, $w_{sp}(\theta)$, in each redshift bin, corrected for the integral constraint. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 total fields. Error bars show the standard deviation of the measurements. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. The solid line is the fit to $w_{sp}(\theta)$ using the median values of the fit parameters $A_{sp}$ and $\gamma_{sp}$. The dashed line is to help make changes in the amplitude, $A_{sp}(z)$, with redshift clearer; in the fits shown the slope, $\gamma_{sp}(z)$, is forced to be constant with $z$. It is clear that the amplitude of the correlation is much greater in the central region of the redshift range where there are more photometric objects. The error on $w_{sp}(\theta)$ does not vary strongly with redshift, but rather the errors appear larger where there are few objects as a consequence of plotting on a logarithmic scale: i.e., the amplitude of the correlation is smaller in those regions, which leads to a much larger fractional error in $w_{sp}(\theta)$, and hence much larger error in log($w_{sp}$), where $w_{sp}$ is small, even though the error in $w_{sp}$ itself remains unchanged.}
\label{fig:wsp}
\end{figure*}
It should be noted that we are ignoring the weak cross correlation that should result from gravitational lensing by large-scale structure \citep{2008ApJ...684...88N,2010MNRAS.401.1399B}. These correlations can be predicted directly from galaxy number counts \citep{2005ApJ...633..589S}; planned surveys such as LSST will extend fainter than their nominal depth over limited regions of sky \citep{2009arXiv0912.0201L}, so no extrapolation will be required. It should also be possible to use the initial estimate of $\phi_p(z)$ to predict the lensing induced cross\hyp{}correlation signal at a given redshift, and therefore iteratively remove its contribution. Because these correlation effects are weak, straightforward to deal with, and not present in the mock catalogs available to us, we do not consider them further here.
To determine the evolution of the autocorrelation parameters of the spectroscopic sample we measured $w_p(r_p)$ in z-bins of varying widths. Fig. \ref{fig:wprp} shows the median and standard deviation of $w_p(r_p)$ for $10^4$ measurements in each spectroscopic z-bin, with the correction for finite $\pi_{max}$ applied as described above. We then fit each measurement of $w_p(r_p)$ for the autocorrelation parameters. The solid lines in Fig. \ref{fig:wprp} show the results of equation \ref{eq:wpa} corresponding to the median $r_{0,ss}$ and $\gamma_{ss}$ for all measurements in a given z-bin, while Fig. \ref{fig:rogm} shows the accuracy with which we can measure the evolution of $r_{0,ss}$ and $\gamma_{ss}$ with redshift. The decrease of both parameters with redshift is consistent with measurements in real samples, which show that bluer galaxy samples have smaller $r_0$ and $\gamma$ \citep{2008ApJ...672..153C}; a constant observed magnitude limit will correspond to selection in a bluer and bluer rest-frame band as redshift goes up, increasingly favoring bluer objects for selection.
The autocorrelation parameters for the photometric sample are determined from the shape of $w_{pp}(\theta)$. Fig. \ref{fig:wpp} shows the median and standard deviation of $10^4$ measurements of $w_{pp}(\theta)$, corrected for the integral constraint. A fit to each measurement gives estimates of autocorrelation parameters. Taking the median values and standard deviations gives $A_{pp}=5.48\times10^{-4}\pm2.73\times10^{-4}$ and $\gamma_{pp}=1.55\pm0.045$. The solid line in Fig. \ref{fig:wpp} corresponds to these median values. The scale length of the photometric sample, $r_{0,pp}(z)$, was assumed to be proportional to $r_{0,ss}(z)$; this yielded superior results to other simple assumptions. The proportionality constant may then be found using an initial guess of $r_{0,pp}=r_{0,ss}$ to calculate $\phi_p(z)$ using cross\hyp{}correlation techniques, leading to a refined estimate of $r_{0,pp}$ using Limber's equation (eqn. \ref{eq:wpp}). That refined $r_{0,pp}$ is then used to make an improved measurement of $\phi_p(z)$, which is used to obtain a yet-improved measure of $r_{0,pp}$, etc. After convergence was reached, we found that on average $r_{0,pp}/r_{0,ss}=1.068$.
To determine the evolution of the cross\hyp{}correlation parameters, we measure the angular cross\hyp{}correlation, $w_{sp}(\theta, z)$, between objects in successive spectroscopic z-bins and the photometric sample. Fig. \ref{fig:wsp} shows the median and standard deviation of $w_{sp}(\theta)$ for $10^4$ measurements in each z-bin, corrected for the integral constraint. Fitting each measurement for the cross\hyp{}correlation parameters with fixed $\gamma_{sp}$ as described above and taking the median gives the amplitude, $A_{sp}(z)$, shown in Fig. \ref{fig:asp}. The solid lines in Fig. \ref{fig:wsp} correspond to the median of the best-fit parameters from each measurement.
Combining the intrinsic clustering information from the autocorrelation parameters of each sample with the amplitude of the cross\hyp{}correlation, $A_{sp}(z)$, together with the basic cosmology, gives the recovered redshift distribution. We found that a linear fit of $r_{0,ss}$ and $\gamma_{ss}$ versus z resulted in a better recovery of $\phi_p(z)$ than using each bin's value directly, resulting in a $\sim32\%$ reduction in the $\chi^2$ of the final reconstruction as compared to the true redshift distribution. Fitting the correlation function over a limited $\theta$ range, as described in $\S$ \ref{sec:crossphi}, reduced the measured error in $\phi_p(z)$ for each z-bin by $\sim25\%$ on average, reducing the $\chi^2$ in comparing the reconstructed and true redshift distributions by $\sim30\%$. We also tried modeling $\gamma_{sp}$ as constant with z using the arithmetic mean of $\gamma_{ss}(z=0.77)$ and $\gamma_{pp}$. This resulted in a $\sim20\%$ increase in the $\chi^2$ of the final fit.
Fig. \ref{fig:phi} shows the median and standard deviation of $10^4$ measurements of $\phi_p(z)$ compared to the actual distribution. To determine the actual distribution, we found the mean true distribution of the four fields corresponding to each measurement and took the median across the $10^4$ measurements; this should accurately match the true mean of the redshift distributions over the 24 fields. Each measurement was normalized so that integrating $\phi_p(z)$ over the measured redshift range gives unity before the median was taken. It is important to note that the reconstruction techniques we have implemented thus far will recover the actual redshift distribution of objects in the photometric sample. This will in general deviate from the true, universal redshift distribution of objects of that type due to sample/cosmic variance. We describe and test methods for recovering the underlying universal distribution in $\S\ref{sec:errorest}$.
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f6}
\caption{The median value of $10^4$ measurements of $A_{sp}$, the amplitude of $w_{sp}$, in each redshift bin. Each plotted point corresponds to the amplitude of one of the model lines shown in Fig. \ref{fig:wsp}. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 mock catalogs. Error bars show the standard deviation of the measurements. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. The amplitude is larger in the central region of the redshift range where there are more photometric objects, which is expected since the degree to which the two samples overlap in redshift contributes to the strength of the cross\hyp{}correlation function.}
\label{fig:asp}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f7}
\caption{Plot of the redshift distribution recovered using cross\hyp{}correlation techniques. The solid line is the actual distribution of the photometric sample (combining all 24 fields), while the points are the median reconstructed values from $10^4$ measurements. Error bars show the standard deviation of the recovered distribution when performing cross\hyp{}correlation reconstruction in 4 $0.5 \times 2$ deg fields, emulating the data available from existing deep redshift surveys. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 mock catalogs. The recovered distribution follows the true distribution closely, even picking up the irregular dip due to sample variance (also known as cosmic variance) at the peak.}
\label{fig:phi}
\end{figure}
We also looked at how well redshift distributions may be recovered in a single, 1 square degree field. For each field, the correlation functions were calculated using only the information from that field. To weight each bin when fitting for correlation-function parameters, the fit was calculated using errors given by the standard deviation of the correlation function in each $\theta$ bin over the 24 fields. This mimics the common situation where we have few fields with data and errors are determined from simulations. For a single field, a linear fit for the evolution of the spectroscopic-sample correlation function parameters was not a good model, so we used the calculated parameters in each z-bin. Fig. \ref{fig:phising} shows the recovered distribution, $\phi_p(z)$, in each of the 24 fields, compared to the true redshift distribution of the photometric sample in that field.
\begin{figure*}[t]
\centering
\includegraphics[totalheight=5.9in]{f8}
\caption{Plot of the recovered redshift distribution for each of the 24 fields, using only pair counts from a single field in the reconstruction. The error bars in the first plot are the standard deviation of $\phi_{p,rec}(z)-\phi_{p,act}(z)$ amongst the 24 fields; they should be representative of the expected error for each panel. For each field, all errors used in fitting are based on standard deviations across the 24 fields. This mimics a common situation where we have only one field, but use errors determined from simulations to weight points for fitting. The reconstruction generally captures the variation amongst fields due to sample/cosmic variance.}
\label{fig:phising}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f9}
\caption{The variance of $10^4$ measurements of the autocorrelation of the photometric sample, $w_{pp}(\theta)$ (thick solid line), compared to predicted error terms from Bernstein 1994. The thick dashed line shows the sum of all the variance terms; it corresponds well to the observed variance save at the largest scales, where the Bernstein 1994 model is overly conservative (a consequence of the assumption made in that work that the angular separations considered are significantly smaller than the size of the field). From equation 38 in \cite{1994ApJ...424..569B}, the thin solid black line is the term that scales as $w^2$, corresponding to the variance in the integral constraint, which dominates at large $\theta$. The thin three-dot-dash line is the term that scales as $w^3$, and the thin dot-dash line is the term that scales as $\mathrm{1/N}$. The thin dashed black line is the term that scales as $\mathrm{1/N^2}$ and is comparable to the Poisson error, which dominates in the weak clustering formalism used by Newman (2008). The 'observed' variance in $w_{pp}(\theta)$ is much larger than the weak clustering prediction; the same is true of $w_{sp}(\theta)$, although to a lesser degree.}
\label{fig:errors}
\end{figure}
\subsection{Correlation Measurement Errors}
\label{sec:correrrors}
In the course of our calculation of the redshift distribution, we found that the error in $\phi_p(z)$ for each redshift bin was larger than expected from the error model used in \cite{2008ApJ...684...88N}, which uses the standard, classical weak-clustering formalism. This formalism predicts that Poisson uncertainties should dominate when the clustering strength (e.g. the value of $w_{sp}$) is small compared to unity \citep{1980lssu.book.....P}. Upon further investigation we determined that the error in all correlation function measurements were larger than expected according to this model, which led to the excess error in $\phi_p(z)$. This additional error is associated with extra variance terms identified by \cite{1994ApJ...424..569B}, which contribute significantly even in the weak-clustering limit, contrary to the classical assumption. These extra terms are dominated by the variance in the integral constraint, which has a significant impact if spectroscopic samples cover only a few square degrees of sky.
Fig. \ref{fig:errors} compares the four terms of the predicted error from Bernstein's error model to our measured error for $w_{pp}(\theta)$. Because Bernstein's error model assumes the separation is much smaller than the field size, the predicted variance follows our measured variance closely at small $\theta$ and then deviates as the separation becomes comparable to the field size. The integral constraint term dominates at large $\theta$ values. In order to calculate some of the variance terms of Bernstein's model, we required values for $q_3$ and $q_4$, which are used to relate the three- and four-point correlation functions to the two-point correlation function assuming hierarchical clustering. For this we used the values measured by Bernstein in simulation catalogs, $q_3=0.32$ and $q_4=0.1$ \citep{1994ApJ...424..569B}. These gave a better fit to our results than the values observed in local galaxy samples \citep{1992ApJ...394...87M,1992ApJ...390..350S}.
From Fig. \ref{fig:errors} we see that the measured variance can be orders of magnitude larger than errors predicted using the weak-clustering assumption (though the difference is a smaller factor for $w_{sp}$, whose errors dominate in reconstructing $\phi_p(z)$). This excess variance will have a significant impact on the error budgets of planned dark energy experiments (see the next section for quantitative estimates); it is dominated by the variance in the integral constraint, whose effect increases with decreasing field size, so errors may be greatly reduced by surveying galaxies over a larger area ($>\sim 100$ square degrees instead of $\sim 4$). For instance, the proposed BigBOSS survey \citep{2009arXiv0904.0468S} would provide a near-ideal sample for cross\hyp{}correlation measurements (using both galaxies and Lyman $\alpha$ absorption systems at redshifts up to $\sim 3$). We may also reduce this effect by using correlation function estimators that are less sensitive to the integral constraint. One example of such a robust estimator relies on convolving the two-point correlation function with a localized filter \citep{2007MNRAS.376.1702P}. We are currently testing the $\phi_p(z)$ reconstruction using this estimator to determine its impact on the error in our recovered redshift distribution.
\begin{figure*}[t]
\centering
$\begin{array}{@{\hspace{-.2in}}c@{\hspace{-.2in}}c}
\includegraphics[totalheight=3.4in]{f10a} &
\includegraphics[totalheight=3.4in]{f10b} \\
\end{array}$
\caption{Plots of the recovered and mean true redshift distribution of the 24 fields, after the overall redshift distribution of all galaxies in the mock catalogs, $dN/dz$, is divided out, as described in $\S\ref{sec:errorest}$. On the left is the reconstruction before applying a correction for sample/cosmic variance based on fluctuations in the spectroscopic redshift distribution in the fields observed, and on the right is the reconstruction after that correction. There is a significant improvement in the reconstruction. The plot on the right corresponds to the reconstruction of the probability an object falls in the photometric redshift bin as a function of its true $z$ (or, equivalently, the reconstruction of the photometric redshift error distribution), rather than reconstructing the actual redshift distribution (affected by sample/cosmic variance) of galaxies in a particular set of fields, as was depicted in Fig. \ref{fig:phi}. The solid line in each panel is the true normalized distribution of the photometric sample and the points are the median values of $10^4$ measurements. The true distribution matches the Gaussian selection function used for creating the photometric sample, by construction. Error bars show the standard deviation of the recovered distribution. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 total mock catalogs available. As shown here, if we know the amplitude of fluctuations from cosmic variance at a given redshift (using the variance in the distribution of spectroscopic galaxies), as well as the overall distribution of the parent sample (e.g. from combining redshift distributions from all photometric redshift bins), we can accurately reconstruct the true selection probability distribution.}
\label{fig:phicvar}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[totalheight=3.4in]{f11}
\caption{Results of cross\hyp{}correlation reconstruction of a selection function consisting of two equal-amplitude Gaussian peaks centered at $z=0.5$ and $z=1.0$, each with $\sigma_z=0.1$. The solid line is the true distribution of the photometric sample (combining all 24 fields), while the points are the median reconstructed values from $10^4$ measurements. Error bars show the standard deviation. The standard error in the plotted points is smaller than these error bars by a factor of $\sqrt{6}\ (2.45)$. Each measurement is made by averaging the paircounts of four fields selected at random from the 24 mock catalogs. This plot is analogous to the right panel of Fig. \ref{fig:phicvar}; as in that case, we are reconstructing the selection function of the sample rather than its redshift distribution. The effects of bias evolution should be greater in this case, however, as the sample is less concentrated in redshift. The recovery remains accurate here, despite the larger bias evolution and very different $\phi_p(z)$.}
\label{fig:phi2peak}
\end{figure}
\subsection{Error Estimates}
\label{sec:errorest}
In this subsection, we investigate the impact of these excess correlation function measurement errors on our ability to recover the parameters (i.e. the mean and $\sigma$) of the true redshift distribution for the photometric sample, and compare the results to Monte Carlo tests done in \cite{2008ApJ...684...88N}. For each measurement we have a recovered distribution and an associated true distribution for that set of four fields. We will test the recovery both of the underlying, universal distribution used to construct the photometric sample (i.e. $\langle z \rangle=0.75$, $\sigma_z=0.20$) and of the actual redshift distribution of the objects selected in a given set of fields (which will differ due to sample/cosmic variance; cf. $\S\ref{sec:results}$).
Before we can fit for Gaussian parameters, we must account for the fact that our photometric sample has a redshift distribution which differs from a true Gaussian because the total sample we drew from (with Gaussian probability as a function of $z$) was not uniformly distributed in redshift. One can think of the actual distribution of the photometric sample in a given bin as a product of three factors: the overall redshift distribution of all objects in the Universe (essentially, the rising curve in Fig. \ref{fig:ndist}); the fractional deviation from the universal mean of the number of objects in a given field at a given redshift, i.e. sample/cosmic variance; and the Gaussian function used to select objects for the photometric redshift bin.
The first two factors need to be removed from both the true and recovered distributions if we are to test the recovery of the third; this is implemented differently for each case. For the true distribution, we divide each measurement by the overall $dN/dz$ of all of the objects in the four fields used in that measurement. This removes the overall distribution shape as well as the fluctuations due to sample variance, and gives a true distribution that closely matches the Gaussian selection function applied to construct the sample.
In principle we could do the same for the recovered distribution, but that would not be practical in real applications: we can determine the overall shape of the photometric sample's redshift distribution using photometric redshifts, but photo-z errors will prevent measuring fluctuations in the number of objects within bins of small $\Delta z$. Hence,
we correct the recovered $\phi_p(z)$ using a low-order polynomial fit to the shape of the overall sample's $dN/dz$, but use the fluctuations (compared to a smooth fit) in the observed redshift distribution of the spectroscopic sample $dN_s/dz$, which will be known from the same observations used to perform cross\hyp{}correlation measurements, to correct for sample variance. This correction assumes that deviations from the mean in both samples behave similarly with redshift; we might expect their amplitude to scale with the large-scale-structure bias of a given sample, but we do not apply any correction for that here. In tests, we have found that a correction using fluctuations in $dN_s/dz$ was as effective in constraining parameters as one based on fluctuations in the $dN/dz$ of the overall sample our photometric subsample was selected from, and so we focus on the former, more realistic technique.
In more detail, we first divided the recovered distribution by a smooth fit (using a 5th-degree polynomial function) to the overall $dN/dz$ of the entire simulation averaged over all 24 fields. This eliminates gradients associated with the shape of the parent sample's overall redshift distribution without removing deviations due to sample variance. To correct for the latter, we need to quantify the fluctuations in the spectroscopic sample relative to a mean distribution. For this smooth, mean distribution, $\langle dN_s/dz\rangle$, we used the same fit to the redshift distribution of the spectroscopic sample averaged over all 24 fields which was employed to construct the random catalogs for autocorrelation measurements ($\S\ref{sec:autospec}$). Using a fit to a given set of four fields would make little difference, as the deviations from the smooth fit in a given redshift bin due to sample variance are much larger than the deviations between the smooth fit to 4 or 24 fields. We then calculate the ratio $dN_s/dz/\langle dN_s/dz\rangle$, where $dN_s/dz$ is the redshift distribution of the spectroscopic sample averaged over the four fields used in that measurement, and correct for sample variance by dividing each measurement of $\phi_p(z)$ by this quantity.
After applying these corrections to each distribution, each measurement is normalized so that its integral is unity, and we then fit for $\langle z \rangle$ and $\sigma_z$ using a normalized Gaussian fitting function. Fig. \ref{fig:phicvar} shows the median and standard deviation of $10^4$ measurements of the recovered $\phi_p(z)$ before and after correcting for sample variance. In both plots the fit to the overall $dN/dz$ is divided out. It is clear to the eye that the distribution corrected for sample variance is a better fit to the underlying selection function; more quantitatively, it reduces errors in determining the parameters of the Gaussian selection function by $\sim10\%$.
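To make the above correction procedure concrete, the following Python sketch illustrates the steps; all array names, the polynomial degree, and the fitting choices are our own illustrative assumptions rather than the actual analysis code.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, mean, sigma):
    return np.exp(-0.5 * ((z - mean) / sigma)**2) \
           / (sigma * np.sqrt(2 * np.pi))

def correct_and_fit(z, phi_rec, dndz_parent, dns_dz, dns_dz_mean):
    """z: bin centers; phi_rec: recovered phi_p(z);
    dndz_parent: overall dN/dz averaged over all fields;
    dns_dz: spectroscopic dN_s/dz of the fields used;
    dns_dz_mean: smooth fit to <dN_s/dz> over all fields."""
    # Remove the shape of the parent distribution with a smooth
    # 5th-degree polynomial fit, as in the text.
    smooth = np.polyval(np.polyfit(z, dndz_parent, deg=5), z)
    phi = phi_rec / smooth
    # Divide out sample-variance fluctuations traced by the
    # spectroscopic sample.
    phi /= dns_dz / dns_dz_mean
    # Normalize to unit integral and fit the Gaussian parameters.
    phi /= np.trapz(phi, z)
    (mean, sigma), _ = curve_fit(gaussian, z, phi, p0=[0.75, 0.2])
    return mean, sigma
\end{verbatim}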
We assess the reconstruction of the photometric sample in two ways. First, we compare the reconstructed parameters, $\langle z \rangle$ and $\sigma_z$, of the Gaussian selection function to the true values, known by construction. Second, we compare the reconstructed parameters of the selection function to the parameters of a Gaussian fit to the actual normalized distribution of each set of four fields used. The latter method should be more robust to systematic errors in the 'true' $dN/dz$ we divide each measurement by.
For the first test, where $\langle z\rangle_{true} = 0.75$ and $\sigma_{z,true} = 0.20$, we find $\langle \langle z \rangle_{rec} - \langle z \rangle_{true}\rangle = 7.796\times10^{-4}\pm7.415\times10^{-3}$ and $\langle \sigma_{z,rec} - \sigma_{z,true} \rangle = 8.140\times10^{-4}\pm8.545\times10^{-3}$, where as usual the values given are the median and standard deviation of all measurements, respectively. For the second test, where $\langle z\rangle_{true}$ and $\sigma_{z,true}$ are determined by a Gaussian fit to the true distribution of each measurement, we find $\langle \langle z\rangle_{rec}-\langle z\rangle_{true}\rangle = 7.259\times10^{-4}\pm7.465\times10^{-3}$ and $\langle \sigma_{z,rec}-\sigma_{z,true}\rangle = 4.724\times10^{-4}\pm8.546\times10^{-3}$. In all cases, the bias is not statistically significant (the standard error against which each bias estimate must be compared is smaller than the quoted standard deviations by a factor of $\sqrt{6}$), but in any event the overall bias of both parameters is considerably smaller than the associated random errors, and will therefore have little effect when added in quadrature. These errors are still larger than the estimated requirements for future surveys (i.e. $\sigma \sim 2-4\times10^{-3}$, as described in \S\ref{sec:intro}). For cross\hyp{}correlation techniques to meet these requirements, this excess error will need to be reduced. We discuss a few options for this in $\S\ref{sec:correrrors}$.
A number of choices we have made on how to model and measure correlation function parameters (e.g. using a fit for the dependence of the spectroscopic sample's autocorrelation parameters on z vs. using the values for a given z-bin directly; assuming $r_{0,pp} \propto r_{0,ss}$ vs. a constant $r_{0,pp}$; or allowing $\gamma_{sp}(z)$ to decrease with redshift vs. forcing a constant $\gamma_{sp}$) can affect both the bias and error in these measurements. We have tested reconstruction with alternate methods to those described here and found that the random errors in $\langle z \rangle$ and $\sigma_z$ are much more robust to these changes than the bias. When varying the three correlation parameters as described previously, the standard deviation of the measurements never varied by more than $\sim10\%$, but the bias in some cases increased significantly. For measurements of $\langle z \rangle$, the alternative parameter models yielded biases of $0.006-0.009$, making them statistically significant compared to the random errors. For $\sigma_z$, the biases under the different scenarios were of similar order of magnitude as our standard method, except for the case of using the measured values for the spectroscopic correlation function parameters ($r_0$ and $\gamma$) in each z-bin instead of a fit. This yielded a bias in $\sigma_z$ of $\sim-0.009$. From this we see that the methods used to measure correlation parameters need to be considered carefully, since inferior methods can cause the bias to become comparable to random errors.
From equation 13 in \cite{2008ApJ...684...88N}, the predicted errors in $\langle z \rangle$ using the weak clustering formalism are essentially identical to the errors in $\sigma_z$; that is true to $\sim 20$\% in our results. This error is a function of $\sigma_z$, as well as the surface density of photometric objects on the sky, $\Sigma_p$, the number of objects per unit redshift of the spectroscopic sample, $dN_s/dz$, and the cross correlation parameters, $\gamma_{sp}$ and $r_{0,sp}$. We use the mean values of these parameters from our catalogs and find that the predicted error on both parameters is $\sigma=1.064\times10^{-3}$. This is considerably smaller than our measured error, which is not surprising given the extra error terms in the correlation function discussed in $\S\ref{sec:correrrors}$.
Our analysis throughout this paper has considered the case of a single-peaked, Gaussian selection function for placing objects in a photometric bin. However, different distributions would yield similar results, as the error in the recovery of $\phi_p(z)$ at a given redshift depends primarily on the characteristics of the spectroscopic sample and the overall size of the photometric sample, but not $\phi_p(z)$ itself \citep{2008ApJ...684...88N}. We illustrate this in Fig. \ref{fig:phi2peak}, where we have applied the same analysis techniques described above (and laid out in the recipe in \S\ref{sec:conclusion}) for a selection function that consists of two equal-amplitude Gaussian peaks centered at $z=0.5$ and $z=1.0$, each with $\sigma_z=0.1$; this figure can be compared to the right panel of Fig. \ref{fig:phicvar}. We note that, since in this scenario the objects selected are less concentrated in redshift, the effects of bias evolution (as predicted by the semi-analytic models used) should be greater here than in our standard case, but our recovery remains accurate.
\section{Conclusion}
\label{sec:conclusion}
Section \ref{sec:method} has described in detail the steps we took to recover the redshift distribution, $\phi_p(z)$, of a photometric sample by cross\hyp{}correlating with a spectroscopic sample of known redshift distribution. We will now summarize the procedure used to make this calculation, to facilitate its application to actual data sets.
\begin{list}{\labelitemi}{\leftmargin=1em}
\item \textbf{Obtain the necessary information for each sample: RA, dec, and redshift for the spectroscopic sample, and RA and dec for the photometric sample.}
\item \textbf{Create the random catalogs for each sample. (\S \ref{sec:autospec}-\ref{sec:crossphi})}
\item \textbf{Calculate the data-data, data-random, and random-random paircounts for each correlation function.}
\begin{list}{\labelitemi}{\leftmargin=1em}
\item For $w_p(r_p)$: bin the spectroscopic sample and its corresponding random catalog in redshift. In each spectroscopic z-bin, calculate $\Delta r_p$ and $\Delta \pi$ for each pair and bin the pair separations into a grid of $\log(r_p)$ and $\pi$. Then sum the paircounts in the $\pi$ direction. (\S \ref{sec:autospec})
\item For $w_{pp}(\theta)$: using the '$p$' sample and its random catalog, calculate $\Delta \theta$ for each pair and bin the pair separations into log($\theta$) bins. (\S \ref{sec:autophot})
\item For $w_{sp}(\theta,z)$: bin the spectroscopic sample and its corresponding random catalog in redshift. For each spectroscopic z-bin, calculate the pair separations, $\Delta \theta$, for pairs between the '$s$' and '$p$' samples and their random catalogs and bin them into $\log(\theta)$ bins. (\S \ref{sec:crossphi})
\end{list}
\item \textbf{Use the paircounts to calculate the correlation functions using standard estimators (e.g. Landy \& Szalay). (\S \ref{sec:autospec}-\ref{sec:crossphi})}
\item \textbf{Calculate the parameters of $w_p(r_p)\ (r_{0,ss}(z), \gamma_{ss}(z))$ and $w_{pp}(\theta)\ (A_{pp}, \gamma_{pp})$ by fitting as described above. (\S \ref{sec:autospec}-\ref{sec:autophot})}
\item \textbf{Use the autocorrelation parameters along with an initial guess of $r_{0,pp}$ (e.g. $r_{0,pp}\sim r_{0,ss}$) to calculate $r^{\gamma_{sp}}_{0,sp}(z)=(r^{\gamma_{ss}}_{0,ss}r^{\gamma_{pp}}_{0,pp})^{1/2}$. (\S \ref{sec:autophot})} This gave a more accurate reconstruction of $\phi_p(z)$ (reducing $\chi^2$ by 33\%) than the assumption $r_{0,pp}=$ constant; in fact, a calculation of $\xi_{pp}(r)$ from the simulation sample directly showed $r_{0,pp}$ to have similar behavior to $r_{0,ss}$. Using a linear fit of $r_{0,ss}(z)$ and $\gamma_{ss}(z)$ reduced $\chi^2$ by $\sim32\%$ compared to utilizing the noisier reconstructed values in each z-bin.
\item \textbf{Estimate $\gamma_{sp}=(\gamma_{ss}+\gamma_{pp})/2$}. Using this $\gamma_{sp}$, calculate the amplitude, $A_{sp}(z)$, of $w_{sp}(\theta,z)$ by fitting as described above. (\S \ref{sec:crossphi}) We fit over the range $0.001^{\circ} <\theta< 0.1^{\circ}$. We found that fitting over this smaller $\theta$ range resulted in smaller errors in the amplitude, $A_{sp}(z)$, which reduced the error in $\phi_p(z)$ for each z-bin by $\sim25\%$ on average. We fix $\gamma_{sp}$ because of degeneracies between $\gamma_{sp}$ and $A_{sp}$ when fitting them simultaneously. This degeneracy is especially strong in regions where $\phi_p(z)$ is small. We also tried modeling $\gamma_{sp}$ as constant with z using the arithmetic mean of $\gamma_{ss}(z=0.77)$ and $\gamma_{pp}$; however, that method increased the $\chi^2$ of the final fit by $\sim20\%$.
\item \textbf{Combining the results of the last two steps and the assumed cosmology, calculate $\phi_p(z)$ using equation \ref{eq:phi}. (\S \ref{sec:crossphi})} We also tried calculating $\phi_p(z)$ using the integrated cross\hyp{}correlation function, $\tilde w(z)$, integrating to an angle equivalent to a comoving distance $r_{max}=10h^{-1}$ Mpc (Newman 2008); however, that method produced inferior results.
\item \textbf{Using $\phi_p(z)$, along with the calculated $A_{pp}$ and $\gamma_{pp}$, in equation \ref{eq:wpp} gives a new $r_{0,pp}$, which is then used to recalculate $r^{\gamma_{sp}}_{0,sp}(z)$. Putting this back into equation \ref{eq:phi} gives a new $\phi_p(z)$. This is repeated until convergence is reached (see the sketch after this list). (\S \ref{sec:crossphi})}
\item \textbf{To recover the underlying/universal distribution of objects of the type selected for the photometric sample, rather than the distribution within the specific fields chosen for observation, correct for sample/cosmic variance using the fluctuations in the redshift distribution of the spectroscopic sample; i.e., construct a smooth function describing the overall redshift distribution of the spectroscopic sample, $\langle dN_s/dz\rangle$, and divide $\phi_p(z)$ by the ratio $dN_s/dz/\langle dN_s/dz\rangle$. (\S \ref{sec:errorest})}
\end{list}
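The iterative step in this recipe can be summarized by the following schematic Python sketch. Since equations \ref{eq:phi} and \ref{eq:wpp} are not reproduced here, the callables \texttt{phi\_from\_amplitude} and \texttt{r0pp\_from\_wpp} stand in for them; all names are hypothetical.
\begin{verbatim}
import numpy as np

def reconstruct_phi(A_sp, gamma_sp, r0_ss, gamma_ss, A_pp, gamma_pp,
                    phi_from_amplitude, r0pp_from_wpp,
                    r0_pp_init, tol=1e-6, max_iter=100):
    """Iterate between phi_p(z) and r_{0,pp} until convergence."""
    r0_pp, phi = r0_pp_init, None
    for _ in range(max_iter):
        # r_{0,sp}^gamma_sp = (r_{0,ss}^gamma_ss r_{0,pp}^gamma_pp)^(1/2)
        r0sp_gamma = np.sqrt(r0_ss**gamma_ss * r0_pp**gamma_pp)
        # Equation (phi): amplitudes + assumed cosmology -> phi_p(z)
        phi_new = phi_from_amplitude(A_sp, gamma_sp, r0sp_gamma)
        # Equation (wpp): phi_p(z) + (A_pp, gamma_pp) -> new r_{0,pp}
        r0_pp = r0pp_from_wpp(phi_new, A_pp, gamma_pp)
        if phi is not None and np.max(np.abs(phi_new - phi)) < tol:
            break
        phi = phi_new
    return phi_new
\end{verbatim}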
We have shown in this paper that by exploiting the clustering of galaxies at similar redshifts we can accurately recover the redshift distribution of a photometric sample using its angular cross\hyp{}correlation with a spectroscopic sample of known redshift distribution, using mock catalogs designed to match the DEEP2 Galaxy Redshift Survey. This test includes the impact of realistic bias evolution and cosmic variance. Our error estimates for the recovered mean and standard deviation of the distribution are larger than those predicted previously, but improvements could be obtained either by using more optimal correlation function estimators or by surveying the same number of galaxies distributed over a wider area of sky. Based on these tests we expect that this technique should be able to deliver the performance needed for dark energy experiments.
In a recent paper \citep{2009arXiv0910.3683S}, cross\hyp{}correlation techniques were applied to mock data generated by populating a single time slice of an N-body dark matter simulation using various halo models. They develop a pipeline for calculating the redshift distribution of a photometric sample using cross\hyp{}correlation measurements and the autocorrelation of a spectroscopic sample, $\xi_{ss}(r,z)$. They do not attempt to model the bias, although they do examine how varying the bias of the two samples affects the reconstruction (i.e. using radically different halo models). The catalogs constructed to test their method are significantly larger in volume than our individual mock catalogs, and while the number of objects in their photometric sample is comparable to ours, their spectroscopic sample is much smaller, which would be expected to lead to larger errors (Newman 2008), as observed. Another major difference is the use of a smoothness prior in reconstruction, which was not done here. While \citet{2009arXiv0910.3683S} found that cross\hyp{}correlation techniques were generally successful in reconstructing redshift distributions, these conclusions were primarily qualitative due to the limited sample sizes and source densities of the mock samples used, along with less-optimal correlation measurement techniques. In this paper, we have used simulations which include much less massive halos, allowing us to perform quantitative tests of cross\hyp{}correlation techniques using sample sizes and source densities comparable to those which will be used in realistic applications.
Several techniques for calibrating photometric redshifts using only photometric data have also been developed \citep{2006ApJ...651...14S, 2009arXiv0910.4181Z, 2010arXiv1002.2266B, 2009arXiv0910.2704Q}; in general, such techniques require priors or assumptions on biasing which can be relaxed or tested in spectroscopic cross\hyp{}correlation measurements. In \cite{2009arXiv0910.2704Q}, spectroscopic/photometric cross\hyp{}correlation techniques have now been applied to real data using the COSMOS dataset. Using data from a single field, they are able to determine typical photo-z uncertainties well, even when ignoring the effects of bias evolution. However, when constraining catastrophic photo-z errors, methods which ignore these effects should break down, as bias evolution should be a much greater problem over broad redshift intervals than in the core of the photo-z error distribution.
In future work, we will explore alternate methods of measuring correlation functions that are invariant to the variance in the integral constraint (e.g. \cite{2007MNRAS.376.1702P}). This should reduce errors in the measurement of the redshift distribution, which we found to be larger than expected due to extra variance terms in the correlation function measurements not considered previously. We also plan to test this technique with mock catalogs in which photometric redshifts have been 'measured' on simulated LSST photometry, rather than simply assuming a redshift distribution. We will also apply this method to real data using photometric and spectroscopic samples from the AEGIS survey \citep{2007ApJ...660L...1D}.
The authors wish particularly to thank Darren Croton for developing the mock catalogs used and making them publicly available. We would also like to thank Andrew Hearin, Arthur Kosowsky, David Wittman, Michael Wood-Vasey, Andrew Zentner, Tony Tyson, Gary Bernstein, Nikhil Padmanabhan, Dragan Huterer, Hu Zhan, and in particular Alexia Schulz for useful discussions during the course of this work, and also Ben Brown and Brian Cherinka for their technical expertise in performing these calculations. This work has been supported by the United States Department of Energy.
|
2,877,628,088,528 | arxiv | \section{Introduction}
For thousands of years, the epidemic spread of diseases has been one of the greatest threats to societies around the world. Today, with increased mobility and air traffic, diseases tend to spread faster and wider, often globally. The resulting pandemics can cost the lives of millions of people and disrupt the social, economic, public, political, and cultural life.
\par
These challenges are countered with new measurement and vaccination technologies \cite{VDD20}. In particular, digital technologies have enabled data-driven and AI-based methods \cite{AWSCCGCDKG20,VJKH20,bharadwaj2021computational}, which have become quite popular. Some organisations envision a future with ubiquitous health measurements, perhaps even using in-body sensors \cite{eltorai2016microchips}. In any case, a data-driven approach is now common. It often assumes that the overall picture of the situation will become more accurate with a greater amount of data.
\par
According to Chris Anderson's view of Big Data, the data deluge will make the scientific method obsolete \cite{ChrisAndersonWiredMagazine2008}. If we just had enough data, it would reveal the truth by itself. It would show us where the problems are and what we had to do to fix them. This point of view, however, has been questioned.
Conventional Big Data analytics suffers from a variety of possible problems, including the following:
(i) If there are correlations between two variables $A$ and $B$, it is not clear whether $A$ causes $B$ or $B$ causes $A$, or whether a third variable $C$ causes $A$ and $B$, while this matters a lot for the effectiveness of measures taken. For recent advances
in causal inference see \cite{pearl2009causal}.
(ii) Detected patterns in the data are not always meaningful~\cite{S13}.
They can be accidental, and the detected patterns may be results of overfitting \cite{H03,Y19}. Spurious correlations are, in fact, a wide-spread problem \cite{calude2017deluge}. An appropriate understanding of statistical learning theory is, therefore, crucial \cite{hastie2009elements}.
(iii) Measurements may be biased or the investigated sample non-representative. Therefore, patterns learned may be biased, too. This causes issues in many applications of AI~\cite{L18b}. For example, undesirable discrimination of women, people of color, and minorities has often been found when machine learning is applied to data of job applicants or face recognition \cite{RGMBLD20,L18}. However, biases are also relevant for testing strategies and modeling of epidemics~\cite{bottcher2021TestStatistics,bentley2021error,campbell2020bayesian}.
(iv) Classification errors are generally an issue. In particular, for almost every measurement method, there are errors of the first kind and errors of the second kind, i.e. false positives (``false alarms'') and false negatives (``alarms that do not go off'') \cite{colquhoun2014investigation,banerjee2009hypothesis}. As a result, the sensitivity of a measurement may be high, but its precision may be quite limited or even disappointing. A highly sensitive method would avoid that alarms don't go off, but may cause many false alarms. This is, for example, the case for many ``predictive policing'' approaches, where there are thousands of false positives for every real terrorist. A similar problem is expected to occur in connection with the tracing of potentially infected people, as infection is a probabilistic process. For example, when using mass surveillance tools to determine likely infections~\cite{ram2020mass}, the expected outcome is that many healthy people will be put in quarantine \cite{do2020covid}.
(v) Model parameters can only ever be measured with finite precision. Consequently, the exact parameters may never be known, but rather a ``confidence interval''. Any of the values in the confidence interval could be the true parameter. However, the implications of these different values may be quite different, which limits the accuracy, particularly in the case of sensitive parameter dependencies.
(vi) In complex, networked systems with probabilistic behavior and cascading effects (such as epidemic spreading), ``fat-tailed distributions'' pose additional challenges to data analytics and machine learning~\cite{taleb2020single,beirlant2006statistics}.
In the following, we will investigate to what extent some of the above issues may undermine an accurate assessment of the state of epidemics, even if a large amount of data is available. This is particularly important, as it is increasingly common to respond to measurement-based predictions rather than to current data, in order to take proactive measures. As we will discuss, this can have undesirable side effects. One response to an anticipated increase of infections may be to engage in more testing to capture the expected rise. The motivation for this is clear: if infections are underestimated, hospitals might not be able to handle the number of emergencies, while an overestimation may lead to unnecessary lockdowns with severe socio-economic consequences. In both cases, unnecessary loss of lives may occur. However, is it always better to run more tests?
\par
The precondition for accurate predictions is to have reliable methods to judge the actual epidemic situation well. The currently used epidemic modeling methods try to describe the disease dynamics by interacting ``agents'' on a population, meta-population, or individual level~\cite{vespignani2020modelling, engbert2021sequential, maier2020effective, jia2020population, siegenfeld2020opinion}. Such models are, for example, used to assess the influence of population density, demographic factors, mobility, or social interactions on the actual disease dynamics~\cite{van2011gleamviz}. Data-driven or machine learning models make fewer assumptions about the actual dynamics and are applicable to a broader range of prediction problems, but they come at the cost of less explainability. Of course, it is also possible to combine classical modeling approaches with machine learning \cite{wu2021deepgleam}.
\par
However, it seems that in many policy decisions, issues related to measurement processes are not yet sufficiently considered. Our contribution, therefore, highlights problems related to monitoring epidemics through measurement processes. As it turns out, it is dangerous to assume that data-driven approaches are largely exact, or that more measurements, tests, or data would automatically give a better picture.
In the following, we will demonstrate the possible pitfalls of such an approach by analysing the estimation errors resulting from measurement errors, mean value approximations, randomness and network interactions. While certain corrections for the effect of false positives and false negatives have been proposed before~\cite{bottcher2021TestStatistics,campbell2020bayesian,bentley2021error,catala2021robust,wu2020substantial}, here we present a framework that adds a measurement model to a model of epidemic dynamics, considering also stochastic and network effects. We further discuss how to correct for biases with mean-field and Bayesian approaches and what are the fundamental limitations in estimating the state of epidemics.
\section{Epidemic Models}
Depending on the characteristics of a disease, there are different compartmental models in epidemiology to reflect the spreading of a disease and the recovery from it by coupled differential equations. These are mean value equations implicitly assuming that the infection process is well characterized by averages and that correlations do not matter. In the following, we will present three common examples of such epidemiological spreading models.
\subsection{SIR Model}
The SIR model \cite{kermack1927contribution,anderson1992infectious} assumes that all people recover from the disease and are immune after recovery. It distinguishes Susceptible, Infected, and Recovered. Intuitively, their numbers at time $t$ are represented by $S(t)$, $I(t)$, and $R(t)$. The increase in the number of Infected is proportional to the number of Susceptible and the number $I$ of those who can infect them. In the notation used below, the proportionality constant is $b$. Infected recover at a rate $c$. Hence,
the differential equations describing the change of their numbers in the course of time are
\begin{eqnarray}
\dot{S} &=& -bSI \, , \\
\dot{I} &=& bSI - cI \, , \\
\dot{R} &=& cI \, ,
\end{eqnarray}
where we use the notation $\dot{Z} = dZ/dt$. Moreover, we have the normalization equation
\begin{equation}
S(t) + I(t) + R(t) = N \, ,
\end{equation}
where $N$ is the number of people in the population.
\subsection{SEIR Model}
The SEIR model~\cite{anderson1992infectious} assumes that all people recover from the disease, but are immune after recovery. Besides Susceptible, Infected, and Recovered, it considers an Exposed category representing people who have caught the disease, but are not infectious, yet. Intuitively, the numbers at time $t$ are represented by $S(t)$, $I(t)$, $R(t)$, and $E(t)$. For simplicity, we will assume that the Exposed do not have symptoms of the disease (yet), i.e. they (erroneously) appear to be healthy. The increase in the number of Exposed is proportional to the number of Susceptible and the number $I$ of those who can infect them. The proportionality constant is $b$. After some time, Exposed turn into Infected (with symptoms). This happens at a rate $a$. Infected finally recover at a rate $c$. Hence, the coupled differential equations describing the change of their numbers in the course of time are:
\begin{eqnarray}
\dot{S} &=& -bSI \, , \\
\dot{E} &=& bSI - aE \, , \\
\dot{I} &=& aE - cI \, , \\
\dot{R} &=& cI \, .
\end{eqnarray}
Moreover, we have the normalization equation
\begin{equation}
S(t) + E(t) + I(t) + R(t) = N \, .
\end{equation}
\subsection{SEIRD Model}
The SEIRD model~\cite{anderson1992infectious} assumes that some people recover from the disease and are immune after recovery, while some people die. Besides Susceptible, Exposed, Infected, and Recovered, it considers a Died category. Intuitively, the numbers at time $t$ are represented by $S(t)$, $E(t)$, $I(t)$, $R(t)$ and $D(t)$. The increase in the number of Exposed is proportional to the number of Susceptible and the number $I$ of those who can infect them. The proportionality constant is $b$. Exposed turn into Infected with a rate $a$. Infected recover with a rate $c$ and die with a rate $d$. The differential equations describing the change of their numbers in the course of time are:
\begin{eqnarray}
\dot{S} &=& -bSI \, , \\
\dot{E} &=& bSI - aE \, , \\
\dot{I} &=& aE - cI - d I\, , \\
\dot{R} &=& cI \, , \\
\dot{D} &=& dI \, .
\end{eqnarray}
Moreover, we have the normalization equation
\begin{equation}
S(t) + E(t) + I(t) + R(t) + D(t) = N \, .
\end{equation}
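As a concrete illustration, the SEIRD mean value equations can be integrated numerically as in the following Python sketch. We work with population fractions (i.e.\ $N=1$), and the transmission rate used here is an illustrative assumption, not necessarily the value used for the figures below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def seird(t, y, a, b, c, d):
    S, E, I, R, D = y            # fractions of the population
    return [-b*S*I,              # dS/dt
            b*S*I - a*E,         # dE/dt
            a*E - (c + d)*I,     # dI/dt
            c*I,                 # dR/dt
            d*I]                 # dD/dt

a, b, c, d = 1/5, 0.4, 1/14, 1/14     # b chosen for illustration
y0 = [1 - 1e-4, 0.0, 1e-4, 0.0, 0.0]  # one Infected per 10^4 people
sol = solve_ivp(seird, (0, 300), y0, args=(a, b, c, d),
                t_eval=np.linspace(0, 300, 301))
S, E, I, R, D = sol.y
\end{verbatim}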
\subsection{Measuring the State of an Epidemic}
The underlying question of this paper is: how well can the state of epidemics be measured, using common test and monitoring methods? When discussing this, we must consider that all test methods have false positive rates and false negative rates \cite{banerjee2009hypothesis}. This also applies to virtual measurement methods such as those based on inference from tracing data. We assume that Infected are being tested with probability $p$ and people without symptoms ($S$, $E$, and $R$) with probability $q$. When the testing is focused on infected people with symptoms, we will have $q\le p$ or even $q\ll p$.
\par
In the following, for the sake of illustration, we will focus on the SEIRD model as ground truth. Furthermore, let the false positive rate (FPR) when healthy people are tested be $\alpha$, and the false negative rate (FNR) be $\beta$ when Infected are tested, but $\beta'$ when Exposed are tested. Under these circumstances, the expected number of positive tests is
\begin{equation}
N_p(t) = (1-\beta) p I(t) + (1-\beta') q E(t) + \alpha q [S(t)+R(t)] \, .
\label{ex}
\end{equation}
From this formula, we can immediately see that $N_p(t)$ can be assumed to be proportional to the number $I(t)$ of Infected only if $\beta'=1$ and $\alpha=0$, which is unrealistic, or if $q=0$, i.e. ``healthy'' people are not tested. Otherwise, $N_p(t)$ can be quite misleading. In fact, the term $\alpha q (S+R)$ may dominate the other terms, if a large number of people without symptoms is being tested. In such a case, most positive tests would be false positives, and the number of positive tests would increase with the overall number $N_T$ of tests made. Testing a large number of people without symptoms might, therefore, produce pretty arbitrary and misleading numbers. As a consequence, the number of positive tests is not a good basis for policy measures. In particular, if the testing rate $q$ is increased with the anticipated number of positive tests $N_p(t+\Delta t)$ at some future point in time $t+\Delta t$, this may lead to a ``lock-in effect''. Then, for a long time, policymakers may not find a way out of the vicious cycle of more predicted cases triggering more tests, which implies more measured and predicted cases, and so on.
\par
It is much better to compare the number $N_p$ of positive tests given by Eq. \eqref{ex} with the overall number of tests
\begin{equation}
N_T(t) = p I(t) + q [S(t)+E(t)+R(t)] \, .
\end{equation}
Accordingly, the estimated (measured) proportion of infected people is
\begin{equation}
\fbox{ $ \displaystyle
\hat{s}(t) = \frac{N_p(t)}{N_T(t)} = \frac{(1-\beta) p I + (1-\beta') q E + \alpha q (S+R)}{p I + q (S+E+R)} $}
\label{success}
\end{equation}
Here, we have dropped the argument $(t)$ on the rhs for simplicity. Let us now compare this measurement-based estimate with the actual (true) proportion of infected among living people,
\begin{equation}
s(t) = \frac{I(t)}{N'(t)} \, .
\end{equation}
where
\begin{equation}
N'(t) = N- D(t) = S(t)+E(t)+I(t)+R(t)
\end{equation}
is the number of living people in the considered population. Then, the relative error becomes
\begin{equation}
\epsilon := \frac{|\hat{s}(t)- s(t)|}{|s(t)|}
= \left| \frac{[(1-\beta) p I + (1-\beta') q E + \alpha q (S+R)]}{[p I + q (N'-I)]} \, \frac{N'}{I} - 1\right|
\, .
\end{equation}
If the tests measure the Exposed as if they were ``still healthy'', it is reasonable to assume $(1-\beta') = \alpha$.\footnote{Exposed who test positive would then be counted as ``false positives''; they would only count as ``true positives'' once they have transitioned to the Infected.} This simplifies the previous formula to
\begin{equation}
\epsilon = \frac{|\hat{s}- s|}{|s|}
= \left| \frac{[(1-\beta) p I + \alpha q (N'-I)]}{[p I + q (N'-I)]} \, \frac{N'}{I} - 1\right|
= \left| \frac{[(1-\beta) p s + \alpha q (1-s)]}{s[p s + q (1-s)]} - 1\right|
\, . \label{relerr}
\end{equation}
Let us investigate two limiting cases.
First, if we assume that only people with symptoms are tested, then $q=0$ and
\begin{equation}
\epsilon = \frac{|\hat{s}- s|}{|s|}
= \left| \frac{(1-\beta)}{s} \, - 1\right| \, .
\end{equation}
Accordingly, the smaller the actual proportion $s(t)$ of Infected, the bigger will the relative error be.
Second, if the number $(N'-I) = (S+E+R)$ of people without symptoms typically outweighs the Infected by far, one may assume that $\alpha q (S+E+R) = \alpha q (N'-I) \gg (1-\beta) p I$ and $q (N'-I) \gg p I$. In this case, we get
\begin{equation}
\epsilon = \frac{|\hat{s} - s|}{|s|}
\approx \left| \frac{\alpha}{s} - 1\right|\, .
\end{equation}
\par
In both cases, we can see that the relative error increases with the inverse of the proportion $s$ of Infected. Accordingly, the relative error $\epsilon$ surprisingly increases with the number of healthy people. This relative error might be huge. In fact, it is particularly big in case of ``mild'' epidemics characterized by $s(t) \approx 0$. Again,
given the finite value of the false positive rate $\alpha$, mass testing of people who feel healthy is not advisable. It might lead to a large overestimation of the actual disease rate. That is, the state of the epidemic may be wrongly assessed.
\par
However, there is a correction formula for the effect of false positives and negatives, and for biased samples with $p \ne q$. Similar to the error and bias correction introduced in Ref.~\cite{bottcher2021TestStatistics}, to estimate the fraction of unhealthy people correctly, we need to find a function $\hat{s}_c$, which transforms the estimate $\hat{s}$ into $s$.
Considering that the estimate $\hat{s}$ of $s$ is given by \eqref{success}, and
$s = I/N'$, where $N' = (S+E+I+R)$, we find
\begin{equation}
\hat{s}(s)
= \frac{(1-\beta) p s + \alpha q (1-s)}{(p-q) s + q}
\label{hats}
\end{equation}
under the previously made assumption $(1- \beta') = \alpha$.
From this, we can derive the corrected value $\hat{s}_c$ of $\hat{s}$ via the inverse function of $\hat{s}(s)$:
\begin{equation}
\fbox{ $ \displaystyle \hat{s}_c (\hat{s}) := s(\hat{s}) = \frac{(\hat{s}-\alpha)q}{(1-\beta)p - \alpha q - (p-q)\hat{s}} $}
\label{corrected}
\end{equation}
This formula should only be used in the range $0 < \hat{s}-\alpha < 1 - \alpha -\beta$.
For non-biased samples ($p=q$), the formula simplifies to
\begin{equation}
\hat{s}_c (\hat{s}) = \frac{\hat{s} - \alpha}{1 - \alpha - \beta}
\end{equation}
As \figref{Fig2} shows, this correction can be very effective in recovering the true value $I/N'$ if the parameters $\alpha$, $\beta$, $p$ and $q$ are known.
\begin{figure}
\parbox[b]{.5\linewidth}{ \includegraphics[width=\linewidth]{fig_2a.pdf}}%
\hspace{.05\linewidth}%
\parbox[b]{.45\linewidth}{\caption{Estimation of the true proportion $I/N'$ of Infected (green solid line) based on the fraction \eqref{success} of positive tests (blue dotted line) and the correction formula \eqref{corrected} (red dashed-dotted line). The corrected estimate tends to be very close to the true value when the parameters $\alpha$, $\beta$, $p$ and $q$ of the test method are known exactly. The curves displayed here are for a SEIRD dynamics with parameters $N=100'000$, $a=1/5$, $b=50$, $c=1/14$, $d=1/14$.\label{Fig2}\vspace*{1cm} } }
\end{figure}
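The forward model \eqref{hats} and the correction \eqref{corrected} are straightforward to implement. The following Python sketch demonstrates the round trip for the test parameters of \figref{Fig2}; the true infected fraction of 2\% is an arbitrary illustrative choice.
\begin{verbatim}
def s_hat(s, alpha, beta, p, q):
    """Expected fraction of positive tests for true infected
    fraction s, assuming (1 - beta') = alpha."""
    return ((1 - beta)*p*s + alpha*q*(1 - s)) / ((p - q)*s + q)

def s_corrected(sh, alpha, beta, p, q):
    """Inverse of s_hat; valid for 0 < sh - alpha < 1 - alpha - beta."""
    return (sh - alpha)*q / ((1 - beta)*p - alpha*q - (p - q)*sh)

alpha, beta, p, q = 0.05, 0.15, 0.8, 0.06
s_true = 0.02
sh = s_hat(s_true, alpha, beta, p, q)
print(sh)                                   # ~0.22: strongly biased
print(s_corrected(sh, alpha, beta, p, q))   # ~0.02: recovers s_true
\end{verbatim}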
However, we would like to point out that the above analysis and correction formula are based on a mean-value approximation. Let us, therefore, investigate the likely role of stochastic measurement errors. A common approach for this is the use of a binomial distribution. It describes the probability of obtaining $n_p$ positive tests among $N_T$ tests, if the ``success rate'' (of having a positive test result) is $\hat{s}$. Accordingly, the binomial distribution reads
\begin{equation}\label{eq:nc_noerror}
P(n_p|N_T)= \binom{N_T}{n_p}\hat{s}^{n_p}(1-\hat{s})^{N_T-n_p} \, ,
\end{equation}
where $N_T = pI + q(S+E+R)$ represents again the total number of tests made.
For the binomial distribution, the expected number of confirmed cases is
\begin{equation}
\langle n_p \rangle = N_T \, \hat{s} = N_p \, ,
\end{equation}
and the variance is
\begin{equation}
\mbox{Var}(n_p) = N_T \, \hat{s} (1-\hat{s}) \, .
\end{equation}
As a consequence, the relative standard deviation is
\begin{equation}
\frac{\sqrt{\mbox{Var}(n_p)}}{\langle n_p \rangle}
= \frac{\sqrt{N_T \, \hat{s} (1-\hat{s})}}{N_T \, \hat{s}}
= \frac{1}{\sqrt{N_T}} \sqrt{\frac{1-\hat{s}}{\hat{s}} } \, .
\end{equation}
Here, more tests are better, as expected. While the relative standard deviation may be significant if $N_T$ is small (in the beginning of an epidemic) or if $\hat{s}\approx 0$, for a large enough number of tests, the relative standard deviation will be rather negligible, say, of the order of 1\%. Therefore, the above mean value equations and correction formula \eqref{corrected} appear to be applicable in many cases, when the uncertainty in $\alpha$, $\beta$, $p$ and $q$ is low.
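This scaling is easy to verify by direct sampling, as in the following Python sketch; the values of $N_T$ and $\hat{s}$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_T, s_hat = 10_000, 0.05
n_p = rng.binomial(N_T, s_hat, size=100_000)
print(n_p.std() / n_p.mean())                # empirical, ~0.044
print(np.sqrt((1 - s_hat) / (s_hat * N_T)))  # analytic, matches
\end{verbatim}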
In reality, however, the parameter values showing up in formula \eqref{corrected} are estimates $\hat{\alpha}$, $\hat{\beta}$, $\hat{p}$ and $\hat{q}$, which may deviate from the true parameter values $\alpha$, $\beta$, $p$, and $q$. Inserting these in formula \eqref{corrected} instead, one may find considerable under- or over-estimations of the true proportion of Infected in a population (see \figref{fig_s_correction}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\linewidth]{fig_2b.pdf}
\includegraphics[width=0.4\linewidth]{fig_2c.pdf} \\[-5mm]
\includegraphics[width=0.4\linewidth]{fig_2d_q0.06p_0.8_vary_p.pdf}
\includegraphics[width=0.4\linewidth]{fig_2d_q0.06p_0.8_vary_q.pdf}
\caption{Estimation of the true proportion $I/N'$ of Infected (green solid line) based on the fraction \eqref{success} of positive tests (blue dotted line) and the correction formula \eqref{corrected} (other lines according to the color scales on the right). In these figures we assume that only 3 of the 4 parameters $\alpha=0.05$, $\beta=0.15$, $p=0.8$, and $q=0.06$ are known exactly, while the estimate of the fourth one (represented with a hat on top) is uncertain and, therefore, varied. Specifically, in \eqref{corrected} we have replaced $\alpha$ by $\hat{\alpha}$ in the top left figure, $\beta$ by $\hat{\beta}$ in the top right figure, $p$ by $\hat{p}$ in the bottom left figure and $q$ by $\hat{q}$ in the bottom right figure. All figures show results for a SEIRD dynamics with $N=100'000$, $a=1/5$, $b=50$, $c=1/14$, $d=1/14$.}
\label{fig_s_correction} \label{Fig3}
\end{figure}
\par
A further complication occurs, if the assumption $(1-\beta') = \alpha$ does not hold (e.g.\ when the testing procedure recognizes Exposed as ``already infected'' people, even though they do not have symptoms yet). Then, it is generally not possible to derive a correction formula such as \eqref{corrected}. To derive such a formula, one would need to separately measure Infected and Exposed, requiring two different, specific tests, but this is often not realistic to assume. One would have to use additional assumptions or Bayesian inference (see below).
\par
Last but not least, network effects may also produce significant deviations from the mean-value approximations above (which implicitly assumed homogeneous mixing and a low variance of measurement errors). To assess their size, in the following section we will use a more sophisticated simulation approach for the SEIR/SEIRD model that considers stochastic as well as network effects.
\section{Stochastic Simulation of Epidemic Spreading in Social Networks}\label{sec:model}
Let us start by discussing the full stochastic SIR model on a contact network $G=(V,E)$, defined by a set of nodes $V$ (corresponding to the individuals) and a set of edges $E$ (representing their contacts). The SIR process is determined by the parameters $(b, c, d)$, where a susceptible individual becomes infected with rate $b$ by having a contact with an infected neighbour, and an infected node recovers or dies with a rate $c+d$.
These processes have exponential inter-event time distributions $\psi(\tau)=b e^{-b \tau}$ (spreading) and $\phi(\tau)=(c+d) e^{-(c+d) \tau}$ (recovery or death).
\par
For a given contact network $G$ and the stochastic SIR epidemic spreading model, we create weighted networks $\left\lbrace G_k \right\rbrace$ and simulate realizations of the stochastic spreading dynamics on them. Each weighted network $G_k$ is linked to a possible outcome of the epidemic spreading process, starting from a randomly selected ``source'' node. In order to extract samples of epidemic trajectories of the full stochastic SIR model on a contact network, we use a shortest-path kinetic Monte Carlo (SP-KMC) method \cite{PhysRevResearch.2.033121,tolic2018simulating}. A \textit{time-respecting weighted network} instance $G_k$ is created by taking the input network $G$ and assigning weights to the edges of the network according to
\begin{equation}
\label{eq:Poissonweights}
\rho_{i,j} =
\begin{cases}
-\ln(x)/b & \mbox{ if }-\ln(x)/b \leq -\ln(y)/(c+d) \\
\infty & \mbox{if } -\ln(x)/b > -\ln(y)/(c+d) \\
\end{cases}
\end{equation}
Here, $x,y$ are uniform random numbers $\in [0,1]$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\linewidth]{ensemble_BA_no_diff_params2.pdf}
\includegraphics[width=0.45\linewidth]{ensemble_BA_diff_params2.pdf}
\caption{Ensemble of stochastic trajectories (SP-KMC sampling) on an ensemble of $10^3$ Barab\'asi-Albert networks with $N=100'000$, $m=3$, and $\gamma=1$.
Left: Scenario assuming that the parameters $\alpha=0.05$, $\beta=0.15$, $p=0.8$, $q=0.06$ of the test method are exactly known.
Right: Reconstruction for somewhat incorrect estimates $\hat{p}=0.75$ and $\hat{q}=0.1$, while the other parameters are assumed to be the same. The assumed parameters of the SEIRD dynamics are $a=1/5$, $b=3/14$, $c=1/28$, $d=1/28$. $Q_{0.01}$, $Q_{0.25}$, $Q_{0.75}$ and $Q_{0.99}$ represent 1, 25, 75, and 99 percent quantiles. The median values and error quantile bands are based on $10^3$ simulations.}\label{fig:BA_mean_field_ensemble}\label{Fig4}
\end{figure}
In order to work with the stochastic epidemic SEIR model, we extend the SP-KMC mapping by considering transitions from the Exposed state to Infected state with rate $a$. This process has the exponential inter-event incubation time distribution $\chi(\tau)= a e^{-a \tau}$.
In particular, a
\textit{time-respecting weighted network} instance $G_k$ is created by taking the input network $G$ and assigning
weights to the edges of the network instance as
\begin{equation}
\label{eq:PoissonweightsSEIR}
\rho_{i,j} =
\begin{cases}
-\ln(x)/b -\ln(z)/a & \mbox{if } -\ln(x)/b \leq -\ln(y)/(c+d) \\
\infty & \mbox{if } -\ln(x)/b > -\ln(y)/(c+d) \\
\end{cases}
\end{equation}
Here, $x,y,z$ are uniform random numbers $\in [0,1]$.
We define the distance as the shortest path on a weighted network:
\begin{equation}
\mathit{d}_{G_k} (v_i,v_j) = \min_{\chi_{ij}} \sum_{(k,l)\in\chi_{ij}} \rho_{k,l} \, ,
\end{equation}
where $\chi_{ij}$ is the set of all possible paths from node $v_i$ to node $v_j$ on network $G_k$ and $\rho_{k,l}$ denotes the weights defined in~\eqref{eq:PoissonweightsSEIR}. Now, epidemic properties can be extracted~\cite{PhysRevResearch.2.033121,tolic2018simulating} from the node and edge weighted networks $\left\lbrace G_k \right\rbrace$.
The run-time complexity of extracting a single epidemic trajectory is dominated by Dijkstra's shortest paths algorithm from a specific source node to others. It is of the order $O(L+M\log M)$, where $L$ denotes the number of edges in the network and $M$ the number of nodes.
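A minimal Python sketch of this SEIR-variant SP-KMC construction is given below, using \texttt{networkx} for the graph handling and the Dijkstra computation. Note that the plain Barab\'asi-Albert generator used here is only an approximation of the network ensembles in our figures.
\begin{verbatim}
import numpy as np
import networkx as nx

def spkmc_instance(G, a, b, c, d, rng):
    """Weighted network G_k: edges with infinite weight
    (no transmission) are simply omitted."""
    Gk = nx.Graph()
    Gk.add_nodes_from(G)
    for i, j in G.edges():
        x, y, z = rng.random(3)
        if -np.log(x)/b <= -np.log(y)/(c + d):
            Gk.add_edge(i, j, weight=-np.log(x)/b - np.log(z)/a)
    return Gk

def infection_times(G, source, a, b, c, d, seed=0):
    Gk = spkmc_instance(G, a, b, c, d, np.random.default_rng(seed))
    # Infection time of each node = weighted shortest-path distance
    # from the source; unreachable nodes are never infected.
    return nx.single_source_dijkstra_path_length(Gk, source)

G = nx.barabasi_albert_graph(1000, 3, seed=1)
times = infection_times(G, source=0, a=1/5, b=3/14, c=1/28, d=1/28)
\end{verbatim}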
\figref{fig:BA_mean_field_ensemble} shows on the left that the mean-field correction \eqref{corrected} works well if all measurement parameters $\alpha$, $\beta$, $p$, and $q$ are known. However, for somewhat incorrect parameter estimates, we observe considerable deviations from the mean-field correction. This establishes the need for Bayesian inference.
\begin{figure}
\centering
\includegraphics[width=0.38\linewidth]{network_prior_p_0.2_q_0.05_beta_dist_correct_M_1000_alpha_0.05_beta_0.15_prior_BA_1.pdf}
\includegraphics[width=0.38\linewidth]{network_prior_p_0.2_q_0.05_beta_dist_correct_M_1000_alpha_0.05_beta_0.15_prior_BA_mplus3_1.pdf}
\\
\includegraphics[width=0.38\linewidth]{network_prior_p_0.2_q_0.05_beta_dist_correct_M_1000_alpha_0.05_beta_0.15_prior_BA_gammaplus1_1.pdf}
\includegraphics[width=0.38\linewidth]{network_prior_p_0.2_q_0.05_beta_dist_correct_M_1000_alpha_0.05_beta_0.15_prior_ER_1.pdf}
\caption{Posterior distribution $P(I|n_p)$ when a network topology prior is used (in red) and when not (in blue). In scenario ``Correct'' we used a correct network ensemble contact prior for Barab\'asi-Albert networks with $N=100'000$, $m=3$, and $\gamma=1$, whereas in ``m+3'' we used a network ensemble contact prior for Barab\'asi-Albert networks with $m=6$ and $\gamma=1$. In ``$\gamma+1$'' we used a prior for Barab\'asi-Albert networks with $m=3$ and $\gamma=2$, and in ``ER'' we used a prior for Erd\H{o}s-R\'enyi networks with $\lambda=6$, which is far from the true degree distribution. In all figures, the green line shows the ``ground truth'', against which the biased testing is performed. We observe that knowledge about the degree distribution of contacts helps to estimate the true ground truth. To estimate network priors within $t \in [60,65]$, we have used 100 networks from the ensemble.\label{fig_bayes_networkpriors}\label{Fig5}}
\end{figure}
If the bias parameters $p$ and $q$ are uncertain, one can use the prior distribution $\hat{p},\hat{q} \sim P(p,q)$ and integrate \eqref{eq:marginalization} over all possible values to obtain the posterior distribution $P(I|N_p=n)$.
Then, the corrected estimate of the number of infected is given by the expected value of the posterior,
\begin{equation}
\hat{I} = \sum_m m \, P(I=m|N_p=n) \, ,
\end{equation}
where the posterior $P(I=m|N_p=n)$ is estimated using the Bayesian formula \eqref{eq:bayes}. In the absence of knowledge about the epidemic dynamics or network, one may use the uninformative prior. However, if there exists knowledge about the degree distribution $P(k)$ of network contacts, one can estimate $P(I_t=m)$ through SP-KMC sampling on the network ensemble with the given degree distribution $P(k)$.
In Fig. \ref{fig_bayes_networkpriors} we show that, using stochastic trajectories generated with networks from the correct ensemble, one can improve the estimation of $\hat{I}=\hat{s}N'$. However, when using a prior with wrong assumptions about the degree distribution of contacts, we see no improvement in the posterior estimate of the ground truth, compared to the uninformative prior. Overall, we find that both the degree distribution and the network density have effects on the quality of the Bayesian inference. Therefore, we would like to emphasise the importance of choosing correct priors for a reliable estimation of the ground truth.
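As a schematic stand-in for the marginalization and Bayes formulas referenced above, the following Python sketch computes the posterior over $I$ with a simple binomial likelihood; the function names and the treatment of the prior are our own illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def posterior_I(n_p, N_live, alpha, beta, p_samples, q_samples,
                prior_I=None):
    """Posterior P(I=m | N_p = n_p), marginalized over samples from
    the prior P(p, q). prior_I may encode network-ensemble knowledge
    (e.g. a histogram of I over SP-KMC trajectories); default uniform."""
    m = np.arange(N_live + 1)
    prior = np.ones(N_live + 1) if prior_I is None else prior_I
    like = np.zeros(N_live + 1)
    for p, q in zip(p_samples, q_samples):
        tested = p*m + q*(N_live - m)
        s = ((1 - beta)*p*m + alpha*q*(N_live - m)) \
            / np.maximum(tested, 1e-12)
        like += binom.pmf(n_p, np.rint(tested).astype(int), s)
    post = like * prior
    return post / post.sum()

# Expected-posterior estimate of the number of infected:
# I_hat = np.sum(np.arange(N_live + 1) * posterior_I(...))
\end{verbatim}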
\section{Summary, Conclusion, Discussion, and Outlook}
In this paper, we have addressed some challenges regarding epidemic monitoring and modeling, concerning in particular (i) the modeling itself and (ii) the applicability of models to the real world.
Regarding the first point, it matters to choose the right epidemiological model, but also to consider limitations of the common mean value approach. Stochastic and network effects, in combination with the underlying non-linear spreading dynamics, are relevant as well. Regarding the second point, one needs to consider errors of tests and estimation procedures underlying epidemic monitoring. This concerns false positive and false negative rates, but also biases in the measurement sample ($p\ne q$). Our goal was to analyse a good approximation of reality, which we have represented by an assumed ground truth, specifically a SEIRD dynamics. Overall, we showed that data-driven approaches that rely only on reported infected cases can be misleading, if reliable measurement corrections are not available or used.
\par
Monitoring real-world epidemics is difficult, because one does not know the ground truth, and the goal of all types of modeling is to infer it. Although a simple mean-field correction works well when the measurement parameters are known (see Fig. \ref{Fig2}), substantial deviations may occur when there is some uncertainty about them (see Fig. \ref{Fig3}), which is the usual case. Then, one might resort to a Bayesian correction for improvements~\cite{bottcher2021TestStatistics,campbell2020bayesian,bentley2021error,catala2021robust,wu2020substantial}, but this will also not remove all issues of measurement errors and uncertainties completely.
\par
For all the above reasons, forecasting the epidemic dynamics is a problem that has fundamental limitations even in the age of Big Data and Artificial Intelligence. Given the intrinsic limitations of tests, these problems are expected to persist. Measurement problems using dynamical models~\cite{yang2014comparison, engbert2021sequential} may be further amplified if the test method or the social behavior of people changes dynamically in response to the epidemic dynamics. Hence, data protection regulations are not the main problem for monitoring, forecasting and controlling epidemics. The limitations of a data-driven approach are a lot more fundamental in nature.
\vskip6pt
\enlargethispage{20pt}
{\it Data Accessibility:} This paper is not presenting any empirical data.\\[3mm]
{\it Author Contributions:} All authors developed the theoretical concepts and experiments together. NAF and VV created the computer code used and performed the simulations. All authors wrote the manuscript. DH initiated and coordinated the study. All authors read and approved the final manuscript.\\[3mm]
{\it Competing Interests:} The authors declare no competing interests.\\[3mm]
{\it Funding:} The authors acknowledge support by the SOBIGDATA++ project funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871042.\\[3mm]
{\it Acknowledgments:} We would like to thank Lucas Böttcher for inspiring discussions and scientific exchange regarding models of statistical measurement~\cite{bottcher2021TestStatistics} in the initial phase of the project.
|
2,877,628,088,529 | arxiv | \section{Introduction}
The proper identification of the difficulty levels of reading materials is a vital aspect of the language learning process. It enables teachers and educators alike to assign appropriate materials that young learners can fully comprehend, preventing boredom and disinterest ~\cite{guevarra2011development}. However, assessing readability presents challenges, particularly when there is a large corpus of text to sift through. Manually extracting and calculating a wide range of linguistic features can be time-consuming and expensive and can lead to subjective labels due to human error \cite{deutsch2020}. To tackle this problem, more and more research in the field has focused on experimenting with automated methods for extracting possible linguistic predictors to train models for readability assessment.
\begin{comment}
Although there is extensive research on automated readability assessment in the English language, research into morphologically rich languages like Filipino is limited and relatively new. More recently, ~\cite{imperial2020exploring} used Natural Language Processing techniques to improve the readability of Filipino storybooks by incorporating language model features with traditional and lexical linguistic features. The results of their experimentations outperform the conventional readability formulas ~\cite{thorndike1921teacher, DuBay2004ThePO, kincaid1975derivation} and speed up the entire process automatically using Logistic Regression and Support Vector Machines.
\end{comment}
While automating readability assessment is a challenge itself, one of the fundamental problems in the field starts with data. In the Philippines, the Mother-Tongue Based Multilingual Education (MTB-MLE) scheme was introduced by the Department of Education (DepEd) in 2013. With this initiative, there were few to no available tools for automatically assessing the readability of reading resources, instructional materials, and grammatical materials in mother-tongue languages aside from Filipino, such as Cebuano, Hiligaynon, and Bikol ~\cite{medilo2016experience}. To answer this challenge, in this paper, we investigate various linguistic features ranging from traditional or surface-based predictors, orthography-based features from syllable patterns, and neural representations to develop a baseline readability assessment model for the Cebuano language. We use an array of traditional machine learning algorithms to train the assessment models with hyperparameter optimization. Our results show that using non-neural features is enough to produce a competitive model for identifying the readability levels of children's books in Cebuano.
\section{Previous Work}
Readability assessment has been the subject of research of linguistic experts and book publishers as a method of measuring the comprehensibility of a given text or document. \citet{villamin1979pilipino} pioneered readability assessment for the Filipino language in 1979. Hand-crafted indices and surface information from texts, such as hand counts of words, phrases, and sentences, are used in these formula-based techniques. An equivalent traditional formula was applied to the Waray language ~\cite{oyzon2015validation} to complement DepEd's MTB-MLE program in certain regions of the Philippines such as Samar and Leyte. While traditional readability formulas relied on linear models, recent studies on readability assessment have shifted their focus to expanding the traditional method with more fine-grained features. ~\citet{guevarra2011development} and \citet{macahilig2014content} introduced the use of a logistic regression model trained with unique word counts, total word and sentence counts, and the mean log of word frequency. A few years later, lexical, syllable-pattern, morphological, and syntactic features were eventually explored for the readability of Filipino text in works of Imperial and Ong \cite{imperial2021application, imperial2020exploring, imperial2021diverse}.
\section{The Cebuano Language}
Cebuano (\textsc{ceb}) is an Austronesian language mostly spoken in the southern parts of the Philippines such as in major regions of Visayas and Mindanao. It is the language with the second highest speaker count\footnote{\url{https://www.ethnologue.com/language/ceb}} in the country with 27.5 million, just after Tagalog, from which the national language is derived, with 82 million speakers. The Cebuano and Tagalog languages share linguistic similarities such as in derivation, prefixing, disyllabic roots, and reduplication \cite{blake1904differences}. On the other hand, differences are seen in syntax, such as the use of particles (\textit{ay}, \textit{y}), phonetic changes, and morphological changes on verbs. Figure~\ref{language_tree} illustrates a portion of the Philippine language family tree emphasizing where Cebuano originated. Cebuano is part of the Central Philippine subtree along with Tagalog and Bikol, which can be attributed to their similarities and differences as mentioned. The full image can be viewed at \citet{oco2013dice}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.46\textwidth,trim={5cm 0 3cm 0}, clip]{language.pdf}
\caption{Right portion of the Philippine language family tree highlighting the origin of Cebuano.}
\label{language_tree}
\end{figure}
\subsection{Cebuano Readability Corpus}
We compiled the first Cebuano text corpus composed of 277 expert-annotated literary pieces spanning the first three grade levels (L1, L2, and L3) of Philippine primary education. For comparison with international grading systems, the standard age ranges for these levels are 6-7, 7-8, and 8-9, respectively. We collected the materials from three open-sourced online book repositories: \textbf{Let's Read}, \textbf{Bloom Library}, and \textbf{DepEd Commons}. All materials are licensed under Creative Commons BY 4.0, which allows redistribution in any medium or format provided proper attribution. Table~\ref{data} shows the distribution of the collected corpus. \\
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{lllll}
\toprule
\textbf{Corpus} & \textbf{L1} & \textbf{L2} & \textbf{L3} & \textbf{Total} \\
\midrule
Let's Read & 6 & 21 & 50 & 82 \\
Bloom & 50 & 50 & 25 & 125 \\
DepEd & 22 & 1 & 4 & 27 \\
\midrule
\bf Total & \bf 76 & \bf 72 & \bf 79 & \bf 227 \\
\bottomrule
\end{tabular}
\caption{Distribution of compiled text passages in Cebuano.}
\label{data}
\end{table}
\noindent\textbf{Let’s Read.} Let’s Read\footnote{\url{https://www.letsreadasia.org/}} is an initiative by The Asia Foundation to open-source culturally friendly children's books with diverse themes, characters, and settings. The resource materials from this repository are mostly sourced from BookLabs and translated by local volunteers across multiple languages including Cebuano. Let's Read covers a wide variety of genres such as gender equality, environment, understanding and empathy, and science and technology. We collected 82 Cebuano children's books from this website for our corpus. \\
\noindent\textbf{Bloom.} The Bloom Library\footnote{\url{https://bloomlibrary.org/}} is also a free repository of diverse children's book resources funded and maintained by the Summer Institute of Linguistics (SIL International). Similar to Let's Read, local volunteers can also upload high-quality and validated translations of book resources or original pieces to the platform. We collected 125 Cebuano children's books from this website for our corpus.\\
\noindent\textbf{DepEd Commons.} The Commons Library\footnote{\url{https://commons.deped.gov.ph/}} is an initiative by the Department of Education in the Philippines to grant free access to literature in various Philippine languages for students and teachers during the COVID-19 pandemic. We collected 27 Cebuano children's books from this website for our corpus.\\
\section{Linguistic Features}
In this study, we extracted three linguistic feature groups from our Cebuano text corpus: \textbf{traditional or surface-based features}, \textbf{orthography-based features}, and \textbf{neural embeddings}. To the best of our knowledge, no study has previously explored readability assessment of Cebuano text using these features.
\subsection{Traditional Features (TRAD)}
Traditional or surface-based features are predictors that were used by experts in their early readability formulas for Filipino, such as sentence and word counts in \citet{guevarra2011development}. Despite claims that these features insufficiently measure deeper text properties for readability assessment \cite{redish2000readability}, since this is the pioneering study for Cebuano, we still considered them for our baseline model development. In this study, we adapted seven traditional features from existing works in Filipino ~\cite{imperial2020exploring, imperial2021application, imperial2021diverse}: \textit{number of unique words, number of words, average word length, average number of syllables, total number of sentences, average sentence length} and \textit{number of polysyllable words}.
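For concreteness, a minimal sketch of extracting these surface features is given below (our illustration; the vowel-based syllable counter and the threshold of five syllables for polysyllabic words are assumptions rather than specifications from the cited works):
\begin{verbatim}
import re

VOWELS = set("aeiou")

def count_syllables(word):
    # Crude syllable count: one syllable per vowel letter (an assumption
    # that roughly fits Philippine orthographies).
    return sum(ch in VOWELS for ch in word.lower())

def trad_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z'\-]+", text.lower())
    sylls = [count_syllables(w) for w in words]
    return {
        "word_count": len(words),
        "unique_words": len(set(words)),
        "average_word_len": sum(len(w) for w in words) / len(words),
        "average_syllables": sum(sylls) / len(words),
        "sentence_count": len(sentences),
        "average_sentence_len": len(words) / len(sentences),
        "polysyll_count": sum(s >= 5 for s in sylls),
    }
\end{verbatim}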
\subsection{Syllable Pattern (SYLL)}
Orthography-based features measure the character-level complexity of texts through combinations of various syllable patterns ~\cite{imperial2021diverse}. As in Filipino, we adapted syllable patterns as features for the baseline model development but used only seven recognizable consonant-vowel combinations linguistically documented in the Cebuano language \cite{blake1904differences}. We used \textit{consonant clusters} and syllable pattern combinations of \textit{v, cv, cc, vc, cvc, ccv, ccvc}, normalized by the number of words.
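A possible implementation of these densities is sketched below (ours; matching a whole word's consonant-vowel template against each pattern is one of several reasonable readings of ``syllable pattern''):
\begin{verbatim}
PATTERNS = ["v", "cv", "cc", "vc", "cvc", "ccv", "ccvc"]

def cv_template(word):
    # Rewrite a word as a consonant/vowel template, e.g. "bata" -> "cvcv".
    return "".join("v" if ch in "aeiou" else "c"
                   for ch in word.lower() if ch.isalpha())

def syll_features(words):
    n = len(words)
    templates = [cv_template(w) for w in words]
    feats = {p + "_density": sum(t == p for t in templates) / n
             for p in PATTERNS}
    # Consonant cluster: two or more consonants, no intervening vowel.
    feats["consonant_cluster"] = sum("cc" in t for t in templates) / n
    return feats
\end{verbatim}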
\subsection{Substitute Features using Neural Embeddings (NEURAL)}
The use of Transformer-based language model embeddings has been shown to be an effective \textit{substitute} for handcrafted features in low-resource languages \cite{imperial-2021-bert}. Probing tasks have shown that these representations contain information such as semantic and syntactic knowledge \cite{rogers-etal-2020-primer}, which can be useful in readability assessment. For this study, we extracted embedding representations with a dimension size of 768 from the multilingual BERT model \cite{devlin-etal-2019-bert} as features for each instance of the Cebuano corpus. According to the training recipe of multilingual BERT, Cebuano data in the form of Wikipedia dumps was included in its development, which makes the model a viable option for this study.
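Extracting such a representation is straightforward with the \texttt{transformers} library; the sketch below mean-pools the last hidden layer into a single 768-dimensional vector (the pooling strategy is our assumption, as only the use of multilingual BERT is specified):
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def neural_features(text):
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()    # document embedding
\end{verbatim}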
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{ccccc}
\toprule
\textbf{Feature} & \textbf{Acc} & \textbf{Prec} & \textbf{Rec} & \textbf{F1} \\
\midrule
\bf TRAD & \bf 0.789 & \bf 0.754 & \bf 0.749 & \bf 0.750 \\
SYLL & 0.544 & 0.546 & 0.559 & 0.551 \\
\midrule
TRAD + SYLL & 0.719 & 0.721 & 0.722 & 0.718 \\
NEURAL & 0.754 & 0.759 & 0.766 & 0.757 \\
Combination & 0.737 & 0.714 & 0.729 & 0.714 \\
\bottomrule
\end{tabular}
\caption{Performance of finetuned Logistic Regression model.}
\label{table1}
\end{table}
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{ccccc}
\toprule
\textbf{Feature} & \textbf{Acc} & \textbf{Prec} & \textbf{Rec} & \textbf{F1} \\
\midrule
TRAD & 0.718 & 0.728 & 0.685 & 0.676 \\
SYLL & 0.649 & 0.648 & 0.648 & 0.646 \\
\midrule
TRAD + SYLL & 0.789 & 0.787 & 0.791 & 0.784 \\
\textbf{NEURAL} & \textbf{0.807} & \textbf{0.813} & \textbf{0.812} & \textbf{0.811} \\
Combination & 0.789 & 0.788 & 0.789 & 0.793 \\
\bottomrule
\end{tabular}
\caption{Performance of finetuned Support Vector Machines model.}
\label{table2}
\end{table}
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{ccccc}
\toprule
\textbf{Feature} & \textbf{Acc} & \textbf{Prec} & \textbf{Rec} & \textbf{F1} \\
\midrule
TRAD & 0.842 & 0.843 & 0.842 & 0.842 \\
SYLL & 0.579 & 0.579 & 0.586 & 0.580 \\
\midrule
\textbf{TRAD + SYLL} & \textbf{0.873} & \textbf{0.852} & \textbf{0.858} & \textbf{0.852} \\
NEURAL & 0.772 & 0.776 & 0.761 & 0.763 \\
Combination & 0.825 & 0.801 & 0.804 & 0.799 \\
\bottomrule
\end{tabular}
\caption{Performance of finetuned Random Forest model.}
\label{table3}
\end{table}
\section{Experiment Setup}
The task at hand is a multiclass classification problem with three classes being the aforementioned grade levels. We specifically chose traditional learning algorithms such as Logistic Regression, Support Vector Machines, and Random Forest for building the baseline models, to allow the post-training interpretation techniques described in the succeeding sections. To reduce bias, $k$-fold cross validation with $k=5$ was implemented. For the intrinsic evaluation, we used standard metrics such as accuracy, precision, recall and macro F1-score. In addition, we used grid search to optimize the following model-specific hyperparameters: solver and regularization penalties for Logistic Regression; kernel type, maximum iterations, and regularization penalties for Support Vector Machines; and number of estimators, maximum features, and maximum depth for Random Forest.
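As an illustration of this setup for the Random Forest case, a sketch with a hypothetical search space (the candidate values are ours, not from this study) could look as follows:
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

param_grid = {  # candidate values are illustrative
    "n_estimators": [50, 100, 200],
    "max_features": ["sqrt", "log2", None],
    "max_depth": [10, 20, None],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=cv, scoring="f1_macro")
# search.fit(X, y)  # X: feature matrix, y: grade levels (L1, L2, L3)
\end{verbatim}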
\section{Results}
To assess the effectiveness of the proposed framework, we examined model performance in three different ablation settings: (a) linguistic features only, (b) neural embeddings only, and (c) a combination of the two via concatenation. The results of each fine-tuned model under the given evaluation metrics are shown in Tables~\ref{table1},~\ref{table2}, and~\ref{table3}.
Across the board, the best performing model and feature combination for Cebuano achieved approximately 87.3\% across all metrics using the combination of TRAD and SYLL features with Random Forest. This top performing model makes use of 100 tree estimators, automatically adjusted maximum features, and a max depth of 20. Interestingly, the feature combination and the algorithm of choice are the \textit{same} as for Filipino readability assessment as seen in the work of \citet{imperial2021diverse}. This may suggest that, despite language differences and similarities, the use of surface-based features such as counts and syllable patterns is effective for both the Filipino and Cebuano languages in the readability assessment task. Referring again to Figure~\ref{language_tree} for emphasis, both languages are part of the Central Philippine subtree, which opens the possibility of a cross-lingual application of linguistic features for future research.
This effectiveness of surface-based features is also seen for the optimized Logistic Regression model, where using TRAD features obtained the best performance. In the case of the optimized Support Vector Machine model, the use of neural embeddings alone obtained better scores than the combination of traditional and syllable pattern features. This result affirms the observation in \citet{imperial-2021-bert} that extracted neural embeddings can serve as substitute features and can be roughly on par with handcrafted features.
\section{Discussion}
\subsection{Model Interpretation}
To understand which specific linguistic features are contributive during model training, we used two model interpretation algorithms specifically suited to Random Forest models: \textbf{permutation on the full model} and \textbf{mean decrease in impurity (MDI)}, as shown in Figures~\ref{feature_importance_full} and ~\ref{feature_importance_mdi} respectively. Permutation importance shuffles the values of one predictor at a time and evaluates the resulting drop in accuracy, while mean decrease in impurity adds up all weighted impurity score reductions, or gains in homogeneity, averaged over all tree estimators \cite{breiman2001random}. In both feature importance results, the most important feature is the \textit{v\_density} or \textit{singular vowel density}. This may indicate that the denser the vowels in a word, the more complex the text becomes. Likewise, both \textit{cv\_density} and \textit{consonant clusters} emerged as second top predictors in both analyses, which may suggest that in Cebuano, words with combined consonants with no intervening vowels are more apparent in complex sentences than in easier ones.
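With scikit-learn, both importance measures above can be obtained as sketched below (\texttt{model}, \texttt{X\_test}, \texttt{y\_test}, and \texttt{feature\_names} are assumed to come from the training pipeline):
\begin{verbatim}
from sklearn.inspection import permutation_importance

# Mean decrease in impurity comes for free with a fitted Random Forest.
mdi = dict(zip(feature_names, model.feature_importances_))

# Permutation importance: shuffle one feature at a time on held-out
# data and record the mean drop in accuracy over repeated shuffles.
perm = permutation_importance(model, X_test, y_test,
                              n_repeats=10, random_state=0)
perm_scores = dict(zip(feature_names, perm.importances_mean))
\end{verbatim}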
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{feature_importance_MDI_80.pdf}
\caption{Feature importance by mean decrease impurity.}
\label{feature_importance_mdi}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{feature_importance_permutation_80.pdf}
\caption{Feature importance by permutation on full model.}
\label{feature_importance_full}
\end{figure}
\subsection{Feature Correlation}
We also looked at a model-independent feature analysis technique through \textbf{Spearman correlation} with respect to readability levels. Table~\ref{featureRankingCorrelation} shows the top ten highly correlated features. In support of the findings described in Sections 6 and 7.1, all correlated linguistic features belong to the TRAD and SYLL feature sets, with \textit{number of unique words} at the top. This may suggest that the density of unique words increases with the readability level in a positive direction. In addition, \textit{cv}, \textit{cvc}, and \textit{ccv} densities are the only syllable pattern features that placed at the top in both model-dependent and model-independent feature interpretation techniques. This may hint at further potential as readability predictors for other text domains. To note, the \textit{cv}-pattern in Cebuano is one of the most common consonant-vowel combinations \cite{zorc1976bisayan,yap2019cebuano}.
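The ranking itself reduces to a few lines (a sketch; \texttt{X}, \texttt{y}, and \texttt{feature\_names} denote the feature matrix, grade labels, and feature labels):
\begin{verbatim}
from scipy.stats import spearmanr

rho = {name: spearmanr(X[:, j], y).correlation
       for j, name in enumerate(feature_names)}
top10 = sorted(rho.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
\end{verbatim}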
\begin{table}[!htbp]
\small
\begin{center}
\begin{tabular}{|c|c|c|}
\hline \bf Feature Set & \bf Predictor & \bf $\rho$ \\
\hline {TRAD} & unique\_words & 0.337 \\
\hline {SYLL} & cv\_density & 0.327\ \\
\hline \multirow{2}{*}{TRAD} & word\_count & 0.298 \\
& average\_sentence\_len & 0.295 \\
\hline {SYLL} & cvc\_density & 0.293 \\
\hline {TRAD} & sentence\_count & 0.292 \\
\hline \multirow{2}{*}{SYLL} & consonant\_cluster & 0.293 \\
& ccv\_density & 0.217 \\
\hline {TRAD} & polysyll\_count & 0.192 \\
\hline {SYLL} & vc\_density & 0.190 \\
\hline
\end{tabular}
\caption{Feature ranking using Spearman correlation. }
\label{featureRankingCorrelation}
\end{center}
\end{table}
\section{Outlook}
We developed the first baseline machine learning model for readability assessment in Cebuano. Among the three linguistic feature groups extracted to build the model, the combination of traditional or surface-based features (TRAD) with syllable pattern based features (SYLL) produced the highest performance using an optimized Random Forest model. One of the main challenges in the field is the limited amount of resources, both tools and data, especially for low-resource languages \cite{vajjala2021trends}. To answer this call and encourage growth of research in this direction, we open-sourced the compiled dataset of annotated Cebuano reading materials and the code for model development.
\section*{Acknowledgements}
The authors would like to thank the anonymous reviewers for their valuable feedback. This project is supported by the Google AI Tensorflow Faculty Grant awarded to Joseph Marvin Imperial.
|
2,877,628,088,530 | arxiv | \section{Introduction}
The recent surge in popularity of podcasts presents a big opportunity and a unique set of challenges to existing content discovery and recommendation systems. Unlike listening to music, podcasts usually require active attention from a listener for extended periods. Subjective attributes such as the speaker's presentation style, type of humor, or the production quality could influence the listener's preference but are hard to discern from a text description.
In the video domain, movie trailers allow a viewer to preview some content and make a subjective decision to watch a film. The frequent release schedule for podcasts would make the production of such trailers for each episode impractical. Audio summaries have shown promise in improving the performance of spoken document search algorithms \cite{spina2017extracting}. We propose a method to create short podcast audio summaries in an automated manner. Such summaries could inform the listener about the topics of the podcast as well as subjective attributes like presentation style and production quality.
Podcasts present a unique set of challenges for an audio-based summarization algorithm. For example, podcasts usually focus on spoken word content and often contain overlapping speech from multiple speakers, free-form speech, audio effects, background music, and advertisements. A supervised learning algorithm operating in the audio domain would have to identify the nature of an audio segment before being able to judge its importance. This would require a large amount of training data manually annotated by listening to the audio in multiple passes, which is a difficult and time-consuming process.
However, as podcasts largely contain spoken-word content, summarization can also be performed in the text-domain on the transcript of an episode. In this work, we present our `Pod'cast `Summ'arization (\textit{PodSumm}) method to obtain podcast audio summaries guided by the text domain. PodSumm works by first transcribing the spoken content of a podcast, then, identifying important sentences in the transcript, and finally, stitching together the respective audio segments. We introduce a protocol and create an internal dataset specific to this task. In summary, we introduce the concept of podcast audio summaries to aid in content discovery. These summaries allow a listener to rapidly preview an episode before investing time in listening to the entire episode.
\section{Related work} \label{section:related_work}
\subsection{Text summarization}
Extractive text summarization selects the most salient sentences of a document to form a summary. Neural models treat this task as a classification problem in which a neural encoder creates a latent representation of the sentences, followed by a classifier scoring the sentences on their importance for the summary \cite{nallapati2017, zhang2018neural}. With the rising popularity of Deep Neural Networks and Transformers \cite{vaswani2017attention}, pre-trained language models, particularly transformer models such as BERT \cite{devlin2019bert}, have shown promise in a wide range of NLP tasks. BERT can express the semantics of a document and obtain sentence-level representations. Recent approaches to text summarization like PreSumm \cite{liu2019text} and MatchSum \cite{zhong2020extractive} leverage BERT and achieve state-of-the-art performance on many benchmark datasets. These present a promising avenue for further development and expansion to other application domains.
\subsection{Speech summarization}
Speech summarization requires directly processing audio streams and providing snippets to create a combined audio summary. Prior solutions have modelled this task as a feature classification problem \cite{furui2003speech}, a speech-text co-training problem \cite{xie2010semi}, and a graph clustering problem \cite{garg2009clusterrank}. Neural extractive summarization approaches such as reinforcement learning \cite{wu2018learning}, hierarchical modeling \cite{liu2019hierarchical} and sequence-to-sequence modeling \cite{keneshloo2019deep} have shown promising results, though on a limited variety of data. Automated speech summarization still has many open research problems such as multi-party speech, spontaneous speech, handling disfluencies, and more.
\subsection{Podcast Summarization}
There has been limited research on automated methods for podcast audio summarization. The diversity and narrative nature of podcasts with spontaneous speech, music, audio effects, and advertisements may present challenges for existing speech summarization methods. To address this issue, we pose the podcast audio summarization problem as multi-modal data summarization where we create an audio summary of a podcast with guidance from the text-domain.
\section{Our Method} \label{section:methods}
\subsection{PodSumm Architecture}
The \textit{PodSumm} method comprises a sequence of steps, starting with the original audio stream and resulting in an audio summary obtained as an output (figure \ref{fig:PodSumm_blockdiag}). The first stage of the process is Automatic Speech Recognition (ASR), which generates a transcript. We then process the text to segment each podcast transcript into sentences. Subsequently, we use a fine-tuned text summarization model to select important sentences for inclusion in the final summary. We discuss each stage in detail below.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/Figure_PodSumm_System.pdf}
\caption{Block diagram of the PodSumm method and its modules.}
\label{fig:PodSumm_blockdiag}
\end{figure}
\subsubsection{Automatic Speech Recognition} \label{subsubsec:ASR}
ASR methods perform the task of speech-to-text transcription. They handle complexities related to varied accents, prosody and acoustic features, and speaker demographics. The quality of ASR transcriptions varies significantly and depends on the underlying training data \cite{Hannun2014DeepSS}. We leveraged a publicly available off-the-shelf solution (AWS Transcribe \footnote{https://aws.amazon.com/transcribe/}) \cite{di2019robust}, which allowed us to limit errors and focus on the other core modules of our pipeline.
\subsubsection{Text Processing}
The audio transcripts obtained in section \ref{subsubsec:ASR} above contain tuples of 1) the text of individual words or punctuation marks, 2) the start and end timestamps in the audio, and 3) a confidence score for the text prediction. We use spaCy \footnote{https://spacy.io/usage/linguistic-features\#sbd} to segment the text into sentences, together with their corresponding start and end times in the audio. Additionally, we force a sentence break wherever a pause of over 2 seconds between words occurs. This helps us to better handle cases where the ASR method missed a punctuation mark, which frequently happens when music is played between speech segments.
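A simplified version of this segmentation step is sketched below (ours; it folds sentence-boundary detection into the ASR punctuation tokens instead of calling spaCy, and assumes every item, including punctuation, carries timestamps):
\begin{verbatim}
def segment(items, max_pause=2.0):
    """items: ASR tuples (token, start, end, confidence).
    Returns (sentence_text, start, end) triples, breaking at
    '.', '!' or '?' and at pauses longer than max_pause seconds."""
    sentences, words, t0, prev_end = [], [], None, None
    for token, start, end, _conf in items:
        if words and start - prev_end > max_pause:   # forced break
            sentences.append((" ".join(words), t0, prev_end))
            words, t0 = [], None
        if t0 is None:
            t0 = start
        words.append(token)
        prev_end = end
        if token in {".", "!", "?"}:                 # punctuation break
            sentences.append((" ".join(words), t0, end))
            words, t0 = [], None
    if words:
        sentences.append((" ".join(words), t0, prev_end))
    return sentences
\end{verbatim}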
\subsubsection{Text summary generation}
We then build text summaries by selecting appropriate sentences from the transcript, by leveraging advances in the field of extractive text summarization. We choose the PreSumm \cite{liu2019text} model, which builds upon BERT \cite{devlin2019bert} to obtain a sentence level encoding, and stacks inter-sentence transformer layers to capture document-level features for summarization.
We find that a PreSumm model pre-trained on the CNN/DailyMail dataset \cite{hermann2015teaching} does not produce adequate summaries for podcasts. Motivated by the lack of research datasets for this task, we created a dataset to further fine-tune the model for podcasts, described in Section \ref{section:dataset}.
The extractive PreSumm model performs summarization on a document with sentences $[sent_1 , sent_2, \dots, sent_m]$ by assigning a score $y_i \in [0,1]$ to each $sent_i$, indicating exclusion from or inclusion in the summary. The model is trained using a binary cross-entropy loss to capture the difference between the prediction $\hat{y_i}$ and the ground-truth label $y_i$.
\subsubsection{Audio generation}
The predictions of the text summarization model include the sentence indices and their respective scores. Using the stored sentence offsets, the audio segments representing the selected sentences are stitched together to obtain an audio summary.
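A minimal stitching step could look as follows (a sketch using \texttt{pydub}; the short crossfade is our addition to soften the cuts):
\begin{verbatim}
from pydub import AudioSegment

def stitch_summary(audio_path, selections, out_path, crossfade_ms=50):
    """selections: (start_sec, end_sec) offsets of the selected
    sentences, in transcript order."""
    audio = AudioSegment.from_file(audio_path)
    summary = None
    for start, end in selections:
        clip = audio[int(start * 1000):int(end * 1000)]  # ms slices
        summary = clip if summary is None \
            else summary.append(clip, crossfade=crossfade_ms)
    summary.export(out_path, format="mp3")
\end{verbatim}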
\subsection{Dataset Creation} \label{section:dataset}
To address the lack of datasets for the task of podcast summarization, we curated a dataset to support the development and evaluation of our method. We selected 19 unique podcast series from different genres, with on average 16.3$\pm$6.28 episodes per series. The dataset contains a total of 188 hours of podcasts, with an average duration of 36.5$\pm$19.8 minutes per episode. We built an annotation tool that presented the annotator with a sequence of sentences from the transcript of the episode, as well as the metadata from the podcast feed, including the original audio of the episode. Each sentence was paired with its respective audio segment, derived using the offsets of each segment. Additionally, the annotation tool dynamically generated audio and text summaries based on the annotator's selection, enabling them to verify their choices.
The annotator was instructed to follow the protocol outlined below.
\begin{enumerate}
\item \label{item:listen} Read the provider submitted description (if available) or listen to the audio of the podcast episode to understand the context and core message.
\item \label{item:select} Select the set of sentences that represent the summary of the podcast. The raters were requested to select continuous sequences of sentences where possible to minimize cuts in the audio while keeping the total summary length within 30-120 seconds.
\item \label{item:playback} Listen to the newly created sentence summary and repeat the above steps if necessary. Submit the annotations when a satisfactory summary is obtained.
\end{enumerate}
The resulting annotations include a set of sentence indices selected by the annotator as the most suitable candidates to create a summary. Due to resource limitations, each episode was annotated by a single annotator, so we are unable to compute inter-annotator agreement. Discarding some outliers, we find that it took 3 minutes 57 seconds $\pm$ 4 minutes 51 seconds to annotate a single episode. We collected a total of 309 episodes with an average of 14.57 $\pm$ 7.01 selected sentences per summary.
\subsection{Model Training}
We begin with a PreSumm \cite{liu2019text} model pre-trained on the CNN/DailyMail dataset for 18000 steps \cite{hermann2015teaching}, provided by the authors \footnote{https://github.com/nlpyang/PreSumm}, who report strong performance (ROUGE-$\mathit{(1,2,l)}=(43.85, 20.34, 39.90)$). We then fine-tune the model on our podcast dataset for 6000 steps as described in \cite{liu2019text}, beyond which we noticed overfitting on our training set. The pre-trained model allows position embeddings of length 512, which we deemed sufficient for our application, as the annotations in our dataset were contained within the first 512 tokens even for longer episodes.
Model checkpoints were saved and evaluated on the test set every 1000 steps. The best performing checkpoint was used for the ablation experiments and to report system performance. To predict summaries on new, unseen data, we obtain the predicted score for each sentence. Subsequently, the top-$\mathit{n}$ sentences are selected from the rank-ordered candidates to create the final summary.
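A sketch of this selection step (ours; we assume the selected sentences are restored to transcript order so that the stitched audio remains coherent):
\begin{verbatim}
import numpy as np

def select_top_n(scores, n=12):
    """Indices of the n highest-scoring sentences, restored to
    transcript order."""
    top = np.argsort(scores)[::-1][:n]
    return sorted(top.tolist())
\end{verbatim}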
\subsection{Evaluation Metrics}
We report the precision, recall and F-measure for the ROUGE-1, ROUGE-2, ROUGE-$\mathit{l}$ scores \cite{lin-2004-rouge}. These metrics were selected to measure the ability of the model to produce summaries with overlapping words relative to the reference (recall), the prediction (precision), and their harmonic mean (F-measure). The $\mathit{n}$ in the ROUGE-$\mathit{n}$ metric signifies unigram overlap ($\mathit{n}$=1, single-word overlap), bigram overlap ($\mathit{n}$=2, consecutive-word overlap), and longest common subsequence overlap ($\mathit{n}$=$\mathit{l}$).
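These scores can be computed with, e.g., the \texttt{rouge-score} package (the package choice is ours):
\begin{verbatim}
from rouge_score import rouge_scorer

# reference and prediction are plain-text summaries
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(reference, prediction)
# e.g. scores["rouge1"].precision, .recall, .fmeasure
\end{verbatim}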
\subsection{Cross Validation Experiment} \label{section:cross_val}
Our current dataset consists of a total of 309 podcast episodes. This number is small in comparison to datasets such as CNN/DailyMail (312,102 data-label pairs) \footnote{https://github.com/abisee/cnn-dailymail} \cite{DBLP:journals/corr/SeeLM17, hermann2015teaching}.
To mitigate the effect of sampling bias, we report the mean and standard deviation of the ROUGE metrics from a $\mathit{k}$-fold ($\mathit{k}$=5) cross-validation experiment. The model was trained on the training split (80\% or 247 samples) and performance reported on the test split (20\% or 62 episodes), and the process was repeated for each fold.
\subsection{Data Augmentation}
We perform data augmentation to compensate for the relatively small size of our dataset and to increase the generalization ability of the model. We observe that most previews and advertisements, which should not be included in a summary, are similar across podcast episodes. Here we describe a method to automatically find segments of repetitive content, followed by our augmentation procedure.
We first find the indices of the sentences in the transcript that also occur in other episodes across our dataset (e.g., [0,1,2,3,4,6,7,30,31,32,48]). We then clean up the indices, 1) to merge any nearby indices ([0,1,2,3,4] with [6,7]) into one large set, and 2) to remove any outliers ([48]). All such repetitive content segments are stored for use in augmentations. To generate an augmented output, if an episode has repetitive content, we replace it; otherwise, we prepend the transcript with a randomly selected repetitive segment to create a new data sample. For each transcript, we add 20 new samples, for a total augmented training set size of 5166 samples for each fold.
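The sketch below illustrates the clean-up and replace-or-prepend steps on sentence indices (ours; the gap and length thresholds are assumptions consistent with the example above):
\begin{verbatim}
import random

def merge_indices(idx, max_gap=2, min_len=2):
    """e.g. [0,1,2,3,4,6,7,30,31,32,48] -> [[0,1,2,3,4,6,7],
    [30,31,32]]: nearby indices are merged, outliers ([48]) dropped."""
    runs, cur = [], []
    for i in idx:
        if cur and i - cur[-1] > max_gap:
            runs.append(cur)
            cur = []
        cur.append(i)
    if cur:
        runs.append(cur)
    return [r for r in runs if len(r) >= min_len]

def augment(sentences, runs, segment_bank):
    """Swap the episode's first repetitive run for a random repetitive
    segment from the bank, or prepend one if the episode has none."""
    seg = random.choice(segment_bank)
    if runs:
        lo, hi = runs[0][0], runs[0][-1]
        return sentences[:lo] + seg + sentences[hi + 1:]
    return seg + sentences
\end{verbatim}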
\subsection{Ablation Studies}
\subsubsection{Effect of number of candidate sentences}
Similar to PreSumm, we select the top-$\mathit{n}$ sentences with the highest scores as our predictions. We study the effect of varying the number of sentences selected to represent the summary from the rank-ordered candidates in the model prediction. In our experiment, $\mathit{n} \in \{5,9,12,15\}$ was varied and the ROUGE-$\mathit{(1,2,l)}$ scores are reported.
\subsubsection{Effect of data augmentation}
The data augmentation applied during training alters the repetitive content preceding the sentences relevant to the summary. To test the effect of the data augmentation scheme on the model performance, we performed a fine-tuning experiment with and without data augmentation and report the system performance metrics.
\section{Results}
We summarize our results and ablation studies in Table \ref{table:results_fine_tuning}. As outlined in section \ref{section:cross_val}, we report the mean and standard deviation of the F-measure for the 3 metrics over the 5-fold cross validation experiment. Similar to prior work \cite{nallapati2017,zhong2020extractive,liu2019text}, we use a simple baseline, ``LEAD-$\mathit{n}$'', which selects the first $\mathit{n}$ sentences of the document as a summary.
We find that LEAD-15 performs well, only slightly worse than a PreSumm model pre-trained on the CNN/DailyMail dataset with no fine-tuning. After fine-tuning on our dataset (PreSumm (FT, $\mathit{k}=12$)), we find significant improvements in F-measure for all ROUGE-$\mathit{(1,2,l)}$ metrics over the baseline and the model with no fine-tuning. The model with augmentation (PreSumm (FT + Aug, $\mathit{k}=12$)) further improves performance, demonstrating that model performance on this task improves with even a small amount of task-specific data augmentation.
In the ablation study, we find that selecting the top ($\mathit{k}$=12) sentences produced the best results, compared to ($\mathit{k}$=5, 9 or 15).
We display the distribution of sentence indices in figure \ref{fig:sentence_index_distance}. The ground truth distribution indicates that the initial sentences are less related to the podcast summary, which is corroborated by the relatively high performance of the LEAD-15 baseline relative to the other LEAD-$\mathit{k}$ ($\mathit{k}$=5, 9 or 12) scores. We also see that the model without fine-tuning, PreSumm (No FT), is biased to select sentences from the beginning of the document, which is likely a property of the CNN/DailyMail dataset. The distributions after fine-tuning (FT, FT + Aug) are closer to the ground truth distribution, which is reflected in the metrics. However, the tails of these models still appear to follow the distribution of the model without fine-tuning. This highlights the need for further analysis and model development on a larger dataset to account for all possible variations of the underlying data.
We present an example transcript along with the model predictions for PreSumm (no FT, $\mathit{k}=12$) in table \ref{table:example_summary_no_ft} and PreSumm (FT + Aug, $\mathit{k}=12$) in table \ref{table:example_summary_1}. The former model, with no fine-tuning, selects many sentences that are not relevant to the episode. In table \ref{table:example_summary_1} we see 9 true positive sentences (in green), 1 false-positive sentence (in blue) and 5 false-negative sentences (in red), and 11 repeated content sentences (in magenta), 2 of which were falsely predicted by the model (in cyan). This demonstrates that our method is able to correctly identify important sentences from the podcast transcription. The transcript also shows some errors that have accumulated through the system, e.g., variations in spoken words (\textit{.org} mistranscribed as \textit{dot org's}), incorrect sentence segmentation between \textit{It is 7:30 p.m.} and \textit{On January 30th}, etc. Errors like these can complicate any downstream text processing; for example, a reader may only identify 3 false-negative sentences in the above example, whereas the system identified 5 due to incorrect sentence segmentation.
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\hline
Metric & ROUGE-$\mathit{1}$ & ROUGE-$\mathit{2}$ & ROUGE-$\mathit{l}$ \\ \hline
\multicolumn{4}{l}{Baseline} \\ \hline
LEAD-5 & 0.281 $\pm$ 0.019 & 0.166 $\pm$ 0.025 & 0.273 $\pm$ 0.018 \\
LEAD-9 &0.401 $\pm$ 0.029 & 0.257 $\pm$ 0.037 & 0.39 $\pm$ 0.028 \\
LEAD-12 & 0.465 $\pm$ 0.025 & 0.324 $\pm$ 0.032 & 0.455 $\pm$ 0.024 \\
LEAD-15 & \textbf{ 0.515 $\pm$ 0.026} & \textbf{ 0.389 $\pm$ 0.036} & \textbf{ 0.507 $\pm$ 0.026} \\ \hline
\multicolumn{4}{l}{Fine-Tuning} \\ \hline
PreSumm (no FT, $\mathit{k}=12$) & 0.527 $\pm$ 0.016 & 0.381 $\pm$ 0.024 & 0.518 $\pm$ 0.017 \\
PreSumm (FT, $\mathit{k}=12$) & 0.625 $\pm$ 0.028 & 0.511 $\pm$ 0.034 & 0.619 $\pm$ 0.029 \\
PreSumm (FT + Aug, $\mathit{k}=12$) & \textbf{0.636 $\pm$ 0.022} & \textbf{0.529 $\pm$ 0.032} & \textbf{0.631 $\pm$ 0.023} \\
\hline
\multicolumn{4}{l}{Ablation - $\mathit{k}$ sentences} \\ \hline
PreSumm $\mathit{k}=5$ & 0.563 $\pm$ 0.027 & 0.458 $\pm$ 0.041 & 0.554 $\pm$ 0.029 \\
PreSumm $\mathit{k}=9$ & 0.626 $\pm$ 0.023 & 0.516 $\pm$ 0.033 & 0.62 $\pm$ 0.023 \\
PreSumm $\mathit{k}=12$& \textbf{0.636 $\pm$ 0.022} & \textbf{0.529 $\pm$ 0.032} & \textbf{0.631 $\pm$ 0.023} \\
PreSumm $\mathit{k}=15$ & 0.628 $\pm$ 0.018 & 0.528 $\pm$ 0.027 & 0.623 $\pm$ 0.019 \\ \hline
\end{tabular}
\caption{Results for the baseline, 5-fold cross validation experiment and 2 ablation experiments for the PreSumm method. The F-measure for ROUGE-$\mathit{(1,2,l)}$ metrics for pre-trained PreSumm model and the model fine-tuned with PodSumm dataset reported on the test set for each fold. Summary statistics for each metric reported as mean $\pm$ std. dev. over the 5 folds.}
\label{table:results_fine_tuning}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{images/sentence_index_dist.pdf}
\caption{Selected sentence index vs. the normalized count (over all sentences in dataset) in the Ground Truth, predictions from PreSumm (No FT), fine-tuned PreSumm (FT), and fine-tuned PreSumm with augmentation(FT + Aug)}
\label{fig:sentence_index_distance}
\end{figure}
\begin{table}[ht]
\centering
\fcolorbox{black}{white}{
\begin{minipage}{\textwidth}
Hey there real quick before we start the show. California. \textcolor {blue}{We are coming your way for a live show next month on February 19th} \textcolor {cyan}{we are super excited to finally get to come to Southern California.} \textcolor {cyan}{We will be in 1000 Oaks with K C. L.} \textcolor {cyan}{You talking about the 2020 race and more to get your tickets, head over to NPR prisons dot org's.} \textcolor {cyan}{Oh, and if you're in Iowa, we have a show there tomorrow night Friday night, and there are still a few tickets available.} \textcolor {cyan}{Okay, here's the show. } \textcolor {cyan}{Hey there, it's the NPR politics podcast.} \textcolor {red}{It is 7:30} \textcolor {red}{p.m.} \textcolor {red}{On January 30th.} \textcolor {magenta}{I'm Tamara Keith.} \textcolor {cyan}{I cover the White House.} \textcolor {magenta}{I'm Aisha Roscoe.} \textcolor {magenta}{I also cover the White House} \textcolor {cyan}{and I'm Susan Davis.} \textcolor {magenta}{I cover Congress.} \textcolor {green}{Senate will convene as a court of impeachment today.} \textcolor {green}{The Senate impeachment trial is continuing with more questions and answers, senators asking questions, the House managers and the president's legal team answering those questions.} \textcolor {red}{And in fact, as we take this, the Q and A is still going on, so things could happen.} \textcolor {red}{That's why we do a time stamp.} \textcolor {red}{Um, Aisha, I'm wondering what stood out to you about today?} \textcolor {red}{Well, a lot of what the questions seem to be about was getting at this idea of.} \textcolor {green}{Is there a limit to what a president can do to get re elected?} \textcolor {red}{Because one of the president's lawyers representing him, Alan Dershowitz, made this argument that most presidents think their re election is in the public interest.} \textcolor {red}{And therefore, if they take actions to kind of help their reelection as long as it's not illegal, it's OK.} \textcolor {red}{And it really seemed like the senators were probing the limits of how far that argument can go on.}
\end{minipage}
}
\caption{PreSumm (no FT, $\mathit{k}=12$) output with correct predictions in \textcolor{green}{green}, false negatives in \textcolor{red}{red} and false positive sentences in \textcolor{blue}{blue}. Sentences detected as repetitive content (\textcolor{magenta}{magenta}) and also falsely predicted by the model (\textcolor{cyan}{cyan})}.
\label{table:example_summary_no_ft}
\end{table}
\begin{table}[ht]
\centering
\fcolorbox{black}{white}{
\begin{minipage}{\textwidth}
Hey there real quick before we start the show. California. We are coming your way for a live show next month on February 19th \textcolor {magenta}{we are super excited to finally get to come to Southern California.} \textcolor {magenta}{We will be in 1000 Oaks with K C. L.} \textcolor {cyan}{You talking about the 2020 race and more to get your tickets, head over to NPR prisons dot org's.} \textcolor {cyan}{Oh, and if you're in Iowa, we have a show there tomorrow night Friday night, and there are still a few tickets available.} \textcolor {magenta}{Okay, here's the show. } \textcolor {magenta}{Hey there, it's the NPR politics podcast.} \textcolor {red}{It is 7:30} \textcolor {red}{p.m.} \textcolor {red}{On January 30th.} \textcolor {magenta}{I'm Tamara Keith.} \textcolor {magenta}{I cover the White House.} \textcolor {magenta}{I'm Aisha Roscoe.} \textcolor {magenta}{I also cover the White House} \textcolor {magenta}{and I'm Susan Davis.} \textcolor {magenta}{I cover Congress.} \textcolor {green}{Senate will convene as a court of impeachment today.} \textcolor {green}{The Senate impeachment trial is continuing with more questions and answers, senators asking questions, the House managers and the president's legal team answering those questions.} \textcolor {green}{And in fact, as we take this, the Q and A is still going on, so things could happen.} \textcolor {red}{That's why we do a time stamp.} \textcolor {red}{Um, Aisha, I'm wondering what stood out to you about today?} \textcolor {green}{Well, a lot of what the questions seem to be about was getting at this idea of.} \textcolor {green}{Is there a limit to what a president can do to get re elected?} \textcolor {green}{Because one of the president's lawyers representing him, Alan Dershowitz, made this argument that most presidents think their re election is in the public interest.} \textcolor {green}{And therefore, if they take actions to kind of help their reelection as long as it's not illegal, it's OK.} \textcolor {green}{And it really seemed like the senators were probing the limits of how far that argument can go on.} \textcolor {red}{At one point, there was a question from Senator Susan Collins from Maine, a Republican, and and a few other Republicans, including Senators Crepeau, Blunt and Rubio.} \textcolor {blue}{And remember, all of the questions are submitted in writing to the chief justice, who then reads them aloud.}
\end{minipage}
}
\caption{PreSumm (FT + Aug, $\mathit{k}=12$) output with correct predictions in \textcolor{green}{green}, false negatives in \textcolor{red}{red} and false positive sentences in \textcolor{blue}{blue}. Sentences detected as repetitive content (\textcolor{magenta}{magenta}) and also falsely predicted by the model (\textcolor{cyan}{cyan})}.
\label{table:example_summary_1}
\end{table}
\section{Discussion}
In this work, we proposed PodSumm, a method to automatically generate audio summaries of podcasts via guidance from the text domain. The method involves transcribing the audio, followed by text processing and text summarization. An audio summary is then generated by stitching together the audio segments that correspond to the sentences selected by the text summarization. The resulting model fine-tuned on our dataset performed better than a LEAD-$\mathit{n}$ baseline and a model trained only on the CNN/DailyMail dataset.
As our method contains a sequence of steps, the performance of each module directly influences the final produced audio summaries. While in this paper we heavily leverage prior work from different fields, we believe custom modules would bring significant advantages. For example, a sentence segmentation model that is robust to transcription errors, or to missing punctuation due to background music, would allow us to leverage cheaper, less accurate ASR solutions. Further research is needed to develop and understand the effects of the individual modules specific to podcasts.
Although our proposed method showed improved performance after fine-tuning on our dataset, we recognize that the dataset's small size may restrict the generalization ability of the model on unseen data. Manual annotation of a large corpus of podcast data for this task is prohibitively expensive, but techniques like data augmentation could alleviate this to some extent.
\section{Conclusion}
We present a novel method to create audio summaries for podcasts via guidance from the text domain, and discuss its strengths and limitations. This work establishes a proof of principle and sets the direction for future development into a fully learned and automated method for podcast speech summarization. We look forward to newer methods emerging from the research community leading to an improved listener experience.
\begin{acks}
The authors thank Josh Morris for his counsel, Chinting Ko for his guidance on ASR, Joseph Renner, Jeff Scott, Gannon Gesiriech and Zafar Rafi for their feedback on the manuscript, and the contributions of our team members at the Media Technology Lab at Gracenote.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
|
2,877,628,088,531 | arxiv | \section{Introduction}
One of the most important recent findings in higher-dimensional
General Relativity is the one-rotational black ring solution of Emparan and Reall
\cite{ref1}.
This solution is a vacuum, axially symmetric and asymptotically flat solution
of the five-dimensional General Relativity.
The topology of the event horizon is $S^1 \times S^2$. The black ring rotates
along the direction of the $S^1$. The extension of this solution to
a two-rotational one has not yet been achieved.
Recently the present authors found a black ring solution with $S^2$ rotation
by using a solitonic solution-generating technique \cite{Mishima:2005id}.
In the analysis we reduced the problem to the four-dimensional one
\cite{{Mazur:1987},{Dereli:1977},{Bruckman:1985yk}}
and applied the formula \cite{ref7} to obtain the metric functions.
The seed solution of this ring is a simple Minkowski spacetime.
Because the effect of rotation cannot compensate for
the gravitational attractive force, the ring has a kind of strut structure.
Figueras found a C-metric expression of $S^2$-rotating black ring solution
\cite{Figueras:2005zp}.
Tomizawa et al. showed that the same black ring solution
is obtained by using the
inverse scattering method \cite{Tomizawa:2005wv}.
In this paper we generate the black ring with $S^1$ rotation by
the solitonic solution-generating technique.
We find that the seed solution is not a Minkowski spacetime.
The obtained solution has more parameters
than Emparan and Reall's black ring.
It is therefore an extension of the result of Emparan and Reall,
and we need additional conditions to reduce the solution we obtained
to their black ring solution.
In this analysis we use prolate-spheroidal coordinates.
The relation between these and the canonical coordinates
considered by Harmark \cite{refHAR} is analyzed.
We also investigate the correspondence between the seed solutions and the
solitonic ones from the viewpoint of rod structure.
This viewpoint would be helpful in constructing seed solutions for
further new five-dimensional solutions.
We cannot generate two-rotational solutions by the solution-generating
technique used here.
However, if we use another technique, e.g., the inverse scattering method,
on the seed solution used in this analysis, or on the seed with some corrections,
a two-rotational black ring solution may be obtained.
First, we briefly explain the procedure to generate axisymmetric solutions
in five-dimensional general relativity.
The spacetimes which we consider
satisfy the following conditions:
(c1) five dimensions, (c2) asymptotic flatness,
(c3) being solutions of the
vacuum Einstein equations, (c4) having three commuting Killing vectors
including time translational invariance, and
(c5) having a single non-zero angular momentum component.
Under the conditions (c1) -- (c5),
we can employ the following Weyl-Papapetrou metric form
(for example, see the treatment in \cite{refHAR}),
\begin{eqnarray}
ds^2 &=&-e^{2U_0}(dx^0-\omega d\phi)^2+e^{2U_1}\rho^2(d\phi)^2
+e^{2U_2}(d\psi)^2 \nonumber \\ &&
+e^{2(\gamma+U_1)}\left(d\rho^2+dz^2\right) ,
\label{WPmetric}
\end{eqnarray}
where $U_0$, $U_1$, $U_2$, $\omega$ and $\gamma$ are functions of
$\rho$ and $z$.
Then we introduce new functions
$S:=2U_0+U_2$ and $T:=U_2$ so that
the metric form (1) is rewritten into
\begin{eqnarray}
ds^2 &=&e^{-T}\left[
-e^{S}(dx^0-\omega d\phi)^2
+e^{T+2U_1}\rho^2(d\phi)^2 \right. \nonumber \\&&\hskip 0cm \left.
+e^{2(\gamma+U_1)+T}\left(d\rho^2+dz^2\right) \right]
+e^{2T}(d\psi)^2.
\label{MBmetric}
\end{eqnarray}
Using this metric form
the Einstein equations are reduced to the following set of equations,
\begin{eqnarray*}
&&{\bf\rm (i)}\quad
\nabla^2T\, =\, 0, \\
&&{\bf\rm (ii)}\quad
\left\{\begin{array}{ll}
& \del{\rho}\gamma_T={\displaystyle
\frac{3}{4}\,\rho\,
\left[\,(\del{\rho}T)^2-(\del{z}T)^2\,\right]}\,\ \ \\[3mm]
& \del{z}\gamma_T={\displaystyle
\frac{3}{2}\,\rho\,
\left[\,\del{\rho}T\,\del{z}T\,\right], }
\end{array}\right. \\
&&{\bf\rm (iii)}\quad
\nabla^2{{\mathcal E}}_S=\frac{2}{{{\mathcal E}}_S+{\bar{{\mathcal E}}}_S}\,
\nabla{{\mathcal E}}_S\cdot\nabla{{\mathcal E}}_S , \\
&&{\bf\rm (iv)}\quad
\left\{\begin{array}{ll}
& \del{\rho}\gamma_S={\displaystyle
\frac{\rho}{2({{\mathcal E}}_S+{\bar{{\mathcal E}}}_S)}\,
\left(\,\del{\rho}{{\mathcal E}}_S\del{\rho}{\bar{{\mathcal E}}}_S
-\del{z}{{\mathcal E}}_S\del{z}{\bar{{\mathcal E}}}_S\,
\right)} \\
& \del{z}\gamma_S={\displaystyle
\frac{\rho}{2({{\mathcal E}}_S+{\bar{{\mathcal E}}}_S)}\,
\left(\,\del{\rho}{{\mathcal E}}_S\del{z}{\bar{{\mathcal E}}}_S
+\del{\rho}{{\mathcal E}}_S\del{z}{\bar{{\mathcal E}}}_S\,
\right)},
\end{array}\right. \\
&&{\bf\rm (v)}\quad
\left( \del{\rho}\Phi,\,\del{z}\Phi \right)
=\rho^{-1}e^{2S}\left( -\del{z}\omega,\,\del{\rho}\omega \right), \\
&&{\bf\rm (vi)}\quad
\gamma=\gamma_S+\gamma_T, \\
&&{\bf\rm (vii)}\quad
U_1=-\frac{S+T}{2},
\end{eqnarray*}
where $\Phi$ is defined through the equation (v) and the function
$\mathcal{E_S}$ is defined by
$
\,{{\mathcal E}}_S:=e^{S}+i\,\Phi\,.
$
The most non-trivial task to obtain new metrics is to solve
the equation (iii) because of its non-linearity.
To overcome this difficulty,
here we use a method similar to Neugebauer's
B\"{a}cklund transformation \cite{Neugebauer:1980}
or the Hoenselaers-Kinnersley-Xanthopoulos transformation \cite{Hoenselaers:1979mk}.
To write down the exact form of the metric functions,
we follow the procedure
given by Castejon-Amenedo and Manko \cite{ref7}.
In the five-dimensional spacetime we start from the following form of a static seed metric
\begin{eqnarray}
ds^2 &=& e^{-T^{(0)}}\left[
-e^{S^{(0)}}(dx^0)^2
+e^{-S^{(0)}}\rho^2(d\phi)^2 \right. \nonumber \\ &&\hskip 0cm \left.
+e^{2\gamma^{(0)}-S^{(0)}}\left(d\rho^2+dz^2\right) \right]
+e^{2T^{(0)}}(d\psi)^2.
\nonumber
\end{eqnarray}
For this static seed solution, $e^{S^{(0)}}$, of the Ernst equation (iii),
a new Ernst potential can be written in the form
\begin{equation}
{\cal E}_S = e^{S^{(0)}}\frac{x(1+ab)+iy(b-a)-(1-ia)(1-ib)}
{x(1+ab)+iy(b-a)+(1-ia)(1-ib)},
\nonumber
\end{equation}
where $x$ and $y$ are the prolate spheroidal coordinates:
$
\,\rho=\sigma\sqrt{x^2-1}\sqrt{1-y^2},\ z=\sigma xy\,,
$
with $\sigma>0$. The ranges of these coordinates are
$1 \le x$ and $-1 \le y \le 1$.
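For reference, the map between $(\rho,z)$ and $(x,y)$, together with its inverse based on the identity $\sqrt{\rho^2+(z\pm\sigma)^2}=\sigma(x\pm y)$, can be checked numerically with the following small sketch (ours, not part of the solution-generating procedure):
\begin{verbatim}
import math

def to_cylindrical(x, y, sigma):
    rho = sigma * math.sqrt(x * x - 1.0) * math.sqrt(1.0 - y * y)
    return rho, sigma * x * y                   # (rho, z)

def to_prolate(rho, z, sigma):
    r_plus = math.hypot(rho, z + sigma)         # = sigma * (x + y)
    r_minus = math.hypot(rho, z - sigma)        # = sigma * (x - y)
    return ((r_plus + r_minus) / (2.0 * sigma),   # x >= 1
            (r_plus - r_minus) / (2.0 * sigma))   # -1 <= y <= 1

# Round trip: to_prolate(*to_cylindrical(2.0, 0.3, 1.5), 1.5)
# returns (2.0, 0.3) up to floating-point error.
\end{verbatim}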
The functions $a$ and $b$ satisfy the following
simple first-order differential equations
\begin{eqnarray}
(x-y)\del{x}a&=&
a\left[(xy-1)\del{x}S^{(0)}+(1-y^2)\del{y}S^{(0)}\right], \nonumber \\
(x-y)\del{y}a&=&
a\left[-(x^2-1)\del{x}S^{(0)}+(xy-1)\del{y}S^{(0)}\right], \nonumber \\
(x+y)\del{x}b&=&
-b\left[(xy+1)\del{x}S^{(0)}+(1-y^2)\del{y}S^{(0)}\right] , \nonumber\\
(x+y)\del{y}b&=&
-b\left[-(x^2-1)\del{x}S^{(0)}+(xy+1)\del{y}S^{(0)}\right]. \nonumber \\
\label{eq:ab}
\end{eqnarray}
The metric functions for the five-dimensional metric
(\ref{MBmetric}) are obtained
by using the formulas shown by \cite{ref7},
\begin{eqnarray}
e^{S}&=&e^{S^{(0)}}\frac{A}{B} \label{e^S} \\
\omega&=&2\sigma e^{-S^{(0)}}\frac{C}{A}+C_1 \label{omega} \\
e^{2\gamma}&=&C_2(x^2-1)^{-1}A
e^{2\gamma'}, \label{e_gamma}
\end{eqnarray}
where $C_1$ and $C_2$ are constants and
$A$, $B$ and $C$ are given by
\begin{eqnarray*}
A&:=&(x^2-1)(1+ab)^2-(1-y^2)(b-a)^2, \\
B&:=&[(x+1)+(x-1)ab]^2+[(1+y)a+(1-y)b]^2, \\
C&:=&(x^2-1)(1+ab)[(1-y)b-(1+y)a]
\nonumber\\ &&\hskip 0.9cm
+(1-y^2)(b-a)[x+1-(x-1)ab].
\end{eqnarray*}
In addition,
the $\gamma'$ in Eq.~(\ref{e_gamma}) is the $\gamma$ function corresponding to the static metric,
\begin{eqnarray}
ds^2 &=& e^{-T^{(0)}}\left[
-e^{2U^{\mbox{\tiny(BH)}}_0+S^{(0)}}(dx^0)^2
+e^{-2U^{\mbox{\tiny(BH)}}_0-S^{(0)}}\rho^2(d\phi)^2 \right.
\nonumber \\ &&\hskip -0.cm \left.
+e^{2(\gamma'-U^{\mbox{\tiny(BH)}}_0)-S^{(0)}}\left(d\rho^2+dz^2\right) \right]
+e^{2T^{(0)}}(d\psi)^2 \label{static_5}
\end{eqnarray}
where ${\displaystyle U_{0}^{\mbox{\tiny(BH)}}=\frac{1}{2}\ln\left( \frac{x-1}{x+1} \right)}$.
The function $T$ then equals $T^{(0)}$, and $U_1$ is given by
the Einstein equation (vii).
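The algebraic part of this construction is straightforward to transcribe and check numerically; the following sketch (ours) evaluates Eqs. (\ref{e^S}) and (\ref{omega}) from given values of $a$, $b$ and the seed potential, which must still be obtained by integrating Eqs. (\ref{eq:ab}):
\begin{verbatim}
import math

def metric_functions(x, y, a, b, S0, sigma, C1=0.0):
    """e^S and omega from the seed potential S0 = S^{(0)} and the
    auxiliary functions a, b evaluated at the point (x, y)."""
    A = (x * x - 1) * (1 + a * b) ** 2 - (1 - y * y) * (b - a) ** 2
    B = ((x + 1) + (x - 1) * a * b) ** 2 \
        + ((1 + y) * a + (1 - y) * b) ** 2
    C = ((x * x - 1) * (1 + a * b) * ((1 - y) * b - (1 + y) * a)
         + (1 - y * y) * (b - a) * (x + 1 - (x - 1) * a * b))
    e_S = math.exp(S0) * A / B
    omega = 2.0 * sigma * math.exp(-S0) * C / A + C1
    return e_S, omega
\end{verbatim}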
Using the solution-generating technique described above, we construct
the $S^1$-rotating black ring solution obtained by Emparan and Reall.
The most important point is to find the seed metric of the
black ring solution. To do so, it is useful to use
the rod structures, which were studied for the higher-dimensional Weyl solutions
by Emparan and Reall \cite{ref8} and for the nonstatic solutions
by Harmark \cite{refHAR}.
Using this rod structure analysis,
we found
the seed metric of the $S^1$-rotating black ring solution
by analogy with the relations between
the $S^2$-rotating black ring and its seed metric.
(See \cite{refHAR} for the definition of the rod structure.)
We show the schematic pictures of rod structures of
the $S^2$-rotating black ring and its seed solution in Fig.
\ref{fig:rods_S2} \cite{Mishima:2005id}.
Through the solitonic transformation
the segment $[-\sigma,\sigma]$
of the semi-infinite spacelike rod corresponding to the $\phi$-axis
is changed into a finite timelike rod.
To indicate that the $x^0$ and $\phi$ components of the eigenvector
are not zero, we put the finite rod between the $x^0$ and $\phi$ axes in Fig. \ref{fig:rods_S2}.
In the resulting solution this segment corresponds to
the event horizon with $\phi$-rotation.
The rod structure of the $S^1$-rotating black ring was investigated by
Harmark \cite{refHAR}. There are two semi-infinite spacelike rods
in the directions of $\partial/\partial \psi$ and
$\partial/\partial \phi$.
Note that these two semi-infinite spacelike rods
assure the asymptotic flatness of the spacetime.
Also there is a finite spacelike rod
in the $\partial/\partial\psi$ direction. The finite timelike rod is flanked by the finite and
semi-infinite spacelike rods in the $\partial/\partial\psi$ direction on either side.
This timelike rod corresponds to an event horizon with $\phi$-rotation.
Now we
construct the seed solution for the $S^1$-rotating black ring.
For this purpose
we trace back from the rod structure of the
$S^1$-rotating black ring
to that of the seed solution, referring to the analysis of the $S^2$-rotating
black ring.
This is achieved by changing the
finite timelike rod into a
finite spacelike rod in the $\partial/\partial\phi$ direction.
In Fig. \ref{fig:rods_ER} we show the schematic pictures of
these two rod structures.
\begin{figure}
\includegraphics[scale=0.33,angle=0]{rods_S2_seed.eps}\\
\includegraphics[scale=0.33,angle=0]{rods_S2.eps}
\caption{Schematic pictures of rod structures. The upper panel
shows the rod structure of Minkowski spacetime, which is a seed of
$S^2$-rotating black ring. The lower panel shows the rod structure
of the $S^2$-rotating black ring. The segment $[-\sigma,\sigma]$
of the semi-infinite rod in the upper panel is transformed into
the finite timelike rod with $\phi$-rotation
by the solution-generating transformation.
The eigenvector of the finite timelike rod in the lower panel has non-zero
$\phi$ component. Therefore we put this rod between $x^0$ and $\phi$ axes.
}
\label{fig:rods_S2}
\end{figure}
\begin{figure}
\includegraphics[scale=0.33,angle=0]{rods_ER_seed.eps}\\
\includegraphics[scale=0.33,angle=0]{rods_ER.eps}
\caption{Schematic pictures of rod structures. The upper panel
shows the rod structure of seed metric of
$S^1$-rotating black ring. The lower panel shows the rod structure
of $S^1$-rotating black ring. The finite spacelike rod
$[-\eta_1\sigma,\eta_2\sigma]$
in the upper panel is altered to the finite timelike rod by the solution-generating transformation.}
\label{fig:rods_ER}
\end{figure}
The seed metric of the $S^1$-rotating black ring is summarized as follows.
The $\psi$-$\psi$ components of the metrics of the
black ring and its seed are exactly the same,
and the 0-0 component of the seed metric is $-1$.
The seed functions of
$S^1$-rotating black ring solution are obtained as
\begin{equation}
S^{(0)}=T^{(0)}= \tilde{U}_{\lambda\sigma} +\tilde{U}_{-\eta_1 \sigma}
-\tilde{U}_{\eta_2 \sigma},
\label{eq:seed}
\end{equation}
where the function ${\tilde{U}}_{d}$ is defined as
${\tilde{U}}_{d}:=\frac{1}{2}\ln\left[R_{d}+(z-d)\right]$
and $R_d=\sqrt{\rho^2+(z-d)^2}$.
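For later use we note that the potentials associated with the points $z=\pm\sigma$ factorize in the prolate spheroidal coordinates: using $R_{\pm\sigma}=\sigma(x\mp y)$ one finds
\begin{equation}
e^{2{\tilde{U}}_{\sigma}}=\sigma(x-1)(1+y), \qquad
e^{2{\tilde{U}}_{-\sigma}}=\sigma(x+1)(1+y),
\nonumber
\end{equation}
so that ${\tilde{U}}_{\sigma}-{\tilde{U}}_{-\sigma}
=\frac{1}{2}\ln\left(\frac{x-1}{x+1}\right)=U_{0}^{\mbox{\tiny(BH)}}$,
in agreement with the static metric (\ref{static_5}).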
We assume that
the parameters $\lambda$, $\eta_1$ and $\eta_2$ satisfy
the following inequalities
\begin{eqnarray}
&& 1 \le \lambda,~~-1\le \eta_1\le 1,
~~~-1\le \eta_2\le 1,~~~0\le \eta_1+\eta_2, \nonumber
\end{eqnarray}
to generate the black ring solution
because the timelike rod appears in the region $-\sigma \le z \le \sigma$
after the solitonic transformation.
Under these assumptions, the region of $x=1$ and $-\eta_1<y<\eta_2$
of the solitonic solution corresponds to the event horizon.
Also the region $x>\lambda$, $y=1$ consists of the fixed points of the
$\phi$ rotation.
The regions $1<x<\lambda$, $y=1$; $x=1$, $\eta_2<y<1$;
$x=1$, $-1<y<-\eta_1$; and $y=-1$ become fixed points of the
$\psi$ rotation in the black ring spacetime.
Substituting the seed function (\ref{eq:seed})
into the differential equations (\ref{eq:ab}),
we obtain the solutions of these equations as,
\begin{equation}
a=\frac{\alpha}{2\sigma^{\frac{1}{2}}}
\frac{e^{2U_\sigma}+e^{2{\tilde{U}}_{{\lambda}\sigma}}}{e^{{\tilde{U}}_{{\lambda}\sigma}}}
\frac{e^{2U_\sigma}+e^{2{\tilde{U}}_{-\eta_1\sigma}}}{e^{{\tilde{U}}_{-\eta_1\sigma}}}
\frac{e^{{\tilde{U}}_{\eta_2\sigma}}}{e^{2U_\sigma}+e^{2{\tilde{U}}_{\eta_2\sigma}}}, \nonumber
\end{equation}
\begin{eqnarray}
&&b={2\sigma^{\frac{1}{2}}}{\beta}
\frac{e^{{\tilde{U}}_{{\lambda}\sigma}}}{e^{2U_{-\sigma}}+e^{2{\tilde{U}}_{{\lambda}\sigma}}}
\frac{e^{{\tilde{U}}_{-\eta_1\sigma}}}{e^{2U_{-\sigma}}+e^{2{\tilde{U}}_{-\eta_1\sigma}}}
\frac{e^{2U_{-\sigma}}+e^{2{\tilde{U}}_{\eta_2\sigma}}}{e^{{\tilde{U}}_{\eta_2\sigma}}},
\nonumber
\end{eqnarray}
where $\alpha$ and $\beta$ are integration constants and
$U_c:=\frac{1}{2}\ln[R_c-(z-c)]$.
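For later reference we also record the analogous factorized forms
\begin{equation}
e^{2U_{\sigma}}=\sigma(x+1)(1-y), \qquad
e^{2U_{-\sigma}}=\sigma(x-1)(1-y),
\nonumber
\end{equation}
which follow in the same way and make the behavior of $a$ and $b$ on the axes $y=\pm1$ transparent.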
Next we derive the explicit expression for $\gamma'$.
We read off the functions $S'$ and $T'$ from Eq. (\ref{static_5}) as
\begin{eqnarray}
S'&=&2\,U^{(BH)}_0+S^{(0)} \nonumber \\
&=& 2({\tilde{U}}_\sigma-{\tilde{U}}_{-\sigma})
+{\tilde{U}}_{{\lambda}\sigma}+{\tilde{U}}_{-\eta_1\sigma}-{\tilde{U}}_{\eta_2\sigma} \nonumber\\
T'&=&T^{(0)}={\tilde{U}}_{{\lambda}\sigma}+{\tilde{U}}_{-\eta_1\sigma}-{\tilde{U}}_{\eta_2\sigma},
\nonumber
\end{eqnarray}
and substitute them into
\begin{equation}
\del{\rho}{\gamma}'=
\frac{1}{4}\rho\left[(\del{\rho}S')^2-(\del{z}S')^2\right]
+\frac{3}{4}\rho\left[(\del{\rho}T')^2-(\del{z}T')^2\right],
\nonumber
\end{equation}
\begin{equation}
\del{z}{\gamma}'=
\frac{1}{2}\rho\left[\del{\rho}S'\del{z}S'\right]
+\frac{3}{2}\rho\left[\del{\rho}T'\del{z}T'\right],
\nonumber
\end{equation}
so that we can confirm that $\gamma'$ decomposes
as
\begin{eqnarray}
{\gamma}' &=& {\gamma}'_{\sigma,\sigma}+{\gamma}'_{-\sigma,-\sigma}
+{\gamma}'_{\lambda\sigma,\lambda\sigma}+{\gamma}'_{-\eta_1\sigma,-\eta_1\sigma}
+{\gamma}'_{\eta_2\sigma,\eta_2\sigma} \nonumber \\
&&
-2{\gamma}'_{\sigma,-\sigma}
+{\gamma}'_{\sigma,\lambda\sigma}+{\gamma}'_{\sigma,-\eta_1\sigma}
-{\gamma}'_{\sigma,\eta_2\sigma}
\nonumber \\ &&
-{\gamma}'_{-\sigma,\lambda\sigma}
-{\gamma}'_{-\sigma,-\eta_1\sigma}
+{\gamma}'_{-\sigma,\eta_2\sigma}
+2{\gamma}'_{{\lambda}\sigma,-\eta_1\sigma}
\nonumber \\ &&
-2{\gamma}'_{{\lambda}\sigma,\eta_2\sigma}
-2{\gamma}'_{-\eta_1\sigma,\eta_2\sigma} ,
\nonumber
\end{eqnarray}
where $\gamma'_{cd}$ satisfies the following equations
\begin{eqnarray}
\del{\rho}{\gamma}'_{cd}
&=&\rho\left[\del{\rho}{\tilde{U}}_{c}\del{\rho}{\tilde{U}}_{d}
-\del{z}{\tilde{U}}_{c}\del{z}{\tilde{U}}_{d}\right], \label{eq:drho_gm}\\
\del{z}{\gamma}'_{cd}
&=&\rho\left[\del{\rho}{\tilde{U}}_{c}\del{z}{\tilde{U}}_{d}
+\del{\rho}{\tilde{U}}_{d}\del{z}{\tilde{U}}_{c}\right]. \label{eq:dz_gm}
\end{eqnarray}
These equations (\ref{eq:drho_gm}) and (\ref{eq:dz_gm}) have the solution,
\begin{equation}
{\gamma}'_{cd}=\frac{1}{2}{\tilde{U}}_{c}+\frac{1}{2}{\tilde{U}}_{d}-\frac{1}{4}\ln Y_{cd},
\nonumber
\end{equation}
where $Y_{cd}:=R_cR_d+(z-c)(z-d)+\rho^2$.
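This solution can be verified by elementary differentiation, which we sketch here: from $R_d^2=\rho^2+(z-d)^2$ one obtains
\begin{equation}
\del{\rho}{\tilde{U}}_{d}=\frac{R_{d}-(z-d)}{2\rho R_{d}}, \qquad
\del{z}{\tilde{U}}_{d}=\frac{1}{2R_{d}},
\nonumber
\end{equation}
and substituting these identities into Eqs. (\ref{eq:drho_gm}) and (\ref{eq:dz_gm}) reduces the check to straightforward algebra.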
Now all the functions needed to express the full metric
have been obtained.
The full metric is expressed as
\begin{eqnarray}
ds^2 &=&-\frac{A}{B}\left[dx^0-\left(2\sigma e^{-S^{(0)}}
\frac{C}{A}+C_1\right) d\phi\right]^2 \nonumber \\ &
+\frac{B}{A}e^{-2S^{(0)}}\rho^2(d\phi)^2
+e^{2S^{(0)}}(d\psi)^2
\nonumber \\ &&
+C_2 \sigma^2 \frac{x^2 - y^2}{x^2 -1}B e^{2({\gamma}'-S^{(0)})}
\left(\frac{dx^2}{x^2-1}+\frac{dy^2}{1-y^2}\right).
\nonumber \\ &&
\label{eq:metric_ER}
\end{eqnarray}
In the following
the constants $C_1$ and $C_2$ are fixed as
\[
C_1=\frac{\,\,2\sigma^{1/2}\,{\alpha}\,\,}{1+{\alpha}\beta},\ \ \
C_2=\frac{1}{\sqrt{2}(1+{\alpha}\beta)^2},
\]
to assure that the spacetime asymptotes to a five-dimensional
Minkowski spacetime globally.
We can confirm this by taking the asymptotic
limit, $x \rightarrow \infty$, of the metric.
The above solution is an extension of the $S^1$-rotating black ring solution
because it also contains singular cases.
In general the spacetime described by the metric (\ref{eq:metric_ER})
includes some harmful regions, for example, the region
where closed timelike curves exist.
In fact, the metric component
$g_{\phi\phi}$ becomes negative around $(x,y)=(1,1)$ and $(1,-1)$.
These singular behaviors are cured by setting the parameters $\alpha$ and
$\beta$ as
\begin{eqnarray}
\alpha = \sqrt{\frac{2(1-\eta_2)}{(\lambda -1)(1+\eta_1)}}, \hspace{0.2cm}
\beta = \sqrt{\frac{(\lambda +1)(1-\eta_1)}{2(1+\eta_2)}} .
\label{eq:beta}
\end{eqnarray}
The asymptotic form of ${{\mathcal E}}_{S}$ near infinity, $\tilde{r}\rightarrow\infty$, becomes
\begin{eqnarray}
{{\mathcal E}}_{S}&=&\tilde{r}\cos\theta\,
\left[\,1\,-\,\frac{\sigma}{\tilde{r}^2}\,\frac{P({\alpha},\beta,{\lambda})}
{(1+\alpha\beta)^2}
\,+\cdots\right] \nonumber \\ && \hskip -0cm
+2\,i\,\sigma^{1/2}\,\left[\,\frac{\alpha}{1+\alpha\beta}
\,-\,\frac{2\sigma\cos^2\theta}{\tilde{r}^2}\,\frac{Q({\alpha},\beta,{\lambda})}
{(1+\alpha\beta)^3}
\,\,+\cdots\,\right],
\nonumber
\end{eqnarray}
where we introduced the new coordinates $\tilde{r}$ and $\theta$ through
the relations
\begin{equation}
x=\frac{\tilde{r}^2}{2\sigma}+\lambda-\eta_1-\eta_2, \ y=\cos 2\theta,
\nonumber
\end{equation}
and
\begin{eqnarray}
P({\alpha},\beta,{\lambda})
&=& 4(1 + \alpha^2 - \alpha^2 \beta^2) \nonumber \\
Q({\alpha},\beta,{\lambda})
&=& \alpha(2\alpha^2-\eta_1-\eta_2+{\lambda}+3)-2\alpha^2\beta^3 \nonumber\\
&&
-\beta\left[2(2\alpha\beta+1)(\alpha^2+1) \right. \nonumber\\
&& \left.
+(\eta_1+\eta_2-{\lambda}-1){\alpha}^2({\alpha}\beta+2)\right]. \nonumber
\end{eqnarray}
From the asymptotic behavior of the Ernst potential,
we can compute the mass
parameter $m^2$ and rotational parameter $m^2a_0$ as
\begin{eqnarray}
&& m^2 = \sigma \frac{P({\alpha},\beta,{\lambda})}{(1+\alpha\beta)^2}, \hspace{0.2cm}
m^2a_0 = 4\sigma^{3/2}\frac{Q({\alpha},\beta,{\lambda})}{(1+\alpha\beta)^3}. \nonumber
\end{eqnarray}
For the black ring solution we obtain
\begin{eqnarray}
m^2 &=&
\frac{8 \sigma (\eta_1 + \eta_2)(\lambda-\eta_2)}
{(\lambda-1)(1+\eta_1)(1+\eta_2)(1+\alpha\beta)^2}, \label{eq:m2_S1}\\
m^2a_0 &=&
m^2\frac{\sqrt{\sigma}(\alpha(1+\lambda\eta_1)-2\beta\eta_2)}{1+\alpha\beta},
\label{eq:m2a0_S1}
\end{eqnarray}
where we used Eq.
(\ref{eq:beta}).
From Eqs. (\ref{eq:m2_S1}) and (\ref{eq:m2a0_S1})
we obtain the useful relation
\begin{equation}
\frac{a_0^2}{m^2} = \frac{\left((1+\lambda\eta_1)\sqrt{1-\eta_2^2}-
\eta_2\sqrt{(\lambda^2-1)(1-\eta_1^2)}\right)^2}
{4(\lambda-\eta_2)(\eta_1+\eta_2)}.
\label{eq:a0m}
\end{equation}
When $\eta_1=\eta_2=1$
the black ring becomes static because $\alpha=\beta=0$ from
Eq.
(\ref{eq:beta}).
The one-rotational black hole limit \cite{Myers:1986un}
of the black ring
is realized when we set
\begin{eqnarray}
\lambda=1+\epsilon, ~~~ \eta_2=1-k\epsilon,
\nonumber
\end{eqnarray}
where $k>0$ is a constant,
and then take the limit $\epsilon \rightarrow 0$.
The periods of $\phi$ and $\psi$ are defined as
\begin{eqnarray}
&& \Delta \phi = 2 \pi \lim_{\rho \rightarrow 0} \sqrt{\frac{\rho^2 g_{\rho\rho}}{g_{\phi\phi}}}
~~~\mbox{and}~~~
\Delta \psi = 2 \pi \lim_{\rho \rightarrow 0} \sqrt{\frac{\rho^2 g_{\rho\rho}}{g_{\psi\psi}}} \nonumber
\end{eqnarray}
to avoid a conical singularity.
We see that the period of $\phi$ is
$\Delta \phi = 2 \pi$
and the period of $\psi$ is
$\Delta \psi = 2 \pi$
outside the ring
and
\begin{equation}
\Delta \psi = 2\pi \frac{
\left(\frac{{\lambda} - \eta_2}{{\lambda} + \eta_1}\right)
\left(1 + \sqrt{
\frac{({\lambda}-1)(1-\eta_1)(1-\eta_2)}
{({\lambda}+1)(1+\eta_1)(1+\eta_2)}}\right)}
{\sqrt{\frac{{\lambda} - 1}{{\lambda} +1}}+\sqrt{
\frac{(1-\eta_1)(1-\eta_2)}
{(1+\eta_1)(1+\eta_2)}}}, \nonumber
\end{equation}
inside the ring. In general there is a conical singularity inside or
outside the ring. This conical singularity is cured by setting the
parameters $\lambda$, $\eta_1$ and $\eta_2$ to satisfy the relation
\begin{eqnarray}
&& \left(\frac{{\lambda} - \eta_2}{{\lambda} + \eta_1}\right)
\left(1 + \sqrt{
\frac{({\lambda}-1)(1-\eta_1)(1-\eta_2)}
{({\lambda}+1)(1+\eta_1)(1+\eta_2)}}\right)
\nonumber \\
&& =\sqrt{\frac{{\lambda} - 1}{{\lambda} +1}}+\sqrt{
\frac{(1-\eta_1)(1-\eta_2)}
{(1+\eta_1)(1+\eta_2)}}.
\label{eq:balance}
\end{eqnarray}
Finally we
consider the coordinate transformation between
the prolate-spheroidal coordinates used here and
the canonical coordinates analyzed by Harmark \cite{refHAR}.
See \cite{refHAR} for the notation and the exact expression of
the metric in the canonical coordinates.
We however use $\tilde{\rho}$ and $\tilde{z}$ for $\rho$ and $z$ of \cite{refHAR}.
Comparing the functional forms of the $\psi$-$\psi$ components, we find that
the two coordinate systems are related by
\begin{eqnarray}
\tilde{\rho}&=&\rho, \nonumber \\
\tilde{z} &=& z +\frac{\eta_1-\eta_2}{2}\sigma
=\sigma \left(x y+\frac{\eta_1-\eta_2}{2}\right). \nonumber
\end{eqnarray}
In addition,
the parameters should satisfy the following relations,
\begin{eqnarray}
\kappa^2 &=& \sigma\left(\lambda + \frac{\eta_1-\eta_2}{2}\right),
\label{eq:k2}\\
c &=& \frac{\eta_1+\eta_2}{2\lambda+\eta_1-\eta_2}, \label{eq:2}\\
b &=& \frac{(\lambda+1+(\lambda-1)\alpha\beta)^2-(\lambda^2-1)(1+\alpha\beta)^2}
{(\lambda+1+(\lambda-1)\alpha\beta)^2+(\lambda^2-1)(1+\alpha\beta)^2},
\label{eq:b}
\end{eqnarray}
to assure the equivalence of these two expressions.
Here the parameters $\alpha$ and $\beta$ satisfy
the conditions
(\ref{eq:beta}).
Also we have to rescale the $\rho$-$z$ part of the metric as
\begin{equation}
e^{2\nu}= \frac{1-b}{(1-c)^2} e^{2(\gamma + U_1)}. \nonumber
\end{equation}
Note that $b \ge c$ when $\lambda \ge 1$, $\eta_1 \le 1$ and
$\eta_2 \le 1$. From the static black ring condition $b=c$ \cite{refHAR},
we can derive the following relation
\begin{equation}
\frac{1-\eta_2}{1+\eta_2}=\left(\frac{1-\eta_1}{1+\eta_1}\right)
\left(\frac{\lambda-1}{\lambda+1}\right),
\nonumber
\end{equation}
which holds when $\eta_1=\eta_2=1$. Indeed the black ring is static
in this case because $\alpha=\beta=0$ from Eq.
(\ref{eq:beta}).
The black ring solution
becomes the one-rotational black hole when we take the
limits $b,c \rightarrow 1$ \cite{refHAR}.
These limits indeed correspond to the limits
$\lambda, \eta_2 \rightarrow 1$.
There are six parameters $(\lambda,\eta_1,\eta_2,\sigma,\alpha,\beta)$
in the metric (\ref{eq:metric_ER}) with two relations
(\ref{eq:beta}), while there are three parameters $(b,c,\kappa)$
in the canonical coordinates.
The parameter $\sigma$
appears only in the relation of $\kappa^2$, Eq. (\ref{eq:k2}),
and contributes to the scaling of the coordinates.
Thus we can freely fix one of the parameters $(\lambda,\eta_1,\eta_2)$.
Here we set $\eta_1=1$ because the relations obtained above
then become simple. In this case
we can invert Eqs. (\ref{eq:k2})--(\ref{eq:b}).
The results are
\begin{eqnarray}
&& \eta_1 = 1, ~
\eta_2 = \frac{2c+cb-b}{(1+c)b} ,~
\lambda = \frac{1}{b},~
\sigma = \frac{1+c}{1+b} b\kappa^2. \nonumber
\end{eqnarray}
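Two of these relations are easily verified (we include the check for convenience; inverting Eq. (\ref{eq:b}) is lengthier since it involves $\alpha\beta$ explicitly): substituting $\eta_1=1$ and $\lambda=1/b$ into Eq. (\ref{eq:2}) and solving for $\eta_2$ reproduces the expression above, and Eq. (\ref{eq:k2}) then gives
\begin{equation}
\lambda+\frac{1-\eta_2}{2}=\frac{1+b}{(1+c)b},
\nonumber
\end{equation}
which is equivalent to $\sigma=\frac{1+c}{1+b}b\kappa^2$.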
When the black ring does not have a conical singularity,
these relations become
\begin{eqnarray}
&& \eta_1 = 1 ,~
\eta_2 = c,~
\lambda
= \frac{1+c^2}{2c} ,~
\sigma = \frac{2c}{1+c} \kappa^2, \nonumber
\end{eqnarray}
and the parameter $\eta_2$ lies in the range $0<\eta_2\le1$.
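The value of $\lambda$ quoted here follows from a short computation which we sketch for completeness: for $\eta_1=1$ all square roots in Eq. (\ref{eq:balance}) containing the factor $(1-\eta_1)$ vanish, and the balance condition reduces to
\begin{equation}
\frac{{\lambda}-\eta_2}{{\lambda}+1}
=\sqrt{\frac{{\lambda}-1}{{\lambda}+1}},
\nonumber
\end{equation}
i.e. $({\lambda}-\eta_2)^2={\lambda}^2-1$, whose solution is ${\lambda}=(1+\eta_2^2)/(2\eta_2)$.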
Next we consider the relations between physical variables.
The ADM mass and angular momentum of the black ring were derived by
Emparan and Reall \cite{ref1} as
\begin{equation}
M = \frac{3\pi\kappa^2 b}{2(1-c)},~~~
J_1=\sqrt{2}\pi\kappa^3\frac{\sqrt{b(b-c)(1+b)}}{(1-c)^2}. \nonumber
\end{equation}
These two variables are related to the mass and rotational parameters as
\begin{equation}
m^2 =\frac{8(1-c)^2}{3\pi(1-b)}M, ~~~
m^2a_0=\frac{4}{\pi}\frac{(1-c)^3}{(1-b)^{\frac{3}{2}}}J_1.
\nonumber
\end{equation}
Using the balanced black ring condition (\ref{eq:balance}) with $\eta_1=1$,
the right-hand side of Eq. (\ref{eq:a0m})
reduces to the following form,
\begin{equation}
\frac{a_0^2}{m^2}=\frac{(1+\eta_2)^3}{8\eta_2}
=\frac{27\pi}{32}\frac{J_1^2}{M^3}.
\nonumber
\end{equation}
This agrees with the previous results \cite{ref1}.
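In fact the relation $a_0^2/m^2=\frac{27\pi}{32}J_1^2/M^3$ can be verified directly (a quick consistency check we add here) and holds independently of the balance condition: writing $a_0^2/m^2=(m^2a_0)^2/(m^2)^3$ and inserting the relations above gives
\begin{equation}
\frac{a_0^2}{m^2}
=\frac{\left(\frac{4}{\pi}\right)^2(1-c)^6(1-b)^{-3}J_1^2}
{\left(\frac{8}{3\pi}\right)^3(1-c)^6(1-b)^{-3}M^3}
=\frac{27\pi}{32}\frac{J_1^2}{M^3}.
\nonumber
\end{equation}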
In this paper we rederived the $S^1$-rotating black ring solution
by the solitonic solution-generating technique.
Using the rod structure analysis
we found the seed solution of the black ring by analogy
with the relation between the $S^2$-rotating black ring and its seed
solution. The relations between the seed and the obtained solitonic solutions
can be easily understood through the analysis of their rod structures.
Thus the rod structure analysis is expected to be a useful guide
to construct seed solutions for new solutions.
In addition
we obtained the relations between the prolate-spheroidal coordinates
and the canonical coordinates.
This means that, using
the coordinate transformation between the
canonical and C-metric coordinates obtained by Harmark \cite{refHAR},
the prolate-spheroidal coordinates can also
be transformed into the C-metric coordinates.
As with the $S^2$-rotating black ring \cite{Tomizawa:2005wv},
the $S^1$-rotating black ring solution
should also be obtainable from the seed solution found here
by the inverse scattering method.
Also it might be expected that the two-rotational black ring can be obtained
from this seed solution, possibly with some deformations,
by the same method.
This work is partially supported by Grant-in-Aid for Young Scientists (B)
(No. 17740152) from Japanese Ministry of Education, Science,
Sports, and Culture
and by Nihon University Individual Research Grant for
2005.
\section{Introduction}
\setcounter{equation}{0}
Recently, Calabi-Yau singularities have played a prominent role in ``bottom-up'' approaches connecting string theory to particle physics. The basic idea behind these constructions is to first locate a set of D-branes giving rise to the desired particle physics model at the singularity and later on perform the embedding into a compact Calabi-Yau threefold (CY$_3$). In this paper we study the simplest type of CY$_3$ singularities, the conifold, and determine the non-perturbative quantum corrections to the effective action for the (bulk) modulus which controls the size of the vanishing cycle. We expect that these type of corrections will become relevant when embedding the singularity into a full-fledged CY$_3$ compactification.
Geometrically, a conifold point is a point in the moduli space where the CY$_3$ becomes singular by developing a set of conical singularities (nodes) with base $S^2 \times S^3$. Locally these nodes can be resolved either by a deformation, expanding the node into an $S^3$, or by a small resolution, expanding the node into an $S^2$. The process of shrinking a set of two-cycles $S^2$ to points and subsequently re-expanding the singularities into $S^3$'s (or vice versa) is called a conifold transition and connects moduli spaces of CY$_3$ with different Hodge numbers \cite{Candelas:1989ug,Aspinwall:1993nu}.
When approaching a conifold point by degenerating a complex structure ($S^3 \rightarrow 0$) and a K\"ahler structure ($S^2 \rightarrow 0$) in type IIB and type IIA string compactifications, respectively, the vector multiplet moduli space of the low energy effective action (LEEA) develops a logarithmic singularity.
These singularities can be attributed to illegally integrating out D3 (D2) branes wrapping the vanishing cycles, which, at the conifold point, give rise to extra massless states \cite{SBH}. On the other hand we can also approach the conifold point by degenerating a K\"ahler structure on the type IIB or a complex structure on the type IIA side. This leads to a logarithmic singularity in the hypermultiplet sector of the LEEA. In this case, however, the theory has no BPS states which could wrap the vanishing cycles and could have an interpretation in terms of four-dimensional particles. One expects, however, that non-perturbative string effects originating from instantons \cite{Becker:1995kb} become important in this regime, since the real part of their instanton action is proportional to the volume of the shrinking cycle, so that they are no longer suppressed in the limit where the cycle shrinks to zero. See fig. 1 for a schematic illustration. Indeed, as was shown in \cite{OV} in the context of IIA string theory, spacetime instanton effects survive in the effective field theory even in the rigid limit where gravity decouples, $M_{\rm Pl} \rightarrow \infty$.
More recently, new exact results were obtained in \cite{Robles-Llana:2006is} for IIB strings, in which the contributions coming from worldsheet instantons, D1-instantons and D(-1)-instantons to the effective action were determined. Since these results are obtained at a generic point in the moduli space, we can study the behavior near the conifold point, where a two-cycle shrinks to zero size and
gravity is decoupled.
In this limit only worldsheet and D1-instanton corrections survive, and we obtain the resulting IIB hypermultiplet moduli space metric in the neighborhood of the conifold.
Our analysis allows us to perform a non-perturbative test of mirror symmetry, which states that the hypermultiplet moduli spaces in type IIA and type IIB on the mirror, after including all quantum corrections, must be the same. After resumming the instanton series on the IIB side, we determine the mirror map and show that the resulting hyperk\"ahler geometry is exactly the one obtained in \cite{OV}. This provides a nice demonstration of open string mirror symmetry on the hypermultiplet moduli space.
\begin{figure*}[t]
\setlength{\unitlength}{1cm}
\epsfxsize=1\textwidth
\begin{center}
\leavevmode
\epsffile{Conifold.eps}\,
\end{center}
\parbox[c]{\textwidth}{\caption{\label{fig}{\footnotesize Illustration of
the CY$_3$ moduli space close to a conifold point. For more information on conifold singularities we refer to \cite{SBH,Greene:1996dh}.
}}}
\end{figure*}
\section{IIA: Summing up membrane instantons}
\setcounter{equation}{0}
In this section, we review the results of \cite{OV} who studied the
geometry of the type IIA hypermultiplet (HM) moduli space near a conifold
singularity associated with a vanishing three-cycle ${\cal C}_3$,
\begin{equation}
\mbox{IIA HM conifold limit:}\quad z = \int_{{\cal C}_3} \Omega
\rightarrow 0\ .
\end{equation}
To decouple gravity, we consider the combined limit
\begin{equation}\label{CFlimitIIA}
z\rightarrow 0 \, , \quad \lambda \rightarrow 0 \, , \quad
{\rm with} \; \; \; \frac{|z|}{\lambda} = {\rm finite} \, ,
\end{equation}
where $\lambda$ is the string coupling constant.
In this limit, the moduli space becomes a four-dimensional hyperk\"ahler
space which at the classical level, develops a singularity at $z=0$.
The metric can be written as
\begin{equation}\label{HK-metric}
{\rm d}s^2= \lambda^2 [V^{-1}({\rm d}t-\vec{A}\cdot {\rm d}{\vec y})^2
+V |{\rm d}{\vec y}|^2]\ .
\end{equation}
Here, $\vec {y}=(u,z/\lambda,\bar z/\lambda)$ with $\lambda$ kept fixed and $u$ and $t$ are the RR scalars originating from the expansion of the RR 3-form with respect to the harmonic three-forms associated with the vanishing cycle $\mathcal{C}_3$ and its dual. The metric \eqref{HK-metric} is hyperk\"ahler if
\begin{equation}
V^{-1}\Delta V =0 \ ,\qquad \vec{\nabla} V = \vec{\nabla} \times
\vec{A}\ ,
\end{equation}
where
\begin{equation}
\Delta = \partial_{u}^2+4\lambda^2\partial_z\partial_{\bar z}\ .
\end{equation}
Classically, at string tree-level and for large $|z|$, the metric is determined by
\begin{equation}\label{cfsol}
V = \frac{1}{4\pi}\ln \Big(\frac{1}{z\bar{z}}\Big)\ , \qquad
A_u = \frac{\mathrm{i}}{4 \pi} \ln\left( \frac{z}{\bar{z}} \right) \, , \; A_z = 0 \, , \; A_{\bar{z}} = 0 \, ,
\end{equation}
and has a logarithmic singularity. Using T-duality, which exchanges vector
multiplets and hypermultiplets, this singularity has a counterpart
in the vector multiplet moduli space of the IIB theory compactified
on the {\it same} Calabi-Yau threefold, where it corresponds to the
appearance of massless black holes \cite{SBH}. In fact, the hyperk\"ahler
metric \eqref{HK-metric} is related to the vector multiplet moduli space
metric by the rigid c-map \cite{CFG,rcmap}. We demonstrate this in Appendix A. This
will be important for us, since we use a similar mechanism for the mirror
theory in the next section.
In \cite{OV} Ooguri and Vafa studied the resolution of the singularity
based on D2-brane instanton contributions. Thereby they focused on the situation where the period with respect to $\mathcal{C}_3$ (A-cycle, say) vanishes while the dual period (from the B-cycle) remains finite. In this case membrane instantons wrapping the vanishing cycle
generate exponential corrections to the hypermultiplet moduli space of the
form $\exp(-|z|/\lambda)$ with $\theta$-angle $\exp(2 \pi \mathrm{i} u)$, breaking the shift symmetry in $u$ to a discrete subgroup. Membrane instantons wrapping the dual B-cycle decouple in the rigid limit \eqref{CFlimitIIA}, so that the shift-symmetry in $t$ is unbroken. The instanton corrected $V$ was then found to be \cite{OV}
\begin{equation}\label{V-sum}
V = \frac{1}{4 \pi} \ln\left( \frac{\mu^2}{z \bar{z}} \right)
+ \frac{1}{2 \pi} \sum_{m \not = 0} K_0\left(2 \pi \frac{|m z|}{\lambda}
\right)
\; {\rm e}^{ 2 \pi \mathrm{i} m u }\ ,
\end{equation}
for some constant $\mu$.
This instanton sum contains the zero-th order modified Bessel function,
accompanied
by theta-angle-like terms set by the RR scalar $u$. The Bessel function
can further be expanded for large argument, yielding exponentially suppressed
terms of the form $\exp[-2\pi(|mz|/\lambda -imu )]$ together with an infinite
power series in $\lambda$ that describe the perturbative fluctuations
around the instantons.
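For reference, the standard asymptotic expansion being used here is
\begin{equation}
K_0(x) = \sqrt{\frac{\pi}{2x}}\, {\rm e}^{-x}\left(1-\frac{1}{8x}+\mathcal{O}(x^{-2})\right) \, , \qquad x \gg 1 \, ,
\end{equation}
applied with $x = 2\pi|mz|/\lambda$.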
To exhibit the resolution of the singularity, one can perform a
Poisson resummation,
\begin{equation}
V=\frac{1}{4\pi} \sum_{n=-\infty}^{\infty}\left(\frac{1}{{\sqrt{(u-n)^2
+z\bar{z}/\lambda^2}}}-\frac{1}{|n|}\right)+\rm{const}\ ,
\end{equation}
which leads to a regular metric at $z=0$.
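To see this explicitly (a standard observation which we recall here), note that near $\vec{y}=0$ the $n=0$ term dominates and
\begin{equation}
V \simeq \frac{1}{4\pi |\vec{y}|} \, ,
\end{equation}
which is the potential of a single Gibbons-Hawking center; with the appropriate periodicity of $t$, the corresponding metric \eqref{HK-metric} is then smooth at this point, just as for Taub-NUT space.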
In the case of $N$ three-cycles shrinking to zero size, it was argued in
\cite{OV,Greene:1996dh} that this leads to hyperk\"ahler metrics with $\mathbb{C}^2/\mathbb{Z}_N$ singularities. This metric is again of the form \eqref{HK-metric}, with
$V\rightarrow NV$. In the next sections,
we will reproduce all these results from type IIB strings compactified on
the mirror Calabi-Yau, in which $N$ now counts the number of vanishing
two-cycles. Thereby, we perform a non-perturbative test of mirror symmetry.
\section{Conifold singularities in type IIA vector multiplets}
\setcounter{equation}{0}
To understand the origin of the conifold singularity in the IIB hypermultiplet
moduli space, it is insightful to first study its counterpart on the type
IIA vector multiplet side. The two sectors are related by T-duality and, at string tree-level, the IIB hypermultiplet moduli space is obtained from the IIA
vector multiplet moduli space by the c-map \cite{CFG,Ferrara:1989ik}.
At a generic point in the moduli
space, the special geometry is determined by a holomorphic prepotential $F(X)$ homogeneous of degree two, which receives perturbative $\alpha^\prime$ corrections from the worldsheet conformal field theory and worldsheet instantons
\begin{equation}
F(X) = F_{\rm cl}(X) + F_{\rm pt}(X) + F_{\rm ws}(X)\ .
\end{equation}
Here
\begin{equation}\label{VMPP}
\begin{split}
F_{\rm cl}(X) = & \, \frac{1}{3!} \, \kappa_{abc} \, \frac{X^a X^b X^c}{X^1} \, , \qquad
F_{\rm pt}(X) = \, \mathrm{i} \, \frac{ \zeta(3)}{2 (2 \pi)^3} \, \chi_E \, (X^1)^2 \, , \\
F_{\rm ws}(X) = & \, - \mathrm{i} \,\frac{1}{(2 \pi)^3} \, (X^1)^2 \, \sum_{k_a} \, n_{k_a} \, {\rm Li}_3\left( {\rm e}^{2 \pi \mathrm{i} k_a X^a / X^1} \right) \, ,
\end{split}
\end{equation}
with $\kappa_{abc}, \chi_E$ and $n_{k_a}$ the triple intersection numbers,
Euler number and instanton numbers of the CY$_3$ \cite{Candelas:1990rm} respectively
(see, e.g., \cite{Hori} for additional background).
Microscopically the scalar fields in the vector multiplet sector arise from expanding the K\"ahler form $J$ and the ten-dimensional NS two-form $\hat B$ in terms of harmonic two-forms $\omega_a$ of the CY$_3$ \cite{Bodner:1990zm},
\begin{equation}
\hat B = B_2 + b^a \, \omega_a \; , \quad J = t^a \, \omega_a \; , \quad a = 2, \ldots , h^{(1,1)} + 1 \, .
\end{equation}
These fields are combined into complexified K\"ahler moduli
\begin{equation}
z^a = b^a + \mathrm{i} t^a \, = \frac{X^a}{X^1} \, .
\end{equation}
It is useful to factor out $X^1$ and work with the holomorphic function $f(z)$
determined by
\begin{equation}
F(X) = (X^1)^2 f(z)\ ,
\end{equation}
which is also computed by the genus zero topological string amplitude.
We are now interested in the conifold limit of the prepotential given above. In the CY$_3$ geometry the conical singularity is obtained by shrinking the size of a holomorphic two-cycle $\mathcal{C}_\star$ to zero:
\begin{equation}\label{gcs}
\mbox{geometrical conifold singularity:} \quad t^\star \rightarrow 0 \, .
\end{equation}
We note, however, that the condition \eqref{gcs} is not sufficient for causing a singularity in the vector multiplet moduli space. Here the singularity arises if the {\it complexified} K\"ahler modulus is taken to zero:
\begin{equation}\label{mscs}
\mbox{moduli space conifold singularity:} \quad z^\star \rightarrow 0 \, .
\end{equation}
This implies that we can avoid hitting the singularity by giving a non-vanishing real part $b^\star$ to the complexified K\"ahler modulus. Thus in the moduli space the conifold singularity is a line of complex codimension one\footnote{This is different from the five-dimensional case \cite{FrankPhD} where such singularities are of real codimension one, so that a generic trajectory moving on the moduli space will not be able to avoid the singularity.}.
We can now take the conifold limit \eqref{mscs} for the prepotentials \eqref{VMPP}. Henceforth we consider the case of a conical singularity where one particular complexified K\"ahler modulus $z^\star = k_a z^a$ (for one particular and fixed vector $k_a$) shrinks to zero\footnote{We will drop the ''$\star$'' in the following.}, while the others are frozen to constant values. By inspection one then finds that the second derivatives of $f$ (determining the metric) arising from $f_{\rm cl}(z)$ and $f_{\rm pt}(z)$ are regular in this limit. Applying the expansion formula \eqref{Li3exp} to the worldsheet instanton contribution, one obtains (we
denote $N=n_{k_a}$ for the fixed vector $k_a$)
\begin{equation}\label{fcfl}
f_{\rm ws}(z) = \frac{N}{4 \pi \mathrm{i}} \, z^2 \, \ln(z) + \ldots \, ,
\end{equation}
where the dots give rise to regular contributions in the Lagrangian. Computing
\begin{equation}
\partial_z f_{\rm ws} = \frac{N}{2 \pi \mathrm{i}} \, z \, \ln(z) + \ldots \, ,
\end{equation}
one finds that this is in precise agreement with the singular behavior
found in the IIB vector multiplet sector when
going to the conifold point by shrinking $N$
Lagrangian three-cycles \cite{SBH}. In this case, however, $z$ is interpreted as a complex structure modulus arising from the periods of the holomorphic three-form of the CY$_3$.
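For completeness we record how the singular term in \eqref{fcfl} arises (an intermediate step added here): setting $x=-2\pi\mathrm{i} z$ in the expansion \eqref{Li3exp} gives
\begin{equation}
{\rm Li}_3\left({\rm e}^{2\pi\mathrm{i} z}\right) = 2\pi^2 \, z^2 \, \ln(z) + \ldots \, ,
\end{equation}
up to terms analytic at $z=0$, and multiplying by the prefactor $-\mathrm{i} N/(2\pi)^3$ from \eqref{VMPP} indeed yields $f_{\rm ws}(z)=\frac{N}{4\pi\mathrm{i}} \, z^2\ln(z)+\ldots$.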
The qualitative results of this section have already been discussed by Strominger \cite{SBH}, where it was argued that the conifold singularities in the type IIA vector multiplet sector originate from a strong coupling effect involving worldsheet instantons.
\section{Conifold singularities in IIB hypermultiplets}
\setcounter{equation}{0}
\label{sect:5}
In this section, we derive the conifold singularities that arise in the
hypermultiplet moduli space of type IIB compactifications. As in
section 2, this singularity can be obtained from the rigid c-map on
the vector multiplet sector of the IIA theory. Here, we will rederive it
in a different way, starting from a generic point in the (tree-level)
hypermultiplet moduli space, and then taking the conifold limit in which
gravity decouples.
Our description of the hypermultiplet moduli space geometry uses the
conformal tensor
calculus combined with methods used in projective superspace.
In this way, $4n$-dimensional quaternion-K\"ahler geometry can be reformulated
in terms of $4(n+1)$-dimensional hyperk\"ahler geometry. For some
background material we refer to \cite{deWit:1999fp,deWit:2001dj,
Bergshoeff:2004nf,Rocek:2005ij,RSV,deWit:2006gn}.
The tree-level hypermultiplet
moduli space can be conveniently written down in projective
superspace \cite{Gates:1984nk},
in terms of a contour integral representation \cite{Hitchin:1986ea}
of the superspace Lagrangian density
\begin{equation}
\mathcal{L}(v, \bar{v}, x) = {\rm Im} \oint \, \frac{{\rm d} \zeta}{2 \pi \mathrm{i} \zeta} \,
H(\eta^I(\zeta))\ ,
\end{equation}
in terms of $h_{1,2}+2$ $N=2$ tensor multiplets
\begin{equation}\label{TM}
\eta^I(\zeta)=\frac{v^I}{\zeta}+x^I-{\bar v}^I\zeta\ ,
\end{equation}
consisting of $N=1$ real linear multiplets $x^I$ and $N=1$ chiral multiplets
$v^I$. The Lagrangian density satisfies
\begin{equation}
{\cal L}_{x^Ix^J}+{\cal L}_{v^I{\bar v}^J}=0\ ,
\end{equation}
and expresses the fact that the dual hypermultiplets parameterize a
hyperk\"ahler manifold with $h_{1,2}+2$ commuting shift symmetries
\cite{Hitchin:1986ea}. Tensor multiplets can be used because the
hypermultiplet geometries
we need to consider have enough commuting isometries\footnote{This is correct
in the absence of three-brane and five-brane instantons, which are not relevant
for the purpose of this paper.}. The scalars of the tensor multiplets
transform as a triplet under $SU(2)$ R-symmetry
\begin{equation}\label{SU2vec}
\vr{I} = \left[2 \, v^I, \, 2 \, \bar{v}^I, \, x^I \right] \, , \quad \vr{I} \cdot \vr{J} = 2 v^I \bar{v}^J + 2 v^J \bar{v}^I + x^I x^J \, .
\end{equation}
For a given prepotential $F(X)$ encoding the vector multiplet couplings,
the dual tensor multiplet Lagrangian after the (local) c-map can be
obtained by evaluating the following contour integral \cite{Rocek:2005ij}
\begin{equation}\label{cint}
\mathcal{L}(v, \bar{v}, x) = {\rm Im} \oint \, \frac{{\rm d} \zeta}{2 \pi \mathrm{i} \zeta} \, \frac{ F(\eta^\Lambda)}{\eta^0} \, .
\end{equation}
Here $\eta^I = \{ \eta^0 , \eta^\Lambda \}$ with $\eta^0$ being the conformal compensator. The contour integral is taken around one of the roots $\zeta_+$ of $\zeta\eta^0$ and can be evaluated in a gauge invariant way \cite{Neitzke:2007ke}~\footnote{In \cite{sergei} a slightly more complicated formula for $\mathcal{L}$ has been given, taking into account the logarithmic singularity at $\zeta = 0$. The two expressions, however, only differ by terms linear in $x^I$ and therefore lead to the same Lagrangian.}
\begin{equation}\label{lagr-cmap}
\mathcal{L}(v, \bar{v}, x) = - \frac{\mathrm{i}}{2 r^0} \left( F(\eta_+^\Lambda) - \bar{F}(\eta_-^\Lambda) \right) \\
= - \frac{\mathrm{i}}{2 r^0} \left( (\eta_+^1)^2 f(z) - (\eta_-^1)^2 \bar{f}(\bar{z}) \right) \,,
\end{equation}
with
\begin{equation}
\eta_+^\Lambda = \eta^\Lambda(\zeta_+) = x^\Lambda - \frac{x^0}{2} \left( \frac{v^\Lambda}{v^0} + \frac{\bar{v}^\Lambda}{\bar{v}^0} \right) - \frac{r^0}{2} \left( \frac{v^\Lambda}{v^0} - \frac{\bar{v}^\Lambda}{\bar{v}^0}\right) \, ,
\end{equation}
$\eta_-=(\eta_+)^*$ and $z^a = \eta_+^a / \eta_+^1$.
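Here $\zeta_\pm$ denote the two roots of $\zeta\eta^0(\zeta)=-\bar{v}^0\zeta^2+x^0\zeta+v^0=0$, which we spell out since they enter repeatedly,
\begin{equation}
\zeta_\pm = \frac{x^0 \mp r^0}{2\bar{v}^0} \, , \qquad r^0=\sqrt{(x^0)^2+4v^0\bar{v}^0}=|\vr{0}| \, ,
\end{equation}
where the sign convention is fixed by matching the expression for $\eta_+^\Lambda$ above, and $r^0$ is the $SU(2)$-invariant norm of the compensator triplet \eqref{SU2vec}.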
{}From the superspace Lagrangian density
${\cal L}$, one can compute a tensor potential
\cite{deWit:2006gn}
\begin{equation}
\chi (v,\bar{v},x)=-{\cal L}(v,\bar{v},x) + x^I {\cal L}_{x^I} ,
\end{equation}
where ${\cal L}_{x^I}$ denotes the derivative with respect to $x^I$. Dualizing
the tensors to scalars, this potential becomes the hyperk\"ahler potential
of the corresponding hyperk\"ahler cone above the quaternion-K\"ahler
manifold \cite{deWit:2001dj}. Therefore, this function
determines the entire low-energy effective action.
Using the homogeneity properties of ${\cal L}$, one can derive the
identity
\begin{equation}
\frac{1}{2} \left( \chi_{x^Ix^J} + \chi_{v^I \bar{v}^J} \right) =
{\cal L}_{x^Ix^J}\ .
\end{equation}
The components ${\cal L}_{x^Ix^J}$ then appear in the kinetic terms of the
scalars $x^I$ and $v^I$ in the effective Lagrangian.
Close to the conifold locus $f(z)$ is given by \eqref{fcfl}.
Substituting into \eqref{lagr-cmap} yields
\begin{equation}\label{Lcfl}
\mathcal{L}^{\rm cf}(v, \bar{v}, x)
= - \frac{N}{8 \pi r^0} \left( (\eta_+^1)^2 \, z^2 \, \ln(z) + (\eta_-^1)^2 \, \bar{z}^2 \, \ln(\bar{z}) \right) \,.
\end{equation}
Here, only one tensor multiplet
($v,\bar{v},x$ with $x=k_ax^a$ etc.) captures the degrees of
freedom, and all others are frozen to constants.
As one can explicitly check, this function satisfies
\begin{equation}\label{dgl1}
\left( \partial_v \partial_{\bar{v}} + \partial_x^2 \right) \mathcal{L}^{\rm cf}(v, \bar{v}, x) = 0 \, .
\end{equation}
This is precisely the constraint coming from rigid $N=2$ supersymmetry and
expresses the fact that the geometry is four-dimensional hyperk\"ahler.
This is consistent
with the fact that in this limit, gravity is decoupled, and the target
space of hypermultiplets becomes hyperk\"ahler\footnote{The rigid limit in
special K\"ahler geometry was studied in detail in
\cite{Billo:1998yr}. It would be desirable to have a similar study for
hypermultiplets.}.
The function ${\cal L}$ itself is not yet the object to be compared with the function $V$
appearing in the hyperk\"ahler metric \eqref{HK-metric}. As shown in
\cite{Hitchin:1986ea}, the relation is (in the dilatation gauge $r^0=1$)
\begin{equation}
V = r^0{\cal L}_{xx}\ .
\end{equation}
Straightforward computation shows that, up to an additive
constant\footnote{This additive constant contributes to the parameter
$\mu$ in \eqref{V-sum} and depends on the particular CY$_3$ under
consideration.},
\begin{equation}\label{Lxxsing}
V = r^0{\cal L}_{xx} = - \frac{N}{4 \pi} \ln(z \bar{z}) \, ,
\end{equation}
which precisely matches \eqref{cfsol}.
This shows that at string tree-level, mirror symmetry works.
\section{IIB: Resummation of D1-instantons}
\setcounter{equation}{0}
The starting point for including the D1-instantons is the modular invariant tensor potential \cite{Robles-Llana:2006is}
\begin{equation} \label{TP}
\chi_{(1)} = - \frac{r^0 \tau_2^{1/2}}{(2 \pi)^3}
\sum_{ k_a } n_{k_a} \sideset{}{'} \sum_{m,n} \frac{\tau_2^{3/2}}{|m\tau + n|^3}\,
\big( 1 + 2 \pi |m\tau + n|\, k_a t^a \big) \, \mathrm{e}^{-S_{m,n}}\ ,
\end{equation}
with instanton action
\begin{equation}
S_{m,n} = 2\pi k_a \big( |m\tau + n|\, t^a - \mathrm{i} m\, c^a - \mathrm{i} n\, b^a
\big)\ .
\end{equation}
The primed sum is taken over all integers $(m,n) \in \mathbb{Z}^2 \backslash (0,0)$.
Here we used the notation and conventions as in \cite{Robles-Llana:2006is}, and adapted the normalization in such a way that it is consistent with the prepotential \eqref{VMPP}. The formula \eqref{TP} contains all contributions coming from
both worldsheet instantons (sum over $n$) and D1-instantons (sum over $m$), which are the only relevant configurations that survive in the conifold limit\footnote{Reference \cite{Robles-Llana:2006is} also
determined the contributions from
D(-1) instantons. They yield exponential corrections of the type
$\exp(-|m|\tau_2)$ and therefore vanish in the limit of vanishing string
coupling. Similar arguments show that three-brane and five-brane instantons
decouple in the conifold limit \eqref{CFlimitIIB}.}. The instanton action contains the dilaton-axion complex
\begin{equation}
\tau = \tau_1 + \mathrm{i} \tau_2 = a + \mathrm{i} {\rm e}^{-\phi}\ ,
\end{equation}
and the string coupling constant is given by $\lambda = {\rm e}^\phi$.
Furthermore, the $c^a$ are RR scalars that generate the theta-angle like terms
for the D1-instantons. The relation between the ``microscopic'' scalars $\tau, z^a, c^a$ and the scalars appearing in the tensor multiplets \eqref{TM} is given by \cite{Berkovits:1998jh,Neitzke:2007ke,Robles-Llana:2006is}
\begin{equation}
\tau = \frac{1}{(r^{0})^2} \left(\vr{0} \cdot \vr{1} + \mathrm{i} \, | \vr{0} \times \vr{1} | \right) \, , \;
z^a = \frac{\eta_+^a}{\eta_+^1} \, , \; c^a = \frac{(\vr{0} \times \vr{1}) \cdot (\vr{1} \times \vr{a})}{|\vr{0} \times \vr{1}|^2} \, .
\end{equation}
Following the discussion in section 4 we compute the function $\mathcal{L}_{xx}$ arising from \eqref{TP}. The corresponding calculation can be simplified by noting that both $\chi_{(1)}$ and $\mathcal{L}_{xx}$ are invariant under local $SU(2)$ R-symmetry. Thus we can adopt a particular $SU(2)$ gauge, e.g., setting $x^0 = x^1 = 0, v^0 = \bar{v}^0$ and then taking the derivatives of \eqref{TP} with respect to
$x,v,\bar{v}$. Re-expressing the result in gauge invariant variables we find
\begin{equation}\label{LxxD1}
\mathcal{L}_{xx}(x,v,\bar{v}) = \frac{N}{4 \pi r^0} \sideset{}{'} \sum_{m,n} \frac{1}{|m \tau + n|}
\, \mathrm{e}^{ -2 \pi \, \left( | m \tau + n| \, t - \mathrm{i} m c - \mathrm{i} n b \right) } \, .
\end{equation}
To compare with the type IIA results obtained by Ooguri and Vafa we also have to take the conifold limit. On the type IIB side this corresponds to
\begin{equation}\label{CFlimitIIB}
t \rightarrow 0 \, , \quad b \rightarrow 0 \, , \quad \tau_2 \rightarrow \infty \, , \quad {\rm with} \; \; \; \tau_2 |b + \mathrm{i} t| = {\rm finite} \, .
\end{equation}
Taking this limit requires resumming the instanton corrections appearing in \eqref{LxxD1}. For this purpose we split the double sum into the contributions coming from worldsheet instantons, $m=0$, and the D1-instantons plus their bound states, $m \not = 0, n \in \mathbb{Z}$:
\begin{equation}\label{Lxxsplit}
\begin{split}
\mathcal{L}_{xx}(x,v,\bar{v}) = & \, \frac{N}{4 \pi r^0} \sum_{n \not = 0} \frac{1}{|n|}
\mathrm{e}^{ -2 \pi \left( \, |n| \, t - \mathrm{i} n b \right) }
+ \frac{N}{4 \pi r^0} \sum_{m \not = 0} \, \sum_{n \in \mathbb{Z}} \frac{1}{|m \tau + n|}
\, \mathrm{e}^{ -2 \pi \left( \, |m \tau + n| \, t - \mathrm{i} m c - \mathrm{i} n b \right) } \, .
\end{split}
\end{equation}
The first term can be summed up easily
\begin{equation}\label{int1}
\begin{split}
\frac{N}{4 \pi r^0} \, \sum_{n \not = 0} \frac{1}{|n|} \exp
\Big\{ -2 \pi \left( |n| t - \mathrm{i} n b \right) \Big\}
& \, = - \frac{N}{4 \pi r^0} \ln\left(1 - {\rm e}^{2 \pi \mathrm{i} z} \right) + \mbox{c.c.} \\
& \, \simeq - \frac{N}{4 \pi r^0} \ln\left( z \bar{z} \right) \, ,
\end{split}
\end{equation}
where we took the conifold limit $z = b+\mathrm{i} t \rightarrow 0$ in the second line, using $1 - {\rm e}^{2 \pi \mathrm{i} z} \simeq -2\pi \mathrm{i} z$ and dropping an irrelevant additive constant. Observe that this expression precisely reproduces \eqref{Lxxsing}. In order to take the conifold limit in the second line of \eqref{Lxxsplit} we first carry out a Poisson resummation in $n$. Using the results of appendix \ref{SecB.2} we find
\begin{equation}\label{int2}
\begin{split}
\frac{1}{4 \pi r^0} & \sum_{m \not = 0} \, \sum_{n \in \mathbb{Z}} \frac{1}{|m \tau + n|}
\, \mathrm{e}^{ -2 \pi \left( \, |m \tau + n| \, t - \mathrm{i} m c - \mathrm{i} n b \right) } \\
& = \frac{1}{2 \pi r^0} \sum_{m \not = 0} \, \sum_{n \in \mathbb{Z}}
K_0\left( 2 \pi |m \tau_2| \sqrt{t^2 + (b + n)^2} \right) \mathrm{e}^{2 \pi \mathrm{i} m (c - \tau_1 (b+n))} \\
& \simeq \frac{1}{2 \pi r^0} \sum_{m \not = 0} K_0(2 \pi \tau_2 |m z|) \; {\rm e}^{ 2 \pi \mathrm{i} m (c - \tau_1 b) } \, .
\end{split}
\end{equation}
Here we have taken the conifold limit \eqref{CFlimitIIB} in the second step. Note that in this limit the sum over $n$ localizes such that only the $n=0$ part gives a non-zero contribution.
Combining \eqref{int1} and \eqref{int2}, we then obtain the D1-instanton
corrected $\mathcal{L}_{xx}$ in the conifold limit
\begin{equation}
N^{-1}V=r^0\mathcal{L}_{xx} = \frac{1}{4 \pi } \ln\left( \frac{1}{z \bar{z}} \right)
+ \frac{1}{2 \pi } \sum_{m \not = 0} K_0(2 \pi \tau_2 |m z|) \; {\rm e}^{ 2 \pi \mathrm{i} m (c - \tau_1 b) }
\end{equation}
Comparing this to the instanton corrected function $V$ on the type IIA side
in \eqref{V-sum}, we find perfect agreement if we use the mirror map
\begin{equation}\label{mirrormap}
\lambda = \tau_2^{-1} \,, \qquad z^{\rm IIA} = z^{\rm IIB} \,, \qquad u = \pm(c - \tau_1 b) \, .
\end{equation}
The second relation states that, under mirror symmetry, the complex structure modulus $z^{\rm IIA}$ is equated to the complexified K\"ahler modulus $z^{\rm IIB}$ associated with the vanishing cycles, while the relation between $u$ and $c - \tau_1 b$ is determined up to a sign only. Note that selecting the minus sign, eq. \eqref{mirrormap} is precisely the classical mirror map obtained in \cite{BGHL}. This shows that the classical mirror map does not receive quantum corrections
once the conifold limit is taken.
\medskip
\noindent
\textbf{Acknowledgments}
\noindent
We thank Cumrun Vafa for suggesting this project and for reading an earlier
draft of this manuscript. We furthermore thank Bernard de Wit,
Daniel Robles-Llana, Martin Ro\v{c}ek, Jan Stienstra and Ulrich Theis for
valuable discussions. This work grew out of the 4th Simons Workshop in
Physics and Mathematics. We thank the YITP and the Department of Mathematics
at Stony Brook University for hospitality.\ FS is supported by the
European Commission Marie Curie Fellowship no.\ MEIF-CT-2005-023966.
This work is partially supported by the European Union RTN network
MRTN-CT-2004-005104 and INTAS contract 03-51-6346.
\begin{appendix}
\section{Conifold singularities from the rigid c-map}
\setcounter{equation}{0}
The leading term in the type IIB vector multiplet prepotential can be determined by monodromy arguments
\begin{equation}\label{VMPPIIB}
f_{\rm ws}(z) = \frac{1}{4 \pi \mathrm{i}} \, z^2 \, \ln(z) + \ldots \, ,
\end{equation}
where $z$ is associated with the vanishing period of the holomorphic three-form of the CY$_3$.
We now interpret \eqref{VMPPIIB} as the prepotential underlying a rigid special K\"ahler geometry. We can then use the rigid c-map \cite{CFG,rcmap} to construct the dual hyperk\"ahler metric. For a general (rigid) prepotential $F(X^I)$ the resulting metric reads (up to a trivial rescaling)
\begin{equation}\label{rcmet}
- \tfrac{1}{2} {\rm d} s^2 = \mathrm{i} \left({\rm d} F_I \, {\rm d} \bar X^I - {\rm d} \bar F_I \, {\rm d} X^I \right)
- N^{IJ} \left( {\rm d} B_I - F_{IK} {\rm d} A^K \right) \left( {\rm d} B_J - \bar{F}_{JL} {\rm d} A^L \right) \, ,
\end{equation}
where $F_I = \partial F / \partial X^I$ and $N^{IJ}$ is the inverse of
\begin{equation}
N_{IJ} = - \mathrm{i} \left( F_{IJ} - \bar{F}_{IJ} \right) \, .
\end{equation}
Evaluating \eqref{rcmet} for the prepotential \eqref{VMPPIIB} then yields
\begin{equation}
\begin{split}
{\rm d} s^2 = & \, \frac{1}{2 \pi} \ln(z\bar{z}) \, {\rm d} z \, {\rm d} \bar{z} - \frac{\pi}{\ln(z\bar{z})}
\left( {\rm d} B - \frac{1}{2 \pi \mathrm{i}} \ln(z) \, {\rm d} A \right) \left( {\rm d} B + \frac{1}{2 \pi \mathrm{i}} \ln(\bar{z}) \, {\rm d} A \right) \, .
\end{split}
\end{equation}
Changing coordinates
\begin{equation}
B = 2 \lambda \, t \, , \; A = - 2 \lambda \, u \, ,
\end{equation}
one finds precisely the metric \eqref{HK-metric} obtained from the solution \eqref{cfsol}.
\section{Polylogarithms and resummation techniques}
\setcounter{equation}{0}
In this appendix we collect various facts used in the main part of the paper by giving a brief introduction to polylogarithmic functions and Poisson resummation in sections \ref{SecB.1} and \ref{SecB.2}, respectively.
\subsection{Polylogology}
\label{SecB.1}
We start by summarizing some properties of polylogarithmic functions, essentially following appendix B of ref.\ \cite{LMZ}. For $0 < z < 1$, the $k$-th polylogarithm is defined via the series expansion
\begin{equation}\label{PLD}
{\rm Li}_k (z) = \sum_{n=1}^{\infty} \frac{z^n}{n^k} \, .
\end{equation}
It can be analytically continued to a multivalued function on the complex plane. Polylogarithms with different values of $k$ are related by
\begin{equation}\label{polyrec}
z \frac{d}{dz} {\rm Li}_k(z) = {\rm Li}_{k-1}(z) \, .
\end{equation}
For $k = 1$ we have
\begin{equation}
{\rm Li}_1(z) = - \log(1-z) \, ,
\end{equation}
which we used to sum up \eqref{int1}. From the definition \eqref{PLD} we find
\begin{equation}
{\rm Li}_k(0) = 0 \; , \quad ( k \in \mathbb{Z}) \quad {\rm and} \quad {\rm Li}_k(1) = \zeta(k) \, , \quad {\rm for} \quad k > 1 \, .
\end{equation}
Polylogarithms at values $z$ and $1/z$ are related through the connection formula \cite{ConnectionFormula},
\begin{equation}
{\rm Li}_k(z) + (-1)^k \, {\rm Li}_k(1/z) = - \frac{(2 \pi \mathrm{i})^k}{k!} B_k\left( \frac{\log(z)}{2 \pi \mathrm{i}} \right) \, ,
\end{equation}
where $B_k(\cdot)$ are the Bernoulli polynomials. For Li$_3(z)$ this yields
\begin{equation}\label{CFLi3}
{\rm Li}_3(z) - {\rm Li}_3(1/z) = - \tfrac{1}{6} \log^3(z) - \tfrac{\mathrm{i} \pi}{2} \log^2(z) + \tfrac{\pi^2}{3} \log(z) \, .
\end{equation}
{}From the point of view of the main part of the paper, it is more natural to work with the variable $x$, $z = {\rm e}^x$. In this case \eqref{CFLi3} becomes
\begin{equation}\label{CFLix}
{\rm Li}_3(\mathrm{e}^x) = {\rm Li}_3(\mathrm{e}^{-x}) - \tfrac{1}{6} x^3 - \tfrac{\mathrm{i} \pi}{2} x^2 + \tfrac{\pi^2}{3} x \, .
\end{equation}
The conifold point corresponds to $x = 0$. At this point the function ${\rm Li}_3(\mathrm{e}^{-x})$ has a logarithmic branch point
\begin{equation}\label{Li3expans}
{\rm Li}_3(\mathrm{e}^{-x}) \simeq q(x) \log(x) + p(x) \quad {\rm for} \; x \rightarrow 0 \, ,
\end{equation}
where $q(x)$ and $p(x)$ are power series
\begin{equation}
q(x) = \sum_{j=0}^\infty q_j x^j \; , \qquad p(x) = \sum_{j=0}^\infty p_j x^j \, .
\end{equation}
Analytically continuing the ansatz \eqref{Li3expans} to Li$_3(\mathrm{e}^{x})$ using $\log(-x) = \log(x) + \mathrm{i} \pi$ and substituting into the connection formula \eqref{CFLix} we obtain the following expansion for small $x$
\begin{equation}\label{Li3exp}
\mathrm{Li}_3 \left(\mathrm{e}^{-x} \right) = - \frac{1}{2} x^2 \ln(x) + p(x)\ ,
\end{equation}
where $p(x) = \zeta(3) - \zeta(2)x + \tfrac{3}{4}x^2 + \frac{1}{12} x^3 + \mathcal{O}(x^4)$ is a power series in $x$, regular at $x=0$. With this identity it is then straightforward to determine the conifold limit of the prepotential \eqref{VMPP}.
\subsection{Poisson resummation}
\label{SecB.2}
Taking the conifold limit in the D1-brane instanton sector in
section 5 requires a Poisson resummation in the worldsheet instanton number $n$. The technical details of this computation are collected in this appendix.
The basic ingredient for Poisson resummation is the following identity for the Dirac delta-distribution
\begin{equation}
\sum_{n \in \mathbb{Z}} \delta(y - na) = \frac{1}{a} \sum_{n \in \mathbb{Z}} {\rm e}^{2 \pi \mathrm{i} n y/a} \, , \; a \in \mathbb{R}^+ \, .
\end{equation}
Multiplying by an arbitrary function $f(x+y)$ and integrating over $ y \in \mathbb{R}$ gives the Poisson resummation formula
\begin{equation}\label{PRF}
\sum_{n \in \mathbb{Z}} f(x+na) = \frac{1}{a} \sum_{n \in \mathbb{Z}} \tilde{f}(2 \pi n /a) \, \mathrm{e}^{2 \pi \mathrm{i} n x/a} \, .
\end{equation}
Here $f(x)$ and $\tilde f(k)$ are related by Fourier-transformation
\begin{equation}
\tilde f(k) = \int_{-\infty}^{\infty} {\rm d}x \, f(x) \, \mathrm{e}^{-ikx} \; , \qquad f(x) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} {\rm d}k \, \tilde f(k) \, \mathrm{e}^{ikx} \, .
\end{equation}
We now apply this resummation to the second term in \eqref{Lxxsplit}. Comparing
\begin{equation}
\begin{split}
\sum_{n \in \mathbb{Z}} \frac{1}{\left| m \tau + n \right|} \, \mathrm{e}^{- 2 \pi (| m \tau + n| \, t - \mathrm{i} n b)}
= \sum_{n \in \mathbb{Z}} \frac{1}{\sqrt{ (m \tau_2)^2 + (n + m \tau_1)^2 }} \, \mathrm{e}^{- 2 \pi ( \sqrt{ (m \tau_2)^2 + (n + m \tau_1)^2} \, t - \mathrm{i} n b)}
\end{split}
\end{equation}
to the general formula \eqref{PRF} we identify
\begin{equation}\label{B.15}
\tilde f(2 \pi n) = \frac{2 \pi}{(\alpha^2 + (2 \pi n + \gamma)^2)^{1/2}} \mathrm{e}^{- (\alpha^2 + (2 \pi n + \gamma)^2)^{1/2} t} \, ,
\end{equation}
together with $a = 1, x = b, \alpha = 2 \pi m \tau_2$, and $\gamma = 2 \pi m \tau_1$. The (inverse) Fourier transform of \eqref{B.15} can be found using the following formula for Fourier cosine transformations \cite{IT}:
\begin{equation}\label{FCT1}
\int_0^\infty {\rm d}x \, (x^2 + \alpha^2)^{-1/2} \, {\rm e}^{-\beta \,(x^2 + \alpha^2)^{1/2}} \cos(xy) = K_0\left[\alpha (\beta^2 + y^2)^{1/2} \right] \, .
\end{equation}
Substituting the result back into \eqref{PRF} then establishes the identity
\begin{equation}\label{PR1}
\sum_{n \in \mathbb{Z}} \frac{1}{\left| m \tau + n \right|} \, \mathrm{e}^{- 2 \pi ( | m \tau + n| t - \mathrm{i} n b)} = 2 \sum_{n \in \mathbb{Z}} K_0\left(2 \pi |m \tau_2| (t^2 + (b+n)^2)^{1/2} \right) \mathrm{e}^{- 2 \pi \mathrm{i} m \tau_1 (b + n)} \, .
\end{equation}
This completes the derivation of the first step in \eqref{int2}.
\end{appendix}
\section{Introduction and summary}
Global gravitational anomalies \cite{Witten:1985xe} are anomalous phases picked up by the partition function of quantum field theories under large diffeomorphisms of spacetime. Just as for local anomalies \cite{AlvarezGaume:1983ig}, their cancellation is required in quantum field theories arising as low energy effective descriptions of quantum theories of gravity, providing constraints on the latter. In non-gravitational theories, however, global anomalies need not vanish.
The aim of this paper is to compute the global gravitational anomalies of the 6-dimensional conformal field theories with (2,0) supersymmetry \cite{Witten:1995zh, Strominger:1995ac}, henceforth referred to as (2,0) theories. There are two main motivations for this computation, which will be presented in turn.
As we will explain in Section \ref{SecRemAnom}, the global anomaly of a $d$-dimensional quantum field theory $\mathfrak{F}$ is captured by an $\mathbb{R}/\mathbb{Z}$-valued geometric invariant ${\rm An}_\mathfrak{F}$ of $d+1$-dimensional manifolds. A large class of such invariants are Chern-Simons invariants, whose value on a $d+1$-dimensional manifold $U$ is given by the integral of a characteristic form of degree $d+2$ over a $d+2$-dimensional manifold $W$ bounded by $U$. The knowledge of the local anomaly essentially amounts to the knowledge of a characteristic form $I$ in dimension $d+2$, and in simple cases, such as complex chiral fermions, ${\rm An}_\mathfrak{F}(U)$ is indeed simply given by the Chern-Simons invariant of $I$. However, such a formula can be consistent only when $I$ yields an integer whenever integrated over a closed manifold $W$. Indeed, this ensures that ${\rm An}_\mathfrak{F}(U)$ is well-defined modulo $\mathbb{Z}$.
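Schematically, and fixing conventions for what follows (we simply make the construction just described explicit), the Chern-Simons invariant associated to a degree $d+2$ characteristic form $I$ is
\begin{equation}
{\rm CS}_I(U) = \int_W I \mod \mathbb{Z} \, , \qquad \partial W = U \, ;
\end{equation}
gluing two choices of $W$ along $U$ shows that this is independent of the choice of $W$ precisely when $I$ integrates to an integer on every closed manifold.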
The local anomaly of (2,0) theories has been computed in \cite{Harvey:1998bx} for theories in the A-series, in \cite{Yi:2001bz} for the D-series and a general formula, also valid for the E-series, has been conjectured in \cite{Intriligator:2000eq}. Given these expressions, it is easy to check that the corresponding degree 8 characteristic form $I$ does not integrate to an integer on closed $8$-dimensional manifolds (see equation \eqref{EqLocAn20Theory}). This shows that the Chern-Simons invariant of $I$ does not exist, and it is therefore an interesting task to determine the geometric invariant computing the anomaly of the (2,0) theory. We will show that the latter can be seen as the sum of the would-be Chern-Simons invariant of $I$ and an extra term that does not contribute to the local anomaly. While ill-defined separately, these two terms combine into a well-defined invariant of 7-dimensional manifolds.
The second motivation for the study of the global anomaly of (2,0) theories comes from the fact that they generate an impressive collection of supersymmetric theories in lower dimensions upon reduction. When reduced on a 4-manifold $X$, the (2,0) theory yields a 2-dimensional quantum field theory that can inherit a global gravitational anomaly, translating into a failure of modular invariance. The knowledge of the global anomaly of the (2,0) theory on generic 6-dimensional manifolds allows us in principle to compute the failure of modular invariance in the 2-dimensional theory in terms of the geometry and topology of $X$.
When reduced on a Riemann surface, the (2,0) theory yields a 4-dimensional supersymmetric theory. The latter admits an S-duality group given by the mapping class group of the Riemann surface \cite{Verlinde:1995mz, Witten:1997sc, Gaiotto:2009we}. The fact that the 6-dimensional theory has a global gravitational anomaly translates into the fact that the S-duality transformation of the 4-dimensional partition function is anomalous \cite{Vafa:1994tf, Labastida:1998sk}. Again, the knowledge of the 6-dimensional global gravitational anomaly allows us in principle to compute the anomalous transformation of the 4-dimensional theories under S-duality.
We will not venture into this interesting research program in the present paper, but only keep it in mind as a strong motivation for the derivation of a general anomaly formula for the (2,0) theory.
We can carry out a rigorous computation of the global anomaly only for A-type theories. We use the fact that the latter can be realized on a stack of M5-branes in M-theory \cite{Strominger:1995ac}. In particular, there is a limit in which a set of $n$ parallel non-intersecting M5-branes flows to the $A_{n-1}$ (2,0) theory at a generic point of its Coulomb branch, together with a free tensor multiplet corresponding to the center of mass of the brane system. We showed recently in \cite{Monnierb} that the global anomaly of non-intersecting M5-branes vanishes, as is expected from the consistency of M-theory. In the present paper, we use this fact to derive the global anomaly of the (2,0) theory, in the same spirit as the derivation of the local anomaly in \cite{Harvey:1998bx}. To do so, we consider the M5-brane system above and pick a tubular neighborhood containing it. As we know that anomalies cancel in an M-theory spacetime including (non-intersecting) M5-branes, the anomaly of M-theory in the tubular neighborhood is due entirely to the presence of the boundary, and can essentially be computed by evaluating the M-theory Chern-Simons term on the boundary. One then obtains the anomaly of the (2,0) theory by subtracting the anomaly of the center of mass, which can be deduced from recent results about the global anomaly of the self-dual field \cite{Monnier2011a, Monniera}. One can then check explicitly that the geometric invariant obtained is well-defined, in the sense discussed above.
There is an essentially unique way of expressing the geometric invariant of the $A_n$ (2,0) theory in terms of Lie algebra data, and this provides a natural formula for the anomaly of the other (2,0) theories, which is automatically compatible with the exceptional isomorphisms between members of the A-D-E series. We check that the corresponding geometric invariant is well-defined as well for Lie algebras in the D and E series. A derivation of this formula in the $D_n$ case should be possible using the realization of the latter by $n$ M5-branes on an $\mathbb{R}^5/\mathbb{Z}_2$ orbifold. In this paper, we only point out that the global anomaly of M-theory on the $\mathbb{R}^5/\mathbb{Z}_2$ orbifold is not understood. Just like for the (2,0) theory, the Chern-Simons term obtained from the index density describing the local anomaly is ill-defined. In this case, however, we do not know how to compute the correct global anomaly.
In Section \ref{SecOrigHWZTerms}, we also present a simple picture for the appearance of the Hopf-Wess-Zumino terms present on the Coulomb branch of the (2,0) theory. Those terms can be thought of as the topological modes of the C-field living between the M5-branes, which have to persist when we scale the distance between the M5-branes to zero in order to obtain the (2,0) theory.
Another interesting point is that the anomaly formula we derive suggests that more data is needed to define the (2,0) theory than was previously expected. In addition to a simply laced Lie algebra, a smooth oriented 6-manifold $M$, a rank 5 R-symmetry bundle $\mathscr{N}$ over $M$ and a spin structure on $TM \oplus \mathscr{N}$, we seem to need a global angular differential cohomology class on $\mathscr{N}$. This is a differential cohomology class on the 4-sphere bundle $\tilde{M}$ associated to $\mathscr{N}$, restricting to a normalized top differential cohomology class on each fiber. In the M-theory realization of the A-type theories, a choice of global angular differential cohomology class is required in order to perform the decoupling of the center-of-mass tensor multiplet. We should mention that when the fourth Stiefel-Whitney class of $\mathscr{N}$ vanishes, a canonical choice is available.
A conceptual way to think of anomalies is in terms of a field theory (in the mathematical sense of the term) in one dimension higher \cite{Freed:2014iua}. The geometric invariant computed in this paper is the partition function of this anomaly field theory. Other aspects of the anomaly field theory will be explored elsewhere \cite{Monnierc}. We should also mention that a discussion of the relation between the quantum field theory on a stack of M5-branes and a non-abelian Chern-Simons 7-dimensional theory appeared in \cite{Fiorenza:2012tb}.
We add two remarks to clarify the assumptions made in this paper and the caveats of the derivation.\footnote{We thank the referee for raising this point.} First, the anomaly cancellation check of \cite{Monnierb} was not quite complete, as it was assumed that all 7-dimensional manifolds $U$ involved in anomaly computations are bounded by 8-dimensional manifolds $W$. It was shown in \cite{Monnierb} that the possible obstruction, given by a certain cobordism group, is at most torsion. If the cobordism group turns out not to vanish, then the check in \cite{Monnierb} is incomplete and it is in principle possible that M-theory backgrounds containing certain configurations of M5-branes are anomalous under certain combinations of large diffeomorphisms and C-field gauge transformations. In this paper, we make the likely assumption that no such anomalies exist. (Their existence would imply a fundamental inconsistency of M-theory.)
Second, to keep the derivation simple, we assume in this paper that the cobordism group vanishes, and therefore that every $U$ is bounded by a $W$. This allows us to compute in Section \ref{SecEvCSTerm} the anomaly inflow using differential forms on $W$. As will be shown in \cite{Monnierc}, we are not losing any information from this assumption, because the anomaly inflow computation can be carried out on $U$, using the corresponding differential cocycles, and it yields the same result.
The paper is organized as follows. Section \ref{SecRemAnom} presents the relation between global anomalies of $d$-dimensional quantum field theories and geometric invariants of $d+1$-dimensional manifolds. We also review the known local anomalies of the (2,0) theories and explain why the associated Chern-Simons invariants are ill-defined. In Section \ref{SecGeomM5-branes}, we present aspects of the geometry of M5-branes necessary for our computation of the global anomaly. The derivation of the global anomaly of the A-type (2,0) theories is found in Section \ref{SecGlobAnAn}. We show that the anomaly formula determines a well-defined geometric invariant of 7-manifolds and comment on the appearance of conformal blocks and on the Hopf-Wess-Zumino terms present on the Coulomb branch of (2,0) theories. Section \ref{SecGenGlobAnForm} presents the general anomaly formula, conjecturally also valid for the D- and E-type theories, as well as a proof that the associated geometric invariants are well-defined.
\section{Some remarks about anomalies}
\label{SecRemAnom}
The aim of this section is to explain informally how the global anomaly of a $d$-dimensional quantum field theory can be described by a geometric invariant of $d+1$-dimensional manifolds. In Section \ref{SecAnomCob}, we introduce the anomaly line bundle and explain that its holonomies and transition functions can be computed by evaluating a geometric invariant on mapping tori and twisted doubles, respectively. In Section \ref{SecExamAnom}, we give some examples of anomalous theories and their geometric invariants. We introduce in Section \ref{SecIncons20TheoryAn} the local anomaly of the (2,0) theory and deduce a natural guess for its global anomaly. We explain why this naive guess cannot be correct, providing a motivation for the more careful derivation in the following sections.
\subsection{Global anomalies and cobordisms}
\label{SecAnomCob}
A global symmetry of a field theory on a $d$-dimensional manifold $M$ is associated to a current $J$. The latter can be sourced by a background field $A$, which belongs to an infinite-dimensional space of background fields $\mathcal{B}$. Two common examples of such symmetries are a global internal symmetry, described by a pointwise action of a Lie group $G$ on the fields of the theory, and the isometry group of spacetime, acting by pullback on the fields. The associated currents are the symmetry current and the energy-momentum tensor. The corresponding background fields in these two examples are a non-dynamical gauge field coupling to the current, and a (Riemannian or Lorentzian) metric on $M$.
We can also consider the local transformations associated to the global symmetry. In our first example, such local transformations are generated by the action on the fields of a section $g$ of a $G$-bundle over $M$. In the second example, the local transformations are the diffeomorphisms of $M$, or a subset of those, if some structure necessary for the definition of the field theory needs to be preserved. While a local transformation does not leave the action invariant, its effect can be compensated by a corresponding transformation on the background fields. In the first example, this is achieved by changing the background gauge field by the gauge transformation associated to $g$. In the second example, this is achieved by pulling back the metric of $M$ via the diffeomorphism.
In the quantum theory, we say that the global symmetry suffers from an \emph{anomaly} if the quantum theory turns out not to be invariant under the combined action of the local transformations on the fields and on the background fields. More precisely, we can see the partition function of the quantum field theory (as well as the associated correlation functions) as functions over the space of background fields $\mathcal{B}$. An anomaly is present if these functions are not invariant under the action of the group $\mathcal{G}$ of local transformations on $\mathcal{B}$. For unitary theories, the lack of invariance of the partition function $Z$ is only by a phase. Our aim in the present paper is to give a formula for these phases in the case of the 6-dimensional superconformal theories with (2,0) supersymmetry, when the local transformations are diffeomorphisms of the 6-dimensional spacetime.
A fruitful point of view on anomalies is the following. If $Z$ is not invariant under $\mathcal{G}$, it cannot define a function on the quotient $\mathcal{B}/\mathcal{G}$, seen as the space of gauge invariant background field data. However, $Z$ does define a section of a unitary $\mathcal{G}$-equivariant line bundle on $\mathcal{B}$. For all practical purposes, a $\mathcal{G}$-equivariant line bundle on $\mathcal{B}$ can be taken as the definition of a line bundle over $\mathcal{B}/\mathcal{G}$, valid even when the quotient is singular. Therefore, instead of defining a function over $\mathcal{B}/\mathcal{G}$, in general $Z$ defines a section of a unitary line bundle $\mathscr{L}$ over $\mathcal{B}/\mathcal{G}$.
From now on, in order to have a unified treatment, we include in the space of background fields $\mathcal{B}$ all the data required to define our quantum field theory. In particular, a point of $\mathcal{B}$ specifies the $d$-dimensional spacetime $M$. $\mathcal{G}$ is then not exactly a group, but a groupoid obtained by the union of the groups of local transformations for each $M$, acting each on the respective component of $\mathcal{B}$. In this more general setting, the partition function still defines a section of a line bundle $\mathscr{L}$ over $\mathcal{B}/\mathcal{G}$.
How can we describe unitary line bundles and their sections over $\mathcal{B}/\mathcal{G}$? One way to do so is to pick some $\mathbb{R}/\mathbb{Z}$-valued geometric invariant of manifolds with boundary of dimension $d+1$. (Recall that the spacetime $M$ has dimension $d$.) We will write ${\rm An}_{\mathfrak{F}}$ for the geometric invariant describing the anomaly bundle of the quantum field theory $\mathfrak{F}$. By \emph{geometric invariant}, we mean a functional that depends on certain geometric or topological data on the $d+1$-dimensional manifold $U$, which after restriction to $\partial U$ defines a unique point in $\mathcal{B}$. The only requirement we put on ${\rm An}_{\mathfrak{F}}$ is that it is consistent with the gluing of manifolds along their boundaries. If $U_1$ has a boundary component $M$ and $U_2$ has a boundary component $\bar{M}$ ($M$ with the opposite orientation) such that the extra structure glues smoothly into a manifold $U_1 \cup_M U_2$, then we require that
\be
\label{EqFunctRelGeomInv}
{\rm An}_{\mathfrak{F}}(U_1) + {\rm An}_{\mathfrak{F}}(U_2) = {\rm An}_{\mathfrak{F}}(U_1 \cup_M U_2) \;.
\ee
In more abstract terms, we need to find a cobordism category $\mathfrak{C}$ whose objects are the elements of $\mathcal{B}$, i.e. $d$-dimensional manifolds endowed with all the structures we need to define our quantum field theory. ${\rm An}_{\mathfrak{F}}$ is then a functor from $\mathfrak{C}$ to the category whose only object is the complex line $\mathbb{C}$ and whose morphisms from $\mathbb{C}$ to itself are labeled by $U(1)$, identified with $\mathbb{R}/\mathbb{Z}$ via exponentiation.
The geometric invariant ${\rm An}_{\mathfrak{F}}$ then defines a unitary line bundle $\mathscr{L}$ with connection over $\mathcal{B}/\mathcal{G}$. We write $\pi$ for the projection $\mathcal{B} \rightarrow \mathcal{B}/\mathcal{G}$. For instance, a cobordism $U_b$ between the empty manifold and $b \in \mathcal{B}$ can be seen as defining the value at $b$ of (the pull-back of) a section $s$ of $\mathscr{L}$. Indeed, $b \in \mathcal{B}$ defines a manifold $M$ together with background fields, and there is a subset $\mathcal{B}_U \subset \mathcal{B}$ consisting of the data that can be extended to $U$. The restriction of $\pi^\ast(\mathscr{L})$ to $\mathcal{B}_U$ is trivial and we define $\pi^\ast(s)(b) = {\rm An}_{\mathfrak{F}}(U)$. As $b$ moves in $\mathcal{B}_U$, we obtain a function over $\mathcal{B}_U$, which is the pull-back of a section $s$ of $\mathscr{L}$.
An element $g \in \mathcal{G}$ acts on $\mathcal{B}$ and induces a change of trivialization of $\pi^\ast(\mathscr{L})$. We can compute the phase of this change of trivialization by comparing the value of the pull-back of a given section $s$ at $b$ and at $g.b$. We know that $\pi^\ast(s)(b) = {\rm An}_{\mathfrak{F}}(U_b)$. Consider now the twisted double $U_g$ of $U_b$. This is the manifold obtained by gluing $U_b$ to $\bar{U}_b$ ($U_b$ with the opposite orientation) through the transformation $g$. Then ${\rm An}_{\mathfrak{F}}(U_g)$ is the logarithm of the phase associated to the change of trivialization induced by $g$. A simple argument shows that the phase obtained is independent of the choice of manifold $U_b$, i.e. of the choice of section of $\mathscr{L}$, see Figure \ref{Fig2}.
\begin{figure}[p]
\vspace{-1cm}
\centering
\includegraphics[width=\textwidth]{Fig-TwistDoubleCob.pdf}
\caption{\emph{This figure illustrates the argument showing that the value of ${\rm An}_{\mathfrak{F}}$ on twisted doubles depends only on the gluing map $\phi$. We start by picking two manifolds $U$ and $U'$ bounded by $M$. On the top left, the twisted double $U_\phi$ is constructed by gluing two copies of $U$, one of them with its orientation reversed, with the help of the map $\phi$. On the top right, the same construction starting from $U'$, with the opposite orientation, yielding $\bar{U}'_\phi$. By rearranging the pieces, we obtain the second line. Then, noticing that the two twists cancel in the second gluing on the second line, we obtain on the third line $V = U \cup_{\rm id} \bar{U}'$ and $\bar{V}$. This pair of manifolds is bordant to the empty manifold, showing that the sum of the two invariants vanishes all along and implying that ${\rm An}_{\mathfrak{F}}(U_\phi) = {\rm An}_{\mathfrak{F}}(U'_\phi)$. In terms of the line bundle $\mathscr{L}$, this translates into the fact that the transition functions do not depend on the sections used to compute them.}}
\label{Fig2}
\end{figure}
The parallel transport along a path $p$ in $\mathcal{B}$ is given by a cylindrical cobordism $U_{[0,1]} = M \times [0,1]$ between $p(0)$ and $p(1)$, in such a way that $M \times \{t\} = p(t)$. In particular, a loop $c$ in $\mathcal{B}$ determines a closed $d+1$-dimensional manifold $U_c$, the mapping torus associated to $c$. $\exp 2\pi i \, {\rm An}_{\mathfrak{F}}(U_c) \in U(1)$ is the holonomy of the connection on $\mathscr{L}$ along $c$. This explains the appearance of mapping tori in the computations of global anomalies \cite{Witten:1985xe, Witten:1985mj, Monnier2011a, Monnierb}. (We are here glossing over the fact that the path or loop in $\mathcal{B}$ might not define unambiguously the data needed to compute the geometric invariant on the cylinder or mapping torus. Those subtleties will play no role in what follows.)
Let us remark that the construction using twisted doubles reviewed above allows us to compute the anomalous phases picked by the partition function of a quantum field theory without computing the latter explicitly, provided we know the invariant ${\rm An}_{\mathfrak{F}}$.
\subsection{Examples}
\label{SecExamAnom}
Let us now turn to some examples. An important example is 3-dimensional Chern-Simons theory, in which the above is well-known \cite{Witten:1988hf, Labastida1989, Freed:1992vw}. The anomalous field theory is the 2-dimensional chiral WZW model and ${\rm An}_{\rm WZW}$ is the Chern-Simons functional. Depending on whether we are considering the quantum \cite{Witten:1988hf, Labastida1989} or classical \cite{Freed:1992vw} theory, we consider the gauge field as dynamical or include it with the background fields. The anomaly line bundle associated to a surface by the Chern-Simons term is the line bundle over the moduli space of flat connections of which the WZW conformal blocks are sections. This line bundle extends as well over the space of conformal structures of the surface.
Another example, treated in detail in \cite{Dai:1994kq}, is the modified eta invariant $\xi$ of a Dirac operator $D$ on an odd-dimensional manifold of dimension $d+1$. $\xi$ is related to the eta invariant by $\xi = \frac{1}{2} (\eta + h)$, where $h$ denotes the dimension of the kernel of $D$. It was shown in \cite{Dai:1994kq} that when we take ${\rm An}_{D_+} = \xi$, then $\mathscr{L}$ is the inverse of the determinant bundle of the associated chiral Dirac operator $D_+$ in dimension $d$. ${\rm An}_{D_+}$ then computes the global anomaly of the complex chiral fermionic theory in dimension $d$ associated to the Dirac operator $D_+$. In particular, the holonomies of the anomaly connection are given by $\tau = \exp 2\pi i \xi$ evaluated on mapping tori, in a suitable adiabatic limit in which the size of the base of the mapping tori tends to infinity \cite{Witten:1985xe, MR861886}. One can also compute the actual phase picked by the chiral fermion partition function under a diffeomorphism or a gauge transformation by evaluating $\tau$ on a twisted double, as explained above.
The latter example has the following interesting property. Assume that a closed $d+1$-dimensional spin manifold $U$ is bounded by a $d+2$-dimensional manifold $W$ on which the spin structure of $U$ extends, as well as any other data required to define $D$. The invariant $\xi$ can be computed using the Atiyah-Patodi-Singer index theorem \cite{Atiyah1973}:
\be
\xi = -{\rm index}(D_W) + \int_W I_{D_W}
\ee
where $D_W$ is a Dirac operator on $W$ restricting to $D$ on $U$, and $I_{D_W} = \hat{A}(TW) {\rm ch}(E)$ is the index density of $D_W$. (We expressed $D_W$ as the ordinary Dirac operator on $W$ twisted by a vector bundle $E$.) Note that we are reading this formula only modulo 1, so the first term on the right-hand side is irrelevant.
$I_{D_W}$ is exactly the characteristic form in dimension $d+2$ used to compute the local anomaly of the chiral fermionic theory \cite{AlvarezGaume:1983ig}. It can be related to the curvature of the anomaly line bundle $\mathscr{L}$ as follows. Recall that the holonomies of the connection on $\mathscr{L}$ can be computed by evaluating $\tau$ on a mapping torus. Assume that we are interested in a small homotopically trivial loop $c$ in $\mathcal{B}$. Then the mapping torus $U_c = M \times S^1$ is trivial and we can take $W = M \times D^2$, where $D^2$ is a 2-dimensional disk. We find therefore that the holonomy around $c$ is given by the integral of $I_{D_W}$ over $M \times D^2$. But the holonomy around $c$ is also given by the integral of the curvature of $\mathscr{L}$ over $D^2$. As this is true for all loops $c$, we find that the curvature of $\mathscr{L}$ is given by the degree 2 component of $\int_M I_{D_W}$, where $I_{D_W}$ is seen as a differential form on $M \times \mathcal{B}$.
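Schematically, and in the normalization in which holonomies are computed as $\exp 2\pi i \int_{D^2} \Omega_{\mathscr{L}}$ for a curvature form $\Omega_{\mathscr{L}}$ (this is merely a rephrasing of the argument above, not an independent result), the chain of equalities reads
\be
{\rm hol}_c(\mathscr{L}) = \exp 2\pi i \int_{M \times D^2} I_{D_W} = \exp 2\pi i \int_{D^2} \left[ \int_M I_{D_W} \right]_{(2)} \;, \qquad \Omega_{\mathscr{L}} = \left[ \int_M I_{D_W} \right]_{(2)} \;,
\ee
where $[\,\cdot\,]_{(2)}$ denotes the component of degree 2 along $\mathcal{B}$.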
We deduce that the local anomaly polynomial, of degree $d+2$, of a quantum field theory is directly related to the curvature of the anomaly line bundle via integration over spacetime. Of course, the local anomaly does not capture all the information about the anomaly of a quantum field theory: there exist line bundles with non-trivial flat connections. The set of holonomies of the connection captures all the information about the anomaly and is referred to as the \emph{global anomaly} \cite{Witten:1985xe}. Equivalently, the anomaly is fully captured by the geometric invariant ${\rm An}_{\mathfrak{F}}$, and this is the point of view that we will take in this paper.
\subsection{The case of the (2,0) theory}
\label{SecIncons20TheoryAn}
Let us now focus on the (2,0) theory in six dimensions. The local gravitational anomaly of the (2,0) theory of type $A_n$ was derived from M-theory in \cite{Harvey:1998bx}. This result was extended to the type $D_n$ case in \cite{Yi:2001bz} and a general formula also valid for the E-type theories was conjectured in \cite{Intriligator:2000eq}. The degree 8 local anomaly polynomial reads
\be
\label{EqLocAn20Theory}
I_\mathfrak{g} = r(\mathfrak{g}) J_8 - \frac{|\mathfrak{g}|{\rm h}_\mathfrak{g}}{24} p_2(\mathscr{N}_W) \;.
\ee
$r(\mathfrak{g})$, $|\mathfrak{g}|$ and ${\rm h}_\mathfrak{g}$ denote respectively the rank, the dimension and the dual Coxeter number of the simple and simply laced Lie algebra $\mathfrak{g}$. $J_8$, whose explicit expression will appear below, is the anomaly polynomial for a single tensor multiplet in six dimensions. $p_2(\mathscr{N}_W)$ is the second Pontryagin class of the rank 5 bundle $\mathscr{N}_W$ over $W$ obtained by extending the R-symmetry bundle of the (2,0) theory on $M$.
Given our experience with chiral fermions, one may be optimistic and guess that the value of the geometric invariant ${\rm An}_{\mathfrak{g}}$ governing the global anomaly of the (2,0) theory, evaluated on a manifold $U$ bounded by $W$ is simply given by
\be
\label{EqGuessAn20Theory}
{\rm An}_{\mathfrak{g}}(U) = \int_W I_\mathfrak{g} \;, \quad {\rm mod} \; 1 \;.
\ee
The problem is that \eqref{EqGuessAn20Theory} is inconsistent.
An $\mathbb{R}/\mathbb{Z}$-valued geometric invariant on a $d+1$-dimensional manifold $U$ defined by integrating a certain characteristic form $I$ over a bounding manifold $W$ of dimension $d+2$ can be well-defined only if $\int_W I$ is an integer for any closed manifold $W$. This is manifestly not the case for \eqref{EqGuessAn20Theory}. For instance, as we will see, $J_8$ can be written $\frac{1}{8}L(TW) + \frac{1}{2}I_f$, where $L(TW)$ is the Hirzebruch L-genus and $I_f$ is an index density with $\int_W I_f \in 2 \mathbb{Z}$ for $W$ closed. On a closed manifold, we have $\int_W L(TW) = \sigma_W$, the signature of the 8-dimensional manifold $W$. If $r(\mathfrak{g}) \sigma_W$ is not a multiple of 8, and in general it has no reason to be so, then $\frac{r(\mathfrak{g})}{8}\int_W L(TW)$ cannot define a geometric invariant of $U$. The second term in \eqref{EqLocAn20Theory} does not define an invariant of $U$ either. One can check explicitly that $|\mathfrak{g}|{\rm h}_\mathfrak{g}$ is a multiple of 6, but $\int_W p_2(\mathscr{N}_W)$ has no particular evenness property on a closed manifold. As the coefficients of the two terms do not vary proportionally when we change $\mathfrak{g}$, there is no hope that \eqref{EqGuessAn20Theory} can be well-defined.
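As a concrete illustration of the failure of integrality (this example is ours, and uses only the facts quoted above), take $W = \mathbb{HP}^2$, the quaternionic projective plane, a closed spin 8-manifold of signature 1, with a trivial bundle $\mathscr{N}_W$, so that $p_2(\mathscr{N}_W) = 0$. For $\mathfrak{g} = A_1$, with $r(A_1) = 1$, we find
\be
\int_{\mathbb{HP}^2} I_{A_1} = \int_{\mathbb{HP}^2} J_8 = \frac{1}{8}\,\sigma_{\mathbb{HP}^2} + \frac{1}{2}\int_{\mathbb{HP}^2} I_f \;\in\; \frac{1}{8} + \mathbb{Z} \;,
\ee
which is not an integer. Any closed spin 8-manifold whose signature is not a multiple of 8 would do equally well.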
The problem calls therefore for a more careful study. We will see that \eqref{EqGuessAn20Theory} holds after adding extra terms on the right-hand side that do not contribute to the local anomaly and that make the geometric invariant \eqref{EqGuessAn20Theory} well-defined. Our strategy will be to focus first on A-type theories, through their realizations on stacks of M5-branes. We will then find a straightforward generalization to the D- and E-type theories.
\section{The geometry of M5-branes}
\label{SecGeomM5-branes}
In this section, we review some facts about the geometry of M5-branes that will be useful in the derivation of the anomaly of the (2,0) theory. In Section \ref{SecDisM5br}, we review the properties of the embedding of the M5-brane worldvolume into spacetime. Some subtleties about the coupling of the M-theory C-field to the worldvolume theory of the M5-brane are reviewed in Section \ref{SecEffC-field} and we introduce some important notations for the rest of the paper. In Section \ref{SecStackM5}, we generalize the analysis to the case of a stack of M5-branes and in Section \ref{SecExt7-8-manifolds}, we extend these constructions to 7- and 8-dimensional manifolds, as required by anomaly computations. In Section \ref{SecAnCancDisM5}, we review the M-theory Chern-Simons term and its role in anomaly cancellation in the presence of M5-branes.
We assume that the reader is familiar with the use of (shifted) differential cocycles to model higher $p$-form abelian gauge fields. The original reference is \cite{hopkins-2005-70}. An introduction for physicists can be found in Section 2 of \cite{Freed:2006yc}. Our notations follow Section 2.1 of \cite{Monnierb}, which can be read as a quick reminder. Differential cocycles and cohomology classes are written with a caron $\check{}$. What we often call the \emph{field strength} of a differential cocycle is sometimes called the curvature in the literature. The reason for our terminology is obvious: when the differential cocycle models an abelian gauge field, its curvature coincides with the field strength of the gauge field.
\subsection{Non-intersecting M5-branes}
\label{SecDisM5br}
We consider the low energy limit of M-theory on an 11-dimensional smooth oriented spin manifold $Y$, in the limit of vanishing gravitational coupling. It consists of 11-dimensional supergravity, together with a Chern-Simons term involving an important higher derivative correction \cite{Witten:1996md}. We work in Euclidean signature, so we take $Y$ to be Riemannian. We will be considering gauge transformations and diffeomorphisms that are the identity outside of a compact subset $U$ of the spacetime $Y$. This implies that we can freely modify $Y$ outside this subset and take it to be compact, possibly adding sources outside $U$ in order to satisfy the Gauss law of the gauge fields.
Inside $U$, we choose a smooth oriented 6-dimensional manifold $M$, and we wrap one M5-brane on each of its connected components. We write $\mathscr{N}$ for the normal bundle of $M$ in $Y$. Our assumptions that $Y$ is oriented and spin and that $M$ is oriented imply that
\be
\label{EqRelSWClassesTMN}
w_1(TM) = w_1(\mathscr{N}) = 0 \;, \quad w_2(TM) + w_2(\mathscr{N}) = 0 \;, \quad w_5(\mathscr{N}) = 0 \;.
\ee
The last equality is not obvious and its proof can be found in Appendix A of \cite{Monnierb}. It should also be emphasized that it ceases to be automatically true once we extend these constructions from the 6-dimensional manifold $M$ to an 8-dimensional manifold $W$. In this case it will be assumed.
We pick a tubular neighborhood $N$ of $M$ of radius $\delta$, which will eventually be taken to zero, and we write $\tilde{M}$ for its boundary. $\tilde{M}$ is a 4-sphere bundle over $M$, and we write $\pi$ for the bundle map $\tilde{M} \rightarrow M$. We have
\be
T\tilde{M} \oplus \mathbb{R}_{\tilde{M}} \simeq \pi^\ast(TM \oplus \mathscr{N}) \;,
\ee
with $\mathbb{R}_{\tilde{M}}$ a trivial line bundle over $\tilde{M}$. This implies that for any stable characteristic class $c$, such as the Pontryagin or Stiefel-Whitney classes, we have
\be
\label{EqRelCharClassMtMN}
c(T\tilde{M}) = \pi^\ast(c(TM \oplus \mathscr{N})) \;.
\ee
\subsection{The effective C-field}
\label{SecEffC-field}
It is well known that the quantization of the fluxes of the M-theory C-field is shifted: they are integral or half-integral depending on the parity of the periods of $w_4(TY)$ \cite{Witten:1996md}. The precise way of encoding this statement is to see the C-field as an element $\check{C}$ of a group of shifted differential cocycles, written $\check{Z}_{\lambda}$ in Section 2.1 of \cite{Monnierb}. The shift $\lambda$ is a half-integer-valued cocycle such that $2\lambda$ is a lift of $w_4(TY)$ as an integral cocycle (i.e. a period of $2\lambda$ on a 4-cycle is even or odd depending on whether the period of $w_4(TY)$ is 0 or 1). From now on, we will refer to this shift simply as \emph{a shift by} $w_4(TY)$. We will similarly encounter later differential cocycles shifted by the degree 4 Wu class of $M$, i.e. by $w_4(TM) + (w_2(TM))^2$.
The M-theory C-field sources the self-dual two-form gauge field on the worldvolume of the M5-brane. However, it is not trivial to restrict the C-field to the M5-brane worldvolume. Indeed, the M5-brane itself sources the C-field in the bulk, which means that the integral of the C-field field strength $G$ on any 4-sphere linking $M$ is equal to 1. This implies that $G$ diverges near $M$. If the normal bundle $\mathscr{N}$ is trivial, a trivialization defines longitudinal and normal components of elements of $T^\ast M$. The divergent part of the four-form $G$ is purely normal, and one can restrict the longitudinal component to $M$. However, this strategy does not work if the normal bundle is non-trivial.
In Section 2.3 of \cite{Monnierb}, we explained how to define in the general case the effective C-field on the worldvolume. In terms of differential cocycles, the restriction reads
\be
\label{EqDefRestC-field}
\check{C}_M = \frac{1}{2}\pi_\ast(\check{C}_{\tilde{M}} \cup \check{C}_{\tilde{M}}) \;.
\ee
Here $\pi_\ast$ is the pushforward map on differential cocycles associated to the fiber bundle $\tilde{M} \stackrel{\pi}{\rightarrow} M$, $\check{C}_{\tilde{M}}$ is the (non-singular) restriction to $\tilde{M}$ of the C-field on $Y$, and $\cup$ is the cup product on differential cocycles. Let us remark that the factor $\frac{1}{2}$ in \eqref{EqDefRestC-field} makes it not obvious that the differential cohomology class of $\check{C}_M $ depends only on the differential cohomology class of $\check{C}_{\tilde{M}}$, i.e. that \eqref{EqDefRestC-field} is gauge invariant. This can be shown by performing an explicit gauge transformation on $\check{C}_{\tilde{M}}$ in \eqref{EqDefRestC-field} and noticing that the factor $\frac{1}{2}$ does not appear in the variation of $\check{C}_M$ \cite{Monnierb}.
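Heuristically, to first order in a variation $\delta\check{C}_{\tilde{M}}$ of the representative and at the level of classes (the careful cochain-level argument is the one of \cite{Monnierb}), the two cross terms cancel the factor $\frac{1}{2}$:
\be
\delta\check{C}_M = \frac{1}{2}\pi_\ast\left(\delta\check{C}_{\tilde{M}} \cup \check{C}_{\tilde{M}} + \check{C}_{\tilde{M}} \cup \delta\check{C}_{\tilde{M}}\right) = \pi_\ast\left(\check{C}_{\tilde{M}} \cup \delta\check{C}_{\tilde{M}}\right) \;,
\ee
in which no factor of $\frac{1}{2}$ remains.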
One can show that this definition reduces to the intuitive one sketched above when $\mathscr{N}$ is trivial. We also showed in \cite{Monnierb} that it passes a highly non-trivial consistency test: $\check{C}_M$ is a differential cocycle on $M$ shifted by the degree 4 Wu class of $M$, which is exactly what is required to define consistently the coupling to the worldvolume self-dual field \cite{Witten:1996hc}. (To be precise, the degree 4 Wu class of $M$ always vanishes, for dimensional reasons. We will however momentarily extend these constructions to manifolds of dimension 8, whose degree 4 Wu classes can be non-trivial.)
For explicit computations, it will be useful to choose an unshifted differential cocycle $\check{a}_{\tilde{M}}$, whose field strength $f_{\tilde{M}}$ integrates to $1$ over the 4-sphere fibers of $\tilde{M}$. We will refer to $\check{a}_{\tilde{M}}$ in the following as a \emph{global angular differential cocycle}. $\check{C}_{\tilde{M}}$ and its field strength $G_{\tilde{M}}$ can then be written
\be
\label{EqParCFieldTMSinM5}
\check{C}_{\tilde{M}} = \check{a}_{\tilde{M}} + \pi^\ast(\check{A}_M) \;, \quad G_{\tilde{M}} = f_{\tilde{M}} + \pi^\ast(F_M) \;,
\ee
for some differential cocycle $\check{A}_M$ shifted by $w_4(TM \oplus \mathscr{N})$, with field strength $F_M$. The coefficient of $\check{a}_{\tilde{M}}$ in \eqref{EqParCFieldTMSinM5} is 1 because the M5-brane supported on $M$ sources one unit of flux of the C-field. The effective C-field \eqref{EqDefRestC-field} and its field strength $G_M$ then read
\be
\check{C}_M = \check{b}_M + \check{A}_M \;, \quad G_M = h_M + F_M \;,
\ee
where we defined
\be
\label{EqDefb}
\check{b}_M = \frac{1}{2} \pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}})
\ee
and wrote $h_M$ for the field strength of $\check{b}_M$. The differential cocycle $\check{b}_M$ gives rise to a well-defined differential cohomology class for the same reason as $\check{C}_M$ does, see \eqref{EqDefRestC-field}. Results of \cite{Witten:1999vg} show that it is shifted by $w_4(\mathscr{N})$. The differential cocycles $\check{a}_{\tilde{M}}$ and $\check{b}_{M}$ will play an important role in what follows.
\subsection{Stacks of M5-branes}
\label{SecStackM5}
We point out here the differences arising when $M$ supports a stack of $n$ M5-branes, rather than a single one. The flux through the fibers of $\tilde{M}$ is now $n$ units. Using \eqref{EqParCFieldTMSinM5}, we can parameterize the C-field on $\tilde{M}$ as follows:
\be
\label{EqParCFieldTMStaM5}
\check{C}_{\tilde{M}} = n \check{a}_{\tilde{M}} + \pi^\ast(\check{A}_M) \;, \quad G_{\tilde{M}} = nf_{\tilde{M}} + \pi^\ast(F_M) \;.
\ee
$\check{A}_M$ is as before a differential cocycle shifted by $w_4(TM \oplus \mathscr{N})$. Under changes of the parameterization \eqref{EqParCFieldTMStaM5}, we have
\be
\label{EqReparC-fieldtM1}
\check{a}_{\tilde{M}} \rightarrow \check{a}_{\tilde{M}} + \pi^\ast(\check{B}_M) \;, \quad \check{A}_M \rightarrow \check{A}_M - n \check{B}_M
\ee
for $\check{B}_M$ an unshifted differential cocycle on $M$. We can also define
\be
\label{EqDefEffCn}
\check{C}_{M,n} := n\check{b}_M + \check{A}_M \;, \quad G_{M,n} := nh_M + F_M \;,
\ee
which is invariant under \eqref{EqReparC-fieldtM1}. We can also write $\check{C}_{M,n} = \frac{1}{2n}\pi_\ast(\check{C}_{\tilde{M}} \cup \check{C}_{\tilde{M}})$. Depending on whether $n$ is even or odd, $\check{C}_{M,n}$ is shifted by $w_4(TM \oplus \mathscr{N})$ or by the degree 4 Wu class of $M$. The differential cocycle
\be
\label{EqDefEffC1}
\check{C}_M := \check{b}_M + \check{A}_M
\ee
is shifted by the Wu class of $M$ and will play an important role in what follows. Remark that $\check{C}_M$ depends on a choice of parameterization \eqref{EqParCFieldTMStaM5}.
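For completeness, the alternative expression $\check{C}_{M,n} = \frac{1}{2n}\pi_\ast(\check{C}_{\tilde{M}} \cup \check{C}_{\tilde{M}})$ quoted above follows from a short computation, at the level of classes, using $\pi_\ast(\check{a}_{\tilde{M}}) = 1$ and the projection formula for $\pi_\ast$:
\be
\frac{1}{2n}\pi_\ast\left((n\check{a}_{\tilde{M}} + \pi^\ast(\check{A}_M)) \cup (n\check{a}_{\tilde{M}} + \pi^\ast(\check{A}_M))\right) = \frac{n}{2}\pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}}) + \check{A}_M = n\check{b}_M + \check{A}_M \;,
\ee
where the term quadratic in $\pi^\ast(\check{A}_M)$ is annihilated by $\pi_\ast$.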
Simplifications occur when $w_4(\mathscr{N}) = 0$. Indeed, consider the vertical tangent bundle $T_V \tilde{M}$. Remark that its Euler class $e(T_V \tilde{M})$ integrates to 2 over the 4-sphere fibers of $\tilde{M}$, because the Euler number of a 4-sphere is 2. Modulo 2, we have
\be
e(T_V \tilde{M}) = w_4(T_V \tilde{M}) = \pi^\ast(w_4(\mathscr{N})) \;.
\ee
Therefore, if $w_4(\mathscr{N}) = 0$, $e(T_V \tilde{M})$ can be divided by 2. It is shown in \cite{Bott1998} that $\pi_\ast\left( \frac{1}{2} e(T_V \tilde{M}) \cup \frac{1}{2}e(T_V \tilde{M})\right)$ is at most torsion. The above holds for the differential refinement $\check{e}(T_V \tilde{M})$ obtained from the metric on $\tilde{M}$. We can therefore take $\check{a}_{\tilde{M}} = \frac{1}{2} \check{e}(T_V \tilde{M}) + \pi^\ast(\check{t})$, for some differential cocycle $\check{t}$ representing a torsion differential cohomology class. We then have
\be
\check{b}_M = \pi_\ast\left( \frac{1}{2} \check{e}(T_V \tilde{M}) \cup \frac{1}{2}\check{e}(T_V \tilde{M})\right) + \check{t}
\ee
and we can pick $\check{t}$ so that $\check{b}_M = 0$. Equations \eqref{EqDefEffCn} and \eqref{EqDefEffC1} then simplify.
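Explicitly, with this preferred choice of $\check{a}_{\tilde{M}}$ we have $\check{b}_M = 0$, and therefore
\be
\check{C}_{M,n} = \check{C}_M = \check{A}_M \;, \quad G_{M,n} = F_M \;.
\ee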
\subsection{Extension to manifolds of dimension 7 and 8}
\label{SecExt7-8-manifolds}
As reviewed in Section \ref{SecRemAnom}, the computation of the anomaly of a quantum field theory in dimension $d$ involves manifolds of dimension $d+1$ and $d+2$. Taking $X$ to be a 7- or 8-dimensional manifold, we endow it with a rank 5 bundle $\mathscr{N}_X$ satisfying \eqref{EqRelSWClassesTMN}. (From now on we will analogously write $\mathscr{N}_M$ for the normal bundle over $M$.) We then have a 4-sphere bundle $\tilde{X}$ over $X$ whose stable characteristic classes satisfy \eqref{EqRelCharClassMtMN}. As before, we write $\pi$ for the bundle map $\tilde{X} \rightarrow X$.
$\check{C}_{\tilde{X}}$ is a differential cocycle on $\tilde{X}$ shifted by $\pi^\ast(w_4(TX \oplus \mathscr{N}_X))$. The constructions of Sections \ref{SecEffC-field} and \ref{SecStackM5} can be repeated on $X$, yielding differential cocycles $\check{a}_{\tilde{X}}$, $\check{A}_X$, $\check{b}_X$.
In the following, we will follow the notation in Section \ref{SecRemAnom} and write $U$ and $W$ for 7- and 8-dimensional manifolds, respectively. As we will argue below, the decoupling of the center of mass degrees of freedom on a stack of M5-branes requires a choice of global angular differential cocycle $\check{a}_{\tilde{M}}$, as introduced in \eqref{EqParCFieldTMStaM5}. It is therefore natural to consider the following. 6-dimensional closed smooth oriented Riemannian manifolds $M$, together with data $\mathfrak{d}_M = (\mathscr{N}_M, \check{C}_{\tilde{M}}, \check{a}_{\tilde{M}})$, can be seen as the objects of a cobordism category $\mathfrak{C}$, whose bordisms are oriented smooth Riemannian manifolds $U$ with boundary, together with data $\mathfrak{d}_U = (\mathscr{N}_U, \check{C}_{\tilde{U}}, \check{a}_{\tilde{U}})$. Of course, we require that if $(U,\mathfrak{d}_U)$ is a cobordism with boundary $(M,\mathfrak{d}_M)$, then $\mathfrak{d}_U|_{M} = \mathfrak{d}_M$. We also require that the Riemannian metric of $U$ is isometric to a direct product in a neighborhood of $\partial U$. Similarly, we will consider 8-dimensional cobordisms $(W,\mathfrak{d}_W)$ bounded by 7-dimensional closed manifolds $(U,\mathfrak{d}_U)$.
\subsection{Anomaly cancellation for non-intersecting M5-branes}
\label{SecAnCancDisM5}
M-theory on $Y$ contains a Chern-Simons term reading
\be
\label{EqM-thCSTerm}
{\rm CS}_{11} = 2\pi i \int_Y \left(\frac{1}{6} C \wedge G \wedge G - C \wedge I_8 \right) \;,
\ee
when the C-field is topologically trivial and can be represented by a 3-form $C$ with field strength $G$. The index density $I_8$ is defined in terms of the Pontryagin classes of $TY$ by
\be
I_8 = \frac{1}{48} \left(p_2(TY) + \left(\frac{p_1(TY)}{2}\right)^2\right) \;.
\ee
A more general formulation in terms of eta invariants can be found in \cite{Diaconescu:2003bm}. Alternatively, we can express it in differential cohomology. The integral Pontryagin cohomology class and the metric on $Y$ determine a differential cohomology class admitting $I_8$ as its field strength, which can be lifted to a differential cocycle $\check{I}_8$. In terms of the shifted differential cocycle $\check{C}$ describing the C-field, the Chern-Simons term \eqref{EqM-thCSTerm} can be written
\be
\label{EqM-thCSTermDiffC}
{\rm CS}_{11} = 2\pi i \int_Y \left(\frac{1}{6} \check{C} \cup \check{C} \cup \check{C} - \check{C} \cup \check{I}_8 \right) \;,
\ee
where $\cup$ and $\int$ are the cup product and integral in differential cohomology \cite{hopkins-2005-70}. The integral of a differential cocycle of degree 12 on an 11-dimensional manifold gives an element of $\mathbb{R}/\mathbb{Z}$, reproducing the fact that the Chern-Simons term is defined only modulo $2\pi i$.
The Chern-Simons term \eqref{EqM-thCSTermDiffC} is a geometric invariant in the sense discussed in Section \ref{SecRemAnom}. In particular, it defines an anomaly line bundle over the base of families of 10-dimensional manifolds. When evaluated on an 11-dimensional manifold with boundary, it provides a section of this line bundle. As a result, when $Y$ has a boundary, \eqref{EqM-thCSTermDiffC} is not invariant under diffeomorphisms and gauge transformations of the C-field. There is both a gravitational and a gauge anomaly, which are canceled by the fields living on the boundaries of M-theory spacetimes \cite{Horava:1996ma}.
When the spacetime $Y$ has no boundaries but contains M5-branes wrapped on $M$, one is also naturally led to consider \eqref{EqM-thCSTermDiffC} on a manifold with boundary. As was already mentioned above, in this case the C-field, and therefore \eqref{EqM-thCSTermDiffC}, is defined only on $Y \backslash M$. Cutting out a small neighborhood $N$ of $M$, ${\rm CS}_{11}$ needs to be evaluated on the manifold $Y \backslash N$, which has boundary $\tilde{M}$. This shows that the bulk action of M-theory has both gauge and gravitational anomalies in the presence of M5-branes. Those anomalies cancel against the anomalies present on the worldvolume of the M5-branes. This was discussed in \cite{Duff:1995wd,Witten:1996hc} and shown in \cite{Freed:1998tg, Lechner:2001sj} for local anomalies. Global anomalies were shown to cancel in \cite{Monnierb}.
For our purpose, this implies that in order to compute the anomaly associated to a system of (non-intersecting) M5-branes in some region of space, it is sufficient to evaluate the Chern-Simons term \eqref{EqM-thCSTermDiffC} on the boundary of a region containing them.
\section{Global anomalies of A-type (2,0) theories}
\label{SecGlobAnAn}
We compute in this section the global anomaly of the (2,0) theories in the A-series. In Section \ref{SecIdComp}, we introduce the scaling limit in which we obtain the (2,0) theory from a system of M5-branes. The computation of the anomaly of the stack of M5-branes is performed in Section \ref{SecEvCSTerm}. We then determine the anomaly of the center of mass tensor multiplet in Section \ref{SecGlobAnCM}, and deduce from it the global anomaly of the (2,0) theory in Section \ref{SecGlobAn20ThAn}. In Section \ref{SecConsCheck}, we check that the anomaly formula determines a well-defined geometric invariant of 7-manifolds. Section \ref{SecConfBlocks} presents the relation of the anomaly line bundle to the conformal blocks of the (2,0) theory and we discuss in Section \ref{SecOrigHWZTerms} a conceptual picture for the origin of the Hopf-Wess-Zumino terms present on the Coulomb branch of the (2,0) theory.
\subsection{Idea of the computation}
\label{SecIdComp}
We pick a compact smooth oriented 6-dimensional manifold $M$ and a rank 5 vector bundle $\mathscr{N}_M$ on $M$ whose Stiefel-Whitney classes satisfy \eqref{EqRelSWClassesTMN}. The total space of $\mathscr{N}_M$ is an oriented spin manifold, which we will see as an M-theory spacetime. We assume that $M$ carries a Riemannian metric and that $\mathscr{N}_M$ carries a connection. We endow $\mathscr{N}_M$ with a compatible metric. Inside $\mathscr{N}_M$, points at a fixed distance $R$ from the origin form a 4-sphere bundle $\tilde{M}$ over $M$.
We pick $n$ non-intersecting sections of $\mathscr{N}_M$ on which we wrap $n$ M5-branes. We assume that the largest distance between an M5-brane and the origin is $r$.
As this system is formulated on a non-compact manifold, it displays a global anomaly under diffeomorphisms and gauge transformations that are not compactly supported. As explained in Section \ref{SecRemAnom}, the anomaly can be computed from a closed 7-manifold $U$. $U$ can be a mapping torus of $M$, if we are interested in computing the holonomy of the anomaly connection, or a twisted double, if we are interested in computing the anomalous phase of the partition function under a particular transformation. In any case, $U$ comes with the data $\mathfrak{d}_U = (\mathscr{N}_U, \check{C}_{\tilde{U}}, \check{a}_{\tilde{U}})$ extending the corresponding data on $M$ as described in Section \ref{SecExt7-8-manifolds}. We know from \cite{Monnierb} that the global anomaly vanishes in the bulk, so it can be computed by evaluating the M-theory Chern-Simons term on the asymptotic boundary of $\mathscr{N}_U$, which is diffeomorphic to $\tilde{U}$, the 4-sphere bundle over $U$ associated to $\mathscr{N}_U$.
We now take a decoupling limit in which we rescale both the Planck length $l_P$ and the fibers of $\mathscr{N}_M$, in a way such that $r/l_P^3$ stays constant \cite{Maldacena:1997re}. This limit is such that the M2-branes that might stretch between the M5-branes have constant energy. It ensures that the energy scale at which the gauge symmetry of the (2,0) theory is broken is constant. In the limit, we obtain effectively a free tensor supermultiplet describing the center of mass of the system, together with a (2,0) superconformal field theory of type $A_{n-1}$ at a generic point on its Coulomb branch. These theories are living on $M$, seen as the zero section of $\mathscr{N}_M$.
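As a reminder of the scales involved (up to conventional numerical factors, which play no role here): an M2-brane has tension of order $l_P^{-3}$, so an M2-brane stretched between two M5-branes separated by a distance of order $r$ behaves as an effective string of tension
\be
T_{\rm eff} \sim T_{M2} \, r \sim \frac{r}{l_P^3} \;,
\ee
which is precisely the combination held fixed in the limit.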
The global anomaly of the system does not change when we take the limit. As a consequence, we see that we can compute the global anomaly of the (2,0) superconformal field theory of type $A_{n-1}$ (together with the anomaly due to the center of mass) by evaluating the M-theory Chern-Simons term on $\tilde{U}$. Moreover, the anomaly has to be constant across the Coulomb branch. The computation to be performed below, a priori valid only at a generic point of the Coulomb branch, is therefore valid everywhere on the Coulomb branch.
\subsection{Evaluation of the Chern-Simons term}
\label{SecEvCSTerm}
After the limit described above has been taken, both the C-field and the metric on $\mathscr{N}_U$ are spherically symmetric. Moreover, the M-theory spacetime is empty away from the zero section. This implies that the Chern-Simons term can be evaluated on any round sphere bundle $\tilde{U} \subset \mathscr{N}_U$ centered around the origin. Taking $\tilde{U}$ to be a 4-sphere bundle with a finite radius avoids the slight complications coming from the fact that the metric blows up and the C-field field strength tends to zero as one approaches the asymptotic boundary of $\mathscr{N}_U$. Let us note that if $U$ is a mapping torus, adiabatic limits have to be taken in the formulas below. In the adiabatic limit, the metric along the base circle $c$ of $U$ blows up. To simplify the notation, we will suppress the adiabatic limits from the notation. No adiabatic limit is necessary in the case of most interest to us, when $U$ is a twisted double.
We assume that $(U,\mathfrak{d}_U)$ is the boundary of $(W,\mathfrak{d}_W)$ (see Section \ref{SecExt7-8-manifolds}). The cobordism group computing the obstruction to the existence of $(W,\mathfrak{d}_W)$ has been described in Appendix C of \cite{Monnierb}. It is not known explicitly, but is at most torsion. To compute the anomaly of a stack of $n$ M5-branes, we need to evaluate
\begin{align}
\label{EqEvCSTerm20}
{\rm An}_{nM5}(U) = & \, - \int_{\tilde{U}} \left(\frac{1}{6} \check{C}_{\tilde{U}} \cup \check{C}_{\tilde{U}} \cup \check{C}_{\tilde{U}} - \check{C}_{\tilde{U}} \cup \check{I}_8\right) \\
= & \, - \int_{\tilde{W}} \left(\frac{1}{6} G_{\tilde{W}} \wedge G_{\tilde{W}} \wedge G_{\tilde{W}} - G_{\tilde{W}} \wedge I_8\right) \notag \;,
\end{align}
where in the second line we expressed the Chern-Simons term on $\tilde{U}$ as the integral of the associated characteristic form on $\tilde{W}$. As explained in Section \ref{SecStackM5}, the C-field and its field strength on $\tilde{W}$ can be written
\be
\label{EqDecompGtW}
\check{C}_{\tilde{W}} = n \check{a}_{\tilde{W}} + \pi^\ast \check{A}_W \;, \quad G_{\tilde{W}} = n f_{\tilde{W}} + \pi^\ast F_W \;,
\ee
where $G_{\tilde{W}}$, $f_{\tilde{W}}$ and $F_W$ are the field strengths of $\check{C}_{\tilde{W}}$, $\check{a}_{\tilde{W}}$ and $\check{A}_W$, respectively.
$f_{\tilde{W}}$ integrates to 1 on the 4-sphere fibers of $\tilde{W}$. The term $n f_{\tilde{W}}$ in the field strength of the C-field comes from the fact that we have $n$ M5-branes at the origin sourcing the C-field. \eqref{EqDecompGtW} can be reparameterized as follows:
\be
\label{EqReparC-fieldtW}
\check{a}_{\tilde{W}} \rightarrow \check{a}_{\tilde{W}} + \pi^\ast(\check{B}_W) \;, \quad \check{A}_W \rightarrow \check{A}_W - n \check{B}_W \;,
\ee
for any degree 4 unshifted differential cocycle $\check{B}_W$. The minus sign in \eqref{EqEvCSTerm20} comes from the fact that the orientation of the boundary $\tilde{U}$ is reversed compared to \cite{Monnierb}. Equivalently \eqref{EqEvCSTerm20} yields directly the anomaly of the stack of M5-branes, as opposed to the anomaly inflow required to cancel it.
We now want to express \eqref{EqEvCSTerm20} as an integral on $W$. We can proceed as in Section 3.3 of \cite{Monnierb}. First, we see the integral on $\tilde{W}$ as the composition of a pushforward $\pi_\ast$ along the 4-sphere fibers with integration on $W$. The pushforward satisfies the relations
\be
\label{EqRelPushForw}
\pi_\ast( \pi^\ast(x)) = 0 \;, \quad \pi_\ast(y \wedge \pi^\ast(x)) = \pi_\ast(y) \wedge x \;, \quad \pi_\ast(f_{\tilde{W}}) = 1 \;,
\ee
valid for differential forms $x \in \Omega^\bullet(W)$ and $y \in \Omega^\bullet(\tilde{W})$. The right-hand side of \eqref{EqEvCSTerm20} reads
\be
\label{EqIntFibSt1}
-\int_{W} \pi_\ast \left(\frac{1}{6} (nf_{\tilde{W}} + \pi^\ast F_W)^3 - (nf_{\tilde{W}} + \pi^\ast F_W) \wedge I_8\right) \;.
\ee
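The fiber integration is a routine application of \eqref{EqRelPushForw}; we record the two expansions used, spelled out for convenience (with $I_8$ understood as the pull-back from $W$ discussed just below):
\begin{align}
\pi_\ast\left((nf_{\tilde{W}} + \pi^\ast F_W)^3\right) &= n^3 \pi_\ast(f_{\tilde{W}}^3) + 3n^2\, \pi_\ast(f_{\tilde{W}}^2) \wedge F_W + 3n\, F_W^2 \;, \notag \\
\pi_\ast\left((nf_{\tilde{W}} + \pi^\ast F_W) \wedge \pi^\ast(I_8)\right) &= n I_8 \;, \notag
\end{align}
the terms involving only pull-backs from $W$ being annihilated by $\pi_\ast$.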
Note that in \eqref{EqIntFibSt1}, the Pontryagin forms in $I_8$ are those of $T\tilde{W}$, and \eqref{EqRelCharClassMtMN} shows that they are the pull-back to $\tilde{W}$ of the Pontryagin forms of $TW \oplus \mathscr{N}_W$ on $W$. Using the latter fact and \eqref{EqRelPushForw}, we get
\be
\label{EqIntFibSt2}
-\int_{W} \left(\frac{n^3}{6} \pi_\ast(f_{\tilde{W}}^3) + \frac{n^2}{2} \pi_\ast(f_{\tilde{W}}^2) \wedge F_W + \frac{n}{2}F_W^2 - nI_8 \right) \;,
\ee
where now $I_8$ is constructed out of the Pontryagin forms of $TW \oplus \mathscr{N}_W$. Next, we use the notation introduced in Section \ref{SecStackM5} to rewrite \eqref{EqIntFibSt2}:
\be
\label{EqIntFibSt3}
-\int_{W} \left(n^3 \left(\frac{1}{6} \pi_\ast(f_{\tilde{W}}^3) - \frac{1}{8} \pi_\ast(f_{\tilde{W}}^2)^2\right) + \frac{n}{2}G_{W,n}^2 - nI_8 \right) \;.
\ee
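To verify the equivalence of \eqref{EqIntFibSt2} and \eqref{EqIntFibSt3}, recall that $h_W = \frac{1}{2}\pi_\ast(f_{\tilde{W}} \wedge f_{\tilde{W}})$ is the field strength of the extension of \eqref{EqDefb} to $W$ and that $G_{W,n} = nh_W + F_W$, so that
\be
\frac{n}{2}G_{W,n}^2 = \frac{n^3}{8}\,\pi_\ast(f_{\tilde{W}}^2)^2 + \frac{n^2}{2}\,\pi_\ast(f_{\tilde{W}}^2) \wedge F_W + \frac{n}{2}F_W^2 \;;
\ee
substituting this into \eqref{EqIntFibSt3} and expanding indeed reproduces \eqref{EqIntFibSt2}.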
The coefficient of $n^3$ is $\frac{1}{24}p_2(\mathscr{N}_W)$, as explained in Section 3.3 of \cite{Monnierb}. We further define the index density
\be
J_8 := I_8 - \frac{1}{24} p_2(\mathscr{N}_W)\;,
\ee
computing the local anomaly of a free tensor multiplet, and we obtain
\be
\label{EqAnnM53}
{\rm An}_{nM5}(U) = \int_W \left( nJ_8 - \frac{n^3-n}{24} p_2(\mathscr{N}_W) - \frac{n}{2}G_{W,n}^2 \right) \;.
\ee
Remark that $G_{W,n}$ is invariant under the reparameterization \eqref{EqReparC-fieldtW}, so \eqref{EqAnnM53} is manifestly invariant as well.
\subsection{The global anomaly of the center of mass}
\label{SecGlobAnCM}
\eqref{EqAnnM53} describes the global anomaly of the stack of $n$ M5-branes, corresponding to the (2,0) theory of type $A_{n-1}$ together with a free tensor supermultiplet of charge $n$, describing the center of mass of the system, as well as the degrees of freedom related to it by supersymmetry. In order to isolate the contribution from the (2,0) theory, we need to compute the global anomaly due to the free tensor multiplet.
To derive it, we temporarily ignore the fermions in the tensor multiplet, which do not have a gauge anomaly. The global anomaly of a self-dual field of charge 1 is given by \cite{hopkins-2005-70, Monniera}
\be
\label{EqAnnSD1}
{\rm An}_{SD,1}(U) = \int_W \left( \frac{1}{8}L(TW) - \frac{1}{2} G^2_W \right) \;,
\ee
where $L(TW)$ is the Hirzebruch genus of $TW$. $G_W$ is the field strength of a degree 4 differential cocycle $\check{C}_W$, modeling a 3-form gauge field coupling to the self-dual field. For the anomaly \eqref{EqAnnSD1} to be well-defined, in the sense discussed in Section \ref{SecIncons20TheoryAn}, it is crucial that $\check{C}_W$ is a differential cocycle shifted by the Wu class, as explained in Appendix \ref{SecProofInt}. Our aim is to separate the gravitational anomaly from the gauge anomaly in this expression. This is not a trivial problem, because although the first term in \eqref{EqAnnSD1} seems to capture the gravitational anomaly and the second one the gauge anomaly, they are not separately well-defined. For instance, the first term is obviously not an integer when evaluated on a closed manifold whose signature is not a multiple of 8.
This problem can be cured by rewriting \eqref{EqAnnSD1} as
\be
\label{EqAnnSD12}
{\rm An}_{SD,1}(U) = \frac{1}{8} \left(\int_W L(TW) - \sigma_W\right) - \left(\int_W \frac{1}{2}G^2_W - \frac{1}{8}\sigma_W \right) \;,
\ee
where $\sigma_W$ denotes the signature of the (non-degenerate) intersection form on the image of $H^4(W,\partial W; \mathbb{R})$ in $H^4(W; \mathbb{R})$. The point of this rewriting is that each of the two terms yields an integer when evaluated on a closed manifold $W$, as explained in Appendix \ref{SecRelLiftWuClass}. Novikov's additivity theorem for the signature also ensures that the corresponding geometric invariants satisfy \eqref{EqFunctRelGeomInv}. Also, the dependence on the metric and on the C-field of the two terms remains unchanged compared to \eqref{EqAnnSD1}. We can therefore interpret the first term as the gravitational anomaly of the self-dual field, and the second one as the gauge anomaly, consistently with the detailed analysis of \cite{Monnier2011a, Monniera}. Both of these anomalies are well-defined in the sense of Section \ref{SecIncons20TheoryAn}.
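As an elementary check, recombining the two terms of \eqref{EqAnnSD12} makes the $\sigma_W$ contributions cancel and recovers \eqref{EqAnnSD1}:
\be
\frac{1}{8}\left(\int_W L(TW) - \sigma_W\right) - \left(\int_W \frac{1}{2}G^2_W - \frac{1}{8}\sigma_W\right) = \int_W \left(\frac{1}{8}L(TW) - \frac{1}{2}G^2_W\right) \;.
\ee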
The gravitational anomaly of a self-dual field of charge $n$ is the same as the one of a self-dual field of charge $1$, while its gauge anomaly is $n$ times larger. (More precisely, its gauge anomaly line bundle is the $n$th tensor power of the gauge anomaly line bundle of a self-dual field of charge 1. This implies that the holonomies and transition functions are taken to the $n$th power.) These facts determine the global anomaly of a self-dual field of charge $n$ to be
\be
\label{EqAnnTMn1}
{\rm An}_{SD,n}(U) = \frac{1}{8} \left(\int_W L(TW) - \sigma_W\right) - n \left(\int_W \frac{1}{2}G^2_W - \frac{1}{8}\sigma_W \right) \;.
\ee
Reinstating the fermions of the multiplet, whose contribution combines with the first term above into $\int_W J_8$ through the splitting $J_8 = \frac{1}{8}L(TW) + \frac{1}{2}I_f$ of Section \ref{SecIncons20TheoryAn}, we deduce that the global anomaly of a tensor multiplet of charge $n$ is given by
\be
\label{EqAnnTMn2}
{\rm An}_{TM,n}(U) = \int_W \left(J_8 - \frac{n}{2}G^2_W \right) + \frac{n-1}{8}\,\sigma_W \;.
\ee
\subsection{The global anomaly of the (2,0) theory}
\label{SecGlobAn20ThAn}
In \eqref{EqAnnTMn2}, $G_W$ is the field strength of a differential cocycle $\check{C}_W$ shifted by the Wu class. What is the differential cocycle that should be identified with $\check{C}_W$ when the tensor multiplet is the center of mass of a stack of M5-branes? It would be natural to set $\check{C}_W = \check{C}_{W,n}$, but this would be inconsistent, as $\check{C}_{W,n}$ is not shifted by the Wu class in general. The only $n$-independent cocycle with the correct shift in the problem is $\check{C}_W = \check{b}_W + \check{A}_W$. The fact that this cocycle is shifted by the Wu class was shown in Appendix B of \cite{Monnierb}, using crucial results of \cite{Witten:1999vg}.
It was also argued in \cite{Monnierb} that $\check{C}_M = \check{b}_M + \check{A}_M$ is the effective C-field coupling to the self-dual field on the worldvolume of a single M5-brane. It seems natural that the effective C-field coupling to the center-of-mass tensor multiplet should be given by the same expression.
Subtracting the contribution to the anomaly of the free center-of-mass tensor supermultiplet, we obtain a formula for the global anomaly of the (2,0) theory of type $A_{n-1}$:
\begin{align}
\label{EqAn20An}
& {\rm An}_{A_{n-1}}(U) = {\rm An}_{nM5}(U) - {\rm An}_{TM,n}(U) \\
& = \int_W \left( (n-1) J_8 - \frac{n^3-n}{24} p_2(\mathscr{N}_W) - \frac{n(n-1)}{2}h_W \wedge \left(2G_W + (n-1)h_W\right) \right) - \frac{n-1}{8}\sigma_W \notag \;,
\end{align}
where $G_W$ is the field strength of $\check{C}_W$.
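The last term in \eqref{EqAn20An} comes from the difference of the gauge contributions to \eqref{EqAnnM53} and \eqref{EqAnnTMn2}; explicitly, using $G_{W,n} = nh_W + F_W$ and $G_W = h_W + F_W$,
\be
\frac{n}{2}\left(G_{W,n}^2 - G_W^2\right) = \frac{n(n-1)}{2}\, h_W \wedge \left(2G_W + (n-1)h_W\right) \;.
\ee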
As was discussed in Section \ref{SecStackM5}, if $w_4(\mathscr{N}_M) = 0$, there is a preferred choice for the global angular cocycle $\check{a}_{\tilde{M}}$, which results in $\check{b}_M = 0$. If the extensions of the normal bundle are such that $w_4(\mathscr{N}_U) = w_4(\mathscr{N}_W) = 0$, then we can extend the global angular cocycle to $\tilde{U}$ and $\tilde{W}$ in such a way that $\check{b}_U = \check{b}_W = 0$. In particular, $h_W = 0$ and the last term vanishes. This is for instance the case when $\mathscr{N}_M$ is trivial. However, the cases where $\mathscr{N}_M$ is non-trivial are very important, as they correspond to twistings of the (2,0) theory. Then, even if $w_4(\mathscr{N}_M) = 0$, there is in general no reason that would force $w_4(\mathscr{N}_U) = w_4(\mathscr{N}_W) = 0$ for all the twisted doubles $U$. In fact, we will see that the last term is crucial for the consistency of \eqref{EqAn20An}.
It is interesting to note that there remains a dependence on the background C-field, through the extension $G_W$ of its field strength to $W$. There is as well a dependence on $h_W$, the field strength of the cocycle \eqref{EqDefb}, and therefore a dependence on the choice of parameterization \eqref{EqDecompGtW}. These somewhat puzzling features can all be traced back to the decoupling of the center of mass degrees of freedom. This operation requires picking a differential cocycle of degree 4 shifted by the Wu class, which is the effective C-field on the worldvolume coupling to the center-of-mass tensor multiplet. There is no way to do this canonically and the choice we made, $\check{C}_M$, extended to $W$ as $\check{C}_W$, depends on \eqref{EqDecompGtW}. In contrast, the anomaly formula \eqref{EqAnnM53} for a stack of M5-branes, including the center of mass, is independent of \eqref{EqDecompGtW}.
A consequence of this fact is that the decomposition \eqref{EqDecompGtW} cannot be chosen freely on $W$. The definition of the (2,0) theory on $M$ should include a choice of global angular differential cocycle $\check{a}_{\tilde{M}}$ on $\tilde{M}$, which should then be extended to $\tilde{U}$ and $\tilde{W}$, as was already suggested in our discussion of Section \ref{SecExt7-8-manifolds}. A choice of $\check{a}_{\tilde{M}}$ is effectively a choice of a vertical cotangent bundle on $\tilde{M}$. It is therefore not so surprising that when the normal bundle $\mathscr{N}_M$ is topologically non-trivial, such a choice has to be made in order to decouple the center of mass, and that this choice cannot be made canonically.
As we are only interested in the (2,0) theory, we should set the C-field on $M$ to a preferred value, for instance zero. Because of an analog of the Freed-Witten anomaly for self-dual fields, first described in \cite{Witten:1999vg}, this might not be consistent. We should rather set $\check{C}_M = \check{S}_M$, where $\check{S}_M$ is a certain 2-torsion differential cocycle determined by the anomaly cancellation condition (see Section 3.6 of \cite{Monniera}). Together with a choice $\check{a}_{\tilde{M}}$ of a global angular cocycle on $\tilde{M}$, this fixes the value of the M-theory C-field on $\tilde{M}$.
We can recover the local anomaly from \eqref{EqAn20An} by taking $U$ to be a mapping torus over a small homotopically trivial loop $c$ in the space of background fields. The holonomy of the anomaly connection along $c$ is then proportional to the value of its curvature inside the loop. In this case, we can take $W = M \times D^2$, $\tilde{W} = \tilde{M} \times D^2$, where $D^2$ is a 2-dimensional disk. As only the metric changes along $D^2$, only the metric-dependent terms can have a non-zero integral. But the only metric-dependent terms are the first two in \eqref{EqAn20An}. A comparison with \cite{Harvey:1998bx} (see also \eqref{EqLocAn20Theory}) shows that these two terms reproduce the index density governing the local anomaly derived in that paper. Let us also remark that in \cite{Harvey:1998bx}, it was assumed that the local gravitational anomaly cancellation, proven for a single M5-brane, holds as well for a stack of M5-branes. Our derivation requires no such assumption. We rather relied on the cancellation of global anomalies for non-intersecting M5-branes, proven in \cite{Monnierb}, to deduce the anomaly at a generic point on the Coulomb branch.
We will also see in the next section that the last two terms in \eqref{EqAn20An}, while having no effect on the local anomaly, are crucial for the anomaly to be consistent globally.
\subsection{A consistency check}
\label{SecConsCheck}
In this section we check that when \eqref{EqAn20An} is evaluated on a closed manifold $W$, it yields an integer. This ensures that the anomaly is well-defined, in the sense discussed in Section \ref{SecIncons20TheoryAn}. Strictly speaking, this check is not necessary. We obtained \eqref{EqAn20An} as the difference of two terms describing well-defined anomalies. One is the reduction of the characteristic form associated to the M-theory Chern-Simons term, which takes integral values on closed manifolds as shown in \cite{Witten:1996md}. The other is the global anomaly of the center of mass, which is shown in Appendix \ref{SecRelLiftWuClass} to take integral values on closed manifolds as well. Nevertheless, this is a good check on our computations and it involves some interesting algebraic topology.
In the rest of this section, $W$ is a closed oriented 8-manifold. Let us first remark that the analysis of the cancellation of local anomalies for five-branes \cite{Witten:1996hc, Freed:1998tg} shows that $J_8 = \frac{1}{8}L(TW) - \frac{1}{2}I_f$, where $I_f$ is the index density of the chiral fermions on the worldvolume of a single M5-brane. As the Dirac operator associated to $I_f$ is quaternionic on an 8-dimensional manifold (see Section 3.1 of \cite{Monnierb}), its index is even and $\int_W \frac{1}{2}I_f$ is an integer. The term involving the Hirzebruch genus integrates to the signature of $W$ and cancels with the third term in \eqref{EqAn20An}. Therefore, all that remains to be shown is that the second and fourth terms in \eqref{EqAn20An} add up to an integer.
To this end, it is useful to distinguish two cases, depending on whether $n$ is even or odd. For odd $n$, $n^3-n$ is a multiple of 24: indeed, $n^3 - n = (n-1)n(n+1)$, the two consecutive even factors $n \pm 1$ contribute a factor of 8, and one of the three consecutive integers contributes a factor of 3. The second term is therefore an integer. To see that the last term is an integer as well in this case, we let $n = 2k+1$ and write it as
\be
\label{Eq4thTermnOdd}
\int_W \frac{2k+1}{2} 2kh_W(2G_W + 2kh_W) \;.
\ee
But $2kh_W$ is a closed form with integral periods. $2G_W$ is a closed form with integral periods as well, but in addition it is a form lift of the Wu class (see Appendix \ref{SecRelLiftWuClass}). This implies that $2G_W$ is a characteristic element for the wedge product pairing on the space of closed forms on $W$ with integral periods, which implies that \eqref{Eq4thTermnOdd} is an integer.
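In more detail (an elaboration we spell out for the reader's convenience), the characteristic element property means that $\int_W x \, 2G_W = \int_W x^2$ mod 2 for every closed form $x$ with integral periods. Applied to $x = 2kh_W$, it yields
\be
\int_W 2kh_W(2G_W + 2kh_W) = \int_W 2kh_W \, 2G_W + \int_W (2kh_W)^2 = 2 \int_W (2kh_W)^2 = 0 \quad {\rm mod} \; 2 \;,
\ee
so the half-integral prefactor $\frac{2k+1}{2}$ in \eqref{Eq4thTermnOdd} multiplies an even integer.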
In case $n$ is even, we need more sophisticated tools. Again, a straightforward inspection shows that for $n = 2k$ even,
\be
4 \frac{n^3-n}{24} = k \quad {\rm mod} \; 4 \;.
\ee
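This congruence can also be checked directly rather than by inspection (a small addition for completeness): setting $n = 2k$,
\be
4 \frac{n^3-n}{24} - k = \frac{k(4k^2-1)}{3} - k = \frac{4}{3}(k-1)k(k+1) \;,
\ee
which is a multiple of 4, as the product of three consecutive integers $(k-1)k(k+1)$ is divisible by 3.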
On the other hand, the fourth term in \eqref{EqAn20An} reads
\begin{align}
- k(2k-1) & \, \int_W h_W(2G_W + (2k-1)h_W) \\
& \, = - \frac{(2k-1)}{2} \int_W 2kh_W (2G_W + 2kh_W) + k(2k-1) \int_W h_W^2 \notag\;.
\end{align}
For the same reason as above, the first term on the right-hand side is an integer, and as $h_W$ has half-integral periods, the second term belongs to $\frac{1}{4}\mathbb{Z}$. As $k(2k-1) = k$ mod $4$, all we need to show is that $\int_W 4h_W^2 = \int_W p_2(\mathscr{N}_W)$ mod $4$.
For this, we need to introduce a cohomological operation, the Pontryagin square $\mathfrak{P}$. $\mathfrak{P}$ maps $H^\bullet(W;\mathbb{Z}_2)$ into $H^\bullet(W;\mathbb{Z}_4)$. Denoting by $\rho_k$ the reduction modulo $k$, the Pontryagin square has the property that $\mathfrak{P} \rho_2(u) = \rho_4(u^2)$ for any $u \in H^\bullet(W;\mathbb{Z})$. The action of the Pontryagin square on Stiefel-Whitney classes has been computed by Wu \cite{Wu1959} and can be found for instance in \cite{Thomas1960}:
\be
\mathfrak{P}(w_{2i}) = \rho_4(p_i) + \theta_2 \left( w_1 Sq^{2i-1} w_{2i} + \sum_{j = 0}^{i-1} w_{2j} w_{4i-2j} \right) \;.
\ee
In this formula, $Sq^i$ are the Steenrod squares and $\theta_2$ is the embedding of $H^\bullet(W;\mathbb{Z}_2)$ into $H^\bullet(W;\mathbb{Z}_4)$ induced by the corresponding embedding of $\mathbb{Z}_2$ into $\mathbb{Z}_4$. Applying this formula to the bundle $\mathscr{N}_W$, we see that $\mathfrak{P}(w_4(\mathscr{N}_W)) = p_2(\mathscr{N}_W)$ mod $4$, as $w_i(\mathscr{N}_W) = 0$ for $i > 5$. But now we can use the fact that $2h_W$ is a form lift of $w_4(\mathscr{N}_W)$, i.e. the periods of $2h_W$ on 4-cycles on $W$ are even or odd depending on whether $w_4(\mathscr{N}_W)$ has period 0 or 1. Together with the property of $\mathfrak{P}$ mentioned above, this implies that
\be
\int_W 4h_W^2 = \int_W p_2(\mathscr{N}_W) \quad {\rm mod} \; 4 \;.
\ee
We have therefore shown that \eqref{EqAn20An} always takes integer values on closed manifolds $W$. The somewhat strange-looking fourth term is essential in order to cure the ambiguities of the second term.
\subsection{The conformal blocks}
\label{SecConfBlocks}
A potentially confusing point is the following. The geometric invariant ${\rm An}_{A_{n-1}}$ defines a line bundle $\mathscr{L}_{A_{n-1}}$ over the space of objects of the cobordism category $\mathfrak{C}$, that is over the space of 6-manifolds $M$ endowed with the data $\mathfrak{d}_M$. We expect the partition function of the (2,0) theory to be a section of this line bundle.
But it is known that the (2,0) theory does not admit a single partition function. Rather, it has a space of ``conformal blocks'' whose dimension is given by the order of Lagrangian subgroups of $H^3(M;\mathbb{Z}_n)$ with respect to the cup product pairing on $H^3(M;\mathbb{Z}_n)$ \cite{Witten:1998wy, Witten:2009at}.
These two statements can be reconciled as follows. The partition function $Z_{n{\rm M5}}$ of a stack of M5-branes is well-defined and unique. The conformal blocks arise after the decoupling of the center-of-mass tensor multiplet, because the self-dual field of charge $n$ that it contains does not have a single partition function, but rather a set of conformal blocks $Z_{{\rm CM},x}$ \cite{Witten:1998wy}. They form a representation of a central extension $G_H$ of $H^3(M;\mathbb{Z}_n)$ and can be parameterized by an index $x$ running over a Lagrangian subgroup of $H^3(M;\mathbb{Z}_n)$. As $Z_{n{\rm M5}}$ is invariant under $G_H$ and $Z_{{\rm CM},x}$ transforms in the irreducible unitary representation of $G_H$, it is natural to expect that the (2,0) theory has conformal blocks $Z_{A_{n-1},x}$ valued in the dual representation, and that one can write $Z_{n{\rm M5}} = \sum_x Z_{{\rm CM},x}Z_{A_{n-1},x}$. Similar statements in the case of $N=4$ super Yang-Mills were put forward in \cite{Belov:2004ht}. Now $Z_{{\rm CM},x}$ are all sections of the same line bundle. In order for the sum to make sense, the conformal blocks $Z_{A_{n-1},x}$ should all be sections of a unique line bundle; this is the line bundle $\mathscr{L}_{A_{n-1}}$.
The fact that $Z_{{\rm CM},x}$ are sections of the same line bundle for all $x$ also justifies our computation of the anomaly of the (2,0) theory in Section \ref{SecGlobAn20ThAn} by subtracting the anomaly of the center-of-mass tensor multiplet from the anomaly of the stack of M5-branes.
In more detail, recall that we can parameterize the M-theory C-field on $\tilde{M}$ as follows
\be
\label{EqParC-fieldOnM}
\check{C}_{\tilde{M}} = n\check{a}_{\tilde{M}} + \pi^\ast(\check{A}_M) \;.
\ee
Clearly, the differential cohomology class of $\check{C}_{\tilde{M}}$ is left invariant under shifts
\be
\label{EqReparC-fieldtM}
\check{a}_{\tilde{M}} \rightarrow \check{a}_{\tilde{M}} + \pi^\ast(\check{B}_M) \;,
\ee
where $\check{B}_M$ is a differential cocycle on $M$ representing an order $n$ differential cohomology class. (From now on, we will make a slight abuse of language and refer to $\check{B}_M$ as an ``order $n$ torsion differential cocycle'', even if $n\check{B}_M$ is zero only in cohomology.) The effective C-field to which the center-of-mass tensor multiplet couples is
\be
\check{C}_M = \frac{1}{2}\pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}}) + \check{A}_M \;,
\ee
transforming as:
\be
\label{EqTransCM}
\check{C}_M \rightarrow \check{C}_M + \check{B}_M \;.
\ee
So the differential cohomology class of $\check{C}_M$ is not invariant under such changes of parameterization. The transformation \eqref{EqReparC-fieldtM} acts on the conformal blocks of the center of mass, which are functions of $\check{C}_M$, but leaves $Z_{n{\rm M5}}$ invariant.
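This transformation can be checked directly at the level of differential cohomology classes. Using the projection formula and the fact that the global angular cocycle has unit fiber integral, $\pi_\ast(\check{a}_{\tilde{M}}) = 1$ (an input we make explicit here), the shift \eqref{EqReparC-fieldtM} gives
\be
\frac{1}{2}\pi_\ast\left((\check{a}_{\tilde{M}} + \pi^\ast(\check{B}_M)) \cup (\check{a}_{\tilde{M}} + \pi^\ast(\check{B}_M))\right) = \frac{1}{2}\pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}}) + \pi_\ast(\check{a}_{\tilde{M}}) \cup \check{B}_M = \frac{1}{2}\pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}}) + \check{B}_M \;,
\ee
the term quadratic in $\pi^\ast(\check{B}_M)$ dropping out because the pushforward of a pullback along the 4-sphere fibers vanishes for degree reasons. This reproduces \eqref{EqTransCM}.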
At least if there is no torsion in $H^3(M;\mathbb{Z})$, we can be more precise. In this case, a (linearly dependent) set of generators of the conformal blocks of the center of mass is provided by level $n$ Siegel theta functions over the torus $\mathcal{J}_n$ of flat (gauge equivalence classes of) C-fields \cite{Henningson:2010rc}. The latter is defined by $\mathcal{J}_n = H^3(M;\mathbb{R})/nH^3_{\mathbb{Z}}(M;\mathbb{R})$, where $H^3_{\mathbb{Z}}(M;\mathbb{R})$ denotes the de Rham cohomology classes having integral periods on $M$. \eqref{EqTransCM} is then simply an order $n$ rotation of $\mathcal{J}_n$. It is well-known that the theta functions of level $n$ are in bijection with order $n$ points of $\mathcal{J}_n$, and therefore \eqref{EqTransCM} simply permutes the elements in our set of conformal blocks. If torsion is present, the space of flat C-fields $\check{H}^4_{\rm flat}(M)$ fits in a short exact sequence
\be
0 \rightarrow \mathcal{J}_n \rightarrow \check{H}^4_{\rm flat}(M) \rightarrow H^4_{(n)}(M;\mathbb{Z}) \rightarrow 0 \;,
\ee
where $H^4_{(n)}(M;\mathbb{Z})$ is the subgroup generated by the elements of order $n$ in $H^4(M;\mathbb{Z})$. The order $n$ differential cocycle $\check{B}_M$ then acts on $\check{H}^4_{\rm flat}(M)$ by order $n$ rotations of the components $\mathcal{J}_n$ together with permutations of these.
In summary, the data $\mathfrak{d}$ defined in Section \ref{SecExt7-8-manifolds} is the data required to define the (2,0) theory \emph{and select a particular conformal block}. All the conformal blocks of the (2,0) theory are sections of the same line bundle over the moduli space of manifolds endowed with the data $\mathfrak{d}$. This line bundle is determined by ${\rm An}_{A_{n-1}}$ as explained in Section \ref{SecAnomCob}. The conformal blocks share the same anomaly and are permuted by the shifts \eqref{EqReparC-fieldtM} of $\check{a}_{\tilde{M}}$.
In contrast, the data required to define the $A_{n-1}$ (2,0) theory without a choice of conformal block is (keeping the notation of Section \ref{SecExt7-8-manifolds}) $\mathfrak{d}_M' = (\mathscr{N}_M, \check{C}_{\tilde{M}}, n\check{a}_{\tilde{M}})$, where now $\check{a}_{\tilde{M}}$ is determined up to a torsion element of order $n$. Over the moduli space of manifolds with data $\mathfrak{d}'$, the conformal blocks should rather be seen as sections of a vector bundle, whose rank is given by the order of Lagrangian subgroups of $H^3(M;\mathbb{Z}_n)$. Describing the anomaly precisely in this context requires promoting the geometric invariant ${\rm An}_{A_{n-1}}$ to an anomaly field theory \cite{Freed:2014iua}. The relevant anomaly field theory is a type of quantum Dijkgraaf-Witten theory, whose classical version is given by ${\rm An}_{A_{n-1}}$ and whose quantization is performed by summing over the torsion component of $\check{a}_{\tilde{M}}$. The details of this construction will appear in a future paper \cite{Monnierc}.
This generalization is important because there exist diffeomorphisms that fail to preserve the torsion component of $\check{a}_{\tilde{M}}$. Such diffeomorphisms permute the conformal blocks of the (2,0) theory and their action cannot be accounted for naturally using the formalism developed in the present paper.\footnote{We thank the referee for making this point.} Indeed, they were implicitly ruled out by the choice of the data $\mathfrak{d}$, which they fail to preserve.
Let us also remark that the picture developed in this section shows that all the subtleties of the (2,0) theory at a non-generic point on its Coulomb branch are captured by the partition function $Z_{n{\rm M5}}$ of the stack of M5-branes and are independent of the choice of conformal block.
\subsection{The origin of the Hopf-Wess-Zumino terms}
\label{SecOrigHWZTerms}
A naive computation of the local gravitational anomaly of the (2,0) $A_{n-1}$ theory by summing the anomalies of the $n$ tensor multiplets present at a generic point on the Coulomb branch fails to capture the whole anomaly of the theory. It was proposed in \cite{Intriligator:2000eq} that the effective theory on the Coulomb branch contains certain Wess-Zumino terms, dubbed ``Hopf-Wess-Zumino terms'', compensating for the difference between the naive computation and the correct anomaly found in \cite{Harvey:1998bx}. In our framework, those terms are responsible for the second and fourth terms of the anomaly \eqref{EqAn20An}, although only the second term was accounted for in \cite{Intriligator:2000eq}. We show here that these Wess-Zumino terms can be pictured very concretely as the topological modes of the M-theory C-field that get trapped between the M5-branes when the decoupling limit of Section \ref{SecIdComp} is taken. A somewhat similar idea was mentioned in \cite{Kalkkinen:2002tk}.
Recall our method to compute the anomaly inflow in Section \ref{SecIdComp}. We considered a set of $n$ non-intersecting M5-branes separated by a typical distance $r$. We picked a tubular neighborhood $N_0$ of $M$ including all the M5-branes, say of radius $R_0$. We then rescaled $r$ to zero while keeping $R_0$ fixed. Equivalently, we could have kept $r$ fixed and taken $R_0$ to infinity.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{Fig-2-0-theory.pdf}
\caption{\emph{A pictorial representation of the arguments in this section. The three pictures represent a fiber over a point of $M$. On the left, the setup used to compute the anomaly due to a set of non-intersecting M5-branes (black dots). Tubular neighborhoods (grayed out) are cut out and there is an anomaly inflow from the M-theory Chern-Simons term in the bulk (in white). This inflow cancels exactly the sum of the anomalies of the isolated M5-branes.
In the middle, the setup presented in Section \ref{SecIdComp} in order to compute the anomaly of a stack of M5-branes on its Coulomb branch. A single tubular neighborhood of $M$ is cut out and includes all the M5-branes. Again, there is an anomaly inflow due to the M-theory Chern-Simons term in the bulk.
On the right, the difference between the anomaly inflow contributions can be attributed to the M-theory Chern-Simons term integrated over the region $N$, represented in white.}}
\label{Fig1}
\end{figure}
An alternative way of computing the anomaly is the following. We take $n$ non-intersecting tubular neighborhoods $N_i$ of the worldvolumes $M_i$ of each M5-brane, of radii $R_i \ll r$. Let us write $\tilde{M}_i = \partial N_i$, a 4-sphere bundle over $M_i$. If this setup is extended to a 7-manifold $U$, we can compute the inflow due to the bulk on this system by evaluating the M-theory Chern-Simons term on $\bigcup_i \tilde{U}_i$ and taking a limit in which $R_i$ scale down to zero. It is clear that the anomaly obtained in this way is the sum of the anomalies due to each M5-brane. In other words, via this procedure, we obtain the naive anomaly mentioned at the beginning of this section.
But now the reason why the two procedures do not give the same answer is clear. In the first procedure, in addition to the M5-branes themselves, we also included a part of the bulk of M-theory, namely
\be
N := N_0 \setminus \bigcup_{i=1}^n N_i \;.
\ee
The M-theory Chern-Simons term on $N$ is anomalous, because $N$ has boundaries. In fact, when $M$ is promoted to a 7-manifold $U$, the anomaly due to the Chern-Simons term can be obtained by evaluating it on $\tilde{U} \cup \bigcup_i (\overline{\tilde{U}}_i)$. We see that the anomaly difference between a stack of M5-branes on its Coulomb branch and a set of non-intersecting M5-branes is entirely due to the M-theory Chern-Simons term on $N$. See Figure \ref{Fig1}.
$N$ is a fiber bundle over $M$. The fiber is a 5-ball of radius $R_0$ out of which $n$ 5-balls of radii $R_i$ have been carved out. Writing $\pi$ for the bundle map and $\check{cs}_{11}$ for the integrand of \eqref{EqM-thCSTermDiffC}, the Hopf-Wess-Zumino term is
\be
\check{\rm wz} = \pi_\ast({\rm \check{cs}}_{11}) \;,
\ee
i.e. the integral of the Chern-Simons integrand over the fibers of $N$, yielding a top-degree differential cocycle on $M$. By definition, we have
\be
\int_M \check{\rm wz} = \int_N {\rm \check{cs}}_{11} \;,
\ee
and $\check{\rm wz}$ is a local term on $M$ accounting for the anomaly difference.
Finally, we have to take the limit $R_0 \rightarrow \infty$, $R_i \rightarrow 0$. The advantage of this formulation is that it is completely general: no assumption is made on the topology of the system of M5-branes, except that they are not intersecting. Of course, in order to get an explicit expression for the Hopf-Wess-Zumino terms, the setup should be simple enough so that the integration over the fibers of $N$ can be performed explicitly.
\section{The global anomaly of a generic A-D-E (2,0) theory}
\label{SecGenGlobAnForm}
In the present section, we show that the anomaly we found for A-type theories can be naturally rewritten in terms of basic Lie algebra data. This result yields a conjectural formula for the global anomaly of a generic (2,0) theory, which is automatically compatible with the exceptional isomorphisms among the A-D-E Lie algebras. We also provide a consistency check by showing that the corresponding anomaly is well-defined in the sense of Section \ref{SecIncons20TheoryAn}.
\subsection{The anomaly formula}
\label{SecAnFormADE}
For a simply laced simple Lie algebra $\mathfrak{g}$, the general global anomaly formula reads
\be
\label{EqGenAnFormADE}
{\rm An}_{\mathfrak{g}}(U) = \int_W \left( r(\mathfrak{g}) J_8 - \frac{|\mathfrak{g}|{\rm h}_\mathfrak{g}}{24} p_2(\mathscr{N}_W) - \frac{r(\mathfrak{g})}{8}\sigma_W - r(\mathfrak{g}) {\rm h}_{\mathfrak{g}} h_W \left(G_W - h_W\right) - \frac{|\mathfrak{g}|{\rm h}_\mathfrak{g}}{2} h_W^2 \right) \;,
\ee
where $|\mathfrak{g}|$ is the dimension of $\mathfrak{g}$, $r(\mathfrak{g})$ its rank and ${\rm h}_\mathfrak{g}$ its dual Coxeter number. \eqref{EqGenAnFormADE} coincides with \eqref{EqAn20An} for the $A$-type theory. Note that as \eqref{EqGenAnFormADE} is expressed in terms of data intrinsic to $\mathfrak{g}$, this formula is automatically compatible with the exceptional isomorphisms among elements of the $A$, $D$ and $E$ series. For the reader's convenience, we recall the values of the dimension and of the dual Coxeter numbers:
\be
\begin{array}{l|ll}
& |\mathfrak{g}| & {\rm h}_\mathfrak{g} \\ \hline
A_n & n^2 + 2n & n + 1 \\
D_n & 2n^2 - n & 2n - 2 \\
E_6 & 78 & 12 \\
E_7 & 133 & 18 \\
E_8 & 248 & 30 \\
\end{array} \;.
\ee
Of course, the rank of $X_n$ is $n$. The first two terms of \eqref{EqGenAnFormADE}, which are the only ones relevant for the local anomaly, were obtained in \cite{Intriligator:2000eq}.
\subsection{Data required to specify a (2,0) theory}
As we already discussed, in the A-type theories, $h_W$ and $G_W$ have a clear interpretation in terms of M-theory data. For the other (2,0) theories, it is not obvious how these objects should be interpreted, especially for the E-type theories, where there is no M-theory realization. We define here data on $M$ that naturally give rise to $h_W$ and $G_W$. Presumably, this data is required in order to define the (2,0) theory on a 6-manifold $M$, independently of any M-theory realization.
We already know that in order to define the (Euclidean) (2,0) theory, we need a simply laced Lie algebra $\mathfrak{g}$, a smooth oriented Riemannian manifold $M$, an R-symmetry bundle $\mathscr{N}_M$ satisfying \eqref{EqRelSWClassesTMN} and a spin structure on $TM \oplus \mathscr{N}_M$. We claim that in addition to this we need a choice of global angular differential cocycle $\check{a}_{\tilde{M}}$ on the 4-sphere bundle $\tilde{M}$ associated to $\mathscr{N}_M$.
We saw that in the A-type theories, such a choice was necessary in order to perform the decoupling of the center-of-mass degrees of freedom. $\check{a}_{\tilde{M}}$, together with the requirement $\check{C}_M = \check{S}_M$, fully determines the M-theory C-field on $\tilde{M}$. Similarly, in any (2,0) theory, a choice of $\check{a}_{\tilde{M}}$ allows one to define $\check{b}_M := \frac{1}{2} \pi_\ast(\check{a}_{\tilde{M}} \cup \check{a}_{\tilde{M}})$, $\check{C}_M = \check{S}_M$ and $\check{A}_{M} = \check{C}_M - \check{b}_M$. In anomaly computations, this data is extended to 7- and 8-dimensional manifolds $U$ and $W$. $h_W$ and $G_W$ in \eqref{EqGenAnFormADE} are then respectively the field strengths of $\check{b}_W$ and $\check{C}_W$.
\subsection{Consistency}
Using our analysis of the $A_n$ case, it is easy to see that \eqref{EqGenAnFormADE} yields an integer on closed manifolds for any $\mathfrak{g}$, and therefore that it describes a well-defined anomaly. Indeed, the following terms take independently integer values on closed manifolds:
\be
\label{EqIntTerGenForm1}
\int_W \left(r(\mathfrak{g}) J_8 - \frac{r(\mathfrak{g})}{8}\sigma_W \right) \;,
\ee
\be
\label{EqIntTerGenForm2}
\int_W \left( \frac{|\mathfrak{g}|{\rm h}_\mathfrak{g}}{24} p_2(\mathscr{N}_W) + \frac{|\mathfrak{g}|{\rm h}_\mathfrak{g}}{2} h_W^2 \right) \;,
\ee
\be
\label{EqIntTerGenForm3}
\int_W r(\mathfrak{g}) {\rm h}_{\mathfrak{g}} h_W (G_W - h_W) \;.
\ee
The fact that \eqref{EqIntTerGenForm1} is an integer was explained in Section \ref{SecGlobAnCM}. To show that \eqref{EqIntTerGenForm2} is an integer, recall that we proved in Section \ref{SecConsCheck} that $\int_W p_2(\mathscr{N}_W) = 4\int_W h_W^2$ mod 4. Integrality will follow provided that $|\mathfrak{g}|{\rm h}_\mathfrak{g}/6 = -|\mathfrak{g}|{\rm h}_\mathfrak{g}/2$ mod 4, which requires $|\mathfrak{g}|{\rm h}_\mathfrak{g}$ to be a multiple of 6. This can be readily checked in each case. Finally, the last term takes an integer value because $r(\mathfrak{g}) {\rm h}_{\mathfrak{g}}$ is even, $2G_W$ and
$2h_W$ have integral periods, and $2G_W$ is a lift of the Wu class, hence is a characteristic element of the wedge product pairing on forms with integral periods (see Appendix \ref{SecRelLiftWuClass}).
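Explicitly (we record the check for the reader's convenience), $|A_n|{\rm h}_{A_n} = n(n+1)(n+2)$ is a product of three consecutive integers and hence a multiple of 6, $|D_n|{\rm h}_{D_n} = 2n(n-1)(2n-1)$ contains the even factor $n(n-1)$ as well as a factor divisible by 3, and $|E_6|{\rm h}_{E_6} = 936$, $|E_7|{\rm h}_{E_7} = 2394$ and $|E_8|{\rm h}_{E_8} = 7440$ are all multiples of 6. Similarly, $r(\mathfrak{g}){\rm h}_\mathfrak{g}$ equals $n(n+1)$ for $A_n$, $2n(n-1)$ for $D_n$, and $72$, $126$, $240$ for $E_{6,7,8}$, all even.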
\subsection{Further comments}
We do not have a compelling picture explaining how the conformal blocks arise in D- and E-type theories.
It would be interesting to derive the anomaly formula \eqref{EqGenAnFormADE} from the type IIB realization of the (2,0) theories, but we leave this for future work.
We attempted to derive \eqref{EqGenAnFormADE} for the $D_n$ series using M-theory on an $\mathbb{R}^5/\mathbb{Z}_2$ orbifold. However, we cannot perform a rigorous derivation, because of a puzzling feature of the orbifold background: the anomaly of the orbifold is not well-defined globally. This can be understood from the fact that the $\mathbb{R}^5/\mathbb{Z}_2$ orbifold sources a half-quantum of flux of the M-theory C-field. The orbifold singularity has an anomaly ``${\rm An}_{O}(U) = -\int_W \frac{1}{2}J_8$'' canceled by anomaly inflow. But as $\frac{1}{2}J_8$ does not integrate to an integer on a closed manifold $W$, the expression above does not define a geometric invariant of $U$. We therefore encounter the same problem that was plaguing the naive anomaly formula \eqref{EqGuessAn20Theory} for the (2,0) theory, and unlike in the latter case, there seems to be no extra term appearing to cure the inconsistency. Closing our eyes to this problem, a calculation very similar to that for the $A_n$ theory yields all the terms in \eqref{EqGenAnFormADE} with the right prefactors, except for the fourth one. Because of this, the anomaly derived in this way is inconsistent. We expect that a proper understanding of the orbifold's anomaly should cure this problem.
\subsection*{Acknowledgments}
This research was supported in part by SNF Grant No.200020-149150/1.
\section{Implementation Details} \label{sec:implementationdetails}
We used Caliban~\cite{ritchie2020caliban} to manage all experiments in a reproducible environment in Google Cloud’s AI Platform.
Each point in the plots shows the mean value taken over at least five different runs.
\subsection{Training for MLP} \label{sec:mlp-training}
\begin{itemize}
\item Dataset: MNIST~\cite{mnist}, CIFAR-10~\cite{cifar100} and CIFAR-100~\cite{cifar100}
\item Network: MLP
\item Width experiments: single hidden layer, $2^{n}$ hidden neurons where $n \in \{7,\dots,15\}$.
\item Depth experiments: $2^{9}$ neurons per layer, we add additional layers with the same number of neurons.
\item Hyper parameters:
\begin{itemize}
\item LR = fixed 0.01
\item stopping criteria = 300 epochs or loss (CE) $<$ 0.01
\item Momentum = 0.9
\end{itemize}
\end{itemize}
\subsection{Training for CNN architecture family} \label{sec:sconv-training}
\begin{itemize}
\item Dataset: MNIST~\cite{mnist}, CIFAR-10~\cite{cifar100} and CIFAR-100~\cite{cifar100}
\item Network: VGG~\cite{simonyan2014very}, ResNet~\cite{he2015deep}
\item Width experiments: we use VGG11 and ResNet18 changing the width of the first layer and adapting the following layers according to the ratios in the vanilla version of the networks.
\item Depth experiments: we change the architecture: VGG11, VGG13, VGG16 and ResNet18, ResNet34, ResNet50 while keeping the default width of the vanilla networks.
\item Hyper parameters:
\begin{itemize}
\item LR = Cosine Annealing~\cite{loshchilov2016sgdr} with initial LR=0.01
\item stopping criteria = 300 epochs or loss $<$ 0.1
\item Momentum = 0.9
\end{itemize}
\end{itemize}
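For concreteness, the CNN training loop sketched below matches the hyperparameters listed above (this is our illustrative PyTorch rendering; the data pipeline and batch size are placeholders not specified in the text):
\begin{verbatim}
import torch

def train(model, loader, max_epochs=300, loss_threshold=0.1):
    # SGD with momentum 0.9 and a cosine-annealed learning rate
    # starting at 0.01, as listed above.
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=max_epochs)
    for epoch in range(max_epochs):
        total_loss, n_batches = 0.0, 0
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
            n_batches += 1
        scheduler.step()
        # Stopping criterion: 300 epochs or training loss below threshold.
        if total_loss / n_batches < loss_threshold:
            break
\end{verbatim}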
\section{Additional Plots}
\subsection{Sparsification by increasing network depth}
\label{sec:depth_experiments}
In addition to increasing network width before applying random static sparsification, we also test sparsification by increasing network depth. In this case, we duplicate layers while keeping their width unchanged. In contrast to wider networks, high sparsity levels severely impair connectivity between the layers of deeper networks, which makes them more difficult to train and converge. VGG experiences convergence issues when sparsifying by increasing depth once sparsity approaches 10\,\%. ResNet appears less susceptible to this issue due to the presence of skip connections, whereas MLPs are more robust due to a considerably lower depth in our settings (1, 2, 4 and 8 layers). Our experiments with deeper MLP networks and high sparsity also reveal convergence issues similar to VGG. We run experiments for increasing depth for all architectures and datasets up to a point where the overall network accuracy starts to drop and as long as network training converges.
\Figref{fig:depth:perturbated_models} and \Figref{fig:depth:corrupted_data} show the results for weight perturbations and corrupted data of varying severity. The results are similar and show that robustness improves as long as network accuracy without intervention remains steady.
\Figref{fig:perturbated_models_recall} shows the results for the lowest recall among all classes. Network accuracy on the most sensitive classes does not decline with sparsity. The test is conducted without weight perturbation or data corruption.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/noisy_model_mlp.pdf}
\caption{MLP}
\label{fig:depth:perturbated_models_mlp}
\end{subfigure}
\hskip 3cm
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/noisy_model_resnet.pdf}
\caption{ResNet}
\label{fig:depth:perturbated_models_resnet}
\end{subfigure}
\caption{{\bf Robustness to weight perturbations. Sparsification by increasing network depth.} We add multiplicative Gaussian noise $z_i\sim{\mathcal{N}}(\mu, w_i^2\sigma_i^2)$ to each weight and evaluate model performance. There is a sweetspot corresponding to optimal sparsity. With higher depth and sparsity network connectivity declines, leading to a simultaneous accuracy and robustness drop.}
\label{fig:depth:perturbated_models}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/data_robustness_mlp}
\caption{MLP}
\label{fig:depth:corrupted_data_mlp}
\end{subfigure}
\hskip 3cm
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/data_robustness_resnet}
\caption{ResNet}
\label{fig:depth:corrupted_data_resnet}
\end{subfigure}
\caption{{\bf Robustness to corrupted data. Sparsification by increasing network depth.} Corrupted datasets MNIST-C, CIFAR10-C and CIFAR100-C. There is a sweetspot corresponding to optimal sparsity. With higher depth and sparsity network connectivity declines, leading to a simultaneous accuracy and robustness drop.}
\label{fig:depth:corrupted_data}
\end{figure*}
\begin{figure*}[h]
\centering
\subfloat[Sparsification by increasing network width]{
\includegraphics[width=.33\linewidth]{figs/width/min_recall.pdf}
}
\hskip 3cm
\subfloat[Sparsification by increasing network depth]{
\includegraphics[width=.33\linewidth]{figs/depth/min_recall.pdf}
}
\caption{{\bf Minimum recall among all classes.} Network accuracy on the most sensitive classes does not decline with sparsity. Tested without weight / data corruptions. Sparsification by increasing network width (left) and depth (right).}
\label{fig:perturbated_models_recall}
\end{figure*}
\subsection{Detailed results for corrupted data}
\label{sec:corrupted_data_details}
The plots presented in \Figref{fig:corrupted_data_mlp_selected}, \Figref{fig:corrupted_data_vgg_selected} and \Figref{fig:corrupted_data_resnet_selected} provide details on the impact of individual corruption methods on network performance. We show details for MLP, VGG and ResNet architectures trained on MNIST, CIFAR-10 and CIFAR-100. Sparsification is achieved by increasing network width. Although the effect of data corruptions on model performance varies widely, it can be observed that in all cases a sparser network matches the accuracy of the vanilla 100\,\% network. This observation holds up to high sparsity levels where the overall model performance declines.
\begin{figure*}[htb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_MNIST_data_corruption_robustness.pdf}
\caption{MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_CIFAR10_data_corruption_robustness.pdf}
\caption{CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf One layer MLP performance on selected corruption types.} For CIFAR10-C and CIFAR100-C we observe a clear trend across all corruption types, which suggests that the sparser networks with increased width are more robust. We note that for the simpler MNIST-C task such an increase in performance happens at earlier sparsity levels.}
\label{fig:corrupted_data_mlp_selected}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_MNIST_data_corruption_robustness.pdf}
\caption{MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_CIFAR10_data_corruption_robustness.pdf}
\caption{CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf VGG11 performance on selected corruption types.} The results show a clear upwards trend across different corruption types, which indicates that the networks get more robust as the sparsity and width increase.}
\label{fig:corrupted_data_vgg_selected}
\end{figure*}
\begin{figure*}[t]
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_MNIST_data_corruption_robustness.pdf}
\caption{MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_CIFAR10_data_corruption_robustness.pdf}
\caption{CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf ResNet18 performance on selected corruption types.} We observe an upwards trend across corruption types for CIFAR10-C and CIFAR100-C: models with higher width and higher sparsity perform better on corrupted data. We note that the increase in performance for the simpler MNIST-C task happens sooner.}
\label{fig:corrupted_data_resnet_selected}
\end{figure*}
\section{Introduction}
\label{sec:intro}
Deep learning methods are increasingly used for solving complex tasks, yet little is known about the choice of the best architecture, the required model size, capacity, and the trade-offs involved. A common strategy is to train overparameterized models and compress them into smaller representations~\cite{hoefler2021sparsity}. This works remarkably well with an almost negligible drop in accuracy~\cite{gale2019state,blalock2020state}, and is crucial to make use of these models in resource-constrained environments. Recent work, however, shows that test accuracy does not capture how model compression impacts the generalization properties of these models~\cite{hooker2020compressed, entezari2019class}.
Related literature refers to robustness as a network's ability to generalize under small distribution shifts that humans are usually robust to. There is a growing body of work studying methods for building robust models. Recent studies~\cite{ShankarRMFRS20,recht2019imagenet} found that image classification models show a consistent accuracy drop when evaluated on ImageNet~\cite{deng2009imagenet} and ImageNetV2~\cite{recht2019imagenet}, while humans achieve the same accuracy. Another line of research aims at minimizing the worst case expected error over a set of probability distributions by applying distributionally robust optimization~\cite{shafieezadehabadeh2015distributionally,duchi2020distributionally,sagawa2020distributionally}. A similar line of work focuses on finding models that have low performance drop on adversarial examples~\cite{Biggio_2018,madry2019deep}.
A recent study by~\citet{hooker2020compressed} shows that model compression, and to a smaller extent quantization, result in tremendous robustness degradation. At the same time, \citet{golubeva2021wider} found that wider networks of the same capacity (same number of parameters) yield better performance. Model compression leads simultaneously to sparser and lower capacity networks, yet the contribution of both effects is mixed. Understanding the impact of these effects on model robustness in isolation is crucial when optimizing machine learning models for resource-constrained devices. This work evaluates the effect of model sparsification while keeping the network capacity, defined by the total number of parameters, fixed.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/noisy_model_mlp.pdf}
\caption{One layer MLP}
\label{fig:width:perturbated_models_mlp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/noisy_model_vgg.pdf}
\caption{VGG11}
\label{fig:width:perturbated_models_vgg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/noisy_model_resnet.pdf}
\caption{ResNet18}
\label{fig:width:perturbated_models_resnet}
\end{subfigure}
\caption{{\bf Robustness to weight perturbations, sparsification by increasing width.} We add multiplicative Gaussian noise $z_i\sim{\mathcal{N}}(\mu, w_i^2\sigma_i^2)$ to each weight and evaluate model performance. We observe that as we move towards higher sparsity levels, the performance first increases and then decreases at extreme sparsity levels. We note that such an increase happens earlier for simpler tasks like MNIST. This performance improvement indicates a flatter loss landscape around the minima, suggesting better generalization.}
\label{fig:width:perturbated_models}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/data_robustness_mlp.pdf}
\caption{One layer MLP}
\label{fig:width:corrupted_data_mlp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/data_robustness_vgg.pdf}
\caption{VGG11}
\label{fig:width:corrupted_data_vgg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/width/data_robustness_resnet.pdf}
\caption{ResNet18}
\label{fig:width:corrupted_data_resnet}
\end{subfigure}
\caption{{\bf Robustness to data corruption, sparsification by increasing width.} We evaluate the performance of the models on the corrupted datasets MNIST-C, CIFAR10-C and CIFAR100-C. We observe that as we move towards higher sparsity levels, the performance first increases and then decreases at extreme sparsity levels. We note that such an increase happens earlier for simpler tasks like MNIST.}
\label{fig:width:corrupted_data}
\end{figure*}
\begin{figure*}[htb]
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/adv_width_MLP.pdf}
\caption{One layer MLP}\label{adv_mlp_1layer}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/adv_width_vgg.pdf}
\caption{VGG11}\label{adv_vgg11}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/adv_width_resnet.pdf}
\caption{ResNet18}
\label{adv_resnet18}
\end{subfigure}
\caption{\small {\bf Robustness to adversarial attacks. Sparsification by increasing width.} Robustness to all adversarial attacks (BIM~\cite{kurakin2016adversarial}, APGD~\cite{croce2020reliable}, PGD~\cite{madry2019deep}, FGSM~\cite{goodfellow2014explaining}) improves with fewer remaining weights and decreases at extreme sparsity levels, where the overall network accuracy (clean) drops.}
\label{fig:adv_corrupted_data_selected}
\end{figure*}
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/post_train_sparsify_width/noisy_model_mlp.pdf}
\caption{One layer MLP}
\label{fig:posttraining:perturbated_models_mlp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/post_train_sparsify_width/noisy_model_vgg.pdf}
\caption{VGG11}
\label{fig:posttraining:perturbated_models_vgg}
\end{subfigure}
\caption{{\bf Robustness to weight perturbations. Sparsification after training by increasing width.} We add multiplicative Gaussian noise $z_i\sim{\mathcal{N}}(\mu, w_i^2\sigma_i^2)$ to each weight and evaluate performance on test data. The performance decreases at extreme sparsity levels.}
\label{fig:posttraining:perturbated_models}
\end{figure}
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/post_train_sparsify_width/data_robustness_mlp.pdf}
\caption{One layer MLP}
\label{fig:posttraining:corrupted_data_mlp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/post_train_sparsify_width/data_robustness_vgg.pdf}
\caption{VGG11}
\label{fig:posttraining:corrupted_data_vgg}
\end{subfigure}
\caption{{\bf Robustness to data corruption. Sparsification after training by increasing width.} We evaluate on the corrupted datasets MNIST-C, CIFAR10-C and CIFAR100-C. Compared to static sparsity applied prior to training, robustness degrades sooner.}
\label{fig:posttraining:corrupted_data}
\end{figure}
\fakeparagraph{Contributions}
We hypothesise that sparsity alone does not hurt model robustness when the network capacity is fixed and provide empirical evidence to support this hypothesis in a number of settings. We run our study on a range of network architectures (MLPs, VGG and ResNets), datasets (MNIST, CIFAR-10, CIFAR-100), robustness tests (weight perturbations, data corruptions, adversarial examples) and evaluate the overall and per class network performance. We observe that for randomly initialized models with a static sparsity pattern applied before or after training, network sparsification does not hurt or even improves robustness to a certain sparsity compared to a dense network of the same capacity. Robustness and accuracy decline simultaneously for very high sparsity due to loose connectivity between network layers. We show that our hypothesis holds when introducing sparsity by increasing network width and depth in separate experiments, applied before and after training. These findings show that a rapid robustness drop caused by network compression observed in the literature is due to a reduced network capacity rather than sparsity.
\section{Experimental Framework}
\label{sec:framework}
We hypothesise that sparsity, while keeping the number of parameters fixed, does not hurt network robustness. We support our hypothesis by exhaustive tests covering multiple datasets, network architectures, model and data corruptions, sparsity levels, sparsification methods and schedules. The details are given below.
\fakeparagraph{Datasets and architectures}
The datasets used in the experiments include MNIST~\cite{mnist}, CIFAR-10~\cite{cifar100}, and CIFAR-100~\cite{cifar100}. We fix the number of weights in each network architecture (one layer MLP, VGG~\cite{simonyan2014very}, ResNet~\cite{he2015deep}) throughout all experiments by increasing the width or depth and introducing the proper corresponding sparsity. See \textit{sparsification methods} for more details. We use one layer MLP with $2^7$ hidden units, VGG with 11 layers,
and ResNet18 as base architectures.
We refer to these vanilla architectures as 100\,\%-networks {\it before} sparsification. Note that for both ResNet and VGG our vanilla implementation uses a base layer width of 16, which is lower than the 64 used in the original architectures. We use width to set the number of output channels for the first layer and use the same width ratios as the respective vanilla architectures for the following layers. All networks were trained using SGD with momentum 0.9. Details for each model family are provided in \Secref{sec:implementationdetails}.
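Concretely, the amount of sparsity needed to keep the capacity fixed follows from the dense parameter counts; the helper below is an illustrative sketch of ours and not taken from any released code:
\begin{verbatim}
def keep_fraction(base_model, scaled_model):
    # Fraction of weights to keep in the widened or deepened model so
    # that its number of nonzero parameters matches the base network.
    n_base = sum(p.numel() for p in base_model.parameters())
    n_scaled = sum(p.numel() for p in scaled_model.parameters())
    return n_base / n_scaled  # e.g. 0.25 corresponds to 75% sparsity
\end{verbatim}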
\fakeparagraph{Sparsification methods}\label{par:sparsification}
Existing literature covers multiple ways to make use of sparsity during and after model training including static and dynamic sparsity (\emph{e.g.},\xspace $\beta$-Lasso~\cite{neyshabur2020towards}), iterative hard thresholding (\emph{e.g.},\xspace Lottery Ticket Hypothesis with various pruning strategies~\cite{frankle2018lottery, renda2020comparing}) and others. \cite{hoefler2021sparsity} provides a comprehensive survey on pruning strategies. Sparsification without changing the number of parameters was investigated in~\cite{golubeva2021wider}. In their study, static sparsity showed the most prominent impact on network performance and is thus adopted in this work.
We sparsify a network while preserving its capacity by changing the network's width or depth. When sparsifying by increasing width, we leverage the approach introduced in \cite{golubeva2021wider}: every layer of the network is sparsified by removing weights at random in proportion to the layer size, using a static mask generated at initialization. This approach is referred to as \emph{static sparsity}. We build on its publicly available implementation~\cite{golubeva2021wider}. Sparsifying by increasing network depth involves duplicating layers and then applying a static random mask to sparsify the weight tensors. When sparsifying by increasing depth, we consider MLP with $2^9$ hidden units in each layer, and add layers of the same size. For VGG and ResNet we build architecture families VGG11, VGG13, VGG16 and ResNet18, ResNet34, ResNet50 all enjoying the default width of 64.
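The PyTorch sketch below illustrates this procedure (our simplified rendering of static sparsity; the implementation of \cite{golubeva2021wider} may differ in details):
\begin{verbatim}
import torch

@torch.no_grad()
def apply_static_sparsity(model, keep_fraction):
    # Sample one fixed random binary mask per weight tensor at
    # initialization; biases (1-d parameters) are left dense.
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() > 1:
            mask = (torch.rand_like(param) < keep_fraction).float()
            param.mul_(mask)
            masks[name] = mask
    # The masks are reapplied after every optimizer step so that the
    # sparsity pattern stays static throughout training.
    return masks
\end{verbatim}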
\fakeparagraph{Sparsification schedules}
In addition to static sparsity applied prior to network training, we also investigate network pruning after training by removing a certain amount of weights with the lowest magnitude to match the required sparsity level. Note that no fine-tuning is applied.
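A sketch of this post-training pruning step follows (we assume a layer-wise magnitude criterion for illustration):
\begin{verbatim}
import torch

@torch.no_grad()
def magnitude_prune(model, keep_fraction):
    # Keep only the largest-magnitude weights in each weight tensor
    # after training; no fine-tuning is applied afterwards.
    for param in model.parameters():
        if param.dim() > 1:
            k = max(1, int(keep_fraction * param.numel()))
            threshold = param.abs().flatten().topk(k).values.min()
            param.mul_((param.abs() >= threshold).float())
\end{verbatim}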
\fakeparagraph{Robustness measures}
We evaluate the impact of sparsity on model performance with respect to weight perturbations~\cite{vonoswald2021neural}, data corruptions~\cite{hendrycks2019robustness} and natural adversarial examples~\cite{hendrycks2021natural}.
{\it Model perturbation.} Similarly to \cite{vonoswald2021neural}, we perturb model weights by applying Gaussian noise $z_i\sim{\mathcal{N}}(\mu,w_i^2\sigma_i^2)$ in proportion to the magnitude of each weight $w_i, i \in L$,
and then measure the difference in the loss $\delta \mathcal{L} = {\mathbb{E}_z}[\mathcal{L}(w_i+z)-\mathcal{L}(w_i)]$.
Accuracy drop due to model perturbation is related to the flatness of the loss landscape around the obtained optimum.
Robustness to weight perturbation can also serve as a proxy for the quantization error introduced by weight quantization in neural network compression~\cite{novac2021quantization}.
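A minimal sketch of the perturbation (for illustration we use a single scalar noise scale $\sigma$ and take $\mu = 0$ by default):
\begin{verbatim}
import torch

@torch.no_grad()
def perturb_weights(model, sigma, mu=0.0):
    # Multiplicative Gaussian noise: z_i ~ N(mu, w_i^2 * sigma^2),
    # i.e. the noise standard deviation scales with each weight's
    # magnitude.
    for param in model.parameters():
        noise = mu + sigma * param.abs() * torch.randn_like(param)
        param.add_(noise)
\end{verbatim}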
{\it Corrupted data.} We apply numerous algorithmically generated corruptions, similar to the ones evaluated in \cite{hooker2020compressed} (\emph{e.g.},\xspace blur, contrast, pixelation) to all datasets used in this paper. This allows us to investigate how sensitive the sparsified models are to data corruptions of different severity which humans are oblivious to. Our corrupted datasets are MNIST-C \cite{mu2019mnist}, CIFAR10-C and CIFAR100-C \cite{hendrycks2019robustness}.
{\it Natural adversarial examples.} We use Torchattacks~\cite{kim2020torchattacks} to generate a diverse range of adversarial attacks for different combinations of the mentioned architectures and datasets. These include FGSM~\cite{goodfellow2014explaining}, BIM~\cite{kurakin2016adversarial}, APGD~\cite{croce2020reliable}, and PGD~\cite{madry2019deep}.
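The attacks can be instantiated as in the sketch below (the attack strengths are illustrative defaults, not necessarily the values used in our experiments; \texttt{model}, \texttt{images} and \texttt{labels} are assumed given):
\begin{verbatim}
import torchattacks

attacks = {
    "FGSM": torchattacks.FGSM(model, eps=8/255),
    "BIM":  torchattacks.BIM(model, eps=8/255, alpha=2/255, steps=10),
    "PGD":  torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10),
    "APGD": torchattacks.APGD(model, eps=8/255),
}
for name, atk in attacks.items():
    adv_images = atk(images, labels)  # generate adversarial examples
    accuracy = (model(adv_images).argmax(1) == labels).float().mean()
\end{verbatim}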
When applying sparsity, we evaluate both the overall model performance and its performance on the most sensitive class. We follow the methodology introduced in \cite{hooker2020compressed} and evaluate the change to class level recall compared to the overall model accuracy. The obtained results are presented below.
\section{Results}
\label{sec:results}
\fakeparagraph{Perturbed model weights}
We first investigate the networks that were sparsified while growing the width to keep their capacity fixed. \Figref{fig:width:perturbated_models} shows that as we move towards higher sparsity levels, the test performance first increases and then decreases at extreme sparsity levels. We note that such an increase happens earlier for simpler tasks like MNIST. We observe that sparse configurations
are indeed in flatter regions of weight space as $\delta\mathcal{L}$ increases more slowly with $\delta{z_i}$. This suggests better robustness and generalization around the minima~\cite{pittorino2020entropic, jiang2019fantastic}. Each point in this plot shows the mean over five networks trained from different initializations. When sparsification is applied while increasing network depth, the maximum accuracy and robustness are achieved for smaller depth values
in all experiments. Note that keeping a network connected while increasing its depth, in contrast to width, becomes difficult with higher sparsity. The results are summarized in \Secref{sec:depth_experiments}. The outcome across all experiments consistently suggests that sparsification alone does not undermine network robustness to weight perturbations as long as sufficient network connectivity is maintained.
\fakeparagraph{Corrupted data}
\Figref{fig:width:corrupted_data} evaluates the performance of the models on the corrupted datasets MNIST-C, CIFAR10-C and CIFAR100-C. We observe that as we move towards higher sparsity levels, the test performance first increases and then decreases at extreme sparsity levels. We note that such an increase happens earlier for simpler tasks like MNIST.
Each point in \Figref{fig:width:corrupted_data} is the mean performance over three trained networks. For each network we randomly sample 1000 examples from a dataset and add five noise samples in each run. On CIFAR10-C and CIFAR100-C our evaluation considers corruption severities of two and four as classified by \cite{hendrycks2019robustness}. Detailed results for specific corruption types can be found in \Secref{sec:corrupted_data_details}. The results for the achieved performance of networks sparsified by increasing depth are also shown in \Secref{sec:depth_experiments}.
We note that VGG networks experience convergence issues as the network sparsity approaches 10\% due to a lack of connectivity between layers. This is not the case for MLP and ResNet, which also converge for a lower percentage of remaining weights. We attribute these differences to the power of skip connections in ResNet and the low overall tested network depths (1, 2, 4 and 8) for MLP.
\fakeparagraph{Sensitive classes}
Similarly to \cite{hooker2020compressed}, we consider the classes with the lowest recall to be the sensitive ones. For each sparsity level, we train five models, evaluate them on the test data and report the minimum recall among all classes. \cite{hooker2020compressed} shows that there are some particular examples in each class that a pruned network forgets easily. However, we observe that as the networks get wider (or deeper) and sparser the minimum recall does not decrease. Sparsification does not disproportionately affect sensitive classes, which may not be noticeable by just looking at the overall accuracy. This is due to the fact that the capacity of the networks is fixed. The results are shown in \Figref{fig:perturbated_models_recall} in \Secref{sec:depth_experiments}.
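The per-class evaluation is straightforward; a sketch using scikit-learn (an implementation choice of ours) is:
\begin{verbatim}
from sklearn.metrics import recall_score

def min_class_recall(y_true, y_pred):
    # Recall of the most sensitive class = minimum per-class recall.
    per_class = recall_score(y_true, y_pred, average=None)
    return per_class.min()
\end{verbatim}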
\fakeparagraph{Adversarial attacks}
\Figref{fig:adv_corrupted_data_selected} shows the robustness of sparsified networks when applying adversarial attacks to perturb test data. We observe a consistent trend for robustness to all adversarial attacks (BIM~\cite{kurakin2016adversarial}, APGD~\cite{croce2020reliable}, PGD~\cite{madry2019deep}, FGSM~\cite{goodfellow2014explaining}). Similar to perturbed model weights and corrupted data, as fewer weights remain, test performance on adversarial examples first improves and then decreases at extreme sparsity levels where the overall (clean) network accuracy drops. Dense VGG networks trained on MNIST show the highest accuracy decline in the presence of all attacks, while sparsification helps to improve adversarial robustness.
\fakeparagraph{Post-training sparsification}
\Figref{fig:posttraining:perturbated_models} depicts the results for post-training sparsification for MLP and VGG architectures challenged with perturbed model weights. The results indicate a similar trend to the experiments with static sparsity applied at initialization. For VGG we observe a slight improvement followed by an accuracy drop. However, the performance does deteriorate sooner than with static sparsity. For MLP the results show stagnating accuracy and a slight drop in performance on CIFAR-10. We attribute this to the simplicity of our sparsification method and a relatively low number of weights in the one layer MLP. Similar results are obtained on corrupted datasets visualized in \Figref{fig:posttraining:corrupted_data}.
\section{Conclusion}
\label{sec:conclusion}
In this work we hypothesise that sparsity, while keeping the number of parameters fixed, does not hurt network robustness. We provide experimental evidence to support this claim based on several standard architectures, datasets, sparsification methods and measures of robustness. Our observation is that network sparsification often helps to improve robustness compared to a dense model, yet the benefits decline together with the overall model accuracy for high sparsity levels. This is due to the increasingly loose connectivity between layers which complicates optimization. Since network capacity rather than sparsity causes accuracy and robustness drop of compressed models, designing pruning methods that treat network capacity and sparsity separately can lead to better compressed models. In addition, our work emphasizes the need for training procedures that better support sparse operations, which would allow for a faster and more memory efficient training of sparse networks.
\clearpage
\section{Implementation Details} \label{sec:implementationdetails}
We used Caliban~\cite{ritchie2020caliban} to manage all experiments in a reproducible environment in Google Cloud’s AI Platform.
Each point in plots show the mean value taken over at least five different runs.
\subsection{Training for MLP} \label{sec:mlp-training}
\begin{itemize}
\item Dataset: MNIST~\cite{mnist}, CIFAR-10~\cite{cifar100} and CIFAR-100~\cite{cifar100}
\item Network: MLP
\item Width experiments: single hidden layer, hidden neurons $2^{n}$ where n $\in {7..15} $.
\item Depth experiments: $2^{9}$ neurons per layer, we add additional layers with the same number of neurons.
\item Hyper parameters:
\begin{itemize}
\item LR = fixed 0.01
\item stopping criteria = 300 epochs or loss (CE) $<$ 0.01
\item Momentum = 0.9
\end{itemize}
\end{itemize}
\subsection{Training for CNN architecture family} \label{sec:sconv-training}
\begin{itemize}
\item Dataset: MNIST~\cite{mnist}, CIFAR-10~\cite{cifar100} and CIFAR-100~\cite{cifar100}
\item Network: VGG~\cite{simonyan2014very}, ResNet~\cite{he2015deep})
\item Width experiments: we use VGG11 and ResNet18 changing the width of the first layer and adapting the following layers according to the ratios in the vanilla version of the networks.
\item Depth experiments: we change the architecture: VGG11, VGG13, VGG16 and ResNet18, ResNet34, ResNet50 while keeping the default width of the vanilla networks.
\item Hyperparameters (see the scheduler sketch after this list):
\begin{itemize}
\item LR = Cosine Annealing~\cite{loshchilov2016sgdr} with initial LR=0.01
\item stopping criteria = 300 epochs or loss $<$ 0.1
\item Momentum = 0.9
\end{itemize}
\end{itemize}
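A sketch of the corresponding optimizer/scheduler pairing follows; the parameter list below is a placeholder for the actual VGG/ResNet weights.
\begin{verbatim}
import torch

params = [torch.nn.Parameter(torch.randn(10))]  # placeholder parameters
opt = torch.optim.SGD(params, lr=0.01, momentum=0.9)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=300)

for epoch in range(300):
    # ... one epoch of training would run here ...
    sched.step()   # anneal the LR along a cosine from 0.01 toward 0
\end{verbatim}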
\section{Additional Plots}
\subsection{Sparsification by increasing network depth}
\label{sec:depth_experiments}
In addition to increasing network width before applying random static sparsification, we also test sparsification by increasing network depth. In this case, we duplicate layers while keeping their width unchanged. In contrast to wider networks, high sparsity levels severely impair the connectivity between layers of deeper networks, which makes them more difficult to train and converge. VGG experiences convergence issues when sparsifying by increasing depth once sparsity approaches 10\,\%. ResNet appears less susceptible to this issue due to the presence of skip connections, whereas MLPs are more robust due to the considerably lower depths in our settings (1, 2, 4, and 8). Our experiments with deeper MLP networks and high sparsity also reveal convergence issues similar to VGG. We run experiments with increasing depth for all architectures and datasets up to the point where the overall network accuracy starts to drop and as long as network training converges.
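To make the capacity matching concrete, the following sketch computes the fraction of weights that must remain when an MLP is deepened at fixed width so that the total weight count stays fixed (a simplified count that ignores biases):
\begin{verbatim}
def mlp_weights(d_in: int, width: int, depth: int, d_out: int) -> int:
    """Weight entries of an MLP with `depth` hidden layers of `width`."""
    return d_in * width + (depth - 1) * width ** 2 + width * d_out

def matched_density(d_in, width, base_depth, new_depth, d_out):
    """Keep-fraction giving the deeper MLP the same weight count."""
    return (mlp_weights(d_in, width, base_depth, d_out)
            / mlp_weights(d_in, width, new_depth, d_out))

# MNIST-sized MLP, 1 -> 8 hidden layers of width 2**9:
print(matched_density(784, 2 ** 9, 1, 8, 10))  # ~0.18 remaining weights
\end{verbatim}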
\Figref{fig:depth:perturbated_models} and \Figref{fig:depth:corrupted_data} show the results for weight perturbations and corrupted data of varying severity. The results are similar and show that robustness improves as long as network accuracy without intervention remains steady.
\Figref{fig:perturbated_models_recall} shows the results for the lowest recall among all classes. Network accuracy on the most sensitive classes does not decline with sparsity. The test is conducted without weight perturbation or data corruption.
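The reported statistic is simply the minimum of the per-class recalls on the clean test set; a minimal sketch using scikit-learn:
\begin{verbatim}
import numpy as np
from sklearn.metrics import recall_score

# y_true / y_pred: class labels on the (uncorrupted) test set.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
per_class = recall_score(y_true, y_pred, average=None)
print("minimum recall over classes:", per_class.min())
\end{verbatim}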
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/noisy_model_mlp.pdf}
\caption{MLP}
\label{fig:depth:perturbated_models_mlp}
\end{subfigure}
\hskip 3cm
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/noisy_model_resnet.pdf}
\caption{ResNet}
\label{fig:depth:perturbated_models_resnet}
\end{subfigure}
\caption{{\bf Robustness to weight perturbations. Sparsification by increasing network depth.} We add multiplicative Gaussian noise $z_i\sim{\mathcal{N}}(\mu, w_i^2\sigma_i^2)$ to each weight and evaluate model performance. There is a sweet spot corresponding to an optimal sparsity. With higher depth and sparsity, network connectivity declines, leading to a simultaneous accuracy and robustness drop.}
\label{fig:depth:perturbated_models}
\end{figure*}
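The multiplicative noise model in the caption above can be applied in place as follows (a sketch; the grid of $\sigma$ values used for the plots is not restated here):
\begin{verbatim}
import torch

def perturb_weights_(model: torch.nn.Module, sigma: float,
                     mu: float = 0.0) -> None:
    """Add z_i ~ N(mu, (sigma * w_i)^2) to every weight in place, i.e.
    Gaussian noise whose scale is proportional to each weight."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(mu + sigma * p.abs() * torch.randn_like(p))
\end{verbatim}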
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/data_robustness_mlp}
\caption{MLP}
\label{fig:depth:corrupted_data_mlp}
\end{subfigure}
\hskip 3cm
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/depth/data_robustness_resnet}
\caption{ResNet}
\label{fig:depth:corrupted_data_resnet}
\end{subfigure}
\caption{{\bf Robustness to corrupted data. Sparsification by increasing network depth.} Corrupted datasets: MNIST-C, CIFAR10-C, and CIFAR100-C. There is a sweet spot corresponding to an optimal sparsity. With higher depth and sparsity, network connectivity declines, leading to a simultaneous accuracy and robustness drop.}
\label{fig:depth:corrupted_data}
\end{figure*}
\begin{figure*}[h]
\centering
\subfloat[Sparsification by increasing network width]{
\includegraphics[width=.33\linewidth]{figs/width/min_recall.pdf}
}
\hskip 3cm
\subfloat[Sparsification by increasing network depth]{
\includegraphics[width=.33\linewidth]{figs/depth/min_recall.pdf}
}
\caption{{\bf Minimum recall among all classes.} Network accuracy on the most sensitive classes does not decline with sparsity. Tested without weight / data corruptions. Sparsification by increasing network width (left) and depth (right).}
\label{fig:perturbated_models_recall}
\end{figure*}
\subsection{Detailed results for corrupted data}
\label{sec:corrupted_data_details}
The plots presented in \Figref{fig:corrupted_data_mlp_selected}, \Figref{fig:corrupted_data_vgg_selected} and \Figref{fig:corrupted_data_resnet_selected} provide details on the impact of individual corruption methods on network performance. We show details for MLP, VGG and ResNet architectures trained on MNIST, CIFAR-10 and CIFAR-100. Sparsification is achieved by increasing network width. Although the effect of data corruptions on model performance varies widely, it can be observed that in all cases a sparser network matches the accuracy of the vanilla 100\,\% network. This observation holds up to high sparsity levels where the overall model performance declines.
\begin{figure*}[htb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_MNIST_data_corruption_robustness.pdf}
\caption{MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_CIFAR10_data_corruption_robustness.pdf}
\caption{CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/MLP_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf One-layer MLP performance on selected corruption types.} For CIFAR10-C and CIFAR100-C we observe a clear trend across all corruption types, suggesting that the sparser networks with increased width are more robust. We note that for the simpler MNIST-C task this performance increase occurs at earlier sparsity levels.}
\label{fig:corrupted_data_mlp_selected}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_MNIST_data_corruption_robustness.pdf}
\caption{MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_CIFAR10_data_corruption_robustness.pdf}
\caption{CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/vgg_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf VGG11 performance on selected corruption types.} The results show a clear upward trend across different corruption types, which indicates that the networks become more robust as sparsity and width increase.}
\label{fig:corrupted_data_vgg_selected}
\end{figure*}
\begin{figure*}[t]
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_MNIST_data_corruption_robustness.pdf}
\caption{ MNIST-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_CIFAR10_data_corruption_robustness.pdf}
\caption{ CIFAR10-C}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/appendix/width/resnet_CIFAR100_data_corruption_robustness.pdf}
\caption{CIFAR100-C}
\end{subfigure}
\caption{{\bf ResNet18 performance on selected corruption types.} We observe an upward trend across corruption types for CIFAR10-C and CIFAR100-C: models with higher width and higher sparsity perform better on corrupted data. We note that the performance increase for the simpler MNIST-C task happens sooner.}
\label{fig:corrupted_data_resnet_selected}
\end{figure*}
\section{Introduction}
Dust plays several prominent roles in the physics of the interstellar medium (ISM) and star formation. Dust absorbs ultraviolet (UV) radiation emitted by young stars. This absorbed UV radiation is re-emitted at infrared (IR) wavelengths, cooling the dust and also cooling the gas via collisions with dust grains \citep{Draine78, Dwek86, Hollenbach+97}. Studies of nearby star-forming galaxies suggest that, on average, nearly half of the emitted starlight is reprocessed by dust \citep{Draine03, Tielens+05}, and the thermal IR emission from dust grains dominates the 10--100 \micron\xspace spectral energy distributions (SEDs) of galaxies.
Dust is an important tracer of star formation activity and provides an indirect measure of the star formation rate (SFR) in external galaxies. The bolometric, thermal IR luminosity ($L_{\rm TIR}$) is one of the most reliable tracers of dust-obscured star formation \citep{Kennicutt98, PerezGonzalez+06, Kennicutt+09, Kennicutt+12}. \HII regions are locations of recent, active star formation, where massive OB stars emit ionizing UV photons that can interact with the surrounding dust or escape into the ISM. \HII regions are generally composed of multiple components, including the central ionizing cluster(s) of OB stars, a surrounding photodissociation region (PDR), and the remnants of the giant molecular cloud from which the star cluster(s) formed. \HII regions are frequently seen in close proximity to one another, sometimes so much so that their components overlap. It is therefore common to regard \HII regions more generally as star-forming regions within a galaxy.
In general, star formation in the Milky Way cannot be studied using the same observational techniques as external galaxies \citep{Chomiuk+Povich11}. Sight-lines through the Galactic disk suffer very high extinction, so SFR diagnostics that depend upon optical/UV observational tracers (in particular, H$\alpha$) cannot be applied. Distances to Galactic \HII regions are often highly uncertain, and confusion arises from multiple star forming regions overlapping along a given line of sight. However, mid- and far-IR SFR tracers can be applied to Galactic and extragalactic regions \citep{Calzetti+07,Calzetti+10,Li+10,Li+13,Stephens+14,V+E13,VEH16}, and thermal radio continuum observations can easily resolve individual Galactic regions and be used as a substitute for recombination-line diagnostics to count ionizing photons \citep{Paladini+03}.
The Milky Way offers the unique opportunity to study individual massive star forming regions (MSFRs) resolved over sub-parsec distance scales, where the associated young stellar populations can be directly observed. The Massive Young Star-Forming Complex Study in Infrared and X-ray \citep[MYStIX;][]{Feigelson+13, Broos+13} has characterized hundreds of OB stars in $\sim$20 young Galactic star-forming regions within 4 kpc, and $\sim$100 more obscured OB stars in the MYStIX point-source catalog have recently been found by \citet{Povich+17}. The MSFRs Omnibus X-ray Catalog \citep[MOXC;][]{Townsley+14} produced X-ray point-source catalogs and diffuse emission maps from archival {\it Chandra X-ray Observatory} data on seven MYStIX MSFRs and four additional Galactic MSFRs out to 7 kpc (plus 30 Doradus in the Large Magellanic Cloud).
To better understand the interplay between massive stars and IR/radio nebular tracers of star formation, we have conducted a study of 28 Galactic MSFRs, with 21 drawn from the MYStIX and MOXC surveys, and seven additional, prominent regions that have similar high-resolution X-ray through mid-IR archival data available. We construct SEDs by performing aperture photometry on data from the {\it Spitzer Space Telescope}, the {\it Midcourse Space Experiment (MSX)}, the {\it Infrared Astronomical Satellite (IRAS)}, the {\it Herschel Space Observatory}, and the {\it Planck}\xspace satellite. We then fit a multi-component \citet{Draine+Li07} dust, blackbody, and power-law continuum model to the mid-IR through radio SEDs for each region to measure $L_{\rm TIR}$, constrain dust properties, and search for evidence of supernova contamination in the radio continuum. We use the MYStIX point-source database of X-ray and IR-detected OB stars, along with supplementary lists of massive stars from the literature for non-MYStIX targets, to predict the ionizing photon rate injected into each region and to calculate the fraction of emitted luminosity that is reprocessed by dust.
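As a schematic illustration of the SED-fitting step (not the multi-component \citet{Draine+Li07} model itself, which is described in Section~\ref{section:SED_model_description}), the sketch below fits a single-temperature modified blackbody to synthetic far-IR photometry:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI units

def greybody(nu_hz, scale, temp, beta=2.0):
    """Single-temperature modified blackbody: F_nu = scale*nu^beta*B_nu."""
    b_nu = 2 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k_B * temp))
    return scale * nu_hz**beta * b_nu

wav_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])  # PACS/SPIRE bands
nu = c / (wav_um * 1e-6)
flux = greybody(nu, 1.5e-7, 28.0)           # synthetic "photometry"
(scale, temp), _ = curve_fit(greybody, nu, flux, p0=(1e-7, 25.0))
print(f"recovered dust temperature: {temp:.1f} K")
\end{verbatim}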
This paper is organized as follows: Section~\ref{section:observations} describes the data sources used in this paper. Section~\ref{section:SED_model_description} describes our SED modeling procedure, while Section~\ref{section:SED_fit_results} summarizes the trends observed in the resulting fits. In Section~\ref{section:reprocessing} we discuss the relationship between the MSFRs and their ionizing stellar clusters. In Section~\ref{section:lum_and_SFRs} we discuss commonly used SFR indicators that rely on monochromatic luminosities, and investigate the differences in predicted SFRs these indicators yield when applied to our sample of MSFRs.
\section{Targets and Observations}\label{section:observations}
We targeted MSFRs that could plausibly appear as compact IR sources to an extragalactic observer studying the Milky Way with a spatial resolution of ${\sim}100$~pc. The essential criteria for selecting MSFRs for inclusion in this study were (1) bright, localized mid-IR nebular emission and (2) availability of high-resolution X-ray through IR imaging data that provide spatially-resolved information about the nebular morphology and associated stellar populations. We included as many high-luminosity regions hosting rich massive clusters as possible. Some prominent regions, notably the very massive Arches and Quintuplet clusters near the Galactic center, were omitted because any localized nebular emission is indistinguishable from the extremely bright diffuse IR--radio background. Table~\ref{table:targets} lists the basic properties of our MSFR sample, including Galactic coordinates, distance from the Sun, and spectral type(s) of the dominant ionizing star(s). See Appendix~\ref{appendix:region_discussion} for details of the OB stellar population in each region. While these MSFRs represent a wide range of masses, luminosities, heliocentric distances, and spatial morphologies, we caution that our sample cannot be considered an unbiased sample of Galactic \HII regions or young massive clusters. Our selection criteria favor younger, nearer, and more massive regions.
Distances to Galactic MSFRs and their associated young clusters have historically been difficult to measure. A handful of regions (e.g., the Orion Nebula, M17, W3, W51A, and NGC~6334) have had distances measured from multi-epoch very long baseline interferometry (VLBI) parallax measurements of maser spots associated with high-mass protostars. Other techniques to estimate distances include fitting of the high-mass main sequence on the HR diagram, utilizing extinction maps of background stars, or deriving distance constraints from the X-ray luminosity function or molecular cloud radial velocities. All of these techniques are subject to considerable uncertainties (e.g., incorrect accounting for binarity, differential absorption for individual stars, or peculiar velocities deviating from Galactic rotation). For example, over a dozen estimates for the distance to the Lagoon Nebula are presented in \citet{Tothill+08}, most of which fall in the range of 1.3--1.8 kpc.
We searched the Gaia DR2 database \citep{Gaia2018} and found reliable parallax measurements for 193 cataloged OB stars associated with 19 of our MSFRs. Reliable parallaxes had Gaia $g$-band average magnitudes brighter than 15 and typical relative parallax uncertainties ${<}10\%$, with a few exceptions showing larger uncertainties. We computed the uncertainty-weighted mean parallax distance among the OB stars within each MSFR, rejecting ${>}3\sigma$ outliers.
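A minimal sketch of this computation follows; the exact outlier-rejection scheme here (clipping relative to the weighted mean using each star's own uncertainty) is an illustrative choice.
\begin{verbatim}
import numpy as np

def parallax_distance(plx_mas, err_mas, clip_sigma=3.0):
    """Inverse-variance weighted mean parallax, with one pass of
    >clip_sigma outlier rejection, converted to distance in kpc."""
    plx, err = np.asarray(plx_mas, float), np.asarray(err_mas, float)
    w = 1.0 / err**2
    mean = np.sum(w * plx) / np.sum(w)
    keep = np.abs(plx - mean) < clip_sigma * err     # reject outliers
    plx, err, w = plx[keep], err[keep], w[keep]
    mean = np.sum(w * plx) / np.sum(w)
    mean_err = 1.0 / np.sqrt(np.sum(w))
    dist = 1.0 / mean                # parallax in mas -> distance in kpc
    return dist, dist * mean_err / mean   # first-order uncertainty

# Three Orion-like parallaxes (mas) give ~0.41 kpc:
print(parallax_distance([2.40, 2.50, 2.45], [0.05, 0.06, 0.05]))
\end{verbatim}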
New, reliable parallax distances are available for 17 of the 28 MSFRs. In all cases these distances fall within the (sometimes very wide) range of previously-published distance estimates and provide a significant improvement in precision. In four cases (the Flame Nebula, W40, the Trifid Nebula, and Berkeley~87) these distances are based on a single star, but we nevertheless judge them to be more reliable than previous distance estimates. For regions without Gaia parallaxes, an apparently more precise distance may simply reflect the smaller number of distance estimates available in the literature. We adopt the distances listed in Table~\ref{table:targets}.
\begin{table}[ht]\setlength{\tabcolsep}{2.5pt}
\centering
\scriptsize
\caption{Massive Star-Forming Region Sample}
\begin{tabular}{ccccc}
\hline \hline
& ($l, b$) & \multicolumn{2}{c}{Distance} & Earliest \\ \cline{3-4}
Name & (J2000) & (kpc) & Reference & Sp. Type \\
(1) & (2) & (3) & (4) & (5) \\
\hline
Flame Nebula & 206.5--16.3 & 0.33$\pm$0.01 & 1 & O9V \\
Orion Nebula & 209.0--19.4 & 0.41$\pm$0.01 & 1 & O7V \\
W40 & 028.8+03.5 & 0.49$\pm$0.05 & 1 & O9.5V \\
RCW~36 & 265.1+01.4 & 1.09$\pm$0.09 & 1 & O9V \\
Lagoon Nebula & 006.0--01.2 & 1.17$\pm$0.10 & 1 & O4V \\
Trifid Nebula & 007.1--00.3 & 1.57$\pm$0.21 & 1 & O7V \\
NGC~6334 & 351.1+00.5 & 1.63$\pm$0.16 & 1 & O7V \\
RCW~38 & 268.0--01.1 & 1.7$\pm$0.9 & 2 & O5.5V \\
Eagle Nebula & 017.0+00.8 & 1.71$\pm$0.18 & 1 & O5V \\
Berkeley~87 & 075.7+00.3 & 1.74$\pm$0.09 & 1 & WC5 \\
NGC~6357 & 353.2+00.9 & 1.78$\pm$0.18 & 1 & O3.5III \\
M17 & 015.1--00.7 & 1.82$\pm$0.16 & 1 & O4V \\
W3 & 133.9+01.1 & 2.18$\pm$0.12 & 1 & O6V \\
W42 & 025.4--00.2 & 2.2 & 3 & O5V \\
W4 & 134.7+00.9 & 2.24$\pm$0.17 & 1 & O4I \\
W33 & 012.8--00.2 & 2.40$^{+0.17}_{-0.15}$ & 4 & O5I \\
G333 & 333.6--00.2 & 2.6 & 5 & O5V \\
NGC~7538 & 111.5+00.8 & 2.65$^{+0.12}_{-0.11}$ & 6 & O5V \\
Carina Nebula & 287.7--00.8 & 2.69$\pm$0.40 & 1 & LBV \\
NGC~3576 & 291.3--00.8 & 2.77$\pm$0.31 & 1 & O7.5V \\
G305 & 305.3+00.1 & 3.59$\pm$0.85 & 1 & O5.5I \\
Westerlund~1 (Wd~1) & 339.5--00.4 & 3.9$\pm$0.6 & 7 & O9.5I \\
RCW~49 & 284.3--00.3 & 4.4$\pm$1.0 & 1 & O3V \\
W51A & 049.5--00.3 & 5.1$^{+2.9}_{-1.4}$ & 8 & O4V \\
W43 & 030.8--00.0 & 5.5$^{+0.4}_{-0.3}$ & 9 & O3.5III \\
G29.96--0.02 & 030.0--00.0 & 6.2 & 10 & O5III \\
NGC~3603 & 291.6--00.5 & 7.0 & 11 & O3V \\
W49A & 043.2+00.0 & 11.4$\pm$1.2 & 12 & O3I \\
\hline \hline
\end{tabular}\label{table:targets}
\tablecomments{MSFRs are listed in order of increasing heliocentric distance. Distance references are: 1: \citet{Gaia2018}, 2: \citet{Schneider+10}, 3: \citet{Blum+00}, 4: \citet{Immer+13}, 5: \citet{Figueredo+05}, 6: \citet{Moscadelli+09}, 7: \citet{Koumpia+Bonanos12}, 8: \citet{Xu+09}, 9: \citet{Zhang+14}, 10: \citet{Russeil+11}, 11: \citet{Harayama+08}, and 12: \citet{Gwinn+92}. For a discussion of the spectral types found in each stellar population (including references), see Appendix~\ref{appendix:region_discussion}.}
\end{table}
Although the MSFRs in our sample are very young ($<$5 Myr), even over this short timescale dramatic evolutionary changes to the density, temperature, and morphology of the gas and dust can and do occur. The MSFRs in our sample range from highly-embedded \HII regions where the bulk of the stellar luminosity is reprocessed by dust (e.g., the Flame Nebula and W51A) to relatively unobscured \HII regions that have been largely evacuated of dust (e.g., W4, Wd~1). However, age-dating methods for \HII regions and their associated stellar populations are heterogeneous and suffer from large uncertainties, so we will not attempt to place the MSFRs in our sample into an evolutionary sequence. A detailed analysis of the ages and SFRs in these regions will be the subject of a forthcoming paper.
\subsection{Observations} \label{subsec:observations}
IR data from {\it Spitzer}\xspace, {\it MSX}\xspace, and {\it IRAS}\xspace and radio data from {\it Planck}\xspace were retrieved using the NASA/IPAC Infrared Science Archive (IRSA)\footnote{See \url{http://irsa.ipac.caltech.edu}}.
\subsubsection{{\it Spitzer}\xspace}
The majority (23) of our target MSFRs were included in the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire \citep[GLIMPSE;][]{Benjamin+03} or follow-up surveys \citep[GLIMPSE II, GLIMPSE 3D, GLIMPSE 360, or Vela-Carina surveys;][]{Churchwell+09,Zasowski+09,Povich+11a} using the four {\it Spitzer}\xspace IRAC bands, centered at 3.6, 4.5, 5.8, and 8.0 \micron\xspace\citep{Fazio+04}. High-resolution (1\farcs2 pixels) mosaics were created by the GLIMPSE pipeline\footnote{See \url{http://www.astro.wisc.edu/glimpse/}.} from Basic Calibrated Data (BCD) image frames processed by the {\it Spitzer}\xspace Science Center (SSC). The GLIMPSE pipeline removes artifacts such as stray light (from all bands), muxbleed (3.6 and 4.5 \micron~bands), and banding (5.8 and 8.0 \micron~bands). The SSC Mopex package \citep{Makovoz+06} is used to mask image artifacts (primarily cosmic rays), and the IPAC Montage packages were used to mosaic the images \citep{Berriman+02}.
The remaining five MSFRs were included in the MYStIX survey, and for these we use mosaic images produced by \citet{Kuhn+13} from publicly-available {\it Spitzer}\xspace/IRAC archival observations. The majority of our targets were also observed at 24 and 70~\micron\ using the Multiband Imaging Photometer for {\it Spitzer}\xspace \citep[many as part of the MIPSGAL survey;][]{Carey+09}. Because many of our MSFRs are extremely mid-IR bright, the MIPS 24~\micron\ images frequently become saturated, and {\it Herschel}\xspace offers superior sensitivity and photometric calibration at 70~\micron. For these reasons we do not use MIPS data for this study.
\subsubsection{MSX}
The Spirit III instrument on board the {\it MSX}\xspace satellite surveyed the Galactic plane in four IR bands \citep{Price+01}: A (8.28 \micron), C (12.13 \micron), D (14.65 \micron), and E (21.3 \micron). The spatial resolution of Spirit~III was $\sim$18\farcs3. Although its resolution and sensitivity are inferior to those of {\it Spitzer}\xspace, the absolute flux calibration of {\it MSX}\xspace, determined in-flight by measuring the fluxes from projectiles fired away from the spacecraft, is reliable to $\sim$1\% (Price et al. 2004). Hence {\it MSX}\xspace mid-IR fluxes are the most accurate currently available. {\it MSX}\xspace A images provide the benchmark against which IRAC [8.0] fluxes can be compared \citep{Cohen+07}, and {\it MSX}\xspace E provides a substitute for saturated {\it Spitzer}\xspace/MIPS 24~\micron\ images.
\subsubsection{IRAS}
From January to November 1983 the {\it Infrared Astronomical Satellite} ({\it IRAS}\xspace) mapped 98\% of the sky in four IR bands. These bands have effective wavelengths of 12, 25, 60, and 100 \micron\xspace\citep{Beichman+98}. Although the sensitivity of {\it IRAS}\xspace is comparable to that of {\it MSX}\xspace, its resolution was much lower, with 1\farcm5 pixels. We use the Improved Reprocessing of the {\it IRAS}\xspace Survey (IRIS) products available through IRSA, which benefit from improved zodiacal light subtraction, a calibration and zero level compatible with {\it DIRBE}, and better destriping, particularly in the 12 and 25 \micron\xspace bands \citep{Miville+Lagache05}. All IR images were downsampled to the 1\farcm5 pixel scale of {\it IRAS}\xspace before aperture photometry and analysis were performed.
\subsubsection{Herschel}
All target MSFRs were included in one or more of the {\it Herschel}\xspace Hi-Gal, HOBYS, or Gould Belt surveys \citep{Molinari+10,Motte+10,Andre+10}. Far-infrared and submillimeter images were downloaded from the {\it Herschel}\xspace Space Observatory Science Archive\footnote{See \url{http://archives.esac.esa.int/hsa/whsa/}} using the {\it Herschel}\xspace Interactive Processing Environment \citep[HIPE;][]{Ott10}. Level 2.5 images were retrieved for both PACS and SPIRE observations. The default JScanam map-maker was selected for all PACS observations. The beam sizes of the 70 \micron, 160 \micron, 250 \micron, 350 \micron, and 500 \micron\xspace images are 5\farcs8, 12\farcs0, 18\farcs0, 25\farcs0, and 37\farcs0, respectively \citep{Griffin+10,Poglitsch+10}.
\subsubsection{Planck}
We retrieved {\it Planck}\xspace cut-out images\footnote{See \url{https://irsa.ipac.caltech.edu/applications/planck/}} at 30 GHz, 44 GHz, 70 GHz, and 100 GHz centered on each MSFR. The images were scaled to be four or eight times the FWHM of the effective beam at each frequency to provide an adequate estimate of the background. The effective FWHM in the 30, 44, 70, and 100 GHz channels are 32\farcm41, 27\farcm10, 13\farcm32, and 9\farcm69, respectively. The flux density is estimated by integrating the data in a circular aperture centered at the position of the source (see next section for details).
Many of the MSFRs in our sample have been studied with a variety of radio facilities at different frequencies. We do not use these historical measurements, which frequently give disparate results even for a single region \citep[e.g., as in M17;][]{Povich+07}, in our analysis. Instead, our analysis utilizes the homogeneous {\it Planck}\xspace data, which had excellent absolute surface brightness calibration and covered the entire sky with sufficient angular resolution to measure the radio continuum for individual star forming regions. Cases where previous measurements of the radio continuum are available for MSFRs are discussed in Appendix~\ref{appendix:region_discussion}.
\subsection{Aperture Photometry}\label{subsection:aperture_photometry}
To construct multiwavelength SEDs, we performed aperture photometry on the IR and radio images of all MSFRs in our sample. Circular aperture sizes were determined by first extracting the surface brightness profile of each MSFR in the IRAC 8.0 \micron\xspace band, centered on the cluster location given in Table~\ref{table:targets}. The surface brightness as a function of distance was then fit with a decaying exponential function plus a constant background. The ``global'' MSFR aperture was defined by the radius within which 99\% of the source surface brightness was enclosed. Circular background apertures were selected by visual inspection near the outer edge of the source apertures, and were required to possess an average surface brightness consistent with the constant background level found in the full surface brightness profile. In a few cases, the spatial extent of the MSFR made extracting IRAC 8 \micron\xspace surface brightness profiles out to the background level impossible, as large segments of the nebula run off the edge of the {\it Spitzer}\xspace field of view before the background is reached. In other cases, {\it Spitzer}\xspace 8 \micron\xspace images were missing completely. For these regions, we use the {\it MSX}\xspace A channel (8.28 \micron) to extract the surface brightness profile and define source apertures.
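A sketch of this aperture determination follows; the closed-form 99\% criterion below, based on the flux enclosed by the fitted exponential, is one way to operationalize the procedure, and the profile data are synthetic.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq, curve_fit

def profile(r, amp, r0, bkg):
    """Decaying exponential source profile plus constant background."""
    return amp * np.exp(-r / r0) + bkg

# r, sb: the measured radial surface-brightness profile (synthetic here).
r = np.linspace(0.1, 30.0, 60)
sb = profile(r, amp=100.0, r0=3.0, bkg=5.0)
(amp, r0, bkg), _ = curve_fit(profile, r, sb, p0=(50.0, 5.0, 1.0))

# Solve 1 - (1 + x)exp(-x) = 0.99 for x = R/r0, the radius enclosing
# 99% of the background-subtracted light of the fitted exponential.
x99 = brentq(lambda x: 1.0 - (1.0 + x) * np.exp(-x) - 0.99, 1.0, 50.0)
print(f"aperture radius: {x99 * r0:.1f} (same units as r)")
\end{verbatim}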
In Figure~\ref{figure:surface_brightness}, we present an RGB-rendered finding chart of the Orion Nebula, with the extraction aperture superimposed; we additionally show the surface brightness profiles that were extracted using the IRAC 8 \micron, {\it MSX}\xspace E channel (21.3 \micron), and PACS 70 \micron\ images. The {\it MSX}\xspace E and PACS 70 \micron\ profiles have been renormalized to equal the 8 \micron\ surface brightness at the radius of the extraction aperture. The remaining finding charts and surface brightness profiles are shown in Figures~\ref{figure:surface_brightness}.1 through \ref{figure:surface_brightness}.28 in the online figure set. A detailed analysis of the spatially-resolved SEDs will be presented in a forthcoming paper.
\figsetstart
\figsetnum{1}
\figsettitle{The RGB rendered finding charts and surface brightness profiles for the 28 MSFRs in our sample.}
\figsetgrpstart
\figsetgrpnum{1.1}
\figsetgrptitle{Flame Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Flame.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Flame.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Flame Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.2}
\figsetgrptitle{W40 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W40.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W40.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W40. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.3}
\figsetgrptitle{Westerlund 1 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Westerlund 1.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Westerlund 1.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for Wd 1. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.4}
\figsetgrptitle{RCW36 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_RCW36.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_RCW36.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for RCW36. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.5}
\figsetgrptitle{Berkeley 87 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Berkeley87.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Berkeley87.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for Berkeley 87. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.6}
\figsetgrptitle{Orion Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Orion.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Orion.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Orion Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.7}
\figsetgrptitle{Lagoon Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Lagoon.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Lagoon.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Lagoon Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace SPIRE 250 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.8}
\figsetgrptitle{Trifid Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Trifid.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Trifid.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Trifid Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.9}
\figsetgrptitle{W42 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W42.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W42.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W42. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.10}
\figsetgrptitle{NGC 7538 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_N7538.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_N7538.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for NGC 7538. Blue is {\it MSX}\xspace A (8.28 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.11}
\figsetgrptitle{W4 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W4.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W4.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W4. Blue is {\it MSX}\xspace A (8.28 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.12}
\figsetgrptitle{Eagle Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Eagle.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Eagle.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Eagle Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.13}
\figsetgrptitle{W33 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W33.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W33.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W33. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.14}
\figsetgrptitle{RCW38 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_RCW38.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_RCW38.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for RCW38. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.15}
\figsetgrptitle{W3 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W3.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W3.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W3. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.16}
\figsetgrptitle{NGC 3576 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_N3576.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_N3576.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for NGC 3576. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.17}
\figsetgrptitle{NGC 6334 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_N6334.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_N6334.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for NGC 6334. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.18}
\figsetgrptitle{G29.96-0.02 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_G29.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_G29.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for G29.96-0.02. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.19}
\figsetgrptitle{NGC 6357 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_N6357.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_N6357.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for NGC 6357. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.20}
\figsetgrptitle{M17 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_M17.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_M17.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for M17. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.21}
\figsetgrptitle{G333 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_G333.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_G333.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for G333. Blue is {\it MSX}\xspace A (8.28 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace SPIRE 250 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.22}
\figsetgrptitle{W43 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W43.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W43.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W43. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.23}
\figsetgrptitle{RCW49 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_RCW49.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_RCW49.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for RCW49. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.24}
\figsetgrptitle{G305 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_G305.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_G305.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for G305. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.25}
\figsetgrptitle{W49A Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W49A.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W49A.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W49A. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.26}
\figsetgrptitle{Carina Nebula Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_Carina.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_Carina.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for the Carina Nebula. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.27}
\figsetgrptitle{W51A Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_W51A.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_W51A.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for W51A. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{1.28}
\figsetgrptitle{NGC 3603 Finding Chart and Surface Brightness Profile}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{fc_N3603.pdf}}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sb_N3603.pdf}}
\figsetgrpnote{The RGB-rendered finding chart and surface brightness profile for NGC 3603. Blue is {\it Spitzer}\xspace IRAC 4 (8 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron.}
\figsetgrpend
\figsetend
\begin{figure}[htp]
\begin{tabular}{c}
\includegraphics[width=0.95\linewidth,clip,trim=1cm 5cm 1cm 5cm]{fc_Orion.pdf} \\
\includegraphics[width=1\linewidth,clip,trim=3.1cm 13.1cm 2.3cm 3.3cm]{sb_Orion.pdf}
\end{tabular}
\caption{{\it Top:} The RGB image (using a logarithmic stretch function) of the Orion Nebula, with the extraction aperture shown in white. Blue is {\it Spitzer}\xspace IRAC 4 (8.0 \micron), green is {\it MSX}\xspace E (21.3 \micron), and red is {\it Herschel}\xspace PACS 70 \micron. {\it Bottom:} Surface brightness profiles from {\it Spitzer}\xspace IRAC [8.0], {\it MSX}\xspace E, and PACS 70 \micron\xspace. The vertical red line indicates the outermost radius from within which the SEDs for the region were extracted. The horizontal dotted line indicates the background flux level. The complete figure set (56 images) is available in the online journal.}
\label{figure:surface_brightness}
\end{figure}
In many cases, the 8 \micron\xspace aperture radius was smaller than the {\it Planck}\xspace FWHM (especially at 100 GHz and 70 GHz), which would have led to a potentially significant loss of radio flux. We therefore used the larger of the IR-derived aperture and the {\it Planck}\xspace FWHM at each frequency to compute the radio flux. For some MSFRs in crowded regions, source confusion (particularly at lower frequencies) was a serious issue; in such cases we measured the radio flux only in the frequency range where the MSFR is clearly resolved. The aperture size $r_{\rm ap}$ used to compute the radio flux for each region is given in Appendix~\ref{appendix:supplemental_figs}.
Data obtained from the various missions used in this work are reported in different units (for example, {\it Herschel}\xspace/PACS images are calibrated in units of Jy pixel$^{-1}$, while SPIRE images are in MJy sr$^{-1}$). We therefore first converted all images to MJy sr$^{-1}$, integrated the intensity over the apertures defined above, and converted each aperture-integrated value to a flux density $S_0$ (Jy). The background-subtracted flux densities were calculated as
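For concreteness, the aperture integration can be sketched in a few lines of Python; this is a minimal illustration assuming a boolean aperture mask and a uniform pixel scale, and all function and variable names are ours rather than part of any published pipeline:
\begin{verbatim}
import numpy as np

SR_PER_ARCSEC2 = (np.pi / (180.0 * 3600.0))**2  # steradian per arcsec^2

def aperture_flux_jy(intensity_mjy_sr, pix_scale_arcsec, mask):
    """Sum an intensity image (MJy/sr) over an aperture mask and
    return the integrated flux density S_0 in Jy."""
    omega_pix = (pix_scale_arcsec**2) * SR_PER_ARCSEC2  # sr per pixel
    return np.sum(intensity_mjy_sr[mask]) * omega_pix * 1.0e6
\end{verbatim}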
\begin{equation}
S = S_0 - B n_{\rm pix},
\end{equation}
\noindent where $n_{\rm pix}$ is the number of pixels contained within the source aperture and $B$ is the background level. The uncertainties are estimated as
\begin{equation}
\frac{\delta S}{S} = \frac{Bn_{\rm pix}}{S_0}.
\end{equation}
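The background subtraction and its uncertainty then follow directly from the two equations above; a minimal sketch with illustrative names:
\begin{verbatim}
def background_subtract(S0_jy, B_jy_per_pix, n_pix):
    """Background-subtracted flux density and its uncertainty,
    following S = S0 - B*n_pix and dS/S = B*n_pix/S0."""
    S = S0_jy - B_jy_per_pix * n_pix
    dS = S * (B_jy_per_pix * n_pix / S0_jy)
    return S, dS
\end{verbatim}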
Additional sources of systematic error affect the absolute diffuse flux calibrations of IRAC images. \citet{Cohen+07} compared the IRAC 8 \micron\xspace and {\it MSX}\xspace 8.3 \micron\xspace diffuse fluxes for a sample of 43 Galactic \HII regions and found, after correcting for the difference in bandpasses, that the present calibration of the IRAC 8 \micron\xspace band tends to overestimate diffuse fluxes by 36\%. This discrepancy is attributed to scattered light inside the camera, which likely affects the IRAC 5.8 \micron\xspace band as well. Aperture correction factors have been estimated for all four bands; we adopt the SSC ``infinite-aperture'' correction factors\footnote{See \url{http://ssc.spitzer.caltech.edu/irac/calib/extcal}} and multiply our flux densities at 3.6, 4.5, 5.8, and 8.0 \micron\xspace by factors of 0.91, 0.94, 0.73, and 0.74, respectively. All of our measured flux densities, aperture central coordinates, and adopted radio apertures are reported in Table~\ref{table:all_photometry} in the Appendix.
\section{The Spectral Energy Distributions}\label{section:SED_model_description}
Our SED model contains three basic components: a set of dust models to describe the ``warm'' dust component ($\sim$3--100 \micron), a ``cool'' blackbody component to describe the far-IR ($\sim$20 K, at $\sim$100--500 \micron) observations, and the radio continuum:
\begin{equation}
S_{\nu} = S_{\rm dust} + S_{\rm blackbody} + S_{\rm power\,law}.
\end{equation}
\noindent We discuss each component in detail below.
\subsection{The ``Warm'' Dust Model}
We employ the dust emissivity models of \citet{Draine+Li07}, using the Galactic ``MW'' grain size distribution models \citep{Weingartner+01}. These models assume a canonical extinction law with $R_V \equiv A_V/E(B-V)=3.1$. This extinction law may underestimate the dust emissivity at longer wavelengths (e.g., where the colder dust, described below, begins to dominate the SED). For this reason, we do not attempt to constrain the dust emissivity or dust mass using the \citet{Draine+Li07} models; the primary purpose of the SED modeling is to obtain accurate IR luminosities of the MSFRs. The dust is assumed to be a mixture of amorphous silicate and graphitic grains, heated by starlight, with the smallest carbonaceous grains having the physical properties of polycyclic aromatic hydrocarbon (PAH) molecules. The size distributions of these particles are chosen to reproduce the wavelength-dependent extinction in the Milky Way. The silicate and carbonaceous content of the dust grains was constrained by observations of the gas-phase depletions in the interstellar medium \citep{Weingartner+01}. The PAH abundance in each model is characterized by the index $q_{\rm PAH}$, defined to be the percentage of the total grain mass contributed by PAHs containing fewer than 10$^3$ C atoms, which can range from 0.46\% to 4.6\%.
In addition to the physical dust mixture, the models also specify the intensity of the radiation field that is heating the dust grains. The IR emission spectrum is relatively insensitive to the detailed spectrum of the $h\nu<13.6$ eV photons, and the \citet{Draine+Li07} dust models simply adopt the spectrum of the local interstellar radiation field (ISRF). The specific energy density of starlight is therefore taken to be
\begin{equation}
u_{\nu} = U u_{\nu}^{\rm (MMP83)},
\end{equation}
\noindent where $u_{\nu}^{\rm (MMP83)}$ is the specific energy density estimated by \citet{Mathis+83} for the local Galactic ISRF and $U$ is a dimensionless scale factor. In order to account for the range of starlight intensities that may be present in MSFRs, we parameterize the starlight as the sum of two contributions: one describing the radiation field due to the central ionizing cluster, assumed to be a delta function with $U_{\rm min,1} = U_{\rm max,1} = U_1$, and the other describing a distribution of stellar intensities spanning $U_{\rm min,2}$ to $U_{\rm max,2}$. The second contribution allows the stellar radiation field to decrease with increasing distance from the principal ionizing OB star(s) and as a result of attenuation by intervening dust. The flux density of the total warm dust model used in our fits is therefore given by
\begin{equation}
f_{\rm dust}(\lambda) = N_{\rm dust} \left[\gamma f_{\rm dust,1}\left( \lambda \right) + (1-\gamma) f_{\rm dust,2}\left( \lambda \right) \right],
\end{equation}
\noindent where $f_{\rm dust,1}(\lambda)$ is the contribution from the $\delta$-function radiation field and $f_{\rm dust,2}(\lambda)$ is the contribution from the range of stellar intensities. The total warm dust model is defined by the PAH fraction of each component ($q_{\rm PAH,1}$ and $q_{\rm PAH,2}$), the minimum and maximum stellar radiation fields experienced by component two ($U_{\rm min,2}$ and $U_{\rm max,2}$), the radiation field experienced by component one ($U_1$), the fraction of flux density emitted by each component ($\gamma$), and a normalization constant ($N_{\rm dust}$). Typically, $U_{\rm min,2}$ spans 0.1--1.0 while $U_1 = U_{\rm max,2} = 10^3$--$10^5$, and $\gamma$ is small ($\sim$10$^{-5}$). For regions without complete {\it Spitzer}\xspace coverage, this dust component is almost completely unconstrained. In these cases, we utilize a single dust component and report only $U_1$ and $q_{\rm PAH}$.
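Given precomputed component SEDs on a common wavelength grid, the mixture above reduces to a weighted sum; a sketch (array inputs assumed, names illustrative):
\begin{verbatim}
def warm_dust(f_dust_1, f_dust_2, gamma, n_dust):
    """Two-component warm dust mixture:
    N_dust * [gamma*f_1 + (1 - gamma)*f_2]."""
    return n_dust * (gamma * f_dust_1 + (1.0 - gamma) * f_dust_2)
\end{verbatim}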
\subsection{The ``Cool'' Blackbody}
A single-temperature blackbody modified by an emissivity law proportional to $\lambda^{-\beta}$ is used to fit the cool ($\sim$20--30 K) dust component of the MSFRs, captured primarily by the SPIRE 250 \micron, 350 \micron, and 500 \micron\xspace channels. We refer to this component as the ``cool'' blackbody to differentiate it from ``cold'' ($\leq$10 K) dust in the ISM. Laboratory studies of interstellar dust analogs have found that $\beta\sim$1--2 for carbonaceous grains \citep{Mennella+95,Zubko+96,Jager+98} and $\beta\sim$2 for silicate grains \citep{Mennella+98,Boudet+05,Coupeaud+11} at FIR wavelengths. The effective value of $\beta$ for interstellar dust depends on the interstellar dust mixture and the interstellar radiation field. We assume $\beta$ = 1.5, consistent with observational constraints from SPIRE \citep[e.g.,][]{Dunne+Eales01,Paradis+09,Gordon+10,Gordon+14,Skibba+11}. The inferred dust temperatures depend only weakly on the assumed emissivity, with $\beta$ = 2 yielding temperatures systematically lower by a few degrees \citep{Bendo+03}. With $\beta$ fixed, the blackbody component of our SED model is defined only by the dust temperature ($T_{\rm BB}$) and a normalization constant ($N_{\rm BB}$).
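The modified blackbody is likewise straightforward to evaluate; the sketch below folds the $\lambda^{-\beta}$ emissivity into an equivalent $\nu^{\beta}$ factor and lets $N_{\rm BB}$ absorb all absolute scaling (an illustrative implementation, not a unique convention):
\begin{verbatim}
import numpy as np

H_PLANCK = 6.62607015e-34  # J s
K_BOLTZ = 1.380649e-23     # J/K
C_LIGHT = 2.99792458e8     # m/s

def cool_blackbody(nu_hz, t_bb, n_bb, beta=1.5):
    """Modified blackbody N_BB * nu^beta * B_nu(T_BB); a
    lambda^-beta emissivity is equivalent to a nu^beta factor."""
    b_nu = (2.0 * H_PLANCK * nu_hz**3 / C_LIGHT**2) \
        / np.expm1(H_PLANCK * nu_hz / (K_BOLTZ * t_bb))
    return n_bb * nu_hz**beta * b_nu
\end{verbatim}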
We note that the dust opacity and total dust mass will depend on the normalization constants $N_{\rm BB}$ and $N_{\rm dust}$. Due to the uncertainties of the dust properties, we do not attempt to estimate the dust mass for any of the MSFRs. The normalizations are used only for estimating the total IR luminosity of each region (see Section~\ref{section:SED_fit_results}).
\subsection{The Radio Continuum}
The nebular radio emission from MSFRs \citep[as well as entire star-forming galaxies;][]{Deeg+93} originates from two principal mechanisms: thermal bremsstrahlung (free-free) emission and non-thermal synchrotron radiation from supernovae (SNe). Both free-free and synchrotron radiation produce power law radio continua, with a spectral index $\alpha$ defined by
\begin{equation}
\alpha = \frac{d\log S_{\nu}}{d\log \nu}.
\end{equation}
\noindent We hence adopt the sign convention for which negative values of $\alpha$ indicate decreasing flux density with increasing frequency. Optically thin free-free emission is characterized by $\alpha=-0.1$, while non-thermal synchrotron emission typically yields $\alpha=-0.5$ \citep[e.g., see][and references therein]{Klein+88,Carlstrom+91}. For regions where we are able to estimate the radio flux at all four {\it Planck}\xspace frequencies, both the power law spectral index $\alpha$ and the power law normalization are free parameters in our fit. For regions where source confusion is an issue, and only one or two radio flux measurements are available, we assume $\alpha=-0.1$ (corresponding to a pure thermal continuum) and only fit the normalization component.
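The power-law continuum itself is a one-line computation; in the sketch below the 100 GHz pivot frequency is an arbitrary choice of ours, since the normalization $N_{\rm PL}$ absorbs any rescaling:
\begin{verbatim}
def radio_power_law(nu_hz, n_pl, alpha=-0.1, nu0_hz=1.0e11):
    """Radio continuum S_nu = N_PL * (nu/nu0)^alpha; alpha = -0.1
    corresponds to optically thin free-free emission."""
    return n_pl * (nu_hz / nu0_hz)**alpha
\end{verbatim}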
The IRAC [4.5] band, while free of strong PAH emission bands, contains the H~I Br$\alpha$ recombination line at 4.05 \micron, a potentially strong emission feature in \HII regions. Following the method of \citet{Povich+07} we use the thermal continuum flux density from the {\it Planck}\xspace radio observations to calculate the contribution of the Br$\alpha$ line to the IRAC [4.5] flux density, which is typically ${\sim}1$--20\%. We then increase the model-predicted 4.5 \micron\xspace flux by this amount prior to fitting the SEDs.
\subsection{Performing the Fit}
Our model SEDs are well-sampled in wavelength, whereas our observed SEDs are not. We therefore integrate the model SED flux density ($S_{\nu}$) over the (broad) response functions for each filter using
\begin{equation}
S_{\rm band} = \frac{\int S_{\nu} R_E\left( \nu \right) d\nu}{\int \left(\nu / \nu_0 \right)^{-1} R_E\left( \nu \right) d\nu}.
\end{equation}
\noindent $R_E(\nu)$ is the response function for each filter\footnote{Filter profiles were obtained from the SVO Filter Profile Service, \url{http://svo2.cab.inta-csic.es/theory/fps/}.} used in our SED modeling, and $\nu_0$ is the central frequency of the bandpass. For simplicity, we assume a $S(\nu) = \nu^{-1}$ reference spectrum \citep[e.g., as used in the calibration of {\it Herschel}\xspace PACS and SPIRE, where the SED peaks; ][]{Gordon+14}. Although the shape of the underlying \HII spectrum is frequency-dependent, the individual filter bandpasses are significantly narrower than the full SED; thus, changes to the reference spectrum alter the model flux in each filter by only a few percent.
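Numerically, the band-averaging amounts to two quadratures over the tabulated response function; a sketch assuming the $\nu^{-1}$ reference spectrum adopted above:
\begin{verbatim}
import numpy as np

def band_flux(nu_hz, s_nu, response, nu0_hz):
    """Band-average a model spectrum over a filter response,
    normalizing by a nu^-1 reference spectrum pinned at nu0."""
    numerator = np.trapz(s_nu * response, nu_hz)
    denominator = np.trapz((nu0_hz / nu_hz) * response, nu_hz)
    return numerator / denominator
\end{verbatim}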
We use the IDL routine \texttt{mpfitfun} \citep{Markwardt09} to fit the model fluxes to the observed fluxes, and the model yielding the lowest $\chi^2$ per degree of freedom ($\chi^2_r$\xspace) is selected as the best-fit model. Our model is applied to a grid of possible dust model combinations defined by $[q_{\rm PAH,1}, q_{\rm PAH,2}, U_{\rm min,2}, U_{\rm max,2}, U_1]$. The best-fit values of $\gamma$, $N_{\rm dust}$, $T_{\rm BB}$, $N_{\rm BB}$, $\alpha$, $f_{\rm Br\alpha}$, and $N_{\rm PL}$ are determined for each parameter set; these are the parameters that determine the number of degrees of freedom for each model.
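Schematically, the procedure is a brute-force loop over the discrete dust-model grid with a least-squares fit of the continuous parameters at each grid point. The sketch below uses \texttt{scipy} as a stand-in for the IDL \texttt{mpfitfun} workflow; the \texttt{model\_fluxes} callable and the starting values are placeholders, not our actual implementation:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_grid(obs, err, grid_points, model_fluxes):
    """Return the (reduced chi^2, solution) pair with the lowest
    chi^2_r.  theta = [gamma, N_dust, T_BB, N_BB, alpha,
    f_BrAlpha, N_PL]; the starting values are illustrative."""
    theta0 = np.array([1e-5, 1.0, 25.0, 1.0, -0.1, 0.1, 1.0])
    best = (np.inf, None)
    for g in grid_points:  # g = (qPAH1, qPAH2, Umin2, Umax2, U1)
        sol = least_squares(
            lambda th: (model_fluxes(g, th) - obs) / err, theta0)
        chi2_r = np.sum(sol.fun**2) / max(obs.size - theta0.size, 1)
        if chi2_r < best[0]:
            best = (chi2_r, (g, sol.x))
    return best
\end{verbatim}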
\subsection{Results of SED Fitting}\label{section:SED_fit_results}
We calculate the bolometric luminosity $L_{\rm TIR}$ of each \HII region by integrating our best-fit model over the wavelength range probed by our IR photometry (3.6--500 \micron), assuming the distance to each \HII region listed in Table~\ref{table:targets}. We also integrate over the model-predicted fluxes from the warm dust component only, and compute the fraction of the bolometric luminosity that is emitted by the warm dust component ($f_{\rm bol}$).
To examine the robustness of the dust model parameters, we additionally examined the model with the second-lowest $\chi^2_r$\xspace (presented in Table~\ref{table:SED_global_2nd_best} in Appendix~\ref{appendix:supplemental_figs}). In general, the differences in $\chi^2_r$\xspace between the best and second-best models ($\Delta \chi^2$) are small, and the bolometric luminosities inferred from the two models are within 1$\sigma$ of one another for all MSFRs. Only three MSFRs have $\Delta \chi^2 \geq 0.5$ (the Eagle Nebula, the Carina Nebula, and NGC~3603). The radio continua are consistent with thermal emission for all MSFRs for which they could be measured.
The intensity of the thermal radio continuum is proportional to the apparent Lyman continuum photon rate $N_C^{\prime}$. The Lyman continuum photon rate required to maintain ionization is given by
\begin{equation}
\begin{split}
N_C^{\prime} = 7.489 \times10^{46} \left( \frac{\nu}{\text{GHz}} \right)^{0.1} \left( \frac{T_e}{10^4 \text{ K}} \right)^{-0.5} \left( \frac{S_{\nu}}{\text{Jy}} \right) \\
\times \left( \frac{D}{\text{kpc}} \right)^2 \text{ ph s}^{-1},
\end{split}
\end{equation}
\noindent where $S_{\nu}$ is the (thermal) continuum flux density measured by {\it Planck}\xspace, and $D$ is the distance to the source (Table~\ref{table:targets}). We assume an electron temperature of $T_e=10^4$ K \citep{Subrahmanyan+Goss96}. The best-fit parameters for all MSFRs in our sample are summarized in Table~\ref{table:SED_global}, sorted by increasing bolometric luminosity. The table also includes the circular aperture radius used in our photometry analysis, along with the corresponding physical size of the region (note: the precise central coordinates of our apertures are given in Table~\ref{table:all_photometry} in Appendix~\ref{appendix:supplemental_figs}). Figure~\ref{figure:SED_examples} shows the SED and best-fit model for the Orion Nebula; the remaining SEDs are shown in Figures~\ref{figure:SED_examples}.1 through \ref{figure:SED_examples}.28 in the online figure set.
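In code, the apparent ionizing photon rate is a direct transcription of the equation above (inputs in Jy, GHz, kpc, and K; names illustrative):
\begin{verbatim}
def lyman_continuum_rate(s_nu_jy, nu_ghz, d_kpc, t_e=1.0e4):
    """Apparent Lyman continuum photon rate N_C' in photons/s
    from the thermal radio continuum flux density."""
    return (7.489e46 * nu_ghz**0.1 * (t_e / 1.0e4)**(-0.5)
            * s_nu_jy * d_kpc**2)
\end{verbatim}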
\figsetstart
\figsetnum{2}
\figsettitle{The global SEDs with the best-fit models superimposed.}
\figsetgrpstart
\figsetgrpnum{2.1}
\figsetgrptitle{Flame Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Flame.pdf}}
\figsetgrpnote{The SED and best-fit model for the Flame Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.2}
\figsetgrptitle{W40 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W40.pdf}}
\figsetgrpnote{The SED and best-fit model for W40}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.3}
\figsetgrptitle{Westerlund 1 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Westerlund1.pdf}}
\figsetgrpnote{The SED and best-fit model for Wd 1}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.4}
\figsetgrptitle{RCW36 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_RCW36.pdf}}
\figsetgrpnote{The SED and best-fit model for RCW36}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.5}
\figsetgrptitle{Berkeley 87 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Berkeley87.pdf}}
\figsetgrpnote{The SED and best-fit model for Berkeley 87}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.6}
\figsetgrptitle{Orion Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Orion.pdf}}
\figsetgrpnote{The SED and best-fit model for the Orion Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.7}
\figsetgrptitle{Lagoon Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Lagoon.pdf}}
\figsetgrpnote{The SED and best-fit model for the Lagoon Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.8}
\figsetgrptitle{Trifid Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Trifid.pdf}}
\figsetgrpnote{The SED and best-fit model for the Trifid Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.9}
\figsetgrptitle{W42 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W42.pdf}}
\figsetgrpnote{The SED and best-fit model for W42}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.10}
\figsetgrptitle{NGC 7538 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_N7538.pdf}}
\figsetgrpnote{The SED and best-fit model for NGC 7538}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.11}
\figsetgrptitle{W4 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W4.pdf}}
\figsetgrpnote{The SED and best-fit model for W4}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.12}
\figsetgrptitle{Eagle Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Eagle.pdf}}
\figsetgrpnote{The SED and best-fit model for the Eagle Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.13}
\figsetgrptitle{W33 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W33.pdf}}
\figsetgrpnote{The SED and best-fit model for W33}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.14}
\figsetgrptitle{RCW38 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_RCW38.pdf}}
\figsetgrpnote{The SED and best-fit model for RCW38}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.15}
\figsetgrptitle{W3 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W3.pdf}}
\figsetgrpnote{The SED and best-fit model for W3}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.16}
\figsetgrptitle{NGC 3576 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_N3576.pdf}}
\figsetgrpnote{The SED and best-fit model for NGC 3576}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.17}
\figsetgrptitle{NGC 6334 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_N6334.pdf}}
\figsetgrpnote{The SED and best-fit model for NGC 6334}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.18}
\figsetgrptitle{G29.96-0.02 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_G29.pdf}}
\figsetgrpnote{The SED and best-fit model for G29.96-0.02}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.19}
\figsetgrptitle{NGC 6357 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_N6357.pdf}}
\figsetgrpnote{The SED and best-fit model for NGC 6357}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.20}
\figsetgrptitle{M17 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_M17.pdf}}
\figsetgrpnote{The SED and best-fit model for M17}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.21}
\figsetgrptitle{G333 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_G333.pdf}}
\figsetgrpnote{The SED and best-fit model for G333}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.22}
\figsetgrptitle{W43 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W43.pdf}}
\figsetgrpnote{The SED and best-fit model for W43}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.23}
\figsetgrptitle{RCW49 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_RCW49.pdf}}
\figsetgrpnote{The SED and best-fit model for RCW49}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.24}
\figsetgrptitle{G305 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_G305.pdf}}
\figsetgrpnote{The SED and best-fit model for G305}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.25}
\figsetgrptitle{W49A SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W49A.pdf}}
\figsetgrpnote{The SED and best-fit model for W49A}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.26}
\figsetgrptitle{Carina Nebula SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Carina.pdf}}
\figsetgrpnote{The SED and best-fit model for the Carina Nebula}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.27}
\figsetgrptitle{W51A SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_W51A.pdf}}
\figsetgrpnote{The SED and best-fit model for W51A}
\figsetgrpend
\figsetgrpstart
\figsetgrpnum{2.28}
\figsetgrptitle{NGC 3603 SED}
\figsetplot{\includegraphics[width=0.31\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_N3603.pdf}}
\figsetgrpnote{The SED and best-fit model for NGC 3603}
\figsetgrpend
\figsetend
\begin{figure}[htp]
\centering
\includegraphics[width=1\linewidth,clip,trim=1.2cm 12.4cm 3.5cm 3.7cm]{sed_Orion.pdf}
\caption{The 3.6 \micron\ -- 10 mm SED for the Orion Nebula, with the best-fit model superimposed. The complete figure set (28 images) is available in the online journal.}
\label{figure:SED_examples}
\end{figure}
\begin{figure}
\includegraphics[width=1\linewidth,clip,trim=1cm 12.5cm 2.5cm 3cm]{Lbol_vs_Ncprime_upd.pdf}
\caption{Dust-processed bolometric luminosity $L_{\rm TIR}$ as a function of $N_C^{\prime}$ from the {\it Planck}\xspace radio continuum. The dashed black line shows the best-fit relation, $\log L_{\rm TIR}=(-36.33\pm2.53) + (0.86\pm0.05)\log N_C^{\prime}$. Only regions with $N_C^{\prime}$ measurements from {\it Planck}\xspace are used in the fit.}
\label{figure:Lbol_vs_Ncprime}
\end{figure}
\begin{table*}[ht]\setlength{\tabcolsep}{2.5pt}
\centering
\scriptsize
\caption{Global SED Model Fits}
\begin{tabular}{ccccccccccccccc}
\hline \hline
Name & aperture & $U_1$ & $q_{\rm PAH,1}$ & $U_{\rm min,2}$ & $U_{\rm max,2}$ & $q_{\rm PAH,2}$ & $f_{\rm bol}$ & 1-$\gamma$ & $L_{\rm TIR}$$^a$ & $T_{\rm BB}$ & $\alpha$ & $f_{\rm Br\alpha}$ & $N_C^{\prime}$ & $\chi^2_r$\xspace \\
& radius & & (\%) & & & (\%) & (\%) & (10$^{-5}$) & (10$^6$ L$_{\odot}$) & (K) & & (\%) & ($10^{49}$) & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) \\
\hline
Flame & 15\farcm9 (1.5 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 0.47 & 41 & 7.5$\pm$0.9 & 0.04$\pm$0.01 & 34.9$\pm$9.3 & -0.10$\pm$0.01 & 2 & 0.03$\pm$0.01 & 5.0 \\
W40 & 22\farcm4 (3.3 pc) & 10$^5$ & 3.19 & \nodata & \nodata & \nodata & 26 & \nodata & 0.04$\pm$0.01 & 25.8$\pm$1.3 & -0.09$\pm$0.01 & 2 & 0.04$\pm$0.01 & 7.2 \\
Wd~1$^e$ & 2\farcm8 (3.2 pc) & 10$^5$ & 4.58 & 1.00 & 10$^5$ & 3.19 & 100 & 84.1$\pm$9.9 & 0.09$\pm$0.04 & 8.0$\pm$7.5 & \nodata & \nodata & \nodata & 11.5 \\
RCW36 & 16\farcm4 (5.2 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 4.58 & 49 & 5.2$\pm$0.3 & 0.10$\pm$0.03 & 24.2$\pm$2.4 & -0.10$\pm$0.02 & 3 & 0.12$\pm$0.02 & 3.2 \\
Berkeley~87 & 8\farcm4 (4.3 pc) & 10$^5$ & 1.12 & 0.50 & 10$^5$ & 4.58 & 43 & 5.7$\pm$0.9 & 0.17$\pm$0.07 & 27.4$\pm$4.1 & -0.09$\pm$0.01 & 11 & 0.34$\pm$0.17 & 2.2 \\
Orion & 22\farcm6 (2.7 pc) & 10$^5$ & 2.50 & 0.50 & 10$^5$ & 3.90 & 61 & 5.8$\pm$0.9 & 0.24$\pm$0.07 & 36.2$\pm$3.0 & -0.09$\pm$0.01 & 26 & 0.2$\pm$0.1 & 2.7 \\
Lagoon$^d$ & 13\farcm0 (4.4 pc) & 10$^5$ & 3.19 & 0.50 & 10$^5$ & 1.12 & 72 & 5.0$\pm$0.2 & 0.32$\pm$0.07 & 29.6$\pm$2.2 & -0.09$\pm$0.01 & 17 & 1.1$\pm$0.2 & 1.6 \\
Trifid & 16\farcm9 (7.7 pc) & 10$^3$ & 2.50 & 0.50 & 10$^3$ & 4.58 & 87 & 5.0$\pm$0.8 & 0.37$\pm$0.11 & 20.9$\pm$3.1 & -0.09$\pm$0.01 & 6 & 1.5$\pm$0.4 & 1.8 \\
W42 & 5\farcm2 (3.3 pc) & 10$^5$ & 1.77 & 0.50 & 10$^5$ & 2.50 & 60 & 5.5$\pm$0.8 & 0.37$\pm$0.10 & 26.4$\pm$0.8 & -0.10 (fixed) & 25 & 1.9$\pm$0.1 & 1.3 \\
NGC~7538$^b$ & 8\farcm0 (6.2 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 1.12 & 58 & 6.3$\pm$0.2 & 0.59$\pm$0.16 & 27.0$\pm$4.0 & -0.07$\pm$0.01 & 7 & 1.3$\pm$0.1 & 2.1 \\
W4$^{c,e}$ & 55\farcm7 (36.3 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 4.58 & 26 & 5.0$\pm$0.3 & 0.77$\pm$0.07 & 24.8$\pm$1.2 & \nodata & \nodata & \nodata & 2.6 \\
Eagle & 20\farcm5 (10.2 pc) & 10$^5$ & 0.47 & \nodata & \nodata & \nodata & 53 & \nodata & 1.12$\pm$0.32 & 22.1$\pm$1.8 & -0.10$\pm$0.02 & 86 & 1.6$\pm$0.4 & 7.5 \\
W33 & 12\farcm0 (8.4 pc) & 10$^5$ & 1.12 & 0.50 & 10$^4$ & 0.47 & 35 & 4.9$\pm$0.4 & 1.18$\pm$0.29 & 25.5$\pm$1.8 & -0.09$\pm$0.01 & 39 & 4.6$\pm$0.8 & 6.2 \\
RCW38 & 21\farcm2 (10.5 pc) & 10$^5$ & 2.50 & 0.50 & 10$^5$ & 3.90 & 59 & 5.0$\pm$0.1 & 1.22$\pm$0.20 & 29.9$\pm$1.2 & -0.09$\pm$0.01 & 8 & 2.6$\pm$0.1 & 1.7 \\
W3 & 13\farcm4 (8.5 pc) & 10$^5$ & 3.90 & 0.50 & 10$^5$ & 2.50 & 45 & 7.1$\pm$1.1 & 1.38$\pm$0.32 & 32.6$\pm$1.1 & -0.08$\pm$0.01 & 7 & 2.9$\pm$0.2 & 2.2 \\
NGC~3576 & 12\farcm9 (10.4 pc) & 10$^5$ & 1.77 & 0.50 & 10$^5$ & 3.90 & 52 & 5.8$\pm$0.4 & 1.65$\pm$0.12 & 30.6$\pm$0.7 & -0.10 (fixed) & 13 & 4.0$\pm$0.1 & 2.5 \\
NGC~6334 & 19\farcm8 (9.4 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 1.12 & 50 & 6.1$\pm$0.3 & 2.72$\pm$0.63 & 30.2$\pm$1.4 & -0.11$\pm$0.02 & 7 & 2.8$\pm$0.3 & 2.0 \\
G29.96--0.02 & 9\farcm7 (17.5 pc) & 10$^5$ & 3.19 & 0.50 & 10$^5$ & 2.50 & 37 & 4.6$\pm$0.2 & 3.96$\pm$0.88 & 29.3$\pm$1.0 & -0.10 (fixed) & 14 & 15.4$\pm$0.4 & 1.3 \\
NGC~6357 & 25\farcm8 (13.4 pc) & 10$^5$ & 0.47 & 0.50 & 10$^5$ & 4.58 & 55 & 6.0$\pm$0.5 & 4.33$\pm$0.34 & 27.5$\pm$1.7 & -0.09$\pm$0.01 & 7 & 6.4$\pm$0.2 & 3.1 \\
M17 & 23\farcm0 (12.2 pc) & 10$^5$ & 1.77 & 0.50 & 10$^5$ & 4.58 & 67 & 5.8$\pm$0.2 & 4.46$\pm$1.28 & 32.1$\pm$4.4 & -0.09$\pm$0.01 & 5 & 7.4$\pm$0.6 & 2.1 \\ \cline{2-2}
& 9\farcm2 (7.0 pc), & & & & & & & & & & & & & \\
G333 & 7\farcm5 (5.7 pc), & 10$^5$ & 0.47 & 0.50 & 10$^5$ & 3.90 & 51 & 4.9$\pm$0.7 & 4.80$\pm$1.29 & 36.3$\pm$1.9 & -0.07$\pm$0.01 & 18 & 11.1$\pm$0.2 & 1.9 \\
& 9\farcm2 (7.0 pc) & & & & & & & & & & & & & \\ \cline{2-2}
W43 & 7\farcm5 (12.0 pc) & 10$^5$ & 1.12 & 0.50 & 10$^5$ & 3.19 & 46 & 5.5$\pm$0.8 & 5.19$\pm$1.37 & 28.3$\pm$0.9 & -0.10$\pm$0.02 & 51 & 57.4$\pm$9.5 & 1.9 \\
RCW49 & 19\farcm6 (25.1 pc) & 10$^5$ & 2.50 & 0.50 & 10$^5$ & 2.50 & 62 & 5.8$\pm$0.3 & 9.02$\pm$2.03 & 33.7$\pm$1.1 & -0.10$\pm$0.02 & 10 & 26.3$\pm$3.0 & 1.8 \\
G305 & 40\farcm8 (42.6 pc) & 10$^5$ & 0.47 & 0.50 & 10$^5$ & 4.58 & 42 & 5.4$\pm$0.8 & 13.73$\pm$3.74 & 26.9$\pm$1.7 & -0.11$\pm$0.02 & 8 & 19.5$\pm$3.5 & 4.1 \\
W49A & 6\farcm7 (22.2 pc) & 10$^5$ & 3.19 & 0.50 & 10$^5$ & 1.77 & 31 & 6.3$\pm$0.9 & 15.61$\pm$3.63 & 29.9$\pm$1.1 & -0.09$\pm$0.01 & 23 & 38.3$\pm$9.2 & 1.9 \\
Carina & 63\farcm9 (50.0 pc) & 10$^4$ & 3.19 & 0.50 & 10$^4$ & 0.47 & 96 & 46.5$\pm$1.9 & 17.51$\pm$5.31 & 28.7$\pm$4.3 & -0.08$\pm$0.01 & 5 & 29.0$\pm$3.1 & 4.1 \\
W51A & 31\farcm8 (47.2 pc) & 10$^5$ & 4.58 & 0.50 & 10$^5$ & 0.47 & 51 & 4.7$\pm$0.3 & 17.88$\pm$4.86 & 31.9$\pm$4.8 & -0.12$\pm$0.02 & 9 & 33.5$\pm$5.2 & 2.3 \\
NGC~3603 & 12\farcm4 (25.2 pc) & 10$^5$ & 1.12 & 0.50 & 10$^5$ & 2.50 & 56 & 12.6$\pm$1.1 & 23.10$\pm$6.40 & 40.6$\pm$6.1 & -0.10 (fixed) & 8 & 31.1$\pm$0.9 & 3.3 \\
\hline \hline
\end{tabular}
\label{table:SED_global}
\tablecomments{$^a$In this and subsequent tables, MSFRs are listed in order of increasing $L_{\rm TIR}$. $^b$Missing {\it Spitzer}\xspace [5.8] and [8.0] observations. $^c${\it Spitzer}\xspace observations not used in fit due to incomplete coverage of the region. $^d$Missing {\it Herschel}\xspace PACS observations. $^e$Missing or insufficient radio emission in {\it Planck}\xspace.}
\end{table*}
In Figure~\ref{figure:Lbol_vs_Ncprime} we plot $L_{\rm TIR}$ against $N_C^{\prime}$. Previous studies have utilized the relationship between the luminosity at 24 \micron\xspace and $N_C^{\prime}$ as a foundation for extragalactic mid-IR SFR calibrations \citep{Calzetti+07, Chomiuk+Povich11,VEH16}. We find a sub-linear correlation, with $\log L_{\rm TIR}=(-36.33\pm2.53) + (0.86\pm0.05)\log N_C^{\prime}$. This is consistent with the sub-linear correlation between radio and mid-IR tracers of star formation found by \citet{VEH16}.
The cool blackbody components of the MSFRs in our sample have an average temperature $\langle T_{\rm BB}\rangle= 28.6\pm6.0$~K. These temperatures are consistent with the galaxy-wide SED modeling results of the KINGFISH survey \citep{Hunt+15}.
\section{\HII Region Reprocessing of Starlight and Ionizing Photons}\label{section:reprocessing}
We compiled the known massive stellar content (stars with spectral types earlier than B2) of each region and estimated the Lyman continuum photon rate ($N_C$) and bolometric luminosity ($L_{\star}$) produced by the stars in each region. We used the models of \citet{Martins+05b} to estimate $N_C$ for the cataloged OB population in each region. This grid covers the $\log g$--$T_{\rm eff}$ plane for O- and early-B stars, and includes non-LTE treatment and line-blanketing. We used the observed spectral type of each massive star (B2 or earlier) to assign a corresponding $L_{\star}$ and $N_C$, summarized in Table~\ref{table:stellar_models}. For Wolf-Rayet stars, we adopt the luminosities and ionizing photon rates provided in \citet[][their Table~2]{Crowther07}. A detailed discussion of each region, including references to the previously catalogued stellar content or assumptions made regarding the spectral types, is presented in Appendix~\ref{appendix:region_discussion}.
\begin{table*}[ht]
\centering
\caption{Adopted Stellar Parameters by Spectral Type}
\begin{tabular}{ccccccccccccccc}
\hline \hline
& \multicolumn{2}{c}{Class V} && \multicolumn{2}{c}{Class III} && \multicolumn{2}{c}{Class I} && \multicolumn{2}{c}{Wolf Rayet (WN+)} && \multicolumn{2}{c}{Wolf Rayet (WC+)} \\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15}
Spectral & log $L_{\star}$ & log $N_C$ && log $L_{\star}$ & log $N_C$ && log $L_{\star}$ & log $N_C$ && log $L_{\star}$ & log $N_C$ && log $L_{\star}$ & log $N_C$ \\
Type & ($L_{\odot}$\xspace) & (s$^{-1}$) && ($L_{\odot}$\xspace) & (s$^{-1}$) && ($L_{\odot}$\xspace) & (s$^{-1}$) && ($L_{\odot}$\xspace) & (s$^{-1}$) && ($L_{\odot}$\xspace) & (s$^{-1}$) \\
(1) & (2) & (3) && (4) & (5) && (6) & (7) && (8) & (9) && (10) & (11) \\
\hline
O3 & 5.84 & 49.64 && 5.96 & 49.77 && 5.99 & 49.78 && 5.34 & 49.20 && ... & ... \\
O3.5 & 5.76 & 49.54 && 5.91 & 49.71 && 5.96 & 49.74 && ... & ... && ... & ... \\
O4 & 5.67 & 49.44 && 5.85 & 49.64 && 5.93 & 49.70 && 5.30 & 49.20 && 5.54 & 49.40 \\
O4.5 & 5.58 & 49.33 && 5.70 & 49.56 && 5.90 & 49.66 && ... & ... && ... & ... \\
O5 & 5.49 & 49.22 && 5.73 & 49.48 && 5.87 & 49.62 && 5.20 & 49.00 && 5.10 & 48.90 \\
O5.5 & 5.41 & 49.10 && 5.67 & 49.40 && 5.84 & 49.58 && ... & ... && ... & ... \\
O6 & 5.32 & 48.99 && 5.61 & 49.32 && 5.81 & 49.52 && 5.20 & 49.10 && 5.06 & 48.90 \\
O6.5 & 5.23 & 48.88 && 5.54 & 49.23 && 5.78 & 49.46 && ... & ... && ... & ... \\
O7 & 5.14 & 48.75 && 5.48 & 49.13 && 5.75 & 49.41 && 5.54 & 49.40 && 5.34 & 49.10 \\
O7.5 & 5.05 & 48.61 && 5.42 & 49.01 && 5.72 & 49.31 && ... & ... && ... & ... \\
O8 & 4.96 & 48.44 && 5.35 & 48.88 && 5.68 & 49.25 && 5.38 & 49.10 && 5.14 & 49.00 \\
O8.5 & 4.86 & 48.27 && 5.28 & 48.75 && 5.65 & 49.19 && ... & ... && ... & ... \\
O9 & 4.77 & 48.06 && 5.21 & 48.65 && 5.61 & 49.11 && 5.70 & 48.90 && 4.94 & 48.60 \\
O9.5 & 4.68 & 47.88 && 5.15 & 48.42 && 5.57 & 49.00 && ... & ... && ... & ... \\
B0 & 4.57 & 47.70 && 5.08 & 48.28 && ... & ... && ... & ... && ... & ... \\
B0.5 & 4.47 & 47.50 && 5.00 & 48.10 && ... & ... && ... & ... && ... & ... \\
B1 & 4.37 & 47.28 && 4.93 & 47.90 && ... & ... && ... & ... && ... & ... \\
B1.5 & 4.28 & 47.05 && 4.86 & 47.68 && ... & ... && ... & ... && ... & ... \\
B2 & 4.19 & 46.80 && 4.78 & 47.44 && ... & ... && ... & ... && ... & ... \\
\hline \hline
\end{tabular}
\label{table:stellar_models}
\end{table*}
In Table~\ref{table:energy_budget} we summarize the expected Lyman continuum photon rate ($N_C$) and bolometric luminosity ($L_{\star}$) of the massive stellar population in each MSFR, the ionizing photon rate estimated from the {\it Planck}\xspace radio observations ($N_C^{\prime}$), the bolometric luminosity ($L_{\rm TIR}$) measured by our SED fitting, and the ratios $N_C^{\prime}/N_C$ and $L_{\rm TIR}/L_{\star}$ for each region. The spectral types of the ionizing stellar populations are, of course, not known with perfect accuracy. To assign uncertainties to $N_C$ and $L_{\star}$, we assume that the cataloged spectral type of each star may differ by up to one full subtype from the true value; e.g., an O6V star may be as late as O7V or as early as O5V. For each star in the MSFR, we randomly select a spectral type that is either the reported spectral type or a half or full subtype earlier or later than it. We then re-compute $N_C$ and $L_{\star}$ for the region. This process is repeated 500 times for each MSFR, yielding distributions of plausible $N_C$ and $L_{\star}$ values for each region. The standard deviations of these distributions are then used as the uncertainties on $N_C$ and $L_{\star}$ reported in Table~\ref{table:energy_budget}; typically, the means of these distributions agree with the values computed using the cataloged spectral types.
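The perturbation procedure is simple to implement; in the minimal sketch below the 500-trial count and the half-subtype steps follow the text, while the lookup structure is an illustrative stand-in for Table~\ref{table:stellar_models}:
\begin{verbatim}
import random

def perturb_index(i, n):
    """Shift a half-subtype grid index by up to one full subtype."""
    return min(max(i + random.choice([-2, -1, 0, 1, 2]), 0), n - 1)

def mc_nc_totals(star_indices, log_nc_grid, n_trials=500):
    """Monte Carlo distribution of the summed N_C over random
    spectral-type perturbations; its standard deviation gives
    the adopted uncertainty."""
    return [sum(10.0 ** log_nc_grid[perturb_index(i, len(log_nc_grid))]
                for i in star_indices)
            for _ in range(n_trials)]
\end{verbatim}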
\begin{table*}[ht]
\footnotesize
\centering
\caption{Lyman Continuum Rates and $L_{\star}$ vs. $L_{\rm TIR}$}
\begin{tabular}{cccccccc}
\hline \hline
& \multicolumn{2}{c}{OB Stars} && \multicolumn{2}{c}{SED Model/Stellar Population} & & \\ \cline{2-3} \cline{5-6}
Region & $N_C$ (10$^{49}$ s$^{-1}$) & $L_{\star}$ (10$^6$ $L_{\odot}$) && $N_C^{\prime}/N_C$ & $L_{\rm TIR}/L_{\star}$ & f$_{\rm esc}$ & $f_{C,\rm abs}$ \\
(1) & (2) & (3) && (4) & (5) & (6) & (7) \\
\hline
Flame$^a$ & 0.15$\pm$0.07 & 0.09$\pm$0.02 && 0.20$\pm$0.11 & 0.44$\pm$0.15 & 0.56$\pm$0.19 & 0.24$\pm$0.16 \\
W40 & 0.09$\pm$0.05 & 0.07$\pm$0.01 && 0.44$\pm$0.27 & 0.57$\pm$0.16 & 0.43$\pm$0.12 & 0.13$\pm$0.09 \\
Wd~1 & 6.33$\pm$4.11 & 2.64$\pm$0.65 && \nodata & 0.03$\pm$0.02 & $>$0.32 & \nodata \\
RCW~36 & 0.19$\pm$0.09 & 0.11$\pm$0.02 && 0.63$\pm$0.32 & 0.91$\pm$0.32 & 0.09$\pm$0.03 & 0.28$\pm$0.17 \\
Berkeley~87 & 1.86$\pm$0.72 & 0.72$\pm$0.15 && 0.18$\pm$0.11 & 0.24$\pm$0.11 & $>$0.41 & 0.06$\pm$0.05 \\
Orion & 0.77$\pm$0.23 & 0.29$\pm$0.04 && 0.26$\pm$0.15 & 0.83$\pm$0.27 & 0.17$\pm$0.06 & 0.57$\pm$0.38 \\
Lagoon$^b$ & 3.71$\pm$2.09 & 1.14$\pm$0.27 && 0.30$\pm$0.18 & 0.28$\pm$0.09 & 0.72$\pm$0.23 & 0 \\
Trifid$^c$ & 0.58$\pm$0.22 & 0.16$\pm$0.04 && 2.59$\pm$1.20 & 2.31$\pm$0.90 & \nodata & \nodata \\
W42 & 1.71$\pm$0.54 & 0.35$\pm$0.08 && 1.11$\pm$0.36 & 1.06$\pm$0.38 & \nodata & 0 \\
NGC~7538$^a$ & 4.74$\pm$0.89 & 1.01$\pm$0.13 && 0.27$\pm$0.05 & 0.58$\pm$0.17 & 0.42$\pm$0.12 & 0.31$\pm$0.10 \\
W4$^b$ & 11.50$\pm$2.12 & 2.60$\pm$0.28 && \nodata & 0.30$\pm$0.03 & 0.70$\pm$0.07 & \nodata \\
Eagle$^b$ & 4.31$\pm$2.88 & 2.01$\pm$0.41 && 0.37$\pm$0.26 & 0.56$\pm$0.20 & 0.44$\pm$0.16 & 0.19$\pm$0.15 \\
W33 & 10.78$\pm$1.30 & 1.98$\pm$0.16 && 0.43$\pm$0.09 & 0.60$\pm$0.16 & 0.40$\pm$0.11 & 0.17$\pm$0.06 \\
RCW~38$^{a,b}$ & 3.50$\pm$1.36 & 0.76$\pm$0.18 && 0.74$\pm$0.29 & 1.61$\pm$0.46 & \nodata & \nodata \\
W3$^b$ & 5.87$\pm$2.37 & 1.68$\pm$0.31 && 0.49$\pm$0.20 & 0.82$\pm$0.24 & 0.18$\pm$0.05 & 0.30$\pm$0.17 \\
NGC~3576$^c$ & 2.71$\pm$1.26 & 0.88$\pm$0.16 && 1.48$\pm$0.69 & 1.88$\pm$0.37 & \nodata & \nodata \\
NGC~6334$^{b,c}$ & 4.24$\pm$1.74 & 1.17$\pm$0.27 && 0.66$\pm$0.28 & 2.32$\pm$0.76 & \nodata & \nodata \\
G29.96--0.02$^c$ & 4.17$\pm$0.51 & 0.74$\pm$0.06 && 3.69$\pm$0.46 & 5.35$\pm$1.24 & \nodata & \nodata \\
NGC~6357$^b$ & 32.95$\pm$3.41 & 7.13$\pm$0.53 && 0.19$\pm$0.02 & 0.61$\pm$0.07 & 0.39$\pm$0.04 & 0.42$\pm$0.07 \\
M17$^b$ & 22.39$\pm$4.08 & 5.89$\pm$0.54 && 0.33$\pm$0.07 & 0.76$\pm$0.23 & 0.24$\pm$0.07 & 0.43$\pm$0.16 \\
G333$^{a,c}$ & 8.30$\pm$1.26 & 1.55$\pm$0.18 && 1.33$\pm$0.20 & 1.22$\pm$0.36 & \nodata & \nodata \\
W43 & 39.70$\pm$1.95 & 7.10$\pm$0.25 && 1.44$\pm$0.25 & 0.73$\pm$0.19 & 0.27$\pm$0.07 & \nodata \\
RCW~49 & 56.24$\pm$2.82 & 10.38$\pm$0.42 && 0.47$\pm$0.06 & 0.87$\pm$0.20 & 0.13$\pm$0.03 & 0.40$\pm$0.11 \\
G305$^c$ & 29.31$\pm$2.34 & 6.70$\pm$0.34 && 0.67$\pm$0.13 & 2.05$\pm$0.57 & \nodata & \nodata \\
W49A$^c$ & 60.89$\pm$3.18 & 10.53$\pm$0.45 && 0.63$\pm$0.15 & 1.48$\pm$0.35 & \nodata & \nodata \\
Carina & 93.79$\pm$6.43 & 22.76$\pm$0.97 && 0.31$\pm$0.04 & 0.77$\pm$0.24 & 0.23$\pm$0.07 & 0.46$\pm$0.16 \\
W51A$^{c,d}$ & 42.95$\pm$2.81 & 9.15$\pm$0.43 && 0.78$\pm$0.13 & 1.95$\pm$0.28 & \nodata & \nodata \\
NGC~3603 & 137.08$\pm$5.02 & 23.03$\pm$0.74 && 0.23$\pm$0.02 & 1.00$\pm$0.28 & $<$0.28 & 0.77$\pm$0.23 \\
\hline \hline
\end{tabular}\label{table:energy_budget}
\tablecomments{$^a$Assumptions made about the spectral type(s) of one or more stars in the region. $^b$Includes candidate OB members from \citet{Povich+17}. $^c$Stellar content likely incomplete. $^d$Significant distance uncertainty.}
\end{table*}
\begin{figure*}[htp]
\centering
\includegraphics[width=0.45\linewidth,clip,trim=1.5cm 12.5cm 2cm 3cm]{Lyman_photon_rate_upd.pdf} \quad
\includegraphics[width=0.45\linewidth,clip,trim=1.5cm 12.5cm 2cm 3cm]{Lbol_vs_Lstar_upd.pdf}
\caption{{\it Left}: The Lyman continuum photon rate from radio observations ($N_C^{\prime}$) as a function of the stellar content ($N_C$). Excluding the MSFRs plotted as open circles with dotted-line error bars (see text for details), $\log{N_C^{\prime}} = (-0.40\pm0.80) + (1.00\pm0.07)\log{N_C}$ (black dashed line). Using all MSFRs, the slope becomes $0.97\pm0.08$ and the $y$-intercept is $1.00\pm0.76$. {\it Right}: The bolometric luminosity ($L_{\rm TIR}$) as a function of the stellar content ($L_{\star}$). The fit to the filled-circle points gives $\log{L_{\rm TIR}} = (-0.52\pm0.35) + (1.04\pm0.06)\log{L_{\star}}$ (black dashed line). Using all MSFRs, the slope becomes $1.00\pm0.12$ and the $y$-intercept becomes $-0.07\pm0.73$ (gray dashed line). The bottom panels show the scatter about the best-fit relationships.}
\label{figure:SED_vs_OB}
\end{figure*}
In Figure~\ref{figure:SED_vs_OB} we plot $N_C^\prime$ and $L_{\rm TIR}$ derived from our SED fits against $N_C$ and $L_\star$, respectively, from the massive stellar content in all 28 MSFRs. Excluding MSFRs that lack secure distances, well-characterized massive stellar populations, or measurable $N_C^{\prime}$ from the {\it Planck}\xspace observations, the best-fit relationship for the Lyman continuum photon rates yields a linear slope, $\log{N_C^{\prime}} = (-0.40\pm0.80) + (1.00\pm0.07)\log{N_C}$, as does the relationship for bolometric luminosities, with $\log{L_{\rm TIR}} = (-0.52\pm0.35) + (1.04\pm0.06)\log{L_{\star}}$. Including all MSFRs does not significantly change the power-law slopes of these relationships.
Dust can absorb Lyman continuum photons before they contribute to the ionization of \HII regions \citep{McKee+Williams97}, reducing $N_C^\prime/N_C$ while contributing to $L_{\rm TIR}/L_\star$. Previous studies have found that $\sim$20--50\% of UV photons produced by massive stars in the Milky Way and Local Group galaxies are absorbed by dust in surrounding \HII regions \citep[see][and references therein]{Inoue01,Inoue+01}. Similar studies of external galaxies have suggested that 30--70\% of the emitted UV photons escape from \HII regions and interact with the ISM \citep[e.g.,][]{Oey+Kennicutt97,Zurita+02,Giammanco+05,Pellegrini+12}.
Our SED modeling results allow us to estimate both the fraction of stellar luminosity that escapes the MSFR and the fraction of Lyman continuum photons absorbed by dust within the \HII regions. Strong density inhomogeneities in the \HII regions and surrounding PDRs create low-density pathways through which Lyman continuum and longer-wavelength photons may reach the diffuse ISM without first being absorbed by local dust or gas associated with the MSFR. UV photons carry the bulk of the emitted stellar luminosity and have characteristically large interaction cross-sections with both dust and gas. The average hydrogen gas density $n_H$ of the diffuse ISM is typically a factor of ${\sim}10^{3}$ lower than that within a young \HII region. Since the Str{\"o}mgren radius is proportional to $n_H^{-2/3}$ \citep{Stromgren39}, Lyman continuum photons that manage to escape MSFRs can ionize regions 100 times larger than their parent \HII regions. The largest \HII regions in our sample have diameters of tens of pc, hence their escaped Lyman continuum photons contribute to the ionization of the warm ionized medium, with its ${\sim}1$~kpc scale height \citep{Haffner+03}.
The fraction of stellar luminosity escaping from each MSFR is simply
\begin{equation}\label{equation:fesc}
f_{\rm esc} = 1 - L_{\rm TIR}/L_{\star},
\end{equation}
\noindent which we calculate for the 18 MSFRs with well-characterized massive stellar populations, well-constrained distances, and measurable $N_C^{\prime}$ (hence excluding the 8 regions marked with a $c$ or $d$ in column 1 of Table~\ref{table:energy_budget}); values of $f_{\rm esc}$ are presented in column 6. For these regions we find an average $\langle L_{\rm TIR}/L_{\star}\rangle=0.74\pm0.22$, so approximately three-quarters of the emitted stellar luminosity is absorbed and reprocessed by the \HII regions and surrounding PDRs, while one-quarter escapes into the diffuse ISM. The average ratio of the Lyman continuum photon rate emitted by the massive stars to the ionizing photon rate measured from the {\it Planck}\xspace thermal radio continuum is $\langle N_C^{\prime}/N_C\rangle=0.47\pm0.24$. In other words, we find that only $\sim$50\% of UV photons emitted by massive stars contribute to the ionization of their surrounding \HII regions, consistent with \citet{Inoue01}. Ionizing continuum photons are lost to the \HII regions through the combination of dust absorption and escape, hence we define the fraction ($f_{C,\rm abs}$) of Lyman continuum photons absorbed by dust within \HII regions using
\begin{equation}
f_{C,\rm abs} + f_{\rm esc} = (N_{C,\rm abs} + N_{C,\rm esc})/N_C = 1 - N_C^\prime/N_C.
\end{equation}
\noindent Substituting Equation~\ref{equation:fesc} into the above and rearranging terms, we have
\begin{equation}
f_{C,\rm abs} = L_{\rm TIR}/L_{\star} - N_C^\prime/N_C.
\end{equation}
We have tacitly assumed that $f_{\rm esc}$ does not differ significantly between Lyman continuum and longer-wavelength UV photons. Values of $f_{C,\rm abs}$ are reported in column 7 of Table~\ref{table:energy_budget}. The uncertainties are relatively large, and $f_{C,\rm abs}$ falls within $1\sigma$ of zero for roughly 20\% (3/17) of the \HII regions for which it could be calculated; for these regions we report upper limits only.
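Both fractions follow from quantities already tabulated; a sketch:
\begin{verbatim}
def reprocessing_fractions(l_tir, l_star, nc_prime, nc):
    """f_esc = 1 - L_TIR/L_star and
    f_C,abs = L_TIR/L_star - N_C'/N_C, per the equations above."""
    f_esc = 1.0 - l_tir / l_star
    f_c_abs = l_tir / l_star - nc_prime / nc
    return f_esc, f_c_abs
\end{verbatim}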
In Table~\ref{table:subgroup_mean} we divide our MSFR sample into subgroups based on luminosity and then compute the average values of $N_C^\prime/N_C$, $L_{\rm TIR}/L_\star$, and $f_{C,\rm abs}$ for each subgroup. The luminosity subgroups were defined to categorize the ionizing stellar population as follows: regions with a fully-populated upper stellar initial mass function (IMF) containing multiple O2/O3 stars plus Wolf-Rayet stars ($\log{L_{\rm TIR}/L_\sun}\ge 6.8$ or $\log{L_{\star}/L_\sun}\ge 6.8$), regions ionized by the equivalent of a single O6 or later-type star ($\log{N_C/{\rm s}^{-1}}< 49$), and the intermediate case of \HII regions ionized by one or more early O stars but still potentially influenced by stochastic sampling of the upper IMF \citep[see][]{Kennicutt+12}. Only regions with reasonably secure distances, well-catalogued stellar populations, and non-zero calculated values of $f_{C,\rm abs}$ (i.e., half of our sample) were used for this analysis.
\begin{table*}[htp]
\centering
\caption{Mean Fractions of Starlight Reprocessed by Dust in MSFRs}
\begin{tabular}{ccccccc}
\hline \hline
MSFR Subgroup & $N$ & $\langle \log{(N_C^{\prime}/{\rm s}^{-1})}\rangle$ & $\langle \log{(L_{\rm TIR}/L_\sun)}\rangle$ & $\langle N_C^{\prime}/N_C\rangle$ & $\langle L_{\rm TIR}/L_{\star}\rangle$ & $\langle f_{C,\rm abs}\rangle$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
Fully-populated upper IMF & 4 & 50.295 & 7.050 & 0.30 & 0.82 & 0.51 \\
Some early O stars & 6 & 49.307 & 5.985 & 0.35 & 0.59 & 0.24 \\
Single O6 or later & 4 & 47.865 & 4.896 & 0.38 & 0.69 & 0.31 \\
\hline
All MSFRs & 14 & 49.177 & 5.978 & 0.34 & 0.68 & 0.34 \\
\hline \hline
\end{tabular}\label{table:subgroup_mean}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth,clip,trim=2.5cm 13cm 2cm 3cm]{Lfrac_fcabs.pdf}
\caption{The fraction of Lyman continuum photons absorbed by dust within each \HII region ($f_{C,\rm abs}$) as a function of the luminosity ratio $L_{\rm TIR}/L_{\star}$.}
\label{figure:Lfrac_vs_fcabs}
\end{figure}
The main result highlighted in Table~\ref{table:subgroup_mean} is that the high-luminosity regions with fully-populated upper IMFs lose $\sim$50\% of their Lyman continuum photons, which would otherwise contribute to ionizing their \HII regions, to dust absorption; this is significantly above the sample-wide average $\langle f_{C,\rm abs}\rangle = 34\%$. This result agrees very well with the predictions of \citet{McKee+Williams97}, who calculated that the fraction of ionizing photons absorbed by dust increases with ionizing photon rate in \HII regions. In their models this trend arises because the higher ionization fraction lowers the absorption cross-section of the gas toward ionizing photons.
In Figure~\ref{figure:Lfrac_vs_fcabs}, we plot $f_{C,\rm abs}$ as a function of the luminosity ratio $L_{\rm TIR}/L_{\star}$ (the regions included in Figure~\ref{figure:Lfrac_vs_fcabs} are the same regions used to compute averages in Table~\ref{table:subgroup_mean}). Regions with higher fractions of their stellar luminosities reprocessed by dust show higher fractions of Lyman continuum photon absorption.
Given the extreme feedback effects produced by the radiation and stellar winds of the most massive young clusters, one might expect that dust grains would be more efficiently destroyed or evacuated from more luminous regions. \citet{Everett+Churchwell10} found that dust must be continually replenished within \HII regions to produce the observed 24~\micron\ emission. \citet{McKee+Williams97} did not address the question of whether dust properties vary among different \HII regions. We find no significant depletion of dust among the more luminous Galactic \HII regions in our sample; indeed, our results seem to imply the opposite, that more luminous \HII regions are somehow dustier than less-luminous ones.
More massive clusters are formed from the densest clumps within the most massive giant molecular clouds, and the high gravitational potential combined with the self-shielding effects of very dense, dusty gas could preserve massive reservoirs of cold dust within filaments, pillars, and globules, that are in close proximity to or even surrounded by ionized gas \citep[see, e.g.][]{Dale+Bonnell11}. Photoevaporation of this cold, dusty gas could hence provide a readily-available source of dust replenishment for luminous \HII regions. This is consistent with the observed morphologies of dusty, giant \HII regions, which feature large cavities filled with hot, low-density X-ray-emitting plasma surrounded by relatively thin shells traced by both the brightest mid-IR and radio emission \citep{Townsley+03,Povich+07,Townsley+11b}. In our sample, M17, W43, and NGC 3603 exemplify this morphology.
Part of this trend is likely due to selection bias in our sample; after all, we have targeted IR-bright Galactic MSFRs, not a representative sample of all Galactic MSFRs. The most massive young clusters are located at relatively large heliocentric distances in the bright, crowded reaches of the Galactic midplane, so very massive clusters that have been cleared of dust (such as Wd~1) are difficult to identify. But our sample was constructed so that the selection biases should be similar to those of a spatially-resolved IR observation of a nearby disk galaxy targeting the brightest compact ``knots'' of IR emission \citep[e.g.,][]{Calzetti+07}. We have thus included a representative sample of regions that are bright in the IR and sufficiently isolated (none are within 5~kpc of the Galactic center) that they would stand out among the brightest compact IR sources to an external observer of the Milky Way.
\section{Monochromatic Luminosities and Predicted Star Formation Rates}\label{section:lum_and_SFRs}
Monochromatic luminosities at various IR wavelengths have been developed as more convenient substitutes for $L_{\rm TIR}$ to measure SFRs in galaxies, generally calibrated against extinction-corrected H$\alpha$ emission as a proxy for the ionizing photon rate. In this section we analyze the behavior, among our sample of Galactic MSFRs, of three monochromatic luminosities that have been widely investigated in the extragalactic context. Monochromatic luminosities reported here are measured from the SED model luminosities convolved with the relevant instrumental filter bandpass, which may differ from the flux density measured directly from aperture photometry in that bandpass. In the case of the 24~\micron\ luminosity the direct measure is not available, as our MSFRs usually saturate the MIPS 24~\micron\ images. Saturation frequently affects the IRAC 8~\micron\ flux densities in bright regions as well.
At shorter wavelengths (3~\micron~$\la\lambda\la 20$~\micron), IR emission from MSFRs is dominated by the emission features of PAHs and the warm (${>}150$~K) dust continuum. Not surprisingly, short-wavelength tracers such as the monochromatic 8 \micron\xspace luminosity ($L_8$) are inaccurate, showing large degrees of variability with respect to metallicity \citep[e.g.,][]{Madden00,Madden+06,Engelbracht+05,Engelbracht+08,Draine+Li07,Galliano+08,Gordon+08,MunozMateos+09,Marble+10} and the shape of the SEDs \citep{Dale+05,Calzetti+07}. For these reasons, $L_8$ is not generally considered a reliable SFR tracer. The strong ionizing radiation fields of early-type stars are extremely efficient at destroying PAHs in their vicinity, thereby reducing their line strength and relative contribution to $L_8$. Thus, $L_8$ may be a better tracer of the B-star population in a region than the overall SFR \citep{Peeters+04}.
The monochromatic 24 \micron\xspace luminosity ($L_{24}$) is often utilized as a SFR tracer \citep{Calzetti+07}. Although the $L_{\rm 24}$/SFR ratio is reasonably consistent on local scales, it is systematically higher when applied to starburst galaxies or ULIRGs \citep{Calzetti+05}. In this intermediate wavelength range ($\sim$20--60 \micron), warm dust ($\sim$50 K) emission transitions from being dominated by stochastically heated small grains to being dominated by larger grains in thermal equilibrium. Thus, variations in $L_{24}$ are related to the shapes of the observed SED of the star-forming region, and hence should be sensitive to the radiation field strength of the ionizing stars and to the dust temperature.
On ${\sim}500$~pc scales in spatially-resolved observations of nearby galaxies, \citet{Calzetti+07} found a sublinear relationship between the SFR and $L_{24}$ (their Equation 9),
\begin{equation}
\frac{\text{SFR}_{24}}{M_{\odot}~\text{yr}^{-1}} = \left(1.3\pm0.2\right)\times10^{-38} \left(\frac{L_{24}}{\text{erg~s}^{-1}}\right)^{\left(0.89\pm0.02\right)},
\end{equation}
\noindent derived over a luminosity range of $3\times 10^{6} \leq (L_{24}/L_{\sun}) \leq 10^{11}$.
The non-linearity of this correlation is characteristic of this tracer \citep{AlonsoHerrero+06,PerezGonzalez+06,Calzetti+07,Relano+07,Murphy+11}. Proposed explanations for this trend invoke increasing dust opacity in star-forming regions and/or increasing mean dust temperature with increasing $L_{24}$.
Longward of $\sim$60 \micron, the emission from star-forming regions is dominated by thermal emission from larger dust grains at $\sim$20 K (typically referred to as the ``cool'' or ``cold'' dust component, and represented in our SED model by the cold blackbody component). Although heating from lower-mass (and potentially older) stars contributes more at these cold temperatures than at shorter wavelengths, the 70 \micron\xspace luminosity has been found to be an accurate monochromatic SFR indicator \citep{Dale+05}. The relationship between SFR and $L_{70}$ is linear; using a sample of over 500 star-forming regions, \citet{Li+10} found
\begin{equation}\label{eq:SFRcalconst}
\frac{\text{SFR}_{70}}{M_{\odot}~\text{yr}^{-1}} = c_{70}\times 10^{-43} \left(\frac{L_{70}}{\text{erg s}^{-1}}\right),
\end{equation}
\noindent with calibration constant $c_{70}=0.94$, over 1 kpc scales for $10^{7} \la (L_{70}/L_{\sun}) \la 10^{10}$ (their Equation 4). The formal uncertainty reported for the calibration constant was ${\sim}2\%$. \citet{Lawton+10} found a similar relationship for dust-obscured \HII regions in the Large and Small Magellanic Clouds.
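Both calibrations reduce to one-line conversions from luminosity (in erg~s$^{-1}$) to SFR; a sketch for reference (the function names are ours):
\begin{verbatim}
def sfr_24(L24):
    """Sub-linear 24 micron calibration (Calzetti et al. 2007, Eq. 9)."""
    return 1.3e-38 * L24**0.89           # Msun / yr

def sfr_70(L70, c70=0.94):
    """Linear 70 micron calibration (Li et al. 2010, Eq. 4)."""
    return c70 * 1e-43 * L70             # Msun / yr
\end{verbatim}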
In Figure~\ref{figure:Lbol_Lmon} we plot the ratio of these monochromatic luminosities to $L_{\rm TIR}$ against $L_{\rm TIR}$ for each of the MSFRs in our sample. On average, we find $L_8$ comprises 17$\pm$5\% of the bolometric luminosity of the regions in our sample, comparable to previous studies of the $L_8$-SFR relationship in other metal-rich, star forming galaxies \citep{Crocker+12,Treyer+10,Elbaz+11}. The 24~\micron\ luminosity accounts for 25$\pm$7\% of the bolometric luminosity, and 52$\pm$10\% of the bolometric luminosity is emitted at 70~\micron. The peak of the IR SEDs almost always falls near or within the PACS 70~\micron\ band (with the notable exception of Wd 1, indicated by an open circle in Figure~\ref{figure:Lbol_Lmon}).
\begin{figure}
\includegraphics[width=1\linewidth,clip,trim=1.5cm 12.5cm 2cm 3cm]{Lbol_Lmon.pdf}
\caption{The ratio of each monochromatic luminosity to the total luminosity, as a function of total luminosity derived from the global SED fitting. The gray shaded region in each panel shows the 1$\sigma$ uncertainty in the average fraction. Wd 1 (open circle and dotted-line error bars) has been excluded from the fit.}
\label{figure:Lbol_Lmon}
\end{figure}
We do not observe a trend consistent with increasing dust temperature enhancing $L_{24}$ as $L_{\rm TIR}$ (or, implicitly, SFR) increases. Our SED modeling similarly reveals no correlation between $f_{\rm bol}$ (the fraction of luminosity in the warm dust component) and $L_{\rm TIR}$ (Table~\ref{table:SED_global}). One possible explanation could be that increased feedback in more luminous regions tends to expel or destroy dust from the immediate vicinity of the early-type stars, hence the remaining dust persists at larger distances from the ionizing cluster(s), maintaining cooler equilibrium temperatures than would be predicted by models that increase the radiation field without changing the spatial distribution of dust. While the upper end of our sampled luminosities overlaps with the luminosity range studied by \citet{Calzetti+07}, we cannot rule out the possibility that the $L_{24}$--$L_{\rm TIR}$ relation steepens at higher luminosities, so there is no evident tension between our results and the extragalactic calibrations.
\subsection{Luminosity and the PAH Fraction}
The \citet{Draine+Li07} dust models may provide insight into the physical nature of the dust present in each star-forming region. Of particular interest is the PAH fraction required to best-fit the short-wavelength {\it Spitzer}\xspace fluxes in the MSFR SEDs. We calculate a luminosity-weighted average PAH fraction $\langle q_{\rm PAH} \rangle$ from our best-fit SED model (e.g., with the value of $q_{\rm PAH}$ of each dust component weighted by the luminosity produced by that component); uncertainties on $\langle q_{\rm PAH} \rangle$ are derived from the uncertainties in the luminosities of each dust component.
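Concretely, the weighting and its error propagation take the following form (a minimal sketch, assuming independent component-luminosity uncertainties):
\begin{verbatim}
import numpy as np

def mean_qpah(q, L, dL):
    """Luminosity-weighted <q_PAH> over the dust components, with the
    uncertainty propagated from the component luminosity errors dL."""
    q, L, dL = map(np.asarray, (q, L, dL))
    q_mean = np.sum(q * L) / L.sum()
    # d<q>/dL_i = (q_i - <q>) / sum(L); combine terms in quadrature.
    dq = np.sqrt(np.sum(((q - q_mean) / L.sum() * dL) ** 2))
    return q_mean, dq
\end{verbatim}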
We observe marginal evidence for a correlation between the average $q_{\rm PAH}$ and $L_{\rm TIR}$; brighter regions have systematically lower average $q_{\rm PAH}$ values. Figure~\ref{figure:av_qpah} shows the average $q_{\rm PAH}$ as a function of $L_{\rm TIR}$ for each of the three luminosity categories listed in Table~\ref{table:subgroup_mean}, as well as for the entire MSFR sample. Regions with fully-populated upper IMFs exhibit the lowest $\langle q_{\rm PAH} \rangle$ values, 2.2$\pm$0.5\%, compared to regions with some early O-stars ($\langle q_{\rm PAH} \rangle$ = 2.8$\pm$0.9\%) or a single star of type O6 or later ($\langle q_{\rm PAH} \rangle$ = 3.5$\pm$1.0\%). We caution that the inferred PAH fractions depend on the \citet{Draine+Li07} dust models, which assume a canonical extinction law ($R_V=3.1$) that may underestimate the dust emissivity at longer wavelengths; the absolute $\langle q_{\rm PAH} \rangle$ values, therefore, may depend on the choice of extinction law. The best-fit relationship between $\langle q_{\rm PAH} \rangle$ and $L_{\rm TIR}$ using only the average quantities for the three MSFR subgroups (the black dashed line in Figure~\ref{figure:av_qpah}) is given by
\begin{equation}
\langle q_{\rm PAH} \rangle = (6.8\pm1.4) - (0.7\pm0.2) \log L_{\rm TIR}~(\%).
\end{equation}
\noindent There is no significant difference in the fit parameters when all MSFRs are used. This correlation may arise from weaker radiation fields produced by late O-type stars being inefficient at destroying a large percentage of PAH molecules.
\begin{figure}
\includegraphics[width=1\linewidth,clip,trim=3cm 13cm 2cm 3cm]{av_qpah.pdf}
\caption{The average $q_{\rm PAH}$ of MSFRs in our sample sorted by luminosity subgroup; colors are the same as in previous figures. Regions with a fully-populated upper IMF have the lowest median $q_{\rm PAH}$ at (2.2$\pm$0.5)\%, compared with intermediate luminosity regions (e.g., those powered by some early O stars; 2.8$\pm$0.9\%) and fainter regions (e.g., those powered by a single O6 star or later; 3.5$\pm$1.0\%). The black dashed line shows the best-fit relationship between $\langle q_{\rm PAH} \rangle$ and $L_{\rm TIR}$ using only the average quantities for the three MSFR subgroups; the gray dashed line shows the relationship derived from all MSFRs. }
\label{figure:av_qpah}
\end{figure}
\subsection{Calibrating SFR Tracers Against Ionizing Photon Rates of Cataloged Massive Stellar Populations}
The Galaxy offers the advantage that the ionizing stellar populations in MSFRs can be resolved at the level of individual OB stars, with unresolved binary/multiple systems revealed through spectroscopy. We can therefore calibrate SFRs directly against the Lyman continuum photon rate $N_C$ expected from the cataloged OB populations (Tables~\ref{table:stellar_models} and \ref{table:energy_budget}). Using the Starburst99 population synthesis code \citep{Leitherer+99} and assuming a Kroupa initial mass function \citep{K+W03} the relationship between $N_C$ and SFR is\footnote{Many studies still use an older calibration based on a Salpeter IMF from \citet{Kennicutt98}, which increases derived SFRs by a factor of 1.44 assuming a minimum stellar mass of 0.1~$M_{\odot}$\xspace.}
\begin{equation}\label{eq:SFR(LyC)}
\frac{\text{SFR$_{\rm LyC}$}}{M_{\odot}~\text{yr}^{-1}} = (7.5 \times10^{-54})\frac{N_C} {\text{s}^{-1}}
\end{equation}
\noindent \citep{Kennicutt+09,Chomiuk+Povich11}. \citet{Chomiuk+Povich11} caution that this relationship likely underestimates the SFR for the very young MSFRs in our sample. SFR$_{\rm LyC}$ is the {\it continuous} SFR required to maintain ``steady state'' conditions within the MSFR in which the ionizing star birth rate equals the death rate. The lifetime of the ionizing stars is therefore assumed in this relationship, and this timescale is likely to be longer than the duration of star formation activity in many of the MSFRs in our sample.
Values for SFR$_{\rm LyC}$ in each MSFR computed using Equation~\ref{eq:SFR(LyC)} are reported in Table~\ref{table:model_luminosity_SFRs}; for the six regions where we have determined that the massive stellar census is incomplete (e.g., where the principal ionizing star or stars have not yet been identified, marked with a $c$ in Table~\ref{table:energy_budget}) we report lower limits. We then assume SFR$_{\rm LyC}$ for the left-hand side of Equation~\ref{eq:SFRcalconst} and substitute $L_{\rm TIR}$, $L_{24}$, or $L_{70}$ to calculate the calibration constants $c_{\rm TIR}$, $c_{24}$, and $c_{70}$, respectively. We additionally derive a radio calibration constant $c_{\rm radio}$ from the {\it Planck}\xspace radio observations by substituting $N_C^{\prime}$ into the right-hand side of Equation~\ref{eq:SFR(LyC)}. Excluding the eight MSFRs with incompletely cataloged stellar populations or very uncertain distances (e.g., those marked with a $c$ or $d$ in Table~\ref{table:energy_budget}) and Wd~1, we derive $c_{\rm TIR}=1.2\pm0.7$, $c_{24}=3.7\pm2.4$, $c_{70}=2.7\pm1.4$, and $c_{\rm radio}=21.8\pm11.4$ averaged across 19 Galactic MSFRs. We also computed median values for each calibration constant and found that they did not differ significantly from the mean values.
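Each calibration constant is a simple ratio averaged over the retained regions; a sketch of the computation (the arrays are hypothetical stand-ins for the columns of Table~\ref{table:model_luminosity_SFRs}):
\begin{verbatim}
import numpy as np

def calibration_constant(sfr_lyc, L_mono, exclude):
    """c_X such that SFR_LyC = c_X * 1e-43 * (L_X / erg s^-1),
    averaged over regions not flagged for exclusion."""
    keep = ~np.asarray(exclude)
    c = np.asarray(sfr_lyc)[keep] / (1e-43 * np.asarray(L_mono)[keep])
    return c.mean(), c.std()
\end{verbatim}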
Using these calibration constants, we computed SFR$_{\rm TIR}$ (analogous to the \citealp{Kennicutt+09} ``total IR'' SFR tracer), SFR$_{24}$, SFR$_{70}$, and SFR$_{\rm radio}$ for all MSFRs in our sample. These various SFRs are presented in Table~\ref{table:model_luminosity_SFRs} along with the monochromatic 8, 24, and 70~\micron\ fluxes and luminosities. Comparisons between each of these IR SFR indicators and SFR$_{\rm LyC}$ are shown in Figure~\ref{figure:SFRs}. Not surprisingly, Wd~1 is a clear outlier, with a predicted SFR$_{\rm TIR}$ that is only $\sim$6\% of SFR$_{\rm LyC}$. Figure~\ref{figure:SFRs} shows that all three IR indicators begin to overestimate the SFR for SFR$_{\rm LyC}\lesssim10^{-4}$ $M_{\odot}$\xspace yr$^{-1}$, but the SFR derived from the radio observations does not.
\begin{table*}
\centering
\setlength\tabcolsep{2 pt}
\tiny
\caption{Monochromatic Luminosities and SFRs}
\begin{tabular}{ccccccccccccccccc}
\hline \hline
& & & & \multicolumn{3}{c}{8~\micron} && \multicolumn{4}{c}{24~\micron} && \multicolumn{4}{c}{70 \micron} \\ \cline {5-7} \cline{9-12} \cline{14-17}
Name & SFR$_{\rm LyC}$ & SFR$_{\rm TIR}$ & SFR$_{\rm radio}$ & $S_{\nu}$ & $L_{8}$ & $\frac{L_{8}}{L_{\rm TIR}}$ && $S_{\nu}$ & $L_{24}$ & SFR$_{24}$ & $\frac{L_{24}}{L_{\rm TIR}}$ && $S_{\nu}$ & $L_{70}$ & SFR$_{70}$ & $\frac{L_{70}}{L_{\rm TIR}}$ \\
&($10^{-3} \frac{M_{\sun}}{\rm yr}$) &($10^{-3} \frac{M_{\sun}}{\rm yr}$) &($10^{-3} \frac{M_{\sun}}{\rm yr}$) & (Jy) & ($10^{39}\frac{\rm erg}{\rm s}$) & && (Jy) & ($10^{39}\frac{\rm erg}{\rm s}$) & ($10^{-3} \frac{M_{\sun}}{\rm yr}$) & && (Jy) & ($10^{39}\frac{\rm erg}{\rm s}$) & ($10^{-3} \frac{M_{\sun}}{\rm yr}$) & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) && (8) & (9) & (10) & (11) && (12) & (13) & (14) & (15) \\
\hline
Flame & 0.011$\pm$0.005 & 0.018$\pm$0.004 & 0.007$\pm$0.003 & 5,170 & 0.03 & 0.14 && 22,840 & 0.04 & 0.02 & 0.65 && 206,760 & 0.12 & 0.03 & 0.08 \\
W40 & 0.007$\pm$0.005 & 0.018$\pm$0.004 & 0.009$\pm$0.005 & 1,740 & 0.02 & 0.12 && 8,820 & 0.03 & 0.01 & 0.19 && 72,120 & 0.09 & 0.02 & 0.54 \\
Wd~1 & 0.48$\pm$0.26 & 0.04$\pm$0.02 & \nodata & 280 & 0.19 & 0.53 && 1,000 & 0.23 & 0.08 & 0.63 && 360 & 0.03 & 0.01 & 0.08 \\
RCW~36 & 0.014$\pm$0.006 & 0.05$\pm$0.01 & 0.026$\pm$0.014 & 1,720 & 0.09 & 0.23 && 4,930 & 0.09 & 0.03 & 0.22 && 27,640 & 0.17 & 0.05 & 0.42 \\
Berkeley~87 & 0.14$\pm$0.02 & 0.08$\pm$0.03 & 0.074$\pm$0.039 & 750 & 0.10 & 0.15 && 3,310 & 0.15 & 0.06 & 0.22 && 24,040 & 0.37 & 0.10 & 0.65 \\
Orion & 0.059$\pm$0.023 & 0.11$\pm$0.03 & 0.044$\pm$0.023 & 25,860 & 0.19 & 0.21 && 110,710 & 0.28 & 0.10 & 0.31 && 520,060 & 0.45 & 0.12 & 0.49 \\
Lagoon & 0.28$\pm$0.15 & 0.14$\pm$0.03 & 0.24$\pm$0.13 & 3,800 & 0.23 & 0.19 && 20,380 & 0.42 & 0.15 & 0.34 && 73,410 & 0.51 & 0.14 & 0.42 \\
Trifid & 0.044$\pm$0.005 & 0.17$\pm$0.05 & 0.33$\pm$0.17 & 3,730 & 0.41 & 0.29 && 3,500 & 0.13 & 0.05 & 0.09 && 52,010 & 0.66 & 0.18 & 0.46 \\
W42 & 0.13$\pm$0.04 & 0.17$\pm$0.05 & 0.41$\pm$0.22 & 1,090 & 0.24 & 0.16 && 6,050 & 0.44 & 0.16 & 0.30 && 25,080 & 0.62 & 0.17 & 0.43 \\
NGC~7538 & 0.36$\pm$0.11 & 0.26$\pm$0.07 & 0.28$\pm$0.15 & 1,540 & 0.48 & 0.21 && 5,920 & 0.62 & 0.23 & 0.27 && 28,640 & 1.03 & 0.27 & 0.46 \\
W4 & 0.86$\pm$0.28 & 0.34$\pm$0.03 & \nodata & 1,600 & 0.36 & 0.12 && 4,560 & 0.34 & 0.13 & 0.12 && 62,330 & 1.60 & 0.43 & 0.55 \\
Eagle & 0.37$\pm$0.25 & 0.50$\pm$0.14 & 0.35$\pm$0.18 & 2,110 & 0.28 & 0.06 && 31,110 & 1.36 & 0.50 & 0.32 && 85,840 & 1.28 & 0.43 & 0.30 \\
W33 & 0.82$\pm$0.18 & 0.53$\pm$0.13 & 1.00$\pm$0.52 & 1,050 & 0.27 & 0.06 && 8,630 & 0.74 & 0.27 & 0.16 && 84,340 & 2.48 & 0.66 & 0.55 \\
RCW~38 & 0.42$\pm$0.19 & 0.54$\pm$0.09 & 0.57$\pm$0.30 & 7,380 & 0.95 & 0.21 && 30,030 & 1.29 & 0.47 & 0.44 && 154,970 & 2.29 & 0.61 & 0.50 \\
W3 & 0.44$\pm$0.27 & 0.62$\pm$0.14 & 0.63$\pm$0.33 & 4,160 & 0.88 & 0.17 && 16,950 & 1.20 & 0.44 & 0.23 && 130,710 & 3.18 & 0.85 & 0.60 \\
NGC~3576\tablenotemark{a} & ${>}0.20$ & 0.74$\pm$0.05 & 0.87$\pm$0.46 & 3,060 & 1.05 & 0.17 && 14,400 & 1.60 & 0.59 & 0.41 && 87,340 & 3.28 & 0.87 & 0.54 \\
NGC~6334\tablenotemark{a} & ${>}0.65$ & 1.21$\pm$0.28 & 0.61$\pm$0.32 & 14,990 & 1.78 & 0.17 && 62,090 & 2.46 & 0.90 & 0.24 && 427,740 & 5.82 & 1.55 & 0.56 \\
G29.96--0.02\tablenotemark{a} & ${>}0.32$ & 1.77$\pm$0.39 & 3.36$\pm$1.75 & 1,020 & 1.76 & 0.12 && 4,550 & 2.61 & 0.96 & 0.88 && 46,620 & 9.17 & 2.44 & 0.62 \\
NGC~6357 & 2.47$\pm$0.26 & 1.93$\pm$0.15 & 1.40$\pm$0.73 & 21,050 & 2.98 & 0.18 && 100,360 & 4.85 & 1.78 & 0.29 && 460,600 & 7.89 & 2.10 & 0.45 \\
M17 & 1.99$\pm$0.21 & 1.99$\pm$0.57 & 1.62$\pm$0.84 & 27,690 & 4.10 & 0.24 && 115,980 & 5.72 & 2.10 & 0.34 && 441,940 & 7.49 & 1.99 & 0.44 \\
G333 & 1.03$\pm$0.26 & 2.14$\pm$0.58 & 2.42$\pm$1.26 & 9,190 & 2.78 & 0.15 && 47,350 & 4.77 & 1.75 & 0.26 && 300,920 & 10.40 & 2.76 & 0.57 \\
W43 & 3.01$\pm$0.62 & 2.32$\pm$0.61 & 12.53$\pm$6.53 & 1,890 & 2.56 & 0.13 && 10,100 & 4.55 & 1.67 & 0.23 && 70,790 & 10.95 & 2.91 & 0.56 \\
RCW~49 & 5.41$\pm$1.23 & 4.03$\pm$0.91 & 5.74$\pm$2.99 & 7,220 & 6.25 & 0.18 && 36,840 & 10.63 & 3.89 & 0.31 && 172,710 & 17.10 & 4.54 & 0.50 \\
G305\tablenotemark{a} & ${>}2.22$ & 6.13$\pm$1.67 & 5.26$\pm$2.22 & 12,380 & 7.13 & 0.14 && 57,890 & 11.12 & 4.07 & 0.21 && 425,800 & 28.07 & 7.46 & 0.54 \\
W49A\tablenotemark{a} & ${>}4.62$ & 6.97$\pm$1.62 & 8.36$\pm$4.36 & 970 & 5.65 & 0.10 && 4,780 & 9.26 & 3.39 & 0.16 && 60,760 & 40.38 & 10.73 & 0.68 \\
Carina & 6.14$\pm$2.24 & 7.81$\pm$2.37 & 6.33$\pm$3.30 & 34,900 & 11.29 & 0.17 && 203,640 & 21.96 & 8.05 & 0.33 && 632,930 & 23.42 & 6.22 & 0.35 \\
W51A\tablenotemark{a} & ${>}3.43$ & 7.98$\pm$2.17 & 7.31$\pm$3.81 & 8,620 & 10.03 & 0.15 && 40,450 & 15.68 & 5.74 & 0.23 && 299,440 & 39.83 & 10.58 & 0.59 \\
NGC~3603 & 10.4$\pm$1.47 & 10.31$\pm$2.86 & 6.79$\pm$3.54 & 5,840 & 12.79 & 0.16 && 38,870 & 28.39 & 10.40 & 0.34 && 161,700 & 40.52 & 10.77 & 0.49 \\
\hline \hline
\end{tabular}
\tablenotetext{a}{Regions for which massive stellar content remains incompletely cataloged; reported SFR$_{\rm LyC}$ is a lower limit.}
\label{table:model_luminosity_SFRs}
\end{table*}
\begin{figure*}
\includegraphics[width=0.9\linewidth,clip,trim=1cm 11.8cm 2cm 6cm]{compare_SFRs_upd.pdf}
\caption{The top panel shows comparisons (from left to right) of SFR$_{\rm TIR}$, SFR$_{24}$, SFR$_{70}$, and SFR$_{\rm radio}$ to SFR$_{\rm LyC}$. The bottom panel shows the ratio of the monochromatic luminosity-predicted SFR to SFR$_{\rm LyC}$, as a function of SFR$_{\rm LyC}$. The dotted lines show one-to-one correlations, not fits to the data. Regions with incompletely cataloged massive stellar populations or negligible obscuration are marked with open circles (as in Figure~\ref{figure:SED_vs_OB}) and excluded from the calculations of the calibration constants (see text). Colors are as in Figure~\ref{figure:Lbol_Lmon}.}
\label{figure:SFRs}
\end{figure*}
\begin{figure}
\includegraphics[width=1\linewidth,clip,trim=1.5cm 11cm 7cm 2cm]{compare_SFRs_TIR_upd.pdf}
\caption{Monochromatic SFR indicators for 24~\micron\ ({\em top}), 70~\micron\ ({\em middle}), and the radio continuum ({\em bottom}) normalized to SFR$_{\rm TIR}$ measured from $L_{\rm TIR}$. Black points show average values. Symbols and colors are the same as in Figure~\ref{figure:SFRs}. The gray dotted line shows a perfect agreement between the monochromatic SFR and SFR$_{\rm TIR}$. }
\label{figure:SFRdiff_vs_Lbol}
\end{figure}
In Figure~\ref{figure:SFRdiff_vs_Lbol} we compare each of the monochromatic SFR indicators directly with SFR$_{\rm TIR}$, which has the advantage of being independent of distance or knowledge of ionizing stellar content. Excluded regions in our sample include both very young, highly embedded \HII regions for which the massive stellar content is difficult to spectroscopically catalog, and older, unobscured regions that have largely dispersed their dust. Highly-embedded \HII regions are strong far-IR emitters due to their large fraction of cold dust that remains shielded from the nascent ionizing clusters, while unobscured regions have little to no warm dust remaining, although they may externally illuminate nearby cold, molecular cloud fragments.
\subsection{Comparison to Extragalactic Tracers of Dust-Obscured SFRs}
Using our value of $c_{24}$ yields SFR$_{24}$ estimates (as in Table~\ref{table:model_luminosity_SFRs} and Figure~\ref{figure:SFRs}) that are comparable to those derived from the sub-linear calibration from \cite[][their equation~9]{Calzetti+07}. Although there is considerable variation among individual MSFRs, the Galactic and resolved extragalactic tracers can be regarded as generally consistent.
\citet{Calzetti+10} warn that SFR calibrations based on $L_{24}$ alone break down when applied to entire galaxies with $L_{24}<5\times 10^{43}$~erg~s$^{-1}$, a luminosity range that begins one order of magnitude above the brightest MSFRs in our sample. Our linear calibration constant $c_{24}$ is ${\sim}40$\% higher than the analogous extragalactic calibrations \citep[see references in][]{Calzetti+10}, albeit with large uncertainties. This is sensible because the average $L_{24}/L_{\rm TIR}\sim0.25$ (Figure~\ref{figure:Lbol_Lmon}) agrees with the upper end of the range of $L_{24}/L_{\rm TIR}$ measured for whole galaxies by \citet{Calzetti+10}. The mid-IR SEDs of our MSFRs thus resemble those of dusty, starburst galaxies, where the 24~\micron\ emission is completely dominated by heating from young stars. The average star-forming galaxy has a ratio that is ${\sim}40\%$ lower. Astrophysically, this discrepancy is explained by the increasing contribution from dust heated by older stellar populations to $L_{24}$ as SFR decreases, when the IR luminosity is measured using whole-galaxy apertures.
The effect of increasing the aperture size over which the IR SEDs are measured becomes far more pronounced as the IR wavelength considered increases, because older stellar populations heat dust to lower temperatures than young stellar populations. Our 70 \micron\xspace calibration constant ($c_{70}$) is higher than the value measured by \citet{Calzetti+10} for whole galaxies by a factor of $\sim$5.5, a much greater discrepancy than we find for $c_{24}$. Meanwhile, our result that on average $L_{70}/L_{\rm TIR}=55\%$ is in excellent agreement with their measurements of star-forming galaxies.\footnote{Excluding luminous IR galaxies (LIRGs), which have systematically cooler dust temperatures and hence elevated $L_{70}$ and $L_{160}$ at the expense of $L_{24}$ compared to normal galaxies or most of our MSFRs.}
\citet{Li+13} explored the relationship between $c_{70}$ and physical aperture size and found that an adjustment to $c_{70}$ is required to ensure consistency of SFR$_{70}$ on different spatial scales. In Figure~\ref{figure:SFR_size}, we reproduce their Figure~9, including our significantly smaller, individual MSFRs (which have ${\sim}15$ pc typical physical aperture size). Our value of $c_{70}$, calibrated to SFR$_{\rm LyC}$ from the cataloged massive stellar populations within our MSFRs, is close to the value predicted by extrapolating the trend from the calibrations of \citet{Calzetti+10}, \citet{Li+10}, and \citet{Li+13} to smaller spatial scales.
Dust absorption produces a 50\% reduction in the ionizing photon rates compared to the production rates of Lyman continuum photons in the most luminous Galactic \HII regions. Such IR-luminous regions dominate the extragalactic calibrations of obscured star formation. This effect is potentially pernicious because it is very difficult to separate dust-absorbed from obscured star formation without knowing the ionizing stellar population. Indeed, the terms ``absorbed'' and ``obscured'' in this context are routinely used interchangeably in the extragalactic literature. Here we make the same distinction as did \citet{McKee+Williams97}. Lyman continuum photons ``absorbed'' by dust grains within \HII regions do not contribute to the ionization of the gas, hence this absorption reduces both the radio free-free and H$\alpha$ luminosity by the same factor. By contrast, ``obscuration'' refers to the effects of both absorption and scattering of photons below the Lyman limit by dust within or along the line-of-sight to an \HII region, which reduces the observed H$\alpha$ but does not affect the radio continuum. The empirical attenuation corrections typically applied to recombination-line studies of external galaxies account for the obscuration affecting visible/near-IR photons \citep[see the definition of ``attenuation'' provided by][]{Kennicutt+09}, but if they fail to completely correct for Lyman continuum photons absorbed by dust within the \HII regions the resulting SFR calibrations will underestimate the true obscured SFRs.
This dust absorption systematic is most important for smaller, IR-bright star-forming regions (such as our sample) and likely becomes insignificant for most galaxy-wide studies.\footnote{In the case of starburst galaxies or (U)LIRGs the potential impact is unclear. In the latter case it is generally preferable to assume 100\% obscured star-formation and hence base SFRs directly on $L_{\rm TIR}$ from population synthesis models, avoiding intermediate calibrations based on H$\alpha$.} For larger-scale galactic sub-regions and more evolved stellar populations, the contributions of unobscured \HII regions and Lyman continuum photons that have escaped from obscured \HII regions come to dominate the measured SFRs.
\begin{figure}
\includegraphics[width=0.9\linewidth,clip,trim=2.5cm 13.0cm 2.5cm 3cm]{SFR_size.pdf}
\caption{The SFR$_{70}$ calibration constant $c_{70}$ as a function of physical size. The solid squares show measurements of the calibration constant from external galaxies and large, extragalactic star-forming regions. Also shown are the predicted calibration constants for continuous star-forming populations \citep{Li+10,Li+13}. The estimated average SFR$_{70}$ calibration constant for our regions (with an average size of $\sim$15 pc) is shown by the solid circle.}
\label{figure:SFR_size}
\end{figure}
\section{Summary \& Conclusions}
We have presented a comprehensive study of the globally integrated IR-radio emission of 28 Galactic MSFRs. We fit the 3.6~\micron--10~mm SEDs constructed from aperture photometry on {\it Spitzer}\xspace, {\it MSX}\xspace, {\it IRAS}\xspace, and {\it Herschel}\xspace images plus {\it Planck}\xspace extended sources with models consisting of one or two \citet{Draine+Li07} dust components, one cold blackbody component, and a power-law continuum. From our SED model fits and adopted distances to each MSFR we derive the total IR luminosity $L_{\rm TIR}$ and ionizing photon rate $N_C^{\prime}$ required to maintain each radio \HII region. Our sampled MSFRs span three orders of magnitude in luminosity, ranging over $10^4~L_{\sun}\la L_{\rm TIR}\la 2\times 10^7~L_{\sun}$ in dust-reprocessed total infrared luminosity and $3\times 10^{47}~\rm{s}^{-1} \la N_C^{\prime}\la 5\times 10^{50}~\rm{s}^{-1}$ in ionizing photon rate required to maintain the observed radio \HII regions.
Modeling the IR+radio SED simultaneously offers considerable advantages over studying either the IR or radio emission alone. The free-free continuum is negligible at shorter mid-IR wavelengths. Although the true ionized gas spectrum departs from a pure power law at short wavelengths, this is unlikely to significantly impact our results (e.g., the power law continuum contributes at most a few percent of the total flux at 3.6~\micron; see Figure~\ref{figure:SED_examples}). However, the incorporation of the Br$\alpha$ emission-line flux at 4.5 \micron, constrained by the radio spectrum, has enabled an improved (although still not perfect) fit to the Spitzer [4.5] mid-IR band compared to models based on dust emission alone \citep{Stephens+14}.
We searched the literature to compile lists of the known massive stellar population in each MSFR to estimate the stellar bolometric luminosity ($L_{\star}$) and emitted Lyman continuum photon rate ($N_C$). We balance the ``energy budget'' in each MSFR in terms of the ratios $L_{\rm TIR}/L_{\star}$ and $N_C^{\prime}/N_C$. In 10/28 MSFRs the emergent dust-processed luminosity in the SED exceeds the bolometric luminosity input by the cataloged stars, leading us to conclude that the census of the massive stellar population is incomplete.
Our main results are summarized as follows:
\begin{enumerate}
\item A significant fraction ($f_{C,\rm{abs}}$) of Lyman continuum photons emitted by massive stars is absorbed by dust before contributing to the ionization of \HII regions. This absorption increases with bolometric luminosity; $f_{C,\rm{abs}}=34\%$ averaged across the 14 MSFRs for which it could be calculated and increases to $51\%$ averaged over the 4 most luminous MSFRs in our sample, which average $L_{\rm TIR}=10^7$~$L_{\odot}$\xspace\ each (Table~\ref{table:subgroup_mean}). This empirical result agrees well with the theoretical predictions of \citet{McKee+Williams97}, who calculated that the dust opacity in giant \HII regions increases with ionizing photon luminosity, reaching an average $\langle f_{C,\rm{abs}}\rangle=0.46$ for Galactic radio \HII regions with $N_{C}^{\prime}>1.5\times 10^{50}$~s$^{-1}$.
\item We calculate an average PAH fraction from our dust models and find that it is systematically higher in regions that are powered by a single O6-type star or later, with lower PAH fractions observed in regions with fully populated upper IMFs. The radiation fields in these lower-luminosity \HII regions are relatively weak and inefficient at destroying PAH molecules.
\item We calibrate SFRs based on the monochromatic luminosities $L_{24}$ and $L_{70}$ from our SED models against the Lyman continuum photon rates of the cataloged massive stars in each region. We find that standard extragalactic calibrations of monochromatic SFRs based on population synthesis models are generally consistent with our values, although there is large variation among the 28 individual MSFRs in our sample. Our results are consistent with the \citet{Calzetti+07} 24~\micron\ calibration, and an extrapolation of the \citet{Li+13} 70~\micron\ SFR to the smaller size scales of the Galactic regions is broadly consistent with our SFRs.
\item The preferred monochromatic luminosity for measuring obscured SFRs is $L_{70}$, which captures, on average, $52\%$ of $L_{\rm TIR}$ in our regions, a result that is in excellent agreement with comparable extragalactic studies \citep[e.g.,][]{Calzetti+10}.
\end{enumerate}
SFR studies using Galactic radio \HII regions have long included corrections for Lyman continuum photons lost to dust absorption \citep{SBM78,Inoue+01,Murray+Rahman10,Lee+12}. Such corrections are typically not incorporated into extragalactic calibrations, as most H$\alpha$ emission observed on galaxy-wide scales originates from regions with negligible dust. Other SFR tracers, such as integrated UV emission, that do not rely on Lyman continuum photon rates avoid this issue entirely. However, dust absorption becomes significant for spatially-resolved studies of obscured star formation. While current, widely-used calibrations of obscured SFRs account for Lyman continuum photons that escape into the diffuse ISM by using a combination of recombination lines and IR broadband emission \citep[e.g.,][]{Calzetti+07,Kennicutt+09}, these calibrations could be biased toward low SFRs at the smallest spatial scales and/or highest dust obscurations.
The IR and radio SFR calibrations presented in this work are preferred for application to Milky Way studies over the analogous extragalactic calibrations, given the orders-of-magnitude differences in timescales, physical sizes, and luminosities separating whole galaxies from individual Galactic star-forming regions. Systematics due to heating of dust by older stellar populations are most pronounced for the total IR or $L_{70}$ SFR tracers. The \citet{Calzetti+07} $L_{24}$ calibration, which was based on individual IR-bright knots in nearby galaxies, is most consistent with our $L_{24}$ calibration, and both appear to give reasonable results when applied to Galactic regions with sufficiently high IR luminosities \citep[see][whose sample of Galactic star-forming regions overlaps with the low-luminosity end of our sample]{VEH16}. Even within the Milky Way, our calibrations would likely break down when applied to star-forming clouds that are either too low-mass or too early in their evolution to have formed massive stars ionizing radio \HII regions \citep{V+E13,Povich+16}.
Thermal radio continuum has been relied upon over the past four decades to measure the total ionizing photon rate of the Milky Way and hence the Galactic SFR \citep[][and references therein]{Chomiuk+Povich11,Kennicutt+12}. We have demonstrated, across nearly three orders of magnitude in luminosity, that the average ionizing photon rate required to maintain the ionization of radio \HII regions is only one-third of the Lyman continuum photon rate emitted by the massive stellar content of these regions. It is therefore important to account for both the escape of Lyman continuum photons from compact radio \HII regions and their absorption by dust within the \HII regions to derive accurate SFRs or simply to infer the ionizing stellar populations within radio \HII regions. For example, the work by \citet{Murray+Rahman10} to measure the Galactic SFR using free-free emission measured by the {\it Wilkinson Microwave Anisotropy Probe ({\it WMAP})} used the calculations of \citet{McKee+Williams97} to correct their Lyman continuum photon rates for dust absorption. This absorption correction seems appropriate, and because {\it WMAP} measured free-free emission across Galactic scales the escape of ionizing photons would be negligible. For smaller, less-luminous \HII regions such as those studied by \citet{VEH16}, neither absorption nor escape of Lyman continuum photons can be safely neglected.
Our comparisons of Galactic and extragalactic SFR calibrations required that we assume a standard conversion of Lyman continuum photon rates to absolute SFR based on population synthesis models. While it is encouraging to see convergence between the IR nebular SFR tracers in the Galactic and extragalactic cases, \citet{Chomiuk+Povich11} warned that the assumed star formation timescale in this conversion is likely to be too long by a factor of a few compared to the actual duration of star formation in individual, IR-bright regions, hence such calibrations likely underestimate the true absolute SFRs. In future work we will measure SFRs directly from the spatially resolved low- and intermediate-mass stellar populations to provide a more direct, empirical SFR calibration for the IR and radio nebular tracers.
\acknowledgements
The authors would like to thank the referee Neal Evans for comments and suggestions that significantly improved this manuscript. The authors thank Karin Sandstrom for helpful discussions about SED modeling and Roberta Paladini for her assistance in obtaining the {\it Herschel}\xspace data. This work was supported by the National Science Foundation under award CAREER-1454333 (PI: M. S. Povich). This work is based in part on observations made with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on observations made with {\it Herschel}, an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This research made use of data products from the {\it Midcourse Space Experiment}, with data processing funded by the Ballistic Missile Defense Organization with additional support from NASA Office of Space Science. This research used data products produced by the {\it Planck}\xspace Collaboration. This research has made extensive use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the SVO Filter Profile Service supported from the Spanish MINECO through grant AyA2014-55216. \software{mpfitfun \citep{Markwardt09}}
\section{Introduction}
As machine learning continues to be more widely used for applications with societal impact such as credit decisioning, predictive policing, and employment applicant screening, practitioners face regulatory, ethical, and legal challenges to prove whether or not their models are fair~\cite{ainow2019}.
To provide quantitative tests of model fairness, the practitioners further need to choose between multiple definitions of fairness that exist in the machine learning literature~\cite{calders2009indep, zliobaite2015relation,narayanan2018translation}. Among them is a class of definitions called \emph{group fairness}, which measures how a group of individuals with certain protected attributes are treated differently from other groups. This notion is widely studied as a concept of \emph{disparate impact} in the legal context, and one specific instance of this notion was enforced as a law for fair employment process back in 1978~\cite{biddle2006adverse}. From a technical point of view however, several notions of group fairness have been shown to conflict with one another~\cite{kleinberg2016inherent, chouldechova2017fair}, sometimes with a necessary cost in loss of accuracy~\cite{liu2019cg}.
Such considerations complicate the practical development and assessment of machine learning models designed to satisfy group fairness, as the conditions under which these trade-offs must necessarily occur can be too abstract to understand. Previous works on these trade-offs have been presented in ad hoc and definition-specific manner, which further calls for a more general perspective addressing the trade-offs in practice.
As an example, suppose an engineer is responsible for training a loan prediction model from a large user dataset,
subject to mandatory group fairness requirements shaped by regulatory concerns.
She has many choices for how to train this fair model,
with fairness enforced before~\cite{kamiran2010discrimination, zemel2013learning, madras2018learning, samadi2018price, song2019learning, tan2019learning}, during~\cite{zafar2015fairness, zafar2017fairness}, or after~\cite{dwork2012fairness, feldman2015di, hardt2016equality} training.
However, she must resort to trial and error to determine which of these myriad approaches, if any, will produce a compliant model with sufficient
performance\footnote{In this work, \emph{performance} refers to classical metrics derived from the confusion matrix, e.g., accuracy, precision and fairness notions are not part of it.}
to satisfy business needs.
It may even turn out that despite her best efforts,
the fairness constraints set by the regulators are actually impossible to satisfy to begin with,
due to limitations intrinsic to the prediction task and data at hand. If there were a tool to understand the potential trade-offs exhibited by the model, even before training, it would be easier for multiple parties to effectively reconcile the conflicting components in designing fair classifiers.
Motivated by such practical considerations, we propose the \emph{FACT (\textbf{FA}irness-\textbf{C}onfusion \textbf{T}ensor) diagnostic}
for exploring the trade-offs involving group fairness: the diagnostic provides a general framework under which the practitioners can understand both fairness--fairness trade-offs and fairness--performance trade-offs. At the core of our diagnostic lies the \emph{fairness--confusion tensor}, which is the confusion matrix divided along an additional axis for protected attributes. The FACT diagnostic first expresses the majority of group fairness notions as linear/quadratic functions of the elements of this tensor. The simplicity of these functions makes it easy for them to be naturally integrated into a class of optimization problems over the elements of the tensor (not over the model parameters), which we call \emph{performance--fairness optimality problem{}} (PFOP). It essentially considers the geometry of valid fairness--confusion tensor{}s that satisfy a specified set of performance and/or fairness conditions.
By noting that many settings involve only linear notions of fairness, in this work we focus on \emph{least-squares accuracy--fairness optimality problem{}} (LAFOP) and \emph{model-specific least-squares accuracy--fairness optimality problem{}} (MS-LAFOP), which are specific instantiations of PFOP, each representative of model-agnostic and model-specific scenarios. In particular, for the model-agnostic case, the diagnostic allows for a comparative analysis of the \emph{relative} difficulty of learning a classifier under additional group fairness constraints imposed. This difficulty is interpreted with respect to the Bayes error, which is the inherent difficulty of the fairness-unconstrained learning problem, hence a natural reference point.
Our contributions are:
\begin{enumerate}
\item to demonstrate how fairness--confusion tensor{} characterizes the majority of group fairness definitions in the literature as linear or quadratic functions, whose simplicity can be leveraged to formulate optimization problems suited for trade-off analysis,
\item to formulate the \textsc{FACT}\ diagnostic as a PFOP, LAFOP, and MS-LAFOP over the fairness--confusion tensor{}, enabling both model-agnostic and model-specific analysis of fairness trade-offs,
\item to provide a general understanding of group fairness incompatibility, which simplifies the existing results in the literature and extends them to new types,
\item to demonstrate the use of the FACT diagnostic on synthetic and real datasets, e.g. how it can be used for diagnosis of relative influence of the fairness notions on performance and other fairness conditions, and how it can be used as a post-processing method for designing fair classifiers.
\end{enumerate}
\section{Related Work}%
\label{sec:relatedwork}
\textbf{Fairness--confusion tensor} is not a completely new notion -- several work has implicitly mentioned it, mostly disregarding it as a simple computational tool that eases the computation on an implementation level~\cite{bellamy2018ai, celis2019classification}. It is also a natural object considered in several post-processing methods in fairness~\cite{hardt2016equality, pleiss2017fairness}, a group of algorithms that fine-tune a trained model to mitigate the unfairness while keeping the performance change minimal. Here we take a closer look at the fairness--confusion tensor{} itself and study how this object naturally brings together several notions of group fairness, simplifying and generalizing the analysis of inherent trade-offs within.
\textbf{Quantitative definitions of group fairness} exist in many different variations~\cite{narayanan2018translation, kleinberg2016inherent, chouldechova2017fair, dwork2012fairness, hardt2016equality, calders2010dp, berk2018fairness} but few work exists to categorize these notions with a broader perspective encompassing the trade-off schemes.
\citet{verma2018fairness} categorized the existing group fairness definitions based on entries and rates derived from the fairness--confusion tensor but did not explore any trade-offs and incompatibilities within.
Our work extends this effort and provides a versatile geometric formalism to study the trade-offs.
\textbf{Fairness--performance trade-offs} have been studied in many specific cases~\cite{calders2009indep, zliobaite2015relation,kamiran2010discrimination, feldman2015di, menon2018cost, liu2019cg, zhao2019inherent},
for limited definitions of fairness, performance, and models. To our knowledge, these trade-offs have not been studied in the general way we present below.
\citet{zafar2015fairness, zafar2017fairness} presented an optimization-based analysis of the trade-offs,
albeit over the parameter space of a particular model.
\textbf{Fairness--fairness trade-offs} describe the incompatibility of multiple notions of group fairness~\cite{kleinberg2016inherent,chouldechova2017fair,pleiss2017fairness,berk2018fairness} without some strong assumptions about the data and the model. Previous incompatibility results have been presented mostly in ad hoc and definition-specific manner, which our diagnostic addresses with a more general perspective for understanding incompatibilities. We show a general incompatibility result involving Calibration fairness condition, which naturally implies the result in \citet{kleinberg2016inherent} along with many other new ones.
To the best of our knowledge,
our work is the first to provide a systematic approach to diagnose both fairness--fairness and fairness--performance trade-offs together for group fairness under the same formalism.
\setlength\arraycolsep{2pt}
\section{The Fairness--confusion Tensor}
\label{sec:linear}
\begin{table*}[!ht]
\small
\resizebox{\textwidth}{!}{
\begin{threeparttable}[t]
\centering
\begin{tabular}{l l l}
Name of fairness & Definition and linear system & Terms in fairness--confusion tensor \\
\toprule
Demographic parity (DP) &
${\sf Pr}(\hat{y} = 1 | \ensuremath{\mathbf a} = 1) = {\sf Pr}(\hat{y} = 1 | \ensuremath{\mathbf a} = 0)$
\\ &
$\ensuremath{\mathbf A}_\textsc{dp} = \frac 1 \ensuremath{N} \begin{pmatrix}
\ensuremath{N}_0 & 0 & \ensuremath{N}_0 & 0 & -\ensuremath{N}_1 & 0 & -\ensuremath{N}_1 & 0
\end{pmatrix}$ &
\DPpic
\\
Equality of opportunity (EOp)\cite{hardt2016equality} &
${\sf Pr}(\hat{y} = 1 | y=1, \ensuremath{\mathbf a} = 1) = {\sf Pr}(\hat{y} = 1| y = 1 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{eop} = \frac 1 \ensuremath{N} \begin{pmatrix}
\ensuremath{M}_0 & 0 & 0 & 0 & -\ensuremath{M}_1 & 0 & 0 & 0
\end{pmatrix}
$ &
\EOppic
\\
Predictive equality (PE)\cite{chouldechova2017fair} &
${\sf Pr}(\hat{y} = 1 | y=0 , \ensuremath{\mathbf a} = 1) = {\sf Pr}(\hat{y} = 1 |y=0 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{pe} = \frac 1 \ensuremath{N} \begin{pmatrix}
0 & 0 & \ensuremath{N}_0 - \ensuremath{M}_0 & 0 & 0& 0 & -\ensuremath{N}_1 + \ensuremath{M}_1&0
\end{pmatrix}$ &
\PEpic
\\
Equalized odds (EOd)\cite{hardt2016equality} &
EOp $\land$ PE &
\EOppic $\land$ \PEpic
\\
Equal false negative rate (EFNR) \tnote{2} &
${\sf Pr}(\ensuremath{\hat y} = 0 | \ensuremath{y} = 1, \ensuremath{\mathbf a} = 1) = \Pr (\ensuremath{\hat y} = 0 | \ensuremath{y} = 1, \ensuremath{\mathbf a} = 0)$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{efnr} = \frac{1}{\ensuremath{N}} \begin{pmatrix}
0 & M_0 & 0 & 0 & 0 &-M_1 & 0 &0
\end{pmatrix}$ &
\EFNRpic
\\
Calibration within groups (CG)\cite{kleinberg2016inherent} &
${\sf Pr}(y=1 | P_\theta(\mathbf{x}) = s, \ensuremath{\mathbf a} = 1) = {\sf Pr}(y=1 | P_\theta(\mathbf{x}) = s, \ensuremath{\mathbf a} = 0) = s$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{cg} = \CGmat$ &
\CGpic
\\
Positive class balance (PCB)\cite{kleinberg2016inherent} &
$\mathbb{E}(P_\theta | y=1 , \ensuremath{\mathbf a} = 1) = \mathbb{E}(P_\theta | y=1 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{pcb} =
\min_\ensuremath{a}(\ensuremath{M}_\ensuremath{a})
\begin{pmatrix}
\frac{v_1}{\ensuremath{M}_1} & \frac{v_0}{\ensuremath{M}_1} & 0 & 0 & -\frac{v_1}{\ensuremath{M}_0} & -\frac{v_0}{\ensuremath{M}_0} & 0 & 0 \end{pmatrix} $ &
\PCBpic
\\
Negative class balance (NCB)\cite{kleinberg2016inherent} &
$\mathbb{E}(P_\theta | y=0 , \ensuremath{\mathbf a} = 1) = \mathbb{E}(P_\theta | y=0 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{ncb} = \min_\ensuremath{a}(\ensuremath{N}_\ensuremath{a} - \ensuremath{M}_\ensuremath{a}) \begin{pmatrix}
0 & 0 & \frac{v_1}{\ensuremath{N}_1 - \ensuremath{M}_1} & \frac{v_0}{\ensuremath{N}_1 - \ensuremath{M}_1} & 0 & 0 & -\frac{v_1}{\ensuremath{N}_0 - \ensuremath{M}_0} & -\frac{v_0}{\ensuremath{N}_0 - \ensuremath{M}_0}
\end{pmatrix}$ &
\NCBpic
\\
Relaxed Equalized Odds (REod)\cite{pleiss2017fairness} &
$\alpha_0 FPR_0 + \beta_0 FNR_0 = \alpha_1 FPR_1 + \beta_1 FNR_1$ &
\\ &
$\ensuremath{\mathbf A}_\textsc{REOd} = \begin{pmatrix}
0 & \frac{\beta_1}{\ensuremath{M}_1} & \frac{\alpha_1}{\ensuremath{N}_1 - \ensuremath{M}_1} & 0 & 0 & -\frac{\beta_0}{\ensuremath{M}_0} & -\frac{\alpha_0}{\ensuremath{N}_0 - \ensuremath{M}_0} & 0
\end{pmatrix} / N$
& \REOdPic
\\
\midrule
Predictive parity (PP)\cite{chouldechova2017fair} &
${\sf Pr}(y = 1 | \hat{y}=1 , \ensuremath{\mathbf a} = 1) = {\sf Pr}(y = 1 | \hat{y}=1 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\frac 1 2 \ensuremath{\mathbf z}^T \ensuremath{\mathbf B}_\textsc{pp} \ensuremath{\mathbf z} = (TP_1 FP_0 - TP_0 FP_1)/N^2$ &
\PPpic
\\
Equal false omission rate (EFOR) \tnote{1} &
${\sf Pr}(y = 1 | \hat{y}=0 , \ensuremath{\mathbf a} = 1) = {\sf Pr}(y = 1 | \hat{y}=0 , \ensuremath{\mathbf a} = 0)$ &
\\ &
$\frac 1 2 \ensuremath{\mathbf z}^T \ensuremath{\mathbf B}_\textsc{efor} \ensuremath{\mathbf z} = (TN_1 FN_0 - TN_0 FN_1)/N^2$ &
\EFORpic
\\
Conditional accuracy equality (CA)\cite{berk2018fairness} &
PP $\land$ EFOR &
\PPpic $\land$ \EFORpic
\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1]To our knowledge, EFOR has not been described in literature in isolation, but is used in the definition of conditional accuracy equality (CA)\cite{berk2018fairness}.
\item[2]Defined implicitly in \cite{chouldechova2017fair}.
\end{tablenotes}
\end{threeparttable}
}
\caption{
Some common group fairness definitions and corresponding abbreviations used throughout the paper in terms of linear functions $\ensuremath{\phi}(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A} \ensuremath{\mathbf z}$
or quadratic functions $\ensuremath{\phi}(\ensuremath{\mathbf z}) = \frac 1 2 \ensuremath{\mathbf z}^T \ensuremath{\mathbf B} \ensuremath{\mathbf z}$ that appear in the performance--fairness optimality problem{} \eqref{eqn:PFOP}.
There are two groups separated by the horizontal line:
those that are specified by linear functions (above),
or quadratic functions (below).
The graphical notation is described in \Cref{sec:linear}.
$P_\theta$ is the probability produced by a model (parameterized by $\theta$) of $\hat y =1$.
The fairness functions $\ensuremath{\phi}$ are uniquely defined only up to a normalization factor and overall sign.
}%
\label{tab:def_fairness}
\end{table*}
Our key insight is that the elements of the fairness--confusion tensor{} encode all the information needed to study many notions of performance and group fairness.
The fairness--confusion tensor{} is simply the stack of confusion matrices for each protected attribute $\ensuremath{a}$, as shown in \Cref{tab:fct}.
We focus on the simplest case, with one binary protected attribute $\ensuremath{a}\in\ensuremath{\{0, 1\}}$, and a binary classifier $\ensuremath{\hat y}\in\ensuremath{\{0, 1\}}$
for a binary prediction label $\ensuremath{y}\in\ensuremath{\{0, 1\}}$.\footnote{The arguments generalize to multiple and non-binary protected attributes with high-dimensional tensors.}
\begin{table}[!ht]
\small
\centering
\begin{tabular}{|c||c|c|}
\toprule
$a=1$ & $y=1$ & $y=0$\tabularnewline
\midrule
$\hat{y}=1$ &TP$_{1}$ & FP$_{1}$\tabularnewline
\midrule
\midrule
$\hat{y}=0$ &FN$_{1}$ & TN$_{1}$\tabularnewline
\bottomrule
\end{tabular} \quad
\begin{tabular}{|c||c|c|}
\toprule
$a=0$ & $y=1$ & $y=0$\tabularnewline
\midrule
$\hat{y}=1$ & TP$_{0}$ & FP$_{0}$\tabularnewline
\midrule
\midrule
$\hat{y}=0$ & FN$_{0}$ & TN$_{0}$\tabularnewline
\bottomrule
\end{tabular}
\caption{The fairness--confusion tensor{},
showing the two planes corresponding to the confusion matrix for each of the favored ($\ensuremath{a} = 1$)
and disfavored groups ($\ensuremath{a} = 0$).}%
\label{tab:fct}
\end{table}
Let $TP_a, FP_a, FN_a, TN_a$ denote the elements of the fairness--confusion tensor, with subscripts indicating the group $\ensuremath{a}$; let $\ensuremath{N}$ be the number of data points,
$\ensuremath{N}_\ensuremath{a} = TP_\ensuremath{a} + FN_\ensuremath{a} + FP_\ensuremath{a} + TN_\ensuremath{a}$
be the number of data points in each group $\ensuremath{a} \in \ensuremath{\{0, 1\}}$, and
$\ensuremath{M}_\ensuremath{a} = TP_\ensuremath{a} + FN_\ensuremath{a}$
be the number of positive-class instances ($\ensuremath{y} = 1$) for each group.
Assume $\ensuremath{N}$, $\ensuremath{N}_\ensuremath{a}$ and $\ensuremath{M}_\ensuremath{a}$ are known constants.
Unraveling the fairness--confusion tensor{} into an 8-dimensional vector, we write it as
\begin{equation*}
\ensuremath{\mathbf z} = {(TP_1, F\ensuremath{N}_1, FP_1, T\ensuremath{N}_1, TP_0, F\ensuremath{N}_0, FP_0, T\ensuremath{N}_0)}^T/N,
\end{equation*}
normalized and constrained to lie on $\mathcal K = \{\ensuremath{\mathbf z} \ge 0 : \ensuremath{\mathbf{A}_{\sf const}} \ensuremath{\mathbf z}\ = \ensuremath{\mathbf{b}_{\sf const}}, \|\ensuremath{\mathbf z}\|_1 = 1\}$, where $\ensuremath{\mathbf{A}_{\sf const}}$ and $\ensuremath{\mathbf{b}_{\sf const}}$ encode marginal sum constraints of the dataset (e.g., $TP_a + FN_a = M_a$) in matrix notations:
\begin{align*}
\small
\ensuremath{\mathbf{A}_{\sf const}} &= \begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0
\end{pmatrix}, \\
\ensuremath{\mathbf{b}_{\sf const}} &= {(\ensuremath{N}_1, \ensuremath{M}_1, \ensuremath{N}_0, \ensuremath{M}_0)}^T / N.
\end{align*}
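For concreteness, the following minimal sketch (ours, not taken from any released implementation) builds $\ensuremath{\mathbf z}$ from binary label arrays and states the marginal constraints it must satisfy:
\begin{verbatim}
import numpy as np

def fct_vector(y, yhat, a):
    """z = (TP1, FN1, FP1, TN1, TP0, FN0, FP0, TN0) / N."""
    y, yhat, a = map(np.asarray, (y, yhat, a))
    return np.array([np.mean((a == g) & (yhat == p) & (y == t))
                     for g in (1, 0)
                     for p, t in ((1, 1), (0, 1), (1, 0), (0, 0))])

A_const = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                    [1, 1, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 1, 1, 1],
                    [0, 0, 0, 0, 1, 1, 0, 0]], dtype=float)

def b_const(y, a):
    """(N1, M1, N0, M0) / N, the dataset's marginal sums."""
    y, a = np.asarray(y), np.asarray(a)
    return np.array([np.mean(a == 1), np.mean((a == 1) & (y == 1)),
                     np.mean(a == 0), np.mean((a == 0) & (y == 1))])

# Any tensor built this way satisfies
# np.allclose(A_const @ fct_vector(y, yhat, a), b_const(y, a)).
\end{verbatim}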
We show below that some typical notions of group fairness can be reformulated as simple functions of $\ensuremath{\mathbf z}$, namely in the form $\phi(\ensuremath{\mathbf z}) = 0$.
\textbf{Demographic parity (DP)} states that each protected group should receive positive prediction at an equal rate: ${\sf Pr}(\hat{y} = 1 | \ensuremath{\mathbf a} = 1) = {\sf Pr}(\hat{y} = 1 | \ensuremath{\mathbf a} = 0)$, which is equivalent to $(TP_1 + FP_1)/{\ensuremath{N}_1} = (TP_0 + FP_0)/{\ensuremath{N}_0}$, or also the linear system $\phi(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A}_\textsc{dp} \ensuremath{\mathbf z} = 0$,
where
\begin{equation}
\ensuremath{\mathbf A}_\textsc{dp} = \begin{pmatrix}
\ensuremath{N}_0 & 0 & \ensuremath{N}_0 & 0 & -\ensuremath{N}_1 & 0 & -\ensuremath{N}_1 & 0
\end{pmatrix} / \ensuremath{N}.
\label{eqn:dp}
\end{equation}
The choice of normalization, $1/N$, ensures that the matrix coefficients are in $[0, 1]$. We will refer to these matrices $\ensuremath{\mathbf A}$ that encode information about the fairness conditions as fairness matrices.
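As a quick check, the DP residual is a single dot product with this fairness matrix (a sketch continuing the helpers above):
\begin{verbatim}
import numpy as np

def A_dp(N1, N0, N):
    """Fairness matrix for demographic parity, as defined above."""
    return np.array([N0, 0, N0, 0, -N1, 0, -N1, 0], dtype=float) / N

def dp_violation(z, N1, N0, N):
    # Zero exactly when both groups receive positive predictions
    # at the same rate, i.e. when DP holds.
    return A_dp(N1, N0, N) @ z
\end{verbatim}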
\textbf{Predictive parity (PP)} \cite{chouldechova2017fair}
states that the likelihood of being in the positive class given the positive prediction is the same for each group: ${\sf Pr}(y = 1 | \hat{y}=1 , \ensuremath{\mathbf a} = 1) = {\sf Pr}(y = 1 | \hat{y}=1 , \ensuremath{\mathbf a} = 0)$, which is equivalent to
$\frac{TP_1}{TP_1 + FP_1} =
\frac{TP_0}{TP_0 + FP_0} \Longleftrightarrow \frac{TP_1}{TP_0} =
\frac{FP_1}{FP_0}$. Unlike for DP, the marginal sum constraints do not relate $TP_a$ and $FP_a$, so this notion of fairness is \textit{not} linear in the fairness--confusion tensor.
PP actually can be expressed using a \emph{quadratic} form:
\begin{equation}
\small
\phi(\ensuremath{\mathbf z}) = \frac 1 2 \ensuremath{\mathbf z}^T \ensuremath{\mathbf B}_\textsc{PP} \ensuremath{\mathbf z} = 0,\quad
\ensuremath{\mathbf B}_\textsc{PP} = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\end{equation}
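The quadratic form can likewise be checked numerically; a minimal sketch (the overall sign is immaterial, since the table's fairness functions are defined only up to sign and the condition is $\ensuremath{\phi}(\ensuremath{\mathbf z}) = 0$):
\begin{verbatim}
import numpy as np

B_pp = np.zeros((8, 8))
B_pp[0, 6] = B_pp[6, 0] = -1.0   # couples TP1 with FP0
B_pp[2, 4] = B_pp[4, 2] = 1.0    # couples FP1 with TP0

def phi_pp(z):
    """0.5 z^T B z, which vanishes iff TP1*FP0 == TP0*FP1,
    i.e. iff both groups have the same positive predictive value."""
    return 0.5 * z @ B_pp @ z
\end{verbatim}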
\textbf{Calibration within groups (CG)}~\cite{kleinberg2016inherent},
when specialized to binary classifiers and binary protected classes,
can be written as the system of equations $FN_a = v_0 (FN_a + TN_a); TP_a = v_1 (TP_a + FP_a)$,
where the $v_i$s are scores satisfying $0 \le v_0 < v_1 \le 1$
and have no implicit dependence on any entries of the fairness--confusion tensor.
We can rewrite this condition explicitly as the matrix equation
$\phi(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A}_{\textsc{cg}} \ensuremath{\mathbf z} = 0$ with a fairness matrix
\begin{equation}
\label{eqn:cg}
\small
\ensuremath{\mathbf A}_{\textsc{cg}} = \CGmat.
\end{equation}
\textbf{Equalized odds (EOd)} \cite{hardt2016equality} states that true-positive rates and false-positive rates are the same for both groups, which can be expressed as a linear system $\phi(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A}_{\textsc{EOd}} \ensuremath{\mathbf z} = 0$ with a fairness matrix
\begin{equation}
\ensuremath{\mathbf A}_{\textsc{EOd}} = \frac{1}{N} \begin{pmatrix}
\ensuremath{M}_0 & 0 & 0 & 0 & -\ensuremath{M}_1 & 0 & 0 & 0 \\
0 & 0 & \ensuremath{N}_0 - \ensuremath{M}_0 & 0 & 0& 0 & -\ensuremath{N}_1 + \ensuremath{M}_1&0
\end{pmatrix}
\end{equation}
where each row respectively corresponds to conditions for Equality of Opportunity (EOp)~\cite{hardt2016equality} and Predictive Equality (PE)~\cite{chouldechova2017fair}.
Likewise, vertically stacking multiple fairness matrices results in a fairness matrix corresponding to the conjunction of different fairness notions.
In \Cref{tab:def_fairness} we generalize this formulation to a wide majority of group fairness definitions in the literature, along with their abbreviations used throughout the paper. We find that most of the definitions take either linear or quadratic form with respect to $\ensuremath{\mathbf z}$. We further introduce a graphical notation to help visualize which components of the fairness--confusion tensor{}
participate in the fairness definition.
We depict the fairness--confusion tensor{} as
\begin{tikzpicture}
\draw[help lines, step=4pt] (0,0) grid (8pt, 8pt);
\draw[help lines, step=4pt] (12pt,0) grid (20pt, 8pt);
\end{tikzpicture}
,
with the left matrix for the favored class ($\ensuremath{a} = 1$)
and the right matrix for the disfavored class ($\ensuremath{a} = 0$).
Since each component of $\ensuremath{\mathbf z}$ corresponds to some element of the fairness--confusion tensor,
we shade each component that appears in the equation.
Blue shading denotes the favored class,
while red shading denotes the disfavored class.
We further distinguish two kinds of dependencies.
Components that have a nonzero coefficient in the matrix are shaded fully.
However, the values of these coefficients themselves can depend on other components,
albeit implicitly, and we shade these implicit components in a lighter shade. Putting this all together,
we can represent DP in \eqref{eqn:dp} graphically as
\DPpic, EOd as \EOppic$\land$\PEpic, PP as \PPpic, with the superscript denoting the quadratic order of the term. As shown in the third column of \Cref{tab:def_fairness}, all group fairness notions can be effectively described in this notation.
\section{Optimization over the Fairness--confusion Tensor}%
\label{sec:optimization}
The fairness--confusion tensor $\ensuremath{\mathbf z}$ allows for a succinct linear and quadratic characterization of group fairness definitions in the literature. We naturally consider the following family of optimization problems over $\ensuremath{\mathbf z} \in \mathcal K$, where the objective function is constructed so that the solution reflects trade-offs between fairness and performance.
\begin{definition}
Let $\ensuremath{f}^{(i)} : \mathcal K \rightarrow [0, 1]$ be performance metrics (indexed by $i$) with best performance 0 and worst performance 1,
let $\ensuremath{\phi}^{(j)}(\ensuremath{\mathbf z})$ be fairness functions (indexed by $j$), and
let $\ensuremath{\mu}_i$, $\ensuremath{\lambda}_j$ be real constants with $\ensuremath{\mu}_0 = 1$.
Then, the \emph{performance--fairness optimality problem{} (PFOP)} is the class of optimization problems of the form:
\begin{equation}
\argmin_{\ensuremath{\mathbf z} \in \mathcal K}
\sum_{i\ge0} \ensuremath{\mu}_i \ensuremath{f}^{(i)}(\ensuremath{\mathbf z})
+ \sum_{j\ge0} \ensuremath{\lambda}_j \ensuremath{\phi}^{(j)}(\ensuremath{\mathbf z})
\label{eqn:PFOP}
\end{equation}
\end{definition}
PFOP is a general optimization problem containing two groups of terms: the first quantifies performance loss, and the second quantifies unfairness.
The restriction $\ensuremath{\mathbf z}\in\mathcal K$ is necessary to ensure that $\ensuremath{\mathbf z}$ is a valid fairness--confusion tensor{} that obeys the requisite marginal sums. In our discussion below, it will be convenient to consider solutions with explicit bounds on their optimality.
\begin{definition}%
\label{def:approxsol}
Let $\ensuremath{\epsilon} \ge 0 $ and $\ensuremath{\delta} \ge 0$.
Then, an \emph{$(\ensuremath{\epsilon}, \ensuremath{\delta})$-solution to the PFOP}
is a $\ensuremath{\mathbf z}$ that satisfies \eqref{eqn:PFOP}
such that $\sum_j \lambda_j \ensuremath{\phi}^{(j)}(\ensuremath{\mathbf z}) \le \ensuremath{\epsilon}$
and $\sum_i \mu_i \ensuremath{f}^{(i)}(\ensuremath{\mathbf z}) \le \ensuremath{\delta}$.
\end{definition}
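For concreteness, reading off the $(\ensuremath{\epsilon}, \ensuremath{\delta})$ pair of a candidate tensor is mechanical; a minimal Python sketch (all names are ours, for illustration only) is:
\begin{verbatim}
# Given a candidate tensor z, lists of fairness functions phis and
# performance metrics fs (Python callables), and weight sequences
# lam and mu, report the (epsilon, delta) pair of the definition above.
def eps_delta(z, phis, lam, fs, mu):
    epsilon = sum(l * phi(z) for l, phi in zip(lam, phis))
    delta = sum(m * f(z) for m, f in zip(mu, fs))
    return epsilon, delta
\end{verbatim}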
The parameters $\ensuremath{\epsilon}$ and $\ensuremath{\delta}$ represent the sum total of deviation from perfect fairness and perfect predictive performance respectively. Unless otherwise stated, the rest of the paper is dedicated to analyzing one of the simplest instantiations of PFOP, defined below.
\begin{definition}
The \emph{least-squares accuracy--fairness optimality problem{} (LAFOP)} is a PFOP with the squared classification error rate as the performance function $\ensuremath{f}^{(0)}$, and $K\ge1$ fairness constraints encoded as the rows (indexed by $j$) of a fairness matrix $\ensuremath{\mathbf A}$, with
\begin{equation}
\begin{aligned}
\ensuremath{\phi}^{(j)}(\ensuremath{\mathbf z}) &= (\ensuremath{\mathbf A}_{j,*} \ensuremath{\mathbf z})^2, \quad j = 0, ..., K-1\\
\ensuremath{f}^{(0)}(\ensuremath{\mathbf z}) &= (\ensuremath{\mathbf c} \cdot \ensuremath{\mathbf z})^2, \\
\ensuremath{\mathbf c} &= {(0, 1, 1, 0, 0, 1, 1, 0)}^T, \\
\ensuremath{\lambda} &= \ensuremath{\lambda}_0 = ... = \ensuremath{\lambda}_{K-1}.
\end{aligned}
\end{equation}
In other words, LAFOP is the problem
\begin{equation}
\argmin_{\ensuremath{\mathbf z} \in \mathcal K}
{(\ensuremath{\mathbf c} \cdot \ensuremath{\mathbf z})}^2
+ \ensuremath{\lambda} \Vert \ensuremath{\mathbf A} \ensuremath{\mathbf z} \Vert^2_2,
\label{eqn:LAFOP}
\end{equation}
where $\ensuremath{\mathbf c} \cdot \ensuremath{\mathbf z}$ encodes the usual notion of classification error, and $\ensuremath{\mathbf A}$ encodes the $K$ linear fairness functions stacked together as the regularizer.
\end{definition}
A single hyperparameter $\ensuremath{\lambda}$ specifies the relative importance of satisfying the fairness constraints while optimizing classification performance: $\ensuremath{\lambda} = 0$ considers only performance and disables all fairness constraints, while $\ensuremath{\lambda} \to \infty$ imposes the fairness constraints without regard to accuracy.
LAFOP is a convex optimization problem which is simple to analyze.
Despite its simplicity, LAFOP encompasses many situations involving linear notions of fairness,
allowing us to reason about multiple fairness constraints as well as fairness--accuracy trade-offs under versatile scenarios.
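As a minimal sketch of solving LAFOP in practice (using \texttt{cvxpy}; the group counts, and our explicit encoding of the marginal-sum constraints defining $\mathcal K$, are illustrative assumptions rather than a verbatim reproduction of $\ensuremath{\mathbf{A}_{\sf const}}$ and $\ensuremath{\mathbf{b}_{\sf const}}$):
\begin{verbatim}
import cvxpy as cp
import numpy as np

N0, N1, M0, M1 = 400, 600, 100, 300  # hypothetical group/label counts
N = N0 + N1
c = np.array([0, 1, 1, 0, 0, 1, 1, 0], dtype=float)
A = np.array([[N0, 0, N0, 0, -N1, 0, -N1, 0]]) / N  # DP fairness matrix
lam = 10.0

z = cp.Variable(8, nonneg=True)
constraints = [
    z[0] + z[1] == M1 / N,         # TP1 + FN1: positives in group 1
    z[2] + z[3] == (N1 - M1) / N,  # FP1 + TN1: negatives in group 1
    z[4] + z[5] == M0 / N,         # TP0 + FN0: positives in group 0
    z[6] + z[7] == (N0 - M0) / N,  # FP0 + TN0: negatives in group 0
]
objective = cp.Minimize(cp.square(c @ z) + lam * cp.sum_squares(A @ z))
cp.Problem(objective, constraints).solve()
print(z.value)  # an (epsilon, delta)-solution for this lambda
\end{verbatim}
Sweeping $\ensuremath{\lambda}$ in such a sketch traces out the accuracy--fairness trade-off studied in the experiments below.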
\subsection{Reduction to a post-processing method for fair classification}
PFOP and LAFOP do not assume anything about the model and are therefore model-agnostic. In this section we highlight the versatility of LAFOP by showing that adding a model-specific constraint to LAFOP reduces it to a post-processing algorithm for fair classification.
The post-processing method for EOd introduced in \citet{hardt2016equality} solves the following optimization problem for $\tilde{Y}$, a post-processed (and ideally fair) classifier, given a vanilla classifier $\hat{Y}$:
\begin{multline}
\min_{\tilde{Y}} \mathbb{E}\,l(\tilde{Y}, Y) \text{ such that } \gamma_0(\tilde{Y}) = \gamma_1(\tilde{Y}) \\
\text{ and } \gamma_0(\tilde{Y}) \in P_0(\hat{Y}), \gamma_1(\tilde{Y}) \in P_1(\hat{Y})
\label{eqn:hardt}
\end{multline} where $\gamma_a(\tilde{Y})$ represents EOd constraints for $\tilde{Y}$ as a tuple of ($FPR_a$, $TPR_a$), and $P_a(\hat{Y})$ is a model-specific set of feasible $\gamma_a$ values, defined as $P_a(\hat{Y}) = \text{convhull}\{(0,0), \gamma_a(\hat{Y}), \gamma_a(1 - \hat{Y}), (1,1)\}$.
All the components of \eqref{eqn:hardt} can be rewritten in terms of $\hat{\ensuremath{\mathbf z}}$ and $\tilde{\ensuremath{\mathbf z}}$, the fairness--confusion tensors corresponding to the classifiers $\hat{Y}$ and $\tilde{Y}$ respectively. This yields a LAFOP over $\tilde{\ensuremath{\mathbf z}}$ with additional model-specific constraints on the solution space derived from $\hat{\ensuremath{\mathbf z}}$. More formally, we have the following optimization problem for post-processing:
\begin{definition}
Given a classifier to be post-processed and its corresponding fairness--confusion tensor{} $\hat{\ensuremath{\mathbf z}}$, the \emph{model-specific LAFOP} (MS-LAFOP) for EOd is the variant of LAFOP with the following model-specific constraints on the solution space:
\begin{equation}
\argmin_{\tilde{\ensuremath{\mathbf z}} \in \hat{\mathcal{K}}}
{(\ensuremath{\mathbf c} \cdot \tilde{\ensuremath{\mathbf z}})}^2
+ \ensuremath{\lambda} \Vert \ensuremath{\mathbf A}_{\textsc{EOd}} \tilde{\ensuremath{\mathbf z}} \Vert^2_2,
\label{eqn:ms-lafop}
\end{equation} where
\begin{multline*}
\hat{\mathcal{K}} = \big\{ \tilde{\ensuremath{\mathbf z}} \ge 0 : \ensuremath{\mathbf{A}_{\sf const}} \tilde{\ensuremath{\mathbf z}} = \ensuremath{\mathbf{b}_{\sf const}}, \|\tilde{\ensuremath{\mathbf z}}\|_1 = 1, \\
\beta_a(\tilde{\ensuremath{\mathbf z}}) \in \text{convhull}\left\{ (0,0), \beta_a(\hat{\ensuremath{\mathbf z}}), \beta_a(1-\hat{\ensuremath{\mathbf z}}), (1,1)\right\} \,\forall a \big\}
\end{multline*} with $\beta_a$ expressing the ($FPR_a$, $TPR_a$) tuple computed from the group-$a$ entries of the corresponding fairness--confusion tensor{}.
\end{definition}
From the solution of MS-LAFOP, it is possible to compute mixing rates for post-processing the given classifier. We note that MS-LAFOP can be extended to other group fairness notions as long as the model-specific constraints are accordingly set up for them. For more details, refer to \Cref{sec:post-process-apdx}.
\begin{table*}[ht]
\small
\centering
\begin{tabular}{lc}
Sets of fairness definitions & Necessary conditions \\\hline \hline
\{CG, PP, DP, and any of EOp, PE, PCB, NCB, EFOR\} & $M_0 = M_1$ and $N_0 = N_1$ \\ \hline
\{CG, DP, and any of EOp, PE, PCB, NCB, EFOR\} & EBR only \\\hline
\multirow{2}{40em}{\{CG,EOp\}, \{CG,PCB\}, \{CG,EOp,PCB\},\{CG,EFOR,EOp\}, \{CG,EFOR,PCB\},\{CG,EFOR,EOp,PCB\}} & $v_{0}=0$ \\
& or EBR \\ \hline
\multirow{2}{40em}{\{CG,PE\}, \{CG,NCB\}, \{CG,EOp,NCB\}, \{CG,EFOR,PE\}, \{CG,EFOR,NCB\}, \{CG,EFOR,EOp,NCB\}} & $v_{1}=1$ \\
& or EBR \\ \hline
\multirow{2}{40em}{\{CG,EOd\}\cite{pleiss2017fairness}, \{CG, PCB, NCB\} \cite{kleinberg2016inherent},\{CG,EOd,PCB,NCB\}, \{CG,EFOR,EOd\}, \{CG,EFOR,PCB,NCB\},\{CG,EFOR,EOd,PCB,NCB\}} & ($v_{0}=0$ and $v_{1}=1$) \\
& or EBR \\ \hline \hline
\end{tabular}
\caption{Some sets of fairness definitions containing calibration within groups (CG) that are incompatible in the sense of \Cref{def:incompat} (left column), together with the necessary conditions for them to be compatible (right column).
EBR is the equal base rate condition, $M_0/N_0 = M_1/N_1$. For other abbreviations, refer to \Cref{tab:def_fairness}. These are all special cases of \Cref{thm:calib_incomp}, and the list is not exhaustive.}
\label{tab:cg-impossible}
\end{table*}
\section{Incompatible Group Fairness Definitions}%
\label{sec:incompatible}
In this section, we show how LAFOP yields a more general view of understanding group fairness incompatibility results. As $\ensuremath{\lambda} \to \infty$, for linear fairness functions $\ensuremath{\phi}^{(i)}(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A}^{(i)} \ensuremath{\mathbf z}$, LAFOP becomes equivalent to solving the following linear system of equations:
\begin{equation}
\small
\begin{pmatrix}
\ensuremath{\mathbf A}^{(0)}\\
\vdots\\
\ensuremath{\mathbf A}^{(K-1)}\\
\ensuremath{\mathbf{A}_{\sf const}}
\end{pmatrix} \ensuremath{\mathbf z} =
\begin{pmatrix}
0\\
\vdots\\
0\\
\ensuremath{\mathbf{b}_{\sf const}}
\end{pmatrix}, \quad \ensuremath{\mathbf z}\ge0.
\label{eqn:linear-fair-compat}
\end{equation}
Notice that the compatibility of the fairness conditions encoded by these $K$ fairness matrices $\ensuremath{\mathbf A}^{(i)}$ is equivalent to the above linear system having at least one solution (generically, infinitely many). We formally define (in)compatibility of fairness notions below based on this observation.
\begin{definition}
Let $\ensuremath{\Phi} = {\{\ensuremath{\phi}^{(i)}\}}^{K-1}_{i=0}$ be a set of linear fairness functions, encoded in a fairness matrix $\ensuremath{\mathbf A}$ (of which each row corresponds to $\ensuremath{\phi}^{(i)}$), and let $\ensuremath{\rho}$ be the number of solutions for the system in \eqref{eqn:linear-fair-compat}. If $\ensuremath{\rho} = 0$, then $\ensuremath{\Phi}$ is said to be incompatible. Otherwise, $\ensuremath{\Phi}$ is compatible. When $\ensuremath{\Phi}$ is incompatible, some additional set of constraints on the dataset or the model is required for it to be compatible.
\label{def:incompat}
\end{definition}
This means that in general, incompatibility results among the group fairness definitions can be proven simply by asking if and when solutions exist to their corresponding linear system of form~\eqref{eqn:linear-fair-compat}.
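This test is mechanical; a minimal feasibility check (illustrative Python, with $\ensuremath{\mathbf z}$ ordered as $(TP_1, FN_1, FP_1, TN_1, TP_0, FN_0, FP_0, TN_0)$ and the marginal rows encoding our reading of $\mathcal K$) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def compatible(fairness_matrices, N0, N1, M0, M1):
    """Return True iff a nonnegative z exists solving the stacked system."""
    N = N0 + N1
    marg = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                     [0, 0, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 1]], dtype=float)
    b_marg = np.array([M1, N1 - M1, M0, N0 - M0], dtype=float) / N
    A_eq = np.vstack(list(fairness_matrices) + [marg])
    b_eq = np.concatenate([np.zeros(len(A_eq) - 4), b_marg])
    res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 8, method="highs")
    return res.status == 0  # status 0: a feasible point was found

# e.g. compatible([A_dp, A_eod], N0=400, N1=600, M0=100, M1=300)
\end{verbatim}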
\subsection{The incompatibility involving CG}%
\label{sec:kleinberg}
We introduce a general incompatibility result involving CG that leads to many other new results as well as the one from \citet{kleinberg2016inherent}.
\begin{theorem}%
\label{thm:calib_incomp}
Let $B=2$ be the number of bins in the definition of calibration within groups fairness (CG) \cite{kleinberg2016inherent},
and $v_0$, $v_1$ be the scores, with $0 \le v_0 < v_1 \le 1$,
and let $K>1$ fairness functions be imposed, with $\ensuremath{\phi}^{(0)}(\ensuremath{\mathbf z}) = \ensuremath{\mathbf A}_\textsc{CG}\ensuremath{\mathbf z}$.
Then, the only candidate solution to the corresponding system \eqref{eqn:linear-fair-compat} is
\begin{equation}
z_0 = \frac{1}{\ensuremath{N} (v_1 - v_0)}
\begin{pmatrix}
v_1 ( \ensuremath{M}_1 - \ensuremath{N}_1 v_0) \\
v_0 (-\ensuremath{M}_1 + \ensuremath{N}_1 v_1) \\
(1 - v_1)( \ensuremath{M}_1 - \ensuremath{N}_1 v_0) \\
(1 - v_0)(-\ensuremath{M}_1 + \ensuremath{N}_1 v_1) \\
v_1 ( \ensuremath{M}_0 - \ensuremath{N}_0 v_0) \\
v_0 (-\ensuremath{M}_0 + \ensuremath{N}_0 v_1) \\
(1 - v_1)(\ensuremath{M}_0 - \ensuremath{N}_0 v_0) \\
(1 - v_0)(-\ensuremath{M}_0 + \ensuremath{N}_0 v_1)
\end{pmatrix},
\label{eqn:impossible-sol}
\end{equation}
and this solution exists only when
\begin{equation}
0\le v_0 \le \min_a\left(\frac{\ensuremath{M}_a}{\ensuremath{N}_a}\right)
\le \max_a\left(\frac{\ensuremath{M}_a}{\ensuremath{N}_a}\right) \le v_1 \le 1.
\label{eqn:impossible-sol-cond}
\end{equation}
Otherwise, no solution exists.
\end{theorem}
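A quick numerical sanity check of \Cref{thm:calib_incomp} (a minimal Python sketch; the counts and scores below are arbitrary values satisfying \eqref{eqn:impossible-sol-cond}) confirms that the closed-form solution satisfies both the CG equations and the marginal sums:
\begin{verbatim}
import numpy as np

N0, N1, M0, M1, v0, v1 = 400., 600., 100., 300., 0.1, 0.8
N = N0 + N1
z0 = np.array([v1 * (M1 - N1 * v0), v0 * (N1 * v1 - M1),
               (1 - v1) * (M1 - N1 * v0), (1 - v0) * (N1 * v1 - M1),
               v1 * (M0 - N0 * v0), v0 * (N0 * v1 - M0),
               (1 - v1) * (M0 - N0 * v0), (1 - v0) * (N0 * v1 - M0)])
z0 /= N * (v1 - v0)
TP1, FN1, FP1, TN1, TP0, FN0, FP0, TN0 = z0
assert np.allclose([FN1, FN0], [v0 * (FN1 + TN1), v0 * (FN0 + TN0)])  # CG
assert np.allclose([TP1, TP0], [v1 * (TP1 + FP1), v1 * (TP0 + FP0)])  # CG
assert np.isclose(TP1 + FN1, M1 / N) and np.isclose(TP0 + FN0, M0 / N)
assert np.all(z0 >= 0)  # guaranteed by v0 <= M_a/N_a <= v1 for both groups
\end{verbatim}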
\Cref{thm:calib_incomp} yields further results regarding the incompatibility of CG with other notions of fairness. As one canonical instance, substituting $z_0$ from \eqref{eqn:impossible-sol} into the linear system of the form \eqref{eqn:linear-fair-compat} with the PCB and NCB fairness matrices yields the following corollary, which is equivalent to the result presented in \citet{kleinberg2016inherent} (proof in \Cref{sec:calib-proof}).
\begin{corollary}[Re-derivation of \cite{kleinberg2016inherent}]%
\label{thm:tr2}
Consider a classifier that satisfies CG, PCB and NCB fairness simultaneously.
Then, at least one of the following statements is true:
\begin{enumerate}[nolistsep]
\item the data have equal base rates for each class $\ensuremath{a}$, i.e.\
$\ensuremath{M}_0/\ensuremath{N}_0 = \ensuremath{M}_1/\ensuremath{N}_1$, or
\item the classifier has perfect prediction, i.e.\ $v_0 = 0$ and $v_1 = 1$.
\end{enumerate}
\end{corollary}
A similar approach can be applied to derive incompatibilities of CG with other linear and quadratic notions of fairness, as below (proofs in \Cref{sec:cgdp} and \Cref{sec:cgpp}).
\begin{corollary}[Linear notion of fairness: DP]
\label{thm:cgdp}
Consider a classifier that satisfies CG and DP fairness simultaneously.
Then, the data have equal base rates for each group $\ensuremath{a}$.
\end{corollary}
\begin{corollary}[Quadratic notion of fairness: PP]
\label{thm:cgpp}
Consider a classifier that satisfies CG and PP fairness simultaneously.
Then, at least one of the following is true:
\begin{enumerate}[nolistsep]
\item
$v_0 = (M_1 - M_0) / (N_1 - N_0)$.
\item
$v_1 = 1$.
\end{enumerate}
\end{corollary}
From \Cref{thm:calib_incomp} and its corollaries, we curate the extended incompatibility results involving CG in \Cref{tab:cg-impossible} along with conditions for compatibility. To our knowledge, all cases other than the bottom row of the table are new.
\subsection{The incompatibility of \{PE, EFNR, PP\}}
Using the same logic as in the previous section, we re-derive an incompatibility result from \citet{chouldechova2017fair} and provide more precise necessary conditions for compatibility. For details of the proof, refer to \Cref{sec:proof-chould}.
\begin{theorem}[Restatement of \citet{chouldechova2017fair}]
\label{thm:chouldechova}
Consider a classifier that satisfies \{PE, EFNR, PP\}. Then, at least one of these statements must be true:
\begin{enumerate}[nolistsep]
\item The classifier has no true positives.
\item The classifier has no false positives.
\item Each protected class has the same base rate.
\end{enumerate}
\end{theorem}
\Cref{thm:chouldechova} systematically shows that equal false positive rates, equal false negative rates, and predictive parity are compatible only under specific data- or model-dependent circumstances that were otherwise not explicit in the original statements of \citet{chouldechova2017fair}.
\section{Experiments}%
\label{sec:experiments}
In this section we show how the FACT diagnostic can be used in practice to assess the relative impact of several notions of fairness on accuracy, on synthetic and real datasets\footnote{Code available: \href{https://github.com/wnstlr/FACT}{\texttt{github.com/wnstlr/FACT}}}. First we introduce FACT Pareto frontiers, which characterize a model's achievable accuracy for a given set of fairness conditions, as a tool for understanding the trade-offs and contextualizing some recent works in fair classification (\Cref{sec:frontier}). We then explore a model-agnostic assessment of multiple fairness conditions via LAFOP (\Cref{sec:exact-relaxed}, \Cref{sec:multi}), as well as a model-specific assessment of post-processing methods in fair classification via MS-LAFOP (\Cref{sec:post-exp}, \Cref{sec:post-process-apdx}).
\subsection{Datasets}%
\label{sec:datasets}
We study a synthetic dataset similar to that in \citet{zafar2015fairness},
consisting of two-dimensional features along with a single binary protected attribute that is either sampled from an independent Bernoulli distribution (``unbiased'' variant, denoted \textbf{S(U)}),
or sampled dependent on the features (``biased'' variant, denoted \textbf{S(B)}). The synthetic dataset consists of two-dimensional data $\ensuremath{\mathbf x} = (x_0, x_1)$ that follow the Gaussian distributions
\begin{equation}
\small
\begin{aligned}
\ensuremath{\mathbf x} | \ensuremath{y} = 1 \sim & \ensuremath{\mathcal N}\left(\begin{pmatrix}
2\\2
\end{pmatrix}, \begin{pmatrix}
5 & 1 \\ 1 & 5
\end{pmatrix} \right) \\
\ensuremath{\mathbf x} | \ensuremath{y} = 0 \sim & \ensuremath{\mathcal N}\left(\begin{pmatrix}
-2\\-2
\end{pmatrix}, \begin{pmatrix}
10 & 1 \\ 1 & 3
\end{pmatrix} \right).
\end{aligned}
\end{equation}
For the S(U) dataset, the protected attribute value is independent of $\ensuremath{\mathbf x}$ and $\ensuremath{y}$,
and is instead distributed according to the Bernoulli distribution $\ensuremath{a} \sim \ensuremath{\mathcal B}\left(\frac 1 2\right)$, following the construction described in \cite{calders2009indep}.
For the S(B) dataset, the protected attribute value is assigned as $a|\ensuremath{\mathbf x} = \sgn(x_0)$, which corresponds to a situation when some features (but not all)
encode a protected attribute.
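A minimal sampler for these synthetic datasets (illustrative Python; mapping $\sgn(x_0)\in\{-1,1\}$ to $\ensuremath{a}\in\{0,1\}$ in the biased variant is our convention) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_synthetic(n, biased=False):
    y = rng.integers(0, 2, size=n)
    mean = np.where(y[:, None] == 1, [2.0, 2.0], [-2.0, -2.0])
    cov1 = np.array([[5.0, 1.0], [1.0, 5.0]])   # class y = 1
    cov0 = np.array([[10.0, 1.0], [1.0, 3.0]])  # class y = 0
    x = np.stack([rng.multivariate_normal(m, cov1 if yi else cov0)
                  for m, yi in zip(mean, y)])
    if biased:  # S(B): protected attribute encoded in a feature
        a = ((np.sign(x[:, 0]) + 1) // 2).astype(int)
    else:       # S(U): independent fair coin
        a = rng.integers(0, 2, size=n)
    return x, y, a
\end{verbatim}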
We also study the
UCI Adult dataset~\cite{UCIMLrepo},
a census dataset used for income classification tasks where we consider sex as the protected attribute of interest.
\subsection{FACT Pareto frontiers}
\label{sec:frontier}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig/frontier_new.pdf}
\caption{Model-agnostic (MA) and model-specific (MS) FACT Pareto frontiers of equalized odds on the Adult dataset. Three fair models (FGP, Eq.Odd., Op.) are shown in context by varying the strength of the fairness condition imposed, along with some baseline models (LR, SVM, RF, ConstantPrediction). The MA frontier should be interpreted relative to the Bayes error because it is oblivious to it --- $\delta=0$ means that the upper bound of the accuracy is the accuracy of the Bayes classifier, not 1. The MS frontier, on the other hand, provides more realistic bounds.}
\label{fig:frontier}
\end{figure}
With LAFOP and MS-LAFOP, one can naturally consider a FACT Pareto frontier of accuracy and fairness by plotting $(\ensuremath{\epsilon}, \ensuremath{\delta})$ values of the $(\ensuremath{\epsilon}, \ensuremath{\delta})$-solutions. In this section, we want to highlight the use of this frontier in the context of several published results in the literature as well as its implications.
The FACT Pareto frontier can be computed both in model-agnostic (MA) and model-specific (MS) scenarios by solving LAFOP and MS-LAFOP respectively, and \Cref{fig:frontier} shows such an example on the Adult dataset for EOd fairness. We also consider three fair classification models: \textbf{FGP}~\cite{tan2019learning}, \textbf{Op.}~\cite{zafar2015fairness}, and \textbf{Eq.Odd.}~\cite{hardt2016equality}, each representing one of the three approaches to training fair models (imposing fairness before, during, or after training). Some baseline models (logistic regression, SVM, random forest) are also plotted for reference, and a perfectly fair classifier (ConstantPrediction: predicting all instances to be negative) in the bottom right corner is included as an edge case.
It is important to note that the MA FACT Pareto frontier should be interpreted as characterizing the model's achievable accuracy \emph{relative} to the Bayes error (i.e., the degree to which the added fairness constraints adversely impact the Bayes error), which in this case is empirically estimated at around 0.12 from a wide range of ML models that have been tested on the Adult dataset~\cite{chakrabarty2018statistical}. This looser bound calls for a model-specific counterpart, the MS FACT Pareto frontier, which restricts the frontier to be derived from a given pre-trained classifier. As shown in \Cref{fig:frontier}, it indeed provides a more realistic frontier for the models considered.
Placing different types of classifiers on the frontier makes it easy to visually grasp the strengths and weaknesses of each model. FGP outperforms all other models in terms of the trade-off, while Op. and Eq.Odd. suffer more from early accuracy drops. The frontier further indicates that, for any trained model, accuracy only starts to suffer once the fairness gap drops below $10^{-2}$. Such an understanding of the trade-offs is helpful in anticipating practical limitations of models to be trained, as well as in comparing multiple models to determine which is better suited for different situations.
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{fig/eps_all3.pdf}
\caption{Model-agnostic FACT Pareto frontier for different groups of fairness notions (colored and grouped according to their convergence value as $\ensuremath{\epsilon} \to 0$) for three datasets (\Cref{sec:datasets}). The bottom two groups of fairness notions are incompatible (black, red), hence the halted trajectories before reaching smaller values of $\ensuremath{\epsilon}$. Similar convergence behaviors within the fairness groups in blue reflect the dominance of \{EOd, DP\} -- any additional fairness notions added on top of these have no impact on the convergence value. Best viewed in color.}
\label{fig:eps-delta-curve}
\end{figure*}
In the rest of the following sections and figures, for the model-agnostic analysis, $\ensuremath{\delta}$ should be interpreted relative to the Bayes error, i.e.\ $\ensuremath{\delta} = 0$ means that the upper bound on the best-achievable accuracy is the accuracy of the Bayes classifier, not 1.
\subsection{Model-agnostic scenario with multiple fairness conditions}
\label{sec:exact-relaxed}
We are now interested in how a \emph{group} of fairness conditions simultaneously affects accuracy. This can be assessed by looking at the shape of the MA FACT Pareto frontier of LAFOP with multiple fairness constraints, particularly the $\ensuremath{\delta}$ values of $(\ensuremath{\epsilon}, \ensuremath{\delta})$-solutions as $\ensuremath{\epsilon}$ is driven to zero (or very close to it) under multiple fairness notions. \Cref{fig:eps-delta-curve} shows this in two different ways: (i) ($\ensuremath{\epsilon}$,$\ensuremath{\delta}$)-solutions obtained when fairness conditions are imposed as hard inequality constraints instead of as regularizers, i.e. solving $\argmin_{\ensuremath{\mathbf z} \in \mathcal K}
{(\ensuremath{\mathbf c} \cdot \ensuremath{\mathbf z})^2} \text{ s.t. } \Vert \ensuremath{\mathbf A} \ensuremath{\mathbf z} \Vert^2_2 \leq \ensuremath{\epsilon}$ (solid lines), and (ii)
($\ensuremath{\epsilon}$,$\ensuremath{\delta}$)-solutions obtained from the LAFOP \eqref{eqn:LAFOP} while varying $\ensuremath{\lambda}$ (crosses). Different groups of fairness notions are colored according to their convergence behaviors.
Similar trajectories and convergence of the curves allow us to identify fairness notions that come ``for free'' given some others, in terms of additional accuracy drops. In other words, the Pareto frontiers are effective at demonstrating the relative strength of the fairness notions within a group. For instance, under \{EOd, DP\} (third group, blue) the best attainable accuracy drops by over 60 percent for S(U) and S(B), but we also observe that adding CB, PE, and/or PCB on top of them causes no additional accuracy drop -- \{EOd, DP\} essentially determines $\ensuremath{\delta}$ for the entire group of fairness notions in blue.
The MA FACT Pareto frontiers for multiple fairness conditions also show not only the existing incompatibility of the fairness notions, but also how much relaxation is required for them to be approximately compatible.
The halted trajectories before reaching much smaller $\ensuremath{\epsilon}$ for the bottom two groups in black and red clearly verify this. Because the S(U) dataset has, by design, a smaller base-rate gap between the groups than the Adult or S(B) datasets, the incompatibility in S(U) only becomes visible at a much smaller $\ensuremath{\epsilon}$ value.
Taking a more macroscopic perspective, the MA FACT Pareto frontiers also show which dataset allows a better overall trade-off than the others. Because the S(U) dataset was designed to be less biased than the S(B) dataset, it exhibits a significantly smaller drop in overall accuracy, particularly for the green group involving DP. This observation aligns with the way S(U) was designed, with the sensitive attributes sampled independently of the features. However, EOd and DP together (in blue) drive down the accuracy just as in the biased counterpart, which demonstrates how conservative EOd fairness is for these datasets.
More observations and experiments are presented in \Cref{sec:multi}. It is possible to further extend these analyses to an arbitrary number of fairness constraints imposed on LAFOP, as well as to other performance metrics such as precision or recall, as appropriate.
\subsection{Model-specific scenario with post-processing methods}
\label{sec:post-exp}
While the MA FACT Pareto frontier shows a broader trade-off landscape for arbitrary classifiers, model-specific analysis using MS-LAFOP in \eqref{eqn:ms-lafop} can be helpful in practice, providing more realistic MS Pareto frontiers. Moreover, after solving the MS-LAFOP, its solution can be used to compute the mixing rates for post-processing any given classifier, just as is done in \citet{hardt2016equality}. For more details, refer to \Cref{sec:post-process-apdx}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/post.pdf}
\caption{Model-specific FACT Pareto frontier of EOd on the Adult dataset. Compared to the model-agnostic frontier, it yields more realistic bounds on the trade-off between fairness and accuracy. Post-processed solutions for the given classifiers (crosses) using the algorithm in \cite{hardt2016equality} (circles, EOd-solution) and FACT (stars, FACT-solution) are also shown. The FACT-solutions suffer significantly less from the trade-off, yielding accuracy competitive with the original classifiers while achieving smaller fairness gaps than the EOd-solutions.}
\label{fig:post-fig}
\end{figure}
\Cref{fig:post-fig} shows the MS FACT Pareto frontier of EOd computed from MS-LAFOP for the Adult dataset (a zoomed-in version of the MA FACT Pareto frontier in \Cref{fig:frontier}). We also plot two types of post-processed classifiers: EOd-solutions using the algorithm in \citet{hardt2016equality} (circles), and FACT-solutions using MS-LAFOP (stars). The EOd-solutions undergo a steeper trade-off, while the FACT-solutions find a better configuration with smaller fairness gaps, retaining accuracy competitive with the original classifiers (crosses).
\section{Conclusions}
The \textsc{FACT}\ diagnostic facilitates systematic reasoning about different kinds of trade-offs involving arbitrarily many notions of performance and group fairness notions, which all can be expressed as functions of the fairness--confusion tensor.
In our formalism, the majority of group fairness definitions in the literature are in fact linear or quadratic in the fairness--confusion tensor, and thus easy to impose as constraints in the PFOP.
The \textsc{FACT}{} diagnostic further leverages elementary linear algebra and convex optimization to provide a unified perspective on fairness--fairness and fairness--performance trade-offs. We have also empirically demonstrated the practical use of the \textsc{FACT}{} diagnostic in several scenarios.
Many of the presented results require only linear fairness functions and accuracy, as in the LAFOP/MS-LAFOP setting. Nevertheless, it is easy to extend this to quadratic fairness functions with more varied performance metrics depending on different use cases.
We also briefly introduce a small theoretical result regarding fairness--accuracy trade-offs using the FACT diagnostic in \Cref{sec:cg-accuracy}, which deserves further analysis.
\section*{Acknowledgements}
We thank Valerie Chen, Jeremy Cohen, Amanda Coston, Mikhail Khodak, Jeffrey Li, Liam Li, Gregory Plumb, Nick Roberts, and Samuel Yeom for helpful feedback and discussions. This work was supported in part by DARPA FA875017C0141, the National Science Foundation grants IIS1705121 and IIS1838017, an Okawa Grant, a Google Faculty Award, an Amazon Web Services Award, a JP Morgan A.I. Research Faculty Award, and a Carnegie Bosch Institute Research Award. JSK acknowledges support from Kwanjeong Educational Fellowship. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, the National Science Foundation, or any other funding agency.
This paper was prepared for information purposes by the Artificial Intelligence Research group of JPMorgan Chase \& Co and its affiliates (``JP Morgan''), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. © 2020 JPMorgan Chase \& Co. All rights reserved.
Quantum field theory in curved spacetime \cite{Birrell:1982ix,Parker and Toms,Fulling:1989nb,Wald:1995yp,Mukhanov:2007zz,Fabbri:2005mw} is an exciting arena
in which two cornerstones of modern physics, quantum field theory and general relativity, merge to produce surprising results. One classic prediction at this crossroads is that a quantum field in an initial vacuum state, under the influence of spacetime curvature (or gravity), leads to a spontaneous generation of particles associated with that field. This was first realized by Schr\"{o}dinger \cite{Schrodinger:1939} in the context of relativistic quantum mechanics in an expanding universe and later by Parker \cite{Parker:1966,Parker:1968mv} who independently showed this in the context of general quantum fields in cosmological spacetimes. One such class of spacetimes is the one experienced by an accelerating observer: the Rindler spacetime {\cite{Rindler:1966zz}}. However, this spacetime is special because it creates particles with a thermal spectrum {\cite{Fulling:1972md,Davies:1974th,Unruh:1976db}}, i.e. an accelerating (or Rindler) observer sees the Minkowski (or flat) spacetime vacuum as a thermal bath of particles. This phenomenon is called the Fulling-Davies-Unruh effect (also known as the Unruh effect). Here, the thermality emerges due to two reasons. The first is the appearance of a horizon that splits the entire spacetime into two mutually inaccessible regions (corresponding to observers accelerating in opposite directions) and thus vacuum expectation values in one region lead to tracing over the degrees of freedom of the other region, thus yielding a mixed state. The second reason is that the response function of an accelerating particle detector follows the principle of detailed balance, or in other words satisfies the Kubo-Martin-Schwinger (KMS) condition {\cite{Kubo:1957,Martin & Schwinger:1959}}, which is a sufficient condition for a spectrum to be called thermal.
Similar horizons, and therefore the associated thermal behavior, also emerge in other spacetimes, such as black holes {\cite{Hawking:1974rv,Hawking:1974sw}}, where this behavior is known as Hawking radiation, and de Sitter cosmologies, where it is known as the Gibbons-Hawking effect {\cite{Gibbons:1977mu}}. A surprising result that appears here is that the
power spectrum, which depends on the density of states and the statistics, is sensitive to the dimensions of spacetime. In odd spacetime dimensions, the power spectrum of fermions has a Bose-Einstein distribution, whereas bosons follow a Fermi-Dirac
distribution. This is the well-known `apparent inversion of statistics' due to Takagi {\cite{Takagi:1986kn}} which is linked to the violation of Huygens' principle in odd spacetime dimensions {\cite{Ooguri:1985nv,Unruh:1986tc,Terashima:1999xp,Sriramkumar:2002nt,Sriramkumar:2002dn,Pascazio_Huygens,Arrechea:2021szl}}.
There have been various proposals to detect the Unruh effect in accelerating systems~\cite{Crispino:2007eb,Martin-Martinez:2010gnz,Nation:2011dka}, for example using Bose-Einstein condensates~\cite{Retzker:2008,Gooding:2020scc}. However, observing this effect is challenging as an acceleration of about $10^{21}{\rm m}/{\rm s}^2$ is required to generate a temperature of $1$K {\cite{Mukhanov:2007zz}} which is likely beyond the reach of current technology. In such a situation, analogue gravity~\cite{Barcelo:2005fc} offers an alternative arena for observing relativistic phenomena, in which condensed matter or cold atom systems are
engineered to mimic the behavior of relativistic systems. This area emerged in 1981 when Unruh showed {\cite{Unruh:1980cg}} how water ripples in a draining bathtub can mimic the Klein-Gordon equation for a scalar field near a black hole horizon. This led to the prediction of analogue Hawking radiation which was realized in a series of experiments \cite{Philbin:2007ji,Belgiorno:2010wn,Weinfurtner:2010nu,Steinhauer:2015saa}. On the other hand, particle creation in the context of the inflationary early universe was recently observed in toroidal Bose-Einstein condensates \cite{Eckel:2017uqx,Banik:2021xjn} and studied theoretically in Refs.~\cite{Llorente:2019rbs,Bhardwaj:2020ndh,Eckel:2020qee}.
Such analogue platforms can be used to mimic the Unruh effect, as was recently observed in Bose-Einstein condensates \cite{Hu:2018psq} by modulating the scattering length that determines the interactions between ultracold bosonic atoms. Various proposals have also been made to detect the analogue Unruh effect in ultracold Fermi gases in square lattices {\cite{Boada:2010sh,Rodriguez-Laguna:2016kri,Kosior:2018vgx}}, in graphene {\cite{Iorio:2011yz,Iorio:2013ifa,Cvetic:2012vg}}, in quantum hall systems \cite{Hegde:2018xub,Subramanyan:2020fmx}, and in Weyl semi-metals \cite{Volovik:2016kid}.
Here our main interest is in exploring analogue Rindler physics, and the analogue Unruh effect, in
graphene and related cold-atom systems (i.e., fermionic atoms in honeycomb lattices). Indeed,
the status of graphene as an analogue relativistic system has been long recognized~\cite{Wallace:1947,Semenoff}, and the fact that graphene's low-energy excitations obey the Dirac equation was established even from
the earliest experimental work on these systems~\cite{Novoselov 2004,Novoselov 2005}. As is well
known, the effective
\lq\lq speed of light\rq\rq\
characterizing the Dirac quasiparticles in graphene takes a value $v\simeq c/300$ (with
$c$ the actual speed of light). To achieve the Rindler Hamiltonian in graphene requires engineering
a spatial variation in $v$ along one direction.
In this paper, our aim is to discuss how the Unruh effect would be manifested in honeycomb systems
such as mechanically strained graphene or in an appropriately engineered cold atom optical lattice
system \cite{Tarruell:2012zz,Soltan-Panahi 2011,Soltan-Panahi 2012,Lee:2009,Li:2016}. In either case what is needed is a spatial variation in the local tunneling matrix
elements between sites.
The basic idea is to start with unstrained graphene, in equilibrium at low temperature $T$
(that we will usually assume to be $T=0$). As mentioned above, fermionic excitations in unstrained graphene obey
the conventional Dirac equation, i.e., the Dirac equation in Minkowski (flat) spacetime.
The next step is to suddenly switch on the strain field, changing the system Hamiltonian to
the Rindler Hamiltonian, with excitations described by a Rindler Dirac equation. The Unruh effect
emerges because a vacuum initial (Minkowski) state becomes, after the strain, an effective
thermal distribution of Rindler quasiparticles characterized by the strain-dependent Unruh temperature.
Earlier theoretical work by Rodriguez-Laguna and collaborators showed~\cite{Rodriguez-Laguna:2016kri},
in the context of square optical lattices, that such a sudden quench should indeed yield the Unruh
effect, provided that the timescale of the switching process is much faster than the timescale at which the electron dynamics operates (governed by the inverse tunneling rate). Here we assume the switching on is
sufficiently rapid so that, invoking the sudden approximation of quantum mechanics, the correct
procedure is to obtain observables by calculating the expectation values of operators in the strained system with respect to states of the unstrained lattice (i.e., the Minkowski
vacuum or, at finite real temperature, a Fermi gas of Dirac quasiparticles and holes).
The rest of the paper is organized as follows. In Sec.~\ref{SEC:two}, we describe how the Rindler Hamiltonian can be realized for low-energy and long wavelength fermions in mechanically strained graphene.
Since the basic effect relies only on engineering a spatially-varying tunneling matrix element, we expect it should be similarly possible to engineer the Rindler Hamiltonian in cold-atom systems.
In Sec.~\ref{SEC:three}, we revisit the Hamiltonian for fermions in flat spacetime (or flat graphene sheet) and identify the
normal modes of this system that correspond to particle and hole excitations. In Sec.~\ref{SEC:four}, we derive the Dirac equation due to the Rindler Hamiltonian, obtaining a similar mode expansion for the strained case. In Sec.~\ref{SEC:five}, we use the mode expansions in flat and strained (Rindler) honeycomb lattices to
derive how a sudden strain can induce spontaneous electron-hole creation with an emergent Fermi-Dirac distribution, which is the analogue Unruh effect.
In Sec.~\ref{SEC:six} we analyze the Green's functions after such a sudden strain, showing how signatures
of the analogue Unruh effect may be measured in observables such as photoemission spectroscopy and
scanning tunneling microscopy and how
the form of the emergent
thermality is connected to the violation of Huygens' principle.
In Sec.~\ref{SEC:seven}, we study the frequency-dependent optical conductivity of this system, which we find to increase approximately linearly with
increasing frequency, in contrast to flat graphene, where it is known to be nearly constant (i.e., frequency-independent)
~\cite{Nair2008,Mishchenko2008,Sheehy2009,Link2016}.
In Sec.~\ref{SEC:eight}, we discuss the effects of this sudden switching-on of the Rindler Hamiltonian on the total internal energy of fermions
at finite environment temperature. In Sec.~\ref{SEC:nine} we provide brief concluding remarks and in Appendix \ref{SEC:Appendix Dirac Eqn}, we give
details that are omitted from the main text on the Dirac equation in curved spacetime.
\section{Creating the Rindler Hamiltonian}
\label{SEC:two}
In this section, we will show how the Rindler Hamiltonian can be realized via graphene with a spatially-varying
strain that yields a Hamiltonian with a spatially-varying Fermi velocity.
This is in contrast to
the low-energy theory of conventional graphene that exhibits a spatially-uniform Fermi velocity.
To see how such a spatially-varying Fermi velocity can be engineered, we start with the tight binding
Hamiltonian for graphene which involves ($\pi$ orbital) electrons
hopping from carbon atoms in the $A$ sub-lattice to their nearest
neighboring $B$ carbon atoms, and vice versa:
\begin{equation}
\label{Tight Binding Hamiltonian}
\hat{H} =
-\sum_{\boldsymbol{R}_{j},n} t_{\boldsymbol{R}_{j},n} \Big[
\hat{a}^{\dagger}_{\boldsymbol{R}_{j}}\hat{b}_{\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}} +
\hat{b}^{\dagger}_{\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}}\hat{a}_{\boldsymbol{R}_{j}}\Big],
\end{equation}
where $\boldsymbol{R}_{j}$ labels the Bravais lattice points formed by
the $A$-atoms, and index $n$ denotes the three nearest neighboring $B$
atoms. Here, the $\hat{a}$ and $\hat{b}$ operators annihilate fermions
on the $A$ and $B$ sublattices, respectively, with hopping amplitude
$t_{\boldsymbol{R}_{j},n}$ (that we have taken to be real). The nearest neighbor vectors
$\boldsymbol{\delta}_{n}$ joining the $A$ and $B$ atoms are as
follows:
\begin{equation}\label{Nearest Neighbor Vectors}
\boldsymbol{\delta}_{1}=a\bigg(\frac{\sqrt{3}}{2},\frac{1}{2}\bigg),
\,\,\boldsymbol{\delta}_{2}=a\bigg(\frac{-\sqrt{3}}{2},\frac{1}{2}\bigg),
\,\,\boldsymbol{\delta}_{3}=a\big(0,-1\big),
\end{equation}
with $a$ the nearest-neighbor carbon distance.
When a graphene sheet undergoes a mechanical strain, with
$u_{ij}\equiv\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i})$ being
the strain tensor, the distance between two carbon atoms changes and
thus the hopping amplitude gets adjusted accordingly. For perturbative
strains, we can then Taylor expand the hopping amplitude as follows
{\cite{deJuan:2012hxm}}:
\begin{equation}
\label{Taylor Expand Hopping}
t_{\boldsymbol{R}_{j},n} = t_{0}\Big[1 - \beta\Delta u^{(1)}_{n} - \beta\Delta u^{(2)}_{n}\Big],
\end{equation}
%
with
\begin{eqnarray}
\Delta u^{(1)}_{n} &=& \frac{\delta_{n}^{i}\delta_{n}^{j}}{a^{2}}u_{ij},
\\
\Delta u^{(2)}_{n} &=& \frac{\delta_{n}^{i}\delta_{n}^{j}\delta_{n}^{k}}{2a^{2}}\partial_{i}u_{jk}
\end{eqnarray}
where $\Delta u^{(1)}_{n}$ is the first-order change due to strains alone, and $\Delta u^{(2)}_{n}$ denotes the first-order change due to strains and their derivatives (a low-energy approximation). Here, $a$ is the nearest-neighbor distance, and $\beta=|\frac{\partial\log t}{\partial\log a}|$ is the Gr\"{u}neisen parameter. Note that we also assume the electrons cannot hop to next-nearest neighbors, i.e. $t'=0$.
With the aim of realizing the Rindler Hamiltonian, henceforth we choose the following components for the strain tensor:
\begin{eqnarray}\label{RindlerStrainPattern} u_{xx} & = & u_{yy} = -\frac{|x|}{\beta\lambda},~~~~~~~~~~u_{xy}=0, \nonumber \\
t_{1}(x)/t_{0} & = & 1 + \frac{|x|}{\lambda} + \frac{\sqrt{3}}{4}\frac{a}{\lambda}\text{sgn}(x), \nonumber \\
t_{2}(x)/t_{0} & = & 1 + \frac{|x|}{\lambda} - \frac{\sqrt{3}}{4}\frac{a}{\lambda}\text{sgn}(x), \nonumber \\
t_{3}(x)/t_{0} & = & 1 + \frac{|x|}{\lambda},
\end{eqnarray}
where $\lambda$
is the strain scale that measures the distance over which an appreciable
inhomogeneity develops in the honeycomb lattice. With this choice of strain tensor, the distance between atoms decreases with increasing distance from $x=0$.
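As a consistency check of {(\ref{RindlerStrainPattern})} (a minimal Python sketch; the numerical values of $a$, $\beta$, and $\lambda$ are arbitrary), inserting this strain tensor into {(\ref{Taylor Expand Hopping})} reproduces the quoted hoppings:
\begin{verbatim}
import numpy as np

a, beta, lam = 1.0, 3.0, 50.0  # illustrative parameter values
deltas = a * np.array([[np.sqrt(3) / 2, 0.5],
                       [-np.sqrt(3) / 2, 0.5],
                       [0.0, -1.0]])

def t_over_t0(x):
    u = -abs(x) / (beta * lam)       # u_xx = u_yy; u_xy = 0
    du = -np.sign(x) / (beta * lam)  # d(u_xx)/dx = d(u_yy)/dx
    out = []
    for d in deltas:
        du1 = (d[0]**2 + d[1]**2) * u / a**2                # strain term
        du2 = d[0] * (d[0]**2 + d[1]**2) * du / (2 * a**2)  # gradient term
        out.append(1 - beta * du1 - beta * du2)
    return np.array(out)

x = 2.7
expected = 1 + abs(x) / lam + np.array([1, -1, 0]) * np.sqrt(3) * a / (4 * lam)
print(np.allclose(t_over_t0(x), expected))  # True
\end{verbatim}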
At low energies, the electron dynamics is governed by two nodes in the reciprocal space ${\bf K}=\Big(\frac{4\pi}{3a\sqrt{3}},0\Big)=-{\bf K}'$. We can thus write down the $a$ and $b$ operators localized near these nodes as {\cite{Castro Neto:2009}}:
\begin{eqnarray}\label{Opertors Near Nodes}
\hat{a}_{\boldsymbol{R}_{j}} & = & e^{i\boldsymbol{K}\cdot\boldsymbol{R}_{j}}\hat{A}(\boldsymbol{R}_{j}) + e^{i\boldsymbol{K}'\cdot\boldsymbol{R}_{j}}\hat{A}'(\boldsymbol{R}_{j}), \\
\hat{b}_{\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}} & = & e^{i\boldsymbol{K}\cdot(\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n})}\hat{B}(\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}) \nonumber \\
& + & e^{i\boldsymbol{K}'\cdot(\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n})}\hat{B}'(\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}),
\end{eqnarray}
where the prime $'$ denotes operators associated to the ${\bf K}'$ node. For low energies, it suffices to Taylor expand the $\hat{b}_{\boldsymbol{R}+\boldsymbol{\delta}_{n}}$ operators to linear order in gradients of these operators {\cite{Castro Neto:2009}}:
\begin{eqnarray}\label{TaylorExpand B operators}
\hat{B}(\boldsymbol{R}_{j}+\boldsymbol{\delta}_{n}) & \approx & \hat{B}(\boldsymbol{R}_{j}) + \boldsymbol{\delta}_n\cdot\boldsymbol{\nabla}\hat{B}(\boldsymbol{R}_{j}).
\end{eqnarray}
Plugging into the tight-binding Hamiltonian {(\ref{Tight Binding Hamiltonian})} the expressions for the operators near the nodes {(\ref{Opertors Near Nodes})}, together with the Taylor expansions for the hopping amplitude {(\ref{Taylor Expand Hopping})} and for the operators on the $B$ carbon atoms {(\ref{TaylorExpand B operators})}, gives us the following:
\begin{widetext}
\begin{eqnarray}
\hat{H} & = & -t_{0}\sum_{\boldsymbol{R}_{j},n}\Big[1-\beta\Delta u^{(1)}_{n}-\beta\Delta u^{(2)}_{n}\Big]\cdot\Big[\hat{A}^{\dagger}(\boldsymbol{R}_{j})\Big\{\hat{B}(\boldsymbol{R}_{j})+\boldsymbol{\delta}_{n}\cdot\boldsymbol{\nabla}\hat{B}(\boldsymbol{R}_{j})\Big\}e^{i\boldsymbol{K}\cdot\boldsymbol{\delta}_{n}} + \text{h.c.} \Big] \nonumber \\
& - & t_{0}\sum_{\boldsymbol{R}_{j},n}\Big[1-\beta\Delta u^{(1)}_{n}-\beta\Delta u^{(2)}_{n}\Big]\cdot\Big[\hat{A}^{'\dagger}(\boldsymbol{R}_{j})\Big\{\hat{B}'(\boldsymbol{R}_{j})+\boldsymbol{\delta}_{n}\cdot\boldsymbol{\nabla}\hat{B}'(\boldsymbol{R}_{j})\Big\}e^{i\boldsymbol{K}'\cdot\boldsymbol{\delta}_{n}} + \text{h.c.} \Big],
\end{eqnarray}
\end{widetext}
where the second term in each line is the Hermitian conjugate of the first, denoted by $\text{h.c.}$. Here we have ignored cross-terms between the two nodes, such as $\sim\sum_{\boldsymbol{R}_{j}}\hat{A}^{\dagger}(\boldsymbol{R}_{j})\hat{B}'(\boldsymbol{R}_{j})e^{i(\boldsymbol{K}-\boldsymbol{K}')\cdot\boldsymbol{R}_{j}}$, which destructively interfere and thus vanish. We now simplify this expression by using the Rindler strain pattern {(\ref{RindlerStrainPattern})} and
keeping terms that are linear order in gradients, terms that are linear order in strains and terms that are both linear in gradients as well as strains. We also introduce two-component
field operators at the ${\bf K}$ and ${\bf K'}$ nodes:
\begin{eqnarray}
\hat{\psi}_{\bf K}(\boldsymbol{R}_{j})=\begin{pmatrix}
\hat{B}(\boldsymbol{R}_{j}) \\ \hat{A}(\boldsymbol{R}_{j})
\end{pmatrix},
\\
\hat{\psi}_{\bf K'} (\boldsymbol{R}_{j})=\begin{pmatrix}
\hat{A}^{'}(\boldsymbol{R}_{j}) \\ \hat{B}^{'}(\boldsymbol{R}_{j})
\end{pmatrix}.
\end{eqnarray}
Upon approximating the sums over
Bravais lattice points $\boldsymbol{R}_{j}$ by spatial integrals over ${\bf r}$,
and relabeling
the ${\bf K}$ and ${\bf K}'$ points as the right ($R$) and left ($L$)
nodes, we finally arrive at the effective Hamiltonian:
\begin{eqnarray}
\label{Eq:fullHAM}
\hat{H} &=& \sum_{i=R,L} \int d^2 r \hat{\psi}_i^\dagger({\bf r}) \hat{h}_i \hat{\psi}_i({\bf r}),
\\
\hat{h}_R &\equiv & \sqrt{v(x)}
\big( \boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}}\big) \sqrt{v(x)}
= -\hat{h}_L,
\end{eqnarray}
where $\boldsymbol{\sigma}=\big(\sigma_{x},\sigma_{y}\big)$ is the
vector of Pauli matrices,
$\hat{\boldsymbol{p}}=-i\hbar\boldsymbol{\nabla}$ is the momentum
operator, with
$\boldsymbol{\nabla}=\big(\partial_{x},\partial_{y}\big)$ being the
gradient. Here,
$v(x) = v_{0}\big(1+\frac{|x|}{\lambda}\big)$
represents a spatially-varying Fermi velocity with $v_0 =
\frac{3t_{0}a}{2\hbar}$ being the Fermi velocity of the unstrained
honeycomb lattice. If we had instead
chosen a plus sign for the strain tensor components in {(\ref{RindlerStrainPattern})},
then we would get a spatially decreasing Fermi velocity
$v_{0}\big(1-\frac{|x|}{\lambda}\big)$.
In the next step, we establish two different limiting cases of the Hamiltonian
Eq.~(\ref{Eq:fullHAM}): The unstrained case, $\lambda \to \infty$,
that yields the well known 2D Dirac Hamiltonian, and the
case of strong strains, $\lambda \to 0$, in which the system
Hamiltonian describes Dirac particles moving in a Rindler metric~\cite{Rindler:1966zz}. In the strong-strain limit,
we can neglect the unit contribution in $v(x)$, leaving $v(x)=v_{0}|x|/\lambda$. In fact, as we now argue,
this approximation also holds
in the long-wavelength limit. Our argument relies on
translation symmetry in the $y$-direction, which implies eigenfunctions of
$\hat{h}_R$ are plane waves in the $y$ direction, $\propto {\rm e}^{ik_y y}$ with wavevector $k_{y}$. Re-scaling the coordinates via $x\rightarrow x/|k_{y}|$ and $y\rightarrow y/|k_{y}|$ changes the spatially dependent Fermi velocity to $v(x)\rightarrow v_0\big(1+\frac{|x|}{|k_{y}|\lambda}\big)$ and the momentum operator becomes $\hat{\boldsymbol{p}}\rightarrow|k_{y}|\cdot\hat{\boldsymbol{p}}$. In the long-wavelength limit ($|k_{y}|\lambda\ll1$), the contribution of unity inside $v(x)$ becomes negligible and $|k_{y}|$ cancels out, giving us the 2D Rindler Hamiltonian which is just (\ref{Eq:fullHAM}) with the Fermi velocity $v(x)=v_{0}|x|/\lambda$.
Having discussed how the Rindler Hamiltonian can be realized in strained honeycomb lattices, in the coming sections we apply these ideas to show how a sudden switch-on of the system strain, which abruptly changes the Hamiltonian from the 2D Dirac Hamiltonian to
the 2D Rindler Hamiltonian, can strongly modify low-energy
and long-wavelength
properties, leading to the analogue Unruh effect. To begin, in the next section we review fermions in flat unstrained honeycomb lattices, i.e., the case of graphene.
\begin{figure*}[t]
\centering
\subfloat[Minkowski]{
\includegraphics[width=0.45\textwidth]{Fig1a.pdf}
\label{Band Structure: Minkowski}}
\subfloat[Rindler]{
\includegraphics[width=0.45\textwidth]{Fig1b.pdf}
\label{Band Structure: Rindler}}
\caption{(Color Online) A schematic figure to depict the (a) Minkowski and (b) Rindler mode expansions. In flat graphene, the existence of translation symmetry yields a Dirac-like linear energy dispersion $\epsilon_{\boldsymbol{k}}=\hbar v_{0}|\boldsymbol{k}|$ (shown in green in panel a). The electron and hole excitation energies are both positive ($\epsilon_{\boldsymbol{k}}>0$)
with the operators $\hat{a}_{\boldsymbol{k}}|0_{\cal{M}}\rangle=0=\hat{b}_{\boldsymbol{k}}|0_{\cal{M}}\rangle$ annihilating the Minkowski vacuum. In strained graphene, the Rindler energy $E_{k_{y},\Omega}=\hbar\Omega>0$ (shown in green in panel b) and transverse momenta $\hbar k_{y}$ are decoupled, with their associated electron and hole operators annihilating the Rindler vacuum state $\hat{c}_{k_{y},\Omega}|0_{\cal{R}}\rangle=0=\hat{d}_{k_{y},\Omega}|0_{\cal{R}}\rangle$.}
\label{Band Structure}
\end{figure*}
\section{Mode expansion: Flat honeycomb lattice}
\label{SEC:three}
In this section, we review the Dirac equation for
flat (unstrained) graphene
and derive the resulting normal mode expansion
that describes electron and hole excitations.
As we have already discussed, the low-energy Hamiltonian for fermions
hopping on a uniform
(unstrained) honeycomb lattice follows from taking the $\lambda \to \infty$
limit of Eq.~(\ref{Eq:fullHAM}), resulting in $\hat{H} = \hat{H}_{\rm R} + \hat{H}_{\rm L}$
with
\begin{eqnarray}
&&\hat{H}_{\text{R}}
= v_0 \int d^{2}r~\hat{\psi}^{\dagger}_{\text{R}}(\boldsymbol{r})
\boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}} \hat{\psi}_{\text{R}}(\boldsymbol{r}), \label{Dirac Hamiltonian}
\end{eqnarray}
where to get $\hat{H}_{\rm L}$ we simply replace $R\to L$ and take $v_0 \to -v_0$.
The field operators $\hat{\psi}_{i}$ ($i=L,R$) satisfy the anticommutation
relation
\begin{equation}
\{\hat{\psi}_{i}(\boldsymbol{r}),\hat{\psi}_{j}^{\dagger}(\boldsymbol{r}')\} = \delta_{ij}\,\delta(\boldsymbol{r}-\boldsymbol{r}').
\label{Eq:acrelation}
\end{equation}
In the following we focus on the right node, with results from the left
node easily following.
The Heisenberg equation of motion for the field operators
$\hat{\psi}_{\text{R}}(\boldsymbol{r},t)$ is:
\begin{equation}
\label{Dirac Equation Flat Graphene}
i\hbar\partial_{t}\hat{\psi}_{\text{R}}(\boldsymbol{r},t)=[\hat{\psi}_{\text{R}}(\boldsymbol{r},t),\hat{H}] = v_{0}\boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}}~\hat{\psi}_{\text{R}}(\boldsymbol{r},t),
\end{equation}
which is the massless Dirac equation (Weyl equation) describing how fermions (with zero rest mass) propagate in a flat spacetime with an emergent $(2+1)$-dimensional Minkowski line element labeled by the inertial coordinates $(T,X,Y)$:
\begin{eqnarray}\label{Minkowski Metric}
ds_{\text{Mink}}^{2} & = & -v_{0}^{2}dT^{2} + dX^{2} + dY^{2},
\end{eqnarray}
where the speed of light is now replaced by the Fermi velocity $c\rightarrow v_{0}$. In Appendix \ref{SEC:Appendix Dirac Eqn}, we describe how a metric expressed in inertial coordinates like {(\ref{Minkowski Metric})} (see Eq.~{(\ref{Inertial Coordinates})}) leads to a Dirac equation in inertial coordinates {(\ref{Dirac Equation Flat Graphene})} (see Eq.~{(\ref{Weyl Equations Inertial Coordinates})}). This metric describes the dynamical trajectories of inertial observers in a flat spacetime. Consider two inertial frames $S$ and $S'$ moving with relative speed $v$; the coordinates of an observer in frame $S'$, i.e. $(T',X',Y')$, are related to those in $S$ via the Lorentz transformations:
\begin{eqnarray}\label{Lorentz Transformation}
v_{0}T' & = & v_{0}T\cosh\theta - X\sinh\theta, \nonumber \\
X' & = & X\cosh\theta - v_{0}T\sinh\theta, \nonumber \\
Y' & = & Y,
\end{eqnarray}
where $\cosh\theta=\gamma=\frac{1}{\sqrt{1-\beta^{2}}}$ is the Lorentz factor
with $\beta=\frac{v}{v_{0}}$, and $\sinh\theta=\gamma\beta$. The ratio
of these factors relates the velocity to the rapidity
$\theta\in(-\infty,\infty)$: $\tanh\theta=\beta\in(-1,1)$. In either
frame, the combination $-v_{0}^{2}T^{2}+X^{2}+Y^{2}$ takes the same value,
i.e. it is invariant under the transformations {(\ref{Lorentz Transformation})}.
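This invariance is straightforward to confirm symbolically (a minimal sketch using \texttt{sympy}):
\begin{verbatim}
import sympy as sp

v0, T, X, Y, th = sp.symbols('v0 T X Y theta', real=True)
Tp = (v0 * T * sp.cosh(th) - X * sp.sinh(th)) / v0
Xp = X * sp.cosh(th) - v0 * T * sp.sinh(th)
Yp = Y
interval = -v0**2 * Tp**2 + Xp**2 + Yp**2
print(sp.simplify(interval - (-v0**2 * T**2 + X**2 + Y**2)))  # 0
\end{verbatim}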
Thus, as one might expect, fermions hopping in an unstrained honeycomb
lattice obey an analogue Dirac equation with the Fermi velocity $v_0$
playing the role
of the speed of light. Our next task is to expand the fermion field
operators into normal modes corresponding to positive energy
\lq \lq particle\rq\rq\ and negative energy \lq\lq hole\rq\rq\ excitations in graphene's Dirac band
structure.
Since the system is homogeneous in space and time (or alternatively
the emergent metric components {(\ref{Minkowski
Metric})} are constants), the Dirac equation solutions that
describe the evolution of fermions are plane waves of the form
$e^{\pm i(\boldsymbol{k}\cdot\boldsymbol{x}-\omega_{k}t)}$ and thus
the field operators on the right node can be expressed in terms of the
following mode expansion \cite{Das:2008zze}:
\begin{eqnarray}\label{MinkowskiModeExpansionStandardRight1}
\hat{\psi}_{\text{R}}(\boldsymbol{r})\! =\! \int
\frac{d^{2}k}{2\pi}\Big(e^{i(\boldsymbol{k}\cdot\boldsymbol{r}-v_{0}kt)}u_{\boldsymbol{k}}\hat{a}_{\boldsymbol{k}}
+
e^{-i(\boldsymbol{k}\cdot\boldsymbol{r}-v_{0} k t)}v_{-\boldsymbol{k}}\hat{b}^{\dagger}_{\boldsymbol{k}}\Big), \nonumber \\
\end{eqnarray}
where the wave vector $\boldsymbol{k}=(k_{x},k_{y})$ is related to the linear momentum via $\boldsymbol{p}=\hbar\boldsymbol{k}$. Thanks to translation symmetry, it is also related to the energy $\epsilon_{k}=\hbar\omega_{k}$ (with $\omega_{k}$ the mode frequency) via the linear dispersion relation $\epsilon_{k}=\hbar v_{0}|\boldsymbol{k}|$, or $\omega_{k}=v_{0}k$, where $k \equiv |\boldsymbol{k}| = \sqrt{k_{x}^{2}+k_{y}^{2}}$ is the wave-vector magnitude.
The particle modes in this expansion for the right node ${\bf K}_{\text{R}}$ (right-handed Weyl fermions) should have positive helicity, where helicity is defined as the projection of the Pauli spin operator onto the direction of the momentum, $h=\boldsymbol{\sigma}\cdot\hat{\boldsymbol{k}}$. Thus the flat spinors
used in the mode expansion (\ref{MinkowskiModeExpansionStandardRight1})
are defined as follows:
\begin{eqnarray}
u_{\boldsymbol{k}} = \frac{1}{\sqrt{2}}\begin{bmatrix}
1 \\
\frac{k_{x}+ik_{y}}{k}\end{bmatrix},~~~~~
v_{\boldsymbol{k}} = \frac{1}{\sqrt{2}}\begin{bmatrix}
-\big(\frac{k_{x}-ik_{y}}{k}\big) \\
1 \end{bmatrix}.
\end{eqnarray}
In the above definitions, $u_{\boldsymbol{k}}$ has positive helicity
$h = +1$, whereas $v_{\boldsymbol{k}}$ has negative helicity $h =
-1$. The particle
$\hat{a}$ and hole $\hat{b}$ operators satisfy anti-commutation
relations and annihilate the flat honeycomb (Minkowski) vacuum state
$|0_{\cal{M}}\rangle$:
\begin{eqnarray}\label{Minkowski Operators1}
\{\hat{a}_{\boldsymbol{k}},\hat{a}^{\dagger}_{\boldsymbol{k}'}\} & = & \delta(\boldsymbol{k}-\boldsymbol{k}'),~~~~~ \{\hat{b}_{\boldsymbol{k}},\hat{b}^{\dagger}_{\boldsymbol{k}'}\}~=~\delta(\boldsymbol{k}-\boldsymbol{k}'), \nonumber \\
\hat{a}_{\boldsymbol{k}}|0_{\cal{M}}\rangle & = & 0,~~~~~~~~~~~~~~~~~~~\hat{b}_{\boldsymbol{k}}|0_{\cal{M}}\rangle~=~0.
\end{eqnarray}
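Returning to the helicity assignments above, one can verify them directly: with the standard Pauli matrices,
\begin{equation}
\boldsymbol{\sigma}\cdot\hat{\boldsymbol{k}} = \frac{1}{k}\begin{bmatrix} 0 & k_{x}-ik_{y} \\ k_{x}+ik_{y} & 0 \end{bmatrix},
\end{equation}
and since $(k_{x}-ik_{y})(k_{x}+ik_{y})=k^{2}$, a one-line computation confirms $\big(\boldsymbol{\sigma}\cdot\hat{\boldsymbol{k}}\big)u_{\boldsymbol{k}}=+u_{\boldsymbol{k}}$ and $\big(\boldsymbol{\sigma}\cdot\hat{\boldsymbol{k}}\big)v_{\boldsymbol{k}}=-v_{\boldsymbol{k}}$.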
To obtain the mode expansion for the left node $\boldsymbol{K}_{\text{L}}$ (left-handed Weyl fermions), the particle and hole spinors $u_{\boldsymbol{k}}$ and $v_{-\boldsymbol{k}}$ in Eq.~(\ref{MinkowskiModeExpansionStandardRight1}) need to be switched with $v_{\boldsymbol{k}}$ and $u_{-\boldsymbol{k}}$, respectively, which means they both have negative helicities.
As is well known, the particle and hole fermionic excitations in
graphene obey a linear dispersion relation, with $\omega_{k}\propto|\boldsymbol{k}|$. In Fig.~\ref{Band Structure: Minkowski},
we depict this linear energy dispersion, with the system ground
state being a fully occupied valence band at negative energies
and a fully unoccupied conduction band at positive energies.
This figure also depicts the positive energy particle (or electron)
and hole excitations that are captured by the mode expansion (\ref{MinkowskiModeExpansionStandardRight1}).
\section{Mode expansion: Rindler system}
\label{SEC:four}
In this section, we study the case of fermions hopping
in a honeycomb lattice in the presence of a strain field that
leads to the Rindler low-energy Hamiltonian, obtained by approximating
$v(x) \simeq \frac{v_0}{\lambda}|x|$. As in the flat case,
the system Hamiltonian comprises terms from the left and right nodes,
$\hat{H} = \hat{H}_{\rm R} + \hat{H}_{\rm L}$,
with the right-node Hamiltonian:
\begin{equation}
\hat{H}_{\text{R}}
= \frac{v_0}{\lambda} \int d^{2}r~\hat{\psi}^{\dagger}_{\text{R}}(\boldsymbol{r})
\sqrt{|x|}
\boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}}\sqrt{|x|}
\hat{\psi}_{\text{R}}(\boldsymbol{r}),
\label{Rindler Hamiltonian}
\end{equation}
which we call the Rindler Hamiltonian by analogy with the well-known Rindler metric, which describes how the flat Minkowski spacetime is seen by an accelerating observer \cite{Rindler:1966zz}. Following
the discussion in the homogeneous case, we find the equation of motion
\begin{equation}\label{Dirac Equation Rindler Graphene}
i\hbar\partial_{t}\hat{\psi}_{\text{R}}(\boldsymbol{r}) = \frac{v_0}{\lambda}
\sqrt{|x|} \boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}} \sqrt{|x|} ~\hat{\psi}_{\text{R}}(\boldsymbol{r}),
\end{equation}
the Dirac equation for massless fermions in Rindler spacetime with Rindler coordinates
$(t,x,y)$ {\cite{Wald:1984rg,Misner:1973prb,Rindler:2006km}} described by the line element
\begin{eqnarray}\label{Rindler Metric}
ds^{2} & = & -\Big(\frac{x}{\lambda}\Big)^{2}v_{0}^{2}dt^{2} + dx^{2} + dy^{2}.
\end{eqnarray}
In Appendix \ref{SEC:Appendix Dirac Eqn}, we describe how the Rindler metric (see
Eq.~{(\ref{Rindler Coordinates})}) leads to a Dirac
equation for accelerating electrons (see
Eq.~{(\ref{Weyl Equations Rindler Coordinates})}). To
understand the role of this metric in the context of honeycomb
systems, we first need to understand its role in relativistic physics. Imagine a Rindler observer in the frame $S_{\text{R}}$, moving with some acceleration
$\boldsymbol{a}=a\hat{x}$ ($a>0$) with respect to an inertial frame $S$. In the inertial frame, the observer starts their journey far away at $X=+\infty$ at time $T=-\infty$, moving towards the origin $X=0$ with a velocity close to the speed of light $c$. They initially decelerate, momentarily come to rest at a closest distance of approach $x_{\text{min}}=\frac{c^{2}}{a}$ from the origin, and then accelerate back out to $X=+\infty$ as $T\rightarrow+\infty$. Since at any instant the Rindler observer moves with some instantaneous velocity $v$, we expect a hyperbolic trajectory built from the same invariant combination as in the Minkowski case, $-v_{0}^{2}T^{2}+X^{2}+Y^{2}=\text{constant}$, and the transformation between the inertial coordinates $(T,X)$ and the Rindler coordinates $(t,x)$ to resemble ${(\ref{Lorentz Transformation})}$. This is reminiscent of non-relativistic physics, where the trajectory of a uniformly accelerated observer is parabolic: $x=x_{0}+u_{0}t+\frac{1}{2}at^{2}$. Relativistically, however, the trajectory must be hyperbolic, since the motion also affects the rate at which the observer's clock ticks. The relation between the inertial and Rindler coordinates is thus {\cite{Wald:1984rg,Misner:1973prb,Rindler:2006km}}:
\begin{eqnarray}\label{Rindler Transformation}
cT & = & x_{\text{min}}\sinh\frac{ct}{x_\text{min}}, \nonumber \\
X & = & x_{\text{min}}\cosh\frac{ct}{x_\text{min}},
\end{eqnarray}
which gives us the trajectory of a Rindler observer viewed from an inertial frame $S$: $X^{2}-c^{2}T^{2} = x^{2}_{\text{min}}$. The above coordinates $(T,X)$ label the worldline of an accelerated observer from the perspective of an inertial frame. If the acceleration is changed to a different but constant value, then we get a family of Rindler observers, each with a different closest distance of approach $x_{\text{min}}$.
This family is parameterized by promoting $x_{\text{min}}$ to a coordinate, $x_{\text{min}}\rightarrow x$, giving us the Rindler metric in Eq.~{(\ref{Rindler Metric})}. If we set the spatial differentials to zero, i.e. $dx=dy=0$, then $t$ is proportional to the proper time shown on the watch of the Rindler observer at position $x$ (and coincides with it at $x=\lambda$). Similar arguments hold for an observer accelerating in the opposite direction with $a<0$. Note that (\ref{Rindler Metric}) becomes degenerate at $x=0$, i.e. the time-time component of the metric tensor vanishes ($g_{tt}=0$), so the metric has no inverse there. This is known as the \emph{Rindler horizon}. Because of this horizon, oppositely accelerating observers can never communicate with each other.
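To make this explicit, promote $x_{\text{min}}$ to the coordinate $x$ and rescale the Rindler time so that $t$ is the proper time at $x=\lambda$ (i.e. use $v_{0}t/\lambda$ in place of $ct/x_{\text{min}}$, with $c\rightarrow v_{0}$ in anticipation of the graphene application). The chart then reads
\begin{eqnarray}
v_{0}T & = & x\sinh\Big(\frac{v_{0}t}{\lambda}\Big), \nonumber \\
X & = & x\cosh\Big(\frac{v_{0}t}{\lambda}\Big),
\end{eqnarray}
and upon differentiating, the cross terms proportional to $dx\,dt$ cancel, leaving
\begin{eqnarray}
-v_{0}^{2}dT^{2}+dX^{2} & = & \big(\cosh^{2}-\sinh^{2}\big)\Big[dx^{2} - \Big(\frac{x}{\lambda}\Big)^{2}v_{0}^{2}dt^{2}\Big] \nonumber \\
& = & dx^{2} - \Big(\frac{x}{\lambda}\Big)^{2}v_{0}^{2}dt^{2},
\end{eqnarray}
which (with $dy^{2}$ carried along trivially) reproduces the Rindler line element (\ref{Rindler Metric}).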
Note that the connection between the coordinates $(T,X)$ and $(t,x)$ is just a change of variables; therefore the metric {(\ref{Rindler Metric})} is simply flat spacetime written in disguise, and the Riemann curvature of this spacetime is zero. Also note that the coordinates $(t,x,y)$ cover only two portions of the flat Minkowski spacetime: the right Rindler wedge $x>0$ for positive accelerations and the left Rindler wedge $x<0$ for negative accelerations.
In the context of strained graphene, the emergent metric in Eq.~(\ref{Rindler Metric}) tells us that similar Rindler physics is expected provided we replace the speed of light with the Fermi velocity, $c\rightarrow v_{0}$, and the distance of closest approach with the strain scale, $x_{\text{min}}\rightarrow\lambda$. Once we do this, we can interpret the electron dynamics inside graphene in terms of Rindler fermions with analogue acceleration $a=\frac{v_{0}^{2}}{\lambda}$; a choice of the strain scale $\lambda$ corresponds to choosing a unique Rindler observer with this acceleration. Such analogue accelerations are expected here because, in the semiclassical model of electron dynamics, the strain breaks translation symmetry and forces the Fermi velocity to be spatially dependent, $v(x)=v_{0}(1+\frac{|x|}{\lambda})$. Moreover, the strain pattern in Eq.~(\ref{RindlerStrainPattern}) tells us that the carbon atoms become closer together with increasing distance from the origin, thus enhancing electron hopping. Hopping from one carbon atom to another is most difficult at the origin itself, especially for low-energy, long-wavelength modes which cannot tunnel from one side to the other. Therefore $x=0$, being a barrier for such modes, acts as an analogue of the Rindler horizon, breaking the strained graphene into two disconnected pieces: the right side mimics the right Rindler wedge, and the left side mimics the left Rindler wedge.
Our next task is to identify the normal mode expansion for the field operator $\hat{\psi}_{\text{R}}(\boldsymbol{r})$ in
the Rindler
Dirac equation (\ref{Dirac Equation Rindler Graphene}) \cite{Unruh:1974,Candelas:1978gg,Soffel:1980kx,Hughes:1983ch,Iyer:1985ufr,Jauregui:1991me,Crispino:2007eb,Takagi:1986kn}. In doing this, we define
the frequency scale $\Omega>0$ and look for positive energy $(E= \hbar\Omega>0)$
solutions (corresponding to Rindler particles) and
negative energy $(E= -\hbar\Omega<0)$ solutions (corresponding to Rindler holes).
Starting with the $E>0$ case, the solutions take the form $\psi^{+}_{\Omega}(x,k_{y})
e^{i(k_{y}y-\Omega t)}$, where $p_{y}=\hbar k_{y}$ is the momentum in
the $y$-direction.
If we define the components of the spinor part via
\begin{equation}
\psi^{+}_{\Omega}(x,k_{y}) = \begin{pmatrix}
f(x) \\
g(x)
\end{pmatrix},
\end{equation}
then the functions $f(x)$ and $g(x)$ satisfy (henceforth we set $\hbar\to 1$):
\begin{subequations}
\label{fandgee}
\begin{eqnarray}
\label{Og-gives-f}
\bigg(|x|\frac{d}{dx}+k_{y}|x|+\frac{\text{sgn}(x)}{2}\bigg)g(x) & = & i\Omega f(x), \\ \label{Of-gives-g}
\bigg(|x|\frac{d}{dx}-k_{y}|x|+\frac{\text{sgn}(x)}{2}\bigg)f(x) & = & i\Omega g(x).
\end{eqnarray}
\end{subequations}
These equations take a dimensionless form because we measure energy (or frequency, $\Omega$) relative to the scale
\begin{equation}
\label{eq:omegacdef}
\omega_{c}=v_0/\lambda
\end{equation}
characterizing the strain magnitude.
Starting with the case of $x>0$ and $k_y>0$, and focusing on solutions that are normalizable at $|x|\to \infty$, we find:
\begin{subequations}
\label{Bessel Solutions}
\begin{eqnarray}
f(x) & = & K_{\frac{1}{2}-i\Omega}\big(k_{y}x\big) - K_{\frac{1}{2}+i\Omega}\big(k_{y}x\big),
\\
g(x) & = & K_{\frac{1}{2}-i\Omega}\big(k_{y}x\big) + K_{\frac{1}{2}+i\Omega}\big(k_{y}x\big),
\end{eqnarray}
\end{subequations}
where $K_{\nu}(x)$ is the modified Bessel function of the second kind, which diverges at the origin $x=0$ and for large negative arguments $x\rightarrow-\infty$. This divergence can be attributed to the form of the analogue Rindler metric {(\ref{Rindler Metric})}: its time-time component vanishes at $x=0$ and contributes the non-smooth modulus function $|x|$ to the Weyl equations, which leads to different solutions in the left and right spatial regions of the strained honeycomb lattice. As we have already discussed, this demarcation of the system at $x=0$ is known as the Rindler horizon. In analogy with relativity, the left spatial portion acts as the left Rindler wedge, and similarly for the right portion; an observer in the right wedge can never communicate with their counterpart in the left wedge.
In the next section, we will see that this is an essential reason why a natural temperature emerges in this system.
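It is straightforward to verify these solutions directly. For $x>0$ and $k_{y}>0$, set $z=k_{y}x$ (so that $|x|\frac{d}{dx}=z\frac{d}{dz}$ and $\text{sgn}(x)=1$), write $\nu_{\pm}=\frac{1}{2}\pm i\Omega$, and use the recurrence $K_{\nu}'(z)=-K_{\nu-1}(z)-\frac{\nu}{z}K_{\nu}(z)$ together with $K_{-\nu}=K_{\nu}$, which gives $K_{\nu_{\pm}-1}=K_{\nu_{\mp}}$. Then
\begin{eqnarray}
\Big(z\frac{d}{dz}+z+\frac{1}{2}\Big)g & = & \Big(\frac{1}{2}-\nu_{-}\Big)K_{\nu_{-}}+\Big(\frac{1}{2}-\nu_{+}\Big)K_{\nu_{+}} \nonumber \\
& = & i\Omega\big(K_{\nu_{-}}-K_{\nu_{+}}\big)~=~i\Omega f,
\end{eqnarray}
in agreement with (\ref{Og-gives-f}), since the terms proportional to $z$ cancel between $zg'$ and $zg$; Eq.~(\ref{Of-gives-g}) follows in the same way.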
The solutions for $f$ and $g$ above have Bessel functions with positive arguments. Therefore they are finite and vanish asymptotically for $k_{y}x\rightarrow\infty$. For the case $x>0$ and $k_{y}<0$, the equations {(\ref{Og-gives-f})} and {(\ref{Of-gives-g})} get interchanged, resulting in an exchange of the spinor components $f(x)\leftrightarrow g(x)$. The case of $x<0$ and $k_{y}>0$ effectively switches $\Omega\rightarrow-\Omega$ and $k_y \to -k_y$ relative to the $x>0$ and $k_y>0$ case,
while the case of $x<0$ and $k_{y}<0$ effectively switches $\Omega\rightarrow-\Omega$ relative to
the $x>0$ and $k_y>0$ case. Taken together, these considerations imply the positive energy
spinor
\begin{equation*}
\psi^{+}_{\Omega}(x,k_{y}) = \begin{cases}
\begin{pmatrix}
K_{\frac{1}{2}-i\Omega} - {\rm sgn}(k_y) K_{\frac{1}{2}+i\Omega}\\
K_{\frac{1}{2}-i\Omega} + {\rm sgn}(k_y) K_{\frac{1}{2}+i\Omega} \\
\end{pmatrix} &\text{if $x>0$}\\
\begin{pmatrix}
K_{\frac{1}{2}+i\Omega} + {\rm sgn}(k_y) K_{\frac{1}{2}-i\Omega} \\
K_{\frac{1}{2}+i\Omega} - {\rm sgn}(k_y) K_{\frac{1}{2}-i\Omega}\\
\end{pmatrix} &\text{if $x<0$}
\end{cases}
\end{equation*}
where $ K_{\frac{1}{2}\pm i\Omega}$ is shorthand for $K_{\frac{1}{2}\pm i\Omega}(|k_y x|)$. We emphasize here that the above two solutions come from solving the Rindler-Dirac equation separately for $x>0$ and $x<0$, pertaining to the two sides of the honeycomb lattice. Thus we define orthonormality separately in
the $x>0$ and $x<0$ regimes.
Turning to the $E<0$ case, we take the solutions to have the form $\psi^{-}_{\Omega}(x,k_{y})
e^{-i(k_{y}y-\Omega t)}$, which effectively changes the sign of $k_y$ and $\Omega$ relative to the
positive energy case. This leads to the negative energy spinors:
\begin{equation*}
\psi^{-}_{\Omega}(x,k_{y}) = \begin{cases}
\begin{pmatrix}
K_{\frac{1}{2}+i\Omega} + {\rm sgn}(k_y) K_{\frac{1}{2}-i\Omega}\\
K_{\frac{1}{2}+i\Omega} - {\rm sgn}(k_y) K_{\frac{1}{2}-i\Omega} \\
\end{pmatrix} &\text{if $x>0$}\\
\begin{pmatrix}
K_{\frac{1}{2}-i\Omega} - {\rm sgn}(k_y) K_{\frac{1}{2}+i\Omega} \\
K_{\frac{1}{2}-i\Omega} + {\rm sgn}(k_y) K_{\frac{1}{2}+i\Omega}\\
\end{pmatrix} &\text{if $x<0$}
\end{cases}
\end{equation*}
The normal mode expansion then takes the form:
\cite{Unruh:1974,Candelas:1978gg,Soffel:1980kx,Hughes:1983ch,Iyer:1985ufr,Jauregui:1991me,Crispino:2007eb,Takagi:1986kn}
\begin{eqnarray}
\label{ModeExpansionRightNode}
&&\hat{\psi}_{\text{R}}(\boldsymbol{r},t) = \int_{-\infty}^{\infty} \frac{dk_{y}}{\sqrt{2\pi}} \int_{0}^{\infty} d\Omega~N_{k_{y},\Omega}
\\
&& \hspace{-0.5cm}\times \Big[\psi^{+}_{\Omega}(x,k_{y})e^{i(k_{y}y-\Omega t)}\hat{c}_{k_{y},\Omega}
+ \psi^{-}_{\Omega}(x,k_{y})e^{-i(k_{y}y-\Omega t)}\hat{d}^{\dagger}_{k_{y},\Omega} \Big],
\nonumber
\end{eqnarray}
where the operator $\hat{c}_{k_{y},\Omega}$ annihilates a positive energy Rindler particle and the operator $\hat{d}^{\dagger}_{k_{y},\Omega}$ creates a negative energy Rindler hole, as illustrated in Fig.~\ref{Band Structure: Rindler}.
These particle and hole operators satisfy fermionic anticommutation relations:
\begin{eqnarray}
\{\hat{c}_{k_{y},\Omega},\hat{c}^{\dagger}_{k'_{y},\Omega'}\} & = & \delta(k_{y}-k'_{y})\delta(\Omega-\Omega'),
\\
\{\hat{d}_{k_{y},\Omega},\hat{d}^{\dagger}_{k'_{y},\Omega'}\} & = & \delta(k_{y}-k'_{y})\delta(\Omega-\Omega').
\end{eqnarray}
We emphasize that, in our convention, the energy scale $\hbar \Omega>0$, so that
both particle and hole excitations have positive energy (although the latter
emerge from below the Fermi level). Thus the Rindler vacuum $|0_{\cal{R}}\rangle$
is annihilated by both the electron and hole operators:
\begin{eqnarray}
\hat{c}_{k_{y},\Omega}|0_{\cal{R}}\rangle & = & 0,
\\
\hat{d}_{k_{y},\Omega}|0_{\cal{R}}\rangle &=& 0.
\end{eqnarray}
For the left-handed electrons, we need to solve the corresponding set of Weyl equations, which is the same as the equation for right-handed electrons except for a minus sign associated with the time derivative. This amounts to saying that the fermions on the ${\bf K}_{\text{L}}$ node are described by the same mode expansion as {(\ref{ModeExpansionRightNode})}, except that the spinors all have the sign of the frequency flipped, i.e. $\psi^{\pm}_{\Omega}(x,k_{y})\rightarrow\psi^{\pm}_{-\Omega}(x,k_{y})$. Finally, to determine the normalization factor $N_{k_{y},\Omega}=\sqrt{\frac{|k_{y}|}{2\pi^{2}}\cosh\pi\Omega}$, we make use of the inner product for Weyl spinors {\cite{Takagi:1986kn,Birrell:1982ix}}:
\begin{eqnarray}\label{Orthonormality}
&&\Big(\psi^{\sigma'}_{\Omega'}(x,k_{y}),\psi^{\sigma}_{\Omega}(x,k_{y})\Big) \equiv \int_{0}^{\infty} dx~\psi^{\sigma'\dagger}_{\Omega'}(x,k_{y})
\psi^{\sigma}_{\Omega}(x,k_{y}) \nonumber \\
&&\qquad
= \delta^{\sigma\sigma'}\delta(\Omega-\Omega'),~~~~~~~
\end{eqnarray}
where $\sigma=\pm$ denotes the positive or negative energy spinors, and the following identity for Bessel functions {\cite{Jauregui:1991me,Gradshteyn}}:
\begin{eqnarray}\label{Normalization Bessel}
&&\int_0^\infty dx \, \Big[K_{\frac{1}{2}+i\Omega} (x)
K_{\frac{1}{2}-i\Omega'} (x)
+
K_{\frac{1}{2}-i\Omega} (x)
K_{\frac{1}{2}+i\Omega'} (x)\Big] \nonumber \\
& = & \pi^2 \sech (\pi\Omega)\delta(\Omega-\Omega').
\end{eqnarray}
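As a consistency check on this normalization, note that for real arguments $\big(K_{\frac{1}{2}-i\Omega}(z)\big)^{*}=K_{\frac{1}{2}+i\Omega}(z)$, so that for $x>0$ the cross terms proportional to ${\rm sgn}(k_{y})$ cancel in the spinor product (using the shorthand $K_{\frac{1}{2}\pm i\Omega}=K_{\frac{1}{2}\pm i\Omega}(|k_{y}x|)$ as before):
\begin{eqnarray}
\psi^{+\dagger}_{\Omega'}\psi^{+}_{\Omega} & = & 2\Big[K_{\frac{1}{2}+i\Omega'}K_{\frac{1}{2}-i\Omega} + K_{\frac{1}{2}-i\Omega'}K_{\frac{1}{2}+i\Omega}\Big].~~~~~~~
\end{eqnarray}
Substituting $u=|k_{y}|x$ and using the identity (\ref{Normalization Bessel}), the unnormalized spinors thus integrate to $\frac{2\pi^{2}}{|k_{y}|}\sech(\pi\Omega)\,\delta(\Omega-\Omega')$, and requiring a unit-normalized delta function as in (\ref{Orthonormality}) fixes $N_{k_{y},\Omega}=\sqrt{\frac{|k_{y}|}{2\pi^{2}}\cosh\pi\Omega}$.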
We have now derived the mode expansion {(\ref{ModeExpansionRightNode})} for the field operators in a strained graphene system (or in an ultracold honeycomb optical lattice with a linear-in-position Fermi velocity), in terms of Bessel functions that are singular at the horizon. In the next section, we describe how this leads to the spontaneous creation of electron-hole pairs; equivalently, a sudden change in the Fermi velocity $v_{0}\rightarrow v_{0}\frac{|x|}{\lambda}$ leads to a spontaneous jump of electrons from the valence to the conduction band.
\section{Spontaneous Electron-Hole Pair Creation}\label{SEC:five}
In the last two sections, we discussed the Dirac Hamiltonian {(\ref{Dirac Hamiltonian})} and its solutions {(\ref{MinkowskiModeExpansionStandardRight1})} for a flat honeycomb system with homogeneous Fermi velocity $v(x)=v_{0}$, and the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})} and its solutions {(\ref{ModeExpansionRightNode})} for an inhomogeneous honeycomb lattice with a spatially-varying Fermi velocity $v(x)=v_{0}\frac{|x|}{\lambda}$. The latter solutions are made out of spinors of Bessel functions that diverge at the horizon $x=0$,
with separate solutions at $x>0$ and $x<0$.
In this section, we will describe how this set-up leads to spontaneous creation of electron-hole pairs, with the spectrum of these excitations described by an emergent Fermi-Dirac distribution that is a function of Rindler mode frequency $\Omega$ and the characteristic frequency $\omega_{c}$,
defined in Eq.~(\ref{eq:omegacdef}), that is proportional to the Unruh temperature.
Since the Rindler vacuum $|0_{\cal{R}}\rangle$ and the Minkowski vacuum $|0_{\cal{M}}\rangle$ are associated with strained and flat honeycomb lattices respectively, they are expected to be very different from each other: the notion of a particle defined with respect to the Minkowski vacuum cannot be the same as in the Rindler case, since the former enjoys translation symmetry, whereas in the latter the mechanical strain strongly modifies the system eigenstates.
We consider the situation where we start with the flat honeycomb Hamiltonian {(\ref{Dirac Hamiltonian})} described by the mode expansion {(\ref{MinkowskiModeExpansionStandardRight1})} for the field operators, and then suddenly switch on the linear-in-position Fermi velocity with a characteristic strain length $\lambda$, thereby invoking the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})} and the corresponding mode expansion {(\ref{ModeExpansionRightNode})}.
In the Heisenberg picture then, we expect that the mode expansion for the fermionic field operators $\hat{\psi}_{\text{R}}$ on the right node evolve from Eq.~{(\ref{MinkowskiModeExpansionStandardRight1})} to Eq.~{(\ref{ModeExpansionRightNode})}, whereas the state of the system will remain the Minkowski vacuum state $|0_{\cal{M}}\rangle$.
This is just the sudden approximation of quantum mechanics,
where if a potential suddenly changes its shape, then the original ground state can be expressed as a linear combination of the eigenstates of the new Hamiltonian, and thus the observables can be found by taking expectation values of operators in the modified system with respect to the ground state of the original Hamiltonian. Thus, in the present case, to find observables we need to know how the Rindler operators $\hat{c}$ and $\hat{d}$ act on the Minkowski vacuum state $|0_{\cal{M}}\rangle$. For this, we need to find an expression of these Rindler operators in terms of the Minkowski annihilation operators $\hat{a}$ and $\hat{b}$.
To connect these operators, we can simply equate the two mode expansions {(\ref{MinkowskiModeExpansionStandardRight1})} and {(\ref{ModeExpansionRightNode})}, as they describe the same quantum field operator $\hat{\psi}_{\text{R}}$. Then we take the inner product, as defined in (\ref{Orthonormality}) {\cite{Takagi:1986kn,Birrell:1982ix}}, with the positive energy solutions, $\big(\psi^{+}_{\Omega}(x,k_{y}),\hat{\psi}_{\text{R}}(x)\big)$, to obtain the Rindler electron operators, and with the negative energy solutions, $\big(\psi^{-}_{\Omega}(x,k_{y}),\hat{\psi}_{\text{R}}(x)\big)$, to obtain the Rindler hole operators, yielding:
\begin{eqnarray}\label{BogoliubovTransformation}
\hat{c}^{>}_{k_{y},\Omega} & = & \int d^{2}k'~ \Bigg[\alpha^{+,>}_{\boldsymbol{k}',k_{y},\Omega}\hat{a}_{\boldsymbol{k}'} + \beta^{+,>}_{\boldsymbol{k}',k_{y},\Omega}\hat{b}^{\dagger}_{\boldsymbol{k}'}\Bigg], \nonumber \\
\hat{d}^{>\dagger}_{k_{y},\Omega} & = & \int d^{2}k'~ \Bigg[\beta^{-,>}_{\boldsymbol{k}',k_{y},\Omega}\hat{a}_{\boldsymbol{k}'} + \alpha^{-,>}_{\boldsymbol{k}',k_{y},\Omega}\hat{b}^{\dagger}_{\boldsymbol{k}'}\Bigg],
\end{eqnarray}
the Bogoliubov transformations that express the Rindler ladder operators for $x>0$ (denoted by the superscript $>$) as a linear combination of the Minkowski ladder operators. Similar relations hold for the $x<0$ region, with the superscript $<$ in the appropriate places.
Following Takagi \cite{Takagi:1986kn}, the coefficients of this linear relationship, $\alpha^{\pm}_{\boldsymbol{k}',k_{y},\Omega}$ and $\beta^{\pm}_{\boldsymbol{k}',k_{y},\Omega}$, known as Bogoliubov coefficients, are found to be:
\begin{eqnarray}\label{BogoliubovCoeffs}
\alpha^{+,>}_{\boldsymbol{k}',k_{y},\Omega} & = & \sqrt{n_{\text{F}}(-2\pi\Omega)}~\delta(k_{y}-k'_{y})~\mathcal{P}(\boldsymbol{k}',\Omega), \nonumber \\
\beta^{+,>}_{\boldsymbol{k}',k_{y},\Omega} & = & -i \sqrt{n_{\text{F}}(2\pi\Omega)}~\delta(k_{y}+k'_{y})~\mathcal{P}(\boldsymbol{k}',\Omega), \nonumber \\
\alpha^{-,>}_{\boldsymbol{k}',k_{y},\Omega} & = & \alpha^{+,<}_{\boldsymbol{k}',k_{y},\Omega} = \big(\alpha^{+,>}_{\boldsymbol{k}',k_{y},\Omega}\big)^{*} = \big(\alpha^{-,<}_{\boldsymbol{k}',k_{y},\Omega}\big)^{*}, \nonumber \\
\beta^{-,>}_{\boldsymbol{k}',k_{y},\Omega} & = & \beta^{+,<}_{\boldsymbol{k}',k_{y},\Omega} = \big(\beta^{+,>}_{\boldsymbol{k}',k_{y},\Omega}\big)^{*} = \big(\beta^{-,<}_{\boldsymbol{k}',k_{y},\Omega}\big)^{*},~~~~~~~
\end{eqnarray}
where the first Bogoliubov coefficient $\alpha^{+,>}$ for the right side of graphene is found by taking the inner product of the positive energy Rindler spinor for $x>0$ with the positive energy Minkowski modes, whereas the second coefficient $\beta^{+,>}$ is found using the negative energy Minkowski modes. Similarly, the other two coefficients $\alpha^{-,>}$ and $\beta^{-,>}$ can be found by using negative energy Rindler spinors. In the last two lines, we list how the rest of the coefficients are related to the first two via complex conjugation. These coefficients are written in terms of the Fermi-Dirac function $n_{\text{F}}(x)=(e^{x}+1)^{-1}$ and the projection operator:
\begin{eqnarray}\label{Projection Operator}
\mathcal{P}(\boldsymbol{k},\Omega) & = & \frac{1+i}{\sqrt{2}}\frac{1}{\sqrt{2\pi k}}\bigg(\frac{k+k_{x}}{k-k_{x}}\bigg)^{\frac{i\Omega}{2}} \nonumber \\
& \times & \bigg(\sqrt{\frac{k+k_{x}}{2k}}+i\sqrt{\frac{k-k_{x}}{2k}}\bigg).~~~~~~
\end{eqnarray}
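A useful property of this object, which underlies the energy delta function derived below in Eq.~(\ref{Integral-kx-Indentity}), is that its modulus is independent of $\Omega$:
\begin{equation}
\big|\mathcal{P}(\boldsymbol{k},\Omega)\big|^{2} = \frac{1}{2\pi k},
\end{equation}
since $\big|\frac{1+i}{\sqrt{2}}\big|=1$, the factor $\big(\frac{k+k_{x}}{k-k_{x}}\big)^{\frac{i\Omega}{2}}$ is a pure phase for real momenta, and $\frac{k+k_{x}}{2k}+\frac{k-k_{x}}{2k}=1$.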
The anticommutation relations for the Rindler operators $\hat{c}_{k_{y},\Omega}$ and $\hat{d}_{k_{y},\Omega}$, along with those of the Minkowski operators
$\hat{a}_{\boldsymbol{k}}$ and $\hat{b}_{\boldsymbol{k}}$ and the transformations
Eq.~(\ref{BogoliubovTransformation}) imply the following normalization condition for the Bogoliubov coefficients:
\begin{eqnarray}
&&\int d^{2}\tilde{k}~ \Big(\alpha^{\sigma,r}_{\tilde{\boldsymbol{k}},k_{y},\Omega}\alpha^{\sigma',r'*}_{\tilde{\boldsymbol{k}},k'_{y},\Omega'} + \beta^{\sigma,r}_{\tilde{\boldsymbol{k}},k_{y},\Omega}\beta^{\sigma',r'*}_{\tilde{\boldsymbol{k}},k'_{y},\Omega'}\Big) \nonumber \\
& = & \delta^{\sigma\sigma'}\delta^{rr'}\delta(k_{y}-k'_{y})\delta(\Omega-\Omega'),
\end{eqnarray}
where the superscript $\sigma=\pm$ labels the positive and negative energy solutions, and $r=>,<$ labels the right ($x>0$) or left ($x<0$) region of graphene. To find these Bogoliubov coefficients, we made use of the Fourier transform of the modified Bessel functions of the second kind {\cite{Jauregui:1991me,Gradshteyn}}:
\begin{eqnarray}\label{FourierBessel}
&& \int_{0}^{\infty}dx~K_{\nu}(ax)e^{ibx} \nonumber \\
& = & \frac{\pi}{4\sqrt{a^{2}+b^{2}}}\Bigg[\frac{(\sqrt{r^{2}+1}+r)^{\nu}+(\sqrt{r^{2}+1}-r)^{\nu}}{\cos(\pi\nu/2)} \nonumber \\
& + & i~\frac{(\sqrt{r^{2}+1}+r)^{\nu}-(\sqrt{r^{2}+1}-r)^{\nu}}{\sin(\pi\nu/2)}\Bigg]~~~~~
\end{eqnarray}
where $r=b/a$. The conditions required for the validity of the sine transform are $\text{Re}~a>0$, $b>0$, $|\text{Re}~\nu|<2$ and $\nu\neq 0$, whereas the conditions for the cosine transform are $\text{Re}~a>0$, $b>0$ and $|\text{Re}~\nu|<1$. In our case, $a=k_{y}>0$ and $\nu=\frac{1}{2}\pm i\Omega$ satisfy these conditions. However, $b=k_{x}$ can be positive or negative: for $k_{x}>0$ the above Fourier transform can be used directly, whereas for $k_{x}<0$ one needs its complex conjugate.
Note that the transformation in {(\ref{BogoliubovTransformation})} and the corresponding Bogoliubov coefficients in {(\ref{BogoliubovCoeffs})}, can be re-written
in a much cleaner way {\cite{Takagi:1986kn}}:
\begin{subequations}
\label{Bogoliubov Transformations Actual}
\begin{eqnarray}
\label{Bog1}
\hat{c}^{>}_{k_{y},\Omega} & = &
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{A}_{k_{y},\Omega} -i
\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{B}^{\dagger}_{-k_{y},\Omega},~~~~~ \\ \label{Bog2} \hat{d}^{>\dagger}_{k_{y},\Omega} & = &
i\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{A}^{*}_{-k_{y},\Omega} +
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{B}^{*\dagger}_{k_{y},\Omega},~~~~~
\end{eqnarray}
\end{subequations}
where, instead of the momentum integrations in (\ref{BogoliubovTransformation}), the Rindler operators are expressed in terms of modified Minkowski operators $\hat{A}$ and $\hat{B}$ (the star on $\hat{A}^{*}_{-k_{y},\Omega}$ and $\hat{B}^{*\dagger}_{k_{y},\Omega}$ in (\ref{Bog2}) indicates that the weight $\mathcal{P}(\boldsymbol{k},\Omega)$ in the definition below is complex conjugated, consistent with the conjugation relations in (\ref{BogoliubovCoeffs})). These modified operators are defined as complex linear combinations of the original Minkowski operators $\hat{a}$ and $\hat{b}$ as follows {\cite{Takagi:1986kn}}:
\begin{eqnarray}\label{Modified & Actual Minkwoski Operators}
\hat{A}_{k_{y},\Omega} & = & \int_{-\infty}^{\infty} dk_{x}~\mathcal{P}(\boldsymbol{k},\Omega) ~\hat{a}_{\boldsymbol{k}}, \nonumber \\
\hat{B}^{\dagger}_{k_{y},\Omega} & = & \int_{-\infty}^{\infty}dk_{x}~\mathcal{P}(\boldsymbol{k},\Omega)
~\hat{b}^{\dagger}_{\boldsymbol{k}},
\end{eqnarray}
that (like the operators $\{\hat{a},\hat{b}\}$) also annihilate the Minkowski vacuum:
\begin{equation}
\label{Modified Minkwoski Annihilate Miknkoski vacuum}
\hat{A}_{k_{y},\Omega} |0_{\cal{M}}\rangle = \hat{B}_{k_{y},\Omega} |0_{\cal{M}}\rangle= 0 ,
\end{equation}
which follows from Eq.~(\ref{Minkowski Operators1}). In addition, they satisfy the
anti-commutation relations:
\begin{eqnarray}
\big\{\hat{A}_{k_{y},\Omega},\hat{A}^{\dagger}_{k'_{y},\Omega'}\big\}
& = & \big\{\hat{B}_{k_{y},\Omega},\hat{B}^{\dagger}_{k'_{y},\Omega'}\big\} \nonumber \\
& = & \delta(k_{y}-k'_{y})\delta(\Omega-\Omega').
\label{Anti-Commutation Modified Minkwoski}
\end{eqnarray}
As a result of these properties, the expectation values of the modified operators in the Minkowski vacuum state $|0_{\cal{M}}\rangle$ become:
\begin{eqnarray}\label{VEV of Modified Minkowski}
\big\langle 0_{\cal{M}} \big| \hat{A}_{k_{y},\Omega}\hat{A}^{\dagger}_{k'_{y},\Omega'} \big| 0_{\cal{M}} \big\rangle & = & \big\langle 0_{\cal{M}} \big| \hat{B}_{k_{y},\Omega}\hat{B}^{\dagger}_{k'_{y},\Omega'} \big| 0_{\cal{M}} \big\rangle \nonumber \\
& = & \delta(k_{y}-k'_{y})\delta(\Omega-\Omega'), \nonumber \\
\big\langle 0_{\cal{M}} \big| \hat{A}^{\dagger}_{k_{y},\Omega}\hat{A}_{k'_{y},\Omega'} \big| 0_{\cal{M}} \big\rangle & = & \big\langle 0_{\cal{M}} \big| \hat{B}^{\dagger}_{k_{y},\Omega}\hat{B}_{k'_{y},\Omega'} \big| 0_{\cal{M}} \big\rangle = 0,~~~~~~~
\end{eqnarray}
where, to obtain the Dirac delta function in the energies, $\delta(\Omega-\Omega')$, in the above vacuum averages, the following identity was used {\cite{Takagi:1986kn}}:
\begin{eqnarray}\label{Integral-kx-Indentity}
\int_{-\infty}^{\infty} \frac{dk_{x}}{2\pi k} \bigg(\frac{k+k_{x}}{k-k_{x}}\bigg)^{i(\Omega-\Omega')/2} & = & \int_{-\infty}^{\infty} \frac{dy}{2\pi} e^{i(\Omega-\Omega')y} \nonumber \\
& = & \delta(\Omega-\Omega'),
\end{eqnarray}
where in the first equality we made the substitution $y=\frac{1}{2}\log\big(\frac{k+k_{x}}{k-k_{x}}\big)$.
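Equivalently, this substitution parameterizes the momenta by a rapidity-like variable: at fixed $k_{y}$, setting $k_{x}=|k_{y}|\sinh y$ gives $k=|k_{y}|\cosh y$, for which
\begin{equation}
\frac{k+k_{x}}{k-k_{x}} = e^{2y}, ~~~~~ dk_{x} = k\,dy,
\end{equation}
so that the measure $\frac{dk_{x}}{2\pi k}$ becomes the flat measure $\frac{dy}{2\pi}$.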
The advantage of (\ref{Bogoliubov Transformations Actual}) emerges when we evaluate the expectation values of Rindler operators in the Minkowski vacuum, where we only need the vacuum averages of the modified Minkowski operators, simplifying our calculations. Interestingly, when we compute expectation values of the Rindler operators with respect to the Minkowski vacuum, we find that such averages involve an emergent Fermi distribution:
\begin{eqnarray}\label{Vacuum Averages of Rindler Operators}
\langle0_{\cal{M}}|\hat{c}^{>\dagger}_{k_{y},\Omega}\hat{c}^{>}_{k'_{y},\Omega'}|0_{\cal{M}}\rangle & = & \langle0_{\cal{M}}|\hat{d}^{>\dagger}_{k_{y},\Omega }
\hat{d}^{>}_{k'_{y},\Omega'}|0_{\cal{M}}\rangle \nonumber \\
& = & n_{\text{F}}(2\pi\Omega)\delta(k_{y}-k'_{y})\delta(\Omega-\Omega'),~~~~~~~
\end{eqnarray}
which arises solely due to strain in the material, rather than from any real heat bath. This implies that although the occupancy of Rindler electrons and holes in the Rindler vacuum is zero, in the Minkowski vacuum state it is proportional to the Fermi function. Thus, surprisingly, the spontaneous particle creation here has a spectrum that is \emph{thermal} in nature. This is known as the Fulling-Davies-Unruh effect
which, in the conventional setting, says that an accelerating observer views the Minkowski spacetime as a thermal bath of particles at the Unruh temperature $T_{\text{U}}=\frac{\hbar a}{2\pi k_{\text{B}} c}$. Within the present analogue setup, in which the accelerating observer is
replaced by a sudden switch-on of a spatially inhomogeneous strain, the analogue Unruh temperature is given by $T_{\text{U}}=\frac{\hbar\omega_{c}}{2\pi k_{\text{B}}}=\frac{\hbar v_{0}}{2\pi k_{\text{B}}\lambda}$.
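The first line of (\ref{Vacuum Averages of Rindler Operators}) follows in one line from (\ref{Bog1}): the cross terms between $\hat{A}$ and $\hat{B}$ have vanishing vacuum average, so (\ref{VEV of Modified Minkowski}) leaves only
\begin{eqnarray}
\langle0_{\cal{M}}|\hat{c}^{>\dagger}_{k_{y},\Omega}\hat{c}^{>}_{k'_{y},\Omega'}|0_{\cal{M}}\rangle & = & \sqrt{n_{\text{F}}(2\pi\Omega)n_{\text{F}}(2\pi\Omega')} \nonumber \\
& \times & \langle0_{\cal{M}}|\hat{B}_{-k_{y},\Omega}\hat{B}^{\dagger}_{-k'_{y},\Omega'}|0_{\cal{M}}\rangle \nonumber \\
& = & n_{\text{F}}(2\pi\Omega)\delta(k_{y}-k'_{y})\delta(\Omega-\Omega').~~~~~~~
\end{eqnarray}
To get a rough sense of scale, taking the typical graphene Fermi velocity $v_{0}\approx10^{6}~\text{m/s}$ and, purely for illustration, a strain scale $\lambda\approx1~\mu\text{m}$ gives $T_{\text{U}}=\frac{\hbar v_{0}}{2\pi k_{\text{B}}\lambda}\approx1.2~\text{K}$; shorter strain scales raise this analogue temperature proportionally.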
To see how this thermality arises in a concrete way, we rewrite Eqs.~(\ref{Bogoliubov Transformations Actual}) for electrons on the right side of graphene $(x>0)$ and holes on the left side $(x<0)$:
\begin{subequations}
\label{Bogoliubov Transformations ><}
\begin{eqnarray}
\label{Bog >}
\hat{c}^{>}_{k_{y},\Omega} & = &
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{A}_{k_{y},\Omega} -i
\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{B}^{\dagger}_{-k_{y},\Omega},~~~~~~~ \\ \label{Bog <} \hat{d}^{<\dagger}_{-k_{y},\Omega} & = &
-i\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{A}_{k_{y},\Omega} +
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{B}^{\dagger}_{-k_{y},\Omega}.~~~~~~~
\end{eqnarray}
\end{subequations}
where we made use of the symmetry properties of the Bogoliubov coefficients in (\ref{BogoliubovCoeffs}), and we chose to evaluate the hole operator in the $x<0$ region with inverted momentum $-k_{y}$ relative to the electrons. These can be inverted to write the modified operators in terms of Rindler operators:
\begin{subequations}
\label{Inverse Bogoliubov Transformations ><}
\begin{eqnarray}
\label{Inverse Bogoliubov A}
\hat{A}_{k_{y},\Omega} & = &
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{c}^{>}_{k_{y},\Omega} + i
\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{d}^{<\dagger}_{-k_{y},\Omega},~~~~~~~ \\ \label{Inverse Bogoliubov B} \hat{B}^{\dagger}_{-k_{y},\Omega} & = &
i\sqrt{n_{\text{F}}(2\pi\Omega)}\hat{c}^{>}_{k_{y},\Omega} +
\sqrt{n_{\text{F}}(-2\pi\Omega)}\hat{d}^{<\dagger}_{-k_{y},\Omega}.~~~~~~~
\end{eqnarray}
\end{subequations}
Equation (\ref{Vacuum Averages of Rindler Operators}) suggests that what we see as the vacuum of a flat graphene sheet may appear as a state filled with strained (Rindler) particles. Thus we can express the Minkowski vacuum in terms of Rindler excited states in the following way~\cite{Alsing:2006cj,Leon:2009uod}:
\begin{eqnarray}
\label{Ansatz Minkwoski in terms of Rindler}
&&|0_{\cal{M}}\rangle = \prod_{k_{y},\Omega} |0_{k_y,\Omega}\rangle_{\cal{M}}
\\
&&
|0_{k_y,\Omega}\rangle_{\cal{M}}
= \sum_{m,n=0}^{1} A_{mn} |m^{>}_{k_{y},\Omega}\rangle_{{\cal{R}}} ~|n^{<}_{-k_{y},\Omega}\rangle_{{\cal{R}}},
\end{eqnarray}
which expresses the Minkowski vacuum state in terms of Rindler states with $m$ electrons on the right side and $n$ holes on the left side. Note that the sum has only two entries because of the Pauli principle for fermions: the electron annihilation operator acting on a state with no electrons, as well as the corresponding electron creation operator acting on a state with one electron, yields zero (the same holds for holes), i.e. $\hat{c}|0_{\cal{R}}\rangle=\hat{c}^{\dagger}|1_{\cal{R}}\rangle=0$. Dropping the quantum labels $k_{y}$ and $\Omega$ and the subscript $\cal{R}$, and applying the modified Minkowski electron annihilation operator $\hat{A}_{k_{y},\Omega}$, which annihilates the Minkowski vacuum according to (\ref{Modified Minkwoski Annihilate Miknkoski vacuum}), to the state in Eq.~(\ref{Ansatz Minkwoski in terms of Rindler}), we get {\cite{Alsing:2006cj,Leon:2009uod}}:
\begin{eqnarray}
0 &=& \hat{A}
|0_{k_y,\Omega}\rangle_{\cal{M}}
\\
&=& \big[n^{\frac{1}{2}}_{\text{F}}(-2\pi\Omega)A_{11} + in^{\frac{1}{2}}_{\text{F}}(2\pi\Omega)A_{00}\big]|0^{>}\rangle |1^{<}\rangle \nonumber \\
& + & n^{\frac{1}{2}}_{\text{F}}(-2\pi\Omega)A_{10}|0^{>}\rangle |0^{<}\rangle + in^{\frac{1}{2}}_{\text{F}}(2\pi\Omega)A_{10}|1^{>}\rangle |1^{<}\rangle. \nonumber
\end{eqnarray}
Demanding that the right-hand side vanishes for each Rindler Fock state yields $A_{10}=0$ and $A_{11}=-iA_{00}e^{-\pi\Omega}$ (the condition $A_{01}=0$ follows analogously from $\hat{B}_{-k_{y},\Omega}|0_{\cal{M}}\rangle=0$). Also, normalizing the ansatz in (\ref{Ansatz Minkwoski in terms of Rindler}) yields $|A_{00}|^{2}+|A_{11}|^{2}=1$. Combining these results, we get $A_{00}=n_{\text{F}}^{\frac{1}{2}}(-2\pi\Omega)$ and $A_{11}=-in_{\text{F}}^{\frac{1}{2}}(2\pi\Omega)$, and therefore the flat graphene vacuum state can be expressed as a two-mode squeezed state of Rindler-strained fermions:
\begin{eqnarray}
&& |0_{\cal{M}}\rangle = \prod_{k_{y},\Omega} n_{\text{F}}^{\frac{1}{2}}(-2\pi\Omega) \nonumber \\
& \hspace{-0.2cm} \times & \Big[|0^{>}_{k_{y},\Omega}\rangle_{\cal{R}}~ |0^{<}_{-k_{y},\Omega}\rangle_{\cal{R}} -ie^{-\pi\Omega}|1^{>}_{k_{y},\Omega}\rangle_{\cal{R}}~|1^{<}_{-k_{y},\Omega} \rangle_{\cal{R}}\Big],~~~~~~~
\end{eqnarray}
similar to the Bardeen-Cooper-Schrieffer (BCS) state \cite{BCS Short,BCS Long} for electrons that form Cooper pairs \cite{Cooper} inside a superconductor or superfluid. From this, a density matrix $\hat{\rho}=|0_{\cal{M}}\rangle\langle0_{\cal{M}}|$ can be constructed representing the pure state of the flat graphene sheet; tracing over the left-side ($x<0$) Rindler particle states, we obtain a reduced density matrix written in terms of the Rindler Hamiltonian restricted to the right-side ($x>0$) modes:
\begin{equation}\label{Density Matrix Reduced Gibbs Form}
\hat{\rho}^{>} = \frac{e^{-2\pi\hat{H}^{>}}}{\text{Tr}~e^{-2\pi\hat{H}^{>}}}
\end{equation}
where the normal-ordered Hamiltonian restricted to the right side should be understood as a sum over modes, $\hat{H}^{>}=\sum_{k_{y},\Omega}\Omega\{\hat{c}^{>\dagger}_{k_{y},\Omega}\hat{c}^{>}_{k_{y},\Omega} + \hat{d}^{>\dagger}_{k_{y},\Omega}\hat{d}^{>}_{k_{y},\Omega}\}$. This density matrix is clearly of the Gibbs thermal ensemble form.
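To see how this arises explicitly, consider a single $(k_{y},\Omega)$ electron-hole sector of the squeezed state above and trace out the left-side occupation:
\begin{eqnarray}
\hat{\rho}^{>}_{k_{y},\Omega} & = & n_{\text{F}}(-2\pi\Omega)\Big[|0^{>}\rangle\langle0^{>}| + e^{-2\pi\Omega}|1^{>}\rangle\langle1^{>}|\Big] \nonumber \\
& = & \frac{e^{-2\pi\Omega\,\hat{c}^{>\dagger}_{k_{y},\Omega}\hat{c}^{>}_{k_{y},\Omega}}}{1+e^{-2\pi\Omega}},
\end{eqnarray}
where we used $n_{\text{F}}(-2\pi\Omega)=(1+e^{-2\pi\Omega})^{-1}$; taking the product over modes, and treating the hole sectors in the same way, reproduces Eq.~(\ref{Density Matrix Reduced Gibbs Form}).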
In the case of the conventional Unruh effect with an accelerating observer, the Rindler horizon, which bars any communication between the two wedges, makes tracing the density matrix over the hidden wedge natural. In the present setting
of a strained honeycomb lattice, the low-energy and long-wavelength
modes see the point $x=0$ as
an analogue horizon and thus leakage of such modes between the two sides is either zero or minuscule. Hence, even though the global state of the honeycomb system might be a pure state, when we make measurements on one side of the sheet, the degrees of freedom on the other side are not available to us and hence get \emph{naturally} traced out from the density matrix giving us a reduced mixed thermal state as in Eq.~{(\ref{Density Matrix Reduced Gibbs Form})} {\cite{Fabbri:2005mw,Takagi:1986kn,DeWitt:1979,Crispino:2007eb}}. This is known as the thermalization theorem which says that the presence of horizons in a spacetime is sufficient for thermality to emerge. It is intimately connected to the Kubo-Martin-Schwinger (KMS) condition {\cite{Kubo:1957,Martin & Schwinger:1959}} and the principle of detailed balance which we shall discuss in the next section.
Thus any strain pattern that realizes an analogue spacetime with a natural horizon, such as a black hole, de Sitter, or Rindler spacetime, can lead to the appearance of such thermal effects.
So far we have discussed how a Rindler Hamiltonian
{(\ref{Rindler Hamiltonian})} forms from assuming a
linear-in-position Fermi velocity $v(x)=v_{0}\frac{|x|}{\lambda}$, how
this leads to the Bogoliubov transformations
{(\ref{Bogoliubov Transformations Actual})} between the
strained (Rindler) and flat (modified Minkowski) honeycomb operators,
giving rise to the vacuum averages in
{(\ref{Vacuum Averages of Rindler Operators})} that behave as thermal averages over an
ensemble represented by the density matrix
{(\ref{Density Matrix Reduced Gibbs Form})}. These results are collectively termed the Unruh effect, which emerges here due to the presence of a natural demarcation in the material. Before we discuss the implications of this spontaneous electron-hole formation for observables like the electronic conductivity and internal energy, in the next section we present the Green's functions pertaining to the strained graphene system, in order to discuss in what sense the Unruh effect is a genuine thermal phenomenon. We will also discuss how the dimensionality of graphene leads to the violation of Huygens' principle and to an inversion of statistics, which could possibly be seen in photo-emission experiments.
\section{Green's Functions}\label{SEC:six}
In this section, we describe properties of single-particle Green's functions
that will help us explain how thermal behavior emerges, how Huygens' principle is violated, and how this leads to the phenomenon of apparent statistics inversion in the excitation spectrum of fermions. Towards the end of this section, we discuss how these properties can be detected in experiments like photoemission spectroscopy (PES) and scanning tunneling microscopy (STM).
Following Ooguri {\cite{Ooguri:1985nv}}, we introduce two fundamental single particle Green's functions defined with respect to the flat graphene vacuum state $|0_{\cal{M}}\rangle$:
\begin{subequations}
\begin{eqnarray}\label{G+}
G_{+}(\boldsymbol{r},t;\boldsymbol{r}',t') & = & \langle 0_{\cal{M}}|\hat{\psi}_{\text{R}}(x,y,t)\hat{\psi}^{\dagger}_{\text{R}}(x',y',t')|0_{\cal{M}}\rangle,~~~~~~~ \\ \label{G-}
G_{-}(\boldsymbol{r},t;\boldsymbol{r}',t') & = & \langle 0_{\cal{M}}|\hat{\psi}^{\dagger}_{\text{R}}(x,y,t)\hat{\psi}_{\text{R}}(x',y',t')|0_{\cal{M}}\rangle.~~~~~~~
\end{eqnarray}
\end{subequations}
Here $G_{+}$ creates a particle at location $\boldsymbol{r}'=(x',y')$ and time $t'$, and then annihilates it at another location $\boldsymbol{r}=(x,y)$ and time $t$, whereas $G_{-}$ does the opposite. In the condensed-matter context, these are called the $>$ and $<$ Green's functions,
respectively~\cite{Coleman} (up to factors of $i$), and their physical interpretation will become clear when we discuss their Fourier transforms below.
Interestingly, despite the intrinsically nonequilibrium nature of this setup, i.e. a sudden switch-on of the system strain that changes the system Hamiltonian from the Dirac to the Rindler Hamiltonian, these Green's functions have simple forms, at least in the local real-space limit. To see this, we set the positions equal, i.e. $x'=x$ and $y'=y$. Making use of the mode expansion {(\ref{ModeExpansionRightNode})} for the right-node fields and the vacuum averages {(\ref{Vacuum Averages of Rindler Operators})} of the Rindler ladder operators with respect to the Minkowski vacuum, and taking the spinor trace, we find:
\begin{eqnarray}\label{G+ special}
&&\text{Tr}~G_{+}(x,y,\Delta t) = \frac{1}{2\pi x^{2}}\int_{0}^{\infty} d\Omega~\Omega\coth\pi\Omega \nonumber \\
& \times & \Big[e^{i\Omega\Delta t}n_{\text{F}}(2\pi\Omega) + e^{-i\Omega\Delta t}n_{\text{F}}(-2\pi\Omega)\Big],~~~~~~ \\ \label{G- special}
&&\text{Tr}~G_{-}(x,y,\Delta t) = \frac{1}{2\pi x^{2}}\int_{0}^{\infty} d\Omega \Omega\coth\pi\Omega \nonumber \\
& \times & \Big[e^{i\Omega\Delta t}n_{\text{F}}(-2\pi\Omega) + e^{-i\Omega\Delta t}n_{\text{F}}(2\pi\Omega)\Big],~~~~~~
\end{eqnarray}
where $\Delta t=(t-t')$.
If we define a typical timescale associated with the Unruh temperature, $\hbar/(k_{\rm B} T_{\rm U})$ (equal to $2\pi$ in our
units) then it can be shown that the above Green's functions are periodic in imaginary shifts by this timescale:
\begin{equation}\label{KMS Condition}
\text{Tr}~G_{+}(x,y,\Delta t-2\pi i) = \text{Tr}~G_{-}(x,y,\Delta t).
\end{equation}
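This periodicity can be read off directly from (\ref{G+ special}) and (\ref{G- special}): the shift $\Delta t\rightarrow\Delta t-2\pi i$ sends $e^{\pm i\Omega\Delta t}\rightarrow e^{\pm i\Omega\Delta t}e^{\pm2\pi\Omega}$, and the elementary identity
\begin{equation}
e^{\pm2\pi\Omega}\,n_{\text{F}}(\pm2\pi\Omega) = n_{\text{F}}(\mp2\pi\Omega)
\end{equation}
then maps the integrand of $\text{Tr}~G_{+}$ onto that of $\text{Tr}~G_{-}$ term by term.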
This is known as the Kubo-Martin-Schwinger (KMS) condition {\cite{Kubo:1957,Martin & Schwinger:1959}}, which in the conventional equilibrium case guarantees that the thermal average of any two operators $\hat{A}$ and $\hat{B}$, for a system kept in contact with a heat bath at inverse temperature $\beta=(k_{\text{B}}T)^{-1}$, is periodic in imaginary time, i.e. $\langle\hat{A}(t)\hat{B}(t')\rangle=\langle\hat{B}(t')\hat{A}(t+i\beta)\rangle$. For example, if we take the operators $\hat{A}$ and $\hat{B}$ to be the graphene right-node field operators, then we get the following KMS condition for the Green's functions in (\ref{G+}) and (\ref{G-}):
\begin{equation}\label{KMS General}
G_{+}(\boldsymbol{r},\boldsymbol{r}',\Delta t-2\pi i) = G_{-}(\boldsymbol{r},\boldsymbol{r}',\Delta t).
\end{equation}
Note that here we have assumed that the Green's functions depend solely on the time difference $\Delta t$ because the system exhibits time translation invariance when it is in thermal equilibrium.
In an isolated strained graphene sheet, this condition implies that the vacuum (pure state) average of field operators behaves as a legitimate thermal (mixed state) average with respect to the reduced density operator (\ref{Density Matrix Reduced Gibbs Form}) (that can be thought of as an evolution operator \cite{Coleman,Matsubara:1955ws}), as if it is kept in contact with a real heat bath set at the Unruh temperature, i.e. $T=T_{\text{U}}$.
To further understand the meaning of the KMS condition, we take the Fourier transforms of the above Green's functions {(\ref{G+ special})} and {(\ref{G- special})} defined as
\begin{equation}
F_{\pm}(x,\omega)=\int_{-\infty}^{\infty}d(\Delta t)~e^{-i\omega\Delta t}\text{Tr}~ G_{\pm}(x,y,\Delta t),
\end{equation}
from which we obtain:
\begin{eqnarray}\label{Power Spectrum}
F_{+}(x,\omega) & = & \frac{\omega}{x^{2}}n_{\text{B}}(2\pi\omega), \\
F_{-}(x,\omega) & = & -\frac{\omega}{x^{2}}n_{\text{B}}(-2\pi\omega).
\end{eqnarray}
As discussed by Coleman~\cite{Coleman}, $F_{-}(x,\omega)$ is the photo-emission spectrum, measuring the occupied states from which electrons can be extracted when graphene is illuminated by light, while $F_{+}(x,\omega)$ is the corresponding electron-addition (inverse photo-emission) spectrum; in the language of detailed balance below, $F_{+}$ and $F_{-}$ play the roles of excitation and de-excitation spectra, respectively. The ratio of these two power spectra turns out to be:
\begin{eqnarray}
\label{eq:ratioPS}
\frac{F_{+}(x,\omega)}{F_{-}(x,\omega)} = e^{-2\pi\omega}
\end{eqnarray}
which says that the ratio of excitation to de-excitation rates is of the Boltzmann form, with the strain scale setting the temperature: in our dimensionless units $T=1/2\pi$, i.e. $k_{\text{B}}T_{\text{U}}=\hbar\omega_{c}/2\pi$. This is the principle of detailed balance, which originates from Boltzmann's principle of microscopic reversibility {\cite{Boltzmann,Tolman}}, but was first applied to quantum systems by Einstein {\cite{Einstein}}, who thereby predicted the phenomenon of stimulated emission. He studied the setup where atoms with two energy levels $E_{1}<E_{2}$ are in thermal contact with a bath of photons, such that once equilibrium sets in, the ratio of the number of atoms in the excited state $|E_{2}\rangle$ versus $|E_{1}\rangle$ is $e^{-\beta(E_{2}-E_{1})}$. Then, by demanding that the excitation rate match the (spontaneous plus stimulated) de-excitation rate at equilibrium, the number distribution of photons is given by a Planck distribution $\rho(\omega)=(\exp(\beta\omega)-1)^{-1}$, where $\omega=(E_{2}-E_{1})$ is the energy of the photon wave-packet absorbed by the two-level atoms. Such two-level systems are termed Unruh-DeWitt detectors in the relativistic context~\cite{Unruh:1976db,DeWitt:1979}. Thus the Fourier transform of the KMS condition, i.e. the principle of detailed balance, tells us that accelerated fermionic fields have a Fermi-Dirac spectrum, and when they are in contact with a two-level (or multi-level) atom or detector, the latter comes into global thermal equilibrium with the field at the Unruh temperature, defined everywhere on the real or analogue spacetime.
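In modern notation, Einstein's balance argument is a two-line computation. Writing $N_{1,2}$ for the number of atoms in the two levels and assuming, as is standard for nondegenerate levels, equal coefficients $B$ for absorption and stimulated emission, equilibrium requires
\begin{equation}
N_{1}B\rho(\omega) = N_{2}\big[A+B\rho(\omega)\big] ~\Rightarrow~ \rho(\omega) = \frac{A/B}{e^{\beta\omega}-1},
\end{equation}
where we used $N_{2}/N_{1}=e^{-\beta\omega}$ and set $\hbar=1$; this reproduces the Planck form quoted above, up to the standard prefactor $A/B$.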
The discussion above can be summarized by stating the thermalization theorem; for a comprehensive account of its various versions, see {\cite{Takagi:1986kn}}. It states that if the spacetime (or the analogue system) has a causal horizon (like Minkowski spacetime in Rindler coordinates), then any quantum field on that spacetime will spontaneously emit particles with a thermal distribution, characterized by a Bose or Fermi function, which is captured by the reduced density matrix {(\ref{Density Matrix Reduced Gibbs Form})} in Gibbs ensemble form. Once this density operator is obtained, the KMS condition {(\ref{KMS Condition})}, or more generally Eq.~(\ref{KMS General}), guarantees that the field will also thermalize any other system with energy levels in contact with it (like an atom or a detector), thus establishing a global thermal equilibrium at temperature $T=1/2\pi$ in our dimensionless units.
\begin{figure}[h!]
\includegraphics[width=1.0\columnwidth]{Fig2.pdf}
\caption{(Color Online) The orange solid curve
is a plot of the power spectrum
$F_{-}(\omega)$, that can be measured in
photo-emission spectroscopy (PES) experiments.
%
The black dot-dashed curve is a plot of $F_{A}(\omega)$ that
can be measured in scanning-tunneling microscopy (STM) experiments that measure the density of states.
Their ratio (in green), yields the expected Fermi-Dirac spectrum in
accordance with the Unruh effect predictions.
}
\label{ARPES & STM}
\end{figure}
To discuss Huygens' principle and how its violation leads to statistics inversion, we now consider two other fundamental Green's functions pertaining to the commutator and the anti-commutator of fermionic fields, that are similar to the Green's functions defined in {(\ref{G+ special})} and {(\ref{G- special})}. The former is related to the Keldysh Green's function \cite{RammerSmith1986} and the
latter is related to the retarded Green's function that takes causality into account:
\begin{subequations}
\begin{eqnarray}\label{GC}
\hspace{-0.2cm} G_{C}(\boldsymbol{r},t;\boldsymbol{r}',t') & = & \langle 0_{\cal{M}}|\Big[\hat{\psi}_{\text{R}}(x,y,t),\hat{\psi}^{\dagger}_{\text{R}}(x',y',t')\Big]|0_{\cal{M}}\rangle,~~~~~~~ \\ \label{GA}
\hspace{-0.2cm} G_{A}(\boldsymbol{r},t;\boldsymbol{r}',t') & = & \langle 0_{\cal{M}}|\Big\{\hat{\psi}^{\dagger}_{\text{R}}(x,y,t),\hat{\psi}_{\text{R}}(x',y',t')\Big\}|0_{\cal{M}}\rangle.~~~~~~~
\end{eqnarray}
\end{subequations}
After setting $x= x'$ and $y=y'$, computing these Green's functions, and taking the trace, we get:
\begin{eqnarray}
\label{G-C}
\text{Tr}~G_{C}(x,y,\Delta t) & = & -\frac{i}{\pi x^{2}} \int_{0}^{\infty} d\Omega~\Omega\sin(\Omega\Delta t),
\\
\label{G-A}
\text{Tr}~G_{A}(x,y,\Delta t) & = & \frac{1}{\pi x^{2}} \int_{0}^{\infty} d\Omega~\Omega\coth(\pi\Omega)\cos(\Omega\Delta t).~~~~~~~
\end{eqnarray}
Conventionally, Huygens' principle states that, for a source in even spacetime dimensions, the wave-fronts can be constructed by drawing circles (appropriate to the dimensions) with the source at the center {\cite{Takagi:1986kn}}. This means that the retarded Green's function describing the propagation of waves to any point $(x,y,t)$ from a source at $(x',y',t')$ has support only on the light cone, i.e. it vanishes when $(x',y',t')$ and $(x,y,t)$ are either timelike or spacelike separated. This implies that the retarded Green's function in even spacetime dimensions is proportional to a Dirac delta function and its derivatives.
However, strained graphene mimics an odd-dimensional spacetime, where we find that the anticommutator in Eq.~(\ref{G-A}) is not a Dirac delta function. This is a manifestation of the well-known violation of Huygens' principle {\cite{Takagi:1986kn,Courant-Hilbert}}: in odd spacetime dimensions our intuition for wave propagation breaks down, i.e. a sharp source does not lead to a single sharp wavefront; instead the observer notices a continuously decaying tail.
Curiously, although the anticommutator Green's function violates Huygens' principle, from Eq.~(\ref{G-C}) we see that the commutator Green's function $G_C$ amounts to $\frac{i}{x^{2}}\delta'(\Delta t)$ (using $\int_{0}^{\infty}d\Omega~\Omega\sin(\Omega\Delta t)=-\pi\delta'(\Delta t)$), i.e. it has support only on the light cone. As a result, the Fourier transform of $G_{C}$ is a simple polynomial in $\omega$, whereas for $G_{A}$ we find the following:
\begin{eqnarray}
\label{Commutator Polynomial}
F_{C}(x,\omega) & = & -\frac{\omega}{x^{2}},
\\
\label{Density of States}
F_{A}(x,\omega) & = & \frac{\omega}{x^{2}}\coth\pi\omega.
\end{eqnarray}
To see the connection between this violation of Huygens' principle and statistics inversion, we need the fluctuation-dissipation theorem. Writing {(\ref{G-A})} and {(\ref{G-C})} in terms of {(\ref{G+})} and {(\ref{G-})}, i.e. $G_{A}=G_{+}+G_{-}$ and $G_{C}=G_{+}-G_{-}$, Fourier transforming, and finally applying the KMS condition (the principle of detailed balance, $F_{-}=e^{2\pi\omega}F_{+}$) yields two different but equivalent versions of the theorem:
\begin{eqnarray}\label{Fluctuation-Dissipation Version 1}
F_{+}(x,\omega) & = & n_{\text{F}}(2\pi\omega)\, F_{A}(x,\omega), \\ \label{Fluctuation-Dissipation Version 2}
& = & -n_{\text{B}}(2\pi\omega)\, F_{C}(x,\omega).
\end{eqnarray}
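Both versions are consistent with (\ref{Power Spectrum}): using $F_{C}=-\frac{\omega}{x^{2}}$ from (\ref{Commutator Polynomial}), the second version gives $F_{+}=\frac{\omega}{x^{2}}n_{\text{B}}(2\pi\omega)$ directly, while the first reproduces the same result through the identity $\coth(\pi\omega)\,n_{\text{F}}(2\pi\omega)=n_{\text{B}}(2\pi\omega)$ noted earlier.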
The excitation or power spectrum $F_{+}(x,\omega)$ is related to the rate at which an accelerated detector senses Rindler particles, and shows inversion of statistics depending on the dimension of the spacetime~\cite{Takagi:1986kn,Unruh:1986tc,Ooguri:1985nv,Terashima:1999xp,Sriramkumar:2002dn,Sriramkumar:2002nt,Pascazio_Huygens,Arrechea:2021szl}.
Following Ooguri {\cite{Ooguri:1985nv}}, there are two interpretations of this. The first makes use of {(\ref{Fluctuation-Dissipation Version 1})}, which says that the excitation spectrum is basically the Fermi-Dirac function, coming from the real statistics of the fermions, multiplied by the spectral density of states, coming from the Fourier transform of the anticommutator, which we know violates Huygens' principle and is thus not simply a polynomial in $\omega$. This, coupled with the particular form of the mode functions in {(\ref{ModeExpansionRightNode})}, gives a hyperbolic cotangent that coincidentally inverts the Fermi function into a Bose function. The other interpretation comes from {(\ref{Fluctuation-Dissipation Version 2})}: since graphene realizes an odd-dimensional spacetime, we expect the Fourier transform of the commutator to be polynomial in $\omega$ (see {(\ref{Commutator Polynomial})}), so the excitation spectrum should simply be a Bose-Einstein distribution multiplied by a polynomial in $\omega$, removing the need to invoke any inversion.
To see how these power spectra could manifest themselves in experiments, we focus on the first version (\ref{Fluctuation-Dissipation Version 1}) of the fluctuation-dissipation theorem, but written for $F_{-}$, i.e. $F_{-}(\omega)=n_{\text{F}}(-2\pi\omega)F_{A}(\omega)$. The experimenter would first obtain the photo-emission data from a photo-emission spectroscopy (PES) experiment {\cite{Rodriguez-Laguna:2016kri}}. At low energies and long wavelengths, this gives a plot of the fermion occupancy in graphene's lowest energy band, which in this limit should mimic the Planck-like form $F_{-}=-\frac{\omega}{x^{2}}n_{\text{B}}(-2\pi\omega)$. As can be seen from Fig.~\ref{ARPES & STM}, $F_{-}(\omega)$ increases with energy; this reflects the fact that the PES experiment measures the occupancy of valence-band electrons by extracting them with light, and if the intensity of the light is increased, more electrons residing in the lower valence energy levels will be detected. The experimenter can then obtain the local density of states by performing a scanning tunneling microscopy (STM) experiment {\cite{Iorio:2011yz,Iorio:2013ifa}}, which in the low-energy and long-wavelength limit (where our calculations are valid) will be given by the statistics-inversion factor $F_{A}=\frac{\omega}{x^{2}}\coth\pi\omega$, implying that Huygens' principle is violated in strained graphene. Now if we take the ratio of the PES and STM data, we find:
\begin{equation}
\frac{\text{PES data}}{\text{STM data}} = \frac{F_{-}(x,\omega)}{F_{A}(x,\omega)} = n_{\text{F}}(-2\pi\omega),
\end{equation}
which is the Fermi-Dirac distribution expected from the Unruh effect for fermions, as can be seen in Fig.~\ref{ARPES & STM}.
Equipped with the Bogoliubov transformations {(\ref{Bogoliubov Transformations Actual})} between the strained (Rindler) and flat (modified Minkowski) honeycomb operators, which lead to the vacuum averages in {(\ref{Vacuum Averages of Rindler Operators})} and the statistics inversion in Eqs.~{(\ref{Fluctuation-Dissipation Version 1})}-{(\ref{Fluctuation-Dissipation Version 2})}, we are now ready to discuss, in the next two sections, the implications of this spontaneous electron-hole formation for observables like the electronic conductivity and the total internal energy.
\section{Electronic Conductivity}
\label{SEC:seven}
In this section, we consider another observable that is sensitive to the Unruh effect in
strained graphene, the frequency-dependent conductivity. For this calculation, we shall require the
Bogoliubov transformations {(\ref{Bogoliubov Transformations Actual})}, derived
in Sec.~\ref{SEC:five}, that establish the relationship between the Rindler operators $\{\hat{c},\hat{d}\}$ in a strained honeycomb system with the modified Minkowski operators $\{\hat{A},\hat{B}\}$ in a flat (unstrained) honeycomb system. This led us to the expectation value {(\ref{Vacuum Averages of Rindler Operators})} of the Rindler operators with respect to the Minkowski vacuum.
To use these results, we will need the Kubo formula that
relates the frequency-dependent conductivity to an associated current-current correlation function. %
For generality, we will briefly recall the Kubo formula derivation for both setups
considered here, i.e., the case of
electronic graphene (in which the fermions are charged electrons) and the case of neutral cold atoms in an optical lattice.
For the electronic graphene case, we can start with the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})} minimally coupled to an electromagnetic vector potential $\boldsymbol{A}(\boldsymbol{r},t)$, i.e. we can make the replacement $-i\hbar\boldsymbol{\nabla}\rightarrow-i\hbar\boldsymbol{\nabla}-e\boldsymbol{A}$ in the derivative operators giving us the following new Hamiltonian {\cite{Mahan}}:
\begin{eqnarray}\label{Electromagnetic Coupling}
\hat{H}(t) & = & \hat{H}_{\text{R}} + \hat{H}^{1}(t), \nonumber \\
\hat{H}^{1}(t) & = & - \int d^{2}r~ \boldsymbol{\hat{j}}(\boldsymbol{r},t)\cdot \boldsymbol{A}(\boldsymbol{r},t),
\end{eqnarray}
where $\hat{H}_{\text{R}}$ is the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})}. Here, the conserved current operator in the strained (Rindler) system is:
\begin{equation}
\label{eq:rindlercurrent}
\boldsymbol{\hat{j}}(\boldsymbol{r},t) \equiv ev_{0}\frac{|x|}{\lambda} \hat{\psi}^{\dagger}_{\text{R}}(\boldsymbol{r},t)\boldsymbol{\sigma}\hat{\psi}_{\text{R}}(\boldsymbol{r},t).
\end{equation}
Within linear response theory, we can treat the vector potential term as a perturbation, and to linear order the response of the average current is given by:
\begin{eqnarray}\label{Kubo Formula}
&&\langle\hat{j}_{\mu}(\boldsymbol{r},t)\rangle = -\frac{i}{\hbar}\int_{-\infty}^{t}dt'~\big\langle\big[\hat{j}_{\mu}(\boldsymbol{r},t),\hat{H}^{1}(t')\big]\big\rangle, \\
& = & \frac{i}{\hbar}\int_{-\infty}^{t}dt'\int d^{2}r'~\big\langle\big[\hat{j}_{\mu}(\boldsymbol{r},t),\hat{j}_{\nu}(\boldsymbol{r}',t')\big]\big\rangle A_{\nu}(\boldsymbol{r}',t').\nonumber
\end{eqnarray}
The time-dependent vector potential can be written as
$A_{\nu}(\boldsymbol{r}',t')=\frac{1}{i\omega^{+}}E_{\nu}(\boldsymbol{k},\omega)e^{-i(\boldsymbol{k}\cdot\boldsymbol{r}'+\omega^{+}t')}$, where $\omega^{+}=\omega+i\delta$, with $\delta=0^{+}$. Here, $E_{\nu}(\boldsymbol{k},\omega)$ is the electric
field at wavevector $\boldsymbol{k}$ and frequency $\omega$. Upon plugging this into Eq.~(\ref{Kubo Formula}), multiplying
both sides by
$e^{-i\boldsymbol{q}\cdot\boldsymbol{r}}$ and integrating over $\boldsymbol{r}$ in the limit of $\boldsymbol{q}\rightarrow0$ (corresponding to spatial averaging), we obtain:
\begin{eqnarray}\label{Current q0}
\langle\hat{j}_{\mu}(\boldsymbol{q}\rightarrow0,t)\rangle & = & \frac{1}{\hbar\omega^{+}}\int_{-\infty}^{\infty}dt'\Theta(t-t')e^{-i\omega^{+}t'} \nonumber \\
& \times & \big\langle\big[\hat{j}_{\mu}(0,t),\hat{j}_{\nu}(0,t')\big]\big\rangle E_{\nu}(0,\omega).~~~
\end{eqnarray}
Noting that the time-dependent electric field $E_{\nu}(0,t) = E_{\nu}(0,\omega) e^{-i\omega^{+}t}$,
redefining the variable of integration to $T=(t-t')$ and taking the ratio of current and electric field, we find the average conductivity tensor $\sigma_{\mu\nu}=\frac{\langle\hat{j}_{\mu}(0,t)\rangle}{E_{\nu}(0,t)}$:
\begin{equation}\label{ConductivityTensor}
\sigma_{\mu\nu} = \frac{1}{\hbar\omega^{+}}\int_{-\infty}^{\infty}dT~\Theta(T)e^{i\omega^{+}T}\big\langle\big[\hat{j}_{\mu}(0,T),\hat{j}_{\nu}(0,0)\big]\big\rangle.
\end{equation}
The preceding derivation relies on the vector potential as an external stimulus. However, for a system that is not made of charged particles, such as neutral ultracold atomic gases, we must use a different approach. In this case,
a change in the local chemical potential creates a pressure difference and hence affects the density of fermions. Instead of Eq.~(\ref{Electromagnetic Coupling}),
the perturbing Hamiltonian involves a coupling of the atom density $\hat{n}(\boldsymbol{r},t)=\hat{\psi}^{\dagger}(\boldsymbol{r},t)\hat{\psi}(\boldsymbol{r},t)$ to a spatially and temporally varying chemical potential:
\begin{equation}
\hat{H}^{1}(t) = - \int d^{2}r~ \mu(\boldsymbol{r},t)\hat{n}(\boldsymbol{r},t).
\end{equation}
Plugging this into Eq.~(\ref{Kubo Formula}) with $\mu(\boldsymbol{r},t)=\mu(\boldsymbol{r})e^{-i\omega t}$,
integrating by parts in the $t'$ integral and also
integrating by parts in space using the equation of continuity $0 = \frac{\partial}{\partial t}\hat{n}(\boldsymbol{r},t)+\boldsymbol{\nabla}\cdot\hat{\boldsymbol{j}}(\boldsymbol{r},t)$,
we finally arrive at the Kubo formula
for neutral atoms, with the average atom current
related to the chemical potential gradient as
$\boldsymbol{j}=-\sigma\boldsymbol{\nabla}\mu$ where $\sigma$ is given by ({\ref{ConductivityTensor}}).
Thus, in either case we require the current-current correlation function, with the averages being
performed with respect to the Minkowski vacuum.
We start with the computation of $\sigma_{xx}$.
Instead of directly using Eq.~(\ref{ConductivityTensor}) that involves the spatially Fourier-transformed current correlator, we
start with the real-space current-current correlation function,
perform spatial averages (on $\boldsymbol{r}$ and $\boldsymbol{r}'$), and finally Fourier transform to frequency space. The average current correlation function at the right node has the following form:
\begin{eqnarray}\label{Current-Current xx-Correlation}
\hspace{-0.2cm} \bar{C}^{xx}(t-t') & = & \int d^{2}r\int d^{2}r' \langle 0_{\cal{M}}| \hat{j}^{x}(\boldsymbol{r},t) \hat{j}^{x}(\boldsymbol{r}',t') |0_{\cal{M}} \rangle,~~~~~~~
\end{eqnarray}
where we are evaluating the correlations only between fields on the right node. In what follows, we will set $e\rightarrow1$, $\hbar\rightarrow1$ and $\omega_{c}\rightarrow1$.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{Fig3.pdf}
\caption{(Color Online) A plot showing how the average dissipative conductivity (in units of $e^{2}/\hbar$) grows approximately linearly as a function of AC-frequency (in units of strain frequency $\omega_{c}=v_{0}/\lambda$). The longitudinal components $\sigma''_{xx}(\omega)$ (in red) and $\sigma''_{yy}(\omega)$ (in green) both vanish in the DC-limit $\omega\rightarrow0$.
}
\label{Average Conductivity}
\end{figure}
We performed the spatial integrals in (\ref{Current-Current xx-Correlation}) over the coordinates $\boldsymbol{r}$ and $\boldsymbol{r}'$ because the conductivity Eq.~(\ref{ConductivityTensor}) requires the current-current correlation in reciprocal space in the limit $\boldsymbol{q}\rightarrow0$.
Integration along $y$ and $y'$ yields Dirac delta functions in the wavevectors, $\delta(k_{y}-k'_{y})$,
after which the integration of the spinor products is performed over the $x$ and $x'$ directions using the following identity:
\begin{eqnarray}
&&\int_{0}^{\infty}dx~x\Big[K_{\frac{1}{2}+i\Omega}(x)K_{\frac{1}{2}-i\Omega'}(x) - K_{\frac{1}{2}-i\Omega}(x)K_{\frac{1}{2}+i\Omega'}(x)\Big] \nonumber \\
& = & \frac{i\pi^{2}(\Omega^{2}-\Omega'^{2})}{2[\sinh(\pi\Omega)+\sinh(\pi\Omega')]}.
\end{eqnarray}
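This identity can be checked numerically for arbitrary test values of $\Omega$ and $\Omega'$; a short Python sketch using the \texttt{mpmath} library, which supports modified Bessel functions of complex order:
\begin{verbatim}
import mpmath as mp

def lhs(O, Op):
    f = lambda x: x*(mp.besselk(0.5 + 1j*O, x)*mp.besselk(0.5 - 1j*Op, x)
                     - mp.besselk(0.5 - 1j*O, x)*mp.besselk(0.5 + 1j*Op, x))
    return mp.quad(f, [0, 1, mp.inf])

def rhs(O, Op):
    return (1j*mp.pi**2*(O**2 - Op**2)
            /(2*(mp.sinh(mp.pi*O) + mp.sinh(mp.pi*Op))))

print(lhs(0.7, 0.3), rhs(0.7, 0.3))   # the two outputs should agree
\end{verbatim}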
Thus the average current-current correlator as a function of time for the right-handed fermions reads:
\begin{eqnarray}\label{Correlator t-t'}
&&\bar{C}^{xx}(\Delta t) = \frac{1}{2}\int_{-\infty}^{\infty}d\Omega\int_{-\infty}^{\infty}d\Omega'~
\cosh\pi\Omega\cosh\pi\Omega' \nonumber \\ &&n_{\text{F}}(2\pi\Omega)n_{\text{F}}(-2\pi\Omega')e^{i(\Omega-\Omega')\Delta t}~ \frac{(\Omega^{2}-\Omega'^{2})^{2}}{[\sinh(\pi\Omega)+\sinh(\pi\Omega')]^{2}}, \nonumber \\
\end{eqnarray}
where $\Delta t=(t-t')$. We now subtract from this the current correlator with time coordinates interchanged, $t\leftrightarrow t'$, i.e. $\bar{C}^{xx}(t'-t)=\bar{C}^{xx}(-\Delta t)$, to obtain the vacuum average of the current-current commutator. Plugging this
into the expression for conductivity tensor {(\ref{ConductivityTensor})}, where we perform the Fourier transform of a retarded function in time using the Plemelj formula $\lim\limits_{\delta\rightarrow0^{+}}\frac{1}{x+i\delta}=\mathcal{P}\frac{1}{x}-i\pi\delta(x)$, and extracting the imaginary part, we finally obtain the $xx$-component of the dissipative average conductivity as follows:
\begin{eqnarray}\label{Conductivity-XX}
&&\bar{\sigma}''_{xx}(\omega) = \frac{\pi\omega}{2}\int_{-\infty}^{\infty} d\Omega~ \cosh\pi\Omega\cosh\pi(\Omega+\omega) \\
&& \hspace{-0.5cm}\times \frac{(2\Omega+\omega)^{2}}{(\sinh\pi\Omega+\sinh\pi(\Omega+\omega))^{2}} \big[n_{\text{F}}(2\pi\Omega)-n_{\text{F}}(2\pi(\Omega+\omega))\big],
\nonumber
\end{eqnarray}
where the double prime $''$ denotes the imaginary part of the conductivity, which leads to dissipation of the electronic current.
In this formula,
we have dropped dimensionful prefactors (such as $e^2/\hbar$,
the typical units of conductivity), and we have dropped an extensive factor
\begin{equation}
\mathcal{A}=\int_{0}^{\infty}\frac{dk_y}{2\pi}\frac{1}{k_y^{2}}
\int_{-\infty}^{\infty} dy = L_y \int_{0}^{\infty}\frac{dk_y}{2\pi}\frac{1}{k^{2}_y},
\end{equation}
with $L_y$ the size of the system in the $y$ direction. Properly handling
the remaining integral would require analyzing our problem in a finite
system along $x$, a task we leave for future work.
We have plotted Eq.~(\ref{Conductivity-XX}) in Fig.~\ref{Average Conductivity}, which shows that the conductivity grows approximately linearly with the probing frequency and vanishes in the DC-limit ($\omega\rightarrow0$). As we discussed in Sec.~\ref{SEC:two}, the Rindler Hamiltonian with Fermi velocity $v(x)\simeq v_{0}|x|/\lambda$ can be realized for modes with low energies and long wavelengths.
Hence, the results for the conductivity (and for the internal energy) of strained honeycomb lattices are valid if we choose to probe long-wavelength modes $k_{y}\lambda\ll1$. This is consistent because evaluating these observables requires spatial averages, equivalent to setting $\boldsymbol{q}\rightarrow0$ in $\sigma(\omega,\boldsymbol{q}\rightarrow0)$, as we discussed in Eq.~(\ref{Current q0}).
To understand the result in Eq.~(\ref{Conductivity-XX}), we revisit the electronic conductivity of flat graphene (per node and per spin) in the collisionless limit and at a finite environment temperature $T$, with $\beta=(k_{\text{B}}T)^{-1}$ \cite{Stauber:2008}:
\begin{eqnarray}\label{Flat Graphene Conductivity}
\bar{\sigma}''_{xx}(\omega) & = & \frac{1}{16}\big[n_{\text{F}}\Big(-\frac{\beta\omega}{2}\Big)-n_{\text{F}}\Big(\frac{\beta\omega}{2}\Big)\big] \\ \label{Flat Graphene Conductivity Alt}
& = & \frac{1}{16}\big[1-2n_{\text{F}}\Big(\frac{\beta\omega}{2}\Big)\big],
\end{eqnarray}
where the left-hand side is measured in units of $e^{2}/\hbar$. The right-hand side vanishes in the DC-limit $\omega\rightarrow0$. This happens because in this limit, only the energy levels close to the Dirac point participate in the electronic transitions induced by switching on the vector potential in (\ref{Electromagnetic Coupling}). However, here the electron occupancy in the conduction band, given by $n_{\text{F}}(\beta\omega/2)\sim0.5$, is equal to the electron occupancy in the valence band, given by $n_{\text{F}}(-\beta\omega/2)\sim0.5$. Thus the rates of excitation and de-excitation are equal, and the electrons near the Dirac point (DC-limit) do not contribute to the conductivity. On the other hand, as the probing frequency is increased, the electron occupancy in the valence band starts exceeding that of the conduction band, giving a net rate of excitation of electrons and hence a non-zero conductivity. In the opposite limit of $\omega\gg(\hbar\beta)^{-1}$, the high-energy modes are unaffected by the thermal scale: the electron occupancy here is approximately unity, de-excitation is minuscule, and the conductivity reaches its maximum value of $e^{2}/16\hbar$. The density of states in a two-dimensional material such as graphene is expected to be linear in energy; however, this is cancelled by the $1/\omega$ in the expression for the conductivity (\ref{ConductivityTensor}), and therefore only the Fermi functions are needed to physically understand the behavior of the conductivity.
Since the strained graphene system is effectively at an Unruh temperature $T_U$, by analogy with the preceding argument
we might also expect to find that $\sigma(\omega) \to 0$ for $\omega \to 0$, as we indeed find in Fig.~\ref{Average Conductivity}.
To derive the approximate linear behavior, we use the fact that the factor multiplying the Fermi functions in square brackets in
Eq.~(\ref{Conductivity-XX}) is sharply peaked at $\Omega=-\omega/2$. We are then allowed to make this replacement in
the square brackets, yielding $\big[n_{\text{F}}(-\pi\omega)-n_{\text{F}}(\pi\omega)\big]$, which can be pulled outside the integral.
Evaluating the remaining $\Omega$ integration over the peak region finally gives
\begin{equation}
\bar{\sigma}''_{xx}(\omega) \simeq \frac{\sqrt{3}}{2\pi^{3/2}} \omega \tanh \frac{\pi \omega}{2} ,
\end{equation}
which agrees with our numerical result in Fig.~\ref{Average Conductivity}.
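For readers wishing to reproduce this comparison, a minimal Python sketch (illustrative only) evaluates Eq.~(\ref{Conductivity-XX}) on a truncated grid $|\Omega|\leq20$, where the integrand is already exponentially small, handles the removable $0/0$ of the integrand at $\Omega=-\omega/2$ by its analytic limit, and prints the result next to the closed form above:
\begin{verbatim}
import numpy as np

nF = lambda x: 1.0/(np.exp(x) + 1.0)

def sigma_xx(w, L=20.0, n=200001):
    W = np.linspace(-L, L, n)
    num = (2.0*W + w)**2
    den = (np.sinh(np.pi*W) + np.sinh(np.pi*(W + w)))**2
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(den > 0.0, num/den,
                         1.0/(np.pi*np.cosh(np.pi*w/2))**2)  # limit at W=-w/2
    f = (np.cosh(np.pi*W)*np.cosh(np.pi*(W + w))*ratio
         *(nF(2*np.pi*W) - nF(2*np.pi*(W + w))))
    return 0.5*np.pi*w*np.trapz(f, W)

for w in (0.5, 1.0, 2.0):
    approx = np.sqrt(3)/(2*np.pi**1.5)*w*np.tanh(np.pi*w/2)
    print(w, sigma_xx(w), approx)
\end{verbatim}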
We can also interpret these results by focusing on a second form analogous to (\ref{Flat Graphene Conductivity Alt}) and noting that the conductivity is reduced by the presence of the emergent Fermi functions. This happens due to stimulated particle reduction \cite{Parker:1966,Parker:1971pt,Hu:1986jd,Hu:1986jj,Kandrup:1988sg}. The process of straining the honeycomb lattice creates fermions in the conduction band with a Fermi distribution $n_{\text{F}}(2\pi\Omega)$ characterized by the Unruh temperature (here $1/2\pi$), yielding a thermally excited state. To study the linear electronic response of this system, a vector potential stimulus is applied, which makes more electrons from the valence band jump to the conduction band. Pauli's exclusion principle does not allow the strain-created electrons to co-exist with the electronically excited ones, leading to an overall reduction of the response. Since particle creation is maximal at zero energy where the two bands meet (the Unruh-Fermi function is maximal at low energies), it is easiest for strains to create electrons at this zero-energy level, and hence the stimulated reduction is maximal at zero probing frequency, i.e. the DC-limit $\omega\rightarrow0$. In contrast, higher energies overpower the strains, making the Fermi functions small, and hence the maximum conductivity is attained.
Next we turn to the conductivity $\sigma_{yy}$ for directions perpendicular to
the strain fields, which, following the same procedure, leads to a similar result:
\begin{eqnarray}
\label{Conductivity-YY}
&&\bar{\sigma}''_{yy}(\omega) = \frac{\pi \omega}{2}\int_{-\infty}^{\infty} d\Omega~ \cosh\pi\Omega\cosh\pi(\Omega+\omega)
\\
&& \hspace{-0.5cm}\times \frac{(2\Omega+\omega)^{2}}{(\sinh\pi\Omega-\sinh\pi(\Omega+\omega))^{2}} \big[ n_{\text{F}}(2\pi\Omega)-n_{\text{F}}(2\pi(\Omega+\omega))\big],
\nonumber
\end{eqnarray}
the only difference being a minus sign in the denominator of one factor in the
integrand. In this case the factor multiplying the Fermi functions in square brackets is not a narrow peak at $-\omega/2$; nonetheless
the qualitative behavior is similar, as seen in Fig.~\ref{Average Conductivity}: just like the $xx$-component, the $yy$-component of the conductivity also grows approximately linearly with the probing frequency and vanishes in the DC-limit ($\omega\rightarrow0$). One key difference is that $\bar{\sigma}''_{yy}(\omega)$ is smaller in magnitude.
The reason is that in the $\hat{x}$-direction the atoms have been forced closer to each other by strains of the type ${(\ref{RindlerStrainPattern})}$, thereby increasing the Fermi velocity, so hopping becomes easier. In the $\hat{y}$-direction, by contrast, the strains do not depend on the coordinate $y$, so the atoms are further apart in this direction than in $\hat{x}$, and the hopping and the conductivity are correspondingly lower.
The transverse or off-diagonal components of the conductivity tensor are anti-symmetric, i.e. $\sigma_{xy}(\omega)=-\sigma_{yx}(\omega)$, which can be readily inferred from the commutator in Eq.~{(\ref{ConductivityTensor})}. This means that knowledge of one determines the other.
Performing calculations similar to the longitudinal case yields a vanishing transverse conductivity:
\begin{equation}\label{Conductivity-XY}
\sigma''_{xy}(\omega)=-\sigma''_{yx}(\omega)=0.
\end{equation}
This is expected because if $\sigma_{xy}\neq0$, then an electric field in the $x$-direction $E_{x}$ would be able to create a current in the $y$-direction. However, due to translation symmetry along $y$, there is no reason why $+\hat{y}$ would be favored over $-\hat{y}$, and thus this current is expected to vanish by symmetry. This symmetry is broken when there is a real magnetic field in the system.
In this section we showed how the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})} leads to a linear-in-frequency behavior of the longitudinal components of the electronic conductivity {(\ref{Conductivity-XX})}, {(\ref{Conductivity-YY})}, and that the transverse components {(\ref{Conductivity-XY})} simply vanish. These results for the average dissipative conductivity are summarized in Fig.~\ref{Average Conductivity}, where both longitudinal components scale linearly with frequency.
In the next section, we will take a look at the consequence of Rindler Hamiltonian on the internal energy of such honeycomb systems.
\section{Internal Energy}\label{SEC:eight}
As we saw in the previous section, spontaneous particle creation due to the assumed linear-in-position Fermi velocity has a profound effect on the electronic conductivity, which scales linearly in the probing frequency, as opposed to the flat honeycomb case where it takes a constant value at all frequencies. In this section, we look at how this Rindler-Unruh particle creation affects the response of honeycomb systems when brought in contact with a thermal heat bath, i.e. we will find the total electronic energy of the system, $U$, which can be calculated as the expectation value of the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})} with respect to a Minkowski thermal density matrix labeled by the inverse temperature $\beta=(k_{\text{B}}T)^{-1}$ as a subscript:
\begin{eqnarray}
U_{\text{M}} & = & \langle\hat{H}_{\text{R}}\rangle_{\beta,\cal{M}}, \nonumber \\
& = & i\hbar~\bigg\langle \int d^{2}x~\hat{\psi}_{\text{R}}^{\dagger}(x)\cdot \partial_{t}\hat{\psi}_{\text{R}}(x) \bigg\rangle_{\beta,\cal{M}},
\end{eqnarray}
where
to get the second line we made use of the Dirac equation {(\ref{Dirac Equation Rindler Graphene})} to simplify further calculations. Equivalently, this can also be calculated using the energy-momentum tensor operator, as discussed in Ref.~\cite{Takagi:1986kn}. However, the above Minkowski thermal average is divergent and thus requires normal ordering. This involves subtracting the Rindler thermal average (i.e. the limit of zero strain, $\lambda\rightarrow\infty$) of the Rindler Hamiltonian from the Minkowski average as follows:
\begin{equation}\label{Renormalized Internal Energy}
U = \langle\hat{H}_{\text{R}}\rangle_{\beta,\cal{M}} - \langle\hat{H}_{\text{R}}\rangle_{\beta,\cal{R}}.
\end{equation}
This renormalization is needed because the Hamiltonian is quadratic in the fields, $\sim\hat{\psi}^{2}(x)$ \cite{Birrell:1982ix,Fulling:1989nb,Wald:1995yp,Fabbri:2005mw,Mukhanov:2007zz,Parker and Toms}, so the expectation value has a genuine divergence: even smearing the field operators will not cure it, unlike the case of two-point functions, which are bi-distributions whose short-distance divergences can be cured by smearing.
To evaluate these expectation values, the physical picture we need is the following: the honeycomb lattice is initially in a thermal state (due to contact with a heat bath or the surroundings), and then strains are applied to it. The initial state of the flat honeycomb lattice is thus described by the eigenstates of the standard Dirac Hamiltonian {(\ref{Dirac Hamiltonian})}, whose excitations are described by the Minkowski operators $\{\hat{a}_{\boldsymbol{k}},\hat{b}_{\boldsymbol{k}}\}$ in Eq.~{(\ref{MinkowskiModeExpansionStandardRight1})}, labeled by the momentum vector $\boldsymbol{k}$. Since this system is kept in contact with a heat bath at inverse temperature $\beta=(k_{\text{B}}T)^{-1}$, the thermal averages of the Minkowski operators are given by the Fermi distributions:
\begin{equation}\label{Thermal Averages Minkwoski}
\langle\hat{a}_{\boldsymbol{k}}^{\dagger}\hat{a}_{\boldsymbol{k}}\rangle_{\beta,\cal{M}} = \langle\hat{b}_{\boldsymbol{k}}^{\dagger}\hat{b}_{\boldsymbol{k}}\rangle_{\beta,\cal{M}} = n_{\text{F}}(\beta\epsilon_{k}) \equiv \frac{1}{e^{\beta\epsilon_{k}}+1},
\end{equation}
as a function of the Minkowski energy dispersion relation $\epsilon_{k}=\hbar\omega_{k}=\hbar v_{0}|\boldsymbol{k}|$. When the strains are turned on, the system is described by the Rindler Hamiltonian {(\ref{Rindler Hamiltonian})}, whose excitations are governed by the Rindler operators $\{\hat{c}_{k_{y},\Omega},\hat{d}_{k_{y},\Omega}\}$, labeled by the independent pair of momentum $\hbar k_{y}$ and energy $\hbar \Omega$. We have seen in Sec.~\ref{SEC:four} that the Bogoliubov transformations {(\ref{Bogoliubov Transformations Actual})} express these Rindler operators in terms of the modified Minkowski operators $\{\hat{A}^{\pm}_{k_{y},\Omega},\hat{B}^{\pm}_{k_{y},\Omega}\}$, which are in turn complex linear combinations of the standard ones $\{\hat{a}_{\boldsymbol{k}},\hat{b}_{\boldsymbol{k}}\}$, as given in Eq.~{(\ref{Modified & Actual Minkwoski Operators})}. Making use of this transformation between operators, and of the thermal averages in Eq.~{(\ref{Thermal Averages Minkwoski})}, we obtain the thermal averages of the Rindler operators with respect to the Minkowski thermal state as follows:
\begin{eqnarray}
&&\big\langle \hat{c}^{\dagger}_{k_{y},\Omega}\hat{c}_{k'_{y},\Omega'} \big\rangle_{\beta,\cal{M}} = \big\langle \hat{d}^{\dagger}_{k_{y},\Omega}\hat{d}_{k'_{y},\Omega'} \big\rangle_{\beta,\cal{M}} \nonumber \\
& = & \delta(k_{y}-k'_{y})\Bigg[\delta(\Omega-\Omega')\sqrt{n_{\text{F}}(2\pi\Omega)}\sqrt{n_{\text{F}}(2\pi\Omega')} \nonumber \\
& + & \bigg\{\sqrt{n_{\text{F}}(-2\pi\Omega)}\sqrt{n_{\text{F}}(-2\pi\Omega')} \nonumber \\
& - & \sqrt{n_{\text{F}}(2\pi\Omega)}\sqrt{n_{\text{F}}(2\pi\Omega')}\bigg\} Z_{k_{y}}(\Delta)\Bigg],
\label{Thermal Average Rindler Operators}
\end{eqnarray}
where $\Delta\equiv\Omega-\Omega'$ and we define the function
$Z_{k_{y}}(\Delta)$:
\begin{equation}
Z_{k_{y}}(\Delta) = \int_{-\infty}^{\infty}\frac{dk_{x}}{2\pi k} \bigg(\frac{k+k_{x}}{k-k_{x}}\bigg)^{-i\Delta/2}n_{\text{F}}(\beta\epsilon_{k}),
\end{equation}
which we emphasize is real (i.e., $Z^{*}_{k_{y}}(\Delta)=Z_{k_{y}}(\Delta)$).
Note the difference between the two types of Fermi distributions used here. The first, $n_{\text{F}}(\beta\epsilon_{k})$, is due to a heat bath labeled by the environment temperature parameter $\beta$ and is a function of the Minkowski energy $\epsilon_{k}$. The second, $n_{\text{F}}(2\pi\Omega)$, is an emergent thermal distribution governed by the strain frequency $\omega_{c}=v_{0}/\lambda$.
The thermal averages in {(\ref{Thermal Average Rindler Operators})} have a temperature-independent part proportional to a delta function in energy $\delta(\Omega-\Omega')$ and a temperature-dependent part having the function $Z_{k_{y}}(\Omega-\Omega')$.
To get an intuition for this formula, we discretize the wavevector and frequency delta functions to Kronecker delta functions,
effectively smearing the Rindler operators~\cite{Takagi:1986kn}. Then taking $k'_{y}=k_{y}$ and $\Omega'=\Omega$, the electron (or hole) thermal averages take the following form:
\begin{eqnarray}\label{Thermal Average Rindler Operators Smeared}
&& N_{k_{y},\Omega} \equiv \big\langle \hat{c}^{\dagger}_{k_{y},\Omega}\hat{c}_{k_{y},\Omega} \big\rangle_{\beta,\cal{M}} - Z_{k_{y}}(0) \nonumber \\
& = & n_{\text{F}}(2\pi\Omega)\Bigg[1-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{d\hat{k}_{x}}{\sqrt{\hat{k}^{2}_{x}+\hat{k}^{2}_{y}}} n_{\text{F}}\Big(\sqrt{\hat{k}^{2}_{x}+\hat{k}^{2}_{y}}\Big)\Bigg],~~~~~~~
\end{eqnarray}
where $\Omega$ is the dimensionless frequency used elsewhere (in which the Unruh temperature is $1/(2\pi)$) and
the wavevectors $\hat{\boldsymbol{k}}=\frac{\hbar v_{0}\boldsymbol{k}}{k_{B}T}$ are normalized using the real system temperature $T$. We have also renormalized the number average by subtracting off the Rindler vacuum contribution which can be found by setting $T_{\text{U}}=0$ ($\lambda\rightarrow\infty$) in (\ref{Thermal Average Rindler Operators}), or equivalently subtracting off $Z_{k_{y}}(0)$ from the expectation value in the first line. This is the same procedure as was discussed in (\ref{Renormalized Internal Energy}) without which the integrals inside the expectation values diverge.
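For illustration, a minimal Python sketch of Eq.~(\ref{Thermal Average Rindler Operators Smeared}); the values of $\hat{k}_{y}$ below are hypothetical and need not match those of Fig.~\ref{Thermal Averages}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

nF = lambda x: 1.0/(np.exp(x) + 1.0)

def N_occ(Omega, ky):
    """Renormalized occupancy of the smeared Rindler operators."""
    reduction = (1.0/np.pi)*quad(
        lambda kx: nF(np.hypot(kx, ky))/np.hypot(kx, ky), -np.inf, np.inf)[0]
    return nF(2*np.pi*Omega)*(1.0 - reduction)

for ky in (0.5, 2.0, 10.0):        # illustrative values of hat{k}_y
    print(ky, [round(N_occ(W, ky), 4) for W in (0.0, 0.5, 1.0)])
\end{verbatim}
For large $\hat{k}_{y}$ the bracket approaches unity and the pure Fermi-Dirac (Unruh) result is recovered, as in the dashed curve of Fig.~\ref{Thermal Averages}.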
In Fig.~\ref{Thermal Averages}, we plot the renormalized occupancy as a function of frequency for various values of
the normalized wavevector $\hat{k}_{y}$. This figure shows that nonzero environment temperature leads to a stimulated
reduction of fermions~\cite{Parker:1966,Parker:1971pt,Hu:1986jd,Hu:1986jj,Kandrup:1988sg}, i.e., a smaller Unruh effect. However, this reduction is dependent on the momentum $k_y$, with the
$\hat{k}_{y}\rightarrow\infty$ curve (dashed line) identical to the zero-temperature Unruh effect, and an increasing
stimulated reduction with decreasing $\hat{k}_y$.
This happens because we start with an initial thermal state of fermions, and Pauli's exclusion principle does not allow the new fermions spontaneously created via strains to co-exist with them, hence the reduction. The higher the initial temperature, the lower the value of $\hat{k}_{y}$, and therefore the further the spectrum departs from the Fermi-Dirac distribution. In other words, if we keep the environment temperature fixed, then in the limit of small wavelength we recover the Unruh effect, while for larger wavelengths the average fermion number strays away from the perfect Fermi-Dirac distribution.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{Fig4.pdf}
\caption{(Color Online) A figure showing the average number of fermions (plotted with respect to the mode energy normalized with the Unruh temperature, see Eq.~(\ref{Thermal Average Rindler Operators Smeared})) in a graphene sheet which is initially in a thermal state and is then strained, leading to stimulated particle reduction, for various values of the momentum $\hat{k}_{y}$ normalized with the real temperature. The dashed black curve represents the Unruh effect with a perfect Fermi-Dirac distribution, which is achieved in the limit $\hat{k}_{y}\rightarrow\infty$, i.e. large $k_{y}$ or zero environment temperature. As the temperature rises, the Fermi-Dirac distribution gets reduced due to Pauli's principle.}
\label{Thermal Averages}
\end{figure}
Next, we turn to the direct calculation of the internal energy $U$ using Eq.~(\ref{Renormalized Internal Energy}).
For this task we shall use Eq.~(\ref{Thermal Average Rindler Operators}) without the above-mentioned discretization that
was used for Fig.~\ref{Thermal Averages}.
We find that the internal energy has two contributions $U=U^{0}+U^{\beta}$. For the zero temperature part $U^{0}$, the energy and momentum integrals inside the thermal averages can be simplified by using the Dirac delta functions $\delta(\Omega-\Omega')$ and $\delta(k_{y}-k_{y}')$ that pin $\Omega'=\Omega$ and $k_{y}=k_{y}'$. Then integration can be performed over momenta $k_{y}$ using the following identity:
\begin{eqnarray}
\hspace{-0.5cm} \int_{0}^{\infty}dk_{y}~k_{y}~ K_{\frac{1}{2}+i\Omega}(k_{y}x)K_{\frac{1}{2}-i\Omega}(k_{y}x) & = & \frac{\pi^{2}}{4x^{2}}\frac{\Omega}{\sinh\pi\Omega},~~~~~
\end{eqnarray}
which leads to:
\begin{eqnarray}\label{Temperature Independent Part}
U^{0} & = & \frac{L_{y}}{\pi}\int \frac{dx}{x^{2}} \int_{0}^{\infty}d\Omega~ \hbar\Omega~ \Omega\coth\pi\Omega~ n_{\text{F}}(2\pi\Omega), \nonumber \\
& = & \frac{L_{y}}{\pi}\int_a^\infty \frac{dx}{x^{2}} \int_{0}^{\infty}d\Omega~ \hbar\Omega~ \Omega~ n_{\text{B}}(2\pi\Omega),
\end{eqnarray}
where in going from the first to the second line we used the identity $\coth x\cdot n_{\text{F}}(2x)=n_{\text{B}}(2x)$, and we cut off the $x$ spatial integral
at the lattice scale $a$.
We also note that the energy labels $\Omega$ that are not associated with an $\hbar$ are to be understood as normalized by $\omega_{c}$. This result is for the right side of the honeycomb lattice, per node and per spin state. This temperature-independent contribution $U^{0}$ is made up of three elements: the mode energy $\hbar\Omega$, the density of states $\Omega\coth\pi\Omega$, and the occupancy of the energy levels given by a Fermi-Dirac distribution $n_{\text{F}}(2\pi\Omega)$.
In the last line, however, we see that the product of the last two factors in the first line effectively
yields a linear-in-energy density of states multiplied by the Bose-Einstein distribution.
This is Takagi's apparent \emph{statistics inversion} {\cite{Takagi:1986kn}} that we discussed in equations {(\ref{Fluctuation-Dissipation Version 1})} and {(\ref{Fluctuation-Dissipation Version 2})}.
%
Thus, although Eq.~{(\ref{Temperature Independent Part})} pertains to fermions, the final result looks like
Planck's black body result for photons.
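Both the statistics-inversion identity and the resulting frequency integral, which evaluates to $\zeta(3)/(4\pi^{3})$, can be confirmed numerically; a short Python sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

nF = lambda x: 1.0/(np.exp(x) + 1.0)
nB = lambda x: 1.0/np.expm1(x)

x = np.linspace(0.1, 5.0, 50)
assert np.allclose(nF(2*x)/np.tanh(x), nB(2*x))   # coth(x) nF(2x) = nB(2x)

I0 = quad(lambda W: W**2*nB(2*np.pi*W) if W > 0 else 0.0, 0, np.inf)[0]
print(I0, zeta(3)/(4*np.pi**3))                   # both ~ 0.00969
\end{verbatim}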
The temperature-dependent part of the total internal energy, $U^{\beta}$, depends on the temperature-dependent terms of the thermal averages in Eq.~(\ref{Thermal Average Rindler Operators}). For this contribution,
the momentum integrals inside the thermal averages can be simplified by using the Dirac delta function $\delta(k_{y}-k_{y}')$, pinning $k'_{y}=k_{y}$. Then, integrating over $x$ using Eq.~(\ref{Normalization Bessel}) gives us a Dirac delta function $\delta(\Omega-\Omega')$.
This along with the finite temperature renormalization discussed in Eq.~(\ref{Renormalized Internal Energy}) gives the temperature dependent part of internal energy:
\begin{equation}
U^{\beta} = -\frac{L_{y}}{\pi}\int_{0}^{\infty}d\Omega~ \hbar\Omega~ n_{\text{F}}(2\pi\Omega) \int_{-\infty}^{\infty}dk_{y}~Z_{k_{y}}(0).
\end{equation}
The momentum integral can be simplified by switching to polar coordinates, i.e. $(k_{x},k_{y})\rightarrow(k,\theta)$, and using the identity $\int_{0}^{\infty}dx~ n_{\text{F}}(x)=\log2$, thus yielding:
\begin{equation}
\int_{-\infty}^{\infty}dk_{y}~Z_{k_{y}}(0) = \frac{k_{\text{B}}T\log2}{\hbar v_{0}}.
\end{equation}
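The elementary identity invoked here is confirmed in one line:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
print(quad(lambda x: 1.0/(np.exp(x) + 1.0), 0, np.inf)[0], np.log(2.0))
\end{verbatim}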
Compiling the results for the temperature-independent and temperature-dependent parts, we find that the
renormalized total internal energy $U=U^{0} + U^{\beta}$ for a strained graphene sheet kept in an environment at finite temperature is:
\begin{eqnarray}
U & = & \frac{L_{y}}{\pi a} \int_{0}^{\infty}d\Omega~ \Omega\coth\pi\Omega ~\hbar\Omega~ n_{\text{F}}(2\pi\Omega) \nonumber \\
& - & \frac{L_{y}}{\pi \lambda} \frac{\log2}{\beta\epsilon_{c}} \int_{0}^{\infty}d\Omega~ \hbar\Omega~ n_{\text{F}}(2\pi\Omega),
\end{eqnarray}
where $\epsilon_{c}=\hbar\omega_{c}$. This result depends linearly on temperature and manifestly shows that, because we started with an initial thermal state of fermions in flat graphene, the process of straining leads to stimulated particle reduction due to Pauli's exclusion principle {\cite{Parker:1966,Parker:1971pt,Hu:1986jd,Hu:1986jj,Kandrup:1988sg}}.
\section{Concluding remarks}
\label{SEC:nine}
In this paper, we have discussed how a honeycomb lattice that is strained inhomogeneously can act as an arena where the analogue Rindler physics associated with accelerating observers can be realized. We broke this problem into two stages. The first stage is that of an unstrained flat graphene sheet that possesses
(discrete) translation symmetry, and leads to an emergent Dirac equation for low energy modes. This mimics the evolution of fermions in flat Minkowski spacetime. We then solved the evolution equation to obtain the mode expansion in terms of plane waves in space and time. This choice helps us define a structure of ladder operators $\hat{a}_{\boldsymbol{k}}$ and $\hat{b}_{\boldsymbol{k}}$ which, when acting on the Minkowski or flat graphene vacuum state $|0_{\cal{M}}\rangle$, lead to excitation of electrons and holes that obey a linear in energy-momentum dispersion relation.
The second stage starts when we suddenly switch on strains to create a Rindler Hamiltonian with a spatially varying Fermi velocity $v(x)\simeq v_{0}\frac{|x|}{\lambda}$, where the origin $x=0$ acts as an analogue Rindler horizon separating the $x<0$ and $x>0$ regions and forbidding low-energy, long-wavelength electrons from tunneling through. Thus the two disconnected sides of
strained graphene mimic the causally disconnected left and right Rindler wedges. We then solved the Dirac equation for right-handed Weyl fermions and obtained the solutions in terms of Bessel functions that blow up at the horizon and asymptotically vanish at large $x$. Here the plane-wave basis in Rindler time helps us choose the structure of the Rindler creation and annihilation operators $\hat{c}_{k_{y},\Omega}$ and $\hat{d}_{k_{y},\Omega}$ for electrons and holes with respect to the Rindler vacuum state $|0_{\cal{R}}\rangle$. However, unlike the Minkowski case, here, due to broken translation symmetry, there is no band dispersion and the energy and momentum are decoupled.
Since the same quantum field operator has two different representations in the flat and strained regimes, by projecting one onto the other we find that the Minkowski vacuum $|0_{\cal{M}}\rangle$ appears to the operators of the strained system as if it were at finite temperature, swarming with Rindler particles.
This can be understood in terms of the Heisenberg picture where the state of the system remains the same, whereas the operators evolve, and thus in the sudden approximation the original state is viewed as a linear combination of the eigenstates of the new Hamiltonian. In fact, the Minkowski vacuum state corresponding to the flat system can be expressed as a two-mode squeezed state with respect to the Rindler vacuum, since one side of the lattice is unavailable to the modes residing on the opposite side.
Thus, expectation values on the right side effectively involve a trace over the left side, amounting to a mixed thermal density operator for the right side. This is similar to what happens in Rindler spacetime: when an observer picks a certain acceleration, say $a>0$, they are naturally causally disconnected from the observers accelerating opposite to them. As a result, the Minkowski vacuum averages of the Rindler ladder operators pertaining to one side appear as thermal averages, which is known as the Fulling-Davies-Unruh effect.
After discussing this thermal-like creation of particles, we looked into the properties of the strained Green's functions, which satisfy the KMS condition ensuring that if the analogue spacetime has a horizon in it, then the spectrum of particles it creates is bound to be thermal in nature. Another feature of these Green's functions is that Huygens' principle gets violated because graphene is a two-dimensional material, leading to a Bose-Einstein spectrum for the electron-hole pairs created by strains,
a manifestation of Takagi's statistics inversion.
We then discussed how the Unruh thermality (for low-energy and long-wavelength modes) could be measured in photoemission spectroscopy (PES) experiments, and how the inversion factor could be seen in scanning tunneling microscopy experiments that measure the density of states. In PES, shining photons on graphene would excite fermions to higher states according to a Bose distribution; in this sense, these experiments are related to the Unruh-DeWitt detectors that also get excited with a Bose-Einstein response when interacting with acceleration radiation. We also found that a similar thermal-like behavior could be seen in measurements of the spatially averaged electronic conductivity of an isolated strained honeycomb lattice, which at low energies exhibits a frequency dependence
similar to that found for a flat graphene sheet kept at a finite environment temperature, signalling the emergence of Unruh-like thermality. Finally, we ended our discussion with a calculation of the total system energy due to strains at a finite environment temperature and found that it has a zero-temperature portion which resembles the black-body spectrum of photons, again signalling statistics inversion, and a finite-temperature part whose contribution is negative. This is because, if we start with an initially excited (thermal) state in flat graphene, strains lead to stimulated particle reduction, the Pauli principle not allowing newly created fermions to occupy energy levels already occupied by thermal fermions.
\section{Acknowledgements}
The authors are grateful to Jorma Louko for useful comments. AB acknowledges financial support from the Department of Physics and Astronomy at LSU. This work was supported by the National Science Foundation under Grant 2208036.
\section{Introduction}
Particles floating at an interface can interact and form aggregates~\cite{nicolson1949,kralchevsky1994,kralchevsky2000,vella2005}.
This is a well-known effect of the deformation of the surface around each particle.
Indeed, a single floating particle is often surrounded by a meniscus that is a function of its shape, buoyancy, and wetting properties~\cite{kralchevsky1994,danov2010,poty2014}.
For instance, a heavy and/or hydrophobic sphere will create a concave depression on a water surface.
When floating on a sloped surface, it can experience a lateral force, as it minimizes its potential energy~\cite{nicolson1949}.
Depending on whether the particle is buoyant or not, it will tend to move up or down the slope, respectively.
Two neighbouring particles will therefore attract or repel, as each particle experiences an inclination of the interface caused by the other~\cite{kralchevsky1994}.
Two heavy spheres will thus attract and cluster, as they are each dragged by gravity in the depression around the other.
This phenomenon of agglomeration is sometimes colloquially known as the Cheerios effect, as it can be observed with breakfast cereals floating in a bowl of milk~\cite{vella2005}.
It has been proposed to use magnetic floating particles to generate self-assemblies~\cite{golosovsky1999,wen2000,grzybowski2000,golosovsky2002,snezhko2006,vandewalle2012,vandewalle2013}.
When a magnetisation is induced perpendicular to the interface, the particles experience a repulsive dipole-dipole interaction at short range, opposing the capillary attraction and preventing clustering~\cite{wen2000,vandewalle2012,vandewalle2013}.
This can lead to the appearance of a finite equilibrium distance, so that the particles self-organize into a triangular lattice, which would be the expected symmetry on a flat surface~\cite{messina2015}.
Other planar crystal symmetries have also been observed~\cite{wen2000} as well as defects, such as fivefold symmetries in polydisperse assemblies~\cite{wen2000} or in heavy assemblies, when a large curvature of the surface is reached due to the combined weight of the particles~\cite{vandewalle2012,vandewalle2013}.
Similar structures have been reached with little to no capillary interaction, using a non-uniform magnetic field for confinement instead~\cite{golosovsky1999,golosovsky2002}.
Floating magnetic particles can also form structures outside of thermodynamic equilibrium, called dynamic self-assemblies~\cite{grzybowski2000,snezhko2006,snezhko2011}.
Such structures rely on a constant supply of energy, for instance using time-dependent magnetic fields.
Examples of forces driving these dynamic assemblies include self-induced surface waves~\cite{snezhko2006,snezhko2011}, and hydrodynamic interactions induced by the rapid rotation of the particles~\cite{grzybowski2000}.
This paper focuses on recent developments in the study of few-body assemblies of metallic spheres on a water surface.
Typically, spheres of diameter $D \approx 500~\mathrm{\mu m}$ are used, so that thermal agitation is negligible.
Contact between particles is usually prevented, as they are exposed to a constant vertical magnetic induction field $B_z$ of the order of a few mT.
A time-dependent horizontal magnetic field $\vec{B}_h (t) = B_x (t) \vec{e}_x + B_y (t) \vec{e}_y$ is added to generate deformations in the self-assembly.
\section{Methods}
Metallic spheres are placed on a water bath.
The steels or alloys used (mainly UNS S42000 and G52986) and the spherical shape of the particles allow for a linear magnetisation, with little residual magnetism~\cite{lagubeau2016}.
The magnetic moment of a particle exposed to an induction field $\vec{B}$ is expressed by $\vec{m} = \chi V \vec{B}/\mu_0$, where $\mu_0$ is the vacuum permeability, $V$ the volume of a sphere and $\chi$ its effective susceptibility.
For a spherical object, it is linked to the bulk susceptibility $\chi_{\mathrm{bulk}}$ by the relation $\chi = \chi_{\mathrm{bulk}}/(1+\chi_{\mathrm{bulk}}/3)$, such that for materials with a large susceptibility, we have $\chi \approx 3$~\cite{osborn1945}.
The materials used herein have a bulk susceptibility $\chi_{\mathrm{bulk}} > 300$.
The magnetic dipole-dipole potential between two identical spheres separated by a distance $d$ is given by
\begin{equation}
U_m = \frac{\mu_0 \left[ m_z^2 + m_h^2 \left(1 - 3\cos^2 \theta \right)\right]}{4\pi d^3},
\label{Um}
\end{equation}
where $\theta$ is the angle between the relative position of the pair and the horizontal magnetic field $\vec{B}_h$.
Vertical and horizontal components of the magnetic moments $m_z$ and $m_h$ have been separated, as $m_z$ can only generate a repulsion and is usually kept constant in the course of an experiment.
Figure~\ref{setup} illustrates the experimental setup.
The magnetic induction fields are generated using three orthogonal pairs of Helmholtz coils.
A direct current power supply provides the $z$ coils with a current $i_z$.
A multichannel arbitrary function generator, passing through a pair of linear amplifiers, feeds the $x$ and $y$ coils with currents $i_x (t)$ and $i_y (t)$.
The water bath is at the center of the coils.
It is lit from below through a diffuser and filmed from atop using a CCD video camera and a macro lens.
\begin{figure}
\includegraphics[width=\linewidth]{FIG_setup.pdf}
\caption{
Experimental setup of magnetocapillary self-assemblies.
A water bath is placed in the center of a tri-axis Helmholtz coil system.
A direct current $i_z$ is injected in the $z$ coils to generate a constant vertical magnetic induction field $B_z$.
Currents $i_x (t)$ and $i_y (t)$ are injected in the horizontal coils to generate time-dependent horizontal magnetic fields $B_x (t)$ and $B_y (t)$.
Steel spheres of diameter $D$ floating on the bath possess a magnetic induced moment $\vec{m}$ proportional to the total magnetic field in the center of the coils.}
\label{setup}
\end{figure}
The capillary interaction potential can be simplified by the assumption that the deformation of the surface in the presence of two particles is the sum of the deformations caused by each particle individually.
This approximation holds for a low number of particles separated by a large distance $d \gg D$~\cite{lagubeau2016}.
For small deformations, the meniscus profile around a particle is given by the Laplace equation and reads
\begin{equation}
\frac{2z}{D} = A K_0 (d/l_c),
\label{zeta}
\end{equation}
where $A$ is given by the boundary conditions at the contact line, $l_c$ is the capillary length and $K_0$ is the zeroth-order modified Bessel function of the second kind~\cite{osborn1945}.
The potential energy associated with the capillary interaction between two identical spheres is therefore given by
\begin{equation}
U_c = -2\pi\gamma q^2 K_0 (d/l_c),
\label{Uc}
\end{equation}
where $\gamma$ denotes the surface tension and $q = a \sin \psi$ is a typical deformation length called the capillary charge, by analogy with the Coulomb interaction potential~\cite{kralchevsky1994}.
It is a function of the contact line radius $a$ and the meniscus slope angle $\psi$.
Contrary to electric charges, like capillary charges attract and unlike charges repel.
Both materials used have a density $\rho_s$ of approximately 7800~$\mathrm{kg/m^3}$ and thus would not float without surface tension.
This places an upper bound on the diameter $D$ of the spheres around $1~\mathrm{mm}$ on water.
For most experiments, we have monodisperse assemblies with $D=397$, $500$ or $793~\mathrm{\mu m}$, meaning that thermal agitation is considered negligible.
At room temperature, the diameter under which thermal agitation overcomes the capillary attraction for a pair of particles is $3.4~\mathrm{\mu m}$, which gives a lower bound on $D$~\cite{lagubeau2016}.
However, this bound could be further lowered by relying on forces other than gravity to generate the surface deformation, such as wetting~\cite{kralchevsky1994} or a magnetic force~\cite{vella2015}, or by using another geometry for the interface~\cite{kralchevsky2000,ershov2013}.
One can compare the magnetic and capillary energies by defining a magnetocapillary number~\cite{lagubeau2016,chinomona2015}.
Let $\mathcal{M}(B)$ denote the magnetocapillary number associated with a given magnetic induction field of amplitude $B$.
We have
\begin{equation}
\mathcal{M}(B) = \frac{\chi^2 V^2 B^2}{8\pi^2\gamma q^2 l_c^3 \mu_0}.
\label{M}
\end{equation}
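For orientation, a rough order-of-magnitude evaluation of Eq.~(\ref{M}) in Python; the capillary charge $q$ below is a hypothetical value chosen purely for illustration:
\begin{verbatim}
import numpy as np

mu0, g = 4e-7*np.pi, 9.81
gamma, rho_l = 0.072, 1000.0           # surface tension, density of water
lc = np.sqrt(gamma/(rho_l*g))          # capillary length, ~2.7 mm
D, chi, B = 500e-6, 3.0, 3e-3          # bead diameter, susceptibility, field
q = 5e-6                               # capillary charge (hypothetical)
V = np.pi*D**3/6
M = (chi*V*B)**2/(8*np.pi**2*gamma*q**2*lc**3*mu0)
print(f"l_c = {lc*1e3:.2f} mm, M(B) = {M:.2f}")  # M of order 0.1
\end{verbatim}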
Using this expression, we can write the potential energy of interaction for a pair of particles as
\begin{equation}
\begin{split}
U &= U_m + U_c \\
&= \Gamma \left[ \frac{\mathcal{M}(B_z) + \mathcal{M}(B_h) \left( 1 - 3\cos^2 \theta \right)}{d^3/l_c^3} -K_0 \left(d/l_c\right) \right]
\end{split}
\label{U}
\end{equation}
where we defined $\Gamma=2\pi\gamma q^2$. When $B_h = 0$, the potential shows a competition between the attractive capillary force and a repulsive dipole-dipole interaction.
When a horizontal component $B_h$ is added, a preferential orientation appears, as the interaction energy is minimal for $\theta = 0$, \emph{i.e.} when the pair is aligned with $B_h$.
Furthermore, the contribution of $B_h$ leads to either an attractive or a repulsive force depending on the sign of $\left( 1-3\cos^2\theta \right)$.
\section{Static assemblies}
Depending on the values of $B_z$, $B_h$ and the initial conditions, a wide variety of configurations can be observed.
When $B_h = 0$ and granted that $B_z$ is large enough to overcome the capillary attraction, a triangular lattice is typically observed.
If the weight of the assembly is enough to significantly curve the interface, fivefold symmetries can also be observed~\cite{vandewalle2012,vandewalle2013}.
The addition of a horizontal field $B_h$ can significantly change the symmetry of the assemblies.
Different configurations can coexist, and hysteresis is observed~\cite{vandewalle2013}.
In particular, contact between particles is not always reversible, as a capillary bridge can form between two spheres, significantly increasing the energy required to separate them again.
In the case of a pair of particles, minimizing Eq.~(\ref{U}) leads to an implicit expression for the equilibrium distance $d_{\mathrm{eq}}$.
We have
\begin{equation}
\frac{d^4_{\mathrm{eq}}}{l^4_c} K_1 (d_{\mathrm{eq}} / l_c) = 3 \mathcal{M}(B_z) - 6 \mathcal{M}(B_h)
\label{deq}
\end{equation}
as well as $\theta = 0$ if $B_h \neq 0$.
A simpler expression can be obtained by considering that $K_1 (d/l_c) \approx l_c/d$, which holds for $d\ll l_c$.
This leads to
\begin{equation}
\frac{d_{\mathrm{eq}}^3}{l_c^3} = 3 \mathcal{M}(B_z) - 6 \mathcal{M}(B_h).
\label{deq2}
\end{equation}
When $d_{\mathrm{eq}} = D$, the particles come into contact.
By injecting this into Eq.~(\ref{deq2}), we find a critical ratio of $B_h$ and $B_z$ at which contact occurs, namely
\begin{equation}
\frac{B_h}{B_z} = \sqrt{\frac{\mathcal{M}(B_h)}{\mathcal{M}(B_z)}} = \sqrt{\frac{1}{2} \left( 1-\frac{D^3}{3 l_c^3 \mathcal{M}(B_z)} \right)}.
\label{collapse}
\end{equation}
For large values of $\mathcal{M}(B_z)$, this corresponds to $B_h \approx B_z / \sqrt{2}$.
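To make these estimates concrete, the implicit condition (\ref{deq}) can be solved numerically and compared with the $d\ll l_c$ approximation (\ref{deq2}); a minimal Python sketch with hypothetical magnetocapillary numbers (and the capillary length of water):
\begin{verbatim}
import numpy as np
from scipy.special import k1
from scipy.optimize import brentq

lc = 2.7e-3            # capillary length of water (m)
Mz, Mh = 0.1, 0.02     # hypothetical magnetocapillary numbers

rhs = 3*Mz - 6*Mh
f = lambda d: (d/lc)**4*k1(d/lc) - rhs
# the bound (stable) equilibrium is the root below the maximum of x^4 K1(x)
deq = brentq(f, 1e-6, 3.5*lc)
print(f"d_eq = {deq*1e3:.2f} mm, "
      f"d << l_c approx. = {lc*rhs**(1/3)*1e3:.2f} mm")
\end{verbatim}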
In general, the larger the assembly, the earlier contact occurs.
This is due to two distinct effects.
First, the addition of a horizontal field $B_h$ can either decrease or increase the distance between two particles, depending on $\theta$, the orientation of the pair relative to $B_h$.
If the pair is in line with $B_h$, the horizontal component of the dipole-dipole interaction is an attraction.
Therefore, the distance between particles in a line of 3 or more is further lowered by $B_h$.
For instance, 3 particles can assemble into either a triangle or a line.
The collinear configuration is stable for large values of $B_h$ and $\mathcal{M}(B_z)$, as will be shown later.
In this case, as demonstrated in~\cite{chinomona2015}, contact occurs for
\begin{equation}
\frac{B_h}{B_z} = \sqrt{\frac{1}{2} \left( 1-\frac{24}{17}\frac{D^3}{l_c^3 \mathcal{M}(B_z)} \right)},
\label{collapse3}
\end{equation}
which is slightly lower than the critical value for a pair of particles.
Second, the increased weight of the assembly creates a curvature of the interface, lowering the distance between neighbouring particles at the center of the assembly.
This means that, in given conditions, there exists a maximum number of particles $N_c$ above which contact will occur.
As shown in~\cite{vandewalle2013}, by comparing the total weight of the assembly with the capillary force on the boundary of the assembly, a scaling of $N_c$ is found, namely
\begin{equation}
N_c \propto \frac{\gamma^2}{(\rho_s-\rho_l)^2 g^2 D^4}
\label{nmax}
\end{equation}
where $\rho_s$ and $\rho_l$ denote the densities of the particles and the liquid, respectively.
This global curvature is also at the origin of fivefold defects in the assemblies, as a triangular lattice would be expected on a flat surface.
\begin{figure}[t]
\includegraphics[width=\linewidth]{FIG_configs.pdf}
\caption{
Standard deviation of the distances between the particles and the center of mass during a quasistatic cycle of $B_h/B_z$, from zero to the contact point and back to zero.
The points are obtained by a Monte-Carlo simulation, while the insets show the corresponding configurations observed in the experiment.
The green vertical solid line shows the contact threshold calculated from Eq.~(\ref{collapse3}).
(a) The regular triangle (I) formed by 3 particles deforms into an isosceles (II), then transitions into a collinear state (III).
Hysteresis is observed.
(b) The rhombus (IV) formed by 4 particles continuously deforms (V), then experiences a breaking of mirror symmetry in the direction perpendicular to the field (VI), with no hysteresis.
}
\label{configs}
\end{figure}
As mentioned earlier, a horizontal field can drastically change the symmetry of an assembly.
Figure~\ref{configs} shows the structures obtained as a function of the ratio $B_h/B_z$, for assemblies of 3 and 4 particles.
Different structures are identified by plotting the standard deviation $\sigma_d$ of the distances between the particles and the center of mass, or
\begin{equation}
\frac{\sigma_d}{D} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \frac{(d_i-\overline{d})^2}{D^2} }
\label{sigma}
\end{equation}
where $d_i$ denotes the distance between particle $i$ and the center of mass and $\overline{d}$ is the average distance to the center of mass.
The curve was obtained through a Monte-Carlo simulation based on the pair potential of Eq.~(\ref{U}).
The horizontal field $B_h$ is increased quasistatically from 0 to the point of contact, then decreased to zero, while $B_z$ is kept constant at about 3~mT.
The insets show the structures obtained in the experiment, next to the corresponding branch in the simulation.
While contact between particles is generally not reversible in the experiment, the same configurations and hysteretic behaviours are observed.
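For orientation, a minimal Metropolis-type quench based on the pair potential of Eq.~(\ref{U}) can be sketched as follows (this is an illustrative sketch, not the code used for Fig.~\ref{configs}; lengths are in units of $l_c$, energies in units of $\Gamma$, and the magnetocapillary numbers are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)

def energy(pos, Mz, Mh):
    """Total pair energy of Eq. (U); B_h is taken along x."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = pos[j] - pos[i]
            d = np.hypot(*r)
            cos2 = (r[0]/d)**2
            E += (Mz + Mh*(1.0 - 3.0*cos2))/d**3 - k0(d)
    return E

def relax(pos, Mz, Mh, steps=20000, step=0.01, T=1e-4):
    E = energy(pos, Mz, Mh)
    for _ in range(steps):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] += step*rng.standard_normal(2)
        Et = energy(trial, Mz, Mh)
        if Et < E or rng.random() < np.exp((E - Et)/T):
            pos, E = trial, Et
    return pos

pos = relax(0.3*rng.standard_normal((3, 2)), Mz=0.1, Mh=0.02)
dist = np.hypot(*(pos - pos.mean(axis=0)).T)
print(pos, dist.std())    # sigma_d of Eq. (sigma), here in units of l_c
\end{verbatim}
Sweeping $B_h/B_z$ quasistatically amounts to repeating such a quench while slowly varying $\mathcal{M}(B_h)$ and reusing the relaxed positions as the next initial condition.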
In the 3-particle case, the equilibrium configuration at $B_h = 0$ is a regular triangle.
When $B_h/B_z$ is increased, the triangle gradually deforms into an isosceles.
Close to the contact event, the system transitions into a collinear configuration.
Contact between particles occurs around $B_h/B_z \approx 0.65$, which is very close to the value obtained from Eq.~(\ref{collapse3}).
When $B_h/B_z$ is decreased, the system remains in the collinear configuration down to $B_h/B_z \approx 0.3$, where it goes back to the isosceles configuration.
This hysteresis loop demonstrates the coexistence of two stable configurations.
On the other hand, no hysteresis is observed in the 4-particle case.
At $B_h/B_z = 0$, we have $\sigma_d /D \neq 0$.
Indeed, the particles are not equidistant to the center of mass, as the assembly takes the shape of a rhombus with two particles closer to the center, and the other two further away.
When $B_h/B_z$ is increased, this difference becomes larger.
Around $B_h/B_z \approx 0.5$, the symmetry is further broken in the system when the two outermost particles align with one of the innermost ones.
This transition happens continuously.
Contact occurs when the three aligned particles touch, which also happens at $B_h/B_z \approx 0.65$.
Note that representing the deformation of the assemblies with a single variable, $\sigma_d$, does not allow one to distinguish configurations that are too similar.
For instance, two isosceles configurations can exist in the 3-particle case, depending on the initial conditions.
However, they both produce similar values of $\sigma_d$.
To distinguish these isosceles requires a more detailed study of the internal angles of the triangular states~\cite{grosjean2015}.
Nonetheless, this brief quasistatic analysis of the assemblies demonstrates the presence of several states with hysteresis loops, and continuous as well as discontinuous transitions.
More importantly, the effect of $N$ on the configurations is profound, meaning that what is known for assemblies of $N$ particles cannot, in general, be transcribed to assemblies of $N'\neq N$ particles.
Each $N$ possesses its own energy landscape, so that the addition or removal of a particle has non-trivial consequences.
\section{Dynamic regimes}
Under time-dependent magnetic fields, magnetocapillary self-assemblies oscillate, producing a wide range of behaviours~\cite{lagubeau2016,chinomona2015,lumay2013,grosjean2015,grosjean2016}.
Most notably, some vibration modes are non-reciprocal, meaning that the succession of shapes adopted by the assembly is not invariant under a time-reversal transformation.
This can lead to the locomotion of the assembly, as was first reported in~\cite{lumay2013}.
The submillimetric size of the particles, combined with the low frequencies (usually below 1~Hz) used for the magnetic oscillations, leads to a Reynolds number $\mathrm{Re} = \rho_f U D / \eta$ typically between $10^{-3}$ and $10^{-1}$ in water, where $\rho_f$ is the fluid density, $U$ a typical speed and $\eta$ the dynamic viscosity.
This means that, in most cases, viscous dissipation dominates over inertia in the flows produced by the swimmer.
In these conditions, a breaking of time-reversal symmetry in the succession of shapes adopted by the swimmer is a necessary condition for propulsion~\cite{purcell1977,lauga2009}.
Note that this non-reciprocal motion spontaneously arises in the self-assemblies, even under reciprocal perturbations, due to the complex interactions between the particles.
Such a symmetry breaking can already be observed in a pair of particles, although no net motion is observed~\cite{lagubeau2016}.
An oscillating field $\vec{B}_h (t) = B_{x,\:0} \vec{e}_x + B \sin(\omega t) \vec{e}$ is applied to a pair of particles, with $B \ll B_{x,\:0}$ and $\vec{e}$ a unit vector at an angle $\alpha$ with $\vec{e}_x$.
Two vibration modes have been identified, a radial mode in $d$ and an angular mode in $\theta$, corresponding to the cases where the oscillation of the magnetic field is in line with and perpendicular to the pair, \emph{i.e.} $\alpha = 0$ and $\alpha = \pi /2$.
A perturbation analysis of Eq.~(\ref{U}) leads to an expression for the linear stiffness of $U$ in each case, namely
\begin{equation}
k_d = \frac{2 \Gamma}{l_c d}\left( 3 K_1 (d/l_c) - \frac{d}{l_c} K_0 (d/l_c) \right),
\label{kd}
\end{equation}
\begin{equation}
k_{\theta} = \frac{4 \Gamma l_c^3}{d^5}\mathcal{M}(B_x).
\label{ktheta}
\end{equation}
Both modes can therefore be treated as two independent damped oscillators which can experience a forcing by the external magnetic field.
Furthermore, the difference in stiffness means that each oscillation can respond with a different phase to an external oscillating perturbation, producing non-reciprocal deformations.
However, in the simple two-particle case, the particles oscillate symmetrically around the center of mass of the assembly, leading to no propulsion~\cite{lagubeau2016,lumay2013}.
\begin{figure}
\includegraphics[width=\linewidth]{FIG_collinear.pdf}
\caption{
The collinear swimmer is the simplest magnetocapillary swimmer.
(a) Three particles assemble in a line.
One can associate a spring constant with the radial oscillation of each pair, namely $k_a$ and $k_b$.
If the outermost particles differ in size, we have $k_a \neq k_b$.
(b) This can lead to non-reciprocal motion when the oscillations are out of phase, leading to locomotion.
}
\label{collinear}
\end{figure}
A minimum of three particles is therefore needed to generate a net motion.
The simplest magnetocapillary swimmer is produced in the collinear configuration of 3 particles~\cite{grosjean2016}.
If we neglect the interaction between the two outermost spheres, we can consider the collinear assembly as a combination of two particle pairs, named $a$ and $b$ (see Fig.~\ref{collinear}).
Because the particles are in line, it is possible to only excite the radial mode in $d$, keeping the motion one-dimensional.
If the particle pairs are identical, they oscillate in phase and the motion is reciprocal.
However, a phase difference can appear between the oscillations $a$ and $b$ if each pair possesses a different stiffness, namely $k_{d,\:a} \neq k_{d,\:b}$.
This can be achieved experimentally by changing the diameter of one of the outer spheres~\cite{grosjean2016}.
When a phase difference exists between the oscillations of each pair, the non-reciprocal sequence observed is similar to the Najafi-Golestanian model for a minimal microswimmer~\cite{najafi2004,golestanian2008,pickl2012,pande2015}.
The speed of such a swimmer is a function of the amplitudes $A_a$ and $A_b$ of each oscillation as well as their phase difference $\Delta\phi$, such that
\begin{equation}
V = K A_a A_b \omega \sin(\Delta\phi),
\label{Vgol}
\end{equation}
where $K$ is given by the geometry of the system~\cite{golestanian2008}.
An analytical expression for $V$ can be found that is a function of viscous damping and the stiffness of the oscillators.
This simple linear model accounts for the speed profile observed in the experiments, as well as the observed speeds of the order of 0.02~$D/T$ where $T=2\pi/\omega$ is the period of the oscillating field~\cite{grosjean2016}.
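As a sanity check of the orders of magnitude, Eq.~(\ref{Vgol}) can be evaluated directly. In the sketch below, the values of $K$, the amplitudes and the phase difference are hypothetical; only the functional form is taken from the model.
\begin{verbatim}
import numpy as np

T = 2.0                       # field period (s)
omega = 2*np.pi/T
K = 0.35                      # geometric prefactor, hypothetical
A_a, A_b = 0.1, 0.1           # amplitudes in units of D, assumed
dphi = np.pi/4                # phase difference, assumed

V = K*A_a*A_b*omega*np.sin(dphi)   # Eq. (Vgol)
print(V*T, "D per period")         # ~0.02 D/T, the observed order
\end{verbatim}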
The case of the triangular swimmer is more complex.
Indeed, all three particle pairs must be considered.
Furthermore, because of the geometry, both the $d$ and $\theta$ modes will be excited regardless of the orientation of $B_h$.
The lack of simplifying symmetry and the non-linearity of the interactions at play make it difficult to reach any analytical formulation.
However, one can understand the origin of non-reciprocal motion when $\omega \rightarrow 0$ by studying the effect of a quasistatic variation of $B_h$.
While a hysteresis cycle was clearly identified in the transition between the triangular and collinear states, as shown in Fig.~\ref{configs}, this cycle is not the one used to produce swimmers in the experiment.
Indeed, a spontaneous transition from the triangular to the collinear state would only happen near the contact event, where any perturbation leads to the collapse of the swimmer.
A more practical way of producing swimmers takes advantage of the coexistence of two isosceles states.
\begin{figure}
\includegraphics[width=\linewidth]{FIG_isosceles.pdf}
\caption{
Two isosceles configurations can be observed in 3-particle assemblies.
The \emph{lepto} (a) has two internal angles above $\pi/3$ and the \emph{platy} (b) has only one angle above $\pi/3$.
The switch from one state to the other is accompanied by a rotation of the triangle, leading to non-reciprocal motion.
This is represented by a cycle (c) in the configuration space, where $\theta$ is the orientation between the triangle and the field $\vec{B}_h$.
}
\label{isosceles}
\end{figure}
Indeed, it has been shown~\cite{grosjean2015} that several isosceles configurations can coexist, depending on the orientation of the triangle.
One is a pointy isosceles, called \emph{lepto}, with two angles above $\pi/3$ and one below (see Fig.~\ref{isosceles}a).
In this case, two of the three particles are aligned with $B_h$.
A second configuration is a flat isosceles, called \emph{platy}, with two angles below $\pi/3$ and one above (see Fig.~\ref{isosceles}b).
In that case, two of the three particles are perpendicular to $B_h$.
Because of their similar shape, those states are not distinguished in Fig.~\ref{configs}a.
However, it was shown previously in a similar Monte-Carlo study~\cite{grosjean2015} that the \emph{platy} state coexists with the \emph{lepto} state up to $B_h/B_z \approx 0.35$, above which the \emph{lepto} is the only triangular configuration observed.
One can see from Fig.~\ref{isosceles}a,b that the transition between those states must be accompanied by a rotation of the structure.
However, transition and rotation do not happen simultaneously, producing non-reciprocal motion, as illustrated in Fig.~\ref{isosceles}c.
Note that, in contrast with the collinear swimmer, the triangular one can break time-reversal symmetry with three identical particles.
One can wonder which direction each swimmer follows.
The collinear swimmer moves in alignment with the field, in the direction determined by the sign of the phase difference $\Delta\phi$.
In general, however, swimming direction is at a non-trivial angle $\delta$ with the direction of $\vec{B}_h$.
If $\vec{B}_h = B_x \sin(\omega t) \vec{e}_x$ and there is a unique $\delta$ between 0 and $\pi/2$, then, by symmetry, we can expect at least four swimming directions depending on the initial orientation of the assembly, namely $\delta$, $-\delta$, $\pi+\delta$ and $\pi-\delta$.
This is usually the case with the triangular swimmer.
Nonetheless, the trajectory of a triangular swimmer can be remote-controlled rather precisely, as shown in Fig.~\ref{SWIM}.
Indeed, once a swimming direction $\delta$ is observed, changing the orientation of $\vec{B}_h$ by an angle $\epsilon$ changes the swimming direction to $\delta+\epsilon$.
In order to facilitate the control, an offset $B_{h,\:0}$ is added to $\vec{B}_h$ so that it does not oscillate around zero any more.
A small offset of 1/10\textsuperscript{th} of the oscillation amplitude is enough to create a preferential orientation for the assembly on average, which helps keep the swimmer in a well-defined swimming mode~\cite{grosjean2015}.
Compared to the collinear swimmer, higher speeds usually around $0.3~D/T$ are reached, with the fastest ones typically reaching $0.6~D/T$.
This relatively high speed explains why the triangular swimmer is used in remote-control experiments, despite the increased complexity.
For comparison, the early artificial microswimmer of~\cite{dreyfus2005}, which uses a magnetic filament for propulsion, achieved a comparable top speed of around $0.1~L/T$, where $L$ is the length of the filament.
Using the same definition for $L$, the speed of biological flagellates is typically quite low, of the order of $0.01~L/T$~\cite{derosier1998}.
This is explained by the very fast rotation of the flagellum and the prevalence of absolute speed over energy efficiency in this case~\cite{purcell1977}.
\begin{figure}
\includegraphics[width=\linewidth]{FIG_SWIM.pdf}
\caption{
Remote-control of a three-sphere swimmer by adjusting the orientation of $B_h$.
Four trajectories, representing the letters ``Swim", are shown to illustrate the level of control achieved.
The letter ``S" shows a smooth, curved trajectory.
The ``w" illustrates a succession of straight lines and sharp turns.
The cursive ``i" is a combination of a smooth turn and a sharp reorientation.
Finally, the ``m" shows all of the aforementioned movements in a small region of space.}
\label{SWIM}
\end{figure}
It is important to note that, while the quasistatic approach offers an intuitive explanation of the origin of non-reciprocal motion in the triangular case, it is not a definitive approach to characterize the swimmers.
Indeed, dynamical effects play an important role in the deformation of the assemblies.
For instance, no hysteresis loop is observed in a quasistatic deformation of a 4-particle swimmer, as was shown in Fig.~\ref{configs}.
However, although initially thought to be poor swimmers~\cite{lumay2013}, the 4-particle swimmers can reach similar speeds to the triangular swimmer, about $0.3~D/T$.
This shows the limit of the quasistatic approach and the necessity of studying the interaction dynamics in more detail.
\section{Applications}
Magnetocapillary swimmers have some unique characteristics that set them apart from other artificial microswimmers, which could be beneficial from the points of view of both fundamental and applied research.
Notably, they are bound to an interface, self-assembled from simple components, chemically inert and fuelless, while being relatively fast and controllable.
Naturally, many challenges remain to be addressed before any actual technological application can be considered for microswimmers in general, and magnetocapillary self-assemblies in particular.
Nonetheless, performing some simple tasks with the swimmer can serve as a proof of concept for future applications as well as a basis for further studies.
Several applications are commonly cited for microswimmers, notably concerning the manipulation and transport of micro-objects~\cite{dreyfus2005,hernandez2005,raz2008,lauga2009,zhang2010,baraban2012,tottori2012,sengupta2014,cheang2014,gao2014,ding2016} and the mixing and pumping of fluids~\cite{lauga2009,wu2000,kim2004,kim2007,pushkin2013,jalali2015}.
Other applications could make use of the magnetic properties of the spheres.
For example, it is possible to produce targeted heating in a fluid using magnetic particles under high frequency oscillating fields, which has long been studied for cancer treatment~\cite{giustini2010}.
The heating of a floating particle, in this case by a laser, has also been used to manipulate objects on a water surface by locally changing surface tension~\cite{mallea2017}.
This technique could be applied to magnetocapillary assemblies to provide greater control.
In this section, two experiments are performed and discussed in order to illustrate how magnetocapillary swimmers could perform two simple tasks: the transport of an object and the local mixing of fluids.
One of the main tasks proposed for microswimmers is the transport and delivery of a cargo~\cite{hernandez2005,raz2008,lauga2009,zhang2010,baraban2012,tottori2012,sengupta2014,cheang2014,gao2014,ding2016}.
Indeed, the controlled transport of a micro-object could potentially be used in microfabrication, manipulation of biological components or targeted drug delivery~\cite{zhang2010,cheang2014,gao2014,sitti2015,ding2016}.
The transport process can be divided into three steps: the capture, the towing and the release of the cargo.
The capture of a floating object by a magnetocapillary swimmer can be relatively straightforward using capillary forces, provided that the swimmer and the object possess capillary charges of the same sign.
In other words, in the proximity of the swimmer, the metallic spheres naturally attract objects that produce a concave meniscus, which is the case for most heavier-than-water objects.
The capture of several floating objects by swimmers made of 500~$\mu$m particles is shown in Fig.~\ref{capture}.
This capture process has been successfully tested on various particle sizes, including a polyethylene sphere of about 80~$\mu$m (see Fig.~\ref{capture}a) and a sesame seed about 3.7~mm long (see Fig.~\ref{capture}b).
Finally, the capture of a polyethylene sphere of 600~$\mu$m is shown in Fig.~\ref{capture}c,d.
Note that a particle with a capillary charge of opposite sign to the swimmer would experience a repulsion and be pushed away in a way that would be difficult to control precisely.
Nonetheless, capture could still be achieved.
For example, the swimmer could capture an intermediary, amphiphilic particle, possessing both negative and positive capillary poles~\cite{vella2005,poty2014}.
Capillary gripping has been proven to work in other systems using bubbles~\cite{giltinan2014} or droplets~\cite{lambert2007} as intermediary, although this does not rely on the Cheerios effect.
Furthermore, by introducing defects in the contact line of the spheres, one could produce a directed capture at a specific site on the object~\cite{wong2016}.
Finally, it might be possible to change the sign of the capillary charge of the swimmer by using another body force opposing gravity, for example with a vertical magnetic gradient~\cite{tsai2013,vella2015}.
The main drawback of the capillary interaction as a capture method is its lack of selectivity.
Any particle possessing a capillary charge of the same sign as the swimmer will be attracted when it comes in proximity with the swimmer.
This does not present a problem, though, if the intended goal is to collect as many floating particles as possible, for example in pollution removal applications~\cite{vilela2016}.
A possible strategy to obtain a selective capture by capillary interaction would be to code a specific succession of capillary poles on the particles, by forming an undulated contact line or using non-spherical particles~\cite{loudet2005,loudet2006,danov2010,davies2014}, matching the code of the target particle.
However, most applications requiring the selective capture of a target or the capture of objects possessing capillary charges close to zero would necessitate other methods.
It has been shown that a cargo can be kept close to the swimmer during transport by hydrodynamic interaction alone~\cite{diller2014}.
Note that because magnetocapillary swimmers are chemically inert, and their locomotion is not based on a chemical interaction with their environment, coating the particles to obtain desired properties~\cite{gera2010} is an option.
For example, targeted delivery of genes using magnetic swimmers coated with a reagent has already shown promising results in the case of artificial bacterial flagella~\cite{qiu2015}.
\begin{figure}
\includegraphics[width=\linewidth]{FIG_capture.pdf}
\caption{
Capture of floating particles by capillary attraction.
Objects of various sizes can be captured, such as a polyethylene sphere of 80~$\mu$m (a) or a sesame seed of 3.7~mm (b).
(c) A 4-particle swimmer is brought in the proximity of a polyethylene sphere of about $600~\mathrm{\mu m}$.
(d) After about 15~s, the sphere is captured by the swimmer through capillary attraction.
Lines show the motions of their center of mass.
}
\label{capture}
\end{figure}
The second step of the transport process is the cargo towing itself.
Depending on the size of the load relative to the swimmer, this can vary from a trivial matter to a very complex problem.
If $L$ denotes the typical length of the cargo, an object such that $L\ll D$, as in Fig.~\ref{capture}a, will have little influence on the deformation of the swimmer and the resulting fluid flow.
On the other hand, the case of a cargo much larger than the particles, \emph{i.e.} $L\gg D$, is significantly more complex.
The motion of several particles can be impeded by the cargo, which means that a larger number of particles is needed to produce non-reciprocal deformations.
For example, in Fig.~\ref{capture}b, 13 particles have been used to capture a 3.7~mm long sesame seed.
Finding efficient swimming regimes in assemblies of a large number of particles remains an open question.
Indeed, each subgroup of particles in the assembly would need to work in synergy, generating a net flow in the same global direction, in order to overcome the increased drag.
This is usually not the case in the experiment.
With the sesame seed of Fig.~\ref{capture}b, only low speeds of the order of $10^{-2}~D/T$ were observed.
The case of large cargoes should be further investigated, in particular to determine what the optimal number of particles is, depending on $L$ and $D$.
\begin{figure}[t]
\includegraphics[width=\linewidth]{FIG_transport.pdf}
\caption{
Towing and release of a floating particle.
(a) Under a sinusoidal field $B_h (t)$, the assembly and cargo swim.
Non-reciprocal deformations are shown over one period of the oscillating field $T=2~\mathrm{s}$.
(b) Same as in (a), following the capture shown in Fig.~\ref{capture}.
In this case, the cargo is stuck between two particles.
(c) Nonetheless, swimming direction is controllable.
Here, the cargo is moved then brought back to its initial position.
A notch in the trajectory (*) happened when the cargo jumped from one particle pair to another, as shown in the inset.
(d) The non-magnetic cargo is released by sinking the metallic spheres using a non-uniform magnetic field, \emph{i.e.} by approaching a neodymium magnet.
}
\label{transport}
\end{figure}
When $L\approx D$ (as in Fig.~\ref{capture}c,d), the cargo can influence the motion of the particles to a greater or lesser extent, depending on the situation.
Figure~\ref{transport}a shows the towing by a 4-particle swimmer of a 600~$\mu$m polyethylene sphere over one period of the oscillating field at 0.5~Hz.
The deformation of the swimmer is superimposed on the initial configuration at $t=0$, in order to highlight the displacement of each particle after each time step of $T/5$.
The load is in contact with a single magnetic particle.
Its motion evidences a rotation of the particle in question, showing that its movement is relatively unhampered by the presence of the cargo, in spite of the added mass and viscous drag.
It is not impossible that the presence of the cargo may actually improve propulsion, as evidenced by the relatively large swimming speed of about 0.5~$D/T$.
This might be an effect of the large amplitude of motion of the cargo.
Furthermore, it has been shown numerically that an arbitrary swimmer loaded with a cargo can move faster than an unloaded one in some conditions~\cite{raz2008}.
Namely, an optimal size ratio between a swimmer and a cargo can exist where swimming efficiency is significantly improved, provided that the cargo is sufficiently close to the swimmer.
Figure~\ref{transport}b shows the towing of a similar polyethylene sphere, following the capture which was shown in Fig.~\ref{capture}c,d.
In this case, the sphere is in contact with two particles of the assembly.
This has the effect of creating a solid link between those particles, restricting their motion.
However, non-reciprocal motion is still observed, leading to a swimming speed of about 0.25~$D/T$.
In both situations, the swimming trajectory is controllable.
Figure~\ref{transport}c shows the transport of the cargo on a small loop.
Every change in direction corresponds to a change in the direction of the oscillating field $\vec{B}_h$, with the exception of a small notch in the trajectory (*).
This notch corresponds to the moment the cargo detached from a particle and came into contact with another one.
Despite this perturbation, the swimmer was successfully brought back to its starting position.
The last step of a transport experiment is the release of the load.
If the cargo possesses a sufficiently large magnetic susceptibility, a strong vertical magnetic field will move the particles and cargo away from each other, freeing the load.
On the other hand, if the magnetic susceptibility of the cargo is small compared to that of the particles, as is the case with the polyethylene sphere, a strong vertical magnetic gradient can sink the metallic spheres while the cargo stays afloat.
This was done in Fig.~\ref{transport}d by bringing a neodymium magnet close to the bath from below.
The sphere in the foreground is the polyethylene cargo, while three of the four particles constituting the swimmer can be seen in the background, out of focus.
\begin{figure}[t]
\includegraphics[width=\linewidth]{FIG_mixing.pdf}
\caption{
Illustration of a simple mixing device.
(a) A triangular swimmer is moved to the intersection of three fluids, each a glycerol-water mixture with a red, yellow or blue dye.
(b) A large $\vec{B}_h$ is used so that the particles collapse in a line.
(c) $\vec{B}_h$ is rotated for about 8 turns clockwise.
Fluid is entrained as the line follows the rotation of $\vec{B}_h$.
(d) $\vec{B}_h$ is rotated for about 8 turns counter-clockwise.
Far from the particles, the flow is reversible (black arrow).
Close to the particles, however, mixing has occurred due to molecular diffusion (white arrow).
}
\label{mixing}
\end{figure}
Another commonly cited application of microswimmers is mixing at the micro scale~\cite{lauga2009,wu2000,kim2004,kim2007,pushkin2013,jalali2015}.
For instance, swarms of microswimmers can enhance the diffusion of constituents in a fluid~\cite{wu2000} or the mixing of several fluids in microfluidic devices~\cite{kim2004,kim2007}.
In general, a microswimmer generates fluid motion over a long range, producing stirring.
This stirring effect is most pronounced if swimming direction is frequently changed in a random way, in a so-called run and tumble motion~\cite{lin2011,pushkin2013}.
In the case of a remote-controlled artificial microswimmer like the magnetocapillary swimmer, parameters such as the typical run length between reorientations, the distribution of run lengths and the smoothness of the turns can be varied.
Another possibility is to generate local stirring by rotational motion or motion on a closed loop~\cite{jalali2015}.
This could prove useful in confined environments, such as small droplets or microfluidic devices, where stirring by run and tumble motion loses efficiency~\cite{pushkin2014}.
Furthermore, assembling an array of rotating objects has been shown to be a viable strategy for mixing in microfluidic devices~\cite{lu2002}.
Indeed, like the Reynolds number, the relative importance of advection and diffusion in a fluid, called the P\'eclet number, is proportional to the typical length scale and velocity of the flow.
Therefore, at the micro scale, molecular diffusion becomes the preferred mixing mechanism over advection.
In this case, the volume of fluid mixed can be defined as
\begin{equation}
V=\mathcal{A}\sqrt{Dt},
\label{volmix}
\end{equation}
where $D$ is the mass diffusion coefficient and $\mathcal{A}$ the contact area between the fluids to mix~\cite{lu2002}.
Maximizing contact area $\mathcal{A}$ is therefore a possible mixing strategy.
With this aim in mind, an investigation of the rotational regimes of magnetocapillary swimmers and their possible use as a micromixing device could prove useful.
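Before turning to the experiment, a short order-of-magnitude check is useful. The sketch below estimates the P\'eclet number and the diffusively mixed volume of Eq.~(\ref{volmix}); all input values are rough assumptions typical of a dye in water at the scale of the swimmer.
\begin{verbatim}
import numpy as np

U, L_flow = 1e-4, 5e-4   # flow speed (m/s), length scale (m), assumed
D_mass = 1e-9            # dye diffusion coefficient in water (m^2/s)
print("Pe ~", U*L_flow/D_mass)   # ~50: advection stretches interfaces

A_contact, t = 1e-5, 10.0        # contact area (m^2), time (s), assumed
print("V ~", A_contact*np.sqrt(D_mass*t), "m^3")  # Eq. (volmix)
\end{verbatim}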
The simplest way to produce rotational motion in a magnetocapillary swimmer is to submit it to a strong magnetic field rotating in the horizontal plane, causing the particles to come into contact and rotate.
In Fig.~\ref{mixing}, a triangular swimmer has been brought to the intersection of three fluids, each being the same mixture of water and glycerol with the addition of a red, a yellow and a blue dye.
A strong horizontal field $\vec{B}_h$ causes the particles to come into contact and form a line.
This magnetic rod follows the orientation of $\vec{B}_h$ in the manner of a compass needle.
After about 8 clockwise turns of the magnetic field, fluid close to the swimmer has been entrained, forming a spiral.
Contact area $\mathcal{A}$ has increased, as thin layers of each colour are intertwined (see white arrow in panel (c)).
The inverse rotation of $B_h$ is then applied to rotate the rod about 8 turns counter-clockwise.
On a circle of roughly the size of the swimmer, mixing by molecular diffusion has occurred, as evidenced by the grey area around the particles (see white arrow on panel (d)).
Further away from the swimmer, the fluid is back to its initial position due to the reversibility of laminar flows (see black arrows).
This simple mixing device illustrates the potential of artificial microswimmers to produce mixing at specific locations.
Note that, similarly to the swimming regimes, a rotational motion of a magnetocapillary swimmer can also be obtained from the non-reciprocal motion of the particles.
This should be further investigated given the potential for creating more complex flows, as each particle follows an individual orbit, as opposed to the solid-body rotation shown in Fig.~\ref{mixing}.
\section{Conclusion}
In summary, combining the capillary interaction between floating objects with the interaction between induced magnetic dipoles leads to the formation of ordered structures by self-assembly.
Depending on the number of particles in the assembly and the amplitude and orientation of the imposed magnetic fields, a variety of structures can be observed.
The adopted configurations can also depend on the history of the system, in other words, hysteresis is observed.
This rich energy landscape partly explains the appearance of non-time-reversible deformations under periodic variations of the applied fields.
Some of these deformation modes lead to a net motion at low Reynolds number.
Furthermore, the swimming trajectory can be controlled in the plane of the interface.
A variety of applications can be envisioned for controllable microswimmers, based on their ability to move fluid and interact with micro-objects.
Examples include drug delivery, manipulation and assembly of small components, mixing and pumping.
This has been illustrated here, first by the capture, controlled transport and release of a floating cargo.
The capture is made by capillary interaction while the release of the cargo is achieved by sinking the swimmer with a magnetic gradient.
As is, this method is limited to cargoes whose capillary charge has the same sign as the swimmer's.
Furthermore, cargoes significantly larger than the particles of the swimmer can impede their movement.
To overcome this, many-particle assemblies must be used, which should thus be studied in more detail.
Indeed, changing the number of particles in an assembly can drastically change the equilibrium configurations and the dynamics.
A second possible application, the mixing of fluids, has been illustrated.
The rotation of a swimmer entrains fluid, intertwining the components to mix.
Such a process could be used to generate local mixing at a precise location, or to mix larger quantities by combining several mixers.
The flow produced by the rotation of various assemblies should be further studied with this aim in mind.
\section*{Acknowledgements}
This work was financially supported by the University of Li\`ege (Grant No. FSRC 11/36). GG thanks FRIA for financial support.
\section*{References}
\begin{abstract}
The conformational states of a semiflexible polymer enclosed in a volume $V:=\ell^{3}$ are studied as stochastic realizations of paths using the stochastic curvature approach developed in [Phys. Rev. E 100, 012503 (2019)], in the regime $3\ell/\ell_{p}> 1$, where $\ell_{p}$ is the persistence length. The cases of a semiflexible polymer enclosed in a cube and in a sphere are considered. In these cases, we explore the Spakowitz-Wang type polymer shape transition, where a critical persistence length distinguishes between an oscillating and a monotonic phase at the level of the mean-square end-to-end distance. This shape transition provides evidence of a universal signature of the behavior of a semiflexible polymer confined in a compact domain.
\section*{Keywords:} semiflexible polymer, stochastic curvature, shape transition, critical persistence length, mean square end-to-end distance.
\end{abstract}
\section{Introduction}
Semiflexible polymer is a term coined to describe a variety of physical systems involving linear molecules. The most popular polymers are industrial plastics, like polyethylene or polystyrene, with various applications in daily life \cite{RONCA2017247, polystyrene}.
Another prominent example is the DNA compacted in the nucleus of cells, or viral DNA/RNA packed in capsids \cite{Fal-Cifra2010, Fal-Locker2006}. These last examples are of particular interest since they involve confined semiflexible polymers. Indeed, the functionality of biopolymers is governed by their conformation, which in turn is considerably modified in the geometrically confined or crowded environment inside the cell \cite{Fal-Koster2008, Fal-Reisner2005, Fal-Benkova2017}.
A well-known theoretical framework used to describe the fundamental properties of a semiflexible polymer is the worm-like chain model (WLC), which pictures a polymer as a thin wire with a flexibility given by its bending rigidity constant $\alpha$ \cite{PSaito-Saito1967}. The central quantity in this model is the persistence length, defined by $2\alpha/(k_{B}T(d-1))$ \cite{Kleinert_2006, benetatos}, where $d$ is the space dimension; here, however, we simply use $\ell_{p}:=\alpha/(k_{B}T)$\footnote{For the sake of notation, the dimension of the space is hidden in the persistence length definition. In those cases where an explicit dependence on the dimension is needed, it should be adequately scaled by the factor $2/(d-1)$.}, which is the characteristic length along the chain over which the directional correlation between segments disappears. $k_{B}T$ is the thermal energy, with $k_{B}$ the Boltzmann constant and $T$ the bath temperature \cite{19955-1}.
In the absence of thermal fluctuations, when $\alpha\gg k_{B}T$, the conformations of the polymer are well understood through different curve configurations determined by variational principles \cite{Psim-Guven2012, Polcla-Guven2014}.
For the WLC model, the bending energy functional is given by
\begin{equation}
H[\mathbf{R}]=\frac{\alpha}{2}\int ds\kappa^2(s),
\end{equation}
where ${\bf R}(s)$ is a polymer configuration and $\kappa(s)$ is the curvature of the chain, with $s$ the arc-length parameter.
Additional terms can be added to the Hamiltonian to account for other effects including multibody interactions, external fields, and constraints on the chain dimensions \cite{SW2005, Polcon-Chen2016}.
When thermal fluctuations are relevant, $\alpha\simeq k_{B}T$, it is usual to introduce a statistical mechanics description.
Since $H[\mathbf{R}]$ represents the bending energy for a curve configuration ${\bf R}$, the most natural approach is to define the canonical probability density
\begin{eqnarray}
\mathcal{P}\left(\boldsymbol{R}\right)\mathcal{D}\boldsymbol{R}:=\frac{1}{\mathcal{Z}_{\rm c}}\exp\left(-\frac{\ell_{p}}{2}\int ds \kappa^2\left(s\right)\right)\mathcal{D}\boldsymbol{R},
\label{ProbDensityCanonical}
\end{eqnarray}
where $\mathcal{Z}_{\rm c}$ is the canonical partition function, and $\mathcal{D}{\bf R}$ is an appropriate functional measure. In this description, the theory turns out to be a one-dimensional statistical field theory.
Nonetheless, the theory is not easy to tackle since $\kappa\left(s\right)$ is non-linear in ${\bf R}$.
To avoid this difficulty, a different perspective was introduced by Saito et al. \cite{PSaito-Saito1967}, who studied the following probability density function
\begin{eqnarray}
\mathcal{P}\left(\boldsymbol{T}\right)\mathcal{D}\boldsymbol{T}:=\frac{1}{\mathcal{Z}_{\rm s}}\exp\left(-\frac{\ell_{p}}{2}\int ds \kappa^2\left(s\right)\right)\mathcal{D}\boldsymbol{T},
\label{ProbDensityCanonicalSaito}
\end{eqnarray}
instead of Eq.~\eqref{ProbDensityCanonical}. Here $\mathcal{Z}_{\rm s}$ is Saito's partition function and $\mathcal{D}{\bf T}$ is an appropriate functional measure for the tangent direction of a given polymer configuration ${\bf R}$. Saito's partition function can be computed since $\kappa^{2}(s)=\left({d{\bf T}(s)/ds}\right)^{2}$; one can thus relate $\mathcal{Z}_{\rm s}$ to the Feynman partition function for a quantum particle on the spherical surface described by ${\bf T}^{2}=1$. When the semiflexible polymer is in an open Euclidean space, Saito's approach works very well. For instance, it reproduces the standard results of Kratky-Porod \cite{PSaito-Kratky1949}, among other results \cite{PSaito-Saito1967}. However, when the semiflexible polymer is confined to a bounded region of space, Saito's approach is difficult to use, with a few exceptional cases such as semiflexible polymers confined to a spherical shell \cite{SW2005}.
For semiflexible polymers in the plane, an alternative theoretical approach to the above formalisms was introduced in \cite{Castro-JE}. It consists of postulating that each conformational realization of any polymer in the plane is described by a stochastic path satisfying the stochastic Frenet equations, defined by
$\frac{d}{ds}\mathbf{R}(s)=\mathbf{T}(s)$, and
$\frac{d}{ds} \mathbf{T}(s)= \kappa(s) {\bf N}\left(s\right)$,
where $\mathbf{R}(s)$ is the configuration of the polymer, $\mathbf{T}(s)$ is the tangent vector to the curve describing the chain at $s$, $ {\bf N}(s):=\mathbf{\epsilon} \mathbf{T}(s)$ is the normal stochastic unit vector, with $\mathbf{\epsilon}$ a rotation by an angle of $\pi/2$, and $\kappa(s)$ is the stochastic curvature, distributed according to the following probability density function
\begin{eqnarray}
\mathcal{P}\left(\kappa\right)\mathcal{D}\kappa:=\frac{1}{\mathcal{Z}_{\rm s-c}}\exp\left(-\frac{\ell_{p}}{2}\int ds \kappa^2\left(s\right)\right)\mathcal{D}\kappa,
\label{ProbDensityCanonicalSTochastic}
\end{eqnarray}
where $\mathcal{Z}_{\rm s-c}$ is the partition function in the stochastic curvature formalism, and $\mathcal{D}\kappa$ is an appropriate measure for the curvature. This implies, in particular, a white-noise structure, i.e., $\langle \kappa(s) \rangle=0$ and $\langle \kappa(s) \kappa(s')\rangle=\delta(s-s')/\ell_p $ \cite{Castro-JE}.
This theoretical framework successfully explains, from first principles, the Kratky-Porod results for free chains in the open 2D plane. Moreover, it correctly describes the mean square end-to-end distance for semiflexible polymers confined to a square box, a key descriptor of the statistical behavior of a polymer chain.
In the present work, we extend the stochastic curvature approach to semiflexible polymers in the three-dimensional space $\mathbb{R}^{3}$. In particular, we analyze the conformational states of a semiflexible polymer enclosed in a bounded region of this space. The polymer is in a thermal bath at uniform temperature. The shapes adopted by the polymer are studied through the mean-square end-to-end distance as a function of the polymer total length as well as its persistence length. Specifically, we analyze the cases of a polymer confined to a cube of side $a$ and a sphere of radius $R$.
The plan of this paper is as follows. In Sect.~\ref{sec:preliminary}, we introduce the stochastic Frenet equations for semiflexible polymers in three-dimensional space, and by a standard procedure we derive a corresponding Fokker-Planck equation. In particular, the Kratky-Porod result for polymers in an open 3D space is obtained. Sect. \ref{sec:compact} contains the derivation of the mean square end-to-end distance for semiflexible polymers confined to a compact domain.
In Sect. \ref{sec:results}, we present the analysis of the mean square end-to-end distance for the cases when the compact domain corresponds to a cube of side $a$ and to a sphere of radius $R$.
Finally, Sect. \ref{sec:conclusions} contains our concluding remarks.
\section{Preliminary notation and semiflexible polymers in 3D}\label{sec:preliminary}
Let us consider a polymer in a three-dimensional Euclidean space $\mathbb{R}^{3}$ as a space curve $\gamma$, ${\bf R}:I\subset \mathbb{R}\to\mathbb{R}^{3}$, parametrized by arc length $s$. For each point $s\in I$, a Frenet-Serret trihedron can be defined in terms of the vector basis $\{{\bf T}(s), {\bf N}(s), {\bf B}(s)\}$, where ${\bf T}(s)=d{\bf R}/ds$ is the tangent vector, whereas ${\bf N}(s)$ and ${\bf B}(s)$ are the normal and bi-normal vectors, respectively. It is well known that each regular curve $\gamma$ satisfies the Frenet-Serret structure equations, namely, $d{\bf T}/ds=\kappa(s){\bf N}$, $d{\bf N}/ds=-\kappa(s) {\bf T}-\tau(s) {\bf B}$ and $d{\bf B}/ds=\tau(s){\bf N}$, where $\kappa(s)$ and $\tau(s)$
are the curvature and the torsion of the space curve. In addition, the fundamental theorem of space curves states that, given continuous functions $\kappa(s)$ and $\tau(s)$, one can determine the curve shape uniquely, up to a Euclidean rigid motion \cite{Fal-Montiel2009}.
\subsection{Stochastic curvature approach in 3D}
In order to study the conformational states of a semiflexible polymer, we adapt the stochastic curvature approach introduced in \cite{Castro-JE} to the case of semiflexible polymers in 3D Euclidean space. For the 2D Euclidean space, the formalism starts by postulating that each conformational realization of any polymer is described by a stochastic path satisfying the stochastic Frenet equations. In the 3D case, it is enough to consider the following stochastic equations
\begin{subequations}\label{stoch-eq}
\begin{eqnarray}
\frac{d}{ds}{\bf R}(s)&=&{\bf T}(s),\label{stoch-eq0}\\
\frac{d}{ds}{\bf T}(s)&=&\mathbb{P}_{\bf T}\boldsymbol{\kappa}(s),
\end{eqnarray}
\end{subequations}
where ${\bf R}(s)$, ${\bf T}(s)$ and $\boldsymbol{\kappa}(s)$ are now random variables. $\boldsymbol{\kappa}(s)$ is named here the stochastic vectorial curvature. A normal projection operator $\mathbb{P}_{\bf T}=\boldsymbol{1}-{\bf T}\otimes {\bf T}$ has also been introduced, such that ${\bf T}(s)\cdot \frac{d}{ds}{\bf T}(s) =0$. According to these equations, it can be shown that $\left|{\bf T}(s)\right|$ is a constant that can be fixed to unity, where $\left|~~\right|$ is the standard 3D Euclidean norm. The remaining geometrical notions also turn into random variables as follows. The stochastic curvature is defined by $\kappa(s):=\left|\boldsymbol{\kappa}(s)\right|$. The stochastic normal and bi-normal vectors are defined by ${\bf N}(s):=\boldsymbol{\kappa}(s)/\kappa(s)$ and ${\bf B}(s):={\bf T}(s)\times \boldsymbol{\kappa}(s)/\kappa(s)$, respectively. In addition, the stochastic torsion is defined through the equation $\tau(s):={\bf N}(s)\cdot \frac{d}{ds}{\bf B}(s)$.
In addition to the stochastic equations (\ref{stoch-eq}), the random variable $\boldsymbol{\kappa}(s)$ is distributed according to the probability density function
\begin{eqnarray}
\mathcal{P}\left(\boldsymbol{\kappa}\right)\mathcal{D}\boldsymbol{\kappa}:=\frac{1}{\mathcal{Z}_{\rm s-c}}\exp\left(-\beta H\left[{\boldsymbol{\kappa}}\right]\right)\mathcal{D}\boldsymbol{\kappa},
\label{ProbDensity}
\end{eqnarray}
where $H\left[\boldsymbol{\kappa}\right]=\frac{\alpha}{2}\int \boldsymbol{\kappa}^2 ds$ is the bending energy, and $\alpha$ is the bending rigidity modulus. This energy functional corresponds to the continuous form of the WLC model \cite{PSaito-Saito1967}.
Also, in Eq. (\ref{ProbDensity}), $\mathcal{Z}_{\rm s-c}$ is an appropriate normalization constant, $\mathcal{D}{\boldsymbol{\kappa}}:=\prod_{i=1}^{3}\mathcal{D}\kappa_{i}$ is a functional measure, and $\beta=1/k_{B}T$ is the inverse of the thermal energy. The Gaussian structure of the probability density implies the zero mean $\left<\kappa_{i}(s)\right>=0$ and the following $3D$ fluctuation theorem
\begin{eqnarray}
\left<\kappa_{i}(s)\kappa_{j}(s^{\prime})\right>=\frac{1}{\ell_{p}}\delta_{ij}\delta(s-s^{\prime}),
\end{eqnarray}
where $\kappa_{i}(s)$ is the $i-$th component of the stochastic vectorial curvature $\boldsymbol{\kappa}(s)$.
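The stochastic equations (\ref{stoch-eq}), together with the fluctuation theorem above, lend themselves to direct numerical integration. The following sketch (our illustration, not part of the derivation) uses a simple Euler scheme in which each component of $\boldsymbol{\kappa}$ is drawn with variance $1/(\ell_{p}\Delta s)$, the discrete counterpart of the delta-correlated noise.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def end_to_end(L=10.0, lp=1.0, ds=1e-3):
    # Euler integration of Eqs. (stoch-eq) with projected Gaussian noise
    R, T = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(int(L/ds)):
        kappa = rng.normal(0.0, 1.0/np.sqrt(lp*ds), size=3)
        k_perp = kappa - (kappa @ T)*T   # projector P_T = 1 - T (x) T
        R = R + T*ds
        T = T + k_perp*ds
        T = T/np.linalg.norm(T)          # enforce |T| = 1
    return R

samples = [end_to_end() for _ in range(200)]
print(np.mean([r @ r for r in samples]))
\end{verbatim}
For $L=10\,\ell_{p}$ the estimate is close to $2\ell_{p}L-2\ell_{p}^{2}(1-e^{-L/\ell_{p}})\approx 18$, the Kratky-Porod value derived below.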
\subsection{From Frenet-Serret stochastic equations to Hermans-Ullman equation in 3D}
In this section, we present the Fokker-Planck formalism corresponding to the stochastic equations (\ref{stoch-eq}). This description allows us to determine an equation for the probability density function associated with the position and direction of the endings of the polymer
$P\left({\bf R}, {\bf T}\left.\right|{\bf R}^{\prime}, {\bf T}^{\prime}; s\right)=\left<\delta\left({\bf R}-{\bf R}(s)\right)\delta\left({\bf T}-{\bf T}(s)\right)\right>$, where ${\bf R}$ and ${\bf R}^{\prime}$ are the ending positions of the polymer, and ${\bf T}$ and ${\bf T}^{\prime}$ are the corresponding directions. The parameter $s$ is the polymer length.
Now, the stochastic Frenet-Serret equations (\ref{stoch-eq}) can be identified with a multi-dimensional stochastic differential equation in the Stratonovich sense; thus, applying the standard procedure \cite{Fal-Gardiner1986}, we find the following Fokker-Planck type equation
\begin{eqnarray}
\frac{\partial P}{\partial s}+\nabla\cdot\left({\bf T}~P\right)=\frac{1}{2\ell_{p}}\Delta_{g}P\label{FPeq2},
\end{eqnarray}
where ${\bf T}$ is identified with the unit normal vector on $S^{2}$, and thus satisfies the condition ${\bf T}^{2}=1$. The operator $\Delta_{g}$ is the Laplace-Beltrami operator of the sphere $S^{2}$. Similarly to the situation for semiflexible polymers confined to a plane \cite{Castro-JE}, this equation is exactly the same as the one obtained by Hermans and Ullman in 1952 \cite{PSaito-Hermans1952}, where the heuristic parameter they included can now be identified exactly with $1/(2\ell_{p})$. In addition, we can make contact with Saito's approach \cite{PSaito-Saito1967} by considering the marginal probability density function
\begin{eqnarray}
\mathcal{Z}_{\rm s}\left({\bf T}, {\bf T}^{\prime}, s\right)\propto\int d^{3}{\bf R}d^{3}{\bf R}^{\prime} P\left({\bf R}, {\bf T}\left.\right|{\bf R}^{\prime}, {\bf T}^{\prime}, s\right).
\end{eqnarray}
Using the Hermans-Ullman equation, we can show that $\mathcal{Z}_{\rm s}$ satisfies a diffusion equation on a spherical surface with diffusion coefficient equal to $1/(2\ell_{p})$ \cite{PSaito-Saito1967}, that is,
\begin{eqnarray}
\frac{\partial \mathcal{Z}_{\rm s}}{\partial s}=\frac{1}{2\ell_{p}}\Delta_{S^{2}}\mathcal{Z}_{\rm s}.
\end{eqnarray}
An immediate consequence of the above equation is the exponential decay of the
correlation function between the two ending directions $C(L):=\left<{\bf T}(L)\cdot{\bf T}(0)\right>=\exp\left(-L/\ell_{p}\right)$, where $L$ is the polymer length. Indeed, this expectation value satisfies the following equation
$\frac{d}{ds}C(s)=\frac{1}{2\ell_{p}}\frac{1}{4\pi}\int_{S^{2}}d\Omega \left({\bf T}(s)\cdot{\bf T}(s^{\prime})\right)\Delta_{S^{2}}\mathcal{Z}_{\rm s}$,
where $d\Omega$ is the solid angle and $4\pi$ is a normalization constant. Now, we can integrate the r.h.s. of the last equation twice by parts and, since $S^{2}$ is a compact manifold, the boundary terms vanish. Also, using $\Delta_{S^{2}}{\bf T}=-2{\bf T}$ on the unit sphere, it is found that the correlation function satisfies the ordinary differential equation $\frac{d}{ds}C(s)=-\frac{1}{\ell_{p}}C(s)$. Now, we solve this equation using the initial condition $C(s^{\prime}=0)=1$, with the length of the polymer set by $s=L$.
\subsection{Modified telegrapher equation }
As in the two-dimensional case \cite{Castro-JE}, we carry out a multipolar decomposition of the HU equation in 3D. This consists of expanding the probability density function $P\left({\bf R}, {\bf T}\left.\right|{\bf R}^{\prime}, {\bf T}^{\prime}; s\right)$ in a linear combination of the cartesian tensor basis elements $1$, $T_{i}$, $T_{i}T_{j}-\frac{1}{3}\delta_{ij}$, $T_{i}T_{j}T_{k}-\frac{1}{5}\delta_{\left(ij\right.}T_{\left.k\right)}$, $\cdots$, where the symbol $(ijk)$ means symmetrization of the indices $i, j, k$, that is, $\delta_{\left(ij\right.}T_{\left.k\right)}=\delta_{ij}T_{k}+\delta_{jk}T_{i}+\delta_{ki}T_{j}$, and whose expansion coefficients are hydrodynamic-like tensor fields. These tensors are $\rho({\bf R}, s)$, which describes how the ending positions are distributed in space; $
\mathbb{P}\left({\bf R}, s\right)$, the local average of the polymer direction; and $\mathbb{Q}_{ij}({\bf R}, s)$, which describes how the directions are correlated along the points of space, etc. These tensors are the moments associated with the cartesian tensor basis, {\it e.g.} $\mathbb{P}_{i}=\int \frac{d\Omega}{4\pi}T_{i}P\left({\bf R},{\bf T}, s\right)$. These fields satisfy the following hierarchy equations
\begin{eqnarray}
\frac{\partial \rho({\bf R}, s)}{\partial s}&=&-\partial_{i}\mathbb{P}^{i}\left({\bf R}, s\right),\label{1}\\
\frac{\partial \mathbb{P}_{i}({\bf R}, s)}{\partial s}&=&-\frac{1}{\ell_{p}}\, \mathbb{P}_{i}({\bf R}, s)-\frac{1}{3}\partial_{i}\rho({\bf R},s)-\partial^{j}\mathbb{Q}_{ij}\left({\bf R}, s\right),\label{2}\\
\frac{\partial \mathbb{Q}_{ij}\left({\bf R}, s\right)}{\partial s}&=&-\frac{3}{\ell_{p}} \mathbb{Q}_{ij}\left({\bf R}, s\right)-\frac{1}{5}\mathbb{T}_{ij}\left({\bf R}, s\right)-\partial^{k}\mathbb{R}_{ijk}\left({\bf R}, s\right),\label{3}\end{eqnarray}
where $\mathbb{T}^{ij}=\partial^{i}\mathbb{P}^{j}+\partial^{j}\mathbb{P}^{i}-\frac{2\delta^{ij}}{3}\partial_{k}\mathbb{P}^{k}$.
Now, by combining Eqs. (\ref{1}) and (\ref{2}) we can obtain a modified telegrapher equation
\begin{eqnarray}
\frac{\partial^2\rho\left({\bf R}, s\right)}{\partial s^2}+\frac{1}{\ell_{p}}\frac{\partial \rho\left({\bf R}, s\right)}{\partial s}=\frac{1}{3}\nabla^2\rho\left({\bf R}, s\right)+\partial_{i}\partial_{j}\mathbb{Q}^{ij}\left({\bf R}, s\right),\label{modTel}
\end{eqnarray}
where $\nabla^{2}$ is the 3D Laplacian. From a mean-field point of view, one can consider the preceding equation as an equation for the probability density function $\rho\left({\bf R}, s\right)$ in the presence of a mean field $\mathbb{Q}_{ij}\left({\bf R}, s\right)$. In particular, $\mathbb{Q}_{ij}\left({\bf R}, s\right)$ does not play any role for the mean-square end-to-end distance of a semiflexible polymer in the open Euclidean 3D space. Indeed, let us define the end-to-end distance as $\delta{\bf R}:={\bf R}-{\bf R}^{\prime}$; the mean-square end-to-end distance is then given by
\begin{eqnarray}
\left<\delta{\bf R}^{2}\right>_{\mathcal{D}}\equiv \int_{\mathcal{D}\times \mathcal{D}}\rho\left(\left.{\bf R}\right|{\bf R}^{\prime}, s\right)\delta{\bf R}^{2}d^{3}{\bf R}d^{3}{\bf R}^{\prime}.
\end{eqnarray}
Now, we implement the same procedure used in \cite{Castro-JE} to calculate the mean-square end-to-end distance in the open three-dimensional space $\mathcal{D}= \mathbb{R}^{3}$, using the modified telegrapher equation (\ref{modTel}) and the traceless property of $\mathbb{Q}_{ij}\left({\bf R},s\right)$. We can reproduce the standard Kratky-Porod result \cite{PSaito-Kratky1949} for a semiflexible polymer in three-dimensional space \cite{PSaito-Hermans1952, PSaito-Kratky1949}
\begin{eqnarray}
\left<\delta{\bf R}^{2}\right>_{\mathbb{R}^{3}}=2\ell_{p}L
-2\ell_{p}^{2}\left(1-\exp\left(-\frac{L}{\ell_{p}}\right)\right),\end{eqnarray}
with the typical well-known asymptotic limits: the diffusive regime $\left<\delta{\bf R}^{2}\right>\simeq 2\ell_{p}L$ for $L\gg \ell_{p}$, and the ballistic regime $\left<\delta{\bf R}^{2}\right>\simeq L^2$ for $L\ll \ell_{p}$.
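A quick numerical evaluation (a sketch of ours, with $\ell_{p}$ set to unity) makes both regimes explicit:
\begin{verbatim}
import numpy as np

def msd_kp(L, lp=1.0):   # Kratky-Porod mean-square end-to-end distance
    return 2*lp*L - 2*lp**2*(1 - np.exp(-L/lp))

for L in (0.01, 0.1, 10.0, 100.0):
    print(L, msd_kp(L), L**2, 2*L)   # exact vs ballistic vs diffusive
\end{verbatim}
For $L\ll \ell_{p}$ the exact value tracks $L^{2}$, while for $L\gg \ell_{p}$ it tracks $2\ell_{p}L$, as stated.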
\section{Semiflexible polymer in a compact domain}\label{sec:compact}
In this section, we apply the hierarchy equations developed in the previous section in order to determine the conformational states of a semiflexible polymer confined to a compact volume domain of size $V$. From the hierarchy Eqs. (\ref{2}) and (\ref{3}), the tensors $\mathbb{P}_{i}({\bf R}, s)$ and $\mathbb{Q}_{ij}({\bf R}, s)$ damp out as $e^{-L/\ell_{p}}$ and $e^{-3L/\ell_{p}}$, respectively. Furthermore, if we consider that the semiflexible polymer is enclosed in a compact volume $V:=\ell^3$, with a typical length $\ell$, then as long as $3\ell/\ell_{p}$ is well above 1, we may assume that $\mathbb{Q}_{ij}({\bf R}, s)$ is uniformly distributed. This condition corresponds to truncating the hierarchy equations at the second level, that is, the only equations that survive in this approximation are Eqs. (\ref{1}) and (\ref{2}).
In the latter situation, the distribution $\rho({\bf R}, s)$ of the endings of the semiflexible polymer is described by the following telegrapher's equation
\begin{eqnarray}
\frac{\partial^2\rho\left({\bf R}, s\right)}{\partial s^2}+\frac{1}{\ell_{p}}\frac{\partial \rho\left({\bf R}, s\right)}{\partial s}=\frac{1}{3}\nabla^2\rho\left({\bf R}, s\right),
\label{tel3d}\end{eqnarray}
that satisfies the initial conditions
\begin{eqnarray}
\lim_{s\to 0}\rho\left({\bf R}\right.\left|{\bf R}^{\prime}, s\right)&=&\delta^{(3)}\left({\bf R}-{\bf R}^{\prime}\right),\label{cond1}\\
\lim_{s\to 0}\frac{\partial\rho\left({\bf R}\right.\left|{\bf R}^{\prime}, s\right)}{\partial s}&=&0.\label{cond2}
\end{eqnarray}
The condition (\ref{cond1}) means that the polymers' ends coincide when the polymer length is zero, whereas (\ref{cond2}) means that the polymer length does not change spontaneously. In addition, since the polymer is enclosed in the compact domain $\mathcal{D}$ of volume $V(\mathcal{D})$, we also impose a Neumann boundary condition
\begin{eqnarray}
\left.\nabla\rho\left({\bf R}\right.\left|{\bf R}^{\prime}, s\right)\right|_{{\bf R}, {\bf R}^{\prime}\in \partial D}=0, ~~~\forall s,
\end{eqnarray}
where $\partial D$ is the surface bounding the domain $\mathcal{D}$. This boundary condition means that the polymer neither crosses the boundary nor wraps around the domain. The procedure to obtain a solution of the above telegrapher's equation (\ref{tel3d}) is identical to the one developed in \cite{Castro-JE}; we just have to take into account the right factors and dimensionality considerations. In this sense, the probability density function is given by
\begin{eqnarray}
\rho\left(\left.{\bf R}\right|{\bf R}^{\prime};s\right)=\frac{1}{V({\mathcal{D}})}\sum_{{\bf k}\in I}G\left(\frac{s}{2\ell_{p}}, \frac{4\ell^{2}_{p}}{3}\lambda_{{\bf k}}\right)\psi^{\dagger}_{{\bf k}}\left({\bf R}\right)\psi_{{\bf k}}\left({\bf R}^{\prime}\right)\label{density}
\end{eqnarray}
where we recall from \cite{Castro-JE}
\begin{eqnarray}
G(v,w)=e^{-v}\left[\cosh\left(v\sqrt{1-w}\right)+\frac{\sinh\left(v\sqrt{1-w}\right)}{\sqrt{1-w}}\right],
\end{eqnarray}
and $\{\psi_{\bf k}\}$ and $\{\lambda_{{\bf k}}\}$ are a complete set of orthonormal eigenfunctions and the set of corresponding eigenvalues of the Laplace operator $-\nabla^{2}$ in $\mathbb{R}^{3}$. Notice that each $\psi_{\bf k}({\bf R})$ must satisfy the Neumann boundary condition $\left.\nabla \psi_{{\bf k}}\right|_{{\bf R}\in \partial \mathcal{D}}=0$. In addition, it is known \cite{Fal-Feshbach1953, Fal-Chavel1984} that for the Neumann Laplacian eigenvalue problem there is a zero eigenvalue $\lambda_{0}=0$ corresponding to a positive eigenfunction given by $\psi_{0}={1/\sqrt{V}}$.
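For later numerical evaluations it is convenient to have $G(v,w)$ in computable form. The sketch below is our own implementation: complex arithmetic covers the regime $w>1$, where $\sqrt{1-w}$ becomes imaginary and $\cosh/\sinh$ turn into $\cos/\sin$, and the $w\to 1$ limit $\sinh(vr)/r\to v$ is handled separately.
\begin{verbatim}
import numpy as np

def G(v, w):
    # G(v, w) as defined above; real-valued for real v >= 0 and real w
    r = np.sqrt(complex(1.0 - w))
    if abs(r) < 1e-12:                 # w -> 1: sinh(v r)/r -> v
        return float(np.exp(-v)*(1.0 + v))
    return float((np.exp(-v)*(np.cosh(v*r) + np.sinh(v*r)/r)).real)

print(G(0.0, 5.0))    # = 1: every mode starts at full weight at s = 0
print(G(10.0, 5.0))   # -> 0: all k != 0 modes decay with polymer length
\end{verbatim}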
Now, using (\ref{density}), the mean-square end-to-end distance $\left<\delta{\bf R}^{2}\right>_{\mathcal{D}}$ can be computed in the standard fashion by\begin{eqnarray}
\left<\left(\delta {\bf R}\right)^{2}\right>_{\mathcal{D}}=\sum_{{\bf k}\in I}a_{k}G\left(\frac{s}{2\ell_{p}}, \frac{4\ell^{2}_{p}}{3}\lambda_{{\bf k}}\right),
\end{eqnarray}
where the coefficients $a_{k}$ are obtained from
\begin{eqnarray}
a_{k}=\frac{1}{V(\mathcal{D})}\int_{\mathcal{D}\times \mathcal{D}}\left({\bf R}-{\bf R}^{\prime}\right)^{2}\psi^{\dagger}_{{\bf k}}({\bf R})\psi_{{\bf k}}({\bf R}^{\prime})d^{3}{\bf R}d^{3}{\bf R}^{\prime}.
\end{eqnarray}
A further simplification follows from squaring the end-to-end distance inside the last integral: it is not difficult to see that for the square terms ${\bf R}^{2}$ and ${\bf R}^{\prime 2}$ in $({\bf R}-{\bf R}^{\prime})^{2}$ only the zero mode contributes, thus we have
\begin{eqnarray}
\left<\left(\delta{\bf R}\right)^{2}\right>_{\mathcal{D}}=2\sigma^2\left({\bf R}\right)-\frac{2}{V\left(\mathcal{D}\right)}\sum_{{\bf k}\neq 0}{\bf r}^{*}_{{\bf k}}\cdot {\bf r}_{{\bf k}}~G\left(\frac{s}{2\ell_{p}}, \frac{4\ell^{2}_{p}}{3}\lambda_{{\bf k}}\right),\label{MSD}
\end{eqnarray}
where $\sigma^2\left({\bf R}\right):=\left<{\bf R}^{2}\right>_{g}-\left<{\bf R}\right>^{2}_{g}$ is called the mean-square end position, $\left<\cdots\right>_{g}:=\frac{1}{V\left(\mathcal{D}\right)}\int_{\mathcal{D}}d^{3}{\bf R}\cdots$ is termed the geometric average, and the factor ${\bf r}_{{\bf k}}:=\int_{\mathcal{D}}{\bf R}\psi_{{\bf k}}\left({\bf R}\right)d^{3}{\bf R}$ for ${\bf k}\neq 0$. The factor ${\bf r}_{{\bf k}}$ can be written in a simpler form for Neumann boundary conditions, since $\psi_{{\bf k}}=-\frac{1}{\lambda_{\bf k}}\nabla^{2}\psi_{{\bf k}}$ and, integrating by parts, this factor is expressed in terms of a boundary integral
\begin{eqnarray}
{\bf r}_{{\bf k}}=\frac{1}{\lambda_{\bf k}}\oint_{\partial \mathcal{D}}dS~{\bf n}\psi_{\bf k}\left({\bf R}_{S}\right), \label{expresion}
\end{eqnarray}
where ${\bf R}_{S}\in \partial\mathcal{D}$ and $dS$ is the area element of $\partial \mathcal{D}$. Since the function $G\left(v, w\right)$ decays exponentially as the polymer length increases, we can convince ourselves that twice the mean-square end position corresponds to the saturation value of the mean-square end-to-end distance. An additional property of ${\bf r}_{\bf k}$ is the identity
\begin{eqnarray}
\frac{1}{V\left(\mathcal{D}\right)}\sum_{{\bf k}\neq 0}{\bf r}^{*}_{{\bf k}}\cdot{\bf r}_{{\bf k}}=\sigma^{2}\left({\bf R}\right).\label{equidentity}
\end{eqnarray}
This identity can be proved using the completeness relation of the eigenfunctions, that is, $\sum_{\bf k}\psi^{*}_{\bf k}\left({\bf R}\right)\psi_{\bf k}\left({\bf R}^{\prime}\right)=\delta^{(3)}\left({\bf R}^{\prime}-{\bf R}\right)$; as shown below, it allows us to prove that, in general, $\left<\left(\delta{\bf R}\right)^{2}\right>_{\mathcal{D}}$ starts at zero.
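Explicitly, summing over all modes and using the completeness relation gives
\begin{eqnarray}
\frac{1}{V\left(\mathcal{D}\right)}\sum_{{\bf k}}{\bf r}^{*}_{{\bf k}}\cdot{\bf r}_{{\bf k}}=\frac{1}{V\left(\mathcal{D}\right)}\int_{\mathcal{D}\times\mathcal{D}}{\bf R}\cdot{\bf R}^{\prime}~\delta^{(3)}\left({\bf R}-{\bf R}^{\prime}\right)d^{3}{\bf R}d^{3}{\bf R}^{\prime}=\left<{\bf R}^{2}\right>_{g},\nonumber
\end{eqnarray}
and subtracting the zero-mode term, for which ${\bf r}_{0}=\sqrt{V\left(\mathcal{D}\right)}\left<{\bf R}\right>_{g}$, leaves precisely $\sigma^{2}\left({\bf R}\right)$. Evaluating (\ref{MSD}) at $s=0$, where every $G$ equals one, then gives $\left<\left(\delta{\bf R}\right)^{2}\right>_{\mathcal{D}}=0$.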
\section{Results}\label{sec:results}
\subsection{Semiflexible polymer enclosed by a cube surface}
In this section, we provide results for the mean-square end-to-end distance for a semiflexible polymer enclosed in a cubic domain. The whole problem reduces to solving the eigenvalue problem $-\nabla^2\psi=\lambda\psi$ with Neumann boundary conditions in the compact domain $\mathcal{C}:=\left\{(x,y,z)\in \mathbb{R}^{3}: 0\leq x\leq a, 0\leq y\leq a, 0\leq z\leq a \right\}$, a cube of side $a$ in the positive octant. This problem is widely studied in mathematical physics \cite{Fal-Feshbach1953, Grebenkov}. The eigenfunctions in this case can be given by
\begin{eqnarray}
\psi_{\bf k}\left({\bf R}\right)=\frac{N_{nmp}}{a^{3/2}}\cos\left(\frac{\pi n}{a}x\right)\cos\left(\frac{\pi m}{a}y\right)\cos\left(\frac{\pi p}{a}z\right),
\end{eqnarray}
where $x, y$ and $z$ are the standard cartesian coordinates, and ${\bf R}=(x, y,z)$ is the usual vector position.
The eigenfunctions are enumerated by the collective index $n m p$, with $n, m, p=0, 1,2, \cdots$.
$N_{nmp}$ is a normalization constant with respect to the volume of the cube $V(\mathcal{D})=a^{3}$, whose values are given by $N_{000}=1$; $N_{n00}=N_{0n0}=N_{00n}=\sqrt{2}$, for $n\neq 0$; $N_{np0}=N_{n0p}=N_{0np}=2$, for $n, p\neq 0$; and $
N_{npm}=2\sqrt{2}$, for $n,m,p\neq 0$. The eigenvalues of the Laplacian are given by $\lambda_{{\bf k}}={\bf k}^2$, where ${\bf k}=\left(\frac{\pi n}{a}, \frac{\pi m}{a}, \frac{\pi p}{a}\right)$.
Now, we proceed to calculate ${\bf r}_{{\bf k}}$ using its definition, that is, ${\bf r}_{{\bf k}}=\int_{\mathcal{C}}{\bf R}\psi_{\bf k}\left({\bf R}\right)d^{3}{\bf R}$. The three components are given by
\begin{eqnarray}
\left( {\bf r}_{\bf k}\right)_{x}&=&-\sqrt{2}\frac{a^{5/2}}{n^2\pi^2}\left(1-(-1)^{n}\right)\delta_{m0}\delta_{p0}\nonumber\\
\left( {\bf r}_{\bf k}\right)_{y}&=&-\sqrt{2}\frac{a^{5/2}}{m^2\pi^2}\left(1-(-1)^{m}\right)\delta_{n0}\delta_{p0}\nonumber\\
\left( {\bf r}_{\bf k}\right)_{z}&=&-\sqrt{2}\frac{a^{5/2}}{p^2\pi^2}\left(1-(-1)^{p}\right)\delta_{n0}\delta_{m0}
\end{eqnarray}
In the following, we use the general expression (\ref{MSD}) for the mean-square end-to-end distance. The mean-square end position is easily calculated: $\sigma^{2}\left({\bf R}\right)=\frac{a^{2}}{4}$. Because of the Kronecker deltas in ${\bf r}_{\bf k}$, each component $\left({\bf r}_{\bf k}\right)_{i}$ contributes in the same way; taking into account the correct counting factor, the mean square end-to-end distance is
\begin{eqnarray}
\left<\delta{\bf R}^{2}\right>_{\mathcal{C}}=\frac{a^{2}}{2}-24a^2\sum_{k=1}^{\infty}\frac{\left(1-(-1)^{k}\right)}{k^{4}\pi^{4}}G\left(\frac{s}{2\ell_{p}}, \frac{4}{3}\left(\frac{\ell_{p}}{a}\right)^{2}\pi^{2}k^{2}\right). \label{MSDCube}
\end{eqnarray}
Following the same line of argument as in \cite{Castro-JE}, it is observed that $24\sum_{k=1}^{\infty}\frac{\left(1-(-1)^{k}\right)}{k^{4}\pi^{4}}=\frac{1}{2}$, consistently with (\ref{equidentity}); thus, up to a numerical error of $10^{-2}$, we claim that
{\begin{eqnarray}
\frac{\left<\delta{\bf R}^2\right>_{\mathcal{C}}}{a^2}&\simeq&\frac{1}{2}-\frac{1}{2}\exp\left(-\frac{L}{2\ell_{p}}\right)
\left\{ \cosh\left[\frac{L}{2\ell_{p}}\left(1-\frac{4\pi^2}{3}\frac{\ell^2_{p}}{a^2}\right)^{\frac{1}{2}}\right]\right.\nonumber\\&+&\left.\left(1-\frac{4\pi^2}{3}\frac{\ell^2_{p}}{a^2}\right)^{-\frac{1}{2}}\sinh\left[\frac{L}{2\ell_{p}}\left(1-\frac{4\pi^2}{3}\frac{\ell^2_{p}}{a^2}\right)^{\frac{1}{2}}\right]\right\}.
\label{approxx}
\end{eqnarray}}
Let us remark that, for any fixed value of $a$, the r.h.s of (\ref{approxx}), as a function of $L$, shows the existence of a critical persistence length $\ell_{p}^{*}=\sqrt{3}a/(2\pi)$ such that for all values of $\ell_{p}>\ell_{p}^{*}$ it exhibits an oscillating behavior, whereas for $\ell_{p}<\ell_{p}^{*}$ it is monotonically increasing. In Fig.~\ref{fig1}, we show the behavior of the mean-square end-to-end distance versus the length of the polymer for several values of the persistence length below and above $\ell_p^*$. Moreover, we also show sketches of conformational states corresponding to the monotonic and oscillating behaviors of the mean square end-to-end distance. In addition, it is noticeable that this expression has the same mathematical structure as the mean-square end-to-end distance found by Spakowitz and Wang \cite{SW2005} for semiflexible polymers wrapping a spherical shell, and recently for semiflexible polymers confined to a square box \cite{Castro-JE}.
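The transition is easy to verify numerically. The sketch below (ours; it reuses the complex-arithmetic evaluation of $G$ introduced above) evaluates the approximate closed form (\ref{approxx}) and classifies the curve as monotonic or oscillating on either side of $\ell_{p}^{*}$.
\begin{verbatim}
import numpy as np

def msd_cube(L, lp, a=1.0):          # Eq. (approxx), in units of a^2
    r = np.sqrt(complex(1.0 - (4*np.pi**2/3)*(lp/a)**2))
    v = L/(2*lp)
    return (0.5 - 0.5*np.exp(-v)*(np.cosh(v*r) + np.sinh(v*r)/r)).real

lp_star = np.sqrt(3)/(2*np.pi)       # critical value, in units of a
L = np.linspace(1e-3, 5.0, 2000)
for lp in (0.5*lp_star, 2.0*lp_star):
    y = np.array([msd_cube(s, lp) for s in L])
    print(lp/lp_star,
          "oscillating" if np.any(np.diff(y) < 0) else "monotonic")
\end{verbatim}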
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.3]{cubo.eps}
\caption{\small
Monotonic and oscillating behaviors of the mean-square end-to-end distance (Eq. (\ref{MSDCube})) for polymers with $\ell_{p}$ below [a)] and above [b)] the critical persistence length $\ell^{*}_{p}=\sqrt{3}a/(2\pi)$ in cubic confinement. Inside the plotting area we sketch the conformational states of each class of polymers.
}
\label{fig1}
\end{center}
\end{figure}
\subsection{Semiflexible polymer enclosed by a spherical surface}
In this section we provide results for the mean-square end-to-end distance of a semiflexible polymer enclosed inside a spherical domain. The problem reduces to solving the eigenvalue problem $-\nabla^2\psi=\lambda\psi$ with Neumann boundary condition on the compact domain $\mathcal{B}:=\left\{{\bf r}\in \mathbb{R}^{3}: {\bf r}^{2}\leq R^{2}\right\}$, a ball of radius $R$ centered at the origin. This problem is widely studied in mathematical physics \cite{Fal-Feshbach1953, Grebenkov}. The eigenfunctions in this case can be given in terms of spherical Bessel functions $j_{\ell}\left(x\right)$ and spherical harmonics $Y_{\ell m}\left(\theta, \varphi\right)$,
\begin{eqnarray}
\psi_{\ell m k}\left(r, \theta, \varphi\right)=N_{\ell k}~j_{\ell}\left(\alpha_{\ell k}\frac{r}{R}\right)Y_{\ell m}\left(\theta, \varphi\right)
\end{eqnarray}
where $r, \theta$ and $\varphi$ are the standard spherical coordinates. The factor $N_{\ell k}$ is a normalization constant with respect to the volume of the ball $\mathcal{B}$, given by
\begin{eqnarray}
N_{\ell k}=\frac{\sqrt{2}}{R^{3/2}}\frac{\alpha_{\ell k}}{j_{\ell}\left(\alpha_{\ell k}\right)\left(\alpha_{\ell k}^{2}-\ell\left(\ell+1\right)\right)^{1/2}}.\label{norma}
\end{eqnarray}
The coefficients $\alpha_{\ell k}$ are the roots of $\partial j_{\ell}\left(x\right)/\partial x$; using the identity $\ell j_{\ell-1}\left(x\right)-\left(\ell+1\right)j_{\ell+1}\left(x\right)=\left(2\ell+1\right)\partial j_{\ell}\left(x\right)/\partial x$, they satisfy the equation $\ell j_{\ell-1}\left(\alpha_{\ell k}\right)=\left(\ell+1\right)j_{\ell+1}\left(\alpha_{\ell k}\right)$. The eigenfunctions are enumerated by the collective index $\ell m k$, with $\ell=0, 1,2, \cdots$ counting the order of the spherical Bessel functions, $m=-\ell, -\ell+1, \cdots, \ell$, and $k=1,2,3, \cdots$ counting the zeros. The eigenvalues of the Laplacian are given by $\lambda_{\ell m k}={\alpha^{2}_{\ell k}/R^{2}}$, which are independent of the number $m$. Now, we proceed to calculate ${\bf r}_{\ell m k}$ using (\ref{expresion}). It is enough to calculate $\oint_{S^{2}} dS~{\bf n}~Y_{\ell m}\left(\theta, \varphi\right)$, since ${\bf n}\propto Y_{1m}$; thus $\oint_{S^{2}} dS~{\bf n}~Y_{1, \pm1}\left(\theta, \varphi\right)=-\sqrt{\frac{2\pi}{3}}R^{2}\left(\pm 1, i, 0\right)$ and $\oint_{S^{2}} dS~{\bf n}~Y_{1, 0}\left(\theta, \varphi\right)=2\sqrt{\frac{\pi}{3}}R^2\left(0,0,1\right)$. Now, we call $\alpha_{1k}:=\alpha_{k}$; then using (\ref{norma}) one has
\begin{eqnarray}
{\bf r}_{1, \pm 1, k}&=&-2\sqrt{\frac{\pi}{3}}\frac{R^{5/2}}{\alpha_{k}\left(\alpha^{2}_{k}-2\right)^{1/2}}\left(\pm 1, i, 0\right),\\
{\bf r}_{1, 0, k}&=&2\sqrt{\frac{2\pi}{3}}\frac{R^{5/2}}{\alpha_{k}\left(\alpha^{2}_{k}-2\right)^{1/2}}\left(0, 0, 1\right),
\end{eqnarray}
where the roots $\{\alpha_{k}\}$ satisfy the equation $j_{0}\left(\alpha_{k}\right)=2j_{2}\left(\alpha_{k}\right)$. Using the explicit forms of the spherical Bessel functions, the root condition is $F(\alpha_{k})=0$, where
\begin{eqnarray}
F\left(x\right)=\left(\frac{x^2}{2}-1\right)\sin x+x\cos x.\label{Raices}
\end{eqnarray}
In the following, we use the general expression (\ref{MSD}) for the mean-square end-to-end distance. We calculate the mean-square end position, $\sigma^{2}\left({\bf R}\right)=\frac{3}{5}R^2$, and use the factors ${\bf r}_{\ell m k}$; the mean-square end-to-end distance is then
\begin{eqnarray}
\left<\delta{\bf R}^{2}\right>_{\mathcal{B}}=\frac{6}{5}R^{2}-12R^2\sum_{k=1}^{\infty}\frac{1}{\alpha^2_{k}(\alpha_{k}^{2}-2)}G\left(\frac{s}{2\ell_{p}}, \frac{4}{3}\left(\frac{\ell_{p}}{R}\right)^{2}\alpha^{2}_{k}\right). \label{MSDSBall}
\end{eqnarray}
Following the same line of argument as in \cite{Castro-JE}, we observe numerically that
$12\sum_{k=1}^{N}\frac{1}{\alpha^{2}_{k}\left(\alpha_{k}^2-2\right)}\to 6/5$ as $N$ increases, consistent with Eq. (\ref{equidentity}). Thus, up to a numerical error of order $10^{-2}$, we claim that
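For completeness, the roots $\alpha_{k}$ and the saturation constant can be obtained with a few lines of Python/SciPy. The sketch below (ours, for illustration only) brackets the sign changes of $F$ on a fine grid and refines each with a standard root finder:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def F(x):
    # F(x) = (x^2/2 - 1) sin(x) + x cos(x); its positive zeros are alpha_k
    return (0.5 * x ** 2 - 1.0) * np.sin(x) + x * np.cos(x)

grid = np.linspace(0.5, 300.0, 600000)
vals = F(grid)
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
alpha = np.array([brentq(F, grid[i], grid[i + 1]) for i in idx])

print(alpha[0])                                                # ~ 2.08158
print(12.0 * np.sum(1.0 / (alpha ** 2 * (alpha ** 2 - 2.0))))  # -> 6/5
\end{verbatim}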
{\begin{eqnarray}
\frac{\left<\delta{\bf R}^2\right>_{\mathcal{B}}}{R^2}&\simeq&\frac{6}{5}-\frac{6}{5}\exp\left(-\frac{L}{2\ell_{p}}\right)
\left\{ \cosh\left[\frac{L}{2\ell_{p}}\left(1-\frac{4\alpha_{1}^2}{3}\frac{\ell^2_{p}}{R^2}\right)^{\frac{1}{2}}\right]\right.\nonumber\\&+&\left.\left(1-\frac{4\alpha_{1}^2}{3}\frac{\ell^2_{p}}{R^2}\right)^{-\frac{1}{2}}\sinh\left[\frac{L}{2\ell_{p}}\left(1-\frac{4\alpha_{1}^2}{3}\frac{\ell^2_{p}}{R^2}\right)^{\frac{1}{2}}\right]\right\}.
\label{approx2}
\end{eqnarray}}
Let us remark that for any fixed value of $R$, the r.h.s.\ of (\ref{approx2}), as a function of $L$, shows the existence of a critical persistence length, $\ell_{p}^{*}=\sqrt{3}R/(2\alpha_{1})$, with $\alpha_{1}\simeq 2.08158$ according to (\ref{Raices}), such that for all values of $\ell_{p}>\ell_{p}^{*}$ it exhibits an oscillating behavior, whereas for $\ell_{p}<\ell_{p}^{*}$ it is monotonically increasing. In Fig.~\ref{fig2}, we show the behavior of the mean-square end-to-end distance versus the length of the polymer for several values of the persistence length below and above $\ell_p^*$. Moreover, we also show sketches of the conformational states corresponding to the monotonic and oscillating behaviors of the mean-square end-to-end distance. Notably, this expression has the same mathematical structure as the mean-square end-to-end distance found by Spakowitz and Wang \cite{SW2005} for semiflexible polymers wrapping a spherical shell, and more recently for semiflexible polymers confined to a square box \cite{Castro-JE}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.3]{esfera.eps}
\caption{\small
Monotonic and oscillating behaviors of the mean-square end-to-end distance (Eq. (\ref{MSDSBall})) for polymers with $\ell_{p}$ below [a)] and above [b)] the critical persistence length $\ell^{*}_{p}=\sqrt{3}R/(2\alpha_1)$ in spherical confinement. Inside the plotting area we sketch the conformational states of each class of polymers.
}
\label{fig2}
\end{center}
\end{figure}
\section{Concluding remarks}\label{sec:conclusions}
In this work we carry out an extension of the stochastic curvature formalism introduced in \cite{Castro-JE} to analyze the conformational states of a semiflexible polymer in a thermal bath, both when the polymer is in the open space $\mathbb{R}^{3}$ and when it is confined to a bounded domain $\mathcal{D}\subset \mathbb{R}^{3}$. The formalism in the 3D case rests on two postulates: first, that each conformational state corresponds to the realization of a path described by the stochastic Frenet-Serret equations (\ref{stoch-eq}), which introduce a stochastic curvature vector $\boldsymbol{\kappa}\left(s\right)$; and second, a prescription for how $\boldsymbol{\kappa}(s)$ is distributed according to the thermal fluctuations.
In the case of a polymer in the open space $\mathbb{R}^3$, the standard Kratky-Porod formula is reproduced in three dimensions \cite{PSaito-Kratky1949}. When the polymer is confined to a bounded region $\mathcal{D}\subset\mathbb{R}^{3}$, the conformational states show the existence of a critical persistence length $\ell_{p}^{*}$ such that for all values of $\ell_{p}>\ell_{p}^{*}$ the mean-square end-to-end distance exhibits an oscillating behavior, while for $\ell_{p} <\ell_{p}^{*}$ it is monotonic, in both the cubic and the spherical regions. Furthermore, for each value of $\ell_{p}$, the function converges to twice the mean-square end position $\sigma^{2}\left({\bf R}\right)$, that is, twice the variance of ${\bf R}$ with respect to the volume of the domain.
The critical persistence length therefore distinguishes two conformational behaviors of the semiflexible polymer in the bounded domain. On one hand, polymers with persistence length below the critical value have a conformation similar to a Brownian random path. On the other hand, polymers with persistence length above the critical value adopt smooth conformations.
In addition, we highlight that the mean-square end-to-end distance exhibits the same mathematical form in the cases discussed throughout the manuscript (see Eqs. (\ref{approxx}) and (\ref{approx2})), and in the results reported for a polymer enclosed in a square box and for a polymer wrapping a spherical surface \cite{Castro-JE, SWPRL}.
Nevertheless, the differences in the saturation values and in the critical persistence length reflect the particular geometric nature of the compact domain, including the dimensionality of the space.
Note that the particular mathematical expression obtained in our work stems from the probability density function of the polymer's ends, which is governed by a modified telegrapher equation.
As a consequence of this resemblance, it can be concluded that the shape transition from oscillating to monotonic conformational states provides further evidence of a universal signature of a semiflexible polymer enclosed in a compact space.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
Both authors contributed to the formulation of the method and the writing of the manuscript. JER contributed the numerical analysis that produced the figures, while PCV contributed the mathematical calculations.
\section*{Funding}
\section*{Acknowledgments}
P.C.V. acknowledges financial support from CONACyT.
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:introduction}
Parallel computing approaches for solving complex problems modeled by partial differential equations have become increasingly attractive over the last decade, as Moore's Law has continued, but transitioned from fabricating more transistors on a chip to constructing more cores. Spatial parallelization of partial differential equations is well established \cite{chan1994domain, dolean2015introduction, lions1990schwarz, lions1988schwarz, xu1992iterative}, but the improvement available through spatial parallelization alone typically decays as the number of processors increases for any fixed-size problem (the so-called strong scaling behavior), see e.g., \cite{falgout2017multigrid}. Parallel in time methods provide an additional way to effectively utilize modern high performance computer architectures \cite{emmett2014efficient, falgout2014parallel, gander201550, gotschel2019efficient, howse2019parallel, maday2002parareal, ong2020applications}. Unfortunately parallel-in-time methods also introduce significant additional sources of error which must be estimated for the purposes of validation when seeking to construct adaptive algorithms, or when performing uncertainty quantification.
We consider a PDE system of the form: Find $u(x,t) \in L^2(0,T; H^1_0(\Omega))$ such that
\begin{equation}
\label{eq:pde}
\begin{aligned}
(\dot u, v) + a(u, v) &= l(v), \quad \forall v \in H^1_0(\Omega) \text{ and } t \in (0,T], \\
u(x,0) &= u_0(x),
\end{aligned}
\end{equation}
where $\dot u = \frac{\partial u}{\partial t}$, $a(\cdot, \cdot)$ is a coercive, positive definite bilinear form, $l(\cdot)$ is a linear form, $\Omega$ is a convex polygonal domain, and $u_0 \in H^1_0$ is the initial condition. Here $( \cdot, \cdot)$ denotes the $L_2(\Omega)$ inner product, so that $(w,v) = \int_{\Omega} w v \, {\rm d}x$.
We assume that the forms $a$ and $l$ satisfy sufficient regularity assumptions so that a weak solution $u(x,t) \in L^2(0,T; H^1_0(\Omega))$ exists and is unique \cite{lawrence2010evanspartial}. Further, we consider a quantity of interest (QoI) given by
\begin{equation}
\label{eq:qoi_def}
\mathcal{Q}(u) = \int_{\Omega} \psi(x) u(x,T) \, {\rm d}x
\end{equation}
where $\psi \in L^2(\Omega)$.
The aim of this work is to derive error estimates for the numerically computed value of $\mathcal{Q}$ when time-parallel and spatial domain decomposition techniques are employed for the approximation of $u(x,t)$. In particular, we consider two algorithms which we call the Time Parallel Algorithm (TPA) and the Space-Time Parallel Algorithm (STPA). The TPA implements the Parareal algorithm\cite{Emmett2012, GV07, MH08, Maday08, Maday2002387}, a time-parallel method to solve \eqref{eq:pde}, while the STPA also employs a parallel domain decomposition strategy in space \cite{Keyes:1995:DBP, Tarek2008, Smith:1996:DPM, Toselli:2004:DDM} in addition to the time-parallelism of the Parareal method.
The analysis presented here extends earlier work on the \emph{a posteriori} error analysis of the Parareal algorithm in \cite{chaudhry2016posteriori}, and incorporates the work of \cite{chaudhry2019posteriori} on the \emph{a posteriori} error analysis of overlapping Schwarz domain decomposition algorithms to provide an estimate of the spatial discretization error. One significant difference between the earlier work
and the current work
is that we analyze the effects of using an iterative method to solve the discrete equations arising from an implicit time integration scheme.
The analysis builds on early work of adjoint-based \emph{a posteriori} error analysis \cite{AO2000, beckerrannacher, eriksson1995introduction, Giles:Suli:02} and more recent work addressing iterative and multi-rate methods \cite{CEGT13a, iternon, EGT2012}.
\subsection{Notation}
Our analysis requires consideration of both exact and discrete solutions. Analytical solutions are indicated using a lower case letter, discrete solutions with an upper case letter. Solutions of the PDE will be designated $u$ and adjoint solutions $\phi$. A hat indicates a coarse scale entity. Errors are indicated using $e$ and residuals with $\mathcal{R}$. A dot indicates a partial derivative in time. We use $N$ to indicate the number of finite elements, $q$ to indicate the degree of interpolation, $P$ to indicate the number of temporal or spatial subdomains, and $K$ to indicate the number of iterations. A subscript $t$ for these symbols indicates a temporal quantity, a subscript $s$ indicates a spatial quantity. A full list of variables and discretization choices appears in \ref{sec:appendix}.
For a function $w(x,t)$, we denote $w(t):= w(\cdot,t)$, i.e., $w(t)$ refers to a function of space only for a fixed time $t$. We use the notation $(w,v)(t) := (w(t),v(t)) = \int_\Omega w(x,t) v(x,t) \, {\rm d}x$, so for example the QoI~\eqref{eq:qoi_def} may be expressed as $\mathcal{Q}(u) = (\psi, u)(T)$. Finally, we let $\langle \cdot \rangle_I$ indicate integration over time interval $I$, i.e. $\langle \cdot \rangle_I = \int_I \, \cdot \, {\rm d}t$.
\subsection{Outline}
We begin the analysis from the perspective of Parareal integration in time, presented in \cite{chaudhry2016posteriori}. The Parareal approach utilizes both coarse and fine scale discretizations in time. Extending this analysis to PDEs requires corresponding coarse and fine discretizations in space. Since we implement a domain decomposition strategy in space, we employ iterative solution methods in both time and space.
We introduce two algorithms in section \ref{sec:parareal_algorithms}: a Time Parallel Algorithm (TPA), and a Space-Time Parallel Algorithm (STPA). The TPA employs the time-parallel algorithm, Parareal, for the solution of PDEs, whereas the STPA additionally utilizes additive Schwarz domain decomposition to achieve parallelization in space.
This section also contains precise details of the spatial and temporal discretizations.
We recall some essential mathematical results needed for the subsequent analysis for the two algorithms in section~\ref{sec:var_mthds}. We
adjust the previous \emph{a posteriori} error analysis for Parareal time integration to analyze the TPA in section \ref{sec:temporal_errors}.
We extend this analysis to account for the errors arising using additive Schwarz domain decomposition in section \ref{sec:spatial_errors}. Numerical results supporting the accuracy of the \emph{a posteriori} error estimates are provided in section \ref{sec:numerical_results}.
\section{Discretizations: The ``Time Parallel Algorithm'' and the ``Space-Time Parallel Algorithm''}
\label{sec:parareal_algorithms}
We consider two algorithms: (1) An algorithm which is parallel in time only, referred to as Time Parallel Algorithm (TPA), and (2) An algorithm which is parallel in both space and time, referred to as Space-Time Parallel Algorithm (STPA). Both algorithms use the Parareal algorithm for time-parallelism. The spatial parallelization in STPA is achieved through the use of overlapping Schwarz domain decomposition method. The discretization also involves using the finite element method in space and the implicit Euler method in time. The implicit Euler method is appropriate for dissipative problems modeled by \eqref{eq:pde}. We describe these algorithms in detail below.
\subsection{Time Parallel Algorithm (TPA)}
\label{sec:TPA}
The Parareal algorithm is based on a pair of coarse and fine scale solvers, $\csolver{\alpha(x)}{t}$ and $\fsolver{\alpha(x)}{t}$ respectively. Let $0 = T_0 < \cdots < T_{p-1} < T_p < \cdots < T_P = T$ be a partition of the time domain. We view the times $T_p$ as ``synchronization'' times at which point the coarse and fine scale solutions may exchange information. We call the time interval $[T_{p-1},T_p]$ the temporal subdomain $p$, see Figure \ref{fig:subdomains_and_discretizations}. Note that the temporal subdomains are non-overlapping.
The numerical solver $\fsolver{\alpha(x)}{t}$ is more accurate than $\csolver{\alpha(x)}{t}$, e.g., employing a finer discretization or a higher order method. The superscript $p$ indicates that the solvers produce a space-time solution for the temporal subdomain $p$. That is, for $t \in (T_{p-1},T_p]$, $\csolver{\alpha(x)}{t}$ (resp. $\fsolver{\alpha(x)}{t}$) indicates the space-time coarse scale (resp. fine scale) solution at $t$, where $\alpha(x)$ denotes the initial conditions for the solver at $t = T_{p-1}$.
Specific examples of the solvers are presented in \ref{sec:time_space_discretization}.
The standard version of the Parareal algorithm defines the solutions only at the times $T_p$ and is not amenable to adjoint-based \emph{a posteriori} analysis, which requires solutions to be in variational form. An equivalent, variational version of the Parareal algorithm, suitable for an adjoint-based \emph{a posteriori} error analysis, is provided in Algorithm \ref{alg:parareal_variational}. The standard Parareal algorithm is provided in \ref{sec:appendix_Parareal} and a proof of its equivalence with the variational version is given in \cite{chaudhry2016posteriori}. Here $\csolnpk{p}{k_t}(x,t) $ and $\fsolnpk{p}{k_t}(x,t)$ are the coarse and fine scale solutions for the variational Parareal algorithm at iteration $k_t$ on temporal subdomain $p$. The fine-scale Parareal solution after $k_t$ iterations is defined to be $\fsolnk{k_t} = \fsolnpk{p}{k_t}$ for $t \in (T_{p-1}, T_p]$. The initial condition approximation to $u_0$ is referred to as $\widehat{U}_0$. Note that since $\widehat{U}_0$ is represented in a finite dimensional space, it is possible that $u_0 \neq \widehat{U}_0$.
\begin{algorithm}[H]\caption{Variational form of the Parareal algorithm}
\label{alg:parareal_variational}
\begin{algorithmic}
\Procedure{VPAR}{$P_t, K_t, \widehat{U}_0$ }
\State $\corr{p}{0} = 0, p=0,\ldots,P_t$ \Comment{Initialize corrections}
\For{$k_t = 1, \ldots, K_t$}
\State $\csolnpk{0}{k_t}(x,0):= \widehat{U}_0$ \Comment{Set initial conditions}
\For{$p=1, \ldots, P_t$}
\State $\csolnpk{p}{k_t}(x,t) = \csolver{\csolnpk{p-1}{k_t}(x,T_{p-1}) + \corr{p}{k_t-1}(x)}{t}$
\Comment{Serial computation}
\State $\fsolnpk{p}{k_t}(x,t) = \fsolver{\csolnpk{p}{k_t}(x,T_{p-1})}{t}$
\Comment{Parallel computation}
\State $\corr{p}{k_t}(x) = \fsolnpk{p}{k_t}(x,T_p) - \csolver{\csolnpk{p}{k_t}(x,T_{p-1})}{T_p}$
\Comment{Update corrections}
\EndFor
\EndFor
\State \Return $\fsolnpk{p}{K_t}(x,t)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
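To fix ideas, the update that Algorithm \ref{alg:parareal_variational} expresses in variational form is, at the synchronization times, the classical Parareal correction $U^{k}_{p} = \widehat{G}(U^{k}_{p-1}) + F(U^{k-1}_{p-1}) - \widehat{G}(U^{k-1}_{p-1})$. The following Python sketch applies this correction to a method-of-lines heat equation with implicit Euler propagators at both scales; the discretization parameters are illustrative choices of ours, not those used later in the paper:
\begin{verbatim}
import numpy as np

# Model problem: u' = A u + b, with A a 1D finite difference Laplacian
N = 32
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h ** 2
b = np.ones(N)
u0 = np.sin(np.pi * h * np.arange(1, N + 1))

def propagate(u, t0, t1, nsteps):
    # Implicit Euler over [t0, t1] with nsteps uniform steps
    dt = (t1 - t0) / nsteps
    M = np.eye(N) - dt * A
    for _ in range(nsteps):
        u = np.linalg.solve(M, u + dt * b)
    return u

T, P = 2.0, 8
Ts = np.linspace(0.0, T, P + 1)
coarse = lambda u, p: propagate(u, Ts[p], Ts[p + 1], 1)
fine = lambda u, p: propagate(u, Ts[p], Ts[p + 1], 50)

U = [u0] + [None] * P
for p in range(P):                 # serial coarse prediction
    U[p + 1] = coarse(U[p], p)
for k in range(4):                 # Parareal iterations
    Fp = [fine(U[p], p) for p in range(P)]   # parallelizable fine sweeps
    Unew = [u0] + [None] * P
    for p in range(P):             # serial correction sweep
        Unew[p + 1] = coarse(Unew[p], p) + Fp[p] - coarse(U[p], p)
    U = Unew
\end{verbatim}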
\subsubsection{Temporal and spatial discretizations of the fine scale and coarse scale solvers}
\label{sec:time_space_discretization}
We consider the implicit Euler method for time discretization and finite elements for the spatial discretization to define coarse and fine scale solutions on $[T_{p-1}, T_p]$. We omit the superscript $p$ in this section when the temporal subdomain is clear.
We discretize the \textbf{spatial} domain $\Omega$ into a quasi-uniform triangulation $\mathcal{T}_{h}$, where $h$ denotes the maximum diameter of the elements. This triangulation is chosen so the union of the elements of $\mathcal{T}_{h}$ is $\Omega$ and the intersection of any two elements is either a common edge, node, or is empty. Let $\Vspace{q}$ be the standard space of continuous piecewise polynomials of degree $q$ on $\mathcal{T}_{h}$, such that if $U \in \Vspace{q}$ then $U = 0$ on $\partial \Omega$. In particular, we consider a coarse space $\Vspace{\widehat{q}_s}$ and fine space $\Vspace{q_s}$ such that $\widehat{q}_s < q_s$.
On each \textbf{time} subdomain $p$, the coarse temporal discretization uses $\Nctimep{p}$ time steps while the fine temporal discretization uses $\Nftimep{p}$ time steps. Thus, the total number of coarse time steps over $[0,T]$ is $\widehat{N}_t = \sum_{p=1}^{P_t} \Nctimep{p}$, and the total number of fine time steps is $N_t = \sum_{p=1}^{P_t} \Nftimep{p}$. On the coarse scale, we partition $[T_{p-1}, T_p]$ as $T_{p-1} = \ctimepj{p}{0} < \cdots < \ctimepj{p}{\Nctimep{p}-1} < \ctimepj{p}{\Nctimep{p}} = T_p$, as shown in Figure~\ref{fig:subdomains_and_discretizations}, with $\Intcpj{p}{\hat n} = [\ctimepj{p}{\hat n-1}, \ctimepj{p}{\hat n}]$. Let $\Taucp{p} = \lbrace \Intcpj{p}{1}, \cdots, \Intcpj{p}{\hat n}, \cdots, \Intcpj{p}{\Nctimep{p}} \rbrace$. On the fine scale we introduce a finer discretization $T_{p-1} = \ftimepj{p}{0} < \cdots < \ftimepj{p}{\Nftimep{p}-1} < \ftimepj{p}{\Nftimep{p}} = T_p$ as shown in Figure~\ref{fig:subdomains_and_discretizations}. We let $\Intfpj{p}{n} = [ \ftimepj{p}{n-1}, \ftimepj{p}{n} ]$ and $\Taufp{p} = \lbrace \Intfpj{p}{1}, \cdots, \Intfpj{p}{n}, \cdots, \Intfpj{p}{\Nftimep{p}} \rbrace$.
\begin{figure}
\centering
\includegraphics[width=.77\textwidth]{time_disc}
\caption{Top: Subdomain $[T_{p-1},T_p]$. Middle: Stage 1 discretization for $[T_{p-1},T_p]$. Bottom: Stage 2 discretization for $[T_{p-1},T_p]$.}\label{fig:subdomains_and_discretizations}
\end{figure}
We now use the implicit Euler method on the coarse time step $\Intcpj{p}{\hat n}$: given the approximate solution $\csolnj{n-1}$ at $\ctimepj{p}{\hat n-1}$, compute the approximation $\csolnj{n} \in \Vspace{\widehat{q}_s}$ at $\ctimepj{p}{\hat n}$ by,
\begin{equation}
\label{eq:be_step_coarse}
(\csolnj{n},v ) = (\csolnj{n-1},v) + \cdTpj{p}{n}\left[-a(\csolnj{n},v) + l(v) \right],
\end{equation}
for all $v \in \Vspace{\widehat{q}_s}$, where $\cdTpj{p}{n} = (\ctimepj{p}{n}-\ctimepj{p}{n-1})$. Moreover, in the context of the Parareal algorithm, we have $\csolnj{n-1} \in \Vspace{\widehat{q}_s}$ for all ${\hat n}$ except possibly for ${\hat n} = 1$, where $\csolnj{0}$ (the initial condition for the coarse scale solver) may belong to $\Vspace{q_s}$.
Similarly, the implicit Euler method on the fine time step $\Intfpj{p}{n}$ is: given the approximate solution $\fsolnj{n-1}$ at $\ftimepj{p}{n-1}$, compute the approximation $\fsolnj{n} \in \Vspace{q_s}$ at $\ftimepj{p}{n}$ by,
\begin{equation}
\label{eq:be_step_fine}
(\fsolnj{n},v ) = (\fsolnj{n-1},v) + \fdTpj{p}{n}\left[-a(\fsolnj{n},v) + l(v) \right],
\end{equation}
for all $v \in \Vspace{q_s}$, where $\fdTpj{p}{n} = (\ftimepj{p}{n}-\ftimepj{p}{n-1})$. In the context of the Parareal algorithm, we have $\fsolnj{n-1} \in \Vspace{q_s}$ for all $n$ except for $n=1$, for which $\fsolnj{0}$ (the initial condition for the fine scale solver) belongs to $\Vspace{\widehat{q}_s}$.
In the TPA, we assume that equations \eqref{eq:be_step_coarse} and \eqref{eq:be_step_fine} are solved exactly (up to numerical precision).
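For concreteness, each step \eqref{eq:be_step_coarse} or \eqref{eq:be_step_fine} amounts to a single linear solve involving a mass and a stiffness matrix. A minimal numpy sketch for piecewise linear elements on a uniform 1D mesh, with $a(u,v)=(\nabla u,\nabla v)$ as in the numerical examples of section \ref{sec:numerical_results} (the mesh, time step, and data below are illustrative assumptions of ours):
\begin{verbatim}
import numpy as np

# One implicit Euler step: (M + dt*K) U_n = M U_{n-1} + dt*F
N, dt = 64, 0.01
h = 1.0 / N
e = np.ones(N - 2)
M = h / 6.0 * (4.0 * np.eye(N - 1) + np.diag(e, 1) + np.diag(e, -1))  # mass
K = 1.0 / h * (2.0 * np.eye(N - 1) - np.diag(e, 1) - np.diag(e, -1))  # stiffness
x = h * np.arange(1, N)
U = np.sin(np.pi * x)                     # nodal values of U_{n-1}
F = M @ (np.pi ** 2 * np.sin(np.pi * x))  # load (f, phi_i), f interpolated
U_new = np.linalg.solve(M + dt * K, M @ U + dt * F)
\end{verbatim}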
\subsection{Space-Time Parallel Algorithm (STPA)}
\label{sec:STPA}
The STPA introduces spatial parallelization (in addition to temporal parallelization) by using a Schwarz domain decomposition iteration at the fine scale. It is assumed in the spirit of the Parareal algorithm that the coarse scale is solved implicitly without needing any spatial domain decomposition.
\subsubsection{Schwarz Domain Decomposition at a single time step}
The additive Schwarz Domain decomposition iteration is employed to solve the spatial problem in \eqref{eq:be_step_fine}.
Similar to \S \ref{sec:time_space_discretization}, we omit the subscript $p$ where this is clear.
We rewrite the problem \eqref{eq:be_step_fine} on subinterval $\Intfpj{p}{n}$ as: find $\fsolnj{n}\in \Vspace{q_s}$ such that,
\begin{equation}
\label{eq:be_step_bilin form}
B^n(\fsolnj{n},v) = \lspace{n}(v)
\end{equation}
for all $v \in \Vspace{q_s}$, where the bilinear form $B^n(\cdot, \cdot)$ and the linear form $\lspace{n}(\cdot)$ are defined by,
\begin{equation}
\label{eq:bilin_lin_forms_dd}
\begin{aligned}
B^n(u,v) &= (u,v) + \fdTpj{p}{n} a(u,v),\\
\lspace{n}(v) &= (\fsolnj{n-1},v) + \fdTpj{p}{n} l(v).
\end{aligned}
\end{equation}
Assume that we have $P_s$ overlapping subdomains $\Omega_1, \cdots, \Omega_{P_s}$ of $\Omega$, such that for any $\Omega_i$ there exists an $\Omega_j$, $i \neq j$, for which $\Omega_i \cap \Omega_j \neq \emptyset$, and $\cup_i \Omega_i = \Omega$.
Let $\mathcal{T}_{h,i} \equiv \mathcal{T}_{h} \cap \Omega_i$. We further assume that the triangulation $\mathcal{T}_{h}$ is consistent with the domain decomposition in the sense that for any $T_s \in \mathcal{T}_{h}$, if $T_s \cap \Omega_j \neq \emptyset$ then $T_s \subset \Omega_j$.
We let $(\cdot, \cdot)_{ij}$ represent the $L_2(\Omega_i \cap \Omega_j)$ inner product.
We denote by $B^n_i(\cdot, \cdot)$ the restriction of $B^n(\cdot, \cdot)$ to $\Omega_i$ and $B^n_{ij} (\cdot, \cdot)$ the restriction of $B^n(\cdot, \cdot)$ to $\Omega_i \cap \Omega_j$. Similarly, we let $\lspace{n}_i(\cdot)$ be the restriction of $\lspace{n}(\cdot)$ to $\Omega_i$.
The additive Schwarz method for the solution of \eqref{eq:be_step_bilin form} is given in Algorithm \ref{alg:additive_basic}.
The domain decomposition solutions for $n =1, \ldots, \Nftimep{p}$ are $\fsolnjk{n}{K_s} =$ ASDD($B^n, \lspace{n}, K_s, P_s,\fsolnjk{n-1}{K_s} $). Note that the initial guess for the algorithm is the solution from the previous time step, $\fsolnjk{n-1}{K_s}$.
In the algorithm $\Vspacedd{q_s}{i}$ represents the space of continuous piecewise polynomials of degree $q_s$ on $\mathcal{T}_{h,i}$ such that if $U \in \Vspacedd{q_s}{i}$ then $U = 0$ on $\partial \Omega_i$, and $\Vspaceddbc{q_s}{i}{k_s}$ represents the space of continuous piecewise polynomials of degree $q_s$ on $\mathcal{T}_{h,i}$ such that if $U \in \Vspaceddbc{q_s}{i}{k_s}$ then $U = 0$ on $\partial \Omega$ and $U = \fsolnjk{n}{k_s}$ on $\partial \Omega_i \setminus \partial \Omega$.
The Richardson parameter $\tau$ is needed to ensure that the iteration converges \cite{Tarek2008}.
\begin{algorithm}[H]\caption{Overlapping additive Schwarz domain decomposition}
\label{alg:additive_basic}
\begin{algorithmic}
\Procedure{ASDD}{$B^n, \lspace{n}, K_s, P_s,\fsolnjk{n}{0}$}
\For{$k_s=0, 1, 2, \dots, K_s-1$ }
\For{$i= 1, 2, \dots, P_s$ }
\State Find $ \fsolnikdd{i}{k_s+1} \in \Vspaceddbc{q_s}{i}{k_s}$ such that \\ \qquad\qquad
\begin{equation} \label{eq:additive_basic_local}
B^n_i \big( \fsolnikdd{i}{k_s+1},v \big)= \lspace{n}_i(v), \quad \forall v \in \Vspacedd{q_s}{i}.
\end{equation}
\State Let
\begin{equation} \label{eq:additive_basic_global}
\fsolnjk{n}{k_s+1} = (1 - \tau P_s)\fsolnjk{n}{k_s}
+ \tau \left ( \sum_{i=1}^{P_s } {\Pi_i} \fsolnikdd{i}{k_s+1} \right )
\;\hbox{where}\quad
\Pi_i \fsolnikdd{i}{k_s+1} = \left \{
\begin{gathered} \begin{aligned}
&\fsolnikdd{i}{k_s+1}, \; &&\hbox{on } \overline{\Omega}_i, \\
&\fsolnjk{n}{k_s}, &&\hbox{on } \Omega \backslash \overline{\Omega}_i.
\end{aligned} \end{gathered} \right .
\end{equation}
\EndFor
\EndFor
\State \Return $\fsolnjk{n}{K_s}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
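In algebraic terms, each sweep of Algorithm \ref{alg:additive_basic} solves the restriction of the global system on every subdomain, using the current iterate as interior boundary data, and then combines the damped local solutions. A small Python sketch for a 1D Poisson-type system with two overlapping index blocks (our illustrative setup, not the test problem used later):
\begin{verbatim}
import numpy as np

# Additive Schwarz with Richardson damping for B u = b
N = 40
B = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1))
b = np.ones(N) / (N + 1) ** 2
subdomains = [np.arange(0, 24), np.arange(16, N)]  # overlap: indices 16..23
tau = 1.0 / len(subdomains)                        # Richardson parameter

u = np.zeros(N)
for k_s in range(50):
    u_new = (1.0 - tau * len(subdomains)) * u
    for idx in subdomains:
        comp = np.setdiff1d(np.arange(N), idx)
        ui = u.copy()
        # Local solve with the current iterate supplying boundary data
        ui[idx] = np.linalg.solve(B[np.ix_(idx, idx)],
                                  b[idx] - B[np.ix_(idx, comp)] @ u[comp])
        u_new += tau * ui          # Pi_i extends the local solution by u
    u = u_new
print(np.linalg.norm(B @ u - b))   # residual decreases with the sweeps
\end{verbatim}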
\subsubsection{The global STPA solution}
In the context of the Parareal algorithm (Algorithm \ref{alg:parareal_variational}), we denote the solution by $\fsolnpkl{p}{k_t}{K_s}$, where we have now added $K_s$ to the superscript to indicate the dependence of the solution on domain decomposition iterations at the $k_t$th Parareal iteration. The global STPA solution on $[0,T]$ after $k_t$ Parareal iterations and $K_s$ domain decomposition iterations is defined as $\fsolnkl{k_t}{K_s} = \fsolnpkl{p}{k_t}{K_s}$ for $t \in (T_{p-1}, T_p]$.
\section{Variational methods and preliminary \emph{a posteriori} error analysis}
\label{sec:var_mthds}
In this section, we introduce basic elements needed for the error analysis of the numerical value of the QoI (see \eqref{eq:qoi_def}) computed using the fine scale solution from either the Time Parallel Algorithm or the Space-Time Parallel Algorithm.
The main tools used in the analysis are adjoint problems coupled with an appropriate definition of residuals. This kind of analysis relies on the solution being in a variational format. Hence, we present the implicit Euler method as a variational method in \S \ref{sec:be_as_dg0}, and point out a fundamental property of variational methods which is used subsequently in the analysis of TPA and STPA.
\subsection{Variational methods}
We define two variational methods, the continuous Galerkin method and the discontinuous Galerkin method, for the fine scale; the extension to the coarse scale is analogous. Note that the usage of ``continuous'' or ``discontinuous'' refers to the property of the solution in time only; in space we always use continuous finite elements.
On the space-time slab $S^p_{n} = \Omega \times \Intfpj{p}{n}$ we define the space
\begin{equation}
\label{Vspacetimef}
\Vspacetime{n}{q_t}{q_s} := \{ w(x,t): w(x,t) = \sum_{j=0}^{q_t} t^j v_j(x), v_j \in \Vspace{q_s}, (x,t) \in S^p_{n} \}.
\end{equation}
The \textbf{continuous Galerkin method, cG($q_t$)} is: find $U_{CG}$, continuous in time on $[T_{p-1},T_p]$, such that its restriction to $S^p_{n}$ belongs to $\Vspacetime{n}{q_t}{q_s}$ and satisfies,
\begin{equation}
\label{eq:fine_solv_one_dt_cg}
\finv{ (\dot{U}_{CG},v)}{p}{n} = \finv{-a(U_{CG},v)+ l(v)}{p}{n}
\end{equation}
for all $v \in \Vspacetime{n}{q_t-1}{q_s}$.
We denote the jump across $\ftimepj{p}{n}$ as $[w]_{n,p} = w_{n,p}^+ - w_{n,p}^-$, where $w_{n,p}^{\pm} = \lim_{s \rightarrow (\ftimepj{p}{n})^{\pm}}w(s)$.
The \textbf{discontinuous Galerkin method, dG($q_t$)} is: find $U_{DG}$ such that its restriction to $S^p_{n}$ belongs to $\Vspacetime{n}{q_t}{q_s}$ and satisfies,
\begin{equation}
\label{eq:fine_solv_one_dt_dg}
\finv{ (\dot{U}_{DG},v)}{p}{n} + ([U_{DG}]_{n-1,p},v_{n-1,p}^+) = \finv{-a(U_{DG},v)+ l(v)}{p}{n}
\end{equation}
for all $v \in \Vspacetime{n}{q_t}{q_s}$.
Note that the solution may be discontinuous at the time nodes $\ftimepj{p}{n}$.
\subsection{\emph{A posteriori} error analysis for variational methods on a single time subdomain}
Variational methods satisfy an important property that underpins the \emph{a posteriori} error analysis presented here. Let $\phi \in L^2(T_{p-1}, T_p; H^1_0(\Omega))$ satisfy,
\begin{equation}
\label{eq:generic_phi_p}
(-\dot{\phi}, v) = -a(v, \phi) \quad \forall v \in H^1_0(\Omega) \text{ and } t \in (T_{p-1},T_p]
\end{equation}
Equation \eqref{eq:generic_phi_p} is referred to as an adjoint equation.
Next we quantify the accumulation of error contributions.
\begin{lem}
Let $e_{\Var} = u - U_{\Var}$, where $u$ is the solution to \eqref{eq:pde}, $\mathrm{Var} \in \{ CG, DG \}$ and $U_{\Var}$ is either the cG($q_t$) or the dG($q_t$) solution on a time subdomain $p$.
Then,
\begin{equation}
\label{eq:var_prop}
(\phi, e_{\Var})(T_p) = (\phi, e_{\Var})(T_{p-1}) + \VARrespuphi{p}{U_{\Var}}{\phi}
\end{equation}
where $\phi$ is defined in \eqref{eq:generic_phi_p} and $\VARrespuphi{p}{U_{\Var}}{\phi}$ is the adjoint-weighted space-time residual of the solution $U_{\Var}$ on time subdomain $p$ defined as
\begin{equation}
\label{eq:res_cg_dg}
\VARrespuphi{p}{U_{\Var}}{\phi} = \begin{cases}
\sum_{n=1}^{{N}_p} \left[ \finv{l(\phi) - a(U_{CG}, \phi)}{p}{n}
-\finv{( \dot{U}_{CG}, \phi)}{p}{n} \right], \qquad &\text{ if } \mathrm{Var} = CG,\\
\sum_{n=1}^{{N}_p} \left[ \finv{l(\phi) - a(U_{DG}, \phi)}{p}{n}
-\finv{( \dot{U}_{DG}, \phi)}{p}{n} - ([U_{DG}]_{n-1,p},\phi_{n-1,p}^+) \right], \; &\text{ if } \mathrm{Var} = DG.
\end{cases}
\end{equation}
\end{lem}
\begin{proof}
We prove property \eqref{eq:var_prop} for the dG method, the proof for the cG method is similar. In the notation of the dG method, we have
\begin{equation}
\label{eq:not_para_dg}
e_{\Var}(T_{p-1}) = e_{\Var}^+(\ftimepj{p}{0}) \qquad \text{and} \qquad e_{\Var}(T_{p}) = e_{\Var}^-(\ftimepj{p}{{N}_p}).
\end{equation}
Consider
\begin{equation}
\label{eq:lem_err_acc_1}
\finv{ (-\dot{\phi},e_{DG})}{p}{n} = -\finv{a(e_{DG},\phi)}{p}{n}.
\end{equation}
The left-hand side is
\begin{equation}
\label{eq:lem_err_acc_2}
\finv{ (-\dot{\phi},e_{DG})}{p}{n} = - (\phi, e_{DG}^-)(\ftimepj{p}{n}) + (\phi, e_{DG}^+)(\ftimepj{p}{n-1}) + \finv{(\phi, \dot u - \dot{U}_{DG})}{p}{n}.
\end{equation}
Combining \eqref{eq:lem_err_acc_1} and \eqref{eq:lem_err_acc_2} with \eqref{eq:pde} leads to
\begin{equation}
\label{eq:err_sing_int}
\begin{aligned}
(\phi, e_{DG}^-)(\ftimepj{p}{n}) &= (\phi, e_{DG}^+)(\ftimepj{p}{n-1})
+\finv{l(\phi) - a(\fsolnDGp{p}, \phi)}{p}{n}
-\finv{( \dot{U}_{DG}, \phi)}{p}{n},\\
&= (\phi, e_{DG}^-)(\ftimepj{p}{n-1})
-(\phi_{n-1,p}^+, [U_{DG}]_{n-1,p})
+\finv{l(\phi) - a(\fsolnDGp{p}, \phi)}{p}{n}
-\finv{( \dot{U}_{DG}, \phi)}{p}{n}.
\end{aligned}
\end{equation}
Summing \eqref{eq:err_sing_int} over $n = 1$ to $n = \Nftimep{p}$ and using \eqref{eq:not_para_dg},
\begin{equation}
\begin{aligned}
(\phi, e_{DG})(T_p) &= (\phi, e_{DG})(T_{p-1}) + \frespuphi{p}{U_{DG}}{\phi},
\end{aligned}
\end{equation}
which is the desired result.
\end{proof}
Note that a similar result, with similar derivation, also holds for the coarse scale.
\subsection{\emph{A posteriori} error analysis for implicit Euler on a single time subdomain}
\label{sec:be_as_dg0}
It is well known that the implicit Euler method with a certain choice of quadrature is nodally equivalent to the $dG(0)$ method and hence may be considered a variational method~\cite{eehj_book_96}. For completeness, we show this equivalence for the $dG(0)$ in \eqref{eq:fine_solv_one_dt_dg} with the implicit Euler method in \eqref{eq:be_step_fine}. For the $dG(0)$ method, we have $\dot{U}_{DG} = 0$, since the solution is constant in time over $\Intfpj{p}{n}$ for a fixed spatial point. Moreover, $\Vspacetime{n}{0}{q_s}$ is simply $\Vspace{q_s}$.
Now, using these facts along with the right-hand rectangle rule for evaluating the quadrature in \eqref{eq:fine_solv_one_dt_dg},
\begin{equation}
( (U_{DG})_{n,p}^- - (U_{DG})_{n-1,p}^-,v_{n-1,p}^+) = \fdTpj{p}{n} \left[ -a((U_{DG})_{n,p}^-,v_{n,p}^-) + l(v) \right].
\end{equation}
Now, since $v \in \Vspacetime{n}{0}{q_s}$ is constant in time, we have $v_{n-1,p}^+ = v_{n,p}^- = v$. Finally, identifying $(U_{DG})_{n,p}^-$ with $\fsolnj{n}$ and $(U_{DG})_{n-1,p}^-$ with $\fsolnj{n-1}$ completes the equivalence of the two equations.
Now we define coarse and fine scale analogues to \eqref{eq:res_cg_dg} for the implicit Euler method.
The fine scale residuals are,
\begin{align}
\frespnarg{p}{n}{w}{v} &= \finv{l(v) - a(w, v)}{p}{n}
- ([w]_{n-1,p},v_{n-1,p}^+), \label{eq:fine_scale_residual_I} \\
\frespuphi{p}{w}{v}&= \sum_{n=1}^{{N}_p}\frespnarg{p}{n}{w}{v}. \label{eq:fine_scale_residual}
\end{align}
The coarse scale residuals are,
\begin{align}
\crespnarg{p}{\hat{n}}{w}{v} &= \cinv{l(v) - a(w, v)}{p}{\hat{n}}
- ([w]_{\hat{n}-1,p},v_{\hat{n}-1,p}^+), \label{eq:coarse_scale_residual_I} \\
\crespuphi{p}{w}{v}&= \sum_{\hat{n}=1}^{\hat{N}_p}\crespnarg{p}{\hat{n}}{w}{v}. \label{eq:coarse_scale_residual}
\end{align}
Notice that there is no time derivative in the residuals because, as discussed earlier, those terms are 0 for the $dG(0)$ method.
The analogues of property \eqref{eq:var_prop} for the coarse and fine scale solutions in Algorithm~\ref{alg:parareal_variational} follow immediately.
Let $\fsolnp{p} = F^p[\fsolnp{p}(T_{p-1})]$ be the fine-scale solution on time domain $p$, and $\ferrp{p} = u - \fsolnp{p}$. Then,
\begin{equation}
\label{eq:fres}
(\phi, \ferrp{p})(T_p) = (\phi, \ferrp{p})(T_{p-1}) + \frespuphi{p}{\fsolnp{p}}{\phi}.
\end{equation}
Let $\csolnp{p} = \widehat{G}[\csolnp{p}(T_{p-1})]$ be the coarse scale solution on time domain $p$. Let $\cerrp{p} = u - \csolnp{p}$.
Then,
\begin{equation}
\label{eq:cres}
(\phi, \cerrp{p})(T_p) = (\phi, \cerrp{p})(T_{p-1}) + \crespuphi{p}{\csolnp{p}}{\phi}.
\end{equation}
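Identities \eqref{eq:fres} and \eqref{eq:cres} hold exactly, which is easily checked on a scalar model problem. The Python sketch below (ours; $u' = -\lambda u + f$, $a(u,v)=\lambda u v$, $\psi = 1$) verifies that the adjoint-weighted implicit Euler residual, jump terms included, reproduces the true error at the final time up to quadrature error:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam, T, psi, u0 = 2.0, 1.0, 1.0, 1.0
f = np.cos
u_exact = lambda t: ((u0 - lam / (1 + lam ** 2)) * np.exp(-lam * t)
                     + (lam * np.cos(t) + np.sin(t)) / (1 + lam ** 2))
phi = lambda t: psi * np.exp(lam * (t - T))   # exact adjoint solution

Nt = 20                                       # implicit Euler (= dG(0))
t = np.linspace(0.0, T, Nt + 1)
U = np.empty(Nt + 1); U[0] = u0
for n in range(1, Nt + 1):
    dt = t[n] - t[n - 1]
    U[n] = (U[n - 1] + dt * f(t[n])) / (1.0 + lam * dt)

R = 0.0                                       # adjoint-weighted residual
for n in range(1, Nt + 1):
    integrand = lambda s, Un=U[n]: (f(s) - lam * Un) * phi(s)
    R += quad(integrand, t[n - 1], t[n])[0] - (U[n] - U[n - 1]) * phi(t[n - 1])

print(psi * (u_exact(T) - U[-1]), R)          # the two values agree
\end{verbatim}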
\section{\emph{A posteriori} error analysis for the Time Parallel Algorithm (TPA)}
\label{sec:temporal_errors}
In this section we derive estimates for the error in the QoI computed from the fine scale solution of the TPA presented in \S \ref{sec:TPA}. That is, we seek to estimate $\mathcal{Q}(u - \fsolnk{K_t}) = ( \psi, u(T) -\fsolnk{K_t}(T))$.
The analysis for TPA is quite similar to the analysis for ODEs appearing in \cite{chaudhry2016posteriori}, except that the analysis for PDEs includes the error in the initial conditions, which is assumed to be zero for ODEs.
\subsection{Coarse and fine scale adjoint problems}
The coarse scale adjoint problem is: find $\widehat{\phi} \in L^2(0,T; H^1_0(\Omega))$ such that
\begin{equation}
\label{eq:coarse_adjoint}
\left .
\begin{gathered}\begin{aligned}
(-\dot{\widehat{\phi}}, v) &= -a(v, \widehat{\phi}) \quad \forall v \in H^1_0(\Omega) \text{ and } t \in (0,T], \\
\widehat{\phi}(T) &=\psi.
\end{aligned}\end{gathered}
\qquad \right \}
\end{equation}
Here, the term ``coarse'' indicates that in numerical experiments, this adjoint is approximated using a coarse (spatial and temporal) scale space.
For $p = 1, 2, \ldots, P_t$, the fine scale adjoint problem for the $p$th time domain $[T_{p-1}, T_p]$ is: find $\fadjp{p} \in L^2(T_{p-1}, T_p; H^1_0(\Omega))$ such that
\begin{equation}
\label{eq:fine_adjoint_equation}
\left .
\begin{gathered}\begin{aligned}
(-\dfadjp{p},v) &= -a(v,\fadjp{p}) \quad \forall v \in H^1_0(\Omega) \text{ and } t \in (T_{p-1}, T_p], \\
\fadjp{p}(T_p) &= \widehat{\phi} (T_p).
\end{aligned}\end{gathered}
\qquad \right \}
\end{equation}
Notice that the fine scale adjoint problem on the $p$th time subdomain receives initial conditions from the coarse scale adjoint problem and therefore the adjoint on the fine temporal scale can be solved independently on each of the $p$ time domains. However, while reducing solution cost, the resulting discontinuities in the fine temporal scale adjoint solution occurring at the boundaries of temporal subdomains give rise to an additional contribution to the error. In numerical experiments, the fine scale adjoints are approximated using a fine (relative to the space used for the approximation of the coarse scale adjoints) scale space.
We seek the error in the fine scale solution below. However, since coarse scale solutions are used as initial conditions for fine scale integration on each subdomain, the analysis indicates the expected interaction between errors at the two different scales.
\subsection{Error representations}
Let $\ferrk{k_t} = u - \fsolnk{k_t}$ and $\cerrk{k_t} = u - \csolnk{k_t}$.
\begin{lem}
\begin{equation}
\label{eq:par_err_rep_without_aux_errs}
\begin{aligned}
(\psi, \ferrk{k_t}(T)) &=
\sum_{p=1}^{P_t}\frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}} + (\fadjp{1}, \cerrpk{1}{k_t})(0) \\
& + \sum_{p=2}^{P_t}\bigg[ (\fadjp{p} - \widehat{\phi},\cerrpk{p-1}{k_t}) + (\fadjp{p} - \widehat{\phi}, \csolnpk{p-1}{k_t} - \csolnpk{p}{k_t}) + (\widehat{\phi}, \fsolnpk{p-1}{k_t} - \fsolnpk{p}{k_t})\bigg] (T_{p-1}).
\end{aligned}
\end{equation}
\end{lem}
\begin{proof}
By definition of $\fsolnk{k_t}$, we have $\fsolnk{k_t}(T) = \fsolnpk{P_t}{k_t}(T) = \fsolnpk{P_t}{k_t}(T_{P_t})$. From \eqref{eq:fres} and
$\fadjp{p}(T_p) = \widehat{\phi}(T_p)$ (see \eqref{eq:fine_adjoint_equation}) we have
\begin{equation}
(\widehat{\phi}, \ferrpk{p}{k_t})(T_p) = (\fadjp{p}, \ferrpk{p}{k_t})(T_{p-1}) + \frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}}.
\end{equation}
Summing over all time subdomains from $p=1$ to $P_t$, and isolating $(\psi, \ferrpk{P_t}{k_t}(T)) = (\psi, \ferrk{k_t}(T))$ leads to
\begin{equation}
\begin{aligned}
(\psi, \ferrk{k_t}(T)) &= -\sum_{p=1}^{P_t-1} (\widehat{\phi}, \ferrpk{p}{k_t})(T_p) + \sum_{p=2}^{P_t} (\fadjp{p}, \ferrpk{p}{k_t})(T_{p-1}) + \sum_{p=1}^{P_t}\frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}} + (\fadjp{1}, \ferrpk{1}{k_t})(0).
\end{aligned}
\end{equation}
Using $\fsolnpk{p}{k_t}(T_{p-1}) = \csolnpk{p}{k_t}(T_{p-1})$ (see Algorithm \ref{alg:parareal_variational}),
\begin{equation}
(\psi, \ferrk{k_t}(T)) = -\sum_{p=1}^{P_t-1} (\widehat{\phi}, \ferrpk{p}{k_t})(T_p) + \sum_{p=2}^{P_t} (\fadjp{p}, \cerrpk{p}{k_t})(T_{p-1}) + \sum_{p=1}^{P_t}\frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}}+ (\fadjp{1}, \cerrpk{1}{k_t})(0).
\end{equation}
Rearranging, adding and subtracting terms as necessary, we arrive at
\begin{equation}
\begin{aligned}
(\psi, \ferrk{k_t}(T)) &= \sum_{p=1}^{P_t}\frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}} + (\fadjp{1}, \cerrpk{1}{k_t})(0) \\
&+ \sum_{p=2}^{P_t}\bigg[ (\fadjp{p} - \widehat{\phi},\cerrpk{p-1}{k_t}) + (\fadjp{p} - \widehat{\phi}, \csolnpk{p-1}{k_t} - \csolnpk{p}{k_t}) + (\widehat{\phi}, \fsolnpk{p-1}{k_t} - \fsolnpk{p}{k_t})\bigg](T_{p-1}).
\end{aligned}
\end{equation}
\end{proof}
The terms $(\fadjp{p} - \widehat{\phi},\cerrpk{p-1}{k_t})(T_{p-1})$ in the error representation \eqref{eq:par_err_rep_without_aux_errs} are not directly computable as they contain the unknown error $\cerrp{p-1}$. We solve auxiliary adjoint problems to account for these error terms. As noted earlier, this is the tradeoff between solving the fine-scale adjoint problem from $T$ to $0$ and solving $P_t$ adjoint problems independently on the time subdomains, using the coarse scale adjoint problem to provide ``initial'' conditions. Consider $(P_t-1)$ auxiliary (coarse scale) QoIs as
\begin{align}
\Psi(u) = (\fadjp{p} - \widehat{\phi}, u),
\end{align}
and $(P_t-1)$ auxiliary adjoint problems: find $\dauxadjp{p} \in L^2(0,T_{p-1}; H^1_0(\Omega))$ such that
\begin{equation}
\label{eq:auxiliary_adjoint_equation}
\left .
\begin{gathered}\begin{aligned}
(-\dauxadjp{p}, v) &= -a(v, \auxadjp{p} ), \qquad \quad \forall v \in H^1_0(\Omega) \text{ and } t \in (0,T_{p-1}],\\
\auxadjp{p}(x,T_{p-1}) &= \fadjp{p}(x,T_{p-1}) - \widehat{\phi}(x,T_{p-1}).
\end{aligned}\end{gathered}
\qquad \right \}
\end{equation}
Replacing $\phi$ by $\auxadjp{p}$ in \eqref{eq:cres} on the $k$th time subdomain, we have
\begin{equation}
(\auxadjp{p}, \cerrpk{k}{k_t})(T_{k}) = (\auxadjp{p}, \cerrpk{k}{k_t})(T_{k-1}) + \crespuphi{k}{\csolnpk{k}{k_t}}{\auxadjp{p}}.
\end{equation}
Summing from $k=1$ to $k = p-1$ and using \eqref{eq:auxiliary_adjoint_equation},
\begin{equation}
\label{eq:aux_err}
(\fadjp{p} - \widehat{\phi},\cerrpk{p-1}{k_t})(T_{p-1})
= \sum_{k=2}^{p-1}(\auxadjp{p}, \csolnpk{k-1}{k_t} - \csolnpk{k}{k_t})(T_{k-1})
+ \sum_{k=1}^{p-1} \crespuphi{k}{\csolnpk{k}{k_t}}{\auxadjp{p}}
+ (\auxadjp{p}, \cerrpk{1}{k_t})(0).
\end{equation}
Combining \eqref{eq:par_err_rep_without_aux_errs} and \eqref{eq:aux_err} yields
\begin{equation}
\begin{aligned}
(\psi, \ferrk{k_t}(T)) &= \sum_{p=1}^{P_t}\frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}} + (\fadjp{1}, \cerrpk{1}{k_t})(0) \\
&+ \sum_{p=2}^{P_t}\bigg[
\sum_{k=2}^{p-1}(\auxadjp{p}, \csolnpk{k-1}{k_t} - \csolnpk{k}{k_t})(T_{k-1})
+ \sum_{k=1}^{p-1} \crespuphi{k}{\csolnpk{k}{k_t}}{\auxadjp{p}} + (\auxadjp{p}, \cerrpk{1}{k_t})(0) \bigg] \\
&+ \sum_{p=2}^{P_t}\bigg[(\fadjp{p} - \widehat{\phi}, \csolnpk{p-1}{k_t} - \fsolnpk{p}{k_t})
+ (\widehat{\phi}, \fsolnpk{p-1}{k_t} - \fsolnpk{p}{k_t})\bigg](T_{p-1}).
\end{aligned}
\end{equation}
We summarize as Theorem \ref{thm:par_err} below.
\begin{thm}
\label{thm:par_err}
\begin{equation}
\label{eq:err_decomp_parareal}
(\psi, \ferrk{k_t}(T)) = \mathcal{D} + \mathcal{A} + \mathcal{C} + \mathcal{K},
\end{equation}
where
\begin{eqnarray}
\label{eq:fine_scale_temporal_error_comps}
\mathcal{D} &=& \sum_{p=1}^{P_t} \frespuphi{p}{\fsolnpk{p}{k_t}}{\fadjp{p}} + (\fadjp{1},\cerrpk{1}{k_t})(0), \nonumber \\
\mathcal{A} &=& \sum_{p=2}^{P_t} \left[
\sum_{k=2}^{p-1}(\auxadjp{p}, \csolnpk{k-1}{k_t} - \csolnpk{k}{k_t})(T_{k-1})
+ \sum_{k=1}^{p-1} \crespuphi{k}{\csolnpk{k}{k_t}}{\auxadjp{p}} + (\auxadjp{p}, \cerrpk{1}{k_t})(0) \right], \nonumber \\
\mathcal{C} &=& \sum_{p=2}^{P_t} (\fadjp{p} - \widehat{\phi}, \csolnpk{p-1}{k_t} - \csolnpk{p}{k_t}) (T_{p-1}), \nonumber \\
\mathcal{K} &=& \sum_{p=2}^{P_t} (\widehat{\phi}, \fsolnpk{p-1}{k_t} - \fsolnpk{p}{k_t} )(T_{p-1}). \nonumber
\end{eqnarray}
\end{thm}
We identify the following components that comprise the total error.
\begin{enumerate}
\item $\mathcal{D}$ is the ``standard'' fine scale discretization error contribution.
\item $\mathcal{A}$ is the temporal auxiliary error contribution arising from the discontinuities in the fine scale adjoint solution at the synchronization points.
\item $\mathcal{C}$ is the temporal coarse scale error contribution arising from the discontinuities in the coarse scale solution and the fine scale adjoint solution at the synchronization points.
\item $\mathcal{K}$ is the temporal iteration error contribution arising from the difference between the (fine scale) solutions at the synchronization points.
\end{enumerate}
Numerical examples demonstrating the accuracy of this error estimator are provided in section \ref{sec:numerical_results_Par}. While the analysis above concerns the fine scale error, we note that the coarse scale error may also be estimated as described in \ref{sec:appendix_coarse_error}.
\subsection{Extension to other time integrators}
The analysis in the previous section assumed the implicit Euler method as the time integration procedure for both the coarse and fine scale solutions. However, the analysis remains valid provided the time integration method used at the fine and coarse scales satisfies \eqref{eq:fres} and \eqref{eq:cres}.
These requirements are satisfied for variational methods, as demonstrated in \S \ref{sec:var_mthds}. Moreover, these requirements are also satisfied for
a number of numerical schemes which may be interpreted as variational methods, e.g., Crank-Nicolson, BDF, IMEX, etc.\ \cite{collins2014_LaxWendroff, collins2014_explicit, CEG+2015, Chaudhry2019, chaudhry2020posteriori}.
\section{\emph{A posteriori} analysis of the Space-Time Parallel Algorithm (STPA)}
\label{sec:spatial_errors}
In this section we derive error estimates for the error in the QoI computed from the fine scale solution of the STPA presented in \S \ref{sec:STPA}. That is, we seek to estimate $\mathcal{Q}(u - \fsolnkl{k_t}{K_s}) = ( \psi, u(T) -\fsolnkl{k_t}{K_s}(T))$.
Theorem \ref{thm:par_err} also gives the error in the STPA by replacing $\fsolnpk{p}{k_t}$ with $\fsolnpkl{p}{k_t}{K_s}$. However, this result does not account for the error due to the spatial Schwarz domain decomposition iteration. In this section we extend the analysis to account for this source of error.
Let $\fsolnpjkl{p}{n}{k_t}{K_s} = \fsolnpkl{p}{k_t}{K_s}(\ftimepj{p}{n})$, where $\fsolnpjkl{p}{n}{k_t}{K_s}$ is the domain decomposition solution to \eqref{eq:be_step_bilin form} on time domain $p$ for Parareal iteration $k_t$. More precisely, $\fsolnpjkl{p}{n}{k_t}{K_s}$ is the domain decomposition solution to the problem: find $\fsolnpjk{p}{n}{k_t} \in \Vspace{q_s}$ such that
\begin{equation}
\label{eq:be_step_bilin form_pn}
B^n(\fsolnpjk{p}{n}{k_t},v) = \lspace{n}(v),
\end{equation}
for all $v \in \Vspace{q_s}$, where the bilinear form $B^n(\cdot, \cdot)$ and the linear form $\lspace{n}(\cdot)$ are given in \eqref{eq:bilin_lin_forms_dd}, and $\fsolnpjk{p}{0}{k_t} = \fsolnpjkl{p}{0}{k_t}{K_s} = \csolnpk{p}{k_t}(x,T_{p-1})$.
For analysis purposes we also introduce an analytical solution: find $\solnpjk{p}{n}{k_t} \in H_0^1(\Omega)$ such that,
\begin{equation}
\label{eq:be_step_bilin form_pn_analytical}
B^n(\solnpjk{p}{n}{k_t},v) = \lspace{n}(v) \qquad \forall v \in H^1_0(\Omega).
\end{equation}
Based on the discussion in section \ref{sec:be_as_dg0}, we may also consider $\solnpjk{p}{n}{k_t}$ in variational form.
\subsection{Overview of the strategy}
Trivially,
\begin{equation}
\label{eq:error_components_dd}
\begin{aligned}
\left( u(\ftimepj{p}{n}) - \fsolnpjkl{p}{n}{k_t}{K_s}, \fadjp{p}(t_n) \right)
&= \underbrace{ \left( u(\ftimepj{p}{n})-\solnpjk{p}{n}{k_t}, \fadjp{p}(t_n) \right)}_{\mathcal{E}_{I}}
+ \underbrace{\left( \solnpjk{p}{n}{k_t}- \fsolnpjkl{p}{n}{k_t}{K_s}, \fadjp{p}(t_n) \right) }_{\mathcal{E}_{II}}
\end{aligned}
\end{equation}
\begin{enumerate}
\item We estimate $\mathcal{E}_{I}$ by interpreting $\solnpjk{p}{n}{k_t}$ as a variational solution and using \emph{a posteriori} techniques developed in section \ref{sec:var_mthds}.
\item We estimate $\mathcal{E}_{II}$ using the \emph{a posteriori} error analysis for overlapping domain decomposition presented in \cite{chaudhry2019posteriori}. This analysis allows us to further split $\mathcal{E}_{II}$ into iterative and discretization components.
\end{enumerate}
\subsection{Estimating $\mathcal{E}_{I}$}
In a derivation akin to that of \eqref{eq:err_sing_int}, interpreting $\solnpjk{p}{n}{k_t}$ as a space-time solution to the dG(0) method in time leads to,
\begin{equation}
\label{eq:err_be_inf_one_int}
\mathcal{E}_{I} = (\fadjp{p}(\ftimepj{p}{n}), u(\ftimepj{p}{n}) - \solnpjk{p}{n}{k_t}) = (\fadjp{p}(\ftimepj{p}{n-1}), u(\ftimepj{p}{n-1}) - \fsolnpjkl{p}{n-1}{k_t}{K_s})
+\frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}},
\end{equation}
where we also used the fact that $\solnpjk{p}{n}{k_t}(\ftimepj{p}{n-1}) = \fsolnpjkl{p}{n-1}{k_t}{K_s}$.
\subsection{Estimating $\mathcal{E}_{II}$}
\label{sec:be_rule_analysis}
The analysis of \cite{chaudhry2019posteriori} is used to estimate the error term $\mathcal{E}_{II}$. We define the global (spatial) adjoint problem: find $\adjdd{p}{n} \in H^1_0(\Omega)$ such that,
\begin{equation}
\label{eq:global_adj}
B^n(v,\adjdd{p}{n}) = (\fadjp{p},v), \quad \forall v \in H^1_0(\Omega),
\end{equation}
and adjoint problems on the spatial subdomains: find
$\adjddadd{p}{n}{k_s}{i} \in H^{1}_0(\Omega_i)$ such that,
\begin{equation}\label{eq:additive_adjoints}
B_{i}^n\left(v, \adjddadd{p}{n}{k_s}{i} \right)
= \tau \sum_{j=1}^{P_s} \left\{ (\ddpsi{p}{n}{j},v)_{ij} - B_{ij}^n\left(v, \sum_{l=k_s+1}^{K_s} \adjddadd{p}{n}{l}{i} \right) \right \}, \quad \forall v \in H^{1}_0(\Omega_i),
\end{equation}
where $\ddpsi{p}{n}{i}$ is the restriction of $\fadjp{p}(t_n)$ to $\Omega_i$,
and $B_i^n$ and $B_{ij}^n$ are the restrictions of $B^n$ to $\Omega_i$ and $\Omega_i \cap \Omega_j$ respectively. For a fixed $k_s$, the adjoint problems \eqref{eq:additive_adjoints} are independent for each $i$, so $\adjddadd{p}{n}{k_s}{i}$ may be computed backwards from $K_s, \ K_s-1, \ K_s-2, \cdots, 1$ in parallel, analogously to the solution strategy in the additive Schwarz Algorithm \ref{alg:additive_basic}.
\begin{thm}
\label{thm:dd_errs}
\begin{equation}
\label{eq:err_dd_only}
\mathcal{E}_{II} = \left( \solnpjk{p}{n}{k_t} - \fsolnpjkl{p}{n}{k_t}{K_s}, \fadjp{p}(t_n) \right) = \fterrKspn{p}{n} + \fterrNspn{p}{n}
\end{equation}
where
\begin{align}
\fterrNspn{p}{n} &= \sum_{i=1}^{P_s} \sum_{k_s = 1}^{K_s} \lspace{n}(\adjddadd{p}{n}{k_s}{i}) - B_i^n(\fsolpnkidd{p}{n}{i}{k_s},\adjddadd{p}{n}{k_s}{i})\\
\fterrKspn{p}{n} &= \lspace{n}(\adjdd{p}{n}) - B^n(\fsolnpjkl{p}{n}{k_t}{K_s},\adjdd{p}{n}) - \fterrNspn{p}{n}
\end{align}
where $\fsolpnkidd{p}{n}{i}{k_s}$ are the spatial subdomain solutions on spatial subdomain $i$ defined by \eqref{eq:additive_basic_local}.
\end{thm}
The proof of Theorem~\ref{thm:dd_errs} is provided in \cite{chaudhry2019posteriori}.
The term $\fterrNspn{p}{n}$ quantifies the discretization error contribution (that is, due to using the spaces $\Vspaceddbc{q_s}{i}{k_s}$) while the term $\fterrKspn{p}{n}$ quantifies the domain decomposition iteration error contribution (that is, due to using a finite number $K_s$ iterations).
\subsection{Estimating $\mathcal{E}_{I} + \mathcal{E}_{II}$}
Combining \eqref{eq:err_be_inf_one_int} and \eqref{eq:err_dd_only},
\begin{equation}
\label{eq:one_int_dd_b}
\mathcal{E}_{I} + \mathcal{E}_{II} =
(\fadjp{p}(\ftimepj{p}{n-1}), u(\ftimepj{p}{n-1}) - \fsolnpjkl{p}{n-1}{k_t}{K_s})
+\frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}}
+ \fterrKspn{p}{n} + \fterrNspn{p}{n}.
\end{equation}
To compare with the two-part construction above, following the direct approach of \eqref{eq:err_sing_int} we have,
\begin{equation}
\label{eq:one_int_dd}
\mathcal{E}_{I} + \mathcal{E}_{II} = (\fadjp{p}(\ftimepj{p}{n}), u(\ftimepj{p}{n}) - \fsolnpjkl{p}{n}{k_t}{K_s}) = (\fadjp{p}(\ftimepj{p}{n-1}), u(\ftimepj{p}{n-1}) - \fsolnpjkl{p}{n-1}{k_t}{K_s})
+\frespnarg{p}{n}{\fsolnpjkl{p}{n}{k_t}{K_s}}{\fadjp{p}}.
\end{equation}
Comparing \eqref{eq:one_int_dd} and \eqref{eq:one_int_dd_b},
\begin{equation}
\frespnarg{p}{n}{\fsolnpjkl{p}{n}{k_t}{K_s}}{\fadjp{p}}
= \frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}}
+ \fterrKspn{p}{n} + \fterrNspn{p}{n}.
\end{equation}
Rearranging,
\begin{equation}
\label{eq:ParDD_computable_resid}
\frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}} = \frespnarg{p}{n}{\fsolnpjkl{p}{n}{k_t}{K_s}}{\fadjp{p}} -\fterrKspn{p}{n} - \fterrNspn{p}{n}.
\end{equation}
All terms on the RHS of \eqref{eq:ParDD_computable_resid} are computable; hence $\frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}}$ is computable as well.
\subsection{Summing contributions over time subdomain $p$}
Using \eqref{eq:fine_scale_residual}, the residual over the time domain $[T_{p-1},T_p]$ is
\begin{equation}
\label{eq:res_dd_one_time_sub_domain}
\frespuphi{p}{\fsolnpjkl{p}{n}{k_t}{K_s}}{\fadjp{p}} = \sum_{n=1}^{{N}_p} \frespnarg{p}{n}{\fsolnpjkl{p}{n}{k_t}{K_s}}{\fadjp{p}} = \sum_{n=1}^{{N}_p} \left( \frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}}
+ \fterrKspn{p}{n} + \fterrNspn{p}{n} \right) = \fterrftp{p} + \fterrNsp{p} + \fterrKsp{p}
\end{equation}
where
\begin{equation}
\fterrftp{p} = \sum_{n=1}^{{N}_p}\frespnarg{p}{n}{\solnpjk{p}{n}{k_t}}{\fadjp{p}}, \qquad \fterrKsp{p} = \sum_{n=1}^{{N}_p}\fterrKspn{p}{n}, \qquad \fterrNsp{p}= \sum_{n=1}^{{N}_p} \fterrNspn{p}{n}.
\end{equation}
\subsection{Error representation for the Space-Time Parallel Algorithm (STPA)}
Let $\ferrkl{k_t}{K_s} = u - \fsolnkl{k_t}{K_s}$.
\begin{thm}
\label{thm:par_dd_err}
\begin{equation}
(\psi, \ferrkl{k_t}{K_s}(T)) = \mathcal{D}_t + \mathcal{D}_s + \mathcal{D}_k + \mathcal{A} + \mathcal{C} + \mathcal{K},
\end{equation}
where $\mathcal{A}, \mathcal{C}$ and $\mathcal{K}$ are defined in Theorem~\ref{thm:par_err} by \eqref{eq:fine_scale_temporal_error_comps} and,
\begin{equation}
\mathcal{D}_t = \sum_{p=1}^{P_t} \fterrftp{p} + (\fadjp{1},\cerrpk{1}{k_t})(0), \qquad \mathcal{D}_s = \sum_{p=1}^{P_t} \fterrNsp{p}, \qquad \mathcal{D}_k = \sum_{p=1}^{P_t} \fterrKsp{p}.
\end{equation}
\end{thm}
\begin{proof}
The proof follows directly from Theorem~\ref{thm:par_err} by replacing $\fsolnpk{p}{k_t}$ with $\fsolnpkl{p}{k_t}{K_s}$ and then using \eqref{eq:res_dd_one_time_sub_domain}.
\end{proof}
We identify the following additional error components.
\begin{enumerate}
\item $\mathcal{D}_t$ is the error contribution due to the use of implicit Euler time integration.
\item $\mathcal{D}_s$ is the error contribution due to using a finite dimensional space for the solution of domain decomposition.
\item $\mathcal{D}_k$ is the error contribution due to using a finite number, $K_s$, of domain decomposition iterations.
\end{enumerate}
\section{Numerical results}
\label{sec:numerical_results}
We present numerical results to support the analyses of both section \ref{sec:temporal_errors} and section \ref{sec:spatial_errors}, highlighting the accuracy of the error estimates developed. Accordingly, we first consider the error analysis for the Time Parallel Algorithm in section \ref{sec:numerical_results_Par} and then consider the effect of a further spatial domain decomposition iteration in the Space-Time Parallel Algorithm in section \ref{sec:numerical_results_Par_DD}. The implicit Euler method is used in the fine and coarse scale solvers for these two sections. In section~\ref{sec:numerical_results_Par_CG} we demonstrate the accuracy of the error estimates when a different time integration method, a cG method, is employed in the coarse and fine scale solvers.
The accuracy of the \emph{a posteriori} error estimates is measured by the effectivity ratio $\gamma$, where
\begin{equation}
\label{eq:effectivity_ratio}
\gamma = \frac{\hbox{Estimated error}}{\hbox{True error}}.
\end{equation}
The bilinear and linear forms considered in the numerical examples are
\begin{equation}
\label{eq:temporal_errors_test_equation}
a(u,v) = -(\nabla u, \nabla v) \quad \hbox{and} \quad l(v) = (f(x),v)
\end{equation}
where
\begin{equation}
\label{eq:temporal_errors_test_rhs}
f(x) = \sin( \mu \pi x)(\mu^2 \pi^2 \cos(\nu \pi t) - \nu \pi \sin(\nu \pi t)).
\end{equation}
This choice of $f$ (which depends on both $x$ and $t$) corresponds to the true solution $ u(x,t) = \cos(\nu \pi t) \sin(\mu \pi x)$.
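As a quick check (added for the reader's convenience), substituting $u$ into the strong form $u_t - u_{xx} = f$ associated with \eqref{eq:temporal_errors_test_equation} gives
\begin{equation*}
u_t - u_{xx} = \sin(\mu \pi x)\left( \mu^2 \pi^2 \cos(\nu \pi t) - \nu \pi \sin(\nu \pi t) \right) = f(x),
\end{equation*}
confirming the stated exact solution.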
In all examples, the final time $T=2$ and the quantity of interest at the final time $T$ is defined by the ``adjoint data''
\begin{equation}
\label{eq:temporal_errors_test_adjoint_data}
\psi(x) = \left \{ \qquad
\begin{aligned}
10000 [ (x-0.2)^2 (x-0.6)^2 ], \qquad &0.2 < x < 0.6,\\
0, \qquad\qquad\qquad & \text{ otherwise}.
\end{aligned}
\qquad \right.
\end{equation}
This choice of $\psi(x)$ corresponds to a local weighted average of the solution around $x=0.4$ at the final time.
The ratio of the number of fine to coarse time steps is denoted by $r = N_t/\widehat{N}_t$. For the spatial discretization, the number of spatial finite elements at the two scales remained fixed, i.e., $N_s = \widehat{N}_s$, but different degrees of interpolation were employed according to whether the corresponding temporal integration was performed on the coarse or fine scale. A lower degree of spatial interpolation, $\widehat{q}_s$, was employed when constructing the solution on the coarse scale, and a higher degree of spatial interpolation, $q_s$, was employed on the same spatial mesh when constructing the solution on the fine scale, i.e., $\widehat{q}_s < q_s$. The notation used in this section is also summarized in Appendix~\ref{sec:appendix}.
Two different time steps were employed for the temporal integration: a coarse time step and a smaller fine time step. The implicit Euler method was chosen as the time integrator for the coarse and fine scale solvers $\widehat{G}$ and $F^p$ in \S \ref{sec:numerical_results_Par} and \S \ref{sec:numerical_results_Par_DD}. A continuous Galerkin method was used to obtain the results in \S \ref{sec:numerical_results_Par_CG}.
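To make the roles of the parameters $P_t$, $K_t$ and $r$ concrete, the following short Python sketch (our illustration under simplifying assumptions, not the solver used for the tables below) implements the standard Parareal correction $U_{p+1}^{k+1} = \widehat{G}(U_{p}^{k+1}) + F(U_{p}^{k}) - \widehat{G}(U_{p}^{k})$ with implicit Euler coarse and fine propagators on the scalar model problem $u' = \lambda u$; in the paper the propagators act on spatially discretized parabolic problems instead.
\begin{verbatim}
# Minimal Parareal sketch for u' = lam*u on [0, T] (illustration only).
def G(u, dt, lam):
    return u / (1.0 - lam * dt)                 # one implicit Euler step

def F(u, dt, lam, r):
    for _ in range(r):                          # r fine implicit Euler steps
        u = u / (1.0 - lam * dt / r)
    return u

def parareal(u0, T, P_t, K_t, r, lam):
    dT = T / P_t                                # subdomain length
    U = [u0] * (P_t + 1)
    for p in range(P_t):                        # initial coarse sweep
        U[p + 1] = G(U[p], dT, lam)
    for _ in range(K_t):
        F_old = [F(U[p], dT, lam, r) for p in range(P_t)]  # parallel in p
        G_old = [G(U[p], dT, lam) for p in range(P_t)]
        for p in range(P_t):                    # sequential coarse correction
            U[p + 1] = G(U[p], dT, lam) + F_old[p] - G_old[p]
    return U

print(parareal(1.0, 2.0, P_t=10, K_t=2, r=4, lam=-1.0)[-1])
# approaches exp(-2) ~ 0.1353 as K_t and r grow
\end{verbatim}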
The adjoint solutions in \eqref{eq:coarse_adjoint}, \eqref{eq:fine_adjoint_equation}, \eqref{eq:auxiliary_adjoint_equation}, \eqref{eq:global_adj} and \eqref{eq:additive_adjoints} need to be approximated. In all cases, the same temporal and spatial meshes were used as those used to compute the numerical solutions; however, higher degree approximations were employed to obtain accurate estimates. The adjoint solutions corresponding to \eqref{eq:coarse_adjoint}, \eqref{eq:fine_adjoint_equation} and \eqref{eq:auxiliary_adjoint_equation} were approximated on the space-time slab $S^p_{n}$ using the cG(3) method in the space $\Vspacetime{n}{3}{3}$ (see \eqref{Vspacetimef} for the definition of this space). The adjoint solutions \eqref{eq:global_adj} and \eqref{eq:additive_adjoints}, needed for the analysis of the STPA, were approximated using a spatial finite element space of continuous piecewise cubic polynomial functions.
\subsection{Parallelism in time only: Time Parallel Algorithm}
\label{sec:numerical_results_Par}
We first consider the \emph{a posteriori} error estimate given by Theorem \ref{thm:par_err}, Eq. \eqref{eq:fine_scale_temporal_error_comps} in section \ref{sec:temporal_errors}.
The Parareal algorithm presents numerous discretization choices. We investigate the effect of the number of Parareal iterations in section \ref{sec:example_Par_iterations}, and the effect of the number of temporal subdomains in section \ref{sec:example_Par_subdomains}. We then consider the effect of the fine time scale and the coarse time scale in sections \ref{sec:example_Par_fine_time_scale} and \ref{sec:example_Par_coarse_time_scale} respectively. In sections \ref{sec:example_Par_iterations} to \ref{sec:example_Par_coarse_time_scale}, we ensure the temporal errors dominate the spatial errors by choosing a modestly large number of spatial elements, so that the effects of changes to the temporal discretization parameters can be observed. Finally, we consider the effect of the spatial discretization in section \ref{sec:example_Par_space_scale}. In all cases the effectivity ratio of the error estimator is 1.00. For these examples $\nu = 4, \mu = 1$ in the function $f(x)$ in equation \eqref{eq:temporal_errors_test_rhs}.
The examples not only provide convincing evidence of the accuracy of the error estimator,
but illustrate the importance of identifying and estimating distinct error contributions. We
observe fortuitous cancellation in the first example so that further refinement actually
increases the error. The second example demonstrates a situation in which the refinement
strategy has no effect on the overall error since it does not address the dominant source
of error. Later examples show that when the dominant error term is targeted and it is at
least an order of magnitude larger than all the other error contributions, refinement strategies
have the anticipated effect on the overall error.
\subsubsection{Effect of the number of Parareal iterations}
\label{sec:example_Par_iterations}
The total error in Table \ref{tab:example_Par_iterations} initially decreases with the number of Parareal iterations ($K_t$), but then increases somewhat. The initial decrease is expected, since for $K_t=1$ the iteration error $\mathcal{K}$ is the dominant source of error, and hence increasing the number of iterations leads to a decrease in this iteration error, and hence in the overall error as well. However, after a single Parareal iteration, the iteration error $\mathcal{K}$ is no longer the dominant error component. Rather, the discretization error $\mathcal{D}$, which remains essentially constant during the iterative process, becomes the dominant error. Moreover, the discretization and iteration errors have opposite signs, and there is a fortuitous cancellation of error for $K_t=2$ between these two terms. When the number of iterations is increased to 3, $\mathcal{K}$ decreases as expected, but so does the cancellation of error between this term and $\mathcal{D}$, and hence the total error shows a modest increase.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$K_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
1& -1.02e-01 & 1.00e+00 & 5.10e-02 & -1.53e-01 & 0.00e+00 & -2.30e-06\\
2& 4.39e-02 & 1.00e+00 & 5.82e-02 & -1.43e-02 & 2.86e-06 & -2.91e-06\\
3& 5.73e-02 & 1.00e+00 & 5.89e-02 & -1.59e-03 & 3.11e-06 & -2.96e-06\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 20, $r$ = 16, $P_t$ = 10, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_Par_iterations}
\end{table}
\subsubsection{Effect of the number of temporal subdomains}
\label{sec:example_Par_subdomains}
Increasing the number of temporal subdomains ($P_t$) is not expected to affect the discretization error $\mathcal{D}$, which is determined by the spatial and temporal element scales, but is expected to increase the iteration error $\mathcal{K}$. Both expectations are supported by the results in Table \ref{tab:example_Par_subdomains} below.
The discretization error remains the largest error for all choices of the number of temporal subdomains.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$P_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& 1.16e-01 & 1.00e+00 & 1.16e-01 & 3.53e-07 & -5.81e-09 & -3.58e-08\\
5& 1.16e-01 & 1.00e+00 & 1.16e-01 & 3.68e-05 & 6.18e-09 & 1.02e-07\\
10& 1.12e-01 & 1.00e+00 & 1.15e-01 & -3.26e-03 & 1.52e-08 & 9.56e-08\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 40, $r$ = 4, $K_t$ = 2, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_Par_subdomains}
\end{table}
\subsubsection{Effect of the fine time scale}
\label{sec:example_Par_fine_time_scale}
As is evident in Table \ref{tab:example_Par_fine_time_scale}, increasing the ratio $r$ between the temporal fine and coarse scales reduces the discretization error $\mathcal{D}$. Since this is the dominant error, this also leads to a decrease in the total error.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
r & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& 7.21e-01 & 1.00e+00 & 7.31e-01 & -9.49e-03 & 1.78e-04 & -3.77e-04\\
4& 3.82e-01 & 1.00e+00 & 4.06e-01 & -2.31e-02 & 2.75e-04 & -4.08e-04\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 10, $P_t$ = 10, $K_t$ = 2, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_Par_fine_time_scale}
\end{table}
\subsubsection{Effect of the coarse time scale}
\label{sec:example_Par_coarse_time_scale}
The results in Table \ref{tab:example_Par_coarse_time_scale} demonstrate that improving the accuracy of the coarse temporal solution reduces \emph{all} components of the error, provided that the temporal errors dominate the spatial errors.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$\widehat{N}_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
10& 7.21e-01 & 1.00e+00 & 7.31e-01 & -9.49e-03 & 1.78e-04 & -3.77e-04\\
20& 4.09e-01 & 1.00e+00 & 4.13e-01 & -3.91e-03 & 1.42e-06 & -2.94e-06\\
\bottomrule
\end{tabular}
\caption{$r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_Par_coarse_time_scale}
\end{table}
\subsubsection{Effect of the spatial scale}
\label{sec:example_Par_space_scale}
For the numerical results presented in Table \ref{tab:example_Par_space_scale} we have increased the number of coarse temporal elements to $100$, the ratio $r$ to $8$, and the number of Parareal iterations to $6$, in order to ensure the temporal errors are small compared with the spatial errors. While decreasing the discretization error as anticipated, improving the spatial accuracy is also seen to decrease the coarse temporal and auxiliary errors (though not monotonically) since these both have a spatial component.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$\widehat{N}_s$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
5& 6.61e-02 & 1.00e+00 & 6.61e-02 & 1.00e-10 & 1.16e-07 & 9.08e-07\\
10& 3.40e-02 & 1.00e+00 & 3.40e-02 & 9.78e-11 & 2.20e-07 & 1.92e-06\\
20& 2.64e-02 & 1.00e+00 & 2.64e-02 & 9.72e-11 & 2.51e-09 & 6.32e-08\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 100, $r$ = 8, $P_t$ = 10, $K_t$ = 6, $\widehat{q}_s$ = 1, $q_s$ = 1 }
\label{tab:example_Par_space_scale}
\end{table}
\subsection{Space-Time Parallel Algorithm}
\label{sec:numerical_results_Par_DD}
The following results were obtained through a combination of the Parareal integration in time and additive Schwarz domain decomposition in space.
We decompose the error $\mathcal{D}$ into its various components as presented in Theorem \ref{thm:par_dd_err}. The effects of varying the fine and coarse time scales are considered in sections \ref{sec:example_ParDD_fine_time_scale} and \ref{sec:example_ParDD_coarse_time_scale} respectively. The number of domain decomposition iterations is varied in section \ref{sec:example_ParDD_DD_iterations}, the number of spatial subdomains in section \ref{sec:example_ParDD_spatial_subdomains}, and the degree of overlap of the spatial subdomains in section \ref{sec:example_ParDD_overlap}. In all examples the effectivity ratio of the error estimator is 1.00. For these examples, we set $\nu = 4, \mu = 2$ in the function $f(x)$ defined by equation \eqref{eq:temporal_errors_test_rhs}. The Richardson parameter used in the spatial domain decomposition iterations was always set to $\tau=0.4$.
Once again the examples not only provide convincing evidence of the accuracy of the error estimator,
but illustrate the complex interplay that can occur between different contributions to the overall error.
We frequently observe second order effects of a refinement strategy on error components other than
those the strategy is directly targeting. These second order effects are largely inconsequential
if the contributions to the overall error have widely different magnitude, but may become important
when the error components are similar in scale and particularly when they are opposite in sign.
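To illustrate the spatial iteration in isolation, the following Python sketch (ours; a schematic under simplifying assumptions, not the code behind the tables) applies $K_s$ damped additive Schwarz iterations with Richardson parameter $\tau$ to a steady 1D model problem, with $P_s$ overlapping subdomains whose overlap is controlled by the fraction $\beta$; in the paper the analogous iteration is applied within each implicit time step of the parabolic problem.
\begin{verbatim}
import numpy as np

def additive_schwarz(A, b, P_s, K_s, beta, tau):
    # Damped additive Schwarz: u <- u + tau * sum_i R_i^T A_i^{-1} R_i (b - A u)
    # (illustration only; assumes P_s divides len(b))
    n = len(b)
    width = n // P_s
    ovl = max(1, int(beta * width))              # overlap in grid points
    doms = [np.arange(max(0, i*width - ovl), min(n, (i+1)*width + ovl))
            for i in range(P_s)]
    u = np.zeros(n)
    for _ in range(K_s):
        res = b - A @ u
        du = np.zeros(n)
        for idx in doms:                         # independent subdomain solves
            du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], res[idx])
        u += tau * du                            # Richardson damping
    return u

# Model problem: -u'' = 1 on (0,1) with homogeneous Dirichlet conditions
n = 40; h = 1.0 / (n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u = additive_schwarz(A, np.ones(n), P_s=2, K_s=8, beta=0.2, tau=0.4)
\end{verbatim}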
\subsubsection{Effect of the fine time scale}
\label{sec:example_ParDD_fine_time_scale}
Consistent with the results in \S \ref{sec:example_Par_fine_time_scale}, decreasing the fine time step decreases the temporal component $\mathcal{D}_t$ of the discretization error. Notice that for this example the number of spatial elements has been increased so that the temporal discretization error is dominant. All other error contributions in Table \ref{tab:example_ParDD_fine_time_scale} are largely unaffected.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
r & Est. Err. & $\gamma$ & $\mathcal{D}_t$ & $\mathcal{D}_s$ & $\mathcal{D}_k$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& 2.42e-01 & 1.00e+00 & 2.45e-01 & 6.94e-08 & 1.11e-02 & 7.98e-04 & 1.23e-02 & -2.58e-02\\
4& 1.55e-01 & 1.00e+00 & 1.48e-01 & 6.93e-08 & 1.37e-02 & 1.70e-03 & 1.95e-02 & -2.65e-02\\
8& 1.05e-01 & 1.00e+00 & 8.34e-02 & 7.32e-08 & 2.51e-02 & 2.12e-03 & 2.30e-02 & -2.68e-02\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 10, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 80, $P_s$ = 2, $K_s$=8, $\beta$ = 0.2, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_ParDD_fine_time_scale}
\end{table}
\subsubsection{Effect of the coarse time scale}
\label{sec:example_ParDD_coarse_time_scale}
Again, consistent with the results in section \ref{sec:example_Par_coarse_time_scale}, all temporal components of the error decrease as the coarse time scale is decreased. The spatial error components in Table \ref{tab:example_ParDD_coarse_time_scale} are seen to be largely unaffected since the number of spatial elements has been chosen so that temporal errors dominate.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$\widehat{N}_t$ & Est. Err. & $\gamma$ & $\mathcal{D}_t$ & $\mathcal{D}_s$ & $\mathcal{D}_k$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
10& 2.42e-01 & 1.00e+00 & 2.45e-01 & 6.94e-08 & 1.11e-02 & 7.98e-04 & 1.23e-02 & -2.58e-02\\
20& 1.55e-01 & 1.00e+00 & 1.41e-01 & 6.55e-08 & 1.46e-02 & -9.88e-07 & -5.09e-04 & -4.81e-05\\
40& 1.05e-01 & 1.00e+00 & 7.95e-02 & 6.89e-08 & 2.54e-02 & -4.12e-07 & -8.73e-06 & -3.05e-07\\
\bottomrule
\end{tabular}
\caption{$r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 80, $P_s$ = 2, $K_s$=8, $\beta$ = 0.2, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_ParDD_coarse_time_scale}
\end{table}
\subsubsection{Effect of the number of domain decomposition iterations}
\label{sec:example_ParDD_DD_iterations}
The spatial iteration error $\mathcal{D}_k$ decreases with the number of domain decomposition iterations, as shown in Table \ref{tab:example_ParDD_DD_iterations}, while the spatial and temporal discretization errors remain essentially constant. There is a second order effect in which the temporal iteration and coarse time scale error contributions also decrease, since these error contributions have a spatial component as well.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$K_s$ & Est. Err. & $\gamma$ & $\mathcal{D}_t$ & $\mathcal{D}_s$ & $\mathcal{D}_k$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& 6.61e-01 & 1.00e+00 & 2.16e-01 & 1.95e-05 & 4.49e-01 & -1.10e-04 & -4.94e-03 & -4.52e-05\\
6& 1.88e-01 & 1.00e+00 & 1.45e-01 & 1.71e-05 & 4.40e-02 & -1.84e-05 & -1.31e-03 & -4.87e-05\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 20, $r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 20, $P_s$ = 2, $\beta$ = 0.2, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_ParDD_DD_iterations}
\end{table}
\subsubsection{Effect of the number of spatial subdomains}
\label{sec:example_ParDD_spatial_subdomains}
The spatial iteration error $\mathcal{D}_k$ increases with the number of spatial subdomains, while the spatial and temporal discretization errors remain essentially constant. A second order effect is again apparent in Table \ref{tab:example_ParDD_spatial_subdomains}, where the temporal iteration error is seen to also increase due to its spatial component.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$P_s$ & Est. Err. & $\gamma$ & $\mathcal{D}_t$ & $\mathcal{D}_s$ & $\mathcal{D}_k$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& 7.17e-01 & 1.00e+00 & 2.25e-01 & 1.02e-06 & 4.96e-01 & -8.83e-06 & -4.51e-03 & -4.46e-05\\
4& 1.23e+00 & 1.00e+00 & 3.13e-01 & 1.26e-06 & 9.18e-01 & 3.84e-05 & -5.76e-03 & -4.43e-05\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 20, $r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 40, $K_s$=2, $\beta$ = 0.1, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_ParDD_spatial_subdomains}
\end{table}
\subsubsection{Effect of spatial domain overlap}
\label{sec:example_ParDD_overlap}
Increasing the degree of overlap between the spatial subdomains is expected to decrease the spatial iteration error $\mathcal{D}_k$ while leaving the spatial and temporal discretization errors largely unchanged. Slightly different behavior is observed in Table \ref{tab:example_ParDD_overlap} for this example, perhaps because the temporal discretization error is orders of magnitude larger than the spatial discretization error. Nevertheless, the error estimator is accurate and the effectivity ratio is 1.00.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$\beta$ & Est. Err. & $\gamma$ & $\mathcal{D}_t$ & $\mathcal{D}_s$ & $\mathcal{D}_k$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
0.1& 7.17e-01 & 1.00e+00 & 2.25e-01 & 1.02e-06 & 4.96e-01 & -8.83e-06 & -4.51e-03 & -4.46e-05\\
0.2& 6.61e-01 & 1.00e+00 & 2.16e-01 & 1.95e-05 & 4.49e-01 & -1.10e-04 & -4.94e-03 & -4.52e-05\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 20, $r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{N}_s$ = 20, $P_s$ = 2, $K_s$=2, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:example_ParDD_overlap}
\end{table}
\subsection{Results for a different time integrator for the Time Parallel Algorithm}
\label{sec:numerical_results_Par_CG}
We briefly demonstrate the accuracy of the \emph{a posteriori} error estimates when the continuous Galerkin method, cG($q_t$) (see section~\ref{sec:var_mthds}), is employed as the time integration scheme in the fine and coarse scale solvers for the TPA. The approximation spaces for the coarse and fine scales on the space-time slab $S^p_{n}$ are $\Vspacetime{n}{\widehat{q}_t}{\widehat{q}_s}$ and $\Vspacetime{n}{q_t}{q_s}$ respectively. The results of Theorems~\ref{thm:par_err} and \ref{thm:par_dd_err} remain valid; however, the definitions of the residuals involved need to be modified to reflect the cG method. The residual for the cG method is given in \eqref{eq:res_cg_dg}. The results are qualitatively similar to those in section~\ref{sec:numerical_results_Par}, and hence we present them without further comment. The aim is to show that the estimates remain accurate, and that the analysis is applicable to a broad class of numerical methods. The results are given in Tables~\ref{tab:CG_example_Par_iterations}, \ref{tab:CG_example_Par_subdomains}, \ref{tab:CG_example_Par_fine_time_scale}, \ref{tab:CG_example_Par_coarse_time_scale} and \ref{tab:CG_example_Par_space_scale}.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$K_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
1& 1.02e-01 & 1.00e+00 & -4.46e-02 & 1.47e-01 & 0.00e+00 & 1.79e-04\\
2& -6.34e-02 & 1.00e+00 & -3.86e-02 & -2.48e-02 & -1.99e-04 & 1.80e-04\\
3& -3.81e-02 & 1.00e+00 & -3.96e-02 & 1.50e-03 & -1.67e-04 & 1.80e-04\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 10, $r$ = 4, $P_t$ = 10, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:CG_example_Par_iterations}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$P_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& -3.95e-02 & 1.00e+00 & -3.95e-02 & 3.86e-07 & 1.28e-07 & 1.20e-08\\
5& -3.96e-02 & 1.00e+00 & -3.95e-02 & -2.55e-05 & 6.94e-05 & -7.27e-05\\
10& -6.34e-02 & 1.00e+00 & -3.86e-02 & -2.48e-02 & -1.99e-04 & 1.80e-04\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 10, $r$ = 4, $K_t$ = 2, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:CG_example_Par_subdomains}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
r & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
2& -1.71e-01 & 1.00e+00 & -1.53e-01 & -1.82e-02 & -1.57e-04 & 1.78e-04\\
4& -6.34e-02 & 1.00e+00 & -3.86e-02 & -2.48e-02 & -1.99e-04 & 1.80e-04\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 10, $P_t$ = 10, $K_t$ = 2, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:CG_example_Par_fine_time_scale}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$\widehat{N}_t$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
10& -1.71e-01 & 1.00e+00 & -1.53e-01 & -1.82e-02 & -1.57e-04 & 1.78e-04\\
20& -4.10e-02 & 1.00e+00 & -3.95e-02 & -1.55e-03 & -5.17e-07 & 6.50e-07\\
\bottomrule
\end{tabular}
\caption{$r$ = 2, $P_t$ = 10, $K_t$ = 2, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{N}_s$ = 20, $\widehat{q}_s$ = 1, $q_s$ = 2 }
\label{tab:CG_example_Par_coarse_time_scale}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
$\widehat{N}_s$ & Est. Err. & $\gamma$ & $\mathcal{D}$ & $\mathcal{K}$ & $\mathcal{C}$ & $\mathcal{A}$ \\
\midrule
5& 3.73e-02 & 1.00e+00 & 3.69e-02 & 1.68e-10 & -1.95e-05 & 8.85e-04\\
10& 5.55e-03 & 1.00e+00 & 5.54e-03 & 1.40e-10 & -7.21e-07 & 4.15e-05\\
20& -1.93e-03 & 1.00e+00 & -1.93e-03 & 1.33e-10 & -6.45e-07 & 2.55e-07\\
\bottomrule
\end{tabular}
\caption{$\widehat{N}_t$ = 20, $r$ = 6, $P_t$ = 10, $K_t$ = 6, $\widehat{q}_t$ = 1, $q_t$ = 1, $\widehat{q}_s$ = 1, $q_s$ = 1 }
\label{tab:CG_example_Par_space_scale}
\end{table}
\section{Conclusions and future work}
\label{sec:conclusions}
We first developed an accurate adjoint-based \emph{a posteriori} error analysis for the Time Parallel Algorithm, which applies the Parareal method in time for the solution of parabolic partial differential equations. This analysis does not seek to separate spatial and temporal sources of error, but combines the two as ``discretization'' error. Additional error contributions arise due to incomplete iteration, discontinuities in the coarse forward solution, and the fine adjoint solution when it is solved in parallel. We then extended this analysis to the Space-Time Parallel Algorithm by assuming that the spatial solution is determined through a second iterative method, in this case domain decomposition. The combined analysis is able to disaggregate the spatial and temporal discretization errors, as well as identify additional iterative errors resulting from the domain decomposition iteration in space. Thus the analysis presented here provides a basis for separating discretization and iteration errors and for estimating the effects of incomplete iteration in both space and time. Accurate error estimates provide a foundation for adaptivity and are essential for accurate uncertainty quantification, where it is necessary to distinguish variation due to parameter changes from variation due to numerical errors, which can also vary across the parameter domain.
We have limited the analysis to linear problems and intend to extend these results to nonlinear problems using linearization techniques that have previously proven effective~\cite{chaudhry2016posteriori}. We also intend to investigate more sophisticated temporal solvers than backward Euler and the simple cG method considered here. Parallel iterative methods for solving PDEs require a large number of discretization choices. The error analysis developed here, which accurately distinguishes multiple sources of error, provides a sound foundation on which to make many of these choices. Finally, we intend to investigate adaptive strategies, noting the complex interaction between error components.
Waveform relaxation methods \cite{gander2007optimized, gander2013parareal, gander1998space} make a fundamentally different choice when combining the two iterative methods of domain decomposition and Parareal iteration.
Assume we wish to solve a parabolic partial differential equation on $\Omega \times (0,T]$ and let $\Omega_i \subset \Omega$, $i=1, \dots, p$, be a set of overlapping (spatial) subdomains. Waveform relaxation methods consider domain decomposition as the outer iteration and employ Parareal iteration (or some other time integration technique) to solve subproblems on each spatio-temporal subdomain $\Omega_i \times (0,T]$, $i=1, \dots, p$, independently, and then perform a domain-decomposition-like iteration on the $p$ space-time blocks. Efficient implementations of waveform relaxation require additional computation to determine Robin conditions on the boundaries of the subdomains. An analysis of this approach is saved for future work, starting with an \emph{a posteriori} error analysis for waveform relaxation implementing a simple discontinuous Galerkin method for time integration and then extending to Parareal integration in time.
\section*{Acknowledgments}
J. Chaudhry’s work is supported by the NSF-DMS 1720402.
S. Tavener's work is supported by NSF-DMS 1720473.
D. Estep's work is supported by NSF-DMS 1720473 and NSERC grants.
\bibliographystyle{plain}
\section{Introduction}
The theory of fractional calculus is an extension of ordinary calculus
that considers integrals and derivatives of arbitrary real
or complex order. Although its birth goes back to Euler,
fractional calculus has gained a great importance
only in recent decades, with the applicability
of such operators for the efficient dynamic modeling
of some real phenomena \cite{Coimbra,Sun}. More recently,
a general theory of fractional calculus was presented,
where the order of the fractional operators is not constant
in time \cite{Samko_2}. This is a natural extension,
since fractional derivatives are nonlocal operators
and contain memory. Therefore, it is reasonable that
the order of the derivative may vary in time.
The variational problem of Herglotz is a generalization
of the classical variational problem \cite{MR3462534,MyID:342}.
It allows us to describe nonconservative processes,
even in the case when the Lagrangian is autonomous
(that is, when the Lagrangian does not depend explicitly on time).
In contrast to the calculus of variations, where the cost functional
is given by an integral depending only on time, space and velocity,
in the Herglotz variational problem the model is given by a differential
equation involving the derivative of the objective functional $z$
and the Lagrange function depends on time, trajectories $x$ and $z$
and on the derivative of $x$. The problem of Herglotz was posed
by Herglotz himself in 1930 \cite{Herglotz}, but only in 1996,
with the works \cite{Guenther,Guenther:book}, did it gain
wide attention from the mathematical community. Indeed,
since 1996, several papers were devoted to this subject: see
\cite{Almeida,Georgieva,Georgieva:sev,Santos:Viet,Santos:Disc,Santos:Spri,MR3462534,MyID:342}
and references therein.
\section{Preliminaries}
In this section we present some needed concepts and results.
\subsection{The fractional calculus of variable order}
\label{sec:FC}
We deal with fractional operators whose order is variable, given by a function of two variables taking values in the open interval $(0,1)$,
that is, the order is a function $\alpha:[a,b]^2\to(0,1)$.
Given a function $x:[a,b]\to\mathbb{R}$, we present two
different concepts of fractional derivatives of $x$.
First, we recall the definition of fractional
integral \cite{Tatiana:IDOTA2011}.
\begin{definition}
The left Riemann--Liouville fractional integral
of order $\a$ of $x$ is defined by
$$
\LI x(t)=\int_a^t \frac{1}{\Gamma(\alpha(t,\t))}(t-\t)^{\alpha(t,\t)-1}x(\t)d\t
$$
and the right Riemann--Liouville fractional integral of $x$ by
$$
\RI x(t)=\int_t^b\frac{1}{\Gamma(\alpha(\t,t))}(\t-t)^{\alpha(\t,t)-1} x(\t)d\t.
$$
\end{definition}
For fractional derivatives, we consider two types:
Riemann--Liouville and Caputo fractional derivatives.
\begin{definition}
The left Riemann--Liouville fractional derivative
of order $\a$ of $x$ is defined by
$$
\LDa x(t)=\frac{d}{dt}\int_a^t
\frac{1}{\Gamma(1-\alpha(t,\t))}(t-\t)^{-\alpha(t,\t)}x(\t)d\t
$$
and the right Riemann--Liouville fractional derivative of $x$ by
$$
\RDa x(t)=\frac{d}{dt}\int_t^b
\frac{-1}{\Gamma(1-\alpha(\t,t))}(\t-t)^{-\alpha(\t,t)} x(\t)d\t.
$$
\end{definition}
\begin{definition}
The left Caputo fractional derivative of order $\a$ of $x$ is defined by
$$
\LC x(t)=\int_a^t
\frac{1}{\Gamma(1-\alpha(t,\t))}(t-\t)^{-\alpha(t,\t)}x^{(1)}(\t)d\t
$$
and the right Caputo fractional derivative of $x$ by
$$
\RCa x(t)=\int_t^b
\frac{-1}{\Gamma(1-\alpha(\t,t))}(\t-t)^{-\alpha(\t,t)}x^{(1)}(\t)d\t.
$$
\end{definition}
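To make these definitions concrete, the following Python sketch (ours; a crude quadrature for illustration, not a scheme advocated here) approximates the left Caputo derivative of variable order by applying the midpoint rule to the defining integral, with a central difference standing in for $x^{(1)}$; the weakly singular factor $(t-\tau)^{-\alpha(t,\tau)}$ is integrable, so the midpoint rule remains usable, if slowly convergent.
\begin{verbatim}
import math

def left_caputo(x, alpha, a, t, n=2000):
    # Midpoint-rule approximation of the left Caputo derivative of
    # variable order alpha(t, tau) at time t (illustration only).
    h = (t - a) / n
    eps = 1e-6                        # step for central difference x'(tau)
    total = 0.0
    for i in range(n):
        tau = a + (i + 0.5) * h       # midpoint of the i-th subinterval
        al = alpha(t, tau)
        dx = (x(tau + eps) - x(tau - eps)) / (2 * eps)
        total += (t - tau) ** (-al) / math.gamma(1 - al) * dx * h
    return total

# Check: for x(t) = t and constant order 1/2, the exact derivative at
# t = 1 is 1/Gamma(3/2) ~ 1.1284.
print(left_caputo(lambda s: s, lambda t, tau: 0.5, 0.0, 1.0))
\end{verbatim}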
Motivated by the works \cite{Malin:Tor,MyID:207}, we consider here
a generalization of previous concepts by introducing a linear
combination of the fractional derivatives of variable fractional order.
\begin{definition}
\label{def1}
Let $\alpha, \beta: [a,b]^2\rightarrow(0,1)$
be two functions and $\gamma=(\gamma_1,\gamma_2) \in [0,1]^{2}$ a vector.
The combined Riemann--Liouville fractional derivative
of function $x$ is defined by
\begin{equation}
\label{eq:cfd:RL}
D_\gamma^{\a,\b}x(t)=\gamma_1 \, \LDa x(t)+\gamma_2 \, \RDb x(t).
\end{equation}
Similarly, the combined Caputo fractional derivative
of function $x$ is defined by
\begin{equation}
\label{eq:cfd:C}
^{C}D_\gamma^{\a,\b}x(t)=\gamma_1 \, \LC x(t)+\gamma_2 \, \RCb x(t).
\end{equation}
\end{definition}
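For orientation, we note two special cases that follow immediately from Definition~\ref{def1}: the choices $\gamma=(1,0)$ and $\gamma=(0,1)$ in \eqref{eq:cfd:C} recover the one-sided operators,
\begin{equation*}
{^{C}D_{(1,0)}^{\a,\b}}x(t)=\LC x(t)
\qquad \text{and} \qquad
{^{C}D_{(0,1)}^{\a,\b}}x(t)=\RCb x(t),
\end{equation*}
while intermediate values of $\gamma$ interpolate between the left and right derivatives.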
When dealing with variational problems and necessary optimality
conditions, an important ingredient is always an integration
by parts formula. Here, we present two such formulas,
involving the Caputo fractional derivative of variable order.
\begin{theorem}[See \protect{\cite[Theorem 3.2]{Od}}]
\label{thm:FIP}
If $x,y \in C^1[a,b]$, then
$$
\int_{a}^{b}y(t) \, \LC x(t)dt
=\int_a^b x(t) \, {\RDa}y(t)dt+\left[x(t)
\, {_tI_b^{1-\a}}y(t) \right]_{t=a}^{t=b}
$$
and
$$
\int_{a}^{b}y(t) \, {\RCa}x(t)dt=\int_a^b x(t)
\, {\LDa} y(t)dt-\left[x(t) \, {_aI_t^{1-\a}}y(t)\right]_{t=a}^{t=b}.
$$
\end{theorem}
To end this short introduction to
the fractional calculus of variable order, we introduce
one more piece of notation. The dual fractional derivative
of \eqref{eq:cfd:RL} is defined by
$$
D_{\overline{\gamma}}^{\b,\a}=\gamma_2 \, {_aD_t^{\b}}
+\gamma_1 \, {_tD_T^{\a}},
$$
where $\overline{\gamma}=(\gamma_2,\gamma_1)$
and $T\in[a,b]$ is the final time of the problem
under consideration (see \eqref{funct1} below).
The dual fractional derivative of
\eqref{eq:cfd:C} is defined similarly.
\subsection{The fractional calculus of variations}
Let $D$ denote the subset of $C^1([a,b])\times [a,b]$ defined by
\begin{equation*}
D=\left\{ (x,t)\in C^1([a,b])\times [a,b] : \DC x(t)
\mbox{ exists and is continuous on }[a,b] \right\}.
\end{equation*}
We endow $D$ with the following norm:
$$
\|(x,t)\|=\max_{a\leq t \leq b}|x(t)|
+\max_{a\leq t \leq b}\left| \DC x(t)\right|+|t|.
$$
To fix notation, throughout the text we denote by
$\partial_i \psi$ the partial derivative of a function
$\psi:\mathbb{R}^{n} \rightarrow\mathbb{R}$ with respect
to its $i$th argument, $i = 1, \ldots, n$. For simplicity of notation,
we also introduce the operator $[\cdot]_\gamma^{\alpha, \beta}$ defined by
$$
[x]_\gamma^{\alpha, \beta}(t)=\left(t, x(t), \DC x(t)\right).
$$
Let $L$ be a Lagrangian
$L:C^{1}\left([a,b]\times \mathbb{R}^2 \right)\to\mathbb{R}$.
Consider the following problem of the calculus of variations:
minimize functional $\mathcal{J}:D\rightarrow \mathbb{R}$ with
\begin{equation}
\label{funct1}
\mathcal{J}(x,T)=\int_a^T
L[x]_\gamma^{\alpha, \beta}(t) dt + \phi(T,x(T))
\end{equation}
over all $(x,T)\in D$ satisfying the initial condition $x(a)=x_a$,
for a given $x_a\in \mathbb{R}$. The terminal time $T$
and the terminal state $x(T)$ are considered here free.
The \emph{terminal cost function}
$\phi:[a,b]\times \mathbb{R}\to\mathbb{R}$ is at least of class $C^1$.
\begin{theorem}[See \cite{Tavares}]
\label{teo1}
If $(x,T)$ is a minimizer of functional \eqref{funct1} on $D$,
then $(x,T)$ satisfies the fractional differential equations
\begin{equation}
\label{ELeq_1}
\partial_2 L[x]_\gamma^{\alpha, \beta}(t)
+D{_{\overline{\gamma}}^{\b,\a}}\partial_3 L[x]_\gamma^{\alpha, \beta}(t)=0
\end{equation}
on $[a,T]$ and
\begin{equation}
\label{ELeq_2}
\gamma_2\left({\LDb}\partial_3 L[x]_\gamma^{\alpha, \beta}(t)
-{ _TD{_t^{\b}}\partial_3 L[x]_\gamma^{\alpha, \beta}(t)}\right)=0
\end{equation}
on $[T,b]$. Moreover, the following transversality conditions hold:
\begin{equation}
\label{CT1}
\begin{cases}
L[x]_\gamma^{\alpha, \beta}(T)+\partial_1\phi(T,x(T))+\partial_2\phi(T,x(T))x'(T)=0,\\
\left[\gamma_1 \, {_tI_T^{1-\a}} \partial_3L[x]_\gamma^{\alpha, \beta}(t)
-\gamma_2 \, {_TI_t^{1-\b}} \partial_3 L[x]_\gamma^{\alpha, \beta}(t)\right]_{t=T}
+\partial_2 \phi(T,x(T))=0,\\
\gamma_2 \left[ {_TI_t^{1-\b}}\partial_3 L[x]_\gamma^{\alpha, \beta}(t)
-{_aI_t^{1-\b}\partial_3L[x]_\gamma^{\alpha, \beta}(t)}\right]_{t=b}=0.
\end{cases}
\end{equation}
\end{theorem}
We can rewrite the transversality conditions
\eqref{CT1}, obtaining the next result.
\begin{theorem}[See \cite{Tavares}]
\label{teo2}
If $(x,T)$ is a minimizer of functional \eqref{funct1} on $D$,
then the fractional Euler--Lagrange equations \eqref{ELeq_1}
and \eqref{ELeq_2} are satisfied together with
the following transversality conditions:
\begin{equation*}
\begin{cases}
L[x]_\gamma^{\alpha, \beta}(T)+\partial_1\phi(T,x(T))\\
\qquad + x'(T) \left[ \gamma_2 {_TI}_t^{1-\b} \partial_3L[x]_\gamma^{\alpha, \beta}(t)
- \gamma_1 {_tI_T^{1-\a} \partial_3L[x]_\gamma^{\alpha, \beta}(t)} \right]_{t=T} =0,\\
\left[ \gamma_1\, {_tI_T^{1-\a}} \partial_3L[x]_\gamma^{\alpha, \beta}(t)
- \gamma_2\, {_TI_t^{1-\b}} \partial_3L[x]_\gamma^{\alpha, \beta}(t)\right]_{t=T}
+\partial_2 \phi(T,x(T))=0,\\
\gamma_2 \left[ _TI_t^{1-\b}\partial_3 L[x]_\gamma^{\alpha, \beta}(t)
-{_aI_t^{1-\b}\partial_3L[x]_\gamma^{\alpha, \beta}(t)}\right]_{t=b}=0.
\end{cases}
\end{equation*}
\end{theorem}
\section{Herglotz's variational principle}
\label{sec:theorems}
In this section we present a fractional variational principle
of Herglotz depending on Caputo fractional derivatives.
Let $\alpha,\beta: [a,b]^2\rightarrow (0,1)$ be two functions.
The fractional Herglotz variational problem that we study
consists in the determination of trajectories
$x \in C^{1}\left([a,b]\right)$, satisfying a given initial
condition $x(a)=x_{a} \in \mathbb{R}$, and a real $T \in [a,b]$
that extremize the value of $z(T)$, where $z$ satisfies
the following differential equation with dependence
on a combined Caputo fractional derivative operator:
\begin{equation}
\label{funct1H}
\dot{z}(t)=L\left(t,x(t), \DC x(t), z(t) \right),
\quad t \in [a,b],
\end{equation}
subject to the initial condition
\begin{equation}
\label{ICond}
z(a)=z_{a},
\end{equation}
where $z_a$ is a given real number.
In the sequel, we use the auxiliary notation
$$
[x,z]_\gamma^{\alpha, \beta}(t)=\left(t,x(t), \DC x(t), z(t) \right).
$$
The Lagrangian $L$ is assumed to satisfy the following hypotheses:
\begin{enumerate}
\item $L \in C^{1}\left([a,b] \times \mathbb{R}^{3},\mathbb{R}\right)$,
\item $t\rightarrow \lambda(t)\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)$
is such that $_TD_t^{\b} \left( \lambda(t)
\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)\right)$,
\begin{equation*}
\LDb \left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right)\
\text{ and } \
D_{\overline{\gamma}}^{\b,\a}
\left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right)
\end{equation*}
exist and are continuous on $[a,b]$, where
$$
\lambda(t)=\exp \left(-\int_a^t \partial_4
L\left[x,z \right]_\gamma^{\alpha, \beta}(\tau)d\tau \right).
$$
\end{enumerate}
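Observe that $\lambda(a)=1$, that $\lambda(t)>0$ for all $t\in[a,b]$, and that $\lambda$ satisfies the linear differential equation
\begin{equation*}
\dot{\lambda}(t)=-\partial_4 L[x,z]_\gamma^{\alpha, \beta}(t)\,\lambda(t),
\end{equation*}
which is precisely what allows it to serve as an integrating factor in the proof of Theorem~\ref{mainteo} below.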
The following result gives necessary conditions
of Euler--Lagrange type for an admissible
function $x$ to be solution of the problem.
\begin{theorem}
\label{mainteo}
Let $x \in C^{1}\left([a,b]\right)$ be such that $z$ defined
by \eqref{funct1H} subject to the initial condition \eqref{ICond}
has an extremum. Then, $(x,z)$ satisfies the fractional differential equations
\begin{equation}
\label{CEL1_Herg}
\partial_2 L[x,z]_\gamma^{\alpha, \beta}(t)\lambda(t)
+D{_{\overline{\gamma}}^{\b,\a}}\left(\lambda(t)
\partial_3 L[x,z]_\gamma^{\alpha, \beta}(t)\right)=0
\end{equation}
on $[a,T]$ and
\begin{equation}
\label{CEL2_Herg}
\gamma_2\left({\LDb} \left(\lambda(t)
\partial_3 L[x,z]_\gamma^{\alpha, \beta}(t)\right)
-{ _TD{_t^{\b}}\left(\lambda (t)\partial_3
L[x,z]_\gamma^{\alpha, \beta}(t)\right)}\right)=0
\end{equation}
on $[T,b]$.
Moreover, the following transversality conditions are satisfied:
\begin{equation}
\label{CTransH}
\left\{
\begin{array}{l}
\left[\gamma_1 {_tI_T^{1-\a}} \left(\lambda (t)
\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)\right)
-\gamma_2 {_TI_t^{1-\b}} \left(\lambda (t)
\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)\right)\right]_{t=T}=0,\\
\gamma_2 \left[ {_TI_t^{1-\b}} \left( \lambda (t)
\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)\right) -{_aI_t^{1-\b} \left( \lambda (t)
\partial_3L[x,z]_\gamma^{\alpha, \beta}(t)\right)}\right]_{t=b}=0.
\end{array}\right.
\end{equation}
If $T<b$, then $L[x,z]_\gamma^{\alpha, \beta}(T)=0$.
\end{theorem}
\begin{proof}
Let $x$ be a solution to the problem and consider an admissible variation
of $x$, $\overline {x}= x+\e{h}$, where $h\in C^1([a,b])$ is an arbitrary
perturbation curve and $\e \in\mathbb{R}$ represents a small number
$\left(\e\rightarrow 0\right)$. The constraint $x(a)=x_a$ implies
that all admissible variations must fulfill the condition $h(a)=0$.
On the other hand, consider an admissible variation of $z$,
$\overline {z}= z+\e\theta$, where $\theta$ is a perturbation
curve (not arbitrary) such that
\begin{enumerate}
\item $\theta(a)=0$, so that $z(a)=z_{a}$,
\item $\theta (T)=0$, because $z(T)$ is a maximum
($\overline{z}(T)-z(T) \leq 0$) or a minimum
($\overline{z}(T)-z(T)\geq0$),
\item $\theta (t)= \dfrac {d}{d \varepsilon} z(\overline {x},t)
\biggr\rvert_{\varepsilon=0}$, so that the variation satisfies
equation \eqref{funct1H}.
\end{enumerate}
Differentiating $\theta$ with respect to $t$, we obtain that
\begin{equation*}
\begin{split}
\dfrac{d}{dt}\theta (t)
& =\dfrac{d}{dt} \dfrac{d}{d\varepsilon}
z(\overline {x},t) \biggr\rvert_{\varepsilon=0}\\
&= \dfrac{d}{d\varepsilon}\dfrac{d}{dt}
z(\overline {x},t) \biggr\rvert_{\varepsilon=0}\\
&= \dfrac{d}{d\varepsilon} L\left(t,x(t)+\e{h(t)},
\DC x(t)+\e\DC {h(t)}, z(t) \right) \biggr\rvert_{\varepsilon=0}
\end{split}
\end{equation*}
and rewriting this relation, we obtain
the following differential equation for $\theta$:
\begin{equation*}
\dot{\theta}(t) - \partial_{4}
L[x,z]_\gamma^{\alpha, \beta}(t) \theta(t)
=\partial_{2} L[x,z]_\gamma^{\alpha, \beta}(t) h(t)
+ \partial_{3}L[x,z]_\gamma^{\alpha, \beta}(t) \DC h(t).
\end{equation*}
Considering $\lambda(t)=\exp \left(- \displaystyle
\int_a^{t} \partial_{4}L[x,z]_\gamma^{\alpha, \beta}(\tau)d\tau \right)$,
we obtain the solution for the last differential equation:
\begin{equation*}
\theta(T)\lambda(T) - \theta(a)
= \int_a^{T} \left(\partial_{2} L[x,z]_\gamma^{\alpha, \beta}(t) h(t)
+ \partial_{3}L[x,z]_\gamma^{\alpha, \beta}(t) \DC h(t) \right) \lambda(t) dt.
\end{equation*}
By hypothesis, $\theta(a)=0$. If $x$ is such that $z(x,t)$ defined
by \eqref{funct1H} attains an extremum at $t=T$,
then $\theta(T)=0$. Hence, we get
\begin{equation}
\label{solutionPH}
\int_a^{T} \left(\partial_{2} L[x,z]_\gamma^{\alpha, \beta}(t)h(t)
+ \partial_{3}L[x,z]_\gamma^{\alpha, \beta}(t) \DC h(t) \right) \lambda(t) dt = 0.
\end{equation}
Considering only the second term in \eqref{solutionPH},
and the definition of the combined Caputo derivative,
we obtain that
\begin{equation*}
\begin{split}
&\int_a^{T} \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \left( \gamma_{1}
\LC h(t) + \gamma_{2} \RCb h(t) \right) dt\\
&=\gamma_{1} \int_a^{T} \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \LC h(t)dt\\
&\ +\gamma_{2} \left[ \int_a^{b} \lambda(t) \partial_{3}
L[x,z]_\gamma^{\alpha, \beta}(t) \RCb h(t)dt - \int_T^{b} \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \RCb h(t)dt \right]\\
&=\star.
\end{split}
\end{equation*}
Using Theorem~\ref{thm:FIP}, and considering
$\overline {\gamma} =(\gamma_{2}, \gamma_{1})$,
we get
\begin{equation*}
\begin{split}
\star
&= \int_a^{T} h(t) D_{\overline{\gamma}}^{\b,\a}
\left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) dt \\
&+ \int_T^{b} \gamma_{2} h(t) \left[ \LDb \left(
\lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right)
- _TD_t^{\b} \left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right] dt\\
& + h(T) \left[ \gamma_{1} {}_tI_{T}^{1-\a} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) - \gamma_{2} {}_TI_{t}^{1-\b}
\left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right]_{t=T}\\
&+ h(b) \gamma_{2} \left[ _TI_{t}^{1-\b} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) - _aI_{t}^{1-\b} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right]_{t=b}.
\end{split}
\end{equation*}
Substituting this relation into expression \eqref{solutionPH}, we obtain
\begin{equation*}
\begin{split}
& \int_a^{T} h(t) \left[ \partial_{2} L[x,z]_\gamma^{\alpha, \beta}(t) \lambda(t)
+ D_{\overline{\gamma}}^{\b,\a} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right)\right] dt \\
&+ \int_T^{b} \gamma_{2} h(t) \left[ \LDb \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) - _TD_t^{\b} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right] dt\\
&+ h(T) \left[ \gamma_{1} {}_tI_{T}^{1-\a} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) - \gamma_{2} {}_TI_{t}^{1-\b}
\left( \lambda(t) \partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right]_{t=T}\\
&+ h(b) \gamma_{2} \left[ _TI_{t}^{1-\b} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) - _aI_{t}^{1-\b} \left( \lambda(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t) \right) \right]_{t=b} =0.
\end{split}
\end{equation*}
With appropriate choices for the variations $h(\cdot)$,
we get the Euler--Lagrange equations \eqref{CEL1_Herg}--\eqref{CEL2_Herg}
and the transversality conditions \eqref{CTransH}.
\end{proof}
\begin{remark}
If $\a$ and $\b$ tend to 1, and if the Lagrangian $L$
is of class $C^2$, then the first Euler--Lagrange
equation \eqref{CEL1_Herg} becomes
$$
\partial_2 L[x,z]_\gamma^{\alpha, \beta}(t)\lambda(t)+(\gamma_2- \gamma_1)
\frac{d}{dt}\left[\lambda(t)\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t)\right]=0.
$$
Differentiating and considering the derivative
of the lambda function, we get
\begin{multline*}
\lambda(t) \Biggl[\partial_2 L[x,z]_\gamma^{\alpha, \beta}(t)\\
+(\gamma_2 - \gamma_1)\left[-\partial_{4} L[x,z]_\gamma^{\alpha, \beta}(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t)
+\frac{d}{dt}\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t)\right]\Biggr]=0.
\end{multline*}
As $\lambda(t)>0$ for all $t$, we deduce that
$$
\partial_2 L[x,z]_\gamma^{\alpha, \beta}(t)
+(\gamma_2- \gamma_1)\left[
\frac{d}{dt}\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t)
-\partial_{4} L[x,z]_\gamma^{\alpha, \beta}(t)
\partial_{3} L[x,z]_\gamma^{\alpha, \beta}(t)\right]=0.
$$
\end{remark}
\section{The case of several independent variables}
We now obtain a generalization of Herglotz's principle
of Section~\ref{sec:theorems} for problems involving
$n+1$ independent variables. Define
$\Omega=\prod_{i=1}^{n} [a_i,b_i]$, with $n \in \mathbb{N}$,
$P=[a,b]\times \Omega$ and consider the vector
$s=(s_1, s_2, \ldots, s_n)\in \Omega$. The new problem consists
in determining the trajectories $x \in C^{1}\left(P\right)$
that give an extremum to $z[x,T]$, where the functional
$z$ satisfies the differential equation
\begin{multline}
\label{funct_siv1}
\dot{z}(t)=\int_{\Omega}L\left(t,s, x(t,s), \DC x(t,s),\right.\\
\left.^CD_{\gamma^1}^{\alpha_1(\cdot,\cdot),\beta_1(\cdot,\cdot)}x(t,s),\ldots, ^CD_{\gamma^n}^{\alpha_n(\cdot,\cdot),\beta_n(\cdot,\cdot)}x(t,s), z(t) \right)d^{n}s
\end{multline}
subject to the constraint
\begin{equation}
\label{herg:bound}
x(t,s)=g(t,s)\quad \mbox{ for all }(t,s) \in \partial P,
\end{equation}
where $\partial P$ is the boundary of $P$ and $g$ is a given function
$g:\partial P \rightarrow \mathbb{R}$. Assume that
\begin{enumerate}
\item $\alpha, \alpha_i, \beta,
\beta_i: [a,b]^2 \rightarrow (0,1)$ with $i=1,\ldots, n$,
\item $\gamma,\gamma^1, \ldots, \gamma^n \in [0,1]^2$,
\item $d^{n}s=ds_1\ldots ds_n$,
\item $\DC x(t,s)$, $^CD_{\gamma^1}^{\alpha_1(\cdot,\cdot),\beta_1(\cdot,\cdot)}x(t,s),
\ldots, ^CD_{\gamma^n}^{\alpha_n(\cdot,\cdot),\beta_n(\cdot,\cdot)}x(t,s)$
exist and are continuous functions,
\item the Lagrangian $L:P\times\mathbb{R}^{n+3}
\rightarrow \mathbb{R}$ is of class $C^1$.
\end{enumerate}
\begin{remark}
By $\DC x(t,s)$ we mean the Caputo fractional derivative
with respect to the independent variable $t$ and by
$^CD_{\gamma^i}^{\alpha_i(\cdot,\cdot),\beta_i(\cdot,\cdot)}x(t,s)$
we mean the Caputo fractional derivative with respect
to the independent variable $s_i$, $i=1,\ldots,n$.
\end{remark}
In the sequel, we use the auxiliary notation
\begin{multline*}
[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)
=\left(t,s,x(t,s), \DC x(t,s),
^CD_{\gamma^1}^{\alpha_1(\cdot,\cdot),
\beta_1(\cdot,\cdot)}x(t,s),\right.\\
\left. \ldots, ^CD_{\gamma^n}^{\alpha_n(\cdot,\cdot),
\beta_n(\cdot,\cdot)}x(t,s), z(t) \right).
\end{multline*}
Consider the function
$$
\lambda(t)=\exp \left(-\int_a^t
\int_{\Omega} \partial_{2n+4}
\left[x,z \right]_{n, \gamma}^{\alpha, \beta}(\tau,s) d^ns d\tau \right).
$$
\begin{theorem}
If $(x,z)$ is an extremizer to the functional \eqref{funct_siv1},
then $(x,z)$ satisfies the fractional differential equation
\begin{multline}
\label{CEL1_Herg2}
\partial_{n+2} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\lambda(t)
+D{_{\overline{\gamma}}^{\b,\a}}\left(\lambda(t)\partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right)\\
+ \sum_{i=1}^{n} D_{\overline{\gamma}^{i}}^{\beta_i(\cdot,\cdot),
\alpha_i(\cdot,\cdot)}\left(\lambda(t) \partial_{n+3+i}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right)=0
\end{multline}
on $[a,T] \times \Omega$ and
\begin{equation}
\label{CEL2_Herg2}
\gamma_2\left({\LDb} \left(\lambda(t)\partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right)
-{ _TD{_t^{\b}}\left(\lambda (t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right)}\right)=0
\end{equation}
on $[T,b]\times \Omega$. Moreover, $(x,z)$
satisfies the transversality condition
\begin{multline}
\label{CTransH2}
\Bigl[\gamma_1 {_tI_T^{1-\a}} \left(\lambda (t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right) \\
-\gamma_2 {_TI_t^{1-\b}} \left(\lambda(t)\partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\right)\Bigr]_{t=T}=0,
\quad s \in\Omega.
\end{multline}
If $T<b$, then $\displaystyle \int_{\Omega}
L[x,z]_{n, \gamma}^{\alpha, \beta}(T,s)d^{n}s=0$.
\end{theorem}
\begin{proof}
Let $x$ be a solution to the problem. Consider an admissible variation of $x$,
$\overline {x}(t,s)= x(t,s)+\e{h(t,s)}$, where $h\in C^1(P)$ is an arbitrary
perturbing curve and $\e \in\mathbb{R}$ is such that $|\e|\ll 1$.
Consequently, from the boundary condition \eqref{herg:bound},
$h(t,s)=0$ for all $(t,s)\in \partial P$. On the other hand,
consider an admissible variation of $z$, $\overline {z}= z+\e\theta$,
where $\theta$ is a perturbing curve such that $\theta (a)=0$ and
$$
\theta(t)= \dfrac{d}{d \varepsilon}
z(\overline {x},t) \biggr\rvert_{\varepsilon=0}.
$$
Differentiating $\theta(t)$ with respect to $t$, we obtain that
\begin{equation*}
\begin{split}
\dfrac{d}{dt}\theta (t)
&=\dfrac{d}{dt} \dfrac{d}{d\varepsilon} z(\overline {x},t)
\biggr\rvert_{\varepsilon=0}\\
&= \dfrac{d}{d\varepsilon}\dfrac{d}{dt} z\left(\overline {x},t\right)
\biggr\rvert_{\varepsilon=0}\\
&= \dfrac{d}{d\varepsilon} \int_{\Omega}
L[\overline{x},z]_{n, \gamma}^{\alpha, \beta}(t,s) d^{n}s
\biggr\rvert_{\varepsilon=0}.
\end{split}
\end{equation*}
We conclude that
\begin{multline*}
\dot{\theta}(t)
=\int_{\Omega} \left( \partial_{n+2}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t) h(t,s)
+\partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \DC h(t,s)\right.\\
+\left. \sum_{i=1}^{n}\partial_{n+3+i}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) ^CD_{\gamma^{i}}^{\alpha_i(\cdot,\cdot),
\beta_i(\cdot,\cdot)}h(t,s)+\partial_{2n+4}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \theta(t)\right) d^{n}s.
\end{multline*}
To simplify the notation, define
$$
B(t)=\int_{\Omega}\partial_{2n+4}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)d^{n}s
$$
and
\begin{multline*}
A(t)
=\int_{\Omega}\Bigl( \partial_{n+2}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t) h(t,s)
+\partial_{n+3} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)
\DC h(t,s)\\
+\sum_{i=1}^{n}\partial_{n+3+i}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)
^CD_{\gamma^{i}}^{\alpha_i(\cdot,\cdot),
\beta_i(\cdot,\cdot)}h(t,s) \Bigr) d^{n}s.
\end{multline*}
Then, we obtain the linear differential equation
$$
\dot{\theta}(t)-B(t)\theta(t)=A(t),
$$
whose solution is
\begin{equation*}
\theta(T)\lambda(T) - \theta(a) = \int_a^{T} A(t) \lambda(t) dt.
\end{equation*}
Since $\theta(a)=\theta(T)=0$, we get
\begin{equation}
\label{solutionPH2}
\int_a^{T} A(t)\lambda(t)dt = 0.
\end{equation}
Considering only the second term in \eqref{solutionPH2}, we can write
\begin{equation*}
\begin{split}
\int_a^{T}
&\int_{\Omega}\lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)
\left( \gamma_{1} \LC h(t,s) + \gamma_{2} \RCb h(t,s) \right) d^{n}s dt\\
&=\gamma_{1} \int_a^{T} \int_{\Omega}\lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \LC h(t,s)d^{n}s dt\\
&\quad +\gamma_{2} \left[ \int_a^{b} \int_{\Omega}\lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \RCb h(t,s) d^{n}s dt\right.\\
& \qquad\qquad - \left.\int_T^{b} \int_{\Omega}\lambda(t)
\partial_{n+3} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \RCb h(t,s) d^{n}sdt \right].
\end{split}
\end{equation*}
Let $\overline {\gamma} =(\gamma_{2}, \gamma_{1})$.
Integrating by parts (cf. Theorem~\ref{thm:FIP})
and since $h(a,s)=0$ and $h(b,s)=0$ for all $s \in \Omega$,
we obtain the following expression:
\begin{equation*}
\begin{split}
&\int_a^{T} \int_{\Omega} h(t,s) D_{ \overline{\gamma}}^{\b,\a}
\left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) d^{n}sdt \\
&+ \gamma_{2}\int_T^{b} \int_{\Omega} h(t,s) \left[
\LDb \left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right)\right.\\
&\qquad\qquad\qquad\qquad\qquad
-\left. _TD_t^{\b} \left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) \right] d^{n}sdt\\
& + \int_{\Omega} h(T,s) \left[ \gamma_{1} {}_tI_{T}^{1-\a} \left( \lambda(t)
\partial_{n+3} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right)\right.\\
&\qquad\qquad\qquad\qquad \left. - \gamma_{2} \, _TI_{t}^{1-\b}
\left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) d^n s\right]_{t=T}.
\end{split}
\end{equation*}
Proceeding similarly for the $(i+2)$th term of \eqref{solutionPH2},
$i=1,\ldots, n$, letting $\overline {\gamma}^i =(\gamma_{2}^{i},
\gamma_{1}^{i})$, and since $h(t,a_i)=h(t,b_i)=0$ for all $t \in [a,b]$,
we obtain
\begin{multline*}
\int_a^{T} \int_{\Omega}\lambda(t) \partial_{n+3+i}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \left( \gamma_{1}^{i}
{_{a_i}^CD_{s_i}^{\alpha_{i}(\cdot,\cdot)} }h(t,s)
+ \gamma_{2}^{i} {_{s_i}^CD_{b_i}^{\beta_{i}(\cdot,\cdot)}} h(t,s)\right) d^{n}s dt\\
=\int_a^{T} \int_{\Omega} h(t,s) D_{\overline {\gamma}^i}^{~\beta_{i}(\cdot,\cdot),
\alpha_{i}(\cdot,\cdot)} \left(\lambda(t)\partial_{n+3+i}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) d^{n}s dt.
\end{multline*}
Substituting these relations into \eqref{solutionPH2}, we deduce that
\begin{equation*}
\begin{split}
\int_a^{T} &\int_{\Omega} h(t,s)\left[ \partial_{n+2}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)\lambda(t)
+ D_{ \overline{\gamma}}^{\b,\a}
\left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) \right.\\
&\left.+\sum_{i=1}^{n} D_{\overline {\gamma}^i}^{~\beta_{i}(\cdot,\cdot),\alpha_{i}(\cdot,\cdot)} \left(\lambda(t)\partial_{n+3+i} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right)\right] d^{n}s\,dt \\
&+ \gamma_{2}\int_T^{b} \int_{\Omega} h(t,s) \left[
\LDb\left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right)\right.\\
& \qquad\qquad\qquad\qquad\quad \left.
- _TD_t^{\b} \left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right) \right] d^{n}sdt\\
&+ \int_{\Omega} h(T,s) \left[ \gamma_{1} {}_tI_{T}^{1-\a}
\left( \lambda(t) \partial_{n+3}
L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s) \right)\right.\\
&\qquad\qquad\qquad\qquad\left.
-\gamma_{2} \, _TI_{t}^{1-\b} \left( \lambda(t)
\partial_{n+3} L[x,z]_{n, \gamma}^{\alpha, \beta}(t,s)
\right) \right]_{t=T} d^{n}s = 0.
\end{split}
\end{equation*}
We get the Euler--Lagrange equations
\eqref{CEL1_Herg2}--\eqref{CEL2_Herg2}
and the transversality condition \eqref{CTransH2}
with appropriate choices of $h$.
\end{proof}
\section{Illustrative examples}
We present three examples.
\begin{example}
\label{ex:2}
Consider
\begin{equation}
\label{exemp2}
\begin{gathered}
\dot{z}(t)=\left(\DC x(t)\right)^2+z(t)+t^2-1, \quad t\in [0,3],\\
x(0)=1, \quad z(0)=0.
\end{gathered}
\end{equation}
In this case, $\lambda(t)=\exp(-t)$.
The necessary optimality conditions \eqref{CEL1_Herg}--\eqref{CEL2_Herg}
of Theorem~\ref{mainteo} hold for $\overline{x}(t) \equiv 1$.
If we replace $x$ by $\overline{x}$ in \eqref{exemp2}, we obtain
\begin{gather*}
\dot{z}(t)-z(t)=t^2-1, \quad t\in [0,3],\\
z(0)=0,
\end{gather*}
whose solution is
\begin{equation}
\label{z:sol}
z(t)=\exp(t)-(t+1)^2.
\end{equation}
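This can be verified directly: with $z$ given by \eqref{z:sol},
\begin{equation*}
\dot{z}(t)-z(t)
=\left(\exp(t)-2(t+1)\right)-\left(\exp(t)-(t+1)^2\right)
=(t+1)^2-2(t+1)
=t^2-1,
\end{equation*}
and $z(0)=\exp(0)-1=0$.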
The last transversality condition of Theorem~\ref{mainteo} asserts that
$$
L[\overline x,z]_\gamma^{\alpha, \beta}(T)=0 \Leftrightarrow \exp(T)-2T-2=0,
$$
whose solution is approximately
$$
T\approx 1.67835.
$$
We remark that the function $z$ in \eqref{z:sol} actually attains a minimum value
at this point (see Figure~\ref{heg1},~(a)):
$$
z(1.67835)\approx -1.81685.
$$
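The numerical values of $T$ and $z(T)$ are easily reproduced; the short Python sketch below (our illustration, not part of the original computation) applies Newton's method to $\exp(T)-2T-2=0$ and evaluates $z$ at the root.
\begin{verbatim}
import math

T = 1.5                               # initial guess in (0, 3)
for _ in range(20):                   # Newton's method for exp(T)-2T-2 = 0
    T -= (math.exp(T) - 2*T - 2) / (math.exp(T) - 2)

print(T)                              # ~ 1.67835
print(math.exp(T) - (T + 1)**2)       # z(T) ~ -1.81685
\end{verbatim}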
\end{example}
\begin{example}
\label{ex:3}
Consider now
\begin{equation}
\label{exemp3}
\begin{gathered}
\dot{z}(t)=(t-1)\left(x^2(t)+z^2(t)+1\right), \quad t\in [0,3],\\
x(0)=0, \quad z(0)=0.
\end{gathered}
\end{equation}
Since the first Euler--Lagrange equation \eqref{CEL1_Herg} reads
$$
(t-1)x(t)=0 \quad \forall \, t\in[0,T],
$$
we see that $\overline x(t) \equiv 0$ is a solution of this equation.
The second transversality condition of \eqref{CTransH}
asserts that, at $t=T$, we must have
$$
L[\overline x,z]_\gamma^{\alpha, \beta}(t)=0,
$$
that is,
$$
(t-1)(z^2(t)+1)=0,
$$
and so $T=1$ is a solution of this equation.
Substituting $x$ by $\overline{x}$ in \eqref{exemp3}, we get
\begin{gather*}
\dot{z}(t)=(t-1)(z^2(t)+1), \quad t\in [0,3],\\
z(0)=0.
\end{gather*}
The solution to this Cauchy problem
is the function
$$
z(t)=\tan\left(\frac{t^2}{2}-t\right)
$$
(see Figure~\ref{heg1},~(b)) and the minimum value is
$$
z(1)=\tan\left(-\frac{1}{2}\right).
$$
\end{example}
\begin{example}
\label{ex:1}
For our last example, consider
\begin{equation}
\label{exemp}
\begin{gathered}
\dot{z}(t)=\left(\DC x(t) - f(t)\right)^2+t^2-1, \quad t\in [0,3],\\
x(0)=0, \quad z(0)=0,
\end{gathered}
\end{equation}
where
$$
f(t) := \frac{t^{1-\alpha(t)}}{2\Gamma(2-\alpha(t))}
-\frac{(3-t)^{1-\beta(t)}}{2\Gamma(2-\beta(t))}.
$$
In this case, $\lambda(t)\equiv 1$. We intend to find a pair $(x,z)$,
satisfying all the conditions in \eqref{exemp},
for which $z(T)$ attains a minimum value.
It is easy to verify that $\overline{x}(t) = t$ and $T=1$ satisfy
the necessary conditions given by Theorem~\ref{mainteo}.
Replacing $x$ by $\overline{x}$ in system \eqref{exemp},
we get a Cauchy problem of the form
\begin{gather*}
\dot{z}(t)=t^2-1, \quad t\in [0,3],\\
z(0)=0,
\end{gather*}
whose solution is
$$
z(t)=\frac{t^3}{3}-t.
$$
Observe that this function attains a minimum value
at $T=1$, which is $z(1)=-2/3$ (Figure~\ref{heg1},~(c)).
\end{example}
\begin{figure}[ht!]
\begin{center}
\subfigure[Extremal $z$ of Example~\ref{ex:2}.]{\includegraphics[scale=0.23]{heg2.eps}}
\subfigure[Extremal $z$ of Example~\ref{ex:3}.]{\includegraphics[scale=0.23]{heg3.eps}}
\subfigure[Extremal $z$ of Example~\ref{ex:1}.]{\includegraphics[scale=0.23]{heg1.eps}}
\caption{Graphs of the function $z(\overline x,t)$.}\label{heg1}
\end{center}
\end{figure}
\section*{Acknowledgements}
The authors are grateful to two anonymous referees
for their comments.
\section{Introduction}
Understanding the physics determining charge transport at interfaces between metal electrodes and molecules is key to the advancement of the field of molecular electronics. In a molecular device, the alignment of frontier molecular orbital levels relative to the metals' Fermi energies determines the contribution of the different channels available for transport. Due to their proximity to the electrodes, the levels themselves are shifted relative to those of the molecule in gas phase, and may hybridize with electrode levels as well. Together, level alignment and hybridization determine electron transport in the molecular junction.
In this paper we describe an approach to investigating these effects based on density-functional theory (DFT)\cite{Jones1989} and the non-equilibrium Green's functions (NEGF) formalism.\cite{Meir1992,Datta2000,Brandbyge2002,Stokbro2003a,Evers2003,Rocha2006,Rothig2006,Arnold2007,Verzijl2012}
DFT is frequently used in calculations of charge transport because of its efficiency, and because computationally it scales well to realistic nanoscale junction sizes. It does suffer from a few drawbacks, however, the most important of which are poor predictions of one- and two-particle excitations.\cite{Jones1989,Burke2012}
The reason for the failure of DFT to predict excitation energies from a single neutral-state calculation is mainly due to the inclusion of spurious self-interactions,\cite{Perdew1981,Toher2005} and the omission of dynamic polarization effects.\cite{Hybertsen1986,Neaton2006}
Both effects are captured in GW calculations,\cite{Aryasetiawan1998,Neaton2006,Thygesen2009} usually within the COHSEX approach,\cite{Hybertsen1986} and time-dependent density-functional theory (TDDFT).\cite{Stefanucci2004,Kurth2005,Perfetto2010} However, these are computationally expensive and not (yet) feasible except for very small molecules, in contrast to DFT-based approaches.
Approximate methods have been proposed and used with some success to address the shortcomings of DFT in predicting excitations. These include the use of a scissors-operator\cite{Quek2007,Mowbray2008} and simple image-charge models based on atomic charges,\cite{Hedegaard2005,Neaton2006,Kaasbjerg2008,Mowbray2008}
used to address the location of resonant levels in the transport region of the molecular device.
In this paper, we focus on the latter and argue that image charges used in an electrostatic-energy calculation should be taken from the molecule in the presence of contacts rather than from the gas phase.
In section \ref{models} we provide a brief introduction to interface effects, and outline our method for the calculation of the image-charge effects.
In section \ref{Sec:BDA}, we apply our method to the 1,4-benzenediamine molecule between two gold electrodes and compare it to other approaches which have appeared in the literature\cite{Quek2007,Kaasbjerg2008,Mowbray2008}. Then, in section~\ref{Sec:ZnTPP}, we cover the application of our method to Zn-porphyrin devices studied in recent experiments by Perrin \emph{et al.}\xspace \cite{Perrin2013}, in which image-charge effects play an important role.
\section{Theoretical model}\label{models}
\begin{figure*}
\subfloat[]{
\includegraphics[width=1.2\columnwidth]{landscape4}\label{fg:shiftstoon0}
}\\
\subfloat[]{
\includegraphics[width=1.0\columnwidth]{landscape7a}\label{fg:shiftstoon1}
}
\subfloat[]{
\raisebox{.4cm}{\includegraphics[width=1.0\columnwidth]{landscape7b}\label{fg:shiftstoon2}}
}
\caption{\textbf{Energy landscape during the formation of a metal-molecule interface.}
(a) Combined rigid and dynamical image-charge effects on molecular levels at a single interface, relative to the molecule in isolation far away. These are a superposition of (b) and (c), where in (b) the surface dipole (shaded red/green) raises the background potential by $V_s-V_\infty$. The static image-charge effect, intrinsic molecular and interface dipoles shift the molecular levels back by $\Delta$, while electrostatic gating shifts by $\beta V_g$. (c) Levels are also subject to renormalization of the gap between the electron affinity $\epsilon_{EA}$ and the ionization potential $\epsilon_{IP}$ levels, where the prime indicates the position after the shift.
}\label{fg:landscape}
\end{figure*}
We illustrate the most important physical effects as a molecule approaches a clean metal surface in Fig.~\ref{fg:landscape}, following Ishii \emph{et al.}\xspace\cite{Ishii2000}
The shift of the levels occurring when a molecule is moved towards a metal surface can be divided into two classes. The first type of shift is due to a change
in the background potential induced by the proximity of the metal and, although different molecular orbitals may shift differently from their gas phase values, generally the \emph{direction} of these shifts is the same for all, and the differences between them are rather small. Therefore, we denote these shifts as ``rigid''. Usually, these shifts are upward with respect to the gas phase.
These background effects have their origin in the so-called ``push back'', or ``pillow'' effect, which refers to the reduction of the spill-out of electronic charge from the surface occurring for a clean metal surface. This spill-out results in a surface dipole which increases the work function.
As the push back effect reduces this spill-out (the molecule pushes the electronic charge back into the metal) it \emph{lowers} the work function.\cite{Seki1997,Ishii1999,Ishii2000,Vazquez2004a,Vazquez2004b}
A second mechanism resulting in a uniform shift of the orbital levels is charge transfer as a result of chemisorption, which also
changes the surface dipole. Finally, the charge distribution on the
(possibly neutral) molecule generates an image charge distribution in the metal. The potential between the charges on the
molecule and their images then results in a shift. The uniform shift resulting from all three mechanisms is denoted as $\Delta$
-- see Fig.~\ref{fg:shiftstoon0}.
Oszwaldowski \emph{et al.}\xspace have introduced a many-body method based on DFT\cite{Oszwaldowski2003} for capturing some of this dependence, which derives from dipole and pillow effects.
The length scale over which the changes to the energy landscape due to a surface dipole layer take place is related to the lateral extent of the surface dipole layer formed at the metal surface. This is typically the scale of the electrode in a mechanically controlled break-junction (MCBJ) experiment, which is of the order of $5$ nm.\footnote{Estimated from the fits of the junction area in Perrin \emph{et al.}\xspace's experiments\cite{Perrin2013}, which fit this as $28$ nm$^2$ and considered the range of $10-50$ nm$^2$ as representative.}
The magnitude of $\Delta$ is suggested by the measurements summarized by Ishii \emph{et al.}\xspace: roughly $0.5-1$ eV, typically a negative correction on an Au substrate.
Measurements by Koch \emph{et al.}\xspace\cite{Duhm2008,Broker2010,Niederhausen2011} on thin-films with different molecules support these considerations: they find a constant-shift region very near the interface, followed by a linear shift of $\sim1$ eV over a range of roughly $8$\AA\xspace beyond which a regime with constant $\Delta$ sets in.
In addition to this rigid shift, there is a shift which differs strongly between occupied and unoccupied levels, causing the transport gap between them to close (``renormalize'') as the molecule approaches the surface. The upward shift of occupied levels is caused by the fact that
an electron moving away from the molecule leaves a positive hole behind. The electrostatic attraction that must be overcome when moving an electron from the molecule to
infinity is responsible for a substantial part of the ionization potential of the molecule. If the molecule is close to a metal, removing an electron
from it will not only leave a positive hole behind, but also a negative image charge in the metal bulk. This \emph{reduces} the binding energy and
therefore the ionization potential (IP).
Adding an electron to the molecule usually costs energy -- this is the \emph{addition energy}, AE. Close to a metal surface, however, the additional
electron feels an attraction from the positive image charge it creates in the metal. Therefore, the addition energy is also reduced. We see that the
gap between the occupied and unoccupied levels thus shrinks; this is denoted the gap renormalization. In a transport junction, this gap
is called the \emph{transport gap}. It should be noted that the above discussion relies on weak coupling between the molecule and the metal, implying
a preferentially integer electron occupation on the molecule. The rigid shift and the gap renormalization are schematically represented in Fig.~\ref{fg:shiftstoon0}.
Gap renormalization has been studied extensively in the literature \cite{Quek2007,Neaton2006,Hybertsen1986,Hybertsen2008,Thygesen2009}.
It was shown by Neaton \emph{et al.}\xspace\cite{Neaton2006} that for small molecules this effect can be well fitted by an image-potential of the form
$-\frac{1}{4z}$ (with $z$ the distance to the image plane).
These effects are important in many nanoscale molecular systems, as has been argued on both experimental\cite{Kubatkin2003,Hedegaard2005,Bruot2012} and theoretical grounds \cite{Neaton2006,Quek2007,Mowbray2008,Kaasbjerg2008,Hybertsen2008,Thygesen2009} and are crucial for understanding and designing future molecular devices.
Electrostatic relaxation upon changing the charge on a molecule is not appropriately accounted for in DFT calculations, and in particular, it is missed in DFT-based NEGF calculations commonly used in studying single-molecule charge transport.
\cite{Brandbyge2002,Stokbro2003a,Rocha2006,Evers2006,Verzijl2012} We note that there are two types of relaxation: the first is the relaxation of the
resident electrons on the molecule upon removing or adding an electron. These effects are responsible for the difference that is observed between
the HOMO (the highest occupied level found in a DFT calculation) and the ionization potential, and similarly for the AE and the LUMO.
This notion has led to applying the molecular shifts to the transport junction, one of the ingredients in the `scissors operator' approach.
\cite{Quek2007,Neaton2006,Thygesen2009} We have however seen that the image charges in the metal also shift the IP and AE, and these effects
vary with the distance from the molecule to the metal contacts. This distance dependence is accessible in experiments (see section~\ref{experiments}) and the focus of this paper.
Kaasbjerg and Flensberg\cite{Kaasbjerg2008} have addressed this effect and reported substantial gap reductions, and
even dramatic ones in the presence of a gate. GW calculations in principle address such polarization effects, but these require very heavy computer resources
even for small systems.
Here, we use classical electrostatic calculations to address polarization effects due to the contacts, based on charge distributions obtained
from DFT calculations in different charge states. We note that DFT is designed for and has proven to be reliable for calculating ground state properties, and these
are the only ones used in our calculations.
\begin{figure}
\includegraphics[width=\columnwidth]{mirrorplanes}
\caption{Point charges between parallel plates, leading to an infinite series of image charges (first set of images in green, second set (images of images) in blue, \emph{etc}). Note the $\delta q_i$ added to the charges $q_i$ of the $N^\text{th}$ charge state, in going to the $(N+1)^\text{st}$ charge state: these also induce a series of repeated images.}\label{fg:imagemodel}
\end{figure}
Following Kaasbjerg and Flensberg,\cite{Kaasbjerg2008} and Mowbray \emph{et al.}\xspace\cite{Mowbray2008}, we simplify the image-charge effects for the full spatial charge density by considering atomic point charges.
These are calculated from the charge states with $N-1$, $N$ (neutral) and $N+1$ electrons on the molecule. The atomic charges are denoted $q_j$, and are located at ${\bm r}_j$. The images of the atomic charges are denoted as $q^I_j$, and are located at ${\bm r}^I_j$; this position is found by (multiple) reflection with respect to the image planes.
\cite{Smith1989,Quek2007} When the total charge on the molecule changes, the atomic charges change by $\delta q_j$, inducing additional image charges $\delta q^I_j$ (see Fig.~\ref{fg:imagemodel}). The correction to a molecular level for a change in the charge state is then:
\begin{align}
\label{eq:JT}
\Delta = \sum_{i, j} \frac{\delta q_i q_j^I}{|{\bm r}_i-{\bm r}_{j}^I|}
+ \sum_{i,j} \frac{\delta q_i \delta q_j^I}{|{\bm r}_i-{\bm r}_{j}^I|} + U_\text{self}(\delta q_i)\;.
\end{align}
Eq.~\eqref{eq:JT} can be derived by considering the work needed to assemble the point-charge configuration.
The superscript $I$ implies a summation over the images.
The first term is linear in the $\delta q_i$ and it represents the interaction between this added charge and the images $q^I_j$ of the reference configuration: this term affects the (constant) level shift.
The second term is quadratic in the $\delta q_j$ and it contains the interaction between the added charges and their images $\delta q^I_j$, and it is responsible for the gap renormalization. The last term collects the effects not depending on the molecule-metal separation.
When there is only one image plane, and we neglect the self-interaction, the image-charge effect reduces to:
\[
\Delta_\text{ICE} = - \frac{2q\,\delta q + \delta q^2}{4z}\;,
\]
where we recognize the $1/(4z)$ potential shifts in the second term.
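To make the structure of this series concrete, the following minimal
Python sketch (ours; Gaussian units, a single added point charge
$\delta q$ at height $z_0$ between grounded planes at $z=0$ and $z=L$)
sums the quadratic image term over the repeated reflections of
Fig.~\ref{fg:imagemodel} and reproduces the single-plane limit
$-\delta q^2/(4z_0)$ when $L\gg z_0$:
\begin{verbatim}
def image_shift(dq, z0, L, n_max=2000):
    """Interaction of dq with its own images between two mirror planes."""
    U = 0.0
    for n in range(-n_max, n_max + 1):
        # image of charge -dq at z = 2nL - z0 (odd number of reflections)
        U += -dq * dq / abs(2 * n * L - 2 * z0)
        # image of charge +dq at z = 2nL + z0; n = 0 is the charge itself
        if n != 0:
            U += dq * dq / abs(2 * n * L)
    return 0.5 * U  # 1/2: energy of a charge in its own induced potential

print(image_shift(1.0, 1.0, L=1e6))  # ~ -0.25 = -1/(4 z0), single plane
print(image_shift(1.0, 1.0, L=4.0))  # a second plane strengthens the shift
\end{verbatim}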
We study the image charge effects based on atomic charges calculated for the molecule \emph{inside the junction} (due to their nature as spatial decompositions, we prefer Hirshfeld or Voronoi decompositions over the basis-set decomposition involved in Mulliken decompositions) and compare the results with those based on the gas phase, as was done in the literature \cite{Mowbray2008,Hybertsen1986,Quek2007}.
In this way, we obtain the image-charge corrections to the occupied and unoccupied levels respectively for varying molecule-electrode distance.
To obtain the atomic charge distributions for different charge states of the molecule inside the junction, we use constrained density functional theory (CDFT). In CDFT, the minimum of the energy functional is sought under the constraint that the integral $\int f({\bm r}) n({\bm r}) \; d^3 r$ has a pre-defined value. $f({\bm r})$ is a given function and $n({\bm r})$ is the electron density. We take $f({\bm r})$ to be 1 on the molecule, and 0 outside.
The constraint is enforced through a Lagrange parameter $V$, which translates into a term $Vf({\bm r})$ added to the potential. This extra potential, which is equivalent to a gate voltage (it is constant over the molecule), has been implemented in our transport code.
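Schematically, the Lagrange parameter $V$ can be found by a simple
bisection on the gate potential until the constrained charge reaches its
target value. The sketch below is our illustration only:
\texttt{electrons\_on\_molecule} is a hypothetical stand-in for a full
self-consistent DFT+NEGF run with the potential $Vf({\bm r})$ added, and
we assume the electron count on the molecule decreases monotonically
with $V$.
\begin{verbatim}
def find_gate(electrons_on_molecule, target, v_lo=-5.0, v_hi=5.0, tol=1e-4):
    """Bisect on the gate V until the molecular electron count = target."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if electrons_on_molecule(v_mid) > target:
            v_lo = v_mid   # still too many electrons: raise the gate
        else:
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

# toy stand-in, for demonstration only: N(V) = 42 - 0.5 V
print(find_gate(lambda V: 42.0 - 0.5 * V, target=41.0))   # -> 2.0
\end{verbatim}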
In section~\ref{Sec:BDA}, we shall apply our method to a standard molecule and compare our results with those in the literature.
In section \ref{Sec:ZnTPP} we then apply our method for Zn-porphyrins, for which Perrin \emph{et al.}\xspace performed single-molecule experiments, and argue that the extra physics captured in
our approach is essential for understanding the transport experiments.
\section{Image charge effects for benzenediamine (BDA)} \label{Sec:BDA}
To analyze the transport through the molecule, we perform DFT-NEGF calculations for the Au-BDA-Au fragment attached to an FCC (111) surface (Fig.~\ref{fg:3BDAgeometries}). We consider a junction of type (I,I) according to the Quek \emph{et al.}\xspace classification \cite{Quek2007}. For our calculations we use a TZP-basis of numerical atomic orbitals on the molecule, a DZ-basis of numerical atomic orbitals on the metal atoms and the GGA PBE functional in our implementation of NEGF-based transport in the ADF/Band quantum chemistry package \cite{Velde1991,Wiesenekker1991,Verzijl2012}. See supplemental material at [URL will be inserted by AIP] for further details concerning the calculations. \footnote{See supplemental material at [URL will be inserted by AIP] for: Computational details in Section I; Explanation of where the gate field is applied in Section II; Comparison of the results obtained using LDA and GGA in Section III; Figure S1 shows the ZnTPPdT orbitals in gas phase; Figure S2 shows the regions where we applied the gate field used to determine the weakest coupling in the junction; Figure S3 shows the levels shift obtained using LDA and GGA. Table I shows the Hirshfeld charge distribution for the BDA molecule.}
\begin{figure}
\subfloat[BDA Gas-Phase Geometry]{\includegraphics[width=.475\columnwidth]{BDA-fragment}\label{BDA-gas-phase}}\hfill
\subfloat[Au-BDA Fragment Geometry]{\includegraphics[width=.525\columnwidth]{BDA-fragment-au}\label{BDA-fragment}}\\
\subfloat[Au-BDA Junction Geometry]{\includegraphics[width=.85\columnwidth]{BDA-binding1} \label{BDA-binding}}
\caption{Geometries of BDA in (a) gas phase and (b) as a fragment. (c) (I,I) junction geometry. Metal ions are pink-grey, the blue-gray atoms are the substrate atoms coupled to the protruding gold atom. Left and right Au atoms show placement relative to a (111) surface.} \label{fg:3BDAgeometries}
\end{figure}
\begin{figure}
\subfloat[LUMO+1]{ \includegraphics[width=.5\columnwidth]{BDA-LUMO+1} }
\subfloat[LUMO]{ \includegraphics[width=.5\columnwidth]{BDA-LUMO} }\\
\subfloat[HOMO]{ \includegraphics[width=.5\columnwidth]{BDA-HOMO} }
\subfloat[HOMO-1]{ \includegraphics[width=.5\columnwidth]{BDA-HOMO-1} }
\caption{Orbitals of BDA molecule in gas-phase ordered by decreasing energy.}\label{fg:BDA-gas}
\end{figure}
\begin{figure}
\subfloat[LUMO+1]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-LUMO+1} }
\subfloat[LUMO]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-LUMO} \label{fg:BDA-frag-LUMO}}\\
\subfloat[Interface state A]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-hybrid1}\label{fg:BDA-frag-gapA} }
\subfloat[Interface state B]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-hybrid2}\label{fg:BDA-frag-gapB} }\\
\subfloat[HOMO]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-HOMO} \label{fg:BDA-frag-HOMO}}
\subfloat[HOMO-1]{ \includegraphics[width=.4\columnwidth]{BDA-fragment-HOMO-1} }
\caption{Au-BDA-Au Fragment orbitals labeled by their correspondence with the BDA gas phase orbitals (see Fig.~\ref{fg:BDA-gas}) and ordered by decreasing energy.}\label{fg:BDA-frag}
\end{figure}
For the molecule in the junction, we relax the geometry and we find the minimum energy configuration. Then, we stretch the junction separating the contacts with the molecule's conformation unchanged.
\begin{figure}
\subfloat[Molecule close to the contacts (stronger coupling).]{ \includegraphics[width=.65\columnwidth]{charges-vs-gate_GGA} \label{fg:Molecule-close}}\\
\subfloat[Molecule far from the contacts (weaker coupling).]{ \includegraphics[width=.65\columnwidth]{charges-vs-gate_GGA-d=6} }
\caption{Spin-resolved occupation as a function of the applied gate when (a) the molecule is close to the contacts and (b) the molecule is far from the contacts.}\label{fg:spin-resolv}
\end{figure}
The spin-resolved occupation (see Fig. \ref{fg:spin-resolv}) indicates how the filling of the individual levels changes upon varying the gate. We have calculated the spin-resolved occupation for two cases: one where the molecule is close to the contacts (we consider the energy minimum for this, see Fig. \ref{BDA-binding}) and one where the molecule is far away.
We often observe spin polarization, unless the occupation happens to be an even integer. This polarization is also absent in the strong-coupling limit for charges between 0 and 1.
We expect the presence of polarization to be related to the weak coupling condition $\Gamma < U$, where $U$ is the Coulomb repulsion for electrons at the relevant level. Polarization is not expected when $\Gamma > U$. This appears to be the case in the short distance configuration (strong coupling limit) when the charge is between 0 and 1.
We emphasize that this polarization is not physically correct as the system has unpolarized leads -- hence the chemical potentials for both spin directions are identical. However, it has been pointed out by several researchers that the spin-polarized states found in DFT calculations can give us valuable information about the levels and their occupation \cite{FLiu2013,Bergfield2012,ZFLiu2015,Burke2012}.
For the weak-coupling case, we see `plateaus' occurring in the levels corresponding to fixed occupation, demonstrating that only one type of spin is added to or removed from the system when changing the gate. These plateaus are sometimes interrupted by unpolarized points; we assign these to anomalies in
the self-consistency cycle.
In the stronger-coupling case, we also see plateaus, although they are less flat and, importantly, they do not correspond to integer occupation, but lie slightly above it. Apparently there is some `extra' charge on the molecule in these cases -- however, the deviations may also be related to the (Hirshfeld) calculation of the atomic charges. We conclude from figure \ref{fg:Molecule-close} that there is a constant background charge on the molecule, corresponding to $+0.05e$ per spin (see dashed line). The fact that the two easily identifiable plateaus (red curve around $-10~eV$ and green curve around $+5~eV$) are separated by (very nearly) 1e per spin indicates how charges should be added or removed from the reference state.
The reference charge of the molecule in the junction, for the relaxed geometry, is $+0.274e$. This is due to \emph{both} spin directions -- therefore we have a charge of $+0.137e$ per spin. In this state, the molecule has already some extra charge due to partial charge transfer across the interface \cite{Thygesen2009}. This is a charge excess of $+0.087e$ with respect to the background charge $+0.05e$. In order to remove one electron from the molecule, we therefore need to add $+0.913e$ and to put an extra electron corresponds to $-1.087e$ (see Fig.~\ref{fg:gated_charges_BDA}).
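Explicitly, the bookkeeping behind these numbers reads, per spin channel,
\begin{equation*}
q_\text{ref}=\frac{0.274e}{2}=0.137e,\qquad
0.137e-0.05e=0.087e,
\end{equation*}
\begin{equation*}
\delta q_{N\to N-1}=+1e-0.087e=+0.913e,\qquad
\delta q_{N\to N+1}=-1e-0.087e=-1.087e.
\end{equation*}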
\begin{figure}
\includegraphics[width=.8\columnwidth]{gating-BDA}
\caption{Hirshfeld projected charges for the three gated transport levels (the reference state and $\approx\pm e$ charged states), showing the difference in charging the molecule, amine groups and molecule-without-amine as the gate field is varied.} \label{fg:gated_charges_BDA}
\end{figure}
Fig.~\ref{fg:BDA-peak-composition} shows the compositions of the peaks in the transmission through the Au-BDA-Au junction near $\epsilon_{f}$. For these, we project the eigenstates of the transport calculation onto the orbitals of the Au-BDA-Au fragment \cite{Verzijl2012}.
The HOMO projection is composed of many such orbitals as a result of the hybridization with Au, in contrast to the LUMO, the LUMO$+1$ and the HOMO$-1$ states. The HOMO and LUMO$+1$ are more dominant in charge transport than the LUMO and HOMO$-1$ states which contribute weakly due to their strong localization at the center of the molecule. The interface states ${A}$ and ${B}$ do not show up as peaks in the transmission due their very low density at the center of the molecule -- see Fig. \ref{fg:BDA-frag-gapA} and Fig \ref{fg:BDA-frag-gapB}.
\begin{figure}
\includegraphics[width=\columnwidth]{BDA-decomposition}
\caption{Peak decomposition in terms of fragment orbital levels (grey shaded curve is the transmission). Composition of peaks in transport, constructed by projection onto fragment molecular orbitals. A state with value 1 is a decoupled, completely un-hybridized state (\emph{e.g.}\xspace the HOMO-1), while the HOMO is strongly hybridized, with the rest originating from Au.}\label{fg:BDA-peak-composition}
\end{figure}
\begin{figure}
\subfloat[Geometry for Image-Charge Shifts]
{
{\includegraphics[width=.5\columnwidth]{contacts}\label{fg:shiftsgeomBDA}}
}\\
\subfloat[Transport Gap Renormalization]{
{\includegraphics[width=0.9\columnwidth]{Us_vs_Mowbraygas_BDA}\label{fg:shiftscalcBDA}}
}
\caption{(a) Geometry used in the image-charge model. (b) Comparison of results for the total image-charge corrections (with uncertainties), using charges from gas-phase calculations of BDA and from the molecular junction, as a function of the distance between the contacts.
}\label{fg:contactsBDA}
\end{figure}
We now consider the results for the calculation of the image charge effect.
In Fig.~\ref{fg:shiftsgeomBDA} we show the geometry used for our image-charge calculation and in Fig.~\ref{fg:shiftscalcBDA} we show the resulting shifts of the occupied and unoccupied levels as function of the distance between the two contacts. The uncertainty bands are calculated based on a $\pm0.25\ \AA$ uncertainty in the position of the image planes.
In Fig.~\ref{fg:shiftscalcBDA}, we also compare the results obtained by our method under different assumptions. The dashed line is calculated using the gas-phase charge distribution, with zero charge on each atom in the reference state and omitting the atomic charges associated with the EA, following the assumptions of Mowbray \emph{et al.}\xspace
Using \emph{different} charges for the calculation of the image-charge effect of the occupied level (blue line), the symmetry between the shifts of the occupied and unoccupied levels is no longer maintained, although the curves remain close. Using the charge distribution of the neutral molecule as reference state (red line), these differences increase slightly. Finally, using the junction charge distribution (green line) results in a substantial difference. This shows that by using charges obtained with the junction geometry (from a NEGF+DFT calculation) for the image-charge calculation, we are including features that are absent when the gas phase charges are used.
\section{Au-ZnTPPdT Molecular Devices} \label{Sec:ZnTPP}
We now proceed to a more complicated application of the method, which allows for comparison with a recent experiment that revealed the image-charge effects on both occupied and unoccupied molecular levels.
\subsection{Experimental Results}\label{experiments}
We consider the experimental findings of Perrin \emph{et al.}\xspace\cite{Perrin2011a,Perrin2011b,Perrin2013} who studied thiol-terminated zinc-porphyrin molecules [Zn(5,15-di(p-thiolphenyl)-10,20-di(p-tolyl)porphyrin)], abbreviated as ZnTPPdT.
In the experiments, the current was recorded as a function of gate and bias voltage, and of the electrode separation.
Peaks in the differential conductance were identified as transport resonances.
These resonances show a marked ``mechanical gating'' effect, where a level shift is induced by a change in the metal-molecule distance (for both the occupied and unoccupied levels of the molecule). The efficiency of the effect can be expressed by a mechanical gate coupling (MGC) defined as
\begin{equation}
\epsilon_F=\frac{d V_b}{dx},
\end{equation}
where $V_b$ is the bias voltage and $x$ the electrode separation.
We show experimental data for these shifts in Fig.~\ref{fg:experiment}, where the measurements show a distance-dependent energy for the lowest resonance. A linear fit of the resonance positions was used to find the MGC.
The broadness of the distribution is presumably due to the fact that ZnTPPdT is not a rod-like molecule; it can form molecular junctions with various geometries, as has been reported previously for similar molecules.\cite{Perrin2011b}
\begin{figure}
\includegraphics[width=.8\columnwidth]{Fig3TH}\label{fg:exp_measurement}
\caption{Representative measurement,\cite{Perrin2013} showing HOMO-like and LUMO-like observed MGC's. Note the dilation of the y-axis in the case of LUMO-like resonances.
}
\label{fg:experiment}
\end{figure}
The MGC values are in the range of $0.2-1$ V/nm.
Combined with a typical range of $0.5$ nm over which the junctions formed
are stable, this implies level shifts of roughly $50-250$ meV,
if we assume that the bias voltage drops symmetrically. Average MGC values of $0.40$ V/nm for occupied and $0.18$ V/nm for unoccupied levels were found.
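Spelled out, the $50-250$ meV estimate is
\begin{equation*}
\Delta E=\frac{e}{2}\,\frac{dV_b}{dx}\,\Delta x,\qquad
\tfrac{1}{2}\times(0.2\text{--}1~\mathrm{V/nm})\times 0.5~\mathrm{nm}
=50\text{--}250~\mathrm{meV},
\end{equation*}
where the factor $1/2$ accounts for the assumed symmetric bias drop.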
\subsection{Calculations}\label{calculations}
We will now show that our approach yields trends matching the experiment, and explains the asymmetry in the shifts found between occupied and unoccupied levels.
\begin{figure}
\subfloat[Interface State A]{ \includegraphics[width=.4\columnwidth]{ZnTPP-gap1C} }
\subfloat[Interface State B]{ \includegraphics[width=.4\columnwidth]{ZnTPP-gap2A} }\\
\caption{Typical interface levels which form on hybridizing with Au: 6 total between the analogue of the gas phase HOMO and LUMO.}\label{fg:ZnTPP-frag}
\end{figure}
We focus on the frontier orbitals (HOMO and LUMO), which are generally the most relevant for transport. We find the HOMO-LUMO gap to be $1.8$ eV in our LDA and GGA calculations and $2.7$ eV using the B3LYP functional, consistent with the reports of Park \emph{et al.}\xspace\cite{Park2008}, and in general agreement with their redox measurements of roughly $2.2$ eV.
Our Au-ZnTPPdT binding geometry is based on a phenyl ring bonded to an FCC (111) gold surface via a thiolate bond, in a hollow-site configuration.\cite{Nara2004,Andrews2006,Kondo2006,Pontes2011}
In the calculations, the binding is characterized by chemisorption, with significant charge transfer to the thiols, which act as acceptors. This is in agreement with the literature on such bindings. \cite{Xue2003a,Xue2003b,Love2005,Hoft2006,Romaner2006}
All calculations were performed using a TZP-basis of numerical atomic orbitals on the molecule, using the LDA functional, with the thiols located 2.59\,\AA\xspace from the electrodes.
In figure~\ref{fg:ZnTPP-frag}, we show two interface orbitals of the ZnTPP-fragment, which contains two extra gold atoms. There are six such states, in addition to the direct counterparts of the LUMO, HOMO and HOMO-1 of the gas phase (see supplemental material at [URL will be inserted by AIP] for the ZnTPPdT orbitals in gas phase). Of these six, two pairs relate to the HOMO, and one to the LUMO. The orbital levels in these fragment pairs appear to be of a bonding/anti-bonding character, with splittings on the order of $0.1$~eV.
\begin{figure}
\subfloat[Peaks Decomposition with Molecular Orbital Levels (grey shaded curve is transmission)]{
\includegraphics[width=0.95\columnwidth]{ZnTPP_peaks}\label{fg:decompositions}
}\\
\subfloat[Peaks Decomposition with Interface Levels]{
\includegraphics[width=0.95\columnwidth]{ZnTPP_peaks2}\label{fg:gaplevels}
}
\caption{(a) Composition of peaks in transport, constructed by projection onto fragment molecular orbitals. A state with value 1 is a decoupled state, and completely un-hybridized (\emph{e.g.}\xspace HOMO-4 through HOMO-7), while the HOMO-1, HOMO-2 and HOMO are strongly hybridized with each other \emph{and} the Au electrodes (reflected in their 30-50\% representation in the junction levels, with the rest originating from Au). The LUMO and LUMO+1 peaks are likewise strongly mixed with each other, coupling much less to the Au, reflected in the much narrower transport peaks near $1.7$ eV. (b) As in (a), but for the interface levels rather than the molecular orbitals shown in (a). }\label{peak-compositions}
\end{figure}
\begin{figure}
\includegraphics[width=.8\columnwidth]{gating-Porphirin}
\caption{Partial charges for the three gated transport levels (the reference state and gated such that the net charge is $\approx\pm e$), showing the difference in charging the molecule, thiols and molecule-without-thiols as the gating is varied. At zero-field, the molecule is roughly neutral, with negative thiols and a positive core.} \label{fg:gated_charges}
\end{figure}
Fig.~\ref{peak-compositions} shows the transmission of a typical transport calculation for the MCBJ geometry.
We observe a cluster of HOMO-like peaks near $\epsilon_f$ (defined as $0$ eV), some small peaks inside the gap near $0.4$ eV, and the nearly-degenerate LUMO and LUMO+1 around $1.7$ eV. Fig.~\ref{fg:decompositions}, shows the decomposition of the transmission into fragment orbitals directly corresponding to molecular orbitals, and in Fig.~\ref{fg:gaplevels} for the interface orbitals.
The peaks right below the Fermi level derive mostly from the HOMO,\footnote{Identified by analyzing the orbital symmetries of the wavefunctions of these levels.} with significant amounts of interface levels mixed in.
Fig.~\ref{fg:gaplevels} shows the role of the 6 interface levels labeled L$_\text{A,B}$, H$^1_\text{A,B}$ and H$^2_\text{A,B}$, derived from hybridization of HOMO and LUMO with the gold.
For ZnTPPdT, the level splitting between the interface states is extremely small. This means that there is no unique state to be filled; this precludes spin polarization, and plateaus like those in Fig.~\ref{fg:spin-resolv} are absent. On the other hand, the total charge in the reference state is only $0.05e$, distributed over the two spin directions, and this will therefore contribute only very slightly to the difference between the curves for occupied and unoccupied levels. We therefore simply add or subtract one unit charge in order to find the reduced and oxidized states.
We have applied our method for calculating image-charge effects to this junction. In the reference state the net charge is $-0.05e$ with a strongly negative charge ($-0.34e$) on the thiols, and $+0.29e$ on the rest of the molecule, mainly on the Zn ion. Figure~\ref{fg:NNpm1-charges} shows the
difference in charge for the ionized ($N-1$) states with respect to the reference state. This difference
resides mostly on the arms, substantially increasing the image-charge effect due to the proximity of the extra charge to the contacts.
\begin{figure}
\subfloat[]{\includegraphics[width=.6\columnwidth]{gas_charge}}\\
\subfloat[]{\includegraphics[width=\columnwidth]{gate_charge}}
\caption{Difference in charge distribution in the $N+1$ relative to the $N$ electron charge states. Red indicates the increase of negative charge when adding an electron; blue the decrease. Differences for (a) gas phase DFT calculations (LUMO like difference) and (b) for gated DFT+NEGF transport calculations (recalling the interface levels of Fig.~\ref{fg:ZnTPP-frag}).} \label{fg:NNpm1-charges}
\end{figure}
The fact that, in the reference state, the charge on the thiols is approximately opposite to the
charge on the rest of the molecule is responsible for a significant difference in slope between the occupied and
unoccupied levels.
\begin{figure*}
\subfloat[Geometry for Image-Charge Shifts]{
\raisebox{0.8cm}{\includegraphics[width=.9\columnwidth]{Simplified-Images}\label{fg:shiftsgeom}}
}
\subfloat[Transport Gap Renormalization]{
\includegraphics[width=.8\columnwidth]{SingleGapPlot3_eV}\label{fg:shiftscalc}
}
\caption{
(a) Geometry used in the image-charge model, and (b) shifts predicted by the model (with uncertainties) showing the occupied- and unoccupied-levels both shifting towards $\epsilon_F$ with MGC's (the derivative with distance) in the range of $0.2-1.4$ eV/nm, expressed in the symmetrically applied bias.
}\label{fg:Shifts}
\end{figure*}
The calculated level shifts as a function of distance are plotted in Fig.~\ref{fg:shiftscalc}.
Our calculations predict MGC's in the range of
$1.1-2.8$ eV/nm for an occupied level and
$0.4-2.1$ eV/nm for an unoccupied level
(in opposite directions), depending on the electrode separation (see Fig.~\ref{fg:shiftscalc}).
The two slopes indeed differ significantly, confirming the experimental findings.
To obtain this difference, a detailed calculation of the molecule inside the junction is essential. Using gas-phase orbitals, the wrong orbital
(LUMO) would have been chosen as the unoccupied transport level, and the substantial contribution of the charge located at the arms of the hybridized
HOMO would have been missed.
Our calculations reveal that the background image-charge effect contributes significantly to the MGC and explains the distance-dependent renormalization of the position of the molecular orbital levels with respect to the Fermi level of the electrodes.
Taking the reference state to be the gas phase neutral state suppresses the asymmetry between the shifts for occupied and unoccupied levels.
This supports our conclusion that for the measurements of Fig.~\ref{fg:experiment} an interface-stabilized level of the fragment has lost some charge, as is suggested by the peak above the Fermi level in our transport calculations, and that this level is being addressed in electron transport through the unoccupied state.
\section{Conclusions}
In summary, we have presented a method for calculating the image-charge effects which change the alignment of the occupied and unoccupied levels in molecular devices with the Fermi levels of the electrodes. Our approach is based on the charge distribution of the molecule in the junction in different charge states. It is essential to use these rather than their gas phase equivalents for two reasons. First, the relevant charge states may have a different character in gas phase molecules and molecules in a junction, due to the formation of ``interface levels'' in the latter. These are stabilized by the metal-molecule interface, and have no counterpart in the gas phase. Second, unlike in the gas phase, the reference state in the junction (at zero bias and gate) can carry a net charge, which implies a significant contribution to the reduction of the metal work function upon chemisorption of a molecule.
We have applied our method to a standard benzenediamine molecule and found results in good agreement with those obtained using the model of Mowbray \emph{et al.}\xspace Our results nevertheless differ, mainly
due to the nonzero charge in the reference state, and because we also address interface states that differ essentially from gas-phase orbitals.
Perrin \emph{et al.}\xspace's\cite{Perrin2013} experiments on Au-ZnTPPdT reveal distance-dependent level shifts which are in agreement with our calculations. In this experiment, the fact that the reference state is non-neutral causes the MGC for occupied and unoccupied levels to be quite different.
Our model agrees with the experimentally determined shifts within a factor of two.
Our approach demonstrates that for addressing image-charge effects within DFT, considering molecules in the junction is essential.
\begin{acknowledgements}
The authors thank the financial support by the Dutch Foundation for Fundamental Research on Matter (FOM), the EU FP7 program under the ``ELFOS'' grant agreement and a grant by the Netherlands' National Computing Facilities Foundation, financed by The Netherlands Organization for Scientific Research (NWO). We also thank C.A. Martin, J.S. Seldenthuis, F. Grozema, R. Eelkema and J. van Ruitenbeek for fruitful discussions.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Supersymmetric localization leads to a dramatic simplification of the calculation of
sphere partition functions (and some other observables) by reducing the infinite
dimensional path integral to a finite dimensional matrix model \cite{Pestun2012,Kapustin2010}.
This matrix model can
then be solved (sometimes) by a variety of old and new techniques to yield exact results.
A particular application of this is to check dualities --- two theories which are equivalent
(or flow to the same IR fixed point) should have the same partition function.
In practice, it is often very hard to solve the matrix models exactly, so dualities
are checked by comparing the matrix models of the two theories and using integral
identities to relate them. The first beautiful realization of this is in the
Alday-Gaiotto-Tachikawa (AGT) correspondence, where the matrix models evaluating
the partition functions of 4d ${\mathcal N}=2$ theories were shown to be essentially identical to
correlation functions of Liouville theory as expressed via the conformal bootstrap in
a specific channel. $S$-duality in 4d was then related to the associativity of the OPE
in Liouville, which is manifested by complicated integral identities for the fusion and
braiding matrices \cite{Alday:2009aq, Dorn:1994xn, Zamolodchikov:1995aa,
Ponsot:1999uf, Teschner:2001rv,Nekrasov:2002qd}.
Here we study 3d supersymmetric theories, which have several types of dualities, of which we
will consider mirror symmetry and its $SL(2,\mathbb{Z})$ extension
\cite{Intriligator1996, Hanany1997, Boer1997a, Boer1997, Witten2003, Assel2014}.
Indeed one may use
integral identities (in the simplest case just the Fourier transform of the
sech function) \cite{Kapustin2010a} to show that the matrix models for certain mirror pairs are equivalent.
But is there a way to simplify the calculation such that we can rely on a known
duality of a model equivalent to the matrix model to get the answer without
any work, as in the case of AGT?
Indeed for necklace quiver theories with at least ${\mathcal N}=3$ supersymmetry (and one copy of
each bifundamental field) there is a simple realization of the matrix model in terms of
a gas of non-interacting fermions in 1d with a complicated
Hamiltonian \cite{Marino2012}.
The purpose of this note is to point out that the Hamiltonians of
pairs of ${\mathcal N}=4$
mirror theories are related
by a linear canonical transformation.%
\footnote{In the specific case of ABJM theory, this was in fact already noted in \cite{Marino2012},
but here we prove it more generally.}
Furthermore we show that the transformations
between three known mirror theories close to $SL(2,\mathbb{Z})$, which is
natural to identify with the $S$-duality group of type IIB, where the three theories
have Hanany-Witten brane realizations.%
\footnote{We should mention of course also the 3d-3d relation \cite{Dimofte:2011ju},
which is closer in spirit to AGT and realizes mirror symmetry by
geometrical surgery.}
In order to demonstrate this we generalize the Fermi-gas formalism of
Mari\~no and Putrov to theories with nonzero Fayet-Iliopoulos (FI) parameters as
well as mass terms for the bi-fundamental fields. This is presented in
Section~\ref{sec:fermi} where we focus for simplicity on a two-node circular quiver.
In section~\ref{sec:mirror} we then present the action of mirror symmetry on the
density operator of the Fermi-gas (the exponential of the Hamiltonian). We also
outline the generalization to arbitrary circular quivers.
The generalization of this formalism to $D$-quivers and theories with symplectic
gauge group will be presented in \cite{ADF}.
In the appendix we proceed to evaluate the partition function of the two-node quiver
(and its mirrors). This was done for the theory without FI terms and bifundamental
masses in \cite{Marino2012}, and we here verify that the calculation can be carried through
also with these parameters turned on. The resulting expressions are not modified
much and one can still express them in terms of an Airy function.
\section{Fermi-gas formalism with masses and FI-terms}
\label{sec:fermi}
In this section we review the Fermi-gas formulation \cite{Marino2012}
of the matrix model of 3d supersymmetric field theories
and generalize it to a particular $\mathcal{N}=4$
theory that includes all of the ingredients we will require for our study of mirror symmetry in the
following section. This is a two node
quiver gauge theory with gauge group $U(N) \times U(N)$. Each node has a Chern--Simons (CS) term with
levels $k$ and $-k$. There is a single matter hypermultiplet transforming
in the fundamental representation of each $U(N)$ factor, and two matter hypermultiplets transforming in
the bifundamental and anti-bifundamental representations of $U(N) \times U(N)$.
The bifundamental fields have masses
$m_1$ and $m_2$ and each node has a Fayet-Iliopoulos term with parameters
$\zeta_1$ and $\zeta_2$.
The matrix model for this theory is computed via localisation \cite{Kapustin2010}. The result can be
easily derived by applying the rules presented for instance in
\cite{Kapustin2010a,Gulotta2012,Yaakov2010}:
\begin{equation}\begin{aligned}
\label{matrixmodel}
Z(N)= \frac{1}{(N!)^2}\int d^N \lambda^{(1)}d^N \lambda^{(2)}
&\frac{\prod_{i<j} 4\sinh^2 \pi(\lambda^{(1)}_i - \lambda^{(1)}_j)\,
4\sinh^2 \pi(\lambda^{(2)}_i - \lambda^{(2)}_j)}
{\prod_{i, j} 2\cosh\pi(\lambda^{(1)}_i - \lambda^{(2)}_j + m_1)
\,2\cosh\pi(\lambda^{(2)}_i - \lambda_j^{(1)}+ m_2)}
\\&\hskip1in
{}\times\prod_{i=1}^N
\frac{ e^{2 \pi i \zeta_1 \lambda^{(1)}_i+\pi i k (\lambda^{(1) }_i)^2}
e^{2 \pi i \zeta_2 \lambda^{(2)}_i-\pi i k (\lambda^{(2) }_i)^2}}
{2\cosh \pi{\lambda^{(1)}_i} \, 2\cosh \pi{\lambda^{(2)}_i}}
\,.
\end{aligned}\end{equation}
The crucial step in rewriting this expression as a Fermi-gas partition function
is the use of the Cauchy determinant identity
\begin{equation}
\frac{\prod_{i<j}(x_i -x_j)(y_i - y_j)}
{\prod_{i, j}(x_i - y_j)}
= \sum_{\sigma \in S_N}(-1)^{\sigma}\prod_{i=1}^N
\frac{1}{(x_i - y_{\sigma(i)})}\,.
\end{equation}
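Identities of this type are easily checked numerically. The short Python
sketch below (ours) verifies the hyperbolic variant used in passing to
\eqn{Zisrho}, exploiting the fact that the permutation sum on the
right-hand side is the determinant of the matrix with entries
$1/\left[2\cosh\pi(x_i-y_j)\right]$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4
x, y = rng.standard_normal(N), rng.standard_normal(N)

num = 1.0
for i in range(N):
    for j in range(i + 1, N):
        num *= 4 * np.sinh(np.pi * (x[i] - x[j])) \
                 * np.sinh(np.pi * (y[i] - y[j]))
den = np.prod(2 * np.cosh(np.pi * (x[:, None] - y[None, :])))

rhs = np.linalg.det(1.0 / (2 * np.cosh(np.pi * (x[:, None] - y[None, :]))))
assert np.isclose(num / den, rhs, rtol=1e-6)
\end{verbatim}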
Applying this to \eqn{matrixmodel} we may write the partition function as
\begin{equation}\begin{aligned}
Z(N) = &\frac{1}{(N!)^2}\int d^N \lambda^{(1)}d^N \lambda^{(2)}
\sum_{\sigma_1 \in S_N}(-1)^{\sigma_1}
\prod_{i=1}^N \frac{1}{2\cosh \pi (\lambda^{(1)}_i - \lambda^{(2)}_{\sigma_1(i)}+ m_1)}
\\ &\times \sum_{\sigma_2 \in S_N}(-1)^{\sigma_2}
\prod_{i=1}^N \frac{1}{2\cosh \pi (\lambda^{(2)}_i - \lambda^{(1)}_{\sigma_2(i)}+ m_2)}
\prod_{i=1}^N \frac{ e^{2 \pi i \zeta_1 \lambda^{(1)}_i+\pi ik (\lambda^{(1) }_i)^2}
e^{2 \pi i \zeta_2 \lambda^{(2)}_i - \pi ik(\lambda^{(2) }_i)^2}}
{2\cosh \pi{\lambda^{(1)}_i}2\cosh \pi{\lambda^{(2)}_i}}\,.
\end{aligned}\end{equation}
A relabelling of eigenvalues
$\lambda^{(2)}_i \rightarrow \lambda^{(2)}_{\sigma^{\smash{-1}}_1 (i)}$
allows us to resolve one of the sums over permutations, pulling out an overall factor of $N!$, giving
\begin{equation}\begin{aligned}
\label{Zisrho}
Z(N) &= \frac{1}{N!}\sum_{\sigma \in S_N}(-1)^\sigma \int d^N \lambda^{(1)}_i \, d^N \lambda^{(2)}_i
\prod_{i=1}^N
\frac{e^{2 \pi i \zeta_1 \lambda^{(1)}_i}e^{ i \pi k (\lambda^{(1)}_i)^2}}{2\cosh \pi{\lambda^{(1)}_i}}
\frac{1}{2\cosh\pi(\lambda^{(1)}_i - \lambda^{(2)}_i + m_1)}
\\&\hskip2in{}\times
\frac{e^{2\pi i \zeta_2 \lambda^{(2)}_i}e^{-i \pi k (\lambda^{(2)}_i)^2}}{2\cosh \pi{\lambda^{(2)}_i}}
\frac{1}{2\cosh\pi(\lambda^{(2)}_i - \lambda^{(1)}_{\sigma(i)}+ m_2)}
\\&
= \frac{1}{N!}\sum_{\sigma \in S_N}(-1)^\sigma
\int d^N \lambda^{(1)}_i \, K\big(\lambda^{(1)}_i, \lambda^{(1)}_{\sigma(i)}\big).
\end{aligned}\end{equation}
Here we expressed the interaction between the eigenvalues $\lambda^{(1)}_i$ in terms
of the kernel $K$, which can be considered the matrix
element of the density operator $\hat K$ defined by
\begin{equation}
\label{K}
K(q_1, q_2) = \bra{q_1}\hat K \ket{q_2},
\qquad
\hat K = \frac{e^{2 \pi i \zeta_1 \hat q+\pi i k \hat q^2}}{2\cosh \pi \hat q}\,
\frac{e^{2 \pi i m_1 \hat p}}{2\cosh \pi \hat p}\,
\frac{e^{2 \pi i \zeta_2 \hat q- \pi ik \hat q^2}}{2\cosh \pi \hat q}\,
\frac{e^{2 \pi i m_2 \hat p}}{2\cosh \pi \hat p}
\,,
\end{equation}
where $\hat p$ and $\hat q$ are canonical conjugate variables
$[\hat q,\hat p]=i\hbar$ with $\hbar=1/2\pi$
and we have made use of the elementary identities
\begin{align}
f(\hat q) \ket{q}&= f(q) \ket{q} \label{id1}
\\
e^{-2 \pi i m \hat p} f(\hat q) e^{2 \pi i m\hat p}&= f(\hat q - m) \label{id2}
\\
\bra{q_1}\frac{1}{\cosh \pi\hat p}\ket{q_2}&= \frac{1}{\cosh \pi(q_1 - q_2)}\,. \label{id4}
\end{align}
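The last identity simply states that, with $\hbar=1/2\pi$, the sech
kernel is its own Fourier transform,
$\int dp\, e^{2\pi i p(q_1-q_2)}/\cosh\pi p = 1/\cosh\pi(q_1-q_2)$.
A quick numerical check (ours) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

d = 0.37   # an arbitrary value of q1 - q2
# the imaginary part vanishes by symmetry, so integrate the cosine only
val, _ = quad(lambda p: np.cos(2 * np.pi * p * d) / np.cosh(np.pi * p),
              -np.inf, np.inf)
assert np.isclose(val, 1.0 / np.cosh(np.pi * d), atol=1e-9)
\end{verbatim}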
To study the system in a semiclassical expansion it is useful to represent the operators in
Wigner's phase space, where the Wigner transform of an operator $\hat A$ is defined as
\begin{equation}
\label{wigtrans}
A_W(q,p) = \int d q^\prime
\Bra{q - \frac{q^\prime}{2}}\hat A \Ket{q + \frac{q^\prime}{2}}e^{i p q^\prime / \hbar}\,.
\end{equation}
Some important properties are
\begin{equation}
\label{wigident}
(\hat A \hat B)_W = A_W \star B_W, \qquad \star
= \exp \left[ \frac{i \hbar}{2}\left(\overleftarrow{\partial_q}\overrightarrow{\partial_p}
- \overleftarrow{\partial_p}\overrightarrow{\partial_q}\right) \right],
\qquad
\Tr(\hat A) = \int \frac{d q d p}{2 \pi \hbar}A_W \,.
\end{equation}
For a detailed
discussion of the phase space approach to Fermi-gasses see \cite{Marino2012} and the
original paper \cite{Grammaticos1979}. For a more general review of Wigner's phase space
and also many original papers see \cite{Zachos2005}.
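As a small concrete check of \eqn{wigident} (our illustration), one can
expand the star product to finite order in $\hbar$ with sympy -- exact
for polynomial symbols -- and recover the Weyl symbol of
$\hat q\hat p$, namely $qp+i\hbar/2$:
\begin{verbatim}
import sympy as sp

q, p, hbar = sp.symbols('q p hbar')

def d(expr, nq, npp):
    """Apply d^nq/dq^nq and then d^npp/dp^npp to expr."""
    for _ in range(nq):
        expr = sp.diff(expr, q)
    for _ in range(npp):
        expr = sp.diff(expr, p)
    return expr

def star(A, B, order=2):
    """Moyal product A * B, truncated at the given order in hbar."""
    total = 0
    for n in range(order + 1):
        term = sum(sp.binomial(n, k) * (-1)**k
                   * d(A, n - k, k) * d(B, k, n - k)
                   for k in range(n + 1))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

print(star(q, p))   # p*q + I*hbar/2, the Weyl symbol of q-hat p-hat
print(star(p, q))   # p*q - I*hbar/2
\end{verbatim}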
In the language of phase space the kernel $\hat K$ \eqn{K} becomes
\begin{equation}
\label{KW}
K_W
= \frac{e^{2 \pi i \zeta_1 q+\pi i k q^2}}{2\cosh \pi q}
\star\frac{e^{2 \pi i m_1 p}}{2\cosh \pi p}
\star\frac{e^{2 \pi i \zeta_2 q- \pi ik q^2}}{2\cosh \pi q}
\star\frac{e^{2 \pi i m_2 p}}{2\cosh \pi p}\,.
\end{equation}
Clearly the partition function can be determined from the spectrum of $\hat K$ or $K_W$.
The leading classical part comes from replacing the star product with a regular product. In
the appendix we outline the calculation of the partition function, extending \cite{Marino2012}.
\section{Mirror symmetry}
\label{sec:mirror}
In this section we examine the theory studied in the previous
section, with vanishing CS levels (see top quiver in Figure~\ref{fig:mirror}),
where the density function \eqn{KW} becomes
\begin{equation}
\label{rhonoCS}
K_W = \frac{e^{2 \pi i \zeta_1 q}}{2\cosh \pi q}\star
\frac{e^{2 \pi i m_1 p}}{2\cosh \pi p}\star
\frac{e^{2 \pi i \zeta_2 q}}{2\cosh \pi q}\star
\frac{e^{2 \pi i m_2 p}}{2\cosh \pi p}\,.
\end{equation}
It has been known for a long time that this theory has two
mirror theories, related in the IIB brane construction by $SL(2, \mathbb{Z})$
transformations \cite{Boer1997}. As we show, the density functions of these
theories are simply related by linear canonical transformations.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{mirror}
\caption{\label{fig:mirror}%
Quiver diagrams summarising the two node theory we discuss in the text and its two mirror duals.
Each circle represents a $U(N)$ vector multiplet, labelled inside by the CS level $k$
and outside by the FI parameter. Edges represent hypermultiplets. Those connecting two circles
are bifundamental fields with the mass indicated next to them. The boxes represent $U(1)$
flavor symmetries of fundamental hypers, which in our examples are massless.}
\end{figure}
\subsection{$S$ transformation}
\label{sec:S}
The first of the known mirror theories is one with identical matter content but with mass
and FI parameters exchanged \cite{Boer1997a}
\begin{equation}
m_1 \rightarrow\tilde m_1= -\zeta_1\,,
\qquad
m_2 \rightarrow\tilde m_2=-\zeta_2\,,
\qquad
\zeta_1 \rightarrow \tilde\zeta_1= m_2 \,,
\qquad
\zeta_2 \rightarrow \tilde\zeta_2= m_1 \,.
\end{equation}
This is illustrated by the bottom right quiver in Figure~\ref{fig:mirror}.
At the level of the density function, this gives
\begin{equation}
\label{Srho}
K_W^{(S)}=
\frac{e^{2 \pi i m_2 q}}{2\cosh \pi q}\star
\frac{e^{-2 \pi i \zeta_1 p}}{2\cosh \pi p}\star
\frac{e^{2 \pi i m_1 q}}{2\cosh \pi q}\star
\frac{e^{-2 \pi i \zeta_2 p}}{2\cosh \pi p} \sim
\frac{e^{-2 \pi i \zeta_1 p}}{2\cosh \pi p}\star
\frac{e^{2 \pi i m_1 q}}{2\cosh \pi q}\star
\frac{e^{-2 \pi i \zeta_2 p}}{2\cosh \pi p}\star
\frac{e^{2 \pi i m_2 q}}{2\cosh \pi q}\,,
\end{equation}
where the last relation represents equivalence under conjugating by
$\frac{e^{2 \pi i m_2 q}}{2\cosh \pi q}$.
We find that this density is the same as \eqn{rhonoCS} under the replacement
\begin{equation}
p \to q \,,
\qquad
q \to - p\,.
\end{equation}
\subsection{$U$ transformation}
\label{sec:U}
To get the second mirror theory we apply to \eqn{rhonoCS} the replacement
\begin{equation}
\label{Utrans}
p \rightarrow p + q\,,
\qquad
q \rightarrow - p \,.
\end{equation}
The result is%
\footnote{Note that the definition of the star product \eqn{wigident} is invariant
under linear canonical transformations, and in particular under \eqn{Utrans}.}%
\begin{equation}\begin{aligned}
\label{Urho}
K_W^{(U)} &=
\frac{e^{- 2 \pi i \zeta_1 p}}{2\cosh \pi p}\star
\frac{e^{ 2 \pi i m_1 (p+q)}}{2\cosh \pi (p+q)}\star
\frac{e^{-2 \pi i \zeta_2 p}}{2\cosh \pi p}\star
\frac{e^{2 \pi i m_2 (p+q)}}{2\cosh \pi (p+q)}
\\
&= \frac{e^{- 2 \pi i \zeta_1 p}}{2\cosh \pi p}\star
e^{-i \pi {q}^2}\star
\frac{e^{ 2 \pi i m_1 p}}{2\cosh \pi p}\star
e^{i \pi {q}^2}\star
\frac{e^{-2 \pi i \zeta_2 p}}{2\cosh \pi p}\star
e^{- i \pi {q}^2}\star
\frac{e^{2 \pi i m_2 p}}{2\cosh \pi p}\star
e^{i \pi {q}^2}\,.
\end{aligned}\end{equation}
In the second line we have made use of the identity
\begin{equation}
\label{cs-shift}
e^{-\pi i{q}^2}\star f(p) \star e^{\pi i{q}^2}= f(p + q) \,.
\end{equation}
To read off the corresponding quiver theory from \eqn{Urho}, each
$e^{i \pi (k q^2 + 2 \zeta q)}$ term can be associated to
a $U(N)$ node with CS level $k$ and FI parameter $\zeta$, while each
$\frac{e^{2\pi i m p}}{2\cosh \pi p}$ comes from a bifundamental hypermultiplet
with mass $m$.
The transformed density operator corresponds therefore to a circular
quiver with four nodes that have alternating Chern-Simons
levels $k = \pm 1$ and vanishing FI parameters. The bifundamental multiplets connecting adjacent nodes have masses $\{- \zeta_1, m_1,-\zeta_2,m_2 \}$,
as in the bottom left diagram of Figure~\ref{fig:mirror}.
As we discuss in the appendix, the partition function can be expressed in terms of
$\Tr \hat K^l$ \eqn{Zconj}, which in phase space is given by an integral over $p,q$ \eqn{wigident}.
Since we showed that mirror symmetry can be viewed just as a linear canonical
transformation, which is a change of variables with unit Jacobian, it is clear that
mirror symmetry preserves the partition function.
\subsection{$SL(2,\mathbb{Z}) $}
\label{sec:sl2}
It is easy to see that the transformations we used in the previous sections close onto
$SL(2, \mathbb{Z})$.
Indeed, defining $T=SU$ we find the defining relations
\begin{equation}
S^2 = - I\,,
\qquad
(ST)^3 = I\,.
\end{equation}
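These relations are immediate to verify. Representing the substitutions
of sections~\ref{sec:S} and~\ref{sec:U} as integer matrices acting on
the column vector $(q,p)$ -- the matrix convention is our choice for
this check -- one has:
\begin{verbatim}
import numpy as np

S = np.array([[0, -1],    # q -> -p
              [1,  0]])   # p ->  q
U = np.array([[0, -1],    # q -> -p
              [1,  1]])   # p ->  p + q
T = S @ U

I2 = np.eye(2, dtype=int)
assert np.array_equal(S @ S, -I2)                            # S^2 = -I
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), I2)  # (ST)^3 = I
# both generators are canonical transformations: unit determinant
assert round(np.linalg.det(S)) == round(np.linalg.det(U)) == 1
\end{verbatim}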
More general $SL(2,\mathbb{Z})$ transformations will give density operators with terms of the form
\begin{equation}
\frac{1}{\cosh \pi (a p + b q) } \, .
\end{equation}
The cases with $a=0$ and $b=1$ or $a=\pm1$ and $b\in\mathbb{Z}$ have a natural interpretation
as a contribution of a fundamental field, or
as we have seen in \eqn{cs-shift}, from conjugating the usual $1/\cosh\pi p$ by CS terms.
But these manipulations cannot undo expressions one
finds from a general $SL(2,\mathbb{Z})$ transformation of $K_W$.
In these more general cases, the transformed
density operator can still be associated to a matrix model, but it cannot be derived from
any known 3d lagrangian.
This is also manifested in the IIB brane realization, where any $SL(2,\mathbb{Z})$ transformation will lead to
some configuration of $(p,q)$ branes. Most of those do not have a known Lagrangian
description \cite{Gaiotto2009}, but one could associate to them a matrix model \cite{Assel2014},
which would indeed lead to the transformed density operator.
\subsection{Mirror symmetry for generic circular quiver}
\label{sec:general}
The manifestation of mirror symmetry as a canonical transformation
naturally generalises to the entire family of ${\mathcal N}=4 $ circular quivers with an arbitrary
number of nodes.
Applying the Fermi-gas formalism, it is easy to see that the density function
for such a theory with $n$ nodes is given by%
\footnote{The $\prod_\star$ product is defined by ordered star multiplication.}
\begin{equation}
\label{rhogen}
K_W = \prod_{a=1}^n\raisebox{-1.2ex}{}_\star \frac{e^{2 \pi i \zeta_a q}}{\left(2\cosh\pi q \right)^{N_a}}\star
\frac{e^{2 \pi i m_a p}}{2\cosh \pi p}\,,
\end{equation}
where $\zeta_a $ denotes the FI parameter of the $a$\textsuperscript{th} node,
$N_a$ denotes the number of
fundamental matter fields attached to the $a$\textsuperscript{th} node and $m_a$
denotes the mass of the bifundamental field connecting the $a$\textsuperscript{th} and
$(a+1)$\textsuperscript{th} nodes.
We can now apply the $S$ and $U$ transformations of the previous section, and look to see if
the resulting density functions can again be interpreted as coming from the mirror gauge
theories. Applying the $S$ transformation we get
\begin{equation}
K_W^{(S)}
= \prod_{a=1}^n\raisebox{-1.2ex}{}_\star
\frac{e^{-2 \pi i \zeta_a p}}{\left(2\cosh\pi p\right)^{N_a} }\star
\frac{e^{2 \pi i m_a q}}{2\cosh \pi q}\,.
\end{equation}
This density function is that of a circular quiver theory with $\sum_{a=1}^n N_a$
nodes and $n$ fundamental matter fields. The fundamentals are attached
to nodes which have FI parameters $m_a$, and are
separated by $N_a-1$ other nodes. The masses of the bifundamentals connecting
them add up to $-\zeta_{a}$.%
\footnote{\label{freedom}At the level of the matrix model, this additional freedom to choose mass parameters
in the mirror theory simply amounts to the freedom to make constant shifts in the integration variables.}
Applying the $U$ transformation we get
\begin{equation}
K_W^{(U)}
= \prod_{a=1}^n\raisebox{-1.2ex}{}_\star \frac{e^{- 2 \pi i \zeta_a p}}{\left(2\cosh\pi p\right)^{N_a}}\star
\frac{e^{ 2 \pi i m_a (p + q)}}{2\cosh \pi (p + q)}
= \prod_{a=1}^n\raisebox{-1.2ex}{}_\star \frac{e^{- 2 \pi i \zeta_a p}}{\left(2\cosh\pi p\right)^{N_a}}
\star e^{-\pi iq^2}\star\frac{e^{ 2 \pi i m_ap}}{2\cosh \pi p}\star e^{\pi iq^2}\,.
\end{equation}
The mirror theory can be readily read off from this density function as a circular quiver
theory with $\sum_{a=1}^n N_a + n$ nodes and no fundamental matter. Each node
has Chern-Simons level $k= +1, -1$ or $0$. Further details concerning the mass parameters
and value of the Chern-Simons level at each node can be read off in much the same
way as for the previous example.
A further generalisation we have not yet considered is to turn on masses for the
fundamental fields. This corresponds to
replacing each of the $(2\cosh\pi q)^{-N_a}$
in \eqn{rhogen} with a product of $N_a$ terms with masses $\mu_i$
\begin{equation}\begin{aligned}
\label{fundmasses}
e^{2 \pi i \zeta_a q}\star
\prod_{i=1}^{N_a}\raisebox{-1.2ex}{}_\star\frac{1}{2\cosh \pi (q + \mu_i)}
&= e^{2 \pi i \zeta_a q}\star
\prod_{i=1}^{N_a}\raisebox{-1.2ex}{}_\star
\left(e^{2 \pi i \mu_i p}\star \frac{1}{2\cosh \pi q }\star e^{-2 \pi i \mu_i p}\right)\\
&\hskip-1.2in{}
=e^{-2 \pi i \zeta_a\mu_1}e^{ 2 \pi i \mu_1 p}\star
\frac{e^{2 \pi i \zeta_a q}}{2\cosh\pi q}\star
e^{-2 \pi i \mu_1 p}\star
\prod_{i=2}^{N_a}\raisebox{-1.2ex}{}_\star
\left(e^{ 2 \pi i \mu_i p}\star
\frac{1}{2\cosh \pi q}\star
e^{ -2 \pi i \mu_{i} p}\right).
\end{aligned}\end{equation}
where, in the second line, we chose to associate the FI term to the
first fundamental field, picking up an overall phase.\footnote{There is a freedom to distribute
the FI terms arbitrarily among the fundamental fields, leading to a different phase in front,
see also footnote~\ref{freedom}.}
Once we apply $S$ or $U$ transformations to \eqn{fundmasses}, it becomes clear that
these mass terms become additional FI parameters, as expected.
\subsection*{Acknowledgments}
We are grateful to Benjamin Assel and Marcos Mari\~no for useful discussions.
N.D. is grateful for the hospitality of APCTP, Nordita, CERN (via the
CERN-Korea Theory Collaboration), and the Simons Center for Geometry and Physics, Stony Brook University
during the course of this work.
The research of N.D. is underwritten by an STFC advanced fellowship. The
CERN visit was funded by the National Research Foundation (Korea).
The research of J.F. is funded by an STFC studentship ST/K502066/1.
\section{Introduction}
\subsection{Description of the model}
\qquad In this paper, we analyse the stability of non-conforming numerical schemes~for~a~system~describing the evolution of an incompressible non-Newtonian fluid. Namely, for a given spatial domain $\Omega \subseteq \mathbb{R}^d$, with $d\in\{2,3\}$, and a final time $0<T<\infty$, in the continuous setting, one looks for~a~velocity~vector~field $\vec{u}\colon [0,T]\times \overline{\Omega} \to \mathbb{R}^d$, a pressure field $\pi\colon (0,T)\times \Omega \to \mathbb{R}$, and~a~(symmetric~and~traceless) stress tensor $\tens{S}\colon (0,T)\times \Omega \to \mathbb{R}^{d \times d}_{\sym,\tr}$ such that
\begin{subequations}
\label{eq:continuous_PDE}
\begin{alignat}{2}
\begin{aligned}
\label{eq:continuous_balance}
\partial_t \vec{u} - \rmdiv\tens{S} + \rmdiv(\vec{u} \otimes \vec{u}) + \nabla \pi
&= \bm{f} \qquad \quad &&\text{ in } (0,T)\times \Omega\,,\\
\rmdiv\vec{u} &= 0 \qquad \quad &&\text{ in }{(0,T)\times \Omega}\,,\\
\vec{u} &= \bm{0} \qquad \quad &&\text{ on }(0,T)\times \partial\Omega\,,\\
\vec{u}(0,\cdot) &= \vec{u}_0 \qquad \quad &&\text{ in }\Omega\,,
\end{aligned}
\end{alignat}
where the initial velocity vector field $\vec{u}_0\colon \Omega \to \mathbb{R}^d$ and the body force $\bm{f}\colon (0,T)\times \Omega \to \mathbb{R}^d$~are~given. To close the system, we consider an implicit constitutive law of the form
\begin{equation}
\label{eq:continuous_constitutive_law}
\tens{G}(\tens{S}, \tens{D}(\vec{u})) = \bm{0} \qquad
\text{ in } (0,T)\times \Omega\,,
\end{equation}
\end{subequations}
where $\tens{D}(\vec{u}) = \frac{1}{2}(\nabla\vec{u} + \nabla\vec{u}^\top)\colon(0,T)\times \Omega\to\mathbb{R}^{d \times d}_{\sym} $ denotes the strain rate tensor, i.e., symmetric part of the velocity gradient, and $\tens{G}\colon \mathbb{R}^{d \times d}_{\sym} \times \mathbb{R}^{d \times d}_{\sym} \to \mathbb{R}^{d \times d}_{\sym}$ is a locally Lipschitz function such that $\tens{G}(\bm{0},\bm{0})=\bm{0}$ and such that it defines a $p$-coercive~graph~for~$p>1$, in the sense that there exist two constants $c_1, c_2>0$ such that
\begin{equation}\label{eq:coercivity}
\tens{G}(\tens{A},\tens{B}) = \bm{0} \qquad
\Longrightarrow \qquad
\tens{A}\fp\tens{B} \geq c_1(|\tens{A}|^{p'} + |\tens{B}|^p) - c_2\,,
\end{equation}
for every $(\tens{A},\tens{B})\in \mathbb{R}^{d \times d}_{\sym}\times \mathbb{R}^{d \times d}_{\sym}$. Such a class of constitutive relations captures many models that are popular in applications. Prototypical examples that, in addition, define a \emph{monotone graph} include fluids with power-law structure
\begin{subequations}\label{eq:power_law}
\begin{gather}
\tens{G}(\tens{S},\tens{D}) \coloneqq \tens{S} - K_\star(1 + \Gamma_\star |\tens{D}|^2)^{\frac{p-2}{2}}\tens{D}
\qquad K_\star,\Gamma_\star>0\,,\; p>1\,, \label{eq:power_lawA}\\
\tens{G}(\tens{S},\tens{D}) \coloneqq K_\star(1 + \Gamma_\star |\tens{S}|^2)^{\frac{p'-2}{2}}\tens{S} - \tens{D}
\qquad K_\star,\Gamma_\star>0\,,\; p>1\,,
\end{gather}
\end{subequations}
or viscoplastic Bingham fluids
\begin{equation}\label{eq:bingham_implicit}
\tens{G}(\tens{S},\tens{D}) \coloneqq (|\tens{S}| - \tau_\star)^+ \tens{S}
- 2\nu_\star(\tau_\star + (|\tens{S}|-\tau_\star)^+)\tens{D}
\qquad \nu_\star > 0\,,\; \tau_\star \geq 0\,,
\end{equation}
where $(\cdot)^+ \!\coloneqq \! (s\mapsto\max\{s,0\})\colon \!\mathbb{R}\!\to\! \mathbb{R}$; this relation is more commonly written~in~terms~of~the~dichotomy
\begin{equation}\label{eq:bingham_dichotomy}
\renewcommand{\arraystretch}{1.2}
\left\{
\begin{array}{ccc}
|\tens{S}|\leq \tau_\star & \Longleftrightarrow & \tens{D} = \bm{0}\,, \\[1mm]
|\tens{S}|> \tau_\star & \Longleftrightarrow & \tens{S} = 2\nu_\star \tens{D} + \displaystyle\frac{\tau_\star}{|\tens{D}|}\tens{D}\,.
\end{array}
\right.
\end{equation}
Note that, while it is not possible to write the relation \eqref{eq:bingham_dichotomy} in terms of a single-valued function~$\tens{S}(\tens{D})$, within the implicit framework, one can express it in terms of elementary functions without issue. We note further that the Newtonian constitutive relation is of course also considered here (e.g., take $\tau_\star = 0$~in \eqref{eq:bingham_implicit} or $p=2$ in \eqref{eq:power_law}). We refer to \cite{BMR.2020,BMM.2021} for an in-depth discussion of the different models that can be described with such monotone constitutive relations and the corresponding PDE analysis.
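For orientation, the following minimal Python sketch (ours, purely illustrative; the matrix norm $|\cdot|$ is taken to be the Frobenius norm, and all variable names and parameter values are ad hoc) evaluates \eqref{eq:bingham_implicit} and confirms numerically that both branches of the dichotomy \eqref{eq:bingham_dichotomy} indeed solve $\tens{G}(\tens{S},\tens{D})=\bm{0}$:
\begin{verbatim}
import numpy as np

def G_bingham(S, D, nu, tau):
    # (|S| - tau)^+ S - 2 nu (tau + (|S| - tau)^+) D, |.| = Frobenius norm
    excess = max(np.linalg.norm(S) - tau, 0.0)
    return excess * S - 2.0 * nu * (tau + excess) * D

rng = np.random.default_rng(0)
nu, tau = 1.0, 0.5
A = rng.standard_normal((3, 3))
D = 0.5 * (A + A.T) - np.trace(A) / 3.0 * np.eye(3)  # symmetric, traceless
S = 2.0 * nu * D + tau * D / np.linalg.norm(D)       # flowing branch, |S| > tau
assert np.allclose(G_bingham(S, D, nu, tau), 0.0)
S0 = 0.9 * tau * D / np.linalg.norm(D)               # |S| <= tau paired with D = 0
assert np.allclose(G_bingham(S0, np.zeros((3, 3)), nu, tau), 0.0)
\end{verbatim}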
The implicit constitutive relations considered here also include non-monotone relations that can describe hysteretic behaviour, e.g.,
\begin{equation}\label{eq:non_monotone}
\tens{G}(\tens{S},\tens{D}) = \Big[a(1 + b|\tens{S}|^2)^{\frac{q-2}{2}} + c\Big]\tens{S}
- \tens{D}
\qquad a,b,c>0\,,\; q\in \mathbb{R}\,,
\end{equation}
which for $q<0$, in general, is \emph{non-monotone} (see \cite{LRR.2013} for details), but has, nevertheless,~been~shown to be thermodynamically consistent \cite{JP.2018}. See also \cite{JMPT.2019} for insightful numerical experiments.
In this work, we concentrate on non-conforming discretisations of the problem \eqref{eq:continuous_PDE}; namely, a discontinuous Galerkin in time method $\mathrm{DG}(k)$ and a discontinuous Galerkin discretisation in space that can, in particular, be taken to be a Local Discontinuous Galerkin (LDG) method or an Interior Penalty (IP) method (possibly incomplete). The DG time discretisation we consider here can be shown to be equivalent to a Radau IIA Implicit Runge--Kutta scheme \cite{MN.2006}, which, due to its L-stability, is popular in applications modelled by parabolic problems.
Regarding the spatial discretisation, in the case of incompressible fluid models such as \eqref{eq:continuous_PDE}, one has the additional concern of the preservation of the divergence-free constraint \eqref{eq:continuous_balance}$_2$ at the discrete level; in recent years, the importance of this has been recognised and schemes that lead to point-wise divergence-free approximations~have~many~desirable~qualities, such as pressure robust error estimates (see \cite{JLMNR.2017} for more details). One of the main ways of obtaining exactly divergence-free approximations is to relax the conformity requirement and employ a finite element space for the velocity that is $H(\rmdiv;\Omega)$-conforming only. This non-conformity is then handled by including DG terms in the formulation (see, e.g., \cite{CKS.2007,SLLL.2018} for the Newtonian case). While this is one of our main motivations, here we will analyse more general discretisations that might not~enforce~the~divergence~constraint~exactly.
Given the highly non-linear nature of the models considered here, deriving~error~estimates~seems out of reach. In such cases, one can turn instead to proving weak convergence (of a subsequence) to minimal regularity solutions by using compactness arguments; a crucial step in such arguments is to establish stability of the corresponding discrete scheme, from which one then~extracts~converging~subsequences; this approach was taken in \cite{ST.2019,FGS.2020} for conforming-in-space discretisations of implicitly constituted models; for the case with explicit constitutive relations (and implicit Euler in time), see \cite{BKR.2021,KR.2022}. In the setting considered here, the coercivity condition \eqref{eq:coercivity} results in a stability estimate that guarantees the uniform boundedness of the velocity approximations in $L^p(0,T;W^{1,p}(\Omega)^d)$ (or, more precisely, on its broken counterpart) and of the stress approximations~in~$L^{p'}((0,T)\times \Omega)^{d\times d}$.~This~is,~however,~not enough as the usual notions of energy solutions for incompressible models require also that $\vec{u} \in L^\infty(0,T;L^2(\Omega)^d)$; among other things, this condition is useful because (see, e.g., \cite{ST.2019} for more details):
\begin{itemize}
\item Together with a Gagliardo--Nirenberg-type interpolation inequality, cf. \cite[Theorem I.2.1]{dibene}, it implies that $$\vec{u} \in L^{\frac{p(d+2)}{d}}((0,T)\times\Omega)^d\,,$$ which, in turn, implies, e.g., that if $p\geq \frac{3d+2}{d+2}$ (and so, in particular, for the Newtonian problem in 2D), then the velocity is an admissible test function in the balance of momentum, which guarantees an energy identity and, thus, uniqueness of solutions;\vspace{1mm}
\item It is used when proving that $$\vec{u} \in C_w^0([0,T];L^2(\Omega)^d)\,,$$ meaning that the initial condition is a priori meaningful in this weak sense, but in fact this allows one to prove that $$\lim_{t\to 0}\|\vec{u}(t) - \vec{u}_0\|_{L^2(\Omega)}= 0\,.$$
\end{itemize}
It is, therefore, highly desirable that the discretisation methods produce solutions which are also uniformly stable in $L^\infty(0,T;L^2(\Omega)^d)$. By testing the DG-in-time discretised system with the solution, it is straightforward (see Lemma \ref{lem:apriori} below) to prove $L^2(\Omega)^d$-stability at the partition points $\{t_j\}$. However, this only yields the desired $L^\infty(0,T;L^2(\Omega)^d)$ bound in the lowest order case $\mathrm{DG}(0)$ (i.e.\ implicit~Euler), since the function is piece-wise constant in time. In general, when working with general DG in time discretisations, one can only guarantee stability in $L^{2p}(0,T;L^2(\Omega)^d)$; see \cite{W.2010} and \cite{AGF.2023} for the spatially conforming and non-conforming cases, respectively. Thus, in general, one would obtain convergence to a weaker notion of solution that might not be unique even when $p=2=d$. Chrysafinos and Walkington \cite{CW.2010b} proved, however, with the help of Ladyzhenskaya's inequality, that for spatially conforming discretisations, one can still obtain $L^\infty(0,T;L^2(\Omega)^d)$-stability for the Newtonian problem ($p=2$) in two spatial dimensions ($d=2$). The main contribution of this work is the extension of this result to the non-Newtonian and non-conforming setting; in particular, we establish that if $p\geq \frac{3d+2}{d+2}$ (i.e.\ when the velocity is an admissible test function), DG discretisations are stable also in $L^\infty(0,T;L^2(\Omega)^d)$. An important step in the proof is the application of a Gagliardo--Nirenberg inequality on DG spaces, which is needed since the numerical solutions are discontinuous across elements, which we also derive and is to the best of our knowledge also new.
\textit{This article is organized as follows:} In Section \ref{sec:preliminaries}, we introduce the employed notation, the basic assumptions on the mesh regularity, and the relevant spaces and operators from~DG~theory.~In~Section~\ref{sec:gagliardo}, we establish a discrete Gagliardo--Nirenberg-type inequality on DG spaces. In Section \ref{sec:parabolic_interpolation}, using the discrete Gagliardo--Nirenberg-type inequality from Section \ref{sec:gagliardo}, we derive several parabolic discrete interpolation inequalities. These discrete parabolic interpolation inequalities are employed in Section \ref{sec:stablity} to prove the $L^\infty(0,T;L^2(\Omega)^d)$-stability of discontinuous Galerkin schemes for incompressible flows.
\section{Preliminaries}\label{sec:preliminaries}
\qquad Throughout the entire article, if not otherwise specified, we always denote by ${\Omega\subseteq \mathbb{R}^d}$,~${d\in\mathbb{N}}$, a bounded polyhedral Lipschitz domain with outward-pointing unit normal vector field $\vec{n}\colon \partial\Omega\to \mathbb{S}^{d-1}$.
Then, the time interval will be denoted by $I\coloneqq (0,T)$, $0<T<\infty$, and the parabolic~cylinder~by~$Q\coloneqq I\times \Omega$. For $p\in [1,\infty]$ and $k\in\mathbb{N}$, we will employ standard notation for Lebesgue $L^p(\Omega)$, Sobolev $W^{k,p}(\Omega)$, and Bochner--Sobolev $L^p(I;W^{k,p}(\Omega))$ spaces throughout. For $p\in [1,\infty)$ and $k\in \mathbb{N}$, we~denote~by $W_0^{k,p}(\Omega)$, the closure of the space of smooth functions on $\Omega$ with compact support, with respect to the $\|\cdot\|_{W^{k,p}(\Omega)}$-norm. The subspace of $L^p(\Omega)$ functions with zero mean~will~be~denoted~by~$L_0^p(\Omega)$.
\subsection{Mesh regularity}
\qquad In this subsection, we propose a set of assumptions on the family of partitions $\{\mathcal{T}_{h}\}_{h\in (0,1]}$, which are required in order to apply the theory developed in this paper. These assumptions correspond to the choice in \cite{BO.2009}.
Let $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ be a family of partitions of the closure $\overline{\Omega}$ into convex polyhedral elements, which are affine images of a set of reference polyhedra. More precisely, we assume that there exists a finite number of convex reference polyhedra $\widehat{K}_1,\dots,\widehat{K}_N$, such that $\vert\widehat{K}_i\vert = 1 $ for $i = 1,\dots, N$, and that for each $K \in \mathcal{T}_{h}$, there exists a reference element $\widehat{K}_i$ for some $i\in \{1,\dots, N\}$ and an invertible affine map $F_K\colon \widehat{K}_i\to K$ such that $K = F_K (\widehat{K}_i)$. The symbol $h>0$ denotes the maximal mesh size, i.e., if we define $h_K\coloneqq \text{diam}(K)$ for every $K\in \mathcal{T}_{h}$, then we have that $h=\max_{K\in \mathcal{T}_{h}}{h_K}$. Without loss of generality, we assume that $h\in (0, 1]$. We will provide further assumptions on the mesh regularity in the course of this section.
We define the sets of $(d-1)$-dimensional faces $\Gamma_h$, interior faces $\Gamma_h^{i}$, and boundary faces $\Gamma_h^{\partial}$ of the partition $\mathcal{T}_{h}$ by
\begin{align*}
\Gamma_h&\coloneqq \Gamma_h^{i}\cup \Gamma_h^{\partial}\,,\\[-0.5mm]
\Gamma_h^{i}&\coloneqq \{K\cap K'\mid K,K'\in \mathcal{T}_{h}\,,\text{dim}_{\mathscr{H}}(K\cap K')=d-1\}\,,\\[-0.5mm]
\Gamma_h^{\partial}&\coloneqq\{K\cap \partial\Omega\mid K\in \mathcal{T}_{h}\,,\text{dim}_{\mathscr{H}}(K\cap \partial\Omega)=d-1\}\,,
\end{align*}
where for every $S\subseteq \mathbb{R}^d$, we denote by $\text{dim}_{\mathscr{H}}(S)\coloneqq\inf\{d'\geq 0\mid \mathscr{H}^{d'}(S)=0\}$, the~Hausdorff~dimension. The (local) mesh-size function $h_{\mathcal{T}}\colon \Omega\to \mathbb{R}$ for every element $K\in \mathcal{T}_{h}$ is defined by $h_{\mathcal{T}}|_K\coloneqq h_K $.
The (local) face-size function $h_{\Gamma}\colon \Gamma_h\to \mathbb{R}$ for every facet $F\in \Gamma_h$ is defined by $h_{\Gamma}|_F\coloneqq h_F \coloneqq \text{diam}(F)$.\enlargethispage{4mm}
\begin{assumption}[Mesh quality; cf. \cite{BO.2009}]\label{assum:mesh}
We assume that $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ satisfies the following conditions:
\begin{itemize}
\item[(i)] \textup{Shape Regularity.} There exist constants $c_1,c_2>0$ such that for every $K\in \mathcal{T}_h$ and~$h\in (0,1]$,~it~holds\vspace{-1.5mm}
\begin{align*}c_1\, h_K^d\leq \vert K\vert\leq c_2\, h_K^d\,.\\[-7mm]
\end{align*}
\item[(ii)] \textup{Contact Regularity.} There exists a constant $c_3>0$ such that for every $F\in \Gamma_h$ with $F\subseteq \overline{K}$ for some $K\in \mathcal{T}_h$ and $h\in (0,1]$, it holds\vspace{-1.5mm}
\begin{align*}c_3\, h_K^{d-1}\leq \mathscr{H}^{d-1}(F)\,.\\[-7mm]
\end{align*}
\item[(iii)] \textup{Submesh condition.} There exists a shape-regular, conforming, matching simplicial submesh $\widetilde{\mathcal{T}_{h}}$ such that\vspace{-2mm}
\begin{itemize}
\item[1.] For each $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$, there exists $K\in \mathcal{T}_{h}$ such that $\widetilde{K}\subseteq K$,
\item[2.] The family $\{\widetilde{\mathcal{T}_{h}}\}_{h\in (0,1]}$ satisfies (i) and (ii).\vspace{-0.5mm}
\item[3.] There exists a constant $\tilde{c}>0$ such that for any $\widetilde{K}\in \smash{\widetilde{\mathcal{T}_{h}}}$, $K\in \mathcal{T}_{h}$ with $\widetilde{K}\subseteq K$,~it~holds~${h_K \leq \tilde{c}\, h_{\widetilde{K}}}$.
\end{itemize}
\end{itemize}
\end{assumption}
\begin{remark}
We note that in dimension $d \in \{ 2, 3\}$ a simplicial submesh can be constructed under mild assumptions on the partitions $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ (cf. \cite[Corollary 7.3]{Brenner.2003}). In addition, it seems straightforward to generalize this proof to arbitrary dimensions $d\ge 2$.
\end{remark}
\subsubsection{Broken function spaces and projectors}
\qquad For every $k \in \mathbb{N}_0$~and~${K\in \mathcal{T}_h}$,
we denote by $\mathbb{P}_k(K)$, the space of
polynomials of degree at most $k$ on $K$. Then, for given $k\in \mathbb{N}_0$, we define the space of \textit{broken polynomials of global degree at most $k$}
\begin{align*}
\mathbb{P}_k(\mathcal T_h)&\coloneqq\big\{v_h\in L^\infty(\Omega)\mid v_h|_K\in \mathbb{P}_k(K)\text{ for all }K\in \mathcal{T}_h\big\}\,.
\end{align*}
In addition, for
given~$p\in (1,\infty)$,
we define the \textit{broken Sobolev space}
\begin{align*}
W^{1,p}(\mathcal T_h)&\coloneqq\big\{w_h\in L^p(\Omega)\mid w_h|_K\in W^{1,p}(K)\text{ for all }K\in \mathcal{T}_h\big\}\,.
\end{align*}
For each $w_h\!\in\! W^{1,p}(\mathcal{T}_h)$, we denote by $\nabla_h w_h\!\in\! L^p(\Omega)^d$,
the \textit{local gradient}, for~every~${K\!\in\!\mathcal{T}_h}$,~defined by
$(\nabla_h w_h)|_K\!\coloneqq\!\nabla(w_h|_K)$~for~all~${K\!\in\!\mathcal{T}_h}$.
For each $K\!\in\! \mathcal{T}_h$,~${w_h\!\in\! W^{1,p}(\mathcal{T}_h)}$~admits~a~trace~${\textrm{tr}^K(w_h)\!\in\! L^p(\partial K)}$. For each face
$F\in \Gamma_h$ of a given element $K\in \mathcal{T}_h$, we define~this~interior trace by
$\smash{\textup{tr}^K_F(w_h)\in L^p(F)}$. Then, given some multiplication operator~${\odot\colon \mathbb{R}^m\times \mathbb{R}^d\to \mathbb{R}^l}$,~${m,l\in \mathbb{N}}$, for
every $w_h\in W^{1,p}(\mathcal{T}_h)$ and interior faces $F\in \Gamma_h^{i}$ shared by
adjacent elements $K^-_F, K^+_F\in \mathcal{T}_h$,
we denote by
\begin{align*}
\{w_h\}_F&\coloneqq\tfrac{1}{2}\big(\textup{tr}_F^{K^+}(w_h)+
\textup{tr}_F^{K^-}(w_h)\big)\in
L^p(F)\,,\\
\llbracket w_h\odot \vec{n}\rrbracket_F
&\coloneqq\textup{tr}_F^{K^+}(w_h)\odot\vec{n}^+_F+
\textup{tr}_F^{K^-}(w_h)\odot\vec{n}_F^-
\in L^p(F)\,,
\end{align*}
the \textit{average} and \textit{jump}, respectively, of $w_h$ on $F$.
Moreover, for every $w_h\in W^{1,p}(\mathcal{T}_h)$ and boundary faces $F\in \Gamma_h^{\partial}$, we define \textit{boundary averages} and
\textit{boundary jumps}, respectively, by
\begin{align*}
\{w_h\}_F&\coloneqq\textup{tr}^\Omega_F(w_h) \in L^p(F)\,, \\
\llbracket w_h\odot\vec{n}\rrbracket_F&\coloneqq
\textup{tr}^\Omega_F(w_h)\odot\vec{n} \in L^p(F)\,.
\end{align*}
If there is no
danger~of~confusion, we will omit the index $F\in \Gamma_h$; in particular, if we interpret jumps and averages as global functions defined on the whole of $\Gamma_h$.
Apart from that, for every $w_h\in W^{1,p}(\mathcal{T}_h)$, we introduce the DG norm via
\begin{align*}
\|w_h\|_{h,p}\coloneqq\Big(\|\nabla_hw_h\|_{L^p(\Omega)}^p+\big\|h^{-\frac{1}{p'}}_\Gamma\jump{w_h\vec{n}}\big\|_{L^p(\Gamma_h)}^p\Big)^{1/p}\,,
\end{align*}
which turns $W^{1,p}(\mathcal{T}_h)$ into a Banach space\footnote{The completeness of $W^{1,p}(\mathcal{T}_h)$ equipped with $\|\cdot\|_{h,p}$, for each fixed $h\in (0,1]$, follows from ${\|w_h\|_{L^p(\Omega)}\lesssim\|w_h\|_{h,p}}$ for all $w_h\in \smash{W^{1,p}(\mathcal{T}_h)}$ (cf.~\cite[Lemma A.9]{DKRI14}) and an element-wise application of the trace theorem.}.
With this norm, cf.~\cite[Lm. A.9]{DKRI14},~for~every~${w_h\in W^{1,p}(\mathcal{T}_h)}$, there holds the discrete Poincar\'e inequality
\begin{equation}\label{eq:poincare}
\|w_h\|_{L^p(\Omega)} \lesssim \|w_h\|_{h,p}\,.
\end{equation}
Whenever we write $A \lesssim B$, it is meant that $A \leq c\, B$ with a constant $c>0$ that might depend on the domain, polynomial degree and/or shape regularity, but is independent of the discretisation parameters (i.e., the mesh size $h>0$ or the time step size $\tau>0$).
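To fix ideas, the following Python sketch (a one-dimensional toy illustration of ours; since a point facet has zero diameter, $h_F$ is here taken, by assumption, as the mean of the two neighbouring element sizes, and the boundary facets penalise the one-sided trace itself) evaluates $\|\cdot\|_{h,p}$ for a broken piecewise-linear function:
\begin{verbatim}
import numpy as np

def dg_norm_1d(nodes, v, p):
    # nodes: mesh points x_0 < ... < x_N partitioning (x_0, x_N)
    # v[i] = (left value, right value) of v_h on the element (x_i, x_{i+1})
    h = np.diff(nodes)
    grad = (v[:, 1] - v[:, 0]) / h                 # broken gradient per element
    grad_part = np.sum(h * np.abs(grad) ** p)      # || nabla_h v_h ||_p^p
    jumps = v[1:, 0] - v[:-1, 1]                   # jumps at interior facets
    hF = 0.5 * (h[1:] + h[:-1])                    # assumed facet-size convention
    jump_part = np.sum(hF ** (1 - p) * np.abs(jumps) ** p)
    jump_part += h[0] ** (1 - p) * abs(v[0, 0]) ** p    # boundary facets:
    jump_part += h[-1] ** (1 - p) * abs(v[-1, 1]) ** p  # one-sided trace
    return (grad_part + jump_part) ** (1 / p)

nodes = np.linspace(0.0, 1.0, 9)                   # uniform mesh of (0,1)
v = np.column_stack([np.sin(3 * nodes[:-1]),       # discontinuous v_h with
                     np.sin(3 * nodes[1:]) + 0.1]) # jumps of size 0.1
print(dg_norm_1d(nodes, v, p=1.5))
\end{verbatim}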
\section{Discrete Gagliardo--Nirenberg-type inequality}\label{sec:gagliardo}
\qquad In this section, we derive a discrete Gagliardo--Nirenberg-type inequality.
The key ingredient is
the quasi-interpolation operator $Q_h\colon \mathbb{P}_k(\mathcal{T}_{h})\to \mathbb{P}_1(\smash{\widetilde{\mathcal{T}_{h}}})\cap W^{1,\infty}(\Omega)$, where $\smash{\widetilde{\mathcal{T}_{h}}}$ denotes the simplicial submesh in Assumption \ref{assum:mesh} (iii), introduced~in~\cite{BO.2009}, and its approximation and stability properties~on~DG~spaces:\enlargethispage{11.5mm}
\begin{lemma}\label{lem:scott_zhang_stable}
Let $p\in [1,\infty)$ and $k\in \mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$, it holds\vspace{-1mm}
$$
\|\nabla Q_hv_h\|_{L^p(\Omega)}\lesssim
\|v_h\|_{h,p}\,.\\[-3.5mm]
$$
\end{lemma}
\begin{proof}
See \cite[Thm. 3.1, (3.11)]{BO.2009}.
\end{proof}
\begin{lemma}\label{lem:scott_zhang_approx}
Let $p,s\in [1,\infty)$ and $k\in\mathbb{N}_0$.
Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$ and $K\in \mathcal{T}_{h}$, it holds\vspace{-2mm}\footnote{For every $p\in [1,\infty)$, $w_h\in W^{1,p}(\mathcal{T}_{h})$, and $K\in \mathcal{T}_{h}$, we define
$\smash{\|w_h\|_{h,p,\omega_K}\coloneqq (\|\nabla_hw_h\|_{L^p(\omega_K)}^p+\|h^{-1/p'}_\Gamma\jump{w_h\vec{n}}\|_{L^p(\Gamma_h\cap \omega_K)}^p)^{1/p}}$}
$$
\|v_h-Q_hv_h\|_{L^s(K)}\lesssim
h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\,,
$$
where $\omega_K\coloneqq \bigcup\{K'\in\mathcal{T}_{h}\mid K'\cap K\neq\emptyset\} $. In particular, for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$,
it holds\vspace{-1mm}
$$
\|v_h-Q_hv_h\|_{L^p(\Omega)}\lesssim \|h_{\mathcal{T}}v_h\|_{h,p}\,.\\[-3.5mm]
$$
\end{lemma}
\begin{proof}
See \cite[Thm. 3.1, (3.7) \& (3.10)]{BO.2009}.
\end{proof}
\begin{corollary}\label{cor:scott_zhang_stable}
Let $p\in [1,\infty)$ and $k\in\mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$ and $K\in \mathcal{T}_{h}$, it holds\vspace{-1mm}
$$
\| Q_hv_h\|_{L^p(K)}+\| v_h-Q_hv_h\|_{L^p(K)}\lesssim
\|v_h\|_{L^p(\omega_K)}\,.
$$
In particular, for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$,
it holds\vspace{-1mm}
$$
\| Q_hv_h\|_{L^p(\Omega)}+\| v_h-Q_hv_h\|_{L^p(\Omega)}\lesssim
\|v_h\|_{L^p(\Omega)} \,.\\[-3.5mm]
$$
\end{corollary}
\begin{proof}\let\qed\relax
Using the $L^p$-approximation property of $Q_h$ for $s=p$ (cf. Lemma \ref{lem:scott_zhang_approx}), the inverse inequality (cf. \cite[Ex. 12.3]{EG21}), and the discrete trace inequality (cf.~\cite[Lm. 12.8]{EG21}), we find that
\begin{align*}
\| Q_hv_h\|_{L^p(K)}+\| v_h-Q_hv_h\|_{L^p(K)}&\lesssim \| v_h\|_{L^p(K)} + \| v_h -Q_hv_h\|_{L^p(K)}\\&
\lesssim \| v_h\|_{L^p(K)}+h_K\,\| v_h\|_{h,p,\omega_K}
\lesssim \| v_h\|_{L^p(\omega_K)}\,.\tag*{\qedsymbol}
\end{align*}
\end{proof}\vspace*{-10mm}
\begin{lemma}[Gagliardo--Nirenberg]\label{lem:gagliardo}
Let $p,q\in [1,\infty)$ and $k\in\mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$, it holds\vspace{-1mm}\enlargethispage{3mm}
\begin{align*}
\|v_h\|_{L^s(\Omega)}\lesssim
\|v_h\|_{h,p}^\gamma\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,,
\end{align*}
where $s\in [1,\infty)$ and $\gamma\in [0,1]$ satisfy\vspace{-1.5mm}
\begin{align}
\gamma=\frac{\frac{1}{q}-\frac{1}{s}}{\frac{1}{q}+\frac{1}{d}-\frac{1}{p}}\,.\label{eq:gamma}
\end{align}
\end{lemma}
Analogously to \cite[Thm. I.2.1]{dibene}, for each $d\ge 2$, the admissible range for $p,q,s\in [1,\infty)$ and $\gamma\in [0,1]$ satisfying \eqref{eq:gamma}, setting $p^*\coloneqq \smash{\frac{dp}{d-p}}$ if $p<d$, is given by:\vspace{-3mm}
\begin{subequations}\label{eq:admissibility}
\begin{alignat}{4}
&\text{if }p\in [1,d):\quad &&\gamma\in [0,1]\qquad &&\text{and}\qquad &&s\in \begin{cases}
[q,p^*]&\text{if }q\in [1,p^*]\\
[p^*,q]&\text{if }q\in [p^*,\infty)
\end{cases}\,,\label{eq:admissibility.1}\\
&\text{if }p\in [d,\infty):\quad &&s\in [q,\infty)\qquad &&\text{and}\qquad &&\gamma\in \big[0,\tfrac{dp}{dp+q(p-d)}\big)\,.\label{eq:admissibility.2}
\end{alignat}
\end{subequations}
\begin{proof}[Proof (of Lemma \ref{lem:gagliardo}).]
To begin with, we observe that
\begin{align}\label{eq:gagliardo.1}
\|v_h\|_{L^s(\Omega)}\leq \|Q_hv_h\|_{L^s(\Omega)}+\|v_h-Q_hv_h\|_{L^s(\Omega)}\eqqcolon I_h^1+I_h^2\,.
\end{align}
As a result, it suffices to estimate $I_h^1$ and $I_h^2$ separately:
\textit{ad $I_h^1$.} Using the classical Gagliardo--Nirenberg inequality \cite{Nir59}, the discrete Poincar\'e~inequality~\eqref{eq:poincare}, the DG-stability of $Q_h$ (cf.~Lemma~\ref{lem:scott_zhang_stable}), and the $L^q$-stability~property~of~$Q_h$~(cf.~Corollary~\ref{cor:scott_zhang_stable}),~we~deduce~that
\begin{align}\label{eq:gagliardo.2}
\begin{aligned}
I_h^1&\lesssim \,(\| Q_hv_h\|_{L^p(\Omega)}+\|\nabla Q_hv_h\|_{L^p(\Omega)})^{\gamma}\|Q_hv_h\|_{L^q(\Omega)}^{1-\gamma}
\\
&\lesssim \|Q_h v_h\|_{h,p}^{\gamma}\|Q_hv_h\|_{L^q(\Omega)}^{1-\gamma}
\\&\lesssim \|v_h\|_{h,p}^{\gamma}\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,.
\end{aligned}
\end{align}
\textit{ad $I_h^2$.} Using Lemma \ref{lem:scott_zhang_approx}, \cite[Ex. 12.4]{EG21} for all $K\in \mathcal{T}_{h}$ and $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$, that $h_K\leq \tilde{c}\,h_{\widetilde{K}}\leq \tilde{c}\,h_K$~for~all~$K\in \mathcal{T}_{h}$ and $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$ with $\widetilde{K}\subseteq K$ (cf. Assumption \ref{assum:mesh} (iii) 3.), that $\textup{card}(\{\widetilde{K}\in \smash{\widetilde{\mathcal{T}_{h}}}\mid \widetilde{K}\subseteq K\})\lesssim 1$ for all $K\in \mathcal{T}_{h}$ (cf. \cite[Lm. 1.40]{EP12}), Corollary \ref{cor:scott_zhang_stable}, and that
\begin{align*}
\sum_{i\in \mathbb{L}}{\vert a_i\vert^s}\leq \bigg(\sum_{i\in \mathbb{L}}{\vert a_i\vert}\bigg)^s
\end{align*}
for any finite subset $\mathbb{L}\subseteq \mathbb{N}$ and finite sequence $(a_i)_{i\in \mathbb{L}}\subseteq \mathbb{R}$,
we find that
\begin{align}\label{eq:gagliardo.3}
\begin{aligned}
(I_h^2)^s&\leq \sum_{K\in \mathcal{T}_{h}}{\Big(\|v_h-Q_hv_h\|_{L^s(K)}^{\gamma}\,\|v_h-Q_hv_h\|_{L^s(K)}^{1-\gamma}\Big)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\Bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\bigg(\sum_{\widetilde{K}\in \widetilde{\mathcal{T}_{h}};\,\widetilde{K}\subseteq K}{\|v_h-Q_hv_h\|_{L^s(\widetilde{K})}^s}\bigg)^{\frac{1-\gamma}{s}}\Bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\Bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\bigg(\sum_{\widetilde{K}\in \widetilde{\mathcal{T}_{h}};\,\widetilde{K}\subseteq K}{h_{\widetilde{K}}^{d(\frac{1}{s}-\frac{1}{q})s}\,\|v_h-Q_hv_h\|_{L^q(\widetilde{K})}^s}\bigg)^{\frac{1-\gamma}{s}}\Bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\Big(h_K^{d(\frac{1}{s}-\frac{1}{q})}\|v_h-Q_hv_h\|_{L^q(K)}\Big)^{1-\gamma}\bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\bigg(h_K^{(1+d(\frac{1}{s}-\frac{1}{p}))\gamma+d(\frac{1}{s}-\frac{1}{q})(1-\gamma)}\,\|v_h\|_{h,p,\omega_K}^{\gamma}\,\|v_h\|_{L^q(\omega_K)}^{1-\gamma}\bigg)^s}
\\&\lesssim \bigg(\sum_{K\in \mathcal{T}_{h}}{h_K^{(1+d(\frac{1}{s}-\frac{1}{p}))\gamma+d(\frac{1}{s}-\frac{1}{q})(1-\gamma)}\,\|v_h\|_{h,p,\omega_K}^{\gamma}\,\|v_h\|_{L^q(\omega_K)}^{1-\gamma}}\bigg)^s\,.
\end{aligned}
\end{align}
By the definition of $\gamma\in [0,1]$, cf. \eqref{eq:gamma}, it holds
\begin{align}\label{eq:gagliardo.4}
\begin{aligned}
(1+d(\tfrac{1}{s}-\tfrac{1}{p}))\gamma+d(\tfrac{1}{s}-\tfrac{1}{q})(1-\gamma)=0 \,.
\end{aligned}
\end{align}
Using~\eqref{eq:gagliardo.4}~in~\eqref{eq:gagliardo.3}, in particular, using that each $K\in \mathcal{T}_{h}$ appears only in finitely many $\omega_{K'}$, $K'\in \mathcal{T}_{h}$, we arrive at\enlargethispage{1mm}
\begin{align}\label{eq:gagliardo.5}
I_h^2\lesssim \|v_h\|_{h,p}^{\gamma}\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,.
\end{align}
Eventually, combining \eqref{eq:gagliardo.2} and \eqref{eq:gagliardo.5} in \eqref{eq:gagliardo.1}, we conclude the assertion.
\end{proof}
\section{Parabolic interpolation inequalities for discontinuous elements}\label{sec:parabolic_interpolation}
\qquad In this section, we derive parabolic interpolation inequalities which will be employed in Section~\ref{sec:stablity} to establish the $L^\infty(I;L^2(\Omega)^d)$-stability of discontinuous Galerkin~schemes.\vspace{-1.5mm}
\begin{lemma}[Parabolic interpolation inequality]\label{lem:parabolic_interpolation}
Let $p,q,s\in [1,\infty)$ be such that $q\leq s$, let $\gamma\in [0,1]$ be such that \eqref{eq:gamma} is satisfied and let $k\in \mathbb{N}_0$. Then,
for every $v_h\in L^\infty(I;\mathbb{P}_k(\mathcal{T}_{h}))$, it holds
\begin{align*}
\|v_h\|_{L^r(I;L^s(\Omega))}\lesssim \bigg(\int_I{\|v_h(t)\|_{h,p}^p\,\mathrm{d}t}\bigg)^{\smash{\gamma/p}}\|v_h\|_{L^\infty(I;L^q(\Omega))}^{1-\gamma}\,,
\end{align*}
where $r=\smash{\frac{s(p(q+d)-dq)}{(s-q)d}}\in (1,\infty]$.\vspace{-2.5mm}
\end{lemma}
\begin{proof}
By assumption on $p,q,s\in [1,\infty)$ and $\gamma\in [0,1]$, cf. \eqref{eq:gamma}, we can apply the discrete Gagliardo--Nirenberg-type inequality (cf. Lemma \ref{lem:gagliardo}) to find for almost every $t\in I$ that
\begin{align}
\smash{\|v_h(t)\|_{L^s(\Omega)}\lesssim \|v_h(t)\|_{h,p}^\gamma\|v_h(t)\|_{L^q(\Omega)}^{1-\gamma}\,,}\label{eq:parabolic_interpolation}
\end{align}
where $\gamma=\smash{\frac{(s-q)dp}{s(p(q+d)-dq)}}\in [0,1]$. Next, we need to distinguish the cases $s>q$ and $s=q$:
\textit{Case $s>q$.} If $s>q$, then, we have that $0<\gamma\leq 1<p$ and, consequently,~$r=\smash{\frac{p}{\gamma}}\in (1,\infty)$. Raising the inequality \eqref{eq:parabolic_interpolation} to the power $r\in (1,\infty)$, integrating with respect to $t\in I$, pulling out the $L^\infty$-norm of the second factor of the integrand and taking the $r$-th root shows the claim.
\textit{Case $s=q$.} If $s = q$, using Hölder’s inequality, the claim follows with $r = \infty$ and $\gamma = 0$.
\end{proof}
\begin{corollary}\label{cor:parabolic_interpolation}
Let $p\in [\frac{2d}{d+2},\infty)$ and $k\in \mathbb{N}_0$. Then,
for every $v_h\in L^\infty(I;\mathbb{P}_k(\mathcal{T}_{h}))$, it holds
\begin{align*}
\|v_h\|_{L^{p_*}(Q)}\lesssim \bigg(\int_I{\|v_h(t)\|_{h,p}^p\,\mathrm{d}t}\bigg)^{\smash{\gamma/p}}\|v_h\|_{L^\infty(I;L^2(\Omega))}^{1-\gamma}\,,
\end{align*}
where $\gamma=\frac{d}{d+2}$ and $p_*=p\frac{d+2}{d}$.\vspace{-5mm}
\end{corollary}
\begin{proof}
We apply Lemma \ref{lem:parabolic_interpolation} with $q \!= \!2$ and $r \!= \!s\! =\! p_*$, noting that one has admissibility by \eqref{eq:admissibility},~if~${p \!\ge\! \frac{2d}{d+2}}$. In fact, this is obvious if $p \in [d, \infty)$. For $p \in [1, d)$, it holds $s=p_* \in [2, p^*]$ if and only if $p\ge \frac{2d}{d+2}$.
\end{proof}
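As an elementary sanity check (no substitute for the proof), the following Python snippet verifies in exact rational arithmetic that, for $q=2$ and $s=p\frac{d+2}{d}$ as in Corollary \ref{cor:parabolic_interpolation}, the exponent \eqref{eq:gamma} reduces to $\gamma=\frac{d}{d+2}$ and that the scaling identity \eqref{eq:gagliardo.4} holds; the sample exponents are ours:
\begin{verbatim}
from fractions import Fraction as F

def gn_gamma(p, q, s, d):
    # the interpolation exponent from (eq:gamma)
    return (F(1) / q - F(1) / s) / (F(1) / q + F(1) / d - F(1) / p)

for d in (2, 3):
    for p in (F(3, 2), F(2), F(5, 2)):    # samples with p >= 2d/(d+2)
        q = F(2)
        s = p * (d + 2) / d               # s = p_* from the corollary
        g = gn_gamma(p, q, s, d)
        assert g == F(d, d + 2)           # gamma = d/(d+2), independently of p
        # the scaling identity (eq:gagliardo.4) behind Lemma (lem:gagliardo):
        assert (1 + d*(F(1)/s - F(1)/p))*g + d*(F(1)/s - F(1)/q)*(1 - g) == 0
\end{verbatim}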
\begin{remark}
Applying the results we have presented so far component-wise, one can obtain analogous statements for vector-valued functions. In this case, one defines the DG~norm~of~$\vec{w}_h\in W^{1,p}(\mathcal{T}_{h})^d$ as:
\begin{align*}
\smash{\|\vec{w}_h\|_{h,p}\coloneqq\Big(\|\nabla_h \vec{w}_h\|_{L^p(\Omega)}^p+\big\|h^{-\frac{1}{p'}}_\Gamma\jump{\vec{w}_h \otimes\vec{n}}\big\|_{L^p(\Gamma_h)}^p\Big)^{\smash{1/p}}}\,.
\end{align*}
\end{remark}
\begin{remark}
Consider the alternative norm for $\vec{w}_h \in W^{1,p}(\mathcal{T}_{h})^d$:
\begin{align*}
\smash{{|||}\bm{w}_h{|||}_{h,p} \coloneqq \Big( \|\tens{D}_h(\bm{w}_h)\|^p_{L^p(\Omega)}+\|h_\Gamma^{-\frac{1}{p'}}\jump{\bm{w}_h\otimes \bm{n}}\|^p_{L^p(\Gamma_h^{i})}
+\|h_\Gamma^{-\frac{1}{p'}}\bm{w}_h\cdot \bm{n}\|^p_{L^p(\Gamma_h^{\partial})}+\|(\bm{w}_h)_\tau\|^p_{L^p(\Gamma_h^{\partial})} \Big)^{\smash{1/p}}\,,}
\end{align*}
where only the normal component $\vec{w}_h \cdot \vec{n}$ is penalised on $\Gamma_h^{\partial}$; here, $(\vec{w}_h)_\tau$ denotes the tangential part of $\vec{w}_h$ on the boundary, i.e., $(\vec{w}_h)_\tau \coloneqq \vec{w}_h - (\vec{w}_h\cdot \vec{n})\vec{n}$. If one~manages~to~prove~the~existence of a quasi-interpolation operator $Q_h^{\boldsymbol{n}}\colon \mathbb{P}^k(\mathcal{T}_{h})^d\to W^{1,\infty}(\Omega)^d$ that has analogous stability and approximation properties to those described in Lemma \ref{lem:scott_zhang_stable} and Lemma \ref{lem:scott_zhang_approx}, but using the norm ${|||}\cdot{|||}_{h,p}$, then all the results presented in this work would also apply for the problem with Navier's~slip~boundary~conditions:
\begin{align*}
\begin{aligned}
\vec{u}\cdot \vec{n} &= 0 &\quad \text{on }\partial \Omega\,,\\
-(\tens{S}\vec{n})_\tau &= \gamma \vec{u}_\tau &\quad \text{on }\partial\Omega\,,
\end{aligned}
\end{align*}
where $\gamma>0$ is a parameter. Such a DG method enforces the normal condition $\vec{u}\cdot\vec{n}=0$ weakly, which has been observed to be advantageous in practice; see, e.g., \cite{GS.2022}. To the best of our knowledge, such an operator is not yet available in the literature.
\end{remark}
\section{Stability of DG schemes for non-Newtonian fluids}\label{sec:stablity}
\subsection{Continuous model and its discretisation}
\qquad Let us assume that the initial datum satisfies $\vec{u}_0 \in L^2_{\rmdiv}(\Omega)^d$ and, for simplicity, that the forcing function satisfies $\bm{f}\in C^0(I;L^{p'}(\Omega)^d)$. In the weak formulation of problem \eqref{eq:continuous_PDE}, we look for a triplet of functions
\begin{gather*}
\tens{S} \in L^{p'}(Q)^{d\times d}_{\mathop{\mathrm{sym}}\nolimits, \mathop{\mathrm{tr}}\nolimits}\,,\quad
\vec{u} \in L^{p}(I;W^{1,p}_0(\Omega)^d) \cap L^\infty(I;L^2(\Omega)^d)\,, \quad
p \in H^{-1}(I;\Lmean{p'})\,,
\end{gather*}
such that for every $\vec{v}\in C^\infty_0(\Omega)^d$, $\phi \in C^\infty_0([0,T))$, and $ q\in C^\infty_0(Q)$, it holds
\begin{subequations}\label{eq:weak_PDE}
\begin{align}
\tens{G}(\tens{S}, \tens{D}(\vec{u})) &= \bm{0} \quad\text{a.e. in }Q\,, \\
-\int_Q \vec{u} \cdot \vec{v} \partial_t \phi \,{\rm d} t{\rm d} x
-\int_\Omega \vec{u}_0 \cdot \vec{v} \phi(0) \,{\rm d} x
+ \int_Q [\tens{S}-\vec{u} \otimes \vec{u}-p\mathbb{I}_d] \fp \tens{D}(\vec{v}) \phi \,{\rm d} t{\rm d} x
&= \int_Q \bm{f}\cdot \vec{v} \phi \,{\rm d} t{\rm d} x\,,\\
-\int_Q q \rmdiv \vec{u}\,{\rm d} t{\rm d} x &= 0\,.
\end{align}
\end{subequations}
Note that the exponent $p>1$ is determined by the coercivity condition \eqref{eq:coercivity}. The existence of global weak solutions for large data (assuming $p>\frac{2d}{d+2}$) under monotonicity assumptions for $\tens{G}$ was proved in \cite{BGMS.2012} by working with the graph induced by $\tens{G}$, and later in \cite{BMM.2021} by working with the function $\tens{G}$ directly. In the non-monotone case, existence of weak solutions is not known, but numerical experiments seem to produce reasonable results \cite{JMPT.2019}.
Let us fix polynomial degrees $k_{\vec{u}}, k_{\pi}\in \mathbb{N}$ for the velocity and pressure approximations, respectively; we assume that $k_{\vec{u}}\geq 1$ and $k_{\pi} \leq k_{\vec{u}}$. The spaces corresponding to the discrete approximations~are,~then, defined as
\begin{equation*}
\mathbb{V}^h \coloneqq \mathbb{P}_{k_{\vec{u}}}(\mathcal{T}_{h})^d\,,
\qquad
\mathbb{M}^h \coloneqq \mathbb{P}_{k_{\pi}}(\mathcal{T}_{h}) \cap \Lmean{p'}\,.
\end{equation*}
The space $\mathbb{M}^h$ is equipped with the norm $\norm{\cdot}_{\Lp{p'}}$, while the velocity space $\mathbb{V}^h$~is~equipped~with~the~norm
\begin{equation}\label{eq:DG_norm2}
\norm{\cdot}_{h,p}
\coloneqq \big(
\norm{\tens{D}_h (\cdot)}_{\Lp{p}}^p
+
|\cdot|^p_{\Gamma_h,p}\big)^{1/p}\,,
\end{equation}
where the jump semi-norm for vector-valued functions $\vec{v}_h\in \mathbb{V}^h$ is defined as
\begin{equation}\label{eq:jump_semi-norm2}
|\vec{v}_h|^p_{\Gamma_h,p} \coloneqq \int_{\Gamma_h} h_\Gamma^{1-p} |\jump{\vec{v}_h\otimes \vec{n}}|^p\,{\rm d} s.
\end{equation}
It can be shown (see \cite[Eq. (1.19)]{B.2003} or \cite[Prop.\ 2.4]{KR.2022}) that for every $\vec{v}_h\in \mathbb{V}^h$, there holds
the discrete Korn-type inequality
\begin{equation}\label{eq:korn}
\begin{gathered}
\|\vec{v}_h\|_{L^p(\Omega)} + \|\nabla_h \vec{v}_h\|_{L^p(\Omega)} \lesssim \|\vec{v}_h\|_{h,p}\,.
\end{gathered}
\end{equation}
Before we present the discretised system, it will be useful to introduce the~notion~of~discrete~gradients. For $l\geq 0$, let us define a discrete gradient operator $\mathcal{G}_h^l\colon \mathbb{V}^h \to \mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})^{d\times d}$ through the relation
\begin{equation}\label{eq:discrete_gradient}
\mathcal{G}_h^l(\vec{v}_h) \coloneqq \nabla_h \vec{v}_h - R^l_h(\vec{v}_h)\quad\text{ in }\mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})^{d\times d}\,,
\end{equation}
where $R^l_h(\vec{v}_h) \in \mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$, for every $\bm{t}_h \in \mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$, is defined through
\begin{equation}\label{eq:lifting_jumps}
\int_\Omega R_h^l(\vec{v}_h) \fp \bm{t}_h\,{\rm d} x
=
\int_{\Gamma_h} \left[\!\left[ \vec{v}_h\otimes \vec{n} \right]\!\right] \fp \avg{\bm{t}_h} \,{\rm d} s\,.
\end{equation}
While the natural choice seems to be $l\!=\!k_{\vec{u}}\!-\!1\!\in\! \mathbb{N}_0$ (this will be set whenever the index~${l\!\in \!\mathbb{N}_0}$~is~omitted), the number $l\in \mathbb{N}_0$ is a parameter and can be chosen freely; for instance, if $l=0$, the implementation becomes easier as $R^l_h$ can be, then, computed through element-wise averages; on the other hand, taking $l=k_{\vec{u}}+1\in\mathbb{N}$ seems to be advantageous, in the linear case at least, in that the method does not require jump penalisation \cite{JNS.2016}. We will shortly explore yet another choice when defining the discrete convective term. Note that if $\bm{t}_h \in C_0^\infty(\Omega)^{d\times d}$, then this is precisely the distributional gradient of $\vec{v}_h$. It is possible to prove stability of the discrete gradient (see e.g.\ \cite[Prop.\ 2.1]{dPE.2010} or \cite[Lm.\ 7]{BO.2009}), i.e., that for every $\vec{v}_h \in \mathbb{V}^h$, it holds
\begin{equation}\label{eq:discrete_gradient_stability}
\|\mathcal{G}_h^l(\vec{v}_h)\|_{L^p(\Omega)} \lesssim \|\vec{v}_h\|_{h,p} \,.
\end{equation}
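For concreteness, the lifting \eqref{eq:lifting_jumps} and the discrete gradient \eqref{eq:discrete_gradient} can be assembled in a few lines in a modern finite element package. The following Python sketch uses Firedrake, assuming its standard UFL operators (jump, avg, and the facet measures dS, ds); it is an illustration of the definitions under these assumptions, not the implementation underlying this paper:
\begin{verbatim}
from firedrake import *

mesh = UnitSquareMesh(16, 16)
k_u = 2
V = VectorFunctionSpace(mesh, "DG", k_u)        # broken velocity space V^h
T = TensorFunctionSpace(mesh, "DG", k_u - 1)    # lifting space, l = k_u - 1

x, y = SpatialCoordinate(mesh)
# a genuinely discontinuous v_h, so that the lifting does not vanish
v = Function(V).interpolate(
    as_vector([sin(pi*x)*y + conditional(lt(x, 0.5), 0.0, 1.0), x*y]))
n = FacetNormal(mesh)

t, R = TestFunction(T), TrialFunction(T)
a = inner(R, t) * dx                            # mass matrix on P_l tensors
L = inner(jump(v, n), avg(t)) * dS \
    + inner(outer(v, n), t) * ds                # boundary jump/average convention
Rl = Function(T)
solve(a == L, Rl)                               # the lifting R_h^l(v_h)
Gl = Function(T).interpolate(grad(v) - Rl)      # discrete gradient G_h^l(v_h)
\end{verbatim}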
The discrete symmetric gradient $\mathcal{G}^l_{h,\mathop{\mathrm{sym}}\nolimits}\colon \mathbb{V}^h\to \mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, for every $\vec{v}_h\in \mathbb{V}^h$,
is defined through
\begin{align}
\mathcal{G}^l_{h,\mathop{\mathrm{sym}}\nolimits}(\vec{v}_h) \coloneqq \tens{D}_h(\vec{v}_h) - R^l_{h,\mathop{\mathrm{sym}}\nolimits}(\vec{v}_h)\quad\text{ in }\mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}\,,
\end{align} where now $ R_{h,\mathop{\mathrm{sym}}\nolimits}^l(\vec{v}_h)\in \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, for every $\bm{t}_h \in \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, is defined through
\begin{equation}
\int_\Omega R_{h,\mathop{\mathrm{sym}}\nolimits}^l(\vec{v}_h) \fp \bm{t}_h \,{\rm d} x
=
\int_{\Gamma_h} \left[\!\left[ \vec{v}_h\otimes \vec{n} \right]\!\right] \fp \avg{\bm{t}_h}\,{\rm d} s\,.
\end{equation}
Similarly, one can define a discrete divergence operator $\mathcal{D}_h^l\colon \mathbb{V}^h\to \mathbb{P}_{\max\{k_{\bm{u}}-1,l\}}(\mathcal{T}_{h})$
by taking the trace, i.e.,
for every $\vec{v}_h\in \mathbb{V}^h$, we define
\begin{align}
\mathcal{D}_h^l(\vec{v}_h) \coloneqq \mathop{\mathrm{tr}}\nolimits(\mathcal{G}_h^l(\vec{v}_h)) = \rmdiv_h(\vec{v}_h) - \mathop{\mathrm{tr}}\nolimits(R^l_h(\vec{v}_h))\quad\text{ in }\mathbb{P}_{\max\{k_{\bm{u}}-1,l\}}(\mathcal{T}_{h})\,.
\end{align}
The trace of $R^l_h(\vec{v}_h)\!\in \!\mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$ for $\vec{v}_h\!\in\! \mathbb{V}^h$ can be computed from \eqref{eq:lifting_jumps} by~taking~${\bm{t}_h\! =\! q_h \mathbb{I}_d\!\in\! \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}}$, where $q_h \in \mathbb{P}_l(\mathcal{T}_{h})$ is arbitrary and $\mathbb{I}_d\in \mathbb{R}^{d\times d}$ is the identity matrix. In particular, for every $q_h \in \mathbb{P}_l(\mathcal{T}_{h})$, we can write
\begin{equation}\label{eq:discrete_divergence}
\int_\Omega q_h \mathcal{D}_h^l(\vec{v}_h)\,{\rm d} x
=
\int_\Omega q_h \rmdiv_h\vec{v}_h\,{\rm d} x
- \int_{\Gamma_h} \jump{\vec{v}_h \cdot \vec{n}} \avg{q_h}\,{\rm d} s\,.
\end{equation}
Whenever the index $l\in \mathbb{N}_0$ is omitted, it is meant that $l= k_{\pi}$, in which case \eqref{eq:discrete_divergence} holds for all $q_h\in \mathbb{M}^h$.
Regarding the convective term, we wish to preserve the following skew-symmetry~property~that~is valid at the continuous level: for every $\vec{u},\vec{v},\vec{w}\in C^\infty_0(\Omega)^d$, where $\rmdiv\vec{u} = 0$ in $\Omega$, it holds
\begin{equation}
\int_\Omega (\vec{v}\otimes \vec{u})\fp \nabla \vec{w} \,{\rm d} x
=
-\int_\Omega (\vec{w}\otimes \vec{u})\fp \nabla \vec{v}\,{\rm d} x\,.
\end{equation}
In the case when discretely divergence-free functions are also point-wise divergence-free (as is, e.g., the case when $\mathbb{V}^h$ is $H(\rmdiv;\Omega)$-conforming and $\mathbb{M}^h = \rmdiv\mathbb{V}^h$),~for~every~${\vec{u}_h,\vec{v}_h,\vec{w}_h\in \mathbb{V}^h}$, we simply define
\begin{align}
\begin{aligned}
\hat{\mathcal{C}}_h[\vec{u}_h,\vec{v}_h,\vec{w}_h] &\coloneqq -\int_\Omega (\vec{v}_h\otimes \vec{u}_h)\fp \mathcal{G}_h^{2k_{\vec{u}}}(\vec{w}_h)\,{\rm d} x
\\&= -\int_\Omega (\vec{v}_h \otimes \vec{u}_h) \fp \nabla_h\vec{w}_h \,{\rm d} x
+ \int_{\Gamma_h} \avg{\vec{v}_h \otimes \vec{u}_h}\fp \jump{\vec{w}_h \otimes \vec{n}}\,{\rm d} s\,.
\end{aligned}
\end{align}
The parameter $2k_{\vec{u}}\in \mathbb{N}$ in the discrete gradient could be chosen differently, but with this choice one has the second equality, which is straightforward to implement in modern software packages. In general,~we,~then, define the skew-symmetric convective term as
\begin{equation}\label{eq:convective_term}
\mathcal{C}_h[\vec{u}_h, \vec{v}_h, \vec{w}_h] \coloneqq
\frac{1}{2}\Big[ \hat{\mathcal{C}}_h[\vec{u}_h, \vec{v}_h, \vec{w}_h]
- \hat{\mathcal{C}}_h[\vec{u}_h, \vec{w}_h, \vec{v}_h]
\Big].
\end{equation}
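The point of \eqref{eq:convective_term} is that $\mathcal{C}_h[\vec{u}_h,\vec{v}_h,\vec{v}_h]=0$ holds identically, by construction, irrespective of any discrete divergence constraint; this is precisely what is used in the a priori estimate of Lemma \ref{lem:apriori} below. The mechanism is elementary, as the following Python fragment (with a randomly generated stand-in for $\hat{\mathcal{C}}_h$; ours, purely illustrative) shows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((5, 5, 5))      # a generic discrete trilinear form

def Chat(u, v, w):
    return np.einsum('ijk,i,j,k->', N, u, v, w)

def C(u, v, w):                         # skew-symmetrisation: C[u,v,w] = -C[u,w,v]
    return 0.5 * (Chat(u, v, w) - Chat(u, w, v))

u, v = rng.standard_normal(5), rng.standard_normal(5)
assert C(u, v, v) == 0.0                # energy neutrality, by construction
\end{verbatim}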
Let us now turn our attention towards the time discretisation: we proceed similarly as in \cite{MN.2006,EG21c}. Let $\{\mathcal{I}_{\tau}\}_{\tau>0}$ be a family of partitions of the closed time interval $[0,T]$ of the form $ \{I_j\}_{j=1}^{N_\tau}= \{(t_{j-1}, t_j]\}_{j=1}^{N_\tau}$, for some $N_\tau \in \mathbb{N}$, associated to a (maximal) time step $\tau \coloneqq \max_{j\in\{1,\ldots ,N_\tau\}} (t_j - t_{j-1})$. We will assume that the family of time partitions is quasi-uniform in the sense that there is a number $\theta \in (0,1]$ (independent of $\tau>0$) such that
\begin{equation}\label{eq:time_quasiuniform}
\theta \tau \leq \min_{j\in\{1,\ldots, N_\tau\}}(t_j - t_{j-1})\,.
\end{equation}
We will denote the local space-time cylinders as $Q_j \coloneqq I_j \times \Omega$ for all $j=1,\ldots, N_\tau$. Then, for a given Banach space $X$ and $k\in \mathbb{N}_0$, we define the space of broken (in time) polynomials of global degree $k$ with values in $X$ as
\begin{equation}
\mathbb{P}_k(\mathcal{I}_{\tau};X) \coloneqq \Big\{v\colon [0,T]\to X \mid v|_{I_j}\in \mathbb{P}_k(I_j;X) \text{ for all } j=1,\ldots ,N_\tau \Big\}\,.
\end{equation}
Note that the functions in $\mathbb{P}_k(\mathcal{I}_{\tau};X)$ are defined at $t=0$ and are left-continuous,~in~particular,~implying that $v_\tau(t_j)= v_\tau(t_j^-)\coloneqq \lim_{\smash{s\to t_j^-}} v_\tau(s)$ at the partition points. For a given function $v_\tau\in \mathbb{P}_k(\mathcal{I}_{\tau};X)$, we define the jump at $t_{j-1}$ for every $j\in\{1,\ldots, N_\tau\}$ as
\begin{align}
\begin{aligned}
\jump{v_\tau}_{j-1} &\coloneqq v_\tau(t_{j-1}^+) - v_\tau(t_{j-1})\,,\\
v_\tau(t_{j-1}^+) &\coloneqq \lim_{s\to t_{j-1}^+} v_\tau(s)\,.
\end{aligned}
\end{align}
Fix a polynomial degree $k_t\in \mathbb{N}$ for the time approximation; in the discrete formulation, we will look for a velocity and pressure in the spaces
\begin{equation}
\vec{u}_{h,\tau} \in \mathbb{V}^{h,\tau} \coloneqq \mathbb{P}_{k_t}(\mathcal{I}_{\tau}; \mathbb{V}^h)\,,
\qquad
p_{h,\tau} \in \mathbb{M}^{h,\tau} \coloneqq \mathbb{P}_{k_t}(\mathcal{I}_{\tau}; \mathbb{M}^h)\,.
\end{equation}
Now, let $\{\xi_l\}_{l=1}^{k_t+1}$ and $\{\omega_l\}_{l=1}^{k_t +1}$ be the (right-sided) points and weights, respectively, corresponding to the Gauss--Radau quadrature of degree $2k_t\in \mathbb{N}$ on the reference interval $\hat{I}\coloneqq (-1,1]$. By applying the transformations $\xi \mapsto \frac{1}{2}(t_{j} + t_{j-1}) + \frac{\xi}{2}(t_j - t_{j-1})$, $\omega \mapsto \frac{\omega}{2}(t_j - t_{j-1})$, one can, then, obtain a quadrature $\{(\xi^j_l,\omega^j_l)\}_{l=1}^{k_t+1}$ on the interval $I_j$ for all $j\!\in\! \{1,\ldots, N_\tau\}$. This can be used to define the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$, for every $f\in C^0(\overline{I})$, as
\begin{equation}\label{eq:discrete_time_measure}
\int_0^T f(t)\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
\coloneqq \sum_{j=1}^{N_\tau} \int_{I_j} f(t)\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
\coloneqq \sum_{j=1}^{N_\tau} \sum_{l=1}^{k_t +1} \omega^j_{l} f(\xi^j_l)\,.
\end{equation}
Here, note the abuse of notation in that we employ the same symbol $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ for the integral on all the subintervals $I_j$, $j=1,\ldots, N_\tau$.
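For illustration, the reference rule and the affine transformation above can be realised in a few lines of Python (a sketch of ours, using the classical characterisation of the left-sided Gauss--Radau nodes as the zeros of $P_{k_t}+P_{k_t+1}$, reflected so as to fix the node at $+1$, together with interpolatory weights; the function names are ad hoc):
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def gauss_radau_right(n):
    # n = k_t + 1 nodes on (-1,1], fixed node at +1, exact up to degree 2n - 2
    c = np.zeros(n + 1); c[n - 1] = 1.0; c[n] = 1.0
    nodes = np.sort(-legendre.legroots(c))        # reflect the left-sided nodes
    V = np.vander(nodes, n, increasing=True).T    # interpolatory weights from
    moments = np.array([2.0/(k + 1) if k % 2 == 0 else 0.0 for k in range(n)])
    return nodes, np.linalg.solve(V, moments)     # exactness on x^k, k < n

def map_to_interval(x, w, t0, t1):
    # the affine map from the text: (-1, 1] -> (t_{j-1}, t_j]
    return 0.5*(t0 + t1) + 0.5*(t1 - t0)*x, 0.5*(t1 - t0)*w

x, w = gauss_radau_right(3)                       # k_t = 2
assert abs(np.dot(w, x**4 + x) - 2.0/5.0) < 1e-12 # degree 2*k_t, exact
\end{verbatim}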
We are, eventually, able to introduce the discretisation of \eqref{eq:weak_PDE}. In the discrete formulation, we look for $(\vec{u}_{h,\tau},p_{h,\tau})^\top\in \mathbb{V}^{h,\tau} \times \mathbb{M}^{h,\tau}$ such that for every $(\vec{v}_{h,\tau},q_{h,\tau})^\top\in\mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$, it holds
\begin{subequations}\label{eq:discrete_PDE}
\begin{gather}
\int_Q q_{h,\tau} \mathcal{D}_h (\vec{u}_{h,\tau})\,{\rm d} t{\rm d} x + \int_I S^{\pi}_h(p_{h,\tau}; q_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) = 0\,,
\label{eq:discrete_mass}\\
\sum_{j=1}^{N_\tau}\left[
\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau} \,{\rm d} t{\rm d} x
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1}\cdot \vec{v}_{h,\tau}(t^+_{j-1})\,{\rm d} x
+ \int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \vec{v}_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \right.\notag \\
\left.
+ \int_{I_j} \mathcal{C}_h[\vec{u}_{h,\tau},\vec{u}_{h,\tau},\vec{v}_{h,\tau}]\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
- \int_{Q_j} p_{h,\tau}\mathcal{D}_h(\vec{v}_{h,\tau}) \,{\rm d} t{\rm d} x
\right]
=
\int_Q \bm{f}\cdot \vec{v}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x\,.
\label{eq:discrete_momentum}
\end{gather}
Here, the initial condition is set as the $L^2$-orthogonal projection into the corresponding discrete space, i.e., $\vec{u}_{h,\tau}(0)\coloneqq\Pi_{\mathbb{V}^h}\vec{u}_0\in \mathbb{V}^h$. The pressure stabilisation term above, for every $p_h,q_h \in \mathbb{M}^h$, is defined as
\begin{equation}\label{eq:pressure_stabilisation}
S^{\pi}_h(p_{h}, q_{h}) \coloneqq \int_{\Gamma_h^{i}} h_{\Gamma}^{p'-1} |\jump{p_{h}\vec{n} }|^{p'-2}\jump{p_h \vec{n}} \cdot \jump{q_h\vec{n}}\, {\rm d} s\,.
\end{equation}
For some $l\in \mathbb{N}$, the discretisation of the viscous term, for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$, is defined as
\begin{equation}\label{eq:viscous_term}
\mathcal{A}_h(\vec{v}_h; \vec{w}_h) \coloneqq \int_\Omega \hat{\tens{T}}_h \fp \mathcal{G}_h^l(\vec{w}_h)\,{\rm d} x
+ S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h)\,,
\end{equation}
where $\hat{\tens{T}}_h\colon \Omega \to \mathbb{R}^{d \times d}_{\sym}$ is such that
\begin{equation}\label{eq:discrete_implicit_relation}
\tens{G}(\hat{\tens{T}}_h, \hat{\mathcal{G}}_h(\vec{v}_{h})) = \bm{0}
\qquad
\text{in }\Omega\,,
\end{equation}
\end{subequations}
where $\hat{\mathcal{G}}_h \in \{\nabla_h, \mathcal{G}_h^l\}$. The velocity stabilisation, for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$, is defined as
\begin{equation}\label{eq:velocity_stabilisation}
S^{\vec{u}}_h(\vec{v}_{h}, \vec{w}_{h}) \coloneqq \alpha \int_{\Gamma_h^{i}} h_{\Gamma}^{1-p} |\jump{\vec{v}_{h}\otimes \vec{n} }|^{p-2}\jump{\vec{v}_h\otimes \vec{n}} \fp \jump{\vec{w}_h \otimes \vec{n}}\, {\rm d} s\,,
\end{equation}
where $\alpha>0$ is a stabilisation parameter. This choice ensures, thanks to the coercivity condition \eqref{eq:coercivity}, that the discretisation of the viscous term is coercive (in general, for large enough $\alpha>0$), i.e., for every $\vec{v}_h \in \mathbb{V}^h$, it holds
\begin{equation}\label{eq:discrete_coercivity_A}
\norm{\smash{\hat{\tens{T}}_h}}^{p'}_{\Lp{p'}} +
\|\vec{v}_h\|^p_{h,p} \lesssim \mathcal{A}_h(\vec{v}_h;\vec{v}_h)\,.
\end{equation}
Since the discretised system \eqref{eq:discrete_PDE} makes use of discontinuous polynomials in time, the method can be localised; in practice, the problem is solved on the interval $I_j$ using the information from the (already computed) solution on the previous interval $I_{j-1}$. A few additional remarks are in order:
\noindent
\textbf{Computing the constitutive relation.} In practice, it is not strictly necessary to compute the function $\smash{\hat{\tens{S}}_{h,\tau}} \colon Q\to \mathbb{R}^{d \times d}_{\sym}$ corresponding to $\vec{u}_{h,\tau}\in \mathbb{V}^{h,\tau}$ from \eqref{eq:discrete_implicit_relation}. In fact, with modern software tools it is possible to work out the dependence of $\smash{\hat{\tens{S}}_{h,\tau}}$ on $\vec{u}_{h,\tau}$ without~having~to~compute~it~explicitly~(see,~e.g.,~\cite{BH.2021}). For explicit constitutive relations of the type $\tens{S} = \tens{\mathcal{S}}(\tens{D}(\vec{u}))$, such as \eqref{eq:power_lawA}, this is of course not needed, since one can, then, write for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$
\begin{equation}\label{eq:viscous_term_explicit}
\mathcal{A}_h(\vec{v}_h; \vec{w}_h) \coloneqq \int_\Omega \tens{\mathcal{S}}(\hat{\mathcal{G}}_h(\vec{v}_h)) \fp \mathcal{G}_h^l(\vec{w}_h)\,{\rm d} x
+ S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h)\,.
\end{equation}
Alternatively, in case a discrete stress is a quantity of interest (or for explicit relations of the type $\tens{D}(\vec{u}) = \tens{\mathcal{D}}(\tens{S})$ such as \eqref{eq:non_monotone}), one can instead employ a 3-field formulation for the variables $(\tens{S}_{h,\tau},\vec{u}_{h,\tau},p_{h,\tau})^\top$ in the spirit of \cite{FGS.2020}; the results of this work will still hold in that case.
\noindent
\textbf{Various DG methods.} We presented two choices for a discrete gradient in the constitutive relation \eqref{eq:discrete_implicit_relation}. The choice $\smash{\hat{\mathcal{G}}_h}= \mathcal{G}_h^l$, e.g., would lead to a method of Local Discontinuous Galerkin~(LDG)~type. On the other hand, choosing $\smash{\hat{\mathcal{G}}_h} = \nabla_h$ leads to an Incomplete Interior Penalty (IIDG) method, which can be advantageous for non-linear problems of the type considered here, since one would not need to explicitly compute the lifting terms $R^l_h(\vec{u}_{h,\tau}),R^l_h(\vec{v}_{h,\tau})$ in the implementation, thanks to the fact that the full discrete gradient $\mathcal{G}_h^l$ would appear on the test function exclusively (and, therefore, linearly), and so the definition \eqref{eq:lifting_jumps} can be applied directly. Regarding the stabilisation term, one could consider instead
\begin{equation}
\hat{S}^{\vec{u}}_h(\vec{v}_h; \vec{w}_h) \coloneqq S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h) - \int_\Omega |R^l_h(\vec{v}_h)|^{p-2}R^l_h(\vec{v}_h) \fp R^l_h(\vec{w}_h)\,{\rm d} x
\quad\text{ for all }
\vec{v}_h,\vec{w}_h\in \mathbb{V}^h\,,
\end{equation}
which leads to Symmetric Interior Penalty (SIP) methods (cf.\ \cite{MRET.2018}), in the sense that it reduces to the traditional SIP method in the Newtonian case.
\noindent
\textbf{Gauss--Radau Quadrature.} The discrete time measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ should, in principle, appear in all the time integrals in \eqref{eq:discrete_momentum}; this implies, following the reasoning from \cite{MN.2006,EG21c}, that the method presented here is equivalent to a Radau IIA Runge--Kutta method, which can be readily implemented with many existing software libraries. Note that since the quadrature is exact up to degree $2k_t$, we could omit it from several terms, such as $\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x=\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau}\, {\rm d} t{\rm d} x$.
\noindent
\textbf{Divergence constraint and pressure stabilisation.} The motivation behind the pressure stabilisation $S^{\pi}_h$ is the validity of the following inf-sup condition
\begin{equation}\label{eq:infsup}
\norm{q_h}_{\Lp{p'}}
\lesssim
\sup_{\vec{w}_h \in \mathbb{V}^h}\frac{\int_\Omega q_h \mathcal{D}_h(\vec{w}_h)\,{\rm d} x}{\norm{\vec{w}_h}_{h,p}}
+ S^{\pi}_h(q_h;q_h)^{\frac{1}{p'}}
\qquad
\text{ for all }\, q_h\in \mathbb{M}^h\,,
\end{equation}
whose proof can be found in Appendix \ref{appendix:infsup}. In certain cases, this stabilisation~term~can~be~avoided,~e.g., when matching meshes are used and the pressure is looked for in a continuous subspace (see e.g.\ \cite{KR.2022}). Naturally, also for divergence-conforming elements (i.e., when $\mathbb{V}^h \subset H(\rmdiv;\Omega)$ and $\mathbb{M}^h = \rmdiv\mathbb{V}^h$), the stabilisation term is not needed and the divergence constraint \eqref{eq:discrete_mass} simply becomes
\begin{equation}
\int_Q q_{h,\tau} \rmdiv\vec{u}_{h,\tau}\,{\rm d} t{\rm d} x = 0
\qquad
\text{ for all }\, q_{h,\tau}\in \mathbb{M}^{h,\tau}\,.
\end{equation}
\begin{remark}[Method without quadrature]
Sometimes the $\mathrm{DG}(k_t)$ time discretisation method is defined with the usual time integration instead of the Gauss--Radau quadrature $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$. In this case, however, the equivalence with a Runge--Kutta method is, in general, lost; that said, the method also has certain nice properties, such as not requiring the forcing function $\bm{f}$ to be continuous. All the results in this work also apply to the method without quadrature, with slightly simplified proofs.
\end{remark}
\subsection{A priori estimates and $L^\infty(I;L^2(\Omega)^d)$-stability}
\hspace{5mm} We will proceed to derive energy estimates for the discrete problem \eqref{eq:discrete_PDE}.
\begin{lemma}[A priori estimates]\label{lem:apriori}
Suppose that $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau} \times \mathbb{M}^{h,\tau}$ is a solution of problem \eqref{eq:discrete_PDE}, and let $\hat{\tens{S}}_{h,\tau}:Q\to \mathbb{R}^{d \times d}_{\sym}$ be a function associated to $\vec{u}_{h,\tau}\in \mathbb{V}^{h,\tau}$ in \eqref{eq:discrete_implicit_relation}. Then, assuming the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:apriori}
\begin{split}
\max_{j\in \{1,\ldots, N_\tau\}} \|\smash{\vec{u}_{h,\tau}(t_j)}\|^2_{\Lp{2}}
&+
\sum_{j=1}^{N_\tau} \|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_I S^{\pi}_h(p_{h,\tau}(t); p_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad+
\int_I \|\hat{\tens{S}}_{h,\tau}(t)\|^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
+
\int_I \|\vec{u}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\leq c\,.
\end{split}
\end{equation}
For $p=2$, the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ can be replaced by the standard measure ${\rm d} t$; this is also true for general $p>1$ for the DG method without quadrature.
\end{lemma}
\begin{proof}
Testing the equations \eqref{eq:discrete_mass} and \eqref{eq:discrete_momentum} on the interval $I_j$ with $p_{h,\tau}$ and $\vec{u}_{h,\tau}$, respectively, adding the resulting equations, and recalling the skew-symmetry property of $\mathcal{C}_h$, we find, for every $j=1,\dots,N_\tau$, that
\begin{align*}
\frac{1}{2}\int_{I_j} \frac{{\rm d}}{{\rm d} t}\norm{\smash{\vec{u}_{h,\tau}}}^2_{\Lp{2}}\, {\rm d} t
&+
\int_\Omega (\vec{u}_{h,\tau}(t^+_{j-1})- \vec{u}_{h,\tau}(t_{j-1}))\cdot \vec{u}_{h,\tau}(t^+_{j-1})\, {\rm d} x
+
\int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \vec{u}_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad+
\int_{I_j} S^{\pi}_h(p_{h,\tau}, p_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
=
\int_{Q_j} \bm{f} \cdot \vec{u}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)\,{\rm d} x\,.
\end{align*}
Let us assume that the jump penalisation parameter $\alpha>0$ is large enough, so that the coercivity property \eqref{eq:discrete_coercivity_A} is satisfied. Then, using the fact that $2a(a-b)= a^2 - b^2 + (a-b)^2$ for all $a,b\in \mathbb{R}$, together with Hölder’s inequality yields for all $j=1,\dots,N_\tau$
\begin{align*}
&\frac{1}{2}\norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{\Lp{2}}
- \frac{1}{2}\norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{\Lp{2}}
+ \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_{I_j} \norm{\smash{\hat{\tens{S}}_{h,\tau}(t)}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad
+ \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}}}^p_{h,p} \mu^{\mathrm{GR}}_{k_t+1}(\d t)
+ \int_{I_j} S^{\pi}_h (p_{h,\tau}(t); p_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\lesssim
\bigg(\int_{I_j} \norm{\bm{f}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg( \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}}}^p_{\Lp{p}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}.
\end{align*}
Applying Young's inequality on the right-hand side, using the discrete Korn-type inequality \eqref{eq:korn}, and summing over $j\in \{1,\ldots, i\}$, for every $i\in\{1, \ldots, N_\tau\}$, we arrive at
\begin{align*}
&\|\vec{u}_{h,\tau}(t_i)\|^2_{\Lp{2}}
+ \sum_{j=1}^i \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_0^{t_i} S^{\pi}_h (p_{h,\tau}; p_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad +
\int_0^{t_i} \norm{\smash{\hat{\tens{S}}_{h,\tau}}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
+
\int_0^{t_i} \norm{\smash{\vec{u}_{h,\tau}}}^p_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\lesssim
\|\vec{u}_0\|^2_{\Lp{2}}
+
\norm{\bm{f}}_{C^0(I;\Lp{p'})}^{p'}.
\end{align*}
Here, we made use of the stability of the $L^2$-projection $\norm{\smash{\vec{u}_{h,\tau}(0)}}_{\Lp{2}} \leq \norm{\vec{u}_0}_{\Lp{2}}$. Taking the maximum over $i\in \{1,\ldots, N_\tau\}$ concludes the proof.
\end{proof}
For the lowest-order time discretisation $\mathrm{DG}(0)$, the discrete velocity is piece-wise constant~in~time,~and so the a priori estimate \eqref{eq:apriori} above immediately yields (for arbitrary $p>1$)
\begin{equation*}
\norm{\smash{\vec{u}_{h,\tau}}}_{L^\infty(I;L^2(\Omega)^d)}
=
\max_{j\in \{1,\ldots, N_\tau\}} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}_{\Lp{2}}
\leq
c\,.
\end{equation*}
The rest of the paper is devoted to proving that this is also the case for general polynomial~degree~$k_t\geq 1$, assuming that $p\geq \frac{3d+2}{d+2}$. In order to do this, we will employ the exponential time interpolant from \cite{CW.2010b}. Fix a parameter $\lambda>0$; for every $j\in\{1,\ldots,N_\tau\}$, we define the linear mapping $\overline{(\cdot)}\coloneqq (r\mapsto \overline{r})\colon \mathbb{P}_{k_t}(I_j) \to \mathbb{P}_{k_t}(I_j)$ on polynomials on $I_j$, for every $r\in \mathbb{P}_{k_t}(I_j)$, through
\begin{subequations}\label{eq:exponential_interpolant_properties}
\begin{align}
\overline{r}(t_{j-1}^+) &= r(t_{j-1}^+)\,, \\
\int_{I_j} \overline{r}(t) q(t) \,{\rm d} t
&=
\int_{I_j} r(t) q(t) e^{-\lambda(t-t_{j-1})} \,{\rm d} t
\qquad
\text{ for all } q\in \mathbb{P}_{k_t -1}(I_j)\,.\end{align}
\end{subequations}
Note that in the expression above one could use the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ as~well,~since~the~Gauss--Radau quadrature integrates exactly up to degree $2k_t$. Then,~${\overline{(\cdot)}\!\coloneqq \!(\vec{v}_{h,\tau}\!\mapsto\! \overline{\vec{v}}_{h,\tau})\colon \!\mathbb{P}_{k_t}(I_j;\mathbb{V}^h) \!\to \!\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)}$, for every $\vec{v}_{h,\tau}\in\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$, can be defined through
\begin{equation}\label{eq:exponential_interpolant}
\vec{v}_{h,\tau} =
\sum_{i=0}^k r_i(t) \vec{v}_h^i \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)
\mapsto
\overline{\vec{v}}_{h,\tau}=\sum_{i=0}^k \overline{r_i}(t) \vec{v}_h^i \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)\,.
\end{equation}
One can extend this definition for functions in $\mathbb{V}^{h,\tau}$ in the obvious way. From \cite[Lm.\ 3.6]{CW.2010b} we~know~that if $\norm{\cdot}_{\star}$ is a (semi-)norm on $\mathbb{V}^h$ arising from a (semi-)inner product, then \eqref{eq:exponential_interpolant}~is~\mbox{$L^s(I_j;\mathbb{V}^h)$-stable}, i.e.,
\begin{subequations}
\begin{align}
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_\star {\rm d} t \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_\star {\rm d} t \bigg)^{\smash{1/s}}
&& \quad\text{ for all } \vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j;\mathbb{V}^h)\,,\; s\in[1,\infty)\,, \label{eq:exp_stability_p_star}\\
\max_{t\in I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|_\star
&\lesssim
\max_{t\in I_j} \|\vec{v}_{h,\tau}(t)\|_\star
&& \quad\text{ for all } \vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)\,. \label{eq:exp_stability_infty_star}
\end{align}
In fact, as stated in the next lemma, the above also holds with the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$~and/or with $\norm{\cdot}_{\star}= \|\cdot\|_{h,s}$ for $s \in (1,\infty)$, which, in general, does not arise from an inner product; a proof of this fact can be found in Appendix \ref{appendix:stability}. \enlargethispage{1mm}
\begin{lemma}\label{lem:exp_stability}
Let $s\in (1,\infty)$ and let $\norm{\cdot}_\star$ be a (semi-)norm on $\mathbb{V}^h$ arising from a (semi-)inner product. Then, the exponential interpolant \eqref{eq:exponential_interpolant}, for every $\vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$ and $j\in \{1,\ldots, N_\tau\}$, satisfies
\begin{align}
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_\star\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_\star\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}\,, \label{eq:exp_stability_GR_p_star}\\
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_{h,s}\, {\rm d} t \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_{h,s} \, {\rm d} t \bigg)^{\smash{1/s}}\,, \label{eq:exp_stability_p}\\
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_{h,s} \, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_{h,s}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}\,. \label{eq:exp_stability_GR_p}
\end{align}
\end{lemma}
\end{subequations}
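To make the definition \eqref{eq:exponential_interpolant_properties} concrete, the following Python sketch (using numpy and scipy; a minimal illustration under the stated definitions, with an arbitrarily chosen example, and not an implementation of the full scheme) computes the exponential interpolant of a polynomial on a reference interval $I_j=(0,\tau)$ by solving the $(k_t+1)\times(k_t+1)$ linear system consisting of the endpoint condition and the weighted moment conditions:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Exponential interpolant on I_j = (0, tau): given r in P_k(I_j), find
# rbar in P_k(I_j) with rbar(0) = r(0) and
#   int rbar q dt = int r q exp(-lam t) dt   for all q in P_{k-1}(I_j).
def exp_interpolant(r_coeffs, k, tau, lam):
    # r_coeffs: coefficients of r in the monomial basis 1, t, ..., t^k
    A = np.zeros((k + 1, k + 1))
    rhs = np.zeros(k + 1)
    A[0, 0] = 1.0                        # endpoint condition rbar(0) = r(0)
    rhs[0] = r_coeffs[0]
    r = lambda t: sum(c * t ** m for m, c in enumerate(r_coeffs))
    for i in range(k):                   # moment conditions against q = t^i
        for m in range(k + 1):
            A[i + 1, m] = tau ** (i + m + 1) / (i + m + 1)
        rhs[i + 1], _ = quad(lambda t: r(t) * t ** i * np.exp(-lam * t),
                             0.0, tau)
    return np.linalg.solve(A, rhs)       # coefficients of rbar

# Example: r(t) = 1 + 2t on (0,1) with lam = 4; rbar(0) = 1 by construction.
print(exp_interpolant([1.0, 2.0], k=1, tau=1.0, lam=4.0))
\end{verbatim}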
We are now in a position to prove the main result of this paper.
\begin{theorem}\label{thm:stability}
Suppose that $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$ is a solution of problem \eqref{eq:discrete_PDE}. Moreover, assume that $p\geq \frac{3d+2}{d+2}$ if $k_t >0$ and $p>1$ if $k_t=0$. Then, assuming that the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:Linfty_stability}
\|\vec{u}_{h,\tau}\|_{L^\infty(I;L^2(\Omega)^d)} \leq c\,.
\end{equation}
\end{theorem}
\begin{proof}
For $k_t \!=\! 0$, the result is a direct consequence of Lemma \ref{lem:apriori}, so we will only consider~the~case~${k_t\! >\!0}$.
Fix an arbitrary $j\in \{1,\ldots, N_\tau\}$; we will prove the claim on $L^{\infty}(I_j; L^2(\Omega)^d)$, from which the result \eqref{eq:Linfty_stability} trivially follows. Denote the exponential interpolant of $\vec{u}_{h,\tau}$ on $\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$ by $\overline{\vec{u}}_{h,\tau}$. Using \eqref{eq:exponential_interpolant_properties}, we can examine what happens to the time derivative if we test the momentum balance \eqref{eq:discrete_momentum} with $\overline{\vec{u}}_{h,\tau}$:
\begin{gather*}
\int_{Q_j} \partial_t \vec{u}_{h,\tau} \cdot \overline{\vec{u}}_{h,\tau} \,{\rm d} t{\rm d} x
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1} \cdot \overline{\vec{u}}_{h,\tau}(t^+_{j-1})\,{\rm d} x
= \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{L^2(\Omega)} e^{-\lambda(t_j-t_{j-1})}
- \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1}^+)}}^2_{L^2(\Omega)}\\
+ \frac{\lambda}{2} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} e^{-\lambda(t-t_{j-1})} \,{\rm d} t
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1} \cdot \vec{u}_{h,\tau}(t^+_{j-1})\,{\rm d} x
= \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{L^2(\Omega)} e^{-\lambda(t_j-t_{j-1})} \\
+ \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{L^2(\Omega)}
- \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{L^2(\Omega)}
+ \frac{\lambda}{2} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} e^{-\lambda(t-t_{j-1})} \,{\rm d} t,
\end{gather*}
where we used the defining properties \eqref{eq:exponential_interpolant_properties} of the interpolant, integration by parts in the first term, and the algebraic identity $2a(a-b)= a^2 - b^2 + (a-b)^2$. Noting that the function $t\mapsto e^{-\lambda(t-t_{j-1})}$ is decreasing and dropping positive terms, we find that testing \eqref{eq:discrete_momentum} and \eqref{eq:discrete_mass} with $(\overline{\vec{u}}_{h,\tau}, p_{h,\tau})$ yields:
\begin{align*}
& \frac{\lambda}{2}e^{-\lambda(t_j-t_{j-1})} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} \,{\rm d} t
+ \int_{I_j} S^{\pi}_h(p_{h,\tau}; p_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\leq
\frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{L^2(\Omega)}
+ \int_{Q_j} \bm{f}\cdot \overline{\vec{u}}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) {\rm d} x
- \int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \overline{\vec{u}}_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad- \int_{I_j} \mathcal{C}_h[\vec{u}_{h,\tau}, \vec{u}_{h,\tau}; \overline{\vec{u}}_{h,\tau}] \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\eqqcolon
\mathfrak{I}_1
+ \mathfrak{I}_2
+ \mathfrak{I}_3
+ \mathfrak{I}_4\,.
\end{align*}
The first term $\mathfrak{I}_1$ is uniformly bounded, thanks to the a priori estimate \eqref{eq:apriori}. For the second term $\mathfrak{I}_2$, we apply Hölder’s inequality and the stability estimate \eqref{eq:exp_stability_GR_p}:
\begin{align*}
|\mathfrak{I}_2| &\leq
\bigg(\int_{I_j} \norm{\bm{f}(t)}^{p'}_{L^{p'}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|^{p}_{L^{p}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\lesssim
\|\bm{f}\|_{C^0(I_j;L^{p'}(\Omega)^d)}
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}
\leq c\,.
\end{align*}
Similarly, for the viscous term:
\begin{align*}
|\mathfrak{I}_3|
&\leq
\bigg|\int_{Q_j} \hat{\tens{S}}_{h,\tau}\fp \mathcal{G}_h^l(\overline{\vec{u}}_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x\bigg|
+ \bigg|\int_{I_j} S^{\vec{u}}_h(\vec{u}_{h,\tau}(t); \overline{\vec{u}}_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)\bigg| \\
&\lesssim \bigg(\int_{I_j} \|\hat{\tens{S}}_{h,\tau}(t)\|^{p'}_{L^{p'}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad + \bigg(\int_{I_j} |\vec{u}_{h,\tau}(t)|^p_{\Gamma_h,p}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} |\overline{\vec{u}}_{h,\tau}(t)|^p_{\Gamma_h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}
\leq c\,.
\end{align*}
To handle $\mathfrak{I}_4$, we first note that $p\geq \frac{3d+2}{d+2}$ is equivalent to $2p' \leq p_* = p\frac{d+2}{d}$, with $p_*$ as in Corollary \ref{cor:parabolic_interpolation}; hence, using Hölder’s inequality and the embedding $L^{p_*}(\Omega)\hookrightarrow L^{2p'}(\Omega)$ (recall that $\Omega$ is bounded), we obtain
\begingroup
\allowdisplaybreaks
\begin{align*}
|\mathfrak{I}_4|
&\leq \int_{Q_j}{|\vec{u}_{h,\tau}|^2 |\mathcal{G}_h^{2k_{\vec{u}}}(\overline{\vec{u}}_{h,\tau})|\,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x}
+ \int_{Q_j} {|\overline{\vec{u}}_{h,\tau}| |\vec{u}_{h,\tau}| |\mathcal{G}_h^{2k_{\vec{u}}}(\vec{u}_{h,\tau})|\,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x }\\
&\leq
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad +
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/(2p')}}
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/(2p')}}\cdot \\
&\hphantom{00000}\cdot \bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\lesssim
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{2/p_*}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad +
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}} \\
&\hphantom{00000}\cdot \bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}.
\end{align*}
\endgroup
Now, the crucial observation is that Corollary \ref{cor:parabolic_interpolation} still holds when using the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$; more precisely, for every $\vec{v}_{h,\tau} \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)$, we have that
\begin{equation*}
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^{p_*}_{L^{p_*}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\lesssim \bigg( \int_{I_j} \|\vec{v}_{h,\tau}(t)\|^p_{h,p}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\norm{\vec{v}_{h,\tau}}_{L^\infty(I_j;L^2(\Omega)^d)}^{\frac{2}{d+2}}.
\end{equation*}
Combining this with the stability estimate \eqref{eq:exp_stability_infty_star} (with $\norm{\cdot}_\star\! =\! \norm{\cdot}_{L^2(\Omega)}$) and estimate \eqref{eq:exp_stability_GR_p}~(with~${s\!=\!p}$), then, yields that
\begin{equation}
|\mathfrak{I}_4| \lesssim \|\smash{\vec{u}_{h,\tau}}\|^{\frac{4}{d+2}}_{L^\infty(I_j;L^2(\Omega)^d)}.
\end{equation}
In summary, we have that
\begin{equation}\label{eq:almost_stability}
\frac{\lambda}{2}e^{-\lambda(t_j-t_{j-1})} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} \,{\rm d} t
\lesssim
1 +
\norm{\smash{\vec{u}_{h,\tau}}}^{\frac{4}{d+2}}_{L^\infty(I_j;L^2(\Omega)^d)}\,.
\end{equation}
On the other hand, the equivalence of norms in finite dimensional spaces and the quasi-uniformity \eqref{eq:time_quasiuniform} of the time partition imply that (cf. \cite[Lm.\ 3.5]{CW.2010b})
\begin{equation}
\|\vec{u}_{h,\tau}\|^2_{L^\infty(I_j;L^2(\Omega))}
\lesssim
\frac{1}{\tau} \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}(t)}}^2_{L^2(\Omega)} \,{\rm d} t\,.
\end{equation}
Hence, choosing $\lambda=\tau^{-1}$ in \eqref{eq:almost_stability}, so that $\lambda\,e^{-\lambda(t_j-t_{j-1})}\geq \tau^{-1}e^{-1}$, and noting that $\frac{4}{d+2}< 2$, which allows the right-hand side to be absorbed by means of Young's inequality, yields the assertion.
\end{proof}
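The elementary exponent manipulations used above can be double-checked symbolically. The following sympy sketch (a sanity check only, not part of the proof) verifies, for $d\in\{2,3\}$, that $2p'\leq p_*=p\frac{d+2}{d}$ holds exactly from the threshold $p=\frac{3d+2}{d+2}$ onwards, and that $\frac{4}{d+2}<2$:
\begin{verbatim}
import sympy as sp

p, d = sp.symbols('p d', positive=True)
p_prime = p / (p - 1)               # Hoelder conjugate of p
p_star = p * (d + 2) / d            # parabolic exponent p_* = p(d+2)/d

gap = sp.simplify(p_star - 2 * p_prime)   # >= 0 iff 2p' <= p_*
for dim in (2, 3):
    thresh = sp.Rational(3 * dim + 2, dim + 2)
    print(sp.simplify(gap.subs({d: dim, p: thresh})) == 0)          # True
    print(gap.subs({d: dim, p: thresh + sp.Rational(1, 10)}) > 0)   # True
    print(sp.Rational(4, dim + 2) < 2)                              # True
\end{verbatim}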
The a priori estimate \eqref{eq:apriori} and Theorem \ref{thm:stability} could be the starting point of a compactness argument to prove (weak) convergence of the numerical solutions to a minimal regularity energy solution of \eqref{eq:weak_PDE}. In a convergence proof, further assumptions would be needed such as monotonicity of the constitutive relation, in order to be able to identify the non-linear limit; see, e.g., \cite{AGF.2023}, where this was carried out for a discretisation of natural convection.\newpage
\begin{corollary}
Let $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$ be a solution of the discrete problem without quadrature. Moreover, assume that $p\geq \frac{3d+2}{d+2}$ if $k_t >0$ and $p>1$ if $k_t=0$. Then, assuming that the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:Linfty_stability2}
\|\vec{u}_{h,\tau}\|_{L^\infty(I;L^2(\Omega)^d)} \leq c\,.
\end{equation}
\end{corollary}
\begin{proof}
The proof for the DG time discretisation without quadrature is almost identical.~The~only~difference is that Corollary \ref{cor:parabolic_interpolation} can be applied directly, and that now the stability estimate \eqref{eq:exp_stability_p} with the standard measure ${\rm d} t$ is the one that has to be employed.
\end{proof}
\begin{remark}
The energy stability of several Diagonally Implicit Runge--Kutta methods was recently analysed in \cite{ST.2022}. While our work focused exclusively on the RadauIIA Implicit Runge--Kutta method, the arguments presented here could conceivably be combined with the approach from \cite{ST.2022} to obtain $L^\infty(0,T;L^2(\Omega)^d)$-stability of various other discretisations of incompressible flow models.
\end{remark}
\section{Introduction}
\subsection{Description of the model}
\qquad In this paper, we analyse the stability of non-conforming numerical schemes~for~a~system~describing the evolution of an incompressible non-Newtonian fluid. Namely, for a given spatial domain $\Omega \subseteq \mathbb{R}^d$, with $d\in\{2,3\}$, and a final time $0<T<\infty$, in the continuous setting, one looks for~a~velocity~vector~field $\vec{u}\colon [0,T]\times \overline{\Omega} \to \mathbb{R}^d$, a pressure field $\pi\colon (0,T)\times \Omega \to \mathbb{R}$, and~a~(symmetric~and~traceless) stress tensor $\tens{S}\colon (0,T)\times \Omega \to \mathbb{R}^{d \times d}_{\sym,\tr}$ such that
\begin{subequations}
\label{eq:continuous_PDE}
\begin{alignat}{2}
\begin{aligned}
\label{eq:continuous_balance}
\partial_t \vec{u} - \rmdiv\tens{S} + \rmdiv(\vec{u} \otimes \vec{u}) + \nabla \pi
&= \bm{f} \qquad \quad &&\text{ in } (0,T)\times \Omega\,,\\
\rmdiv\vec{u} &= 0 \qquad \quad &&\text{ in }{(0,T)\times \Omega}\,,\\
\vec{u} &= \bm{0} \qquad \quad &&\text{ on }(0,T)\times \partial\Omega\,,\\
\vec{u}(0,\cdot) &= \vec{u}_0 \qquad \quad &&\text{ in }\Omega\,,
\end{aligned}
\end{alignat}
where the initial velocity vector field $\vec{u}_0\colon \Omega \to \mathbb{R}^d$ and the body force $\bm{f}\colon (0,T)\times \Omega \to \mathbb{R}^d$~are~given. To close the system, we consider an implicit constitutive law of the form
\begin{equation}
\label{eq:continuous_constitutive_law}
\tens{G}(\tens{S}, \tens{D}(\vec{u})) = \bm{0} \qquad
\text{ in } (0,T)\times \Omega\,,
\end{equation}
\end{subequations}
where $\tens{D}(\vec{u}) = \frac{1}{2}(\nabla\vec{u} + \nabla\vec{u}^\top)\colon(0,T)\times \Omega\to\mathbb{R}^{d \times d}_{\sym} $ denotes the strain rate tensor, i.e., the symmetric part of the velocity gradient, and $\tens{G}\colon \mathbb{R}^{d \times d}_{\sym} \times \mathbb{R}^{d \times d}_{\sym} \to \mathbb{R}^{d \times d}_{\sym}$ is a locally Lipschitz function such that $\tens{G}(\bm{0},\bm{0})=\bm{0}$ and which defines a $p$-coercive~graph~for~$p>1$, in the sense that there exist two constants $c_1, c_2>0$ such that
\begin{equation}\label{eq:coercivity}
\tens{G}(\tens{A},\tens{B}) = \bm{0} \qquad
\Longrightarrow \qquad
\tens{A}\fp\tens{B} \geq c_1(|\tens{A}|^{p'} + |\tens{B}|^p) - c_2\,,
\end{equation}
for every $(\tens{A},\tens{B})\in \mathbb{R}^{d \times d}_{\sym}\times \mathbb{R}^{d \times d}_{\sym}$. Such a class of constitutive relations captures many models that are popular in applications. Prototypical examples that, in addition, define a \emph{monotone graph} include fluids with power-law structure
\begin{subequations}\label{eq:power_law}
\begin{gather}
\tens{G}(\tens{S},\tens{D}) \coloneqq \tens{S} - K_\star(1 + \Gamma_\star |\tens{D}|^2)^{\frac{p-2}{2}}\tens{D}
\qquad K_\star,\Gamma_\star>0\,,\; p>1\,, \label{eq:power_lawA}\\
\tens{G}(\tens{S},\tens{D}) \coloneqq K_\star(1 + \Gamma_\star |\tens{S}|^2)^{\frac{p'-2}{2}}\tens{S} - \tens{D}
\qquad K_\star,\Gamma_\star>0\,,\; p>1\,,
\end{gather}
\end{subequations}
or viscoplastic Bingham fluids
\begin{equation}\label{eq:bingham_implicit}
\tens{G}(\tens{S},\tens{D}) \coloneqq (|\tens{S}| - \tau_\star)^+ \tens{S}
- 2\nu_\star(\tau_\star + (|\tens{S}|-\tau_\star)^+)\tens{D}
\qquad \nu_\star > 0\,,\; \tau_\star \geq 0\,,
\end{equation}
where $(\cdot)^+ \!\coloneqq \! (s\mapsto\max\{s,0\})\colon \!\mathbb{R}\!\to\! \mathbb{R}$; this relation is more commonly written~in~terms~of~the~dichotomy
\begin{equation}\label{eq:bingham_dichotomy}
\renewcommand{\arraystretch}{1.2}
\left\{
\begin{array}{ccc}
|\tens{S}|\leq \tau_\star & \Longleftrightarrow & \tens{D} = \bm{0}\,, \\[1mm]
|\tens{S}|> \tau_\star & \Longleftrightarrow & \tens{S} = 2\nu_\star \tens{D} + \displaystyle\frac{\tau_\star}{|\tens{D}|}\tens{D}\,.
\end{array}
\right.
\end{equation}
Note that while it is not possible to write the relation \eqref{eq:bingham_dichotomy} in terms of a single-valued function~$\tens{S}(\tens{D})$, within the implicit framework one can express it in terms of elementary functions without issue. We note further that the Newtonian constitutive relation is, of course, also covered here (e.g., take $\tau_\star = 0$~in \eqref{eq:bingham_implicit} or $p=2$ in \eqref{eq:power_law}). We refer to \cite{BMR.2020,BMM.2021} for an in-depth discussion of the different models that can be described with such monotone constitutive relations and the corresponding PDE analysis.
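To illustrate how such implicit relations are handled in practice, the following Python sketch (a minimal numerical check with arbitrarily chosen parameter values, not part of the discretisation) evaluates the Bingham relation \eqref{eq:bingham_implicit} and confirms that pairs $(\tens{S},\tens{D})$ constructed from either branch of the dichotomy \eqref{eq:bingham_dichotomy} satisfy $\tens{G}(\tens{S},\tens{D})=\bm{0}$:
\begin{verbatim}
import numpy as np

nu, tau_y = 0.5, 1.0      # nu_star, tau_star (arbitrary test values)

def G_bingham(S, D):
    # Implicit Bingham law: (|S|-tau)^+ S - 2 nu (tau + (|S|-tau)^+) D,
    # with |.| the Frobenius norm.
    plus = max(np.linalg.norm(S) - tau_y, 0.0)
    return plus * S - 2.0 * nu * (tau_y + plus) * D

# Flowing branch: D symmetric, traceless, nonzero; S = 2 nu D + tau D/|D|.
D = np.array([[0.3, 0.1], [0.1, -0.3]])
S = 2.0 * nu * D + tau_y * D / np.linalg.norm(D)
print(np.allclose(G_bingham(S, D), 0.0))    # True: (S, D) lies on the graph

# Rigid branch: any S with |S| <= tau paired with D = 0 also gives G = 0.
S_rigid = np.array([[0.4, 0.2], [0.2, -0.4]])
print(np.linalg.norm(S_rigid) <= tau_y,
      np.allclose(G_bingham(S_rigid, np.zeros((2, 2))), 0.0))  # True True
\end{verbatim}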
The implicit constitutive relations considered here also include non-monotone relations that can describe hysteretic behaviour, e.g.,
\begin{equation}\label{eq:non_monotone}
\tens{G}(\tens{S},\tens{D}) = \Big[a(1 + b|\tens{S}|^2)^{\frac{q-2}{2}} + c\Big]\tens{S}
- \tens{D}
\qquad a,b,c>0\,,\; q\in \mathbb{R}\,,
\end{equation}
which for $q<0$, in general, is \emph{non-monotone} (see \cite{LRR.2013} for details), but has, nevertheless,~been~shown to be thermodynamically consistent \cite{JP.2018}. See also \cite{JMPT.2019} for insightful numerical experiments.
In this work, we concentrate on non-conforming discretisations of the problem \eqref{eq:continuous_PDE}; namely, a discontinuous Galerkin in time method $\mathrm{DG}(k)$ and a discontinuous Galerkin discretisation in space that can, in particular, be taken to be a Local Discontinuous Galerkin (LDG) method or an Interior Penalty (IP) method (possibly incomplete). The DG time discretisation we consider here can be shown to be equivalent to a RadauIIA Implicit Runge--Kutta scheme \cite{MN.2006}, which, due to its L-stability, is popular in applications modelled by parabolic problems.
Regarding the spatial discretisation, in the case of incompressible fluid models such as \eqref{eq:continuous_PDE}, one has the additional concern of the preservation of the divergence-free constraint \eqref{eq:continuous_balance}$_2$ at the discrete level; in recent years, the importance of this has been recognised and schemes that lead to point-wise divergence-free approximations~have~many~desirable~qualities, such as pressure robust error estimates (see \cite{JLMNR.2017} for more details). One of the main ways of obtaining exactly divergence-free approximations is to relax the conformity requirement and employ a finite element space for the velocity that is $H(\rmdiv;\Omega)$-conforming only. This non-conformity is then handled by including DG terms in the formulation (see, e.g., \cite{CKS.2007,SLLL.2018} for the Newtonian case). While this is one of our main motivations, here we will analyse more general discretisations that might not~enforce~the~divergence~constraint~exactly.
Given the highly non-linear nature of the models considered here, deriving~error~estimates~seems out of reach. In such cases, one can turn instead to proving weak convergence (of a subsequence) to minimal regularity solutions by using compactness arguments; a crucial step in such arguments is to establish stability of the corresponding discrete scheme, from which one then~extracts~converging~subsequences; this approach was taken in \cite{ST.2019,FGS.2020} for conforming-in-space discretisations of implicitly constituted models; for the case with explicit constitutive relations (and implicit Euler in time), see \cite{BKR.2021,KR.2022}. In the setting considered here, the coercivity condition \eqref{eq:coercivity} results in a stability estimate that guarantees the uniform boundedness of the velocity approximations in $L^p(0,T;W^{1,p}(\Omega)^d)$ (or, more precisely, on its broken counterpart) and of the stress approximations~in~$L^{p'}((0,T)\times \Omega)^{d\times d}$.~This~is,~however,~not enough as the usual notions of energy solutions for incompressible models require also that $\vec{u} \in L^\infty(0,T;L^2(\Omega)^d)$; among other things, this condition is useful because (see, e.g., \cite{ST.2019} for more details):
\begin{itemize}
\item Together with a Gagliardo--Nirenberg-type interpolation inequality, cf. \cite[Theorem I.2.1]{dibene}, it implies that $$\vec{u} \in L^{\frac{p(d+2)}{d}}((0,T)\times\Omega)^d\,,$$ which, in turn, implies, e.g., that if $p\geq \frac{3d+2}{d+2}$ (and so, in particular, for the Newtonian problem in 2D), then the velocity is an admissible test function in the balance of momentum, which guarantees an energy identity and, thus, uniqueness of solutions;\vspace{1mm}
\item It is used when proving that $$\vec{u} \in C_w^0([0,T];L^2(\Omega)^d)\,,$$ meaning that the initial condition is a priori meaningful in this weak sense, but in fact this allows one to prove that $$\lim_{t\to 0}\|\vec{u}(t) - \vec{u}_0\|_{L^2(\Omega)}= 0\,.$$
\end{itemize}
It is, therefore, highly desirable that the discretisation methods produce solutions which are also uniformly stable in $L^\infty(0,T;L^2(\Omega)^d)$. By testing the DG-in-time discretised system with the solution, it is straightforward (see Lemma \ref{lem:apriori} below) to prove $L^2(\Omega)^d$-stability at the partition points $\{t_j\}$. However, this only yields the desired $L^\infty(0,T;L^2(\Omega)^d)$ bound in the lowest-order case $\mathrm{DG}(0)$ (i.e.\ implicit~Euler), since the function is piece-wise constant in time. When working with general DG in time discretisations, one can only guarantee stability in $L^{2p}(0,T;L^2(\Omega)^d)$; see \cite{W.2010} and \cite{AGF.2023} for the spatially conforming and non-conforming cases, respectively. Thus, in general, one would obtain convergence to a weaker notion of solution that might not be unique even when $p=2=d$. Chrysafinos and Walkington \cite{CW.2010b} proved, however, with the help of Ladyzhenskaya's inequality, that for spatially conforming discretisations, one can still obtain $L^\infty(0,T;L^2(\Omega)^d)$-stability for the Newtonian problem ($p=2$) in two spatial dimensions ($d=2$). The main contribution of this work is the extension of this result to the non-Newtonian and non-conforming setting; in particular, we establish that if $p\geq \frac{3d+2}{d+2}$ (i.e.\ when the velocity is an admissible test function), DG discretisations are stable also in $L^\infty(0,T;L^2(\Omega)^d)$. An important step in the proof is the application of a Gagliardo--Nirenberg inequality on DG spaces, which we also derive and which is, to the best of our knowledge, new; such an inequality is needed because the numerical solutions are discontinuous across elements.
\textit{This article is organised as follows:} In Section \ref{sec:preliminaries}, we introduce the employed notation, the basic assumptions on the mesh regularity, and the relevant spaces and operators from~DG~theory.~In~Section~\ref{sec:gagliardo}, we establish a discrete Gagliardo--Nirenberg-type inequality on DG spaces. In Section \ref{sec:parabolic_interpolation}, using the discrete Gagliardo--Nirenberg-type inequality from Section \ref{sec:gagliardo}, we derive several parabolic discrete interpolation inequalities. These discrete parabolic interpolation inequalities are employed in Section \ref{sec:stablity} to prove the $L^\infty(0,T;L^2(\Omega)^d)$-stability of discontinuous Galerkin schemes for incompressible flows.
\section{Preliminaries}\label{sec:preliminaries}
\qquad Throughout the entire article, if not otherwise specified, we always denote by ${\Omega\subseteq \mathbb{R}^d}$,~${d\in\mathbb{N}}$, a bounded polyhedral Lipschitz domain with outward-pointing unit normal vector field $\vec{n}\colon \partial\Omega\to \mathbb{S}^{d-1}$.
Then, the time interval will be denoted by $I\coloneqq (0,T)$, $0<T<\infty$, and the parabolic~cylinder~by~$Q\coloneqq I\times \Omega$. For $p\in [1,\infty]$ and $k\in\mathbb{N}$, we will employ standard notation for Lebesgue $L^p(\Omega)$, Sobolev $W^{k,p}(\Omega)$, and Bochner--Sobolev $L^p(I;W^{k,p}(\Omega))$ spaces throughout. For $p\in [1,\infty)$ and $k\in \mathbb{N}$, we~denote~by $W_0^{k,p}(\Omega)$, the closure of the space of smooth functions on $\Omega$ with compact support, with respect to the $\|\cdot\|_{W^{k,p}(\Omega)}$-norm. The subspace of $L^p(\Omega)$ functions with zero mean~will~be~denoted~by~$L_0^p(\Omega)$.
\subsection{Mesh regularity}
\qquad In this subsection, we propose a set of assumptions on the family of partitions $\{\mathcal{T}_{h}\}_{h\in (0,1]}$, which are required in order to apply the theory developed in this paper. These assumptions correspond to the choice in \cite{BO.2009}.
Let $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ be a family of partitions of the closure $\overline{\Omega}$ into convex polyhedral elements, which are affine images of a set of reference polyhedra. More precisely, we assume that there exists a finite number of convex reference polyhedra $\widehat{K}_1,\dots,\widehat{K}_N$, such that $\vert\widehat{K}_i\vert = 1 $ for $i = 1,\dots, N$, and that for each $K \in \mathcal{T}_{h}$, there exists a reference element $\widehat{K}_i$ for some $i\in \{1,\dots, N\}$ and an invertible affine map $F_K\colon \widehat{K}_i\to K$ such that $K = F_K (\widehat{K}_i)$. The symbol $h>0$ denotes the maximal mesh size, i.e., if we define $h_K\coloneqq \text{diam}(K)$ for every $K\in \mathcal{T}_{h}$, then we have that $h=\max_{K\in \mathcal{T}_{h}}{h_K}$. Without loss of generality, we assume that $h\in (0, 1]$. We will provide further assumptions on the mesh regularity in the course of this section.
We define the sets of $(d-1)$-dimensional faces $\Gamma_h$, interior faces $\Gamma_h^{i}$, and boundary faces $\Gamma_h^{\partial}$ of the partition $\mathcal{T}_{h}$ by
\begin{align*}
\Gamma_h&\coloneqq \Gamma_h^{i}\cup \Gamma_h^{\partial}\,,\\[-0.5mm]
\Gamma_h^{i}&\coloneqq \{K\cap K'\mid K,K'\in \mathcal{T}_{h}\,,\text{dim}_{\mathscr{H}}(K\cap K')=d-1\}\,,\\[-0.5mm]
\Gamma_h^{\partial}&\coloneqq\{K\cap \partial\Omega\mid K\in \mathcal{T}_{h}\,,\text{dim}_{\mathscr{H}}(K\cap \partial\Omega)=d-1\}\,,
\end{align*}
where for every $S\subseteq \mathbb{R}^d$, we denote by $\text{dim}_{\mathscr{H}}(S)\coloneqq\inf\{d'\geq 0\mid \mathscr{H}^{d'}(S)=0\}$, the~Hausdorff~dimension. The (local) mesh-size function $h_{\mathcal{T}}\colon \Omega\to \mathbb{R}$ for every element $K\in \mathcal{T}_{h}$ is defined by $h_{\mathcal{T}}|_K\coloneqq h_K $.
The (local) face-size function $h_{\Gamma}\colon \Gamma_h\to \mathbb{R}$ for every facet $F\in \Gamma_h$ is defined by $h_{\Gamma}|_F\coloneqq h_F \coloneqq \text{diam}(F)$.\enlargethispage{4mm}
\begin{assumption}[Mesh quality; cf. \cite{BO.2009}]\label{assum:mesh}
We assume that $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ satisfies the following conditions:
\begin{itemize}
\item[(i)] \textup{Shape Regularity.} There exist constants $c_1,c_2>0$ such that for every $K\in \mathcal{T}_h$ and~$h\in (0,1]$,~it~holds\vspace{-1.5mm}
\begin{align*}c_1\, h_K^d\leq \vert K\vert\leq c_2\, h_K^d\,.\\[-7mm]
\end{align*}
\item[(ii)] \textup{Contact Regularity.} There exists a constant $c_3>0$ such that for every $F\in \Gamma_h$ with $F\subseteq \overline{K}$ for some $K\in \mathcal{T}_h$ and $h\in (0,1]$, it holds\vspace{-1.5mm}
\begin{align*}c_3\, h_K^{d-1}\leq \mathscr{H}^{d-1}(F)\,.\\[-7mm]
\end{align*}
\item[(iii)] \textup{Submesh condition.} There exists a shape-regular, conforming, matching simplicial submesh $\widetilde{\mathcal{T}_{h}}$ such that\vspace{-2mm}
\begin{itemize}
\item[1.] For each $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$, there exists $K\in \mathcal{T}_{h}$ such that $\widetilde{K}\subseteq K$,
\item[2.] The family $\{\widetilde{\mathcal{T}_{h}}\}_{h\in (0,1]}$ satisfies (i) and (ii).\vspace{-0.5mm}
\item[3.] There exists a constant $\tilde{c}>0$ such that for any $\widetilde{K}\in \smash{\widetilde{\mathcal{T}_{h}}}$, $K\in \mathcal{T}_{h}$ with $\widetilde{K}\subseteq K$,~it~holds~${h_K \leq \tilde{c}\, h_{\widetilde{K}}}$.
\end{itemize}
\end{itemize}
\end{assumption}
\begin{remark}
We note that in dimension $d \in \{ 2, 3\}$ a simplicial submesh can be constructed under mild assumptions on the partitions $\{\mathcal{T}_{h}\}_{h\in (0,1]}$ (cf. \cite[Corollary 7.3]{Brenner.2003}). In addition, it seems straightforward to generalise this proof to arbitrary dimensions $d\ge 2$.
\end{remark}
\subsubsection{Broken function spaces and projectors}
\qquad For every $k \in \mathbb{N}_0$~and~${K\in \mathcal{T}_h}$,
we denote by $\mathbb{P}_k(K)$, the space of
polynomials of degree at most $k$ on $K$. Then, for given $k\in \mathbb{N}_0$, we define the space of \textit{broken polynomials of global degree at most $k$}
\begin{align*}
\mathbb{P}_k(\mathcal T_h)&\coloneqq\big\{v_h\in L^\infty(\Omega)\mid v_h|_K\in \mathbb{P}_k(K)\text{ for all }K\in \mathcal{T}_h\big\}\,.
\end{align*}
In addition, for
given~$p\in (1,\infty)$,
we define the \textit{broken Sobolev space}
\begin{align*}
W^{1,p}(\mathcal T_h)&\coloneqq\big\{w_h\in L^p(\Omega)\mid w_h|_K\in W^{1,p}(K)\text{ for all }K\in \mathcal{T}_h\big\}\,.
\end{align*}
For each $w_h\!\in\! W^{1,p}(\mathcal{T}_h)$, we denote by $\nabla_h w_h\!\in\! L^p(\Omega)^d$ the \textit{local gradient}, defined by
$(\nabla_h w_h)|_K\!\coloneqq\!\nabla(w_h|_K)$~for~all~${K\!\in\!\mathcal{T}_h}$.
For each $K\!\in\! \mathcal{T}_h$,~${w_h\!\in\! W^{1,p}(\mathcal{T}_h)}$~admits~a~trace~${\textrm{tr}^K(w_h)\!\in\! L^p(\partial K)}$. For each face
$F\in \Gamma_h$ of a given element $K\in \mathcal{T}_h$, we denote the restriction of this interior trace to $F$ by
$\smash{\textup{tr}^K_F(w_h)\in L^p(F)}$. Then, given some multiplication operator~${\odot\colon \mathbb{R}^m\times \mathbb{R}^d\to \mathbb{R}^l}$,~${m,l\in \mathbb{N}}$, for
every $w_h\in W^{1,p}(\mathcal{T}_h)$ and interior faces $F\in \Gamma_h^{i}$ shared by
adjacent elements $K^-_F, K^+_F\in \mathcal{T}_h$,
we denote by
\begin{align*}
\{w_h\}_F&\coloneqq\tfrac{1}{2}\big(\textup{tr}_F^{K^+}(w_h)+
\textup{tr}_F^{K^-}(w_h)\big)\in
L^p(F)\,,\\
\llbracket w_h\odot \vec{n}\rrbracket_F
&\coloneqq\textup{tr}_F^{K^+}(w_h)\odot\vec{n}^+_F+
\textup{tr}_F^{K^-}(w_h)\odot\vec{n}_F^-
\in L^p(F)\,,
\end{align*}
the \textit{average} and \textit{jump}, respectively, of $w_h$ on $F$.
Moreover, for every $w_h\in W^{1,p}(\mathcal{T}_h)$ and boundary faces $F\in \Gamma_h^{\partial}$, we define \textit{boundary averages} and
\textit{boundary jumps}, respectively, by
\begin{align*}
\{w_h\}_F&\coloneqq\textup{tr}^\Omega_F(w_h) \in L^p(F)\,, \\
\llbracket w_h\odot\vec{n}\rrbracket_F&\coloneqq
\textup{tr}^\Omega_F(w_h)\odot\vec{n} \in L^p(F)\,.
\end{align*}
If there is no
danger~of~confusion, we will omit the index $F\in \Gamma_h$; in particular, if we interpret jumps and averages as global functions defined on the whole of $\Gamma_h$.
Apart from that, for every $w_h\in W^{1,p}(\mathcal{T}_h)$, we introduce the DG norm via
\begin{align*}
\|w_h\|_{h,p}\coloneqq\Big(\|\nabla_hw_h\|_{L^p(\Omega)}^p+\big\|h^{-\frac{1}{p'}}_\Gamma\jump{w_h\vec{n}}\big\|_{L^p(\Gamma_h)}^p\Big)^{1/p}\,,
\end{align*}
which turns $W^{1,p}(\mathcal{T}_h)$ into a Banach space\footnote{The completeness of $W^{1,p}(\mathcal{T}_h)$ equipped with $\|\cdot\|_{h,p}$, for each fixed $h\in (0,1]$, follows from ${\|w_h\|_{L^p(\Omega)}\lesssim\|w_h\|_{h,p}}$ for all $w_h\in \smash{W^{1,p}(\mathcal{T}_h)}$ (cf.~\cite[Lemma A.9]{DKRI14}) and an element-wise application of the trace theorem.}.
With this norm, cf.~\cite[Lm. A.9]{DKRI14},~for~every~${w_h\in W^{1,p}(\mathcal{T}_h)}$, there holds the discrete Poincar\'e inequality
\begin{equation}\label{eq:poincare}
\|w_h\|_{L^p(\Omega)} \lesssim \|w_h\|_{h,p}\,.
\end{equation}
Whenever we write $A \lesssim B$, it is meant that $A \leq c\, B$ with a constant $c>0$ that might depend on the domain, polynomial degree and/or shape regularity, but is independent of the discretisation parameters (i.e., the mesh size $h>0$ or the time step size $\tau>0$).
\section{Discrete Gagliardo--Nirenberg-type inequality}\label{sec:gagliardo}
\qquad In this section, we derive a discrete Gagliardo--Nirenberg-type inequality.
The key ingredient is
the quasi-interpolation operator $Q_h\colon \mathbb{P}_k(\mathcal{T}_{h})\to \mathbb{P}_1(\smash{\widetilde{\mathcal{T}_{h}}})\cap W^{1,\infty}(\Omega)$, where $\smash{\widetilde{\mathcal{T}_{h}}}$ denotes the simplicial submesh in Assumption \ref{assum:mesh} (iii), introduced~in~\cite{BO.2009}, and its approximation and stability properties~on~DG~spaces:\enlargethispage{11.5mm}
\begin{lemma}\label{lem:scott_zhang_stable}
Let $p\in [1,\infty)$ and $k\in \mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$, it holds\vspace{-1mm}
$$
\|\nabla Q_hv_h\|_{L^p(\Omega)}\lesssim
\|v_h\|_{h,p}\,.\\[-3.5mm]
$$
\end{lemma}
\begin{proof}
See \cite[Thm. 3.1, (3.11)]{BO.2009}.
\end{proof}
\begin{lemma}\label{lem:scott_zhang_approx}
Let $p,s\in [1,\infty)$ and $k\in\mathbb{N}_0$.
Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$ and $K\in \mathcal{T}_{h}$, it holds\vspace{-2mm}\footnote{For every $p\in [1,\infty)$, $w_h\in W^{1,p}(\mathcal{T}_{h})$, and $K\in \mathcal{T}_{h}$, we define
$\smash{\|w_h\|_{h,p,\omega_K}\coloneqq (\|\nabla_hw_h\|_{L^p(\omega_K)}^p+\|h^{-1/p'}_\Gamma\jump{w_h\vec{n}}\|_{L^p(\Gamma_h\cap \omega_K)}^p)^{1/p}}$}
$$
\|v_h-Q_hv_h\|_{L^s(K)}\lesssim
h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\,,
$$
where $\omega_K\coloneqq \bigcup\{K'\in\mathcal{T}_{h}\mid K'\cap K\neq\emptyset\} $. In particular, for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$,
it holds\vspace{-1mm}
$$
\|v_h-Q_hv_h\|_{L^p(\Omega)}\lesssim \|h_{\mathcal{T}}v_h\|_{h,p}\,.\\[-3.5mm]
$$
\end{lemma}
\begin{proof}
See \cite[Thm. 3.1, (3.7) \& (3.10)]{BO.2009}.
\end{proof}
\begin{corollary}\label{cor:scott_zhang_stable}
Let $p\in [1,\infty)$ and $k\in\mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$ and $K\in \mathcal{T}_{h}$, it holds\vspace{-1mm}
$$
\| Q_hv_h\|_{L^p(K)}+\| v_h-Q_hv_h\|_{L^p(K)}\lesssim
\|v_h\|_{L^p(\omega_K)}
\,.
$$
In particular, for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$,
it holds\vspace{-1mm}
$$
\| Q_hv_h\|_{L^p(\Omega)}+\| v_h-Q_hv_h\|_{L^p(\Omega)}\lesssim
\|v_h\|_{L^p(\Omega)} \,.\\[-3.5mm]
$$
\end{corollary}
\begin{proof}\let\qed\relax
Using the $L^p$-approximation property of $Q_h$ for $s=p$ (cf. Lemma \ref{lem:scott_zhang_approx}), the inverse inequality (cf. \cite[Ex. 12.3]{EG21}), and the discrete trace inequality (cf.~\cite[Lm. 12.8]{EG21}), we find that
\begin{align*}
\| Q_hv_h\|_{L^p(K)}+\| v_h-Q_hv_h\|_{L^p(K)}&\lesssim \| v_h\|_{L^p(K)} + \| v_h -Q_hv_h\|_{L^p(K)}\\&
\lesssim \| v_h\|_{L^p(K)}+h_K\,\| v_h\|_{h,p,\omega_K}
\lesssim \| v_h\|_{L^p(\omega_K)}\,.\tag*{\qedsymbol}
\end{align*}
\end{proof}\vspace*{-10mm}
\begin{lemma}[Gagliardo--Nirenberg]\label{lem:gagliardo}
Let $p,q\in [1,\infty)$ and $k\in\mathbb{N}_0$. Then,
for every $v_h\in \mathbb{P}_k(\mathcal{T}_{h})$, it holds\vspace{-1mm}\enlargethispage{3mm}
\begin{align*}
\|v_h\|_{L^s(\Omega)}\lesssim
\|v_h\|_{h,p}^\gamma\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,,
\end{align*}
where $s\in [1,\infty)$ and $\gamma\in [0,1]$ satisfy\vspace{-1.5mm}
\begin{align}
\gamma=\frac{\frac{1}{q}-\frac{1}{s}}{\frac{1}{q}+\frac{1}{d}-\frac{1}{p}}\,.\label{eq:gamma}
\end{align}
\end{lemma}
Analogously to \cite[Thm. I.2.1]{dibene}, for each $d\ge 2$, the admissible range for $p,q,s\in [1,\infty)$ and $\gamma\in [0,1]$ satisfying \eqref{eq:gamma}, setting $p^*\coloneqq \smash{\frac{dp}{d-p}}$ (the Sobolev exponent) if $p<d$, is given by:\vspace{-3mm}
\begin{subequations}\label{eq:admissibility}
\begin{alignat}{4}
&\text{if }p\in [1,d):\quad &&\gamma\in [0,1]\qquad &&\text{and}\qquad &&s\in \begin{cases}
[q,p^*]&\text{if }q\in [1,p^*]\\
[p^*,q]&\text{if }q\in [p^*,\infty)
\end{cases}\,,\label{eq:admissibility.1}\\
&\text{if }p\in [d,\infty):\quad &&s\in [q,\infty)\qquad &&\text{and}\qquad &&\gamma\in \big[0,\tfrac{dp}{dp+q(p-d)}\big)\,.\label{eq:admissibility.2}
\end{alignat}
\end{subequations}
\begin{proof}[Proof (of Lemma \ref{lem:gagliardo}).]
To begin with, we observe that
\begin{align}\label{eq:gagliardo.1}
\|v_h\|_{L^s(\Omega)}\leq \|Q_hv_h\|_{L^s(\Omega)}+\|v_h-Q_hv_h\|_{L^s(\Omega)}\eqqcolon I_h^1+I_h^2\,.
\end{align}
As a result, it suffices to estimate $I_h^1$ and $I_h^2$ separately:
\textit{ad $I_h^1$.} Using the classical Gagliardo--Nirenberg inequality \cite{Nir59}, the discrete Poincar\'e~inequality~\eqref{eq:poincare}, the DG-stability of $Q_h$ (cf.~Lemma~\ref{lem:scott_zhang_stable}), and the $L^q$-stability~property~of~$Q_h$~(cf.~Corollary~\ref{cor:scott_zhang_stable}),~we~deduce~that
\begin{align}\label{eq:gagliardo.2}
\begin{aligned}
I_h^1&\lesssim \,(\| Q_hv_h\|_{L^p(\Omega)}+\|\nabla Q_hv_h\|_{L^p(\Omega)})^{\gamma}\|Q_hv_h\|_{L^q(\Omega)}^{1-\gamma}
\\
&\lesssim \|Q_h v_h\|_{h,p}^{\gamma}\|Q_hv_h\|_{L^q(\Omega)}^{1-\gamma}
\\&\lesssim \|v_h\|_{h,p}^{\gamma}\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,.
\end{aligned}
\end{align}
\textit{ad $I_h^2$.} Using Lemma \ref{lem:scott_zhang_approx}, \cite[Ex. 12.4]{EG21} for all $K\in \mathcal{T}_{h}$ and $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$, that $h_K\leq \tilde{c}\,h_{\widetilde{K}}\leq \tilde{c}\,h_K$~for~all~$K\in \mathcal{T}_{h}$ and $\widetilde{K}\in \widetilde{\mathcal{T}_{h}}$ with $\widetilde{K}\subseteq K$ (cf. Assumption \ref{assum:mesh} (iii) 3.), that $\textup{card}(\{\widetilde{K}\in \smash{\widetilde{\mathcal{T}_{h}}}\mid \widetilde{K}\subseteq K\})\lesssim 1$ for all $K\in \mathcal{T}_{h}$ (cf. \cite[Lm. 1.40]{EP12}), Corollary \ref{cor:scott_zhang_stable}, and that
\begin{align*}
\sum_{i\in \mathbb{L}}{\vert a_i\vert^s}\leq \bigg(\sum_{i\in \mathbb{L}}{\vert a_i\vert}\bigg)^s
\end{align*}
for any finite subset $\mathbb{L}\subseteq \mathbb{N}$ and finite sequence $(a_i)_{i\in \mathbb{L}}\subseteq \mathbb{R}$,
we find that
\begin{align}\label{eq:gagliardo.3}
\begin{aligned}
(I_h^2)^s&\leq \sum_{K\in \mathcal{T}_{h}}{\Big(\|v_h-Q_hv_h\|_{L^s(K)}^{\gamma}\|v_h-Q_hv_h\|_{L^s(K)}^{1-\gamma}\Big)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\Bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\bigg(\sum_{\widetilde{K}\in \widetilde{\mathcal{T}_{h}};\widetilde{K}\subseteq K}{\|v_h-Q_hv_h\|_{L^s(\widetilde{K})}^s}\bigg)^{\smash{\frac{1-\gamma}{s}}}\Bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\Bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\bigg(\sum_{\widetilde{K}\in \widetilde{\mathcal{T}_{h}};\widetilde{K}\subseteq K}{h_{\widetilde{K}}^{d(\frac{1}{s}-\frac{1}{q})s}\|v_h-Q_hv_h\|_{L^q(\widetilde{K})}^s}\bigg)^{\smash{\frac{1-\gamma}{s}}}\Bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\bigg(\Big(h_K^{1+d(\frac{1}{s}-\frac{1}{p})}\|v_h\|_{h,p,\omega_K}\Big)^{\gamma}\Big(h_K^{d(\frac{1}{s}-\frac{1}{q})}\|v_h-Q_hv_h\|_{L^q(K)}\Big)^{1-\gamma}\bigg)^s}
\\&\lesssim \sum_{K\in \mathcal{T}_{h}}{\bigg(h_K^{(1+d(\frac{1}{s}-\frac{1}{p}))\gamma+d(\frac{1}{s}-\frac{1}{q})(1-\gamma)}\|v_h\|_{h,p,\omega_K}^{\gamma}\|v_h\|_{L^q(\omega_K)}^{1-\gamma}\bigg)^s}
\\&\lesssim \bigg(\sum_{K\in \mathcal{T}_{h}}{h_K^{(1+d(\frac{1}{s}-\frac{1}{p}))\gamma+d(\frac{1}{s}-\frac{1}{q})(1-\gamma)}\|v_h\|_{h,p,\omega_K}^{\gamma}\|v_h\|_{L^q(\omega_K)}^{1-\gamma}}\bigg)^s\,.
\end{aligned}
\end{align}
By the definition of $\gamma\in [0,1]$, cf. \eqref{eq:gamma}, it holds
\begin{align}\label{eq:gagliardo.4}
\begin{aligned}
(1+d(\tfrac{1}{s}-\tfrac{1}{p}))\gamma+d(\tfrac{1}{s}-\tfrac{1}{q})(1-\gamma)=0 \,.
\end{aligned}
\end{align}
Using~\eqref{eq:gagliardo.4}~in~\eqref{eq:gagliardo.3}, in particular, using that each $K\in \mathcal{T}_{h}$ appears only in finitely many $\omega_{K'}$, $K'\in \mathcal{T}_{h}$, we arrive at\enlargethispage{1mm}
\begin{align}\label{eq:gagliardo.5}
I_h^2\lesssim \|v_h\|_{h,p}^{\gamma}\|v_h\|_{L^q(\Omega)}^{1-\gamma}\,.
\end{align}
Eventually, combining \eqref{eq:gagliardo.2} and \eqref{eq:gagliardo.5} in \eqref{eq:gagliardo.1}, we conclude the assertion.
\end{proof}
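The exponent balance \eqref{eq:gagliardo.4}, which makes the powers of $h_K$ cancel in \eqref{eq:gagliardo.3}, is a purely algebraic consequence of \eqref{eq:gamma}; it can be verified symbolically, e.g., with the following short sympy sketch (a sanity check only):
\begin{verbatim}
import sympy as sp

p, q, s, d = sp.symbols('p q s d', positive=True)
gamma = (1/q - 1/s) / (1/q + 1/d - 1/p)        # gamma as in (eq:gamma)

# Exponent of h_K appearing in (eq:gagliardo.3):
exponent = (1 + d*(1/s - 1/p))*gamma + d*(1/s - 1/q)*(1 - gamma)
print(sp.simplify(exponent))                   # prints 0
\end{verbatim}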
\section{Parabolic interpolation inequalities for discontinuous elements}\label{sec:parabolic_interpolation}
\qquad In this section, we derive parabolic interpolation inequalities which will be employed in Section~\ref{sec:stablity} to establish the $L^\infty(I;L^2(\Omega)^d)$-stability of discontinuous Galerkin~schemes.\vspace{-1.5mm}
\begin{lemma}[Parabolic interpolation inequality]\label{lem:parabolic_interpolation}
Let $p,q,s\in [1,\infty)$ be such that $q\leq s$, let $\gamma\in [0,1]$ be such that \eqref{eq:gamma} is satisfied and let $k\in \mathbb{N}_0$. Then,
for every $v_h\in L^\infty(I;\mathbb{P}_k(\mathcal{T}_{h}))$, it holds
\begin{align*}
\|v_h\|_{L^r(I;L^s(\Omega))}\lesssim \bigg(\int_I{\|v_h(t)\|_{h,p}^p\,\mathrm{d}t}\bigg)^{\smash{\gamma/p}}\|v_h\|_{L^\infty(I;L^q(\Omega))}^{1-\gamma}\,,
\end{align*}
where $r=\smash{\frac{s(p(q+d)-dq)}{(s-q)d}}\in (1,\infty]$.\vspace{-2.5mm}
\end{lemma}
\begin{proof}
By assumption on $p,q,s\in [1,\infty)$ and $\gamma\in [0,1]$, cf. \eqref{eq:gamma}, we can apply the discrete Gagliardo--Nirenberg-type inequality (cf. Lemma \ref{lem:gagliardo}) to find for almost every $t\in I$ that
\begin{align}
\smash{\|v_h(t)\|_{L^s(\Omega)}\lesssim \|v_h(t)\|_{h,p}^\gamma\|v_h(t)\|_{L^q(\Omega)}^{1-\gamma}\,,}\label{eq:parabolic_interpolation}
\end{align}
where $\gamma=\smash{\frac{(s-q)dp}{s(p(q+d)-dq)}}\in [0,1]$. Next, we need to distinguish the cases $s>q$ and $s=q$:
\textit{Case $s>q$.} If $s>q$, then we have that $0<\gamma\leq 1<p$ and, consequently,~$r=\smash{\frac{p}{\gamma}}\in (1,\infty)$. Raising the inequality \eqref{eq:parabolic_interpolation} to the power $r\in (1,\infty)$, integrating with respect to $t\in I$, pulling out the $L^\infty$-norm of the second factor of the integrand, and taking the $r$-th root shows the claim.
\textit{Case $s=q$.} If $s = q$, using Hölder’s inequality, the claim follows with $r = \infty$ and $\gamma = 0$.
\end{proof}
\begin{corollary}\label{cor:parabolic_interpolation}
Let $p\in [\frac{2d}{d+2},\infty)$ and $k\in \mathbb{N}_0$. Then,
for every $v_h\in L^\infty(I;\mathbb{P}_k(\mathcal{T}_{h}))$, it holds
\begin{align*}
\|v_h\|_{L^{p_*}(Q)}\lesssim \bigg(\int_I{\|v_h(t)\|_{h,p}^p\,\mathrm{d}t}\bigg)^{\smash{\gamma/p}}\|v_h\|_{L^\infty(I;L^2(\Omega))}^{1-\gamma}\,,
\end{align*}
where $\gamma=\frac{d}{d+2}$ and $p_*=p\frac{d+2}{d}$.\vspace{-5mm}
\end{corollary}
\begin{proof}
We apply Lemma \ref{lem:parabolic_interpolation} with $q \!= \!2$ and $r \!= \!s\! =\! p_*$, noting that one has admissibility by \eqref{eq:admissibility},~if~${p \!\ge\! \frac{2d}{d+2}}$. In fact, this is obvious if $p \in [d, \infty)$. For $p \in [1, d)$, it holds $s=p_* \in [2, p^*]$ if and only if $p\ge \frac{2d}{d+2}$.
\end{proof}
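The exponent bookkeeping in the corollary can likewise be verified symbolically: with $q=2$ and $s=p_*=p\frac{d+2}{d}$, the interpolation parameter from \eqref{eq:gamma} collapses to $\gamma=\frac{d}{d+2}$, and $r=\frac{p}{\gamma}=p_*$. A minimal sympy check (a sanity check only):
\begin{verbatim}
import sympy as sp

p, d = sp.symbols('p d', positive=True)
q = 2
s = p * (d + 2) / d                          # s = p_* = p(d+2)/d

gamma = (sp.Rational(1, q) - 1/s) / (sp.Rational(1, q) + 1/d - 1/p)
print(sp.simplify(gamma - d / (d + 2)))      # 0: gamma = d/(d+2)
print(sp.simplify(p / gamma - s))            # 0: r = p/gamma = p_*
\end{verbatim}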
\begin{remark}
Applying the results we have presented so far component-wise, one can obtain analogous statements for vector-valued functions. In this case, one defines the DG~norm~of~$\vec{w}_h\in W^{1,p}(\mathcal{T}_{h})^d$ as:
\begin{align*}
\smash{\|\vec{w}_h\|_{h,p}\coloneqq\Big(\|\nabla_h \vec{w}_h\|_{L^p(\Omega)}^p+\big\|h^{-\frac{1}{p'}}_\Gamma\jump{\vec{w}_h \otimes\vec{n}}\big\|_{L^p(\Gamma_h)}^p\Big)^{\smash{1/p}}}\,.
\end{align*}
\end{remark}
\begin{remark}
Consider the alternative norm for $\vec{w}_h \in W^{1,p}(\mathcal{T}_{h})^d$:
\begin{align*}
\smash{{|||}\bm{w}_h{|||}_{h,p} \coloneqq \Big( \|\tens{D}_h(\bm{w}_h)\|^p_{L^p(\Omega)}+\|h_\Gamma^{-\frac{1}{p'}}\jump{\bm{w}_h\otimes \bm{n}}\|^p_{L^p(\Gamma_h^{i})}
+\|h_\Gamma^{-\frac{1}{p'}}\bm{w}_h\cdot \bm{n}\|^p_{L^p(\Gamma_h^{\partial})}+\|(\bm{w}_h)_\tau\|^p_{L^p(\Gamma_h^{\partial})} \Big)^{\smash{1/p}}\,,}
\end{align*}
where only the normal component $\vec{w}_h \cdot \vec{n}$ is penalised on $\Gamma_h^{\partial}$; here, $(\vec{w}_h)_\tau$ denotes the tangential part of $\vec{w}_h$ on the boundary, i.e., $(\vec{w}_h)_\tau \coloneqq \vec{w}_h - (\vec{w}_h\cdot \vec{n})\vec{n}$. If one~manages~to~prove~the~existence of a quasi-interpolation operator $Q_h^{\boldsymbol{n}}\colon \mathbb{P}_k(\mathcal{T}_{h})^d\to W^{1,\infty}(\Omega)^d$ that has stability and approximation properties analogous to those described in Lemma \ref{lem:scott_zhang_stable} and Lemma \ref{lem:scott_zhang_approx}, but using the norm ${|||}\cdot{|||}_{h,p}$, then all the results presented in this work would also apply to the problem with Navier's~slip~boundary~conditions:
\begin{align*}
\begin{aligned}
\vec{u}\cdot \vec{n} &= 0 &\quad \text{on }\partial \Omega\,,\\
-(\tens{S}\vec{n})_\tau &= \gamma \vec{u}_\tau &\quad \text{on }\partial\Omega\,,
\end{aligned}
\end{align*}
where $\gamma>0$ is a parameter. Such a DG method enforces the normal condition $\vec{u}\cdot\vec{n}=0$ weakly, which has been observed to be advantageous in practice; see, e.g., \cite{GS.2022}. To the best of our knowledge, such an operator is not yet available in the literature.
\end{remark}
\section{Stability of DG schemes for non-Newtonian fluids}\label{sec:stablity}
\subsection{Continuous model and its discretisation}
\qquad Let us assume that the initial datum satisfies $\vec{u}_0 \in L^2_{\rmdiv}(\Omega)^d$ and, for simplicity, that the forcing function satisfies $\bm{f}\in C^0(I;L^{p'}(\Omega)^d)$. In the weak formulation of problem \eqref{eq:continuous_PDE}, we look for a triplet of functions
\begin{gather*}
\tens{S} \in L^{p'}(Q)^{d\times d}_{\mathop{\mathrm{sym}}\nolimits, \mathop{\mathrm{tr}}\nolimits}\,,\quad
\vec{u} \in L^{p}(I;W^{1,p}_0(\Omega)^d) \cap L^\infty(I;L^2(\Omega)^d)\,, \quad
\pi \in H^{-1}(I;\Lmean{p'})\,,
\end{gather*}
such that for every $\vec{v}\in C^\infty_0(\Omega)^d$, $\phi \in C^\infty_0([0,T))$, and $ q\in C^\infty_0(Q)$, it holds
\begin{subequations}\label{eq:weak_PDE}
\begin{align}
\tens{G}(\tens{S}, \tens{D}(\vec{u})) &= \bm{0} \quad\text{a.e. in }Q\,, \\
-\int_Q \vec{u} \cdot \vec{v} \partial_t \phi \,{\rm d} t{\rm d} x
-\int_\Omega \vec{u}_0 \cdot \vec{v} \phi(0) \,{\rm d} x
+ \int_Q [\tens{S}-\vec{u} \otimes \vec{u}-\pi\mathbb{I}_d] \fp \tens{D}(\vec{v}) \phi \,{\rm d} t{\rm d} x
&= \int_Q \bm{f}\cdot \vec{v} \phi \,{\rm d} t{\rm d} x\,,\\
-\int_Q q \rmdiv \vec{u}\,{\rm d} t{\rm d} x &= 0\,.
\end{align}
\end{subequations}
Note that the exponent $p>1$ is determined by the coercivity condition \eqref{eq:coercivity}. The existence of global weak solutions for large data (assuming $p>\frac{2d}{d+2}$) under monotonicity assumptions for $\tens{G}$ was proved in \cite{BGMS.2012} by working with the graph induced by $\tens{G}$, and later in \cite{BMM.2021} by working with the function $\tens{G}$ directly. In the non-monotone case, existence of weak solutions is not known, but numerical experiments seem to produce reasonable results \cite{JMPT.2019}.
Let us fix polynomial degrees $k_{\vec{u}}, k_{\pi}\in \mathbb{N}$ for the velocity and pressure approximations, respectively; we assume that $k_{\vec{u}}\geq 1$ and $k_{\pi} \leq k_{\vec{u}}$. The spaces corresponding to the discrete approximations~are,~then, defined as
\begin{equation*}
\mathbb{V}^h \coloneqq \mathbb{P}_{k_{\vec{u}}}(\mathcal{T}_{h})^d\,,
\qquad
\mathbb{M}^h \coloneqq \mathbb{P}_{k_{\pi}}(\mathcal{T}_{h}) \cap \Lmean{p'}\,.
\end{equation*}
The space $\mathbb{M}^h$ is equipped with the norm $\norm{\cdot}_{\Lp{p'}}$, while the velocity space $\mathbb{V}^h$~is~equipped~with~the~norm
\begin{equation}\label{eq:DG_norm2}
\norm{\cdot}_{h,p}
\coloneqq \big(
\norm{\tens{D}_h (\cdot)}_{\Lp{p}}^p
+
|\cdot|^p_{\Gamma_h,p}\big)^{1/p}\,,
\end{equation}
where the jump semi-norm for vector-valued functions $\vec{v}_h\in \mathbb{V}^h$ is defined as
\begin{equation}\label{eq:jump_semi-norm2}
|\vec{v}_h|^p_{\Gamma_h,p} \coloneqq \int_{\Gamma_h} h_\Gamma^{1-p} |\jump{\vec{v}_h\otimes \vec{n}}|^p\,{\rm d} s.
\end{equation}
It can be shown (see \cite[Eq. (1.19)]{B.2003} or \cite[Prop.\ 2.4]{KR.2022}) that for every $\vec{v}_h\in \mathbb{V}^h$, there holds
the discrete Korn-type inequality
\begin{equation}\label{eq:korn}
\begin{gathered}
\|\vec{v}_h\|_{L^p(\Omega)} + \|\nabla_h \vec{v}_h\|_{L^p(\Omega)} \lesssim \|\vec{v}_h\|_{h,p}\,.
\end{gathered}
\end{equation}
Before we present the discretised system, it will be useful to introduce the~notion~of~discrete~gradients. For $l\geq 0$, let us define a discrete gradient operator $\mathcal{G}_h^l\colon \mathbb{V}^h \to \mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})^{d\times d}$ through the relation
\begin{equation}\label{eq:discrete_gradient}
\mathcal{G}_h^l(\vec{v}_h) \coloneqq \nabla_h \vec{v}_h - R^l_h(\vec{v}_h)\quad\text{ in }\mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})^{d\times d}\,,
\end{equation}
where $R^l_h(\vec{v}_h) \in \mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$, for every $\bm{t}_h \in \mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$, is defined through
\begin{equation}\label{eq:lifting_jumps}
\int_\Omega R_h^l(\vec{v}_h) \fp \bm{t}_h\,{\rm d} x
=
\int_{\Gamma_h} \left[\!\left[ \vec{v}_h\otimes \vec{n} \right]\!\right] \fp \avg{\bm{t}_h} \,{\rm d} s\,.
\end{equation}
While the natural choice seems to be $l\!=\!k_{\vec{u}}\!-\!1\!\in\! \mathbb{N}_0$ (this is the default whenever the index~${l\!\in \!\mathbb{N}_0}$~is~omitted), the number $l\in \mathbb{N}_0$ is a parameter and can be chosen freely; for instance, if $l=0$, the implementation becomes easier, as $R^l_h$ can then be computed through element-wise averages; on the other hand, taking $l=k_{\vec{u}}+1\in\mathbb{N}$ seems to be advantageous, in the linear case at least, in that the method does not require jump penalisation \cite{JNS.2016}. We will shortly explore yet another choice when defining the discrete convective term. Note that if $\bm{t}_h \in C_0^\infty(\Omega)^{d\times d}$, then \eqref{eq:discrete_gradient} tested against $\bm{t}_h$ yields precisely the action of the distributional gradient of $\vec{v}_h$. It is possible to prove stability of the discrete gradient (see e.g.\ \cite[Prop.\ 2.1]{dPE.2010} or \cite[Lm.\ 7]{BO.2009}), i.e., that for every $\vec{v}_h \in \mathbb{V}^h$, it holds
\begin{equation}\label{eq:discrete_gradient_stability}
\|\mathcal{G}_h^l(\vec{v}_h)\|_{L^p(\Omega)} \lesssim \|\vec{v}_h\|_{h,p} \,.
\end{equation}
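As an illustration of how the lifting acts (and of the remark above that for $l=0$ it reduces to element-wise averages of the jumps), the following is a minimal one-dimensional Python sketch with $\Omega=(0,1)$, $k_{\vec{u}}=1$, and $l=0$, for an arbitrarily chosen discontinuous piece-wise linear function; it is a toy model of \eqref{eq:discrete_gradient}--\eqref{eq:lifting_jumps}, not the actual solver:
\begin{verbatim}
import numpy as np

N = 4
x = np.linspace(0.0, 1.0, N + 1)     # mesh nodes on Omega = (0,1)
# v_h piecewise linear, possibly discontinuous:
# row j holds the values of v_h at the left/right end of element j.
v = np.array([[0.0, 1.0], [0.5, 1.5], [1.5, 2.0], [2.5, 3.0]])

def lifting(v):
    # R_h^0(v_h)|_K: test (eq:lifting_jumps) with indicators of each element;
    # interior faces contribute with average weight 1/2, boundary faces
    # (where the average is the trace itself) with weight 1.
    R = np.zeros(N)
    for j in range(N):
        hK = x[j + 1] - x[j]
        jl = (v[j - 1, 1] - v[j, 0]) if j > 0 else -v[0, 0]   # jump [v n]
        jr = (v[j, 1] - v[j + 1, 0]) if j < N - 1 else v[N - 1, 1]
        wl = 0.5 if j > 0 else 1.0
        wr = 0.5 if j < N - 1 else 1.0
        R[j] = (wl * jl + wr * jr) / hK
    return R

grad_h = (v[:, 1] - v[:, 0]) / np.diff(x)   # broken (element-wise) gradient
print(grad_h - lifting(v))                  # discrete gradient G_h^0(v_h)

# Sanity check: for a continuous v_h vanishing on the boundary all jumps
# vanish, so the lifting is zero and G_h^0 equals the broken gradient.
w = np.array([[0.0, 1.0], [1.0, 1.5], [1.5, 1.0], [1.0, 0.0]])
print(np.allclose(lifting(w), 0.0))         # True
\end{verbatim}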
The discrete symmetric gradient $\mathcal{G}^l_{h,\mathop{\mathrm{sym}}\nolimits}\colon \mathbb{V}^h\to \mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, for every $\vec{v}_h\in \mathbb{V}^h$,
is defined through
\begin{align}
\mathcal{G}^l_{h,\mathop{\mathrm{sym}}\nolimits}(\vec{v}_h) \coloneqq \tens{D}_h(\vec{v}_h) - R^l_{h,\mathop{\mathrm{sym}}\nolimits}(\vec{v}_h)\quad\text{ in }\mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}\,,
\end{align} where now $ R_{h,\mathop{\mathrm{sym}}\nolimits}^l(\vec{v}_h)\in \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, for every $\bm{t}_h \in \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}$, is defined through
\begin{equation}
\int_\Omega R_{h,\mathop{\mathrm{sym}}\nolimits}^l(\vec{v}_h) \fp \bm{t}_h \,{\rm d} x
=
\int_{\Gamma_h} \left[\!\left[ \vec{v}_h\otimes \vec{n} \right]\!\right] \fp \avg{\bm{t}_h}\,{\rm d} s\,.
\end{equation}
Similarly, one can define a discrete divergence operator $\mathcal{D}_h^l\colon \mathbb{V}^h\to \mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})$
by taking the trace, i.e.,
for every $\vec{v}_h\in \mathbb{V}^h$, we define
\begin{align}
\mathcal{D}_h^l(\vec{v}_h) \coloneqq \mathop{\mathrm{tr}}\nolimits(\mathcal{G}_h^l(\vec{v}_h)) = \rmdiv_h(\vec{v}_h) - \mathop{\mathrm{tr}}\nolimits(R^l_h(\vec{v}_h))\quad\text{ in }\mathbb{P}_{\max\{k_{\vec{u}}-1,l\}}(\mathcal{T}_{h})\,.
\end{align}
The trace of $R^l_h(\vec{v}_h)\!\in \!\mathbb{P}_l(\mathcal{T}_{h})^{d\times d}$ for $\vec{v}_h\!\in\! \mathbb{V}^h$ can be computed from \eqref{eq:lifting_jumps} by~taking~${\bm{t}_h\! =\! q_h \mathbb{I}_d\!\in\! \mathbb{P}_l(\mathcal{T}_{h})_{\mathop{\mathrm{sym}}\nolimits}^{d\times d}}$, where $q_h \in \mathbb{P}_l(\mathcal{T}_{h})$ is arbitrary and $\mathbb{I}_d\in \mathbb{R}^{d\times d}$ is the identity matrix. In particular, for every $q_h \in \mathbb{P}_l(\mathcal{T}_{h})$, we can write
\begin{equation}\label{eq:discrete_divergence}
\int_\Omega q_h \mathcal{D}_h^l(\vec{v}_h)\,{\rm d} x
=
\int_\Omega q_h \rmdiv_h\vec{v}_h\,{\rm d} x
- \int_{\Gamma_h} \jump{\vec{v}_h \cdot \vec{n}} \avg{q_h}\,{\rm d} s\,.
\end{equation}
Whenever the index $l\in \mathbb{N}_0$ is omitted, it is meant that $l= k_{\pi}$, in which case \eqref{eq:discrete_divergence} holds for all $q_h\in \mathbb{M}^h$.
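For completeness, the facet identity behind this trace computation is the following one-line chain, valid pointwise on $\Gamma_h$:
\begin{equation*}
\jump{\vec{v}_h\otimes\vec{n}} \fp \avg{q_h \mathbb{I}_d}
= \avg{q_h}\, \mathop{\mathrm{tr}}\nolimits \jump{\vec{v}_h\otimes\vec{n}}
= \avg{q_h}\, \jump{\vec{v}_h\cdot\vec{n}}\,,
\end{equation*}
which, inserted into \eqref{eq:lifting_jumps}, yields precisely \eqref{eq:discrete_divergence}.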
Regarding the convective term, we wish to preserve the following skew-symmetry~property~that~is valid at the continuous level: for every $\vec{u},\vec{v},\vec{w}\in C^\infty_0(\Omega)^d$, where $\rmdiv\vec{u} = 0$ in $\Omega$, it holds
\begin{equation}
\int_\Omega (\vec{v}\otimes \vec{u})\fp \nabla \vec{w} \,{\rm d} x
=
-\int_\Omega (\vec{w}\otimes \vec{u})\fp \nabla \vec{v}\,{\rm d} x\,.
\end{equation}
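This identity is a one-line consequence of the product rule and integration by parts: for such fields,
\begin{equation*}
\int_\Omega (\vec{v}\otimes \vec{u})\fp \nabla \vec{w}\,{\rm d} x
+ \int_\Omega (\vec{w}\otimes \vec{u})\fp \nabla \vec{v}\,{\rm d} x
= \int_\Omega \vec{u}\cdot \nabla(\vec{v}\cdot\vec{w})\,{\rm d} x
= -\int_\Omega (\rmdiv \vec{u})\,(\vec{v}\cdot\vec{w})\,{\rm d} x
= 0\,.
\end{equation*}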
In the case when discretely divergence-free functions are also point-wise divergence-free (as is, e.g., the case when $\mathbb{V}^h$ is $H(\rmdiv;\Omega)$-conforming and $\mathbb{M}^h = \rmdiv\mathbb{V}^h$),~for~every~${\vec{u}_h,\vec{v}_h,\vec{w}_h\in \mathbb{V}^h}$, we simply define
\begin{align}
\begin{aligned}
\hat{\mathcal{C}}_h[\vec{u}_h,\vec{v}_h,\vec{w}_h] &\coloneqq -\int_\Omega (\vec{v}_h\otimes \vec{u}_h)\fp \mathcal{G}_h^{2k_{\vec{u}}}(\vec{w}_h)\,{\rm d} x
\\&= -\int_\Omega (\vec{v}_h \otimes \vec{u}_h) \fp \nabla_h\vec{w}_h \,{\rm d} x
+ \int_{\Gamma_h} \avg{\vec{v}_h \otimes \vec{u}_h}\fp \jump{\vec{w}_h \otimes \vec{n}}\,{\rm d} s\,.
\end{aligned}
\end{align}
The parameter $2k_{\vec{u}}\in \mathbb{N}$ in the discrete gradient could be chosen differently, but with this choice one has the second equality (since $\vec{v}_h\otimes\vec{u}_h \in \mathbb{P}_{2k_{\vec{u}}}(\mathcal{T}_{h})^{d\times d}$, the lifting identity \eqref{eq:lifting_jumps} applies with $\bm{t}_h = \vec{v}_h\otimes\vec{u}_h$), which is straightforward to implement in modern software packages. In general, we then define the skew-symmetric convective term as
\begin{equation}\label{eq:convective_term}
\mathcal{C}_h[\vec{u}_h, \vec{v}_h, \vec{w}_h] \coloneqq
\frac{1}{2}\Big[ \hat{\mathcal{C}}_h[\vec{u}_h, \vec{v}_h, \vec{w}_h]
- \hat{\mathcal{C}}_h[\vec{u}_h, \vec{w}_h, \vec{v}_h]
\Big].
\end{equation}
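By construction, this antisymmetrisation guarantees the discrete counterpart of the energy neutrality of convection, without requiring any discrete divergence constraint:
\begin{equation*}
\mathcal{C}_h[\vec{u}_h,\vec{v}_h,\vec{v}_h]
= \frac{1}{2}\Big[\hat{\mathcal{C}}_h[\vec{u}_h,\vec{v}_h,\vec{v}_h] - \hat{\mathcal{C}}_h[\vec{u}_h,\vec{v}_h,\vec{v}_h]\Big]
= 0
\qquad \text{ for all }\, \vec{u}_h,\vec{v}_h\in \mathbb{V}^h\,;
\end{equation*}
this is the skew-symmetry property invoked in the a priori estimates below.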
Let us now turn our attention towards the time discretisation: we proceed similarly as in \cite{MN.2006,EG21c}. Let $\{\mathcal{I}_{\tau}\}_{\tau>0}$ be a family of partitions of the closed time interval $[0,T]$ of the form $ \{I_j\}_{j=1}^{N_\tau}= \{(t_{j-1}, t_j]\}_{j=1}^{N_\tau}$, for some $N_\tau \in \mathbb{N}$, associated to a (maximal) time step $\tau \coloneqq \max_{j\in\{1,\ldots ,N_\tau\}} (t_j - t_{j-1})$. We will assume that the family of time partitions is quasi-uniform in the sense that there is a number $\theta \in (0,1]$ (independent of $\tau>0$) such that
\begin{equation}\label{eq:time_quasiuniform}
\theta \tau \leq \min_{j\in\{1,\ldots, N_\tau\}}(t_j - t_{j-1})\,.
\end{equation}
We will denote the local space-time cylinders as $Q_j \coloneqq I_j \times \Omega$ for all $j=1,\ldots, N_\tau$. Then, for a given Banach space $X$ and $k\in \mathbb{N}_0$, we define the space of broken (in time) polynomials of degree at most $k$ with values in $X$ as
\begin{equation}
\mathbb{P}_k(\mathcal{I}_{\tau};X) \coloneqq \Big\{v\colon [0,T]\to X \mid v|_{I_j}\in \mathbb{P}_k(I_j;X) \text{ for all } j=1,\ldots ,N_\tau \Big\}\,.
\end{equation}
Note that the functions in $\mathbb{P}_k(\mathcal{I}_{\tau};X)$ are defined at $t=0$ and are left-continuous,~in~particular,~implying that $v_\tau(t_j)= v_\tau(t_j^-)\coloneqq \lim_{\smash{s\to t_j^-}} v_\tau(s)$ at the partition points. For a given function $v_\tau\in \mathbb{P}_k(\mathcal{I}_{\tau};X)$, we define the jump at $t_{j-1}$ for every $j\in\{1,\ldots, N_\tau\}$ as
\begin{align}
\begin{aligned}
\jump{v_\tau}_{j-1} &\coloneqq v_\tau(t_{j-1}^+) - v_\tau(t_{j-1})\,,\\
v_\tau(t_{j-1}^+) &\coloneqq \lim_{s\to t_{j-1}^+} v_\tau(s)\,.
\end{aligned}
\end{align}
Fix a polynomial degree $k_t\in \mathbb{N}_0$ for the time approximation; in the discrete formulation, we will look for a velocity and pressure in the spaces
\begin{equation}
\vec{u}_{h,\tau} \in \mathbb{V}^{h,\tau} \coloneqq \mathbb{P}_{k_t}(\mathcal{I}_{\tau}; \mathbb{V}^h)\,,
\qquad
p_{h,\tau} \in \mathbb{M}^{h,\tau} \coloneqq \mathbb{P}_{k_t}(\mathcal{I}_{\tau}; \mathbb{M}^h)\,.
\end{equation}
Now, let $\{\xi_l\}_{l=1}^{k_t+1}$ and $\{\omega_l\}_{l=1}^{k_t +1}$ be the (right-sided) points and weights, respectively, corresponding to the Gauss--Radau quadrature of degree $2k_t\in \mathbb{N}_0$ on the reference interval $\hat{I}\coloneqq (-1,1]$. By applying the transformations $\xi \mapsto \frac{1}{2}(t_{j} + t_{j-1}) + \frac{\xi}{2}(t_j - t_{j-1})$, $\omega \mapsto \frac{\omega}{2}(t_j - t_{j-1})$, one can then obtain a quadrature $\{(\xi^j_l,\omega^j_l)\}_{l=1}^{k_t+1}$ on $I_j$ for all $j\!\in\! \{1,\ldots, N_\tau\}$. This can be used to define the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$, for every $f\in C^0(\overline{I})$, as
\begin{equation}\label{eq:discrete_time_measure}
\int_0^T f(t)\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
\coloneqq \sum_{j=1}^{N_\tau} \int_{I_j} f(t)\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
\coloneqq \sum_{j=1}^{N_\tau} \sum_{l=1}^{k_t +1} \omega^j_{l} f(\xi^j_l)\,.
\end{equation}
Here, note the abuse of notation in that we employ the same symbol $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ for the integral on all the subintervals $I_j$, $j=1,\ldots, N_\tau$.
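For orientation, the two lowest-order instances of the rule on $\hat{I}=(-1,1]$ are as follows (these are standard values, recorded here for convenience):
\begin{align*}
k_t = 0&:\quad \xi_1 = 1,\ \omega_1 = 2\,,
\qquad\text{ so that }\quad
\int_{I_j} f(t)\,\mu^{\mathrm{GR}}_{1}(\d t) = (t_j - t_{j-1})\, f(t_j)\,,\\
k_t = 1&:\quad \xi_1 = -\tfrac{1}{3},\ \xi_2 = 1\,,\quad \omega_1 = \tfrac{3}{2},\ \omega_2 = \tfrac{1}{2}\,.
\end{align*}
In particular, for $k_t=0$ the discrete measure is simply the right-endpoint rule, in line with the exactness degree $2k_t$.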
We are, eventually, able to introduce the discretisation of \eqref{eq:weak_PDE}. In the discrete formulation, we look for $(\vec{u}_{h,\tau},p_{h,\tau})^\top\in \mathbb{V}^{h,\tau} \times \mathbb{M}^{h,\tau}$ such that for every $(\vec{v}_{h,\tau},q_{h,\tau})^\top\in\mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$, it holds
\begin{subequations}\label{eq:discrete_PDE}
\begin{gather}
\int_Q q_{h,\tau} \mathcal{D}_h (\vec{u}_{h,\tau})\,{\rm d} t{\rm d} x + \int_I S^{\pi}_h(p_{h,\tau}; q_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) = 0
\label{eq:discrete_mass}\\
\sum_{j=1}^{N_\tau}\left[
\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau} \,{\rm d} t{\rm d} x
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1}\cdot \vec{v}_{h,\tau}(t^+_{j-1})\,{\rm d} x
+ \int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \vec{v}_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \right.\notag \\
\left.
+ \int_{I_j} \mathcal{C}_h[\vec{u}_{h,\tau},\vec{u}_{h,\tau},\vec{v}_{h,\tau}]\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
- \int_{Q_j} p_{h,\tau}\mathcal{D}_h(\vec{v}_{h,\tau}) \,{\rm d} t{\rm d} x
\right]
=
\int_Q \bm{f}\cdot \vec{v}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x\,.
\label{eq:discrete_momentum}
\end{gather}
Here, the initial condition is set as the $L^2$-orthogonal projection onto the corresponding discrete space, i.e., $\vec{u}_{h,\tau}(0)\coloneqq\Pi_{\mathbb{V}^h}\vec{u}_0\in \mathbb{V}^h$. The pressure stabilisation term above, for every $p_h,q_h \in \mathbb{M}^h$, is defined as
\begin{equation}\label{eq:pressure_stabilisation}
S^{\pi}_h(p_{h}, q_{h}) \coloneqq \int_{\Gamma_h^{i}} h_{\Gamma}^{p'-1} |\jump{p_{h}\vec{n} }|^{p'-2}\jump{p_h \vec{n}} \cdot \jump{q_h\vec{n}}\, {\rm d} s\,.
\end{equation}
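In the Newtonian setting $p = p' = 2$, this reduces to the familiar linear pressure-jump stabilisation,
\begin{equation*}
S^{\pi}_h(p_{h}, q_{h}) = \int_{\Gamma_h^{i}} h_{\Gamma}\, \jump{p_{h}\vec{n}} \cdot \jump{q_h\vec{n}}\,{\rm d} s\,,
\end{equation*}
since then $h_\Gamma^{p'-1} = h_\Gamma$ and the factor $|\jump{p_h\vec{n}}|^{p'-2}$ is identically one.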
For some $l\in \mathbb{N}_0$, the discretisation of the viscous term, for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$, is defined as
\begin{equation}\label{eq:viscous_term}
\mathcal{A}_h(\vec{v}_h; \vec{w}_h) \coloneqq \int_\Omega \hat{\tens{S}}_h \fp \mathcal{G}_h^l(\vec{w}_h)\,{\rm d} x
+ S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h)\,,
\end{equation}
where $\hat{\tens{S}}_h\colon \Omega \to \mathbb{R}^{d \times d}_{\sym}$ is such that
\begin{equation}\label{eq:discrete_implicit_relation}
\tens{G}(\hat{\tens{S}}_h, \hat{\mathcal{G}}_h(\vec{v}_{h})) = \bm{0}
\qquad
\text{in }\Omega\,,
\end{equation}
\end{subequations}
where $\hat{\mathcal{G}}_h \in \{\nabla_h, \mathcal{G}_h^l\}$. The velocity stabilisation, for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$, is defined as
\begin{equation}\label{eq:velocity_stabilisation}
S^{\vec{u}}_h(\vec{v}_{h}, \vec{w}_{h}) \coloneqq \alpha \int_{\Gamma_h^{i}} h_{\Gamma}^{1-p} |\jump{\vec{v}_{h}\otimes \vec{n} }|^{p-2}\jump{\vec{v}_h\otimes \vec{n}} \fp \jump{\vec{w}_h \otimes \vec{n}}\, {\rm d} s\,,
\end{equation}
where $\alpha>0$ is a stabilisation parameter. This choice ensures, thanks to the coercivity condition \eqref{eq:coercivity}, that the discretisation of the viscous term is coercive (in general, for large enough $\alpha>0$), i.e., for every $\vec{v}_h \in \mathbb{V}^h$, it holds
\begin{equation}\label{eq:discrete_coercivity_A}
\norm{\smash{\hat{\tens{S}}_h}}^{p'}_{\Lp{p'}} +
\|\vec{v}_h\|^p_{h,p} \lesssim \mathcal{A}_h(\vec{v}_h;\vec{v}_h)\,.
\end{equation}
Since the discretised system \eqref{eq:discrete_PDE} makes use of discontinuous polynomials in time, the method can be localised; in practice, the problem is solved on the interval $I_j$ using the information from the (already computed) solution on the previous interval $I_{j-1}$. A few additional remarks are in order:
\noindent
\textbf{Computing the constitutive relation.} In practice, it is not strictly necessary to compute the function $\smash{\hat{\tens{S}}_{h,\tau}} \colon Q\to \mathbb{R}^{d \times d}_{\sym}$ corresponding to $\vec{u}_{h,\tau}\in \mathbb{V}^{h,\tau}$ from \eqref{eq:discrete_implicit_relation}. In fact, with modern software tools it is possible to work out the dependence of $\smash{\hat{\tens{S}}_{h,\tau}}$ on $\vec{u}_{h,\tau}$ without~having~to~compute~it~explicitly~(see,~e.g.,~\cite{BH.2021}). For explicit constitutive relations of the type $\tens{S} = \tens{\mathcal{S}}(\tens{D}(\vec{u}))$, such as \eqref{eq:power_lawA}, this is of course not needed, since one can then write for every $\vec{v}_h,\vec{w}_h \in \mathbb{V}^h$
\begin{equation}\label{eq:viscous_term_explicit}
\mathcal{A}_h(\vec{v}_h; \vec{w}_h) \coloneqq \int_\Omega \tens{\mathcal{S}}(\hat{\mathcal{G}}_h(\vec{v}_h)) \fp \mathcal{G}_h^l(\vec{w}_h)\,{\rm d} x
+ S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h)\,.
\end{equation}
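For instance, for a power-law model of the form $\tens{\mathcal{S}}(\tens{D}) = \nu_0\, |\tens{D}|^{p-2}\tens{D}$ with a constant $\nu_0>0$ (a generic form written here purely for illustration; the precise constants in \eqref{eq:power_lawA} may differ), the integrand becomes fully explicit:
\begin{equation*}
\mathcal{A}_h(\vec{v}_h; \vec{w}_h) = \int_\Omega \nu_0\, |\hat{\mathcal{G}}_h(\vec{v}_h)|^{p-2} \hat{\mathcal{G}}_h(\vec{v}_h) \fp \mathcal{G}_h^l(\vec{w}_h)\,{\rm d} x
+ S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h)\,.
\end{equation*}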
Alternatively, in case a discrete stress is a quantity of interest (or for explicit relations of the type $\tens{D}(\vec{u}) = \tens{\mathcal{D}}(\tens{S})$ such as \eqref{eq:non_monotone}), one can instead employ a 3-field formulation for the variables $(\tens{S}_{h,\tau},\vec{u}_{h,\tau},p_{h,\tau})^\top$ in the spirit of \cite{FGS.2020}; the results of this work will still hold in that case.
\noindent
\textbf{Various DG methods.} We presented two choices for a discrete gradient in the constitutive relation \eqref{eq:discrete_implicit_relation}. The choice $\smash{\hat{\mathcal{G}}_h}= \mathcal{G}_h^l$, e.g., would lead to a method of Local Discontinuous Galerkin~(LDG)~type. On the other hand, choosing $\smash{\hat{\mathcal{G}}_h} = \nabla_h$ leads to an Incomplete Interior Penalty (IIDG) method, which can be advantageous for non-linear problems of the type considered here, since one would not need to explicitly compute the lifting terms $R^l_h(\vec{u}_{h,\tau}),R^l_h(\vec{v}_{h,\tau})$ in the implementation, thanks to the fact that the full discrete gradient $\mathcal{G}_h^l$ would appear on the test function exclusively (and, therefore, linearly), and so the definition \eqref{eq:lifting_jumps} can be applied directly. Regarding the stabilisation term, one could consider instead
\begin{equation}
\hat{S}^{\vec{u}}_h(\vec{v}_h; \vec{w}_h) \coloneqq S^{\vec{u}}_h(\vec{v}_h; \vec{w}_h) - \int_\Omega |R^l_h(\vec{v}_h)|^{p-2}R^l_h(\vec{v}_h) \fp R^l_h(\vec{w}_h)\,{\rm d} x
\quad\text{ for all }
\vec{v}_h,\vec{w}_h\in \mathbb{V}^h\,,
\end{equation}
which leads to Symmetric Interior Penalty (SIP) methods (cf.\ \cite{MRET.2018}), in the sense that it reduces to the traditional SIP method in the Newtonian case.
\noindent
\textbf{Gauss--Radau Quadrature.} The discrete time measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ should, in principle, appear in all the time integrals in \eqref{eq:discrete_momentum}; this implies, following the reasoning from \cite{MN.2006,EG21c}, that the method presented here is equivalent to a RadauIIA Runge--Kutta method, which can be readily implemented with many existing software libraries. Note that since the quadrature is exact up to degree $2k_t$, we could omit it from several terms, such as $\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)=\int_{Q_j} \partial_t \vec{u}_{h,\tau}\cdot \vec{v}_{h,\tau}\, {\rm d} t{\rm d} x$.
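To illustrate the Runge--Kutta connection in the lowest-order case (a sketch; write $\vec{u}^j \coloneqq \vec{u}_{h,\tau}|_{I_j}$, $p^j \coloneqq p_{h,\tau}|_{I_j}$ and $\tau_j \coloneqq t_j - t_{j-1}$): for $k_t = 0$ the ansatz is piecewise constant in time, so $\partial_t \vec{u}_{h,\tau} \equiv \bm{0}$ on $I_j$, the quadrature is the right-endpoint rule, and, after dividing by $\tau_j$, \eqref{eq:discrete_momentum} localises to
\begin{equation*}
\int_\Omega \frac{\vec{u}^j - \vec{u}^{j-1}}{\tau_j}\cdot \vec{v}_h\,{\rm d} x
+ \mathcal{A}_h(\vec{u}^j; \vec{v}_h)
+ \mathcal{C}_h[\vec{u}^j,\vec{u}^j,\vec{v}_h]
- \int_\Omega p^j\, \mathcal{D}_h(\vec{v}_h)\,{\rm d} x
= \int_\Omega \bm{f}(t_j)\cdot \vec{v}_h\,{\rm d} x
\end{equation*}
for all $\vec{v}_h\in\mathbb{V}^h$, i.e., precisely the backward Euler (one-stage RadauIIA) step.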
\noindent
\textbf{Divergence constraint and pressure stabilisation.} The motivation behind the pressure stabilisation $S^{\pi}_h$ is the validity of the following inf-sup condition
\begin{equation}\label{eq:infsup}
\norm{q_h}_{\Lp{p'}}
\lesssim
\sup_{\vec{w}_h \in \mathbb{V}^h}\frac{\int_\Omega q_h \mathcal{D}_h(\vec{w}_h)\,{\rm d} x}{\norm{\vec{w}_h}_{h,p}}
+ S^{\pi}_h(q_h;q_h)^{\frac{1}{p'}}
\qquad
\text{ for all }\, q_h\in \mathbb{M}^h\,,
\end{equation}
whose proof can be found in Appendix \ref{appendix:infsup}. In certain cases, this stabilisation~term~can~be~avoided,~e.g., when matching meshes are used and the pressure is sought in a continuous subspace (see e.g.\ \cite{KR.2022}). Naturally, also for divergence-conforming elements (i.e., when $\mathbb{V}^h \subset H(\rmdiv;\Omega)$ and $\mathbb{M}^h = \rmdiv\mathbb{V}^h$), the stabilisation term is not needed and the divergence constraint \eqref{eq:discrete_mass} simply becomes
\begin{equation}
\int_Q q_{h,\tau} \rmdiv\vec{u}_{h,\tau}\,{\rm d} t{\rm d} x = 0
\qquad
\text{ for all }\, q_{h,\tau}\in \mathbb{M}^{h,\tau}\,.
\end{equation}
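In that case one may, in particular, test with $q_{h,\tau} = \rmdiv\vec{u}_{h,\tau} \in \mathbb{M}^{h,\tau}$ (a one-line observation; this is admissible since $\mathbb{M}^h = \rmdiv\mathbb{V}^h$ and the time degrees match), which gives
\begin{equation*}
\int_Q |\rmdiv\vec{u}_{h,\tau}|^2\,{\rm d} t{\rm d} x = 0
\qquad\Longrightarrow\qquad
\rmdiv\vec{u}_{h,\tau} = 0 \ \text{ a.e.\ in } Q\,,
\end{equation*}
i.e., the discrete velocity is exactly (point-wise) divergence-free.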
\begin{remark}[Method without quadrature]
Sometimes the $\mathrm{DG}(k_t)$ time discretisation method is defined with the usual time integration instead of using the Gauss--Radau quadrature $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$. In this case, however, the equivalence with a Runge--Kutta method will be lost, in general; that said, the method has also certain nice properties, such as not requiring the forcing function $\bm{f}$ to be continuous. All the results in this work also apply to the method without quadrature, with slightly simplified proofs.
\end{remark}
\subsection{A priori estimates and $L^\infty(I;L^2(\Omega)^d)$-stability}
\hspace{5mm} We will proceed to derive energy estimates for the discrete problem \eqref{eq:discrete_PDE}.
\begin{lemma}[A priori estimates]\label{lem:apriori}
Suppose that $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau} \times \mathbb{M}^{h,\tau}$ is a solution of problem \eqref{eq:discrete_PDE}, and let $\hat{\tens{S}}_{h,\tau}:Q\to \mathbb{R}^{d \times d}_{\sym}$ be a function associated to $\vec{u}_{h,\tau}\in \mathbb{V}^{h,\tau}$ in \eqref{eq:discrete_implicit_relation}. Then, assuming the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:apriori}
\begin{split}
\max_{j\in \{1,\ldots, N_\tau\}} \|\smash{\vec{u}_{h,\tau}(t_j)}\|^2_{\Lp{2}}
&+
\sum_{j=1}^{N_\tau} \|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_I S^{\pi}_h(p_{h,\tau}(t), p_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad+
\int_I \|\hat{\tens{S}}_{h,\tau}(t)\|^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
+
\int_I \|\vec{u}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\leq c\,.
\end{split}
\end{equation}
For $p=2$, the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ can be replaced by the standard measure ${\rm d} t$; this is also true for general $p>1$ for the DG method without quadrature.
\end{lemma}
\begin{proof}
Testing the equations \eqref{eq:discrete_mass} and \eqref{eq:discrete_momentum} on the interval $I_j$ for all $j=1,\dots,N_\tau$ with $p_{h,\tau}$ and $\vec{u}_{h,\tau}$, respectively, and adding the resulting equations, recalling the skew-symmetry property of $\mathcal{C}_h$, for every $j=1,\dots,N_\tau$, we find that
\begin{align*}
\frac{1}{2}\int_{I_j} \frac{{\rm d}}{{\rm d} t}\norm{\smash{\vec{u}_{h,\tau}}}^2_{\Lp{2}}\, {\rm d} t
&+
\int_\Omega (\vec{u}_{h,\tau}(t^+_{j-1})- \vec{u}_{h,\tau}(t_{j-1}))\cdot \vec{u}_{h,\tau}(t^+_{j-1})\, {\rm d} x
+
\int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \vec{u}_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad+
\int_{I_j} S^{\pi}_h(p_{h,\tau}, p_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t)
=
\int_{I_j} \bm{f} \cdot \vec{u}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)\,.
\end{align*}
Let us assume that the jump penalisation parameter $\alpha>0$ is large enough, so that the coercivity property \eqref{eq:discrete_coercivity_A} is satisfied. Then, using the fact that $2a(a-b)= a^2 - b^2 + (a-b)^2$ for all $a,b\in \mathbb{R}$, together with Hölder’s inequality yields for all $j=1,\dots,N_\tau$
\begin{align*}
&\frac{1}{2}\norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{\Lp{2}}
- \frac{1}{2}\norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{\Lp{2}}
+ \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_{I_j} \norm{\smash{\hat{\tens{S}}_{h,\tau}(t)}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad
+ \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}}}^p_{h,p} \mu^{\mathrm{GR}}_{k_t+1}(\d t)
+ \int_{I_j} S^{\pi}_h (p_{h,\tau}(t); p_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\lesssim
\bigg(\int_{I_j} \norm{\bm{f}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg( \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}}}^p_{\Lp{p}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}.
\end{align*}
Applying Young's inequality on the right-hand side, using the discrete Korn-type inequality \eqref{eq:korn}, and summing over $j\in \{1,\ldots, i\}$, for every $i\in\{1, \ldots, N_\tau\}$, we arrive at
\begin{align*}
&\|\vec{u}_{h,\tau}(t_i)\|^2_{\Lp{2}}
+ \sum_{j=1}^i \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{\Lp{2}}
+
\int_0^{t_i} S^{\pi}_h (p_{h,\tau}; p_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad +
\int_0^{t_i} \norm{\smash{\hat{\tens{S}}_{h,\tau}}}^{p'}_{\Lp{p'}} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
+
\int_0^{t_i} \norm{\smash{\vec{u}_{h,\tau}}}^p_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\lesssim
\|\vec{u}_0\|^2_{\Lp{2}}
+
\norm{\bm{f}}_{C^0(I;\Lp{p'})}^{p'}.
\end{align*}
Here, we made use of the stability of the $L^2$-projection $\norm{\smash{\vec{u}_{h,\tau}(0)}}_{\Lp{2}} \leq \norm{\vec{u}_0}_{\Lp{2}}$. Taking the maximum over $i\in \{1,\ldots, N_\tau\}$ concludes the proof.
\end{proof}
In the lowest order time discretisation $\mathrm{DG}(0)$, the discrete velocity is piece-wise constant~in~time~and so from the a priori estimate \eqref{eq:apriori} above one immediately has (for arbitrary $p>1$)
\begin{equation*}
\norm{\smash{\vec{u}_{h,\tau}}}_{L^\infty(I;L^2(\Omega)^d)}
=
\max_{j\in \{1,\ldots, N_\tau\}} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}_{\Lp{2}}
\leq
c\,.
\end{equation*}
The rest of the paper is devoted to proving that this is also the case for general polynomial~degree~$k_t\geq 1$, assuming that $p\geq \frac{3d+2}{d+2}$. In order to do this, we will employ the exponential time interpolant from \cite{CW.2010b}. Fix a parameter $\lambda>0$; for every $j\in\{1,\ldots,N_\tau\}$, we define for polynomials on $I_j$, the linear mapping $\overline{(\cdot)}\coloneqq (r\mapsto \overline{r})\colon \mathbb{P}_{k_t}(I_j) \to \mathbb{P}_{k_t}(I_j)$, for every $r\in \mathbb{P}_{k_t}(I_j)$, through
\begin{subequations}\label{eq:exponential_interpolant_properties}
\begin{align}
\overline{r}(t_{j-1}^+) &= r(t_{j-1}^+)\,, \\
\int_{I_j} \overline{r}(t) q(t) \,{\rm d} t
&=
\int_{I_j} r(t) q(t) e^{-\lambda(t-t_{j-1})} \,{\rm d} t
\qquad
\text{ for all } q\in \mathbb{P}_{k_t -1}(I_j)\,.\end{align}
\end{subequations}
Note that in the expression above one could use the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$ as~well,~since~the~Gauss--Radau quadrature integrates exactly up to degree $2k_t$. Then,~${\overline{(\cdot)}\!\coloneqq \!(\vec{v}_{h,\tau}\!\mapsto\! \overline{\vec{v}}_{h,\tau})\colon \!\mathbb{P}_{k_t}(I_j;\mathbb{V}^h) \!\to \!\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)}$, for every $\vec{v}_{h,\tau}\in\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$, can be defined through
\begin{equation}\label{eq:exponential_interpolant}
\vec{v}_{h,\tau} =
\sum_{i=0}^{k_t} r_i(t) \vec{v}_h^i \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)
\mapsto
\overline{\vec{v}}_{h,\tau}=\sum_{i=0}^{k_t} \overline{r_i}(t) \vec{v}_h^i \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)\,.
\end{equation}
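As a concrete illustration of the underlying scalar construction (for $k_t = 1$; a direct computation from \eqref{eq:exponential_interpolant_properties}), take $r \equiv 1$ on $I_j$ and write $\tau_j \coloneqq t_j - t_{j-1}$. Then $\overline{r}$ is the affine polynomial with $\overline{r}(t_{j-1}^+) = 1$ whose mean value over $I_j$ matches that of the exponential weight:
\begin{equation*}
\overline{r}(t) = 1 + \frac{2(m_j - 1)}{\tau_j}\,(t - t_{j-1})\,,
\qquad
m_j \coloneqq \frac{1}{\tau_j}\int_{I_j} e^{-\lambda(t-t_{j-1})}\,{\rm d} t = \frac{1 - e^{-\lambda \tau_j}}{\lambda \tau_j}\,;
\end{equation*}
that is, $\overline{r}$ decays linearly from the value $1$, mimicking the exponential decay in an averaged sense, and the same formula propagates coefficient-wise through \eqref{eq:exponential_interpolant}.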
One can extend this definition for functions in $\mathbb{V}^{h,\tau}$ in the obvious way. From \cite[Lm.\ 3.6]{CW.2010b} we~know~that if $\norm{\cdot}_{\star}$ is a (semi-)norm on $\mathbb{V}^h$ arising from a (semi-)inner product, then \eqref{eq:exponential_interpolant}~is~\mbox{$L^s(I_j;\mathbb{V}^h)$-stable}, i.e.,
\begin{subequations}
\begin{align}
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_\star {\rm d} t \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_\star {\rm d} t \bigg)^{\smash{1/s}}
&& \quad\text{ for all } \vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j;\mathbb{V}^h)\,,\; s\in[1,\infty)\,, \label{eq:exp_stability_p_star}\\
\max_{t\in I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|_\star
&\lesssim
\max_{t\in I_j} \|\vec{v}_{h,\tau}(t)\|_\star
&& \quad\text{ for all } \vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)\,. \label{eq:exp_stability_infty_star}
\end{align}
In fact, as stated in the next lemma, the above also holds with the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$~and/or with $\norm{\cdot}_{\star}= \|\cdot\|_{h,s}$ for $s \in (1,\infty)$, which, in general, does not arise from an inner product; a proof of this fact can be found in Appendix \ref{appendix:stability}. \enlargethispage{1mm}
\begin{lemma}\label{lem:exp_stability}
Let $s\in (1,\infty)$ and let $\norm{\cdot}_\star$ be a (semi-)norm on $\mathbb{V}^h$ arising from a (semi-)inner product. Then, the exponential interpolant \eqref{eq:exponential_interpolant}, for every $\vec{v}_{h,\tau}\in \mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$ and $j\in \{1,\ldots, N_\tau\}$, satisfies
\begin{align}
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_\star\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_\star\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}\,, \label{eq:exp_stability_GR_p_star}\\
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_{h,s}\, {\rm d} t \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_{h,s} \, {\rm d} t \bigg)^{\smash{1/s}}\,, \label{eq:exp_stability_p}\\
\bigg(\int_{I_j} \|\overline{\vec{v}}_{h,\tau}(t)\|^s_{h,s} \, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}
& \lesssim
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^s_{h,s}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/s}}\,. \label{eq:exp_stability_GR_p}
\end{align}
\end{lemma}
\end{subequations}
We are, eventually, in a position to prove the main result of this paper.
\begin{theorem}\label{thm:stability}
Suppose that $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$ is a solution of problem \eqref{eq:discrete_PDE}. Moreover, assume that $p\geq \frac{3d+2}{d+2}$ if $k_t >0$ and $p>1$ if $k_t=0$. Then, assuming that the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:Linfty_stability}
\|\vec{u}_{h,\tau}\|_{L^\infty(I;L^2(\Omega)^d)} \leq c\,.
\end{equation}
\end{theorem}
\begin{proof}
For $k_t \!=\! 0$, the result is a direct consequence of Lemma \ref{lem:apriori}, so we will only consider~the~case~${k_t\! >\!0}$.
Fix an arbitrary $j\in \{1,\ldots, N_\tau\}$; we will prove the claim on $L^{\infty}(I_j; L^2(\Omega)^d)$, from which the result \eqref{eq:Linfty_stability} trivially follows. Denote the exponential interpolant of $\vec{u}_{h,\tau}$ on $\mathbb{P}_{k_t}(I_j;\mathbb{V}^h)$ by $\overline{\vec{u}}_{h,\tau}$. Using \eqref{eq:exponential_interpolant_properties}, we can examine what happens to the time derivative if we test the momentum balance \eqref{eq:discrete_momentum} with $\overline{\vec{u}}_{h,\tau}$:
\begin{gather*}
\int_{Q_j} \partial_t \vec{u}_{h,\tau} \cdot \overline{\vec{u}}_{h,\tau} \,{\rm d} t{\rm d} x
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1} \cdot \overline{\vec{u}}_{h,\tau}(t^+_{j-1})\,{\rm d} x
= \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{L^2(\Omega)} e^{-\lambda(t_j-t_{j-1})}
- \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1}^+)}}^2_{L^2(\Omega)}\\
+ \frac{\lambda}{2} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} e^{-\lambda(t-t_{j-1})} \,{\rm d} t
+ \int_\Omega \jump{\vec{u}_{h,\tau}}_{j-1} \cdot \vec{u}_{h,\tau}(t^+_{j-1})\,{\rm d} x
= \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_j)}}^2_{L^2(\Omega)} e^{-\lambda(t_j-t_{j-1})} \\
+ \frac{1}{2}\|\jump{\vec{u}_{h,\tau}}_{j-1}\|^2_{L^2(\Omega)}
- \frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{L^2(\Omega)}
+ \frac{\lambda}{2} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} e^{-\lambda(t-t_{j-1})} \,{\rm d} t,
\end{gather*}
where, in the first step, we used the defining properties \eqref{eq:exponential_interpolant_properties} of the interpolant and integration by parts in time, and, in the second, the identity $2a(a-b)=a^2-b^2+(a-b)^2$. Noting that the function $t\mapsto e^{-\lambda(t-t_{j-1})}$ is decreasing and dropping positive terms, we find that testing \eqref{eq:discrete_momentum} and \eqref{eq:discrete_mass} with $(\overline{\vec{u}}_{h,\tau}, p_{h,\tau})$ yields:
\begin{align*}
& \frac{\lambda}{2}e^{-\lambda(t_j-t_{j-1})} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} \,{\rm d} t
+ \int_{I_j} S^{\pi}_h(p_{h,\tau}; p_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\leq
\frac{1}{2} \norm{\smash{\vec{u}_{h,\tau}(t_{j-1})}}^2_{L^2(\Omega)}
+ \int_{Q_j} \bm{f}\cdot \overline{\vec{u}}_{h,\tau} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) {\rm d} x
- \int_{I_j} \mathcal{A}_h(\vec{u}_{h,\tau}; \overline{\vec{u}}_{h,\tau}) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \\
&\quad- \int_{I_j} \mathcal{C}_h[\vec{u}_{h,\tau}, \vec{u}_{h,\tau}, \overline{\vec{u}}_{h,\tau}] \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)
\eqqcolon
\mathfrak{I}_1
+ \mathfrak{I}_2
+ \mathfrak{I}_3
+ \mathfrak{I}_4\,.
\end{align*}
The first term $\mathfrak{I}_1$ is uniformly bounded, thanks to the a priori estimate \eqref{eq:apriori}. For the second term $\mathfrak{I}_2$, we apply Hölder’s inequality and the stability estimate \eqref{eq:exp_stability_GR_p}:
\begin{align*}
|\mathfrak{I}_2| &\leq
\bigg(\int_{I_j} \norm{\bm{f}(t)}^{p'}_{L^{p'}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|^{p}_{L^{p}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\lesssim
\|\bm{f}\|_{C^0(I_j;L^{p'}(\Omega)^d)}
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}
\leq c\,.
\end{align*}
Similarly, for the viscous term:
\begin{align*}
|\mathfrak{I}_3|
&\leq
\bigg|\int_{Q_j} \hat{\tens{S}}_{h,\tau}\fp \mathcal{G}_h^l(\overline{\vec{u}}_{h,\tau})\, \mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x\bigg|
+ \bigg|\int_{I_j} S^{\vec{u}}_h(\vec{u}_{h,\tau}(t); \overline{\vec{u}}_{h,\tau}(t)) \,\mu^{\mathrm{GR}}_{k_t+1}(\d t)\bigg| \\
&\lesssim \bigg(\int_{I_j} \|\hat{\tens{S}}_{h,\tau}(t)\|^{p'}_{L^{p'}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|^{p}_{h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad + \bigg(\int_{I_j} |\vec{u}_{h,\tau}(t)|^p_{\Gamma_h,p}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} |\overline{\vec{u}}_{h,\tau}(t)|^p_{\Gamma_h,p} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}
\leq c\,.
\end{align*}
To handle $\mathfrak{I}_4$, we note first that $p\geq \frac{3d+2}{d+2}$ is equivalent to $2p' \leq p\frac{d+2}{d} = p_*$, which implies that
\begingroup
\allowdisplaybreaks
\begin{align*}
|\mathfrak{I}_4|
&\leq \int_{Q_j}{|\vec{u}_{h,\tau}|^2 |\mathcal{G}_h^{2k_{\vec{u}}}(\overline{\vec{u}}_{h,\tau})|\,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x}
+ \int_{Q_j} {|\overline{\vec{u}}_{h,\tau}| |\vec{u}_{h,\tau}| |\mathcal{G}_h^{2k_{\vec{u}}}(\vec{u}_{h,\tau})|\,\mu^{\mathrm{GR}}_{k_t+1}(\d t){\rm d} x }\\
&\leq
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p'}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad +
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/(2p')}}
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{2p'}(\Omega)}^{2p'}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/(2p')}}\cdot \\
&\hphantom{00000}\cdot \bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\lesssim
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{2/p_*}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}} \\
&\quad +
\bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\bigg(\int_{I_j} \|\overline{\vec{u}}_{h,\tau}(t)\|_{L^{p_*}(\Omega)}^{p_*}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}} \\
&\hphantom{00000}\cdot \bigg(\int_{I_j} \|\vec{u}_{h,\tau}(t)\|_{h,p}^{p}\,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p}}.
\end{align*}
\endgroup
Now, the crucial observation is that Corollary \ref{cor:parabolic_interpolation} still holds when using the discrete measure $\mu^{\mathrm{GR}}_{k_t+1}(\d t)$; more precisely, for every $\vec{v}_{h,\tau} \in \mathbb{P}_{k_t}(I_j; \mathbb{V}^h)$, we have that
\begin{equation*}
\bigg(\int_{I_j} \|\vec{v}_{h,\tau}(t)\|^{p_*}_{L^{p_*}(\Omega)} \,\mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\lesssim \bigg( \int_{I_j} \|\vec{v}_{h,\tau}(t)\|^p_{h,p}\, \mu^{\mathrm{GR}}_{k_t+1}(\d t) \bigg)^{\smash{1/p_*}}
\norm{\vec{v}_{h,\tau}}_{L^\infty(I_j;L^2(\Omega)^d)}^{\frac{2}{d+2}}.
\end{equation*}
Combining this with the stability estimate \eqref{eq:exp_stability_infty_star} (with $\norm{\cdot}_\star\! =\! \norm{\cdot}_{L^2(\Omega)}$) and estimate \eqref{eq:exp_stability_GR_p}~(with~${s\!=\!p}$), then, yields that
\begin{equation}
|\mathfrak{I}_4| \lesssim \|\smash{\vec{u}_{h,\tau}}\|^{\frac{4}{d+2}}_{L^\infty(I_j;L^2(\Omega)^d)}.
\end{equation}
In summary, we have that
\begin{equation}\label{eq:almost_stability}
\frac{\lambda}{2}e^{-\lambda(t_j-t_{j-1})} \int_{I_j} \|\vec{u}_{h,\tau}(t)\|^2_{L^2(\Omega)} \,{\rm d} t
\lesssim
1 +
\norm{\smash{\vec{u}_{h,\tau}}}^{\frac{4}{d+2}}_{L^\infty(I_j;L^2(\Omega)^d)}\,.
\end{equation}
On the other hand, the equivalence of norms in finite dimensional spaces and the quasi-uniformity \eqref{eq:time_quasiuniform} of the time partition imply that (cf. \cite[Lm.\ 3.5]{CW.2010b})
\begin{equation}
\|\vec{u}_{h,\tau}\|^2_{L^\infty(I_j;L^2(\Omega))}
\lesssim
\frac{1}{\tau} \int_{I_j} \norm{\smash{\vec{u}_{h,\tau}(t)}}^2_{L^2(\Omega)} \,{\rm d} t\,.
\end{equation}
Hence, choosing $\lambda=\tau^{-1}$ in \eqref{eq:almost_stability} (so that $e^{-\lambda(t_j-t_{j-1})}\geq e^{-1}$), applying Young's inequality, and noting that $\frac{4}{d+2}< 2$ yields the assertion.
\end{proof}
The a priori estimate \eqref{eq:apriori} and Theorem \ref{thm:stability} could be the starting point of a compactness argument to prove (weak) convergence of the numerical solutions to a minimal regularity energy solution of \eqref{eq:weak_PDE}. In a convergence proof, further assumptions would be needed such as monotonicity of the constitutive relation, in order to be able to identify the non-linear limit; see, e.g., \cite{AGF.2023}, where this was carried out for a discretisation of natural convection.\newpage
\begin{corollary}
Let $(\vec{u}_{h,\tau}, p_{h,\tau})^\top\in \mathbb{V}^{h,\tau}\times \mathbb{M}^{h,\tau}$ be a solution of the discrete problem without quadrature. Moreover, assume that $p\geq \frac{3d+2}{d+2}$ if $k_t >0$ and $p>1$ if $k_t=0$. Then, assuming that the penalty parameter $\alpha>0$ is large enough, there is a constant $c>0$ (independent of $h>0$ and $\tau>0$) such that
\begin{equation}\label{eq:Linfty_stability2}
\|\vec{u}_{h,\tau}\|_{L^\infty(I;L^2(\Omega)^d)} \leq c\,.
\end{equation}
\end{corollary}
\begin{proof}
The proof for the DG time discretisation without quadrature is almost identical. The only difference is that Corollary \ref{cor:parabolic_interpolation} can be applied directly, and that now the stability estimate \eqref{eq:exp_stability_p} with the standard measure ${\rm d} t$ is the one that has to be employed.
\end{proof}
\begin{remark}
The energy stability of several Diagonally Implicit Runge--Kutta methods was recently analysed in \cite{ST.2022}. While our work focused exclusively on the RadauIIA Implicit Runge--Kutta method, the arguments presented here could be conceivably combined with the approach from \cite{ST.2022} to obtain $L^\infty(0,T;L^2(\Omega)^d)$-stability of various other discretisations of incompressible flow models.
\end{remark}
\section{Introduction}
In a 1982 paper \cite{Za82} Tudor Zamfirescu proved a remarkable result saying that `most mirrors are magic'. For the mathematical formulation let $\mathcal{C}$ be the set of all closed convex curves in the plane ${\mathbb{R}^2}$. Fix some $C \in \mathcal{C}$ and $z \in C$ so that the tangent line, $T(z)$, to $C$ at $z$ is unique, then so is the normal line $N(z)$ to $C$ at $z$. A point $u \in {\mathbb{R}^2}$ {\sl sees an image} of another point $v \in {\mathbb{R}^2}$ via $z$ if $u$ and $v$ and $C$ lie on the same side of $T(z)$ and the line $N(z)$ halves the angle $\angle uzv$. In particular, $u$ sees an image of itself via $z$ if $u \in N(z)$ and $u$ and $C$ are on the same side of $T(z)$.
With the Hausdorff metric $\mathcal{C}$ becomes a complete metric space. It is well-known that the normal $N(z)$ is unique at every point $z \in C$ for most convex curves $C \in \mathcal{C}$ in the Baire category sense, that is, for the elements of a comeagre set of curves in $\mathcal{C}$. Now the `most mirrors are magic' statement is, precisely, that for most convex curves, most points in ${\mathbb{R}^2}$ (again in Baire category sense) see infinitely many images of themselves. Another theorem from \cite{Za82} says that for most convex curves, most points in ${\mathbb{R}^2}$ see infinitely many images of any given point $v \in {\mathbb{R}^2}$. Zamfirescu actually proves the existence of countably many images and self-images.
The purpose of this paper is to show that most mirrors are even more magic.
\begin{theorem}\label{th:selfmirr} For most convex curves, most points in ${\mathbb{R}^2}$ see continuum many images of themselves.
\end{theorem}
\begin{theorem}\label{th:mirr} For most convex curves $C$ and for every point $v \in {\mathbb{R}^2}\setminus C$, most points in ${\mathbb{R}^2}$ see continuum many images of $v$.
\end{theorem}
The condition $v \notin C$ in the last theorem is used to avoid some trivial complications in the proof. The statement holds even for $v \in C$.
{\bf Remark.} Let $C^o$ denote the closed convex set whose boundary is $C$. The above definition of `$u$ sees an image of $v$ via $z\in C$' means that the mirror side of $C$ is the interior one, that is, the segment $uz$ intersects the interior of $C^o$. Theorem~\ref{th:selfmirr} does not hold when the mirror is on the other side of $C$ because every point in ${\mathbb{R}^2} \setminus C^o$ lies on exactly one outer normal halfline to $C$.
A statement analogous to Theorem~\ref{th:mirr}, about affine diameters, was proved in \cite{BZ} in 1990, for typical $d$-dimensional convex bodies for every $d\ge 2$. The segment $[a,b]$ is an {\sl affine diameter} of $C \in \mathcal{C}$ if there are distinct and parallel tangent lines to $C$ at $a,b\in C$. The result in \cite{BZ} says that for most convex curves $C \in \mathcal{C}$, most points on a fixed affine diameter of $C$ are contained in infinitely many affine diameters of $C$. In this case again we show the existence of continuum many diameters passing through most points in $C^o$.
\begin{theorem}\label{th:diam} For most convex curves $C \in \mathcal{C}$, most points in $C^o$ lie in continuum many diameters of $C$.
\end{theorem}
Note that every point outside $C^o$ lies on the line of at most one affine diameter, as any two affine diameters have a point in common. It is not hard to see, actually, that every point outside $C^o$ lies on the line of exactly one affine diameter.
\section{Plan of proof}
For $C \in \mathcal{C}$ let $\rho(z)$ denote the radius of curvature of $C$ at $z\in C$. Let $\mathcal{D}$ denote the family of all convex curves $C \in \mathcal{C}$ such that
\begin{enumerate}
\item[(1)] there is a unique tangent line to $C$ at every $z \in C$,
\item[(2)] $\{z\in C: \rho(z)=0\}$ is dense in $C$,
\item[(3)] $\{z\in C: \rho(z)=\infty\}$ is dense in $C$.
\end{enumerate}
It is well-known, see for instance \cite{Za85}, that $\mathcal{D}$ is comeagre in $\mathcal{C}$. We are going to show that every $C \in \mathcal{D}$ has the property required in Theorem~\ref{th:selfmirr}. We will need slightly different conditions for Theorems~\ref{th:mirr} and \ref{th:diam}. But the basic steps of the proofs are the same. We explain them in this section in the case of Theorem~\ref{th:selfmirr}.
Let $C \in \mathcal{D}$ and define, for $z \in C$, the halfline $N^+(z)\subset N(z)$ that starts at $z$ and intersects the interior of $C^o$. Note that every $u \in {\mathbb{R}^2}$ lies on some $N^+(z)$: namely when the farthest point from $u$ on $C$ is $z$. Set $L(u)=\{z \in C: u \in N^+(z)\}$ and define
\[
H=\{u \in {\mathbb{R}^2}: L(u) \mbox{ is not perfect}\}.
\]
\begin{lemma}\label{l:borel} $H$ is a Borel set.
\end{lemma}
Write now $u=(u_1,u_2)\in {\mathbb{R}^2}$ and define $H^{u_2}=\{u_1\in \mathbb{R}: (u_1,u_2)\in H\}$. This is just the section of $H$ on the horizontal line $\ell(u_2)=\{(x,y)\in {\mathbb{R}^2}: y=u_2\}$. There are two points $z \in C$ with $N(z)$ horizontal, so there are at most two exceptional values for $u_2$ where $\ell(u_2)$ coincides with some $N(z)$.
\begin{lemma}\label{l:firstcat} Apart from those exceptional values, $H^{u_2}$ is meagre.
\end{lemma}
These two lemmas imply Theorem~\ref{th:selfmirr}. Indeed, deleting the (one or two) exceptional lines from $H$ gives a Borel set $H'$. According to a theorem of Kuratowski (see \cite{Ke} page 53), if all horizontal sections of the Borel set $H'$ are meagre, then so is $H'$, and then $H$ itself is meagre. So its complement is comeagre, so $L(u)$ is perfect and non-empty for most $u \in {\mathbb{R}^2}$. The theorem follows now from the fact that a non-empty and perfect set has continuum many points. The proofs of Theorems~\ref{th:mirr} and \ref{th:diam} will use the same argument.
\smallskip
For the proof of Lemma~\ref{l:firstcat} we need another lemma that appeared first as Lemma 2 in \cite{Pa}. A function $g:[0,1]\to \mathbb{R}$ is increasing on an interval $I\subset [0,1]$ (resp. decreasing on $I$) if every $x,y \in I$ with $x \le y$ satisfy $g(x) \le g(y)$ (resp. $g(x)\ge g(y))$, and $g$ is monotone in $I$ if it is either increasing or decreasing there. For the sake of completeness we present the short proof.
\begin{lemma}\label{l:geom} Assume $g:[0,1]\to \mathbb{R}$ is continuous and is not monotone in any subinterval of $[0,1]$. Then the set
\[
B=\{b \in \mathbb{R}: \{x:g(x)=b\} \mbox{ is not perfect}\}
\]
is meagre.
\end{lemma}
\medskip
{\bf Proof} of Lemma~\ref{l:geom}. For each $b \in B$ the level set $\{x:g(x)=b\}$ has an isolated point, and so there is an open interval $I_b \subset [0,1]$ with rational endpoints in which $g(x)=b$ has a unique solution. For a given rational interval $(p,q)$ define
\[
B(p,q)=\{b\in B: I_b=(p,q)\}.
\]
If every $B(p,q)$ is nowhere dense, then we are done since $B$, as a countable union of nowhere dense sets, is meagre. If some $B(p,q)$ is not nowhere dense, then there is a non-empty open interval $I$ in which $B(p,q)$ is dense. The line $y=b$, for a dense subset of $I$, intersects the graph of $g$ restricted to $(p,q)$ in a single point. This implies easily that $g$ is strictly monotone in a subinterval of $(p,q)$, contrary to our assumption.\hfill$\Box$
\section{Proof of the lemmas}
Fix $C \in \mathcal{D}$ and let $z(\alpha)$ denote the point $z \in C$ where the halfline $N^+(z)$ spans angle $\alpha\in [0,2\pi)$ with a fixed unit vector in ${\mathbb{R}^2}$. This is a parametrization of $C$ with $\alpha \in [0,2\pi]$ and $z(0)=z(2\pi)$. We write $C_{\alpha,\beta}$ for the arc $\{z(\gamma): \alpha < \gamma < \beta\}$ when $0\le \alpha < \beta \le 2\pi$, and the definition is extended, naturally, to the case when $\alpha < 2\pi < \beta$. We always assume that $\alpha, \beta$ are rational and $\beta -\alpha$ is small, smaller than $0.1$, say.
\medskip
{\bf Proof} of Lemma~\ref{l:borel}. Note first that the set
\[
K=\{(u,z)\in {\mathbb{R}^2} \times C: u \in N^+(z)\}
\]
is closed. Further, $L(u)$ is not perfect if and only if there is a short arc $C_{\alpha,\beta}$ such that $u \in N^+(z)$ for a unique $z \in C_{\alpha,\beta}$. Thus
\[
H = \bigcup_{\mbox{ all }C_{\alpha,\beta}} \{u \in {\mathbb{R}^2}: u \in N^+(z) \mbox{ for a unique } z \in C_{\alpha,\beta}\}.
\]
Let $p: K \to {\mathbb{R}^2}$ be the projection $p(u,z)=u$. Let $P_{\alpha,\beta}$ be the set of points $u \in {\mathbb{R}^2}$ such that there are more than one $z \in C_{\alpha,\beta}$ with $u\in N^+(z)$. Then
\[
P_{\alpha,\beta}=\bigcup_{\gamma} p(K\cap ({\mathbb{R}^2} \times C_{\alpha,\gamma}))\cap p(K\cap ({\mathbb{R}^2} \times C_{\gamma,\beta}))
\]
where the union is taken over all rational $\gamma$ with $\alpha<\gamma < \beta$. Consequently
\[
H=\bigcup_{\mbox{ all }C_{\alpha,\beta}} p(K\cap ({\mathbb{R}^2} \times C_{\alpha,\beta})) \setminus P_{\alpha,\beta}.
\]
Since $p(K\cap ({\mathbb{R}^2} \times C_{\alpha,\beta}))$ is $F_{\sigma}$ for every $\alpha < \beta$, it follows that $H$ is indeed Borel. \hfill$\Box$
\medskip
{\bf Proof} of Lemma~\ref{l:firstcat}. The set of $z\in C$ for which $N^+(z)$ intersects $\ell(u_2)$ in a single point consists of one or two open subarcs of $C$, as one can check easily. Let $C_1$ be such an arc. It suffices to see that
\[
E=H^{u_2}\cap \{u_1\in \mathbb{R}: (u_1,u_2)=\ell(u_2)\cap N(z) \mbox{ for some }z \in C_1\}
\]
is meagre, as $H^{u_2}$ either coincides with this set, or is the union of two such sets.
We may assume that $C_1$ is the graph of a convex function $F: J \to \mathbb{R}$ and $u_2>F(x)$ on $J$ where $J$ is an open interval.
(This position can be reached after a suitable reflection about a horizontal line.) With this notation, $E$ is the set of real numbers $u_1 \in \mathbb{R}$ such that the set of points $x\in J$ for which $(u_1,u_2)\in N^+(x,F(x))$ is not perfect.
Then $F'(x)=f(x)$ is continuous and increasing on $J$. Each $z \in C_1$ is a point $(x,F(x))$ on the graph of $F$. As $\rho(z) =(1+f(x)^2)^{3/2}/f'(x)$, $f'$ equals zero resp. infinity on a dense set in $J$. The normal $N(z)$ to $z=(x,F(x))$ has equation $(u_2-F(x))f(x)=x-u_1$,
as one checks readily. With $g(x)=(u_2-F(x))f(x)-x$, we have $g'(x)=-f(x)^2+(u_2-F(x))f'(x)-1$. Since $u_2-F(x)>0$ on $J$, at points where $f'(x)=0$ we get $g'(x)=-f(x)^2-1<0$, while at points where $f'(x)$ is sufficiently large we get $g'(x)>0$; both kinds of points are dense in $J$. So $g$ is not monotone in any subinterval of $J$. Lemma~\ref{l:geom} implies now that $E$ is meagre. \hfill$\Box$
\section{Proof of Theorem~\ref{th:mirr}}
It is known \cite{Za85} that for most $C \in \mathcal{D}$ there is a dense set $E\subset C$ such that at each point $z \in E$ the lower radii of curvature in both directions $\rho_i^+(z),\rho_i^-(z)$ vanish and the upper radii of curvature in both directions $\rho_s^+(z),\rho_s^-(z)$ are infinite. We let $\mathcal{D}_1$ denote the set of all $C \in \mathcal{D}$ possessing such a dense set $E$. We are going to show that for each $C \in \mathcal{D}_1$, most points see continuum many images of any given point $v \in {\mathbb{R}^2}$, $v\notin C$.
For $z \in C$ we define the line $R(z)$ as the reflected copy (with respect to $N(z)$) of the line through $v$ and $z$. Note that $R(z)$ depends continuously from $z$. Here we need $v \notin C$.
If $u$ sees an image of $v$ via $z$, then $u \in R(z)$. More precisely, $u$ sees an image of $v$ via $z$ iff $u,v$ and $C$ are on the same side of $T(z)$ and $u \in R(z)$. Let $R^+(z)\subset R(z)$ be the halfline that starts at $z$ and is on the same side of $T(z)$ as $C$. Also, $R^+(z)$ is well defined for all $z \in C$.
As before, $\ell(u_2)$ is the horizontal line in ${\mathbb{R}^2}$ whose points have second coordinate equal to $u_2$. Define, for fixed $u_2 \in \mathbb{R}$, $H^{u_2}=\{u_1 \in \mathbb{R}: (u_1,u_2)\in H\}$. This is the same as the set of first coordinates of all $u \in H\cap \ell(u_2)$.
In the generic case $R(z)$ is not horizontal and so $R(z)\cap \ell(u_2)$ is a single point. But we have to deal with non-generic situations, that is, when $R(z)$ is horizontal and so coincides with $\ell(u_2)$ for some $u_2\in \mathbb{R}$. Define $Z=\{z\in C: R(z) \mbox{ is horizontal}\}$ and $U_2=\{u_2 \in \mathbb{R}: \ell(u_2)=R(z) \mbox{ for some } z \in Z\}$. Both $Z$ and $U_2$ are closed sets and there is a one-to-one correspondence between them given by $z \leftrightarrow u_2$ iff $R(z)=\ell(u_2)$.
From now on we assume that $Z$ is nowhere dense. We will justify this assumption at the end of the proof. Then $U_2$ is also nowhere dense. $C\setminus Z$ is open in $C$ and so its connected components $C_1,C_2,\ldots$ are open arcs in $C$, and there are at most countably many of them.
This time we define $L(u,C_i)$ as the set of $z \in C_i$ via which $u$ sees an image of $v$. Formally, $L(u,C_i)=\{z \in C_i: u \in R^+(z)\}$, and define again, for fixed $u_2 \in \mathbb{R}$,
\[
H_i^{u_2}=\{u_1\in \mathbb{R}: L((u_1,u_2),C_i) \mbox{ is not perfect}\}.
\]
A very similar proof shows that the set $H_i=\{u \in {\mathbb{R}^2}: L(u,C_i) \mbox{ is not perfect}\}$ is Borel (its horizontal sections are the sets $H_i^{u_2}$). We omit the details, but mention that the condition $v \notin C$ is needed to show that the corresponding $K=\{(u,z):\dots\}$ is closed.
\medskip
\begin{lemma}\label{l:firstcat2} For $u_2 \notin U_2$ the set $H_i^{u_2}$ is meagre.
\end{lemma}
\medskip
{\bf Proof.} With every $u_1 \in H_i^{u_2}$ we associate a (rational) open arc $C_{\alpha,\beta}$ of $C_i$ such that $u=(u_1,u_2) \in R(z)$ for a unique $z \in C_{\alpha,\beta}$, namely for $z_u$. If the set of $u_1\in H_i^{u_2}$ that are associated with $C_{\alpha,\beta}$ is nowhere dense for every rational arc $C_{\alpha,\beta}$, then we are done as $H_i^{u_2}$ is the countable union of nowhere dense sets. So suppose that it is not nowhere dense for some $C_{\alpha,\beta}$. Then there is an open interval $I \subset \mathbb{R}$ such that $H_i^{u_2}$ is dense in $I$.
Choose two distinct points $w^-,w^+$ from $I \cap H_i^{u_2}$. Then $z_{(w^-,u_2)}$ and $z_{(w^+,u_2)}$ are distinct points and so they are the endpoints of an open subarc $C_{\gamma,\delta}$ of $C_{\alpha,\beta}$. Define the map $h:C_{\gamma,\delta} \to I$ by $h(z)=u_1$ when $(u_1,u_2)=\ell(u_2)\cap R(z)$; $h$ is clearly continuous. It is also monotone because its inverse is well-defined on a dense subset of $I$.
We show next that this is impossible. Choose $z_0 \in C_{\gamma,\delta} \cap E$ (recall that $E$ is dense in $C$).
\begin{figure}
\centering
\includegraphics{mirr.pdf}
\caption{Theorem~\ref{th:mirr}}
\label{fig:delta}
\end{figure}
We fix a new coordinate system in ${\mathbb{R}^2}$: the origin coincides with $z_0$, the $x$ axis with $T(z_0)$, the tangent line to $C$ at $z_0$, and the $y$ axis is $N(z_0)$; see the figure. We assume w.l.o.g. that $v_1<0$ and $v_2>0$ where $v=(v_1,v_2)$. A subarc of $C_{\gamma,\delta}$ is the graph of a non-negative convex function $F:[0,\Delta)\to \mathbb{R}$ such that $F(0)=0$ and $z=z(x)=(x,F(x))$ and $f(x)=F'(x)$ is an increasing function with $f(0)=0$. If the lines $R(z(x))$ and $R(z(0))$ intersect, then they intersect in a single point whose $y$ component is denoted by $y(x)$.
\begin{claim}\label{cl:eps} For every $\varepsilon>0$ there are $x_1,x_2 \in (0,\varepsilon)$ so that $y(x_1)<0$ and $0<y(x_2)<\varepsilon$.
\end{claim}
{\bf Proof.} We use the notation of the figure. The sine theorem for the triangle with vertices $v,0,z(x)$ implies that $\phi(x)=\frac{x \sin \lambda}{|v|}(1+o(1))$ where $o(1)$ is understood when $x \to 0$. The slope of the line $R(z(x))$ is $\tan (\lambda-\phi+2\psi)$, and
\[
\tan \psi(x)=f(x)=x\cdot \frac {f(x)-0}{x-0}.
\]
The liminf and limsup of the last fraction (as $x\to 0$) are $0$ and $\infty$, respectively, since the one-sided radii of curvature satisfy $\rho^+_i(z_0)=0$ and $\rho^+_s(z_0)=\infty$ for $z_0 \in E$. Consequently for every integer $n>1$ there is $x\in (0,1/n)$ with $\tan \psi(x)< x/n$ and also with $\tan \psi(x)>nx$. Then there is $x_1<1/n$ such that $\lambda/2 < \lambda -\phi(x_1) +2\psi(x_1)< \lambda$ which implies, after a simple check, that $y(x_1)<0$. Also, there is $x_2<1/n$ such that $\lambda -\phi(x_2) +2\psi(x_2)> \lambda +nx_2/2$. A direct computation shows then that $0<y(x_2)<\varepsilon$ if $n$ is chosen large enough.\hfill$\Box$
\medskip
We return to the proof of Lemma~\ref{l:firstcat2}. The claim shows that there are $x_1,x_2,x_3 \in (0,\Delta)$ with $x_1<x_2<x_3$ such that the line $R(z(x_1))$ and $R(z(x_3))$ strictly separate the origin and the point $R(z_0)\cap \ell(u_2)$ while $R(z(x_2))$ does not. Writing $z_i=z(x_i), i=1,2,3$ this implies that $z_2$ is between $z_1$ and $z_3$ while $h(z_2)$ is not on the segment $(h(z_1),h(z_3))$. So $h$ is not monotone.\hfill$\Box$
\medskip
It is evident that $U_2$, and consequently the union $U=\bigcup_{u_2\in U_2}\ell(u_2)$ of the corresponding horizontal lines, is closed and nowhere dense, so $U$ is meagre. The lemma implies, by Kuratowski's theorem, that $H_i\setminus U$ is meagre. It follows that $H_i$ is meagre and then so is $H=\bigcup_i H_i$. Thus every point in the complement of $H$ sees an image of $v$ via a perfect set in $C$, except possibly for the points of the meagre set $U$. This perfect set is nonempty, because every point sees an image of $v$ via some $z \in C$ (for instance by Zamfirescu's result \cite[Theorem 1]{Za82}). So most points see continuum many images of $v$.
Finally we justify the assumption that $Z$ is nowhere dense. This is done by choosing the horizontal direction (which we are free to choose) suitably. So for a given direction $(\cos \theta,\sin \theta)$ write $Z(\theta)$ for the set of $z\in C$ such that $R(z)$ is parallel with this direction. Every $Z(\theta)$ is closed, and the sets $Z(\theta)$, $\theta\in[0,\pi)$, are pairwise disjoint, so at most countably many of them can contain a non-empty open arc of $C$; hence there is one (actually, many) among them that contains no non-empty open arc of $C$. Choose the corresponding $\theta$ for the horizontal direction, then $Z=Z(\theta)$ is nowhere dense.
\hfill$\Box$
\section{Proof of Theorem~\ref{th:diam}}
Write $\mathcal{C}_1$ for the set of all convex curves $C$ that have a unique tangent at every $z \in C$.
Assume $C \in \mathcal{C}_1$ and use the parametrization $z:[0,2\pi)\to C$ as before. For $z\in C$ with $z=z(\alpha)$ let $z^*\in C$ be the opposite point, that is $z^*=z(\alpha+\pi)$. It is evident that $z^{**}=z$. Further, $[z,z^*]$ is always an affine diameter of $C$ and all affine diameters of $C$ are of this form. We need a geometric lemma.
\begin{lemma}\label{l:arc} Most convex curves $C \in \mathcal{C}_1$ have the following property: for every $\varepsilon> 0$ every subarc $C_0$ of $C$ contains points $x,y$ such that
\[
\frac {|x-y|}{|x^*-y^*|} < \varepsilon.
\]
\end{lemma}
The lemma follows from a result in \cite{AZ}; we give a separate proof in the next section. From now on we assume that $C \in \mathcal{C}_1$ has the property in the lemma.
We use again the same proof scheme: for $u \in C^o$ define $L(u)=\{z\in C: u\in [z,z^*]\}$; this set is nonempty as one can check easily that every point $u \in C^o$ lies on at least one affine diameter. (This holds for every convex curve, not only for the ones in $\mathcal{C}_1$.) We set next $H=\{u \in C^o: L(u) \mbox{ is not perfect}\}$, and, for fixed $u_2 \in {\mathbb{R}^2}$, $H^{u_2}=H \cap \ell(u_2)$. The same proof as in Section 3 shows that $H$ is Borel. We claim that $H$ is meagre which implies Theorem~\ref{th:diam}.
$C$ has a horizontal affine diameter and we assume w.l.o.g. that it lies on the line $\ell(0)$.
To see that $H$ is meagre it suffices to show (by Kuratowski's theorem) that $H^{u_2}$ is meagre as a subset of $\ell(u_2)$ for $u_2\ne 0$. We only consider $u_2 \in \mathbb{R}$, $u_2\ne 0$ with $\ell(u_2) \cap C \ne \emptyset$. With each $u \in H^{u_2}$ we associate an isolated point $z_u \in C$ and a short rational arc $C_{\alpha,\beta}$ such that
$z_u$ is the unique $z \in C_{\alpha,\beta}$ with $u \in [z,z^*]$. We are done if, for each short rational arc $C_{\alpha,\beta}$, the set of $u \in H^{u_2}$ that are associated with $C_{\alpha,\beta}$ is nowhere dense. So suppose that this fails for some $C_{\alpha,\beta}$. Then there is an open interval $I \subset \ell(u_2)$ on which $H^{u_2}$ is dense. Choose distinct points $u^-$ and $u^+$ from $I \cap H^{u_2}$ and let $z^-,z^+$ be the corresponding isolated points on $C_{\alpha,\beta}$. We suppose (by symmetry) that $C_{\alpha,\beta}$ is below the line $\ell(u_2)$.
From now on we consider the subarc $C_0 \subset C_{\alpha,\beta}$ whose endpoints are $z^-$ and $z^+$ and its opposite arc $C_0^*$.
We note here that the map $z\to z^*$ is order preserving on $C_0$, meaning that if $v\in C_0$ is between $v_1,v_2 \in C_0$, then $v^*$ lies between $v_1^*$ and $v_2^*$ on $C_0^*$.
Define a map $m:C_0 \to \ell(u_2)$ via $m(z)=\ell(u_2)\cap [z,z^*]$; $m$ is continuous. It is one-to-one on a dense subset of $C_0$ which implies that $m$ is order-preserving in the sense that if $v\in C_0$ is between $v_1,v_2 \in C_0$, then $m(v)$ lies between $m(v_1)$ and $m(v_2)$ on $\ell(u_2)$. We show that this is impossible.
Using Lemma~\ref{l:arc} choose two points $v_1,v_2$ on $C_0$ very close to each other so that $|v_1-v_2|$ is much shorter than $|v_1^*-v_2^*|$. Then the segment $[v_1,v_2]$ is almost parallel with $[v_1^*,v_2^*]$, and the diameters $[v_1,v_1^*]$ and $[v_2,v_2^*]$ intersect in a point very close to $[v_1,v_2]$, so this point is below $\ell(u_2)$. Now apply Lemma~\ref{l:arc} on the arc between $v_1^*$ and $v_2^*$. We get points $w_1$ and $w_2$ very close to each other on this arc so that $|w_1-w_2|$ is much shorter than $|w_1^*-w_2^*|$. This time the diameters $[w_1,w_1^*]$ and $[w_2,w_2^*]$ intersect above $\ell(u_2)$. We assume (by choosing the names $w_1,w_2$ properly) that $v_1^*,w_1,w_2,v_2^*$ come in this order on $C_0^*$ and so $v_1,w_1^*,w_2^*,v_2$ come in this order on $C_0$. The order of their $m$-images on $\ell(u_2)$ is $m(v_1),m(w_2^*),m(w_1^*),m(v_2)$. Thus indeed, $m$ is not order preserving.\hfill$\Box$
\section{Proof of Lemma~\ref{l:arc}}
Given $C \in \mathcal{C}_1$ define $A_{k,n}$ as the short arc between $z_k=z(2\pi k/2n)$ and $z_{k+1}=z(2\pi (k+1)/2n)$ where $k=0,1,\ldots,2n-1$. For positive integers $n,m$ let $\mathcal{F}_{n,m}$ be the set of all $C \in \mathcal{C}_1$ for which there is $A_{k,n}$ such that for all $x,y \in A_{k,n}$ ($x\ne y$)
\[
\frac{|x-y|}{|x^*-y^*|}\ge \frac 1m.
\]
It is easy to see that $\mathcal{F}_{n,m}$ is closed in $\mathcal{C}_1$; we omit the details. We show next that it is nowhere dense.
Fix a $C \in \mathcal{C}_1$ and $\varepsilon>0$ and let $U(C)$ denote the $\varepsilon$-neighbourhood of $C$. We construct another convex curve $\Gamma \in \mathcal{C}_1$ that is contained in $U(C)$ but is not an element of $\mathcal{F}_{n,m}$. Fix $k\in \{0,1,\ldots,n-1\}$ and consider a fixed arc $A_{k,n}$ and its opposite arc $A^*_{k,n}=A_{k+n,n}$. Let $T_k$ be the tangent line to $C$ at $z((k+\frac 12)\pi/n)$ and $T_k^*$ be the parallel tangent line at $z((k+n+\frac 12)\pi/n)$. Translate $T_k^*$ a little so that the translated copy intersects $C$ in two points $x_1,y_1$ and the segment $[x_1,y_1]$ lies in $U(C)$ and is much shorter than $[z_{k+n},z_{k+1+n}]$. Similarly translate $T_k$ a little so that the translated copy intersects $C$ in $x_2,y_2$ and $[x_2,y_2]$ lies in $U(C)$, and is much shorter than $[z_k,z_{k+1}]$ and, most importantly, it is much shorter than $[x_1,y_1]$, namely, $m|x_2-y_2| <|x_1-y_1|$. This is clearly possible.
Now we choose points $w_1$ and $w_2$ from the caps cut off from $C^o$ by the segments $[x_1,y_1]$ and $[x_2,y_2]$, respectively, so that, for $i=1,2$, the triangles $\triangle_i=\textrm{conv} \{x_i,y_i,w_i\}$ are homothetic. Again, this is possible. Note that $[x_1,w_1]$ and $[x_2,w_2]$ are parallel, and so are $[y_1,w_1]$ and $[y_2,w_2]$.
The next target is to construct a convex curve $\Gamma_k$ from $z_k$ to $z_{k+1}$ going through $x_2$ and $y_2$ that lies in $U(C)$, has a unique tangent at every point, and whose tangent coincides with the line through $x_2,w_2$ at $x_2$ and with the line through $y_2,w_2$ at $y_2$. Also, an analogous curve $\Gamma_{k+n}$ is needed from $z_{k+n}$ to $z_{k+1+n}$.
This is quite easy. The unique parabola arc connecting $x_2$ to $y_2$ within $\triangle_2$ that touches the sides $[x_2,w_2]$ at $x_2$ and $[w_2,y_2]$ at $y_2$ is the middle piece of $\Gamma_k$. To connect this arc by a convex curve to $z_k$ (say) within $U(C)$, choose a point $w\in C$ on the arc between $z_k$ and $y_2$ so close to $y_2$ that the triangle $\triangle$ delimited by $T(w)$ (the tangent line to $C$ at $w$), the line through $y_2,w_2$, and the segment $[y_2,w]$ lies in $U(C)$. The analogous parabola arc in $\triangle$ gives the next piece of $\Gamma_k$, and then one adds to this piece the subarc of $C$ between $w$ and $z_k$. The middle piece of $\Gamma_k$ is continued to $z_{k+1}$ in the same way.
The convex curve $\Gamma_{k+n}$ connecting $z_{k+n}$ to $z_{k+1+n}$ is constructed the same way. Note that the tangents to $\Gamma_k$ at $x_2$ (resp. $y_2$) are parallel with the tangents to $\Gamma_{k+n}$ at $x_1$ (and $y_1$).
The curves $\Gamma_k$ for $k=0,\ldots,2n-1$ together form a convex curve $\Gamma \in \mathcal{C}_1$. It has parallel tangents at $x_1\in \Gamma_{k+n}$ and $x_2\in \Gamma_k$, and also at $y_1$ and $y_2$. Thus $[x_1,x_2]$ and $[y_1,y_2]$ are affine diameters of $\Gamma$ and $m|x_2-y_2|<|x_1-y_1|$. As this holds for every $k$, $\Gamma \notin \mathcal{F}_{n,m}$. Thus $\mathcal{F}_{n,m}$ is indeed nowhere dense.
It follows that $\mathcal{C}_2=\mathcal{C}_1 \setminus \bigcup_{n,m}\mathcal{F}_{n,m}$ is comeagre in $\mathcal{C}_1$. We show next that every $C \in \mathcal{C}_2$ satisfies the requirement of the lemma. So we are given $\varepsilon>0$ and a short subarc $C_0$ of $C$. Take a positive integer $m$ with $1/m<\varepsilon$. For a suitably large $n$, $C_0$ contains an arc of the form $A_{k,n}$. As $C \notin \mathcal{F}_{n,m}$, there are distinct points $x,y \in A_{k,n}$ with
\[
\frac{|x-y|}{|x^*-y^*|}< \frac 1m< \varepsilon.
\]
This finishes the proof. \hfill$\Box$
\bigskip
{\bf Acknowledgements.} Research of the first author was partially supported by ERC Advanced Research Grant 267165 (DISCONV), and by Hungarian National Research Grant K 83767. The second author was partially supported by the Hungarian National Foundation for Scientific Research Grant K 104178.
\bigskip
\section{I. Introduction}
Recent development in topological insulators (TI) has greatly
enhanced our understanding of topological properties in condensed
matter \cite{Hasan2010,Qi2010,Moore2010}. Band insulators with time
reversal symmetry can be classified by a $Z_{2}$ topological
invariant $\nu $ associated with the occupied bands, $\nu =0$ for
topologically trivial phase and $\nu =1$ for non-trivial phase\cite
{Kane2005A,Fu2007,Moore2007,Qi2008}. In two dimensions (2D), the TI
($\nu =1$) exhibits quantum spin Hall effect, whose edge currents
are robust against weak non-magnetic
disorder\cite{Bernevig2006,Kane2005A,Kane2005B}. This
dissipation-less transport can only be destroyed by extremely strong
disorder, which drives the system into a traditional Anderson
insulator\cite {Hatsugai1999,Li09}. The 2D TI has been
experimentally realized in HgTe/CdTe quantum wells, where the
thickness of the quantum well can be varied to tune the system
between TI and normal insulator \cite{Konig-06science,Konig2008}.
Recent numerical simulations revealed an interesting new phase, called the
topological Anderson insulator (TAI)\cite{Li09}. The TAI is a
reentrant TI due to disorder: the disorder drives a 2D topologically
trivial insulator into a TI phase, and then back to a trivial insulator at
strong disorder. This is contrary to the general intuition that
disorder always tends to localize electronic states. This TAI phase
has since attracted extensive research interest\cite
{Jiang2009,Groth2009,Yamakage-11jpsj,Chen-11xxx,Xing-11prb,Li-11prb,HMGuo2010}.
In the original work, this phase was identified from the transport
properties showing a two-terminal conductance plateau $2e^{2}/h$
with extremely small fluctuations\cite{Li09}. Further numerical
studies confirmed that the plateau conductance in the TAI is
contributed by the dissipation-less edge states\cite{Jiang2009},
which further suggests the topological origin of this phenomenon.
Theoretical study based on the first Born approximation of the
disordered Dirac fermions proposed that the TAI originates from a
band touching and subsequent re-opening of a topologically
nontrivial gap driven by disorder\cite{Groth2009}. The band touching
has been confirmed in perturbative and numerical calculations,
but it is not sufficient to explain the whole TAI region.
Very recently, a phase diagram has been established for the disordered
HgTe/CdTe quantum spin Hall well, in which the quantum spin Hall phase
and the TAI are connected\cite{Prodan2011}.
In this paper, we study the topological evolution of the TAI, and
examine the origin of the TAI from a topological point of view. We
calculate the band structure and the corresponding $Z_2$ invariants
as the disorder strength increases. Starting with a topologically
trivial insulating phase, the bulk gap closes due to the disorder,
which changes the topological invariants of the occupied bands,
therefore triggering an insulator-TI transition (band inversion). As
the disorder strength further increases, a bulk gap is re-opened.
However, the gap value is too small (due to large sample size) and
too fluctuating (due to the randomness of disorder) to account for a
stable TAI phase as observed in transport calculations\cite{Li09}.
We shall show clear evidences that the TAI phase corresponds to a
continuous cluster of nontrivial subgaps, rather than a single gap.
These subgaps are separated by some extremely narrow subbands, and
survive through size scaling and random statistics. In other words,
in the TAI region, a Fermi level falls into a nontrivial subgap with
a probability close to 1, regardless of sample size and disorder
fluctuations. On the other hand, those extremely narrow subbands are
strongly localized, therefore they do not contribute to electronic
transport in the thermodynamic limit. This novel phase offers a new
realization of quantum spin Hall states in solids.
This paper is organized as follows. In section II, we describe the
model we use. In section III, the general definition and
calculation methods of topological invariants are reviewed. In
section IV, the ansatz of defining topological invariants for
disordered systems is introduced. The main results are described
in sections V and VI.
\section{II. The Model}
We first briefly revisit the Bloch's description of electronic properties.
In real space, the electronic Hamiltonian in a crystal lattice has the
general form
\begin{equation}
\mathcal{H}=\sum_i\sum_{\alpha\beta}\mathcal{H}_{\alpha,\beta}(i,i)c^{%
\dagger}_{i\alpha} c_{i\beta}+ \\
\sum_{\langle ij\rangle}\sum_{\alpha\beta}\mathcal{H}_{\alpha
\beta}(i,j)c^{\dagger}_{i\alpha} c_{j\beta}, \label{eqRealSpace}
\end{equation}
where $i,j$ are the indices of primary unit cells of the lattice, and
$\alpha,\beta$ are the indices of the internal degrees of freedom within the
unit cell, e.g., sublattices, orbitals, and spins. After the Fourier transformation $%
c_{i\alpha}=\frac{1}{\sqrt{V}}\sum_{\bm{k}}c_{\bm{k}\alpha}e^{i\bm{k}\cdot{%
\bm{x}_i}}$, the Hamiltonian can be written as
\begin{equation}
\mathcal{H}=\sum_{\bm{k}}\sum_{\alpha\beta}H_{\alpha\beta}(\bm{k})c^{\dagger}_{\bm{k}\alpha}
c_{\bm{k}\beta},
\end{equation}
where $\bm{k}$ is defined in the first Brillouin zone. In the eigenproblem
\begin{equation}
\sum_{\beta}H_{\alpha\beta}(\bm{k})\; u_{n,\beta}(\bm{k})=E_{n}(\bm{k})\;
u_{n,\alpha}(\bm{k}),
\end{equation}
$E_{n}(\bm{k})$ determines the band structure, and $|u_{n}\rangle$ is the
unit cell periodic part of the Bloch function $|\psi_{n\bm{k}}\rangle=e^{i%
\bm{k}\cdot\bm{r}}|u_{n}(\bm{k})\rangle$.
The Bernevig-Hughes-Zhang (BHZ) model\cite{Bernevig2006}, a typical
tight-binding model with spin-orbit coupling that exhibits quantum spin Hall
phase, is defined on a square lattice with one $s$ orbital and one $p$
orbital on each site. In the above mentioned representation, the Bloch
Hamiltonian $H$ is a $4\times 4$ matrix written as
\begin{eqnarray}
H_{\alpha \beta }(\bm{k}) &=&\left(
\begin{array}{cc}
h(\bm{k}) & g(\bm{k}) \\
g^{\dagger }(\bm{k}) & h^{\ast }(-\bm{k})%
\end{array}%
\right) \label{eqH} \\
h(\bm{k}) &=&d_{0}I_{2\times 2}+d_{1}\sigma _{x}+d_{2}\sigma
_{y}+d_{3}\sigma _{z} \label{eqh} \\
g(\bm{k}) &=&\left(
\begin{array}{cc}
0 & -\Delta \\
\Delta & 0%
\end{array}%
\right) \label{eqg} \\
d_{0}(\bm{k}) &=&-2D\big(2-\cos k_{x}-\cos k_{y}\big) \notag \\
d_{1}(\bm{k}) &=&A\sin k_{x},\quad d_{2}(\bm{k})=-A\sin k_{y} \notag \\
d_{3}(\bm{k}) &=&M-2B\big(2-\cos k_{x}-\cos k_{y}\big). \notag
\end{eqnarray}%
Here $\alpha ,\beta $ are the spin-orbital indices within the unit cell, $%
\alpha ,\beta \in \{1,2,3,4\}\equiv \{|s\uparrow \rangle ,|p\uparrow \rangle
,|s\downarrow \rangle ,|p\downarrow \rangle \}$. $\sigma _{i}$ are Pauli
matrices acting on the spinor space spanned by $s$ and $p$ orbitals. The
real space Hamiltonian $\mathcal{H}$ of this model can be obtained from $%
H_{\alpha \beta }$ by a straightforward inverse Fourier transformation $c_{%
\bm{k}\alpha }=\frac{1}{\sqrt{V}}\sum_{\bm{i}}c_{i\alpha }e^{-i\bm{k}\cdot {%
\bm{x}_{i}}}$. The effect of non-magnetic impurities is included in real
space by adding a term
\begin{equation}
V_{I}=\sum_{i}\sum_{\alpha}U(i)c_{i\alpha }^{\dagger }c_{i\alpha },
\label{eqImpurity}
\end{equation}%
to $\mathcal{H}$, where $U(i)$ are random numbers uniformly distributed in $%
(-W/2,W/2)$.
\section{III. $Z_2$ Invariant}
\begin{figure}[htbp]
\includegraphics*[bb=10 0 540 550,width=0.45\textwidth]{Fig01.eps}
\caption{(Color online) Schematic illustration of Kramers pairs.
Black is for trivial pair and red for non-trivial pair. (a) Band
structures $E(\bm{k}%
) $. (b) Extensions of the Kramers pairs in (a), represented by the
width of a solid bar along the energy axis. Different heights (in
horizontal direction) of the bars are used to distinguish
individual pairs. Green bar is for nontrivial gap. }
\label{FSchematic}
\end{figure}
For a time reversal invariant system including both spin components,
Kramers' Theorem states that all the electronic bands $E_n(\bm{k})$
come in pairs connected at time reversal invariant points (TRIPs),
which are called Kramers pairs\cite{Fu2006,Fu2007}. If there are no
other degeneracies (as is the case, e.g., for the disordered
``supercells'' discussed in the following), each Kramers pair (KP) is
separated from the others, and a topological invariant can thus be
defined for each KP\cite{Essin2007}. In Fig. \ref{FSchematic} (a), we
illustrate the typical band structure of a time reversal invariant
system from the topological point of view. There are 8 bands, forming
4 KPs, two of which (pairs 2 and 3 in red) are topologically
nontrivial, in the sense defined below. Pairs 3 and 4 are separated
but overlap on the energy axis; we will simply say that the gap
between them is not a full gap. Most of the information in Fig.
\ref{FSchematic} (a) can be plotted in a simple ``bar code'' version
in Fig. \ref{FSchematic} (b), where the extensions of the KPs and full
gaps along the energy axis are represented by the widths of the bars
in this direction.
\begin{figure}[htbp]
\includegraphics*[width=0.38\textwidth]{Fig02.eps}
\caption{(Color online) First Brillouin zone of a time reverse
symmetric solid in square lattice. Black dots are time reversal
invariant points. Green region is the effective Brillouin zone
$\protect\tau_{1/2}$, and arrows indicate its boundary
$\partial\protect\tau_{1/2}$. Dashed lines: mesh used in our
calculations (\protect\ref{eqZ2}).} \label{FBZ}
\end{figure}
In 2D the topological invariant $\nu$ associated with a KP is a
$Z_2$ integer defined from the periodic part of the Bloch function $u(\bm{k}%
) $ as\cite{Fu2006}
\begin{align}
\nu &=\frac{1}{2\pi}[\oint_{\partial\tau_{1/2}}d\bm{k}\cdot \bm{A}-
\int_{\tau_{1/2}}d\bm{k}^2 F]\mod 2, \label{eqZ2} \\
\bm{A}(\bm{k})&=\sum_{s=\mathtt{I,II}}i\big\langle u^s(\bm{k})\big|%
\bm{\nabla}_{k}\big|u^s(\bm{k})\big\rangle, \label{eqZ2b} \\
F(\bm{k})&=\Big(\nabla_{\bm{k}}\times \bm{A}(\bm{k})\Big)_z . \label{eqZ2c}
\end{align}
Here $\tau_{1/2}$ is the effective Brillouin zone (EBZ) from which the rest
half can be obtained from its time reverse, $\partial\tau_{1/2}$ is the
boundary of $\tau_{1/2}$, as illustrated in Fig. \ref{FBZ}. The Roman
numerals $\mathtt{I}$ and $\mathtt{II}$ in equation (\ref{eqZ2b}) label the
two branches of a KP (see Fig. \ref{FSchematic} (a)). Note the time reversal
constraint
\begin{equation}
\begin{aligned} |u^{\mathrm{I}}(-\bm{k})\rangle&=\Theta
|u^{\mathrm{II}}(\bm{k})\rangle\\ |u^{\mathrm{II}}(-\bm{k})\rangle&=-\Theta
|u^{\mathrm{I}}(\bm{k})\rangle \end{aligned} \label{eqTRC}
\end{equation}
on $\partial\tau_{1/2}$ must be employed for equation (\ref{eqZ2}) to make
sense, where $\Theta=-is_y\otimes I_{2\times2} K$ is the time reversal
operator ($s_y$ is the Pauli matrix of physical spin and $K$ here is the
complex conjugation). A KP is trivial (nontrivial) if $\nu=0$ ($\nu=1$). The
topological invariant of a cluster of occupied KPs is just the sum of $\nu$
of all these KPs in the sense of $\mod 2$. Therefore, a gap between two
pairs is called trivial (nontrivial) if there is an even (odd) number of
nontrivial pairs below it. If a gap is nontrivial, dissipationless edge
states will appear within the gap, when the system is truncated with open
boundaries\cite{Fu2006,Fu2007,Fu2007b}.
Among several equivalent definitions of $Z_2$ invariant $\nu$\cite
{Fu2007,Fu2006,Wang2010}, this definition has the advantages of
being expressed by well known topological quantities, i.e., Berry
connection $ \bm{A}$ and Berry curvature $\bm{F}$\cite{Xiao2010},
and being appropriate for numerical
evaluation\cite{Fukui2007,Essin2007,Xiao2010PRL}, which is briefly
introduced here. After discretizing the EBZ into a mesh (dashed
lines in Fig. \ref{FBZ}), the field quantities $\bm{A}$ and
$\bm{F}$ can be defined from the eigenstates of the lattice
sites\cite{Fukui2007,Essin2007,Xiao2010PRL}, based on well-developed
lattice gauge theories. Note in the numerical calculations, for any
mesh site $\bm{k}$ in the EBZ, the phases of the eigenstates
(therefore the values of the field quantities) are arbitrarily and
independently determined by the numerical routines ($U(1)$ freedom
of local gauge choice). Care must be taken to cancel all these phase
uncertainties when summing up the discretized field quantities
$\bm{A}$ and $\bm{F}$ by equation (\ref{eqZ2}), so that the
resultant $\nu$ is gauge independent. Of course, the mesh should be
dense enough to obtain converged values for each KP.
For the clean systems, the topological properties of this model are
well understood\cite{Fu2007}, when the lower half bands are
occupied. When $ \Delta=0$, the quantum spin Hall phase with $Z_2=1$
is realized when $ 0<M/(2B)<2$. When tuning $M/B$, a ``band
inversion''\cite{Bernevig2006,Konig2008} occurs at the $\Gamma$ point,
leading to an I-TI transition. The presence of $g(\bm{k})$ breaks the
conservation of $S_z$, but the topological invariants do not
change as long as the finite gap remains.
\section{IV. Zone Folding}
The topological invariants are defined in $\bm{k}$ space\cite%
{TKNN1982,Fu2006}, as introduced above. Impurities break the translation
invariance of the original lattice and make $\bm{k}$ ill defined. However,
for a disordered 2D sample with $N\times N$ unit cells, the above
topological arguments can be restored if twisted boundary conditions
\begin{equation}
\psi(\bm{r}+N\cdot \bm{a}_1)=e^{i\bm{k}\cdot N\bm{a}_1}\psi(\bm{r}),\quad
\psi(\bm{r}+N\cdot \bm{a}_2)=e^{i\bm{k}\cdot N\bm{a}_2}\psi(\bm{r})
\end{equation}
are introduced to the opposite boundaries of this finite sample,
where $\bm{a}_i$ are primitive vectors of the clean lattice\cite
{Niu1984,Sheng2006,Essin2007}. Physically speaking, this is
completely equivalent to taking this $N\times N$ sample as a large
unit cell of a 2D super-lattice, so that $\bm{k}$ can be defined in
a smaller Brillouin zone with reciprocal vectors $\bm{b}_i/N$, where
$\bm{b}_i$ is the reciprocal vector of the original lattice.
Disorder within this supercell tends to destroy all the band
degeneracies except those protected by time reversal symmetry, i.e.,
Kramers degeneracies. It is reasonable to imagine that for
sufficiently large $N$, the topological properties of this
superlattice can reflect those of the ``real'' disordered system. In
the following, we will call the primary unit cell of the original
clean system as a ``unit cell'', while the $N\times N$ sample as a
``supercell'' in this context.
In the clean limit, the band structure of the super-lattice can be derived
directly from that of the original lattice by using the standard method of
\textquotedblleft zone folding\textquotedblright , which is briefly reviewed
here. Now the Bloch Hamiltonian $H(\bm{k})$ becomes a $4N^{2}\times 4N^{2}$
matrix
\begin{equation}
H_{i\alpha ,\;j\beta }^{\mathrm{S}}(\bm{k}),\quad 1\leq i,j\leq N^{2},
\end{equation}%
where $\alpha ,\beta $ again represent the spin-orbital indices within the
original unit cell, and $i,j$ are the indices of unit cells within the
supercell. The eigenvalues of $H^{\mathrm{S}}(\bm{k})$ are related to
those of the original lattice $E_{n}(\bm{k})$ as
\begin{equation}
E_{n,lm}^{\mathrm{S}}(\bm{k})=E_{n}(\bm{k}+\frac{l}{N}\bm{b}_{1}+\frac{m}{N}%
\bm{b}_{2}),\quad 0\leq l,m\leq N-1 \label{eqEigenvalues}
\end{equation}%
and associated eigenstates are
\begin{equation}
u_{n,lm}^{\mathrm{S}}(\bm{k})=\left(
\begin{array}{c}
e^{i(\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2})\cdot \bm{r_1}}u_{n}(\bm{k}+%
\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2}) \\
e^{i(\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2})\cdot \bm{r_2}}u_{n}(\bm{k}+%
\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2}) \\
\vdots \\
e^{i(\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2})\cdot \bm{r_{N\times N}}%
}u_{n}(\bm{k}+\frac{l}{N}\bm{b}_{1}+\frac{m}{N}\bm{b}_{2})%
\end{array}%
\right) . \label{eqEigenvectors}
\end{equation}%
Equations (\ref{eqEigenvalues}) and (\ref{eqEigenvectors}) can be verified
by a straightforward application of Bloch's Theorem, with the new
definitions of supercell and associated Brillouin zone in mind.
\section{V. A Small Supercell}
We will only consider the BHZ model in the case of $|D|<|B|$, so that the
system is always fully gapped between the lower and upper halves of the bands,
when $M\neq0$. To obtain some insights from analytical treatments, we start
with a simple case: a small supercell with $2\times2$ unit cells without
spin-flip terms, i.e., $\Delta=0$. Now the system is decoupled into two
sub-systems with single spin component, and the topological property of $%
H_{4\times4}(\bm{k})$ in equation (\ref{eqH}) can be reduced to that of $%
h_{2\times2}(\bm{k})$, represented by spin-resolved Chern
number\cite {Sheng2006,Prodan2010}. To further simplify the
analytical treatments, we need only consider the spin-up sub-system,
because its spin-down counterpart can be obtained from a
straightforward time reversal operation. For this
spin-up sub-system with a $2\times 2$-site supercell, the Hamiltonian is an
$8\times8$ matrix
\begin{equation}
h_I^{\mathrm{S}}=h^{\mathrm{S}}(\bm{k})+V_I^{\mathrm{S}}, \label{eq4}
\end{equation}
where $h_{8\times8}^{\mathrm{S}}(\bm{k})$ is constructed from the above
zone-folding technique from the original $h_{2\times2}(\bm{k})$ in equation (%
\ref{eqh}) and the impurity term reads
\begin{equation}
V_I=W\cdot \mathrm{diag}(\epsilon_1,\epsilon_1,\epsilon_2,\epsilon_2,%
\epsilon_3,\epsilon_3,\epsilon_4,\epsilon_4), \label{eqImpurity2}
\end{equation}
where $\epsilon_i$ are random numbers within the interval $(-1/2,1/2)$ and $W$
is a single parameter to control the disorder strength. This $V_I$
represents random onsite potential distributed on 4 primary unit cells
within the $2\times2$ super-cell. We will show that, this minimal model in
equation (\ref{eq4}) that accommodates both disorder and topology, can
produce some non-trivial results.
Without impurities, as stated above, the eigenenergies and
eigenstates of $ h^{\mathrm{S}}$ can be constructed from those of
$h$ by zone-folding, eqs. (\ref{eqEigenvalues}) and
(\ref{eqEigenvectors}). The eigenenergies of $h^{\mathrm{S}}$ are
ordered by their values at the $\Gamma $ point $\bm{k}=0$ as
\begin{align*}
& E_{1,2,\cdots ,8}^{0}(\Gamma )=-8D-\big|M-8B\big|,\quad -4D-\big|M-4B\big|,
\\
& -4D-\big|M-4B\big|,\quad -M,\quad M,\quad -4D+\big|M-4B\big|, \\
& -4D+\big|M-4B\big|,\quad -8D+\big|M-8B\big|,
\end{align*}%
with a gap $2M$ between the valence band $E_{4}$ and the conduction band $E_{5}$.
The presence of $V_{I}$ will change the band structure. Although the band
structure including impurities can be deduced by diagonalizing eq. (\ref%
{eq4}) directly, we will treat the impurities as a perturbation,
so that the eigenenergies can be expressed as matrix elements
between unperturbed eigenstates $|u_{i}^{\mathrm{S}}\rangle $.
Straightforward calculations show that at $\Gamma =(0,0)$, the first
order correction to these two states is
\begin{eqnarray*}
E_{4}^{(1)}(\Gamma ) &=&\langle u_{4}^{S}(\Gamma )|V_{I}|u_{4}^{S}(\Gamma
)\rangle =\frac{W}{4}(\epsilon _{1}+\epsilon _{2}+\epsilon _{3}+\epsilon
_{4}) \\
E_{5}^{(1)}(\Gamma ) &=&\langle u_{5}^{S}(\Gamma )|V_{I}|u_{5}^{S}(\Gamma
)\rangle =\frac{W}{4}(\epsilon _{1}+\epsilon _{2}+\epsilon _{3}+\epsilon
_{4}),
\end{eqnarray*}%
which is just a uniform shift given by the mean of the impurity potentials.
The second order correction is
\begin{eqnarray*}
E_{4}^{(2)}(\Gamma ) &=&\sum_{i\neq 4}\frac{\big|\langle u_{4}^{S}(\Gamma
)|V_{I}|u_{i}^{S}(\Gamma )\rangle \big|^{2}}{E_{4}(\Gamma )-E_{i}(\Gamma )}=%
\frac{-W^{2}F(\epsilon _{1},\epsilon _{2},\epsilon _{3},\epsilon _{4})}{%
128(B-D)} \\
E_{5}^{(2)}(\Gamma ) &=&\sum_{i\neq 5}\frac{\big|\langle u_{5}^{S}(\Gamma
)|V_{I}|u_{i}^{S}(\Gamma )\rangle \big|^{2}}{E_{5}(\Gamma )-E_{i}(\Gamma )}=%
\frac{W^{2}F(\epsilon _{1},\epsilon _{2},\epsilon _{3},\epsilon _{4})}{%
128(B+D)},
\end{eqnarray*}%
where
\begin{align*}
& F(\epsilon _{i})=5(\epsilon _{1}^{2}+\epsilon _{2}^{2}+\epsilon
_{3}^{2}+\epsilon _{4}^{2})-2(\epsilon _{1}\epsilon _{2}+\epsilon
_{2}\epsilon _{3}+\epsilon _{1}\epsilon _{4}+\epsilon _{3}\epsilon _{4}) \\
& -6(\epsilon _{1}\epsilon _{3}+\epsilon _{2}\epsilon _{4})%
\begin{cases}
=0,\quad \epsilon _{1}=\epsilon _{2}=\epsilon _{3}=\epsilon _{4} \\
>0,\quad \mathrm{otherwise}%
\end{cases}%
\end{align*}%
is a positive-semidefinite quadratic form. Now the gap is
\begin{eqnarray}
E_{g} &=&\big|%
(E_{5}^{0}+E_{5}^{(1)}+E_{5}^{(2)})-(E_{4}^{0}+E_{4}^{(1)}+E_{4}^{(2)})\big|
\notag \\
&=&\bigg|2M\big(1+\frac{B}{M}\cdot \frac{W^{2}F(\epsilon _{1},\epsilon
_{2},\epsilon _{3},\epsilon _{4})}{64(B^{2}-D^{2})}\big)\bigg|.
\label{eqRenormalizedGap}
\end{eqnarray}
\begin{figure}[htbp]
\includegraphics*[width=0.42\textwidth]{Fig03.eps}
\caption{Evolution of Kramers pairs and their topological invariants
for a $2\times2$ supercell as disorder strength $W$ increase (top to
bottom), for a definite configuration of $\{\epsilon_i\}$. Bars:
same as in Fig. \protect\ref{FSchematic}(b). Model parameters for H
in Eqs. (4): $A=0.0729$, $B=-0.0274$, $C=0$, $D=-0.205$, $M=0.001$
and $\Delta=0$, which are also consistent with
\protect\cite{Li09,Jiang2009}. Lattice constant is set to be 1. }
\label{TwobyTwo}
\end{figure}
Equation (\ref{eqRenormalizedGap}) is the first important result in this
paper. It suggests that in the weak disorder regime, the impurities
effectively renormalize $M$\cite{Groth2009} and this renormalization comes
from a second order effect of disorder. Note the sign of this
renormalization term $\frac{B}{M}\cdot\frac{W^2 F(\epsilon_i)}{64(B^2-D^2)}$
does not depend on the concrete configuration of random impurities. If the
clean system is topologically non-trivial ($M/B>0$), the gap grows as $M+%
\mathrm{const}\cdot W^2$. This means that weak disorder tends to
make the two bands repel each other, avoiding a band-touching which
will trigger a transition to a trivial insulator\cite{Hatsugai1999}.
This is a vivid manifestation of ``robustness against weak
disorder'' for TI. If the clean system is topologically trivial
($M/B<0$), on the other hand, the gap decays as
$M-\mathrm{const}\cdot W^2$. If $M$ is small, the gap will soon
close at some small $W_c$ as disorder is turned on, before strong
disorder makes the above perturbation treatment unreliable. This gap
closing leads to an I-TI transition with a sign change of $M$. From
the topological point of view, the Chern number of $E_4$ changes by
1 after band-touching\cite
{Hatsugai1999,Murakami2007,Murakami2007b}. Remember we only
considered the spin-up block so far, but the physical argument of
topologically
trivial-nontrivial transition also applies when the time-reversal invariant $%
H$ involving both spin blocks is considered, by a simple correspondence
between Chern number and $Z_2$ invariant\cite{Moore2007}, as long as these
two blocks are decoupled.
To test the above physical pictures, we calculate the $Z_2$
topological invariants $\nu$ for a $2\times2$ supercell with both
spin components included. In Fig. \ref{TwobyTwo}, the evolution of
KPs for a definite configuration of $\{\epsilon _{i}\}$ with
increasing disorder strength $W$ are plotted. We can see that the
gap closing predicted by second order perturbation really happens at
$W=0.024$ and it does lead to a topological transition from $\nu=0$
to $\nu=1$ associated with the lower half bands. After this
transition, a topologically non-trivial gap (the green bar) emerges.
This gap will develop with further increasing $W$, until strong
disorder eventually close it again\cite{Hatsugai1999}. This
disorder-induced nontrivial gap shifts towards positive energy with
increasing $W$, as the TAI region observed in \cite{Li09} does. This
simple model itself also paves a route to producing a TI phase from
a trivial
insulator with spin-orbit coupling by constructing a superlattice\cite%
{ZFJiang2010}.
\section{VI. Large Supercells}
\begin{figure}[htbp]
\includegraphics*[width=0.5\textwidth]{Fig04.eps}
\caption{(a) Two-terminal conductance of samples with size $%
100\times100$, as functions of energy at a given disorder strength
$W=0.2$. Conductance is an average over 300 random configurations.
Conductance plateau with extremely small fluctuations corresponds
to the TAI phase, indicated by the light yellow region.
(b) The average DOS $\protect%
\rho_{\mathrm{ave}}$ (blue) and the typical DOS
$\protect\rho_{\mathrm{typ}}$ (red), calculated from 300 samples
with size $100\times 100$ and with periodic boundary conditions.
The parameters are the same as in Fig. 3.} \label{FDOS}
\end{figure}
So far, the origin of the TAI seems clear: the disorder triggers a band
touching, or a band inversion, after which a nontrivial gap opens for the
TAI phase to live in. Unfortunately, this simple argument from a small
supercell cannot be directly applied to large samples, as will be shown
in the following. In Fig. \ref{FDOS} (a), we plot the statistics of the
two-terminal conductance of $100\times100$ samples. The two-terminal
conductance is calculated by the standard non-equilibrium Green's function
method\cite{Datta}, and the Fermi energy in the leads is fixed at $E_F^{%
\mathrm{lead}}=0.12$ to provide a sufficient number of channels. The TAI phase is
identified as the conductance plateau $2e^2/h$ with extremely small
fluctuations, as in Ref. \cite{Li09}. If this region corresponds to a bulk
gap, the density of states (DOS) must vanish, at least in the case of
periodic boundary condition. The single particle local density of states
(LDOS) is calculated as\cite{MacKinnon1985}
\begin{equation}
\rho(i,E)=\frac{1}{N^2}\sum_n |\langle i|n\rangle|^2\delta(E-E_n).
\label{eqDOS}
\end{equation}
The arithmetic mean of the LDOS
\begin{equation}
\rho_{\mathrm{ave}}(E)\equiv \ll \rho(i,E) \gg
\end{equation}
is just the bulk DOS except for a constant factor, where $\ll \cdots \gg$ is
the arithmetic average over the sites of the sample. Meanwhile, the
geometric mean of the LDOS
\begin{equation}
\rho_{\mathrm{typ}}(E)\equiv \exp[ \ll \ln\rho(i,E) \gg]
\end{equation}
gives the localization property of the states. In the thermodynamic limit ($%
N\rightarrow\infty$), if $\rho_{\mathrm{typ}}(E)/\rho_{\mathrm{ave}}(E)\rightarrow0$, then the states around $E$ are localized\cite{Weisse2006}.
We thus plot $\rho_{\mathrm{ave}}$ and $\rho_{\mathrm{typ}}$ in Fig. \ref%
{FDOS} (b) in the case of periodic boundary conditions. Two remarkable
features can be read from the comparison between Fig. \ref{FDOS} (a) and
(b). First, the DOS $\rho_{\mathrm{ave}}$ does not vanish in the TAI region. As
a matter of fact, there are regions with smaller DOS outside the TAI region.
In other words, the TAI phase does not live in a bulk gap at all. Second,
the TAI region corresponds to a vanishing of $\rho_{\mathrm{typ}}$. This
means that these states are extremely localized.
\begin{figure}[htbp]
\includegraphics*[width=0.42\textwidth]{Fig05.eps}
\caption{Evolution of Kramers pairs and their topological invariants
for a $8\times8$ supercell, for a definite configuration of
$\{\epsilon_i\}$. Disorder strength $W$ increases from top to
bottom. Parameters are the same as in Fig. 3.} \label{EightByEight}
\end{figure}
These surprising results cast doubt on whether the TAI can be
understood within the above mentioned topological framework. In order
to answer this, we repeat the numerical calculation of $\nu$ for
larger supercells. In Fig. \ref {EightByEight}, we plot the
evolution of KPs for a $8\times 8$ supercell associated with a
definite configuration of $\{\epsilon _{i}\}$, with increasing
disorder strength $W$. There is also a band touching at $W=0.05$,
which triggers a nontrivial subgap represented by a green bar, as in
the case of a small supercell. On the other hand, with stronger
disorder, for example, around $W=0.2$, it develops into a wider
region of nontrivial subgaps separated by narrow KPs, instead of one
single nontrivial gap. In Fig. \ref{FigKPWidth}, we show the average
width of KPs within the TAI region; it is clear that in the
thermodynamic limit, the KPs will be extremely narrow. These narrow
KPs are topologically trivial\cite{Tang2011,KSun2011,Neupert2011}
and do not affect the topology (trivial or nontrivial) of subgaps
between them. In other words, the disorder induced nontrivial nature
soon hides in the lower KPs deeply below the TAI region. We will
argue that, those narrow and topologically trivial KPs are
responsible for nonzero DOS in this region, while these nontrivial
subgaps are responsible for the TAI region observed from transport
calculations in Ref. \cite{Li09}.
\begin{figure}[htbp]
\includegraphics*[bb=0 0 711 495,width=0.5\textwidth]{Fig06.eps}
\caption{(Color online) Average width of Kramers pairs between
$E=0.01$ and $E=0.03$ as a function of supercell size $N$.
Parameters are the same as in Fig. 3.} \label{FigKPWidth}
\end{figure}
\begin{figure}[tbph]
\includegraphics*[bb=0 0 711 495,width=0.5\textwidth]{Fig07.eps}
\caption{(Color online) Probability of falling into a nontrivial subgap $%
\langle Q\rangle $ (lines with symbols) for supercells with
different size $N $, at $W=0.2$. Every $\langle Q\rangle $ is
averaged over 500 random configurations. Green curve is the
conductance, same as that in Fig. \protect\ref{FDOS}. Parameters
are the same as in Fig. 3.} \label{FigD}
\end{figure}
The nonzero DOS is easy to understand. Although the KPs are
microscopically separated, due to the broadening $\eta$
associated with any measurement or calculation of the DOS, they will give
rise to a continuous region of finite DOS. Moreover, these flat and
well separated KPs tend to be strongly localized \cite{Essin2007}.
This is what we have observed in Fig. \ref{FDOS} (b). One may also
imagine that a fluctuating transition disorder strength $W_c$ from
sample to sample might contribute to the statistically nonzero DOS
in the TAI region. However, as stated in section V, since this gap
closing is a perturbation effect at weak disorder, the fluctuation
of $W_c$ is also very small. Indeed, the numerical results confirm
that (not shown here), compared to the width of TAI region, the
statistical error of $W_c$ is extremely small.
On the other hand, the origin of TAI phase exhibited from transport
calculations is more profound. It is well known that a nontrivial
subgap always gives rise to dissipationless edge states. However, in
our case, these disorder induced nontrivial subgaps are densely and
randomly distributed on the energy axis. To confirm that they are
indeed responsible for TAI phase, one must verify that these
nontrivial subgaps can survive through random statistics and size
scaling. To characterize this quantitatively, we define a function
\begin{equation}
Q(E)=\left\{ \begin{aligned} &1, \text{ $E\in$ a nontrivial and full subgap}
\\ &0, \text{ otherwise} \\ \end{aligned}\right.
\end{equation}%
for a definite supercell size and a definite disorder configuration.
The average $\langle Q(E)\rangle $ over the disorder ensemble is the
probability for the Fermi energy $E$ to fall into a nontrivial
subgap. If the above physical picture can make sense, $\langle
Q(E)\rangle $ must be a value very close to 1 in the TAI region,
under random averaging and size scaling. In Fig. \ref{FigD}, we show
the numerical results of $\langle Q(E)\rangle $ over averaging and
scaling. The conductance curve is also plotted as a comparison. It
is happy to see that, the $\langle Q(E)\rangle $ curves converge to
a broad peak approaching 1, and that the energy region of the broad
peak does correspond to the conductance plateau of TAI. Note this
broad peak of $\langle Q(E)\rangle $ calculated with supercell size
$\geq 10\times 10$ is sufficient to reproduce TAI region identified
from conductance plateau for $100\times 100$ samples. In the process
of scaling ($N\rightarrow \infty$), the numbers of subpairs and
subgaps increase, while the widths of individual subpairs and
subgaps decrease but with different decreasing rates. As a result, as
disclosed in Fig. \ref{FigD}, for large enough $N$, the total
measure of subgaps will dominate over that of flat subpairs,
$\langle Q\rangle\sim 1\gg1-\langle Q\rangle$. Fig. \ref{FigD} is
the most important result of this paper. It reveals that, the TAI
phase corresponds to a cluster of nontrivial subgaps instead of a
single topologically nontrivial gap. A Fermi energy falls into a
nontrivial subgap with a probability close to 1. The KPs, although
contributing to nonzero DOS in this region, are so narrow that their
measures on the energy axis are extremely small, and they are
localized therefore do not contribute to the electronic transport.
Because of the topological origin, it is now clear why the TAI is a
QSHE phase seen in real space\cite{Prodan2011}, and why the
dissipationless currents are still carried by edge
states\cite{Jiang2009}.
\section{VII. Summary}
In summary, the topological evolution of the TAI is studied in a supercell
framework. Starting from a trivial insulator phase with a small gap, weak
disorder inevitably leads to a gap closing between the valence and conduction
bands, which is a second order perturbation effect. This causes an exchange
of topological invariants between them, and results in a transition to a
topologically nontrivial phase. In the limit of large supercell, there will
be very large numbers of subbands and subgaps densely distributed on the
energy axis. However, there exists a continuous region where the Fermi
energy falls into a nontrivial subgap with an extremely high probability,
even after a statistical average over the disorder ensemble. This special
region can thus support a stable and observable TAI phase. This physical
picture also helps find disorder-induced topological insulators in other
materials and higher dimensions\cite{HMGuo2010}.
\section{Acknowledgements}
This work was supported by the Research Grants Council of Hong Kong
under Grant No. HKU705110P.
\section{\label{sec:Introduction}Introduction}
The study of the synthesis of distributions can be traced back to the seminal
work by Wyner \cite{Wyner}
where the problem studied was to characterize the smallest rate, in bits per symbol,
at which common randomness needs to be provided to two agents, Alice and Bob, each having
an arbitrary amount of private randomness, such that each of them can separately generate
a sequence of random variables from respective finite sets, with the joint distribution being
close to that of an i.i.d. sequence with a desired joint distribution at each symbol time
(the notion of approximation in \cite{Wyner} is based on relative entropy).
Wyner used this framework to
define a notion of the common information
of two dependent sources (the ones being synthesized by Alice and Bob respectively),
which is known nowadays as Wyner's common information.
This formulation is of considerable interest for problems of distributed control and
game theory with distributed agents \cite{Borkar} because of the need to
randomize for strategic reasons. It was generalized to the context of
networks by Cuff et al. \cite{Cuff10} where, in particular, the formulation allows
for communication between the agents attempting to create i.i.d. copies of a target
joint distribution, with the communication occurring at the level of blocks
of symbols, see also \cite{Gohari}.
For instance, for two agents, one can seek to find the minimum
communication rate required for a pair of sender and receiver to synthesize
a channel with a given input distribution in a distributed way.
Specifically, the sender and receiver
share a sequence of common random variables $W^{n}$. After observing
a source $X^{n}\sim\pi_{X}^{n}$ and the common randomness $W^{n}$,
the sender generates bits and sends them to the receiver, who
generates another source $Y^{n}$ according to the common randomness
$W^{n}$ and the bits that he/she receives. They cooperate in such
a way so that the channel induced by the code $P_{Y^{n}|X^{n}}$ is
close to a target channel $\pi_{Y|X}^{n}$. If the closeness here
is measured by the total variation (TV) distance between $\pi_{X}^{n}P_{Y^{n}|X^{n}}$
and the target joint distribution $\pi_{XY}^{n}$, this channel synthesis
problem was investigated in \cite{bennett2002entanglement,winter2002compression,Cuff,bennett2014quantum}
and the minimum communication rate was completely characterized by
Cuff \cite{Cuff}.
The exact synthesis of
such
a channel was considered
in \cite{harsha2010communication,Kumar,li2017distributed,yu2019exact}
where
exact synthesis here means that the synthesized channel $P_{Y^{n}|X^{n}}$
is exactly equal to the target channel $\pi_{Y|X}^{n}$. The characterization
of the minimum communication rate for exact synthesis (given the
shared randomness rate) is an interesting but hard problem. It
remains open except in some special cases: the exact synthesis
of the symmetric binary erasure source (completely characterized by Kumar,
Li, and El Gamal \cite{Kumar}) and of the doubly symmetric binary source
(completely characterized by Yu and Tan \cite{yu2019exact}).
In this paper, we consider an arguably more natural variant of the channel synthesis
problem, which we call the \emph{sequential channel synthesis problem}, in
which the encoder and the decoder work in a sequential way.
Under a mild assumption on the target joint distribution
we provide
a complete (single-letter) characterization for the point-to-point
case, which shows that the canonical symbol-by-symbol mapping is not
optimal in general (but we also show that it is indeed optimal if we make
an additional assumption
on the encoder and decoder). We also extend this result to the broadcast
scenario and the interactive communication scenario,
where we provide bounds in the
former case and a complete solution
in the latter case
under a mild assumption on the target joint distribution.
Our proofs in this paper are based on a R\'enyi entropy
method.
\subsection{Problem Formulation }
Let $\mathcal{W}$, $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{B}$
be finite sets.
Alice and Bob share a sequence of i.i.d. random variables $\left\{ W_{i}\right\} $
taking values in $\mathcal{W}$, with each $W_{i}\sim P_{W}$. Let
$\left\{ X_{i}\right\} $ be a sequence of i.i.d. random variables
taking values in $\mathcal{X}$, with each $X_{i}\sim\pi_{X}$. We
assume that $\left\{ X_{i}\right\} $ and $\left\{ W_{i}\right\} $
are independent. $\left\{ X_{i}\right\} $ is called the source sequence.
Consider the following sequential channel synthesis problem. At the
epoch $k$, upon observing the common random sequence\footnote{Throughout this paper, for any sequence
$(z_{k},k\ge1)$, we use the notation
$z^{k}:=(z_{1},\ldots,z_{k})$ for $k\ge1$.} $W^{k}$, the source sequence $X^{k}$, and previous communication
random variables $B^{k-1}$, Alice generates $B_{k}\in\mathcal{B}$
by using a random mapping with conditional distribution $P_{B_{k}|W^{k}X^{k}B^{k-1}}$,
and then sends $B_{k}$ to Bob. At the epoch $k$, upon observing
$W^{k}$, $B^{k}$, and the previous outputs $Y^{k-1}$, Bob generates
$Y_{k}$ taking values in $\mathcal{Y}$, by using a random mapping
with conditional distribution $P_{Y_{k}|W^{k}B^{k}Y^{k-1}}$. Given
a target channel $\pi_{Y|X}$, the goal for Alice and Bob is to cooperate
in this sequential manner to minimize the Kullback-Leibler (KL) divergence
$D\left(P_{Y^{n}|X^{n}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right)$ of the
synthesized joint distribution $\pi_{X}^{n}P_{Y^{n}|X^{n}}$ with
respect to the target joint distribution $\pi_{X}^{n}\pi_{Y|X}^{n}$,
where $\pi_{X}^{n}(x^{n}):=\prod_{i=1}^{n}\pi_{X}(x_{i})$ and $\pi_{Y|X}^{n}(y^{n}|x^{n}):=\prod_{i=1}^{n}\pi_{Y|X}(y_{i}|x_{i})$.
Here the conditional KL divergence for two conditional distributions
$P_{U|V}$ and $\pi_{U|V}$ conditioned on the marginal distribution
$\pi_{V}$ is defined as
\[
D\left(P_{U|V}\|\pi_{U|V}|\pi_{V}\right):=D\left(P_{U|V}\pi_{V}\|\pi_{U|V}\pi_{V}\right).
\]
The channel synthesized by Alice and Bob can be expressed as
\[
P_{Y^{n}|X^{n}}\left(y^{n}|x^{n}\right):=\sum_{b^{n}}\sum_{w^{n}}P_{W}^{n}\left(w^{n}\right)\prod_{k=1}^{n}P_{B_{k}|W^{k}X^{k}B^{k-1}}\left(b_{k}|w^{k},x^{k},b^{k-1}\right)\prod_{k=1}^{n}P_{Y_{k}|W^{k}B^{k}Y^{k-1}}\left(y_{k}|w^{k},b^{k},y^{k-1}\right),
\]
where $P_{W}^{n}\left(w^{n}\right):=\prod_{i=1}^{n}P_{W}(w_{i})$.
We are interested in characterizing
\begin{equation}
\Gamma\left(\pi_{XY},P_{W}\right):=\lim_{n\to\infty}\frac{1}{n}\Gamma^{(n)}\left(\pi_{XY},P_{W}\right),\label{eq:-3}
\end{equation}
where
\begin{equation}
\Gamma^{(n)}\left(\pi_{XY},P_{W}\right):=\inf_{\left\{ \left(P_{B_{k}|W^{k}X^{k}B^{k-1}},P_{Y_{k}|W^{k}B^{k}Y^{k-1}}\right)\right\} _{k=1}^{n}}D\left(P_{Y^{n}|X^{n}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right),\label{eq:-31}
\end{equation}
and $\pi_{XY}(x,y):=\pi_{X}(x)\pi_{Y|X}(y|x)$. The limit in \eqref{eq:-3}
exists since $\Gamma^{(n)}\left(\pi_{XY},P_{W}\right)$ is subadditive
in $n$.
When $P_{W}$ is degenerate, i.e., $W_{i}$ is constant for all $i$,
this corresponds to the case in which there is no common randomness. The optimal asymptotic KL divergence for this case is denoted
by $\Gamma_{0}\left(\pi_{XY}\right)$.
We assume throughout that $|\mathcal{B}| \ge 2$, where $|\mathcal{B}|$ denotes the cardinality of $\mathcal{B}$,
since otherwise the problem is of no interest.
\subsection{Notation}
\label{subsec:notation}
We use upper-case letters, e.g., $X$, to denote a random variable
on
a finite alphabet
$\mathcal{X}$. We use the lower-case letter $x$ to
denote a realization of $X$. We denote the distribution or the probability
mass function of $X$ as $P_{X}$, and use $Q_{X}$ to denote the
distribution of another r.v. on the same alphabet $\mathcal{X}$.
For brevity, the probability values $P_{X}(x)$ are sometimes written
as $P(x)$, when the subscript and the parameter are the same except
that the subscript is upper-case, and the parameter is lower-case.
We use $\mathbf{X}:=(X_{1},X_{2},...,X_{N})$ to denote a random vector.
We use the notation $A\leftrightarrow C\leftrightarrow B$ for a triple
of random variables $(A,B,C)$ to denote that $A$ and $B$ are conditionally
independent given $C$. We will also use notations $H_{Q}(X)$ or
$H(Q_{X})$ to denote the entropy of $X\sim Q_{X}$. If the distribution
is denoted by $P_{X}$,
we sometimes
write the entropy as $H(X)$ for brevity.
We use $\supp(P_{X})$ to denote the support of $P_{X}$.
The logarithm is taken to the natural base.
Note that, as is the case for many other information-theoretic results,
the results in this paper can be viewed as independent of the choice
of the base of the logarithm as long as exponentiation is interpreted
as being with respect to the same base.
Also, for notational convenience, we will write $\mathrm{Unif}\left[1:e^{NR}\right]$ for a probability distribution that is uniform on
$\left[\lceil e^{NR} \rceil\right]$,
where for a positive integer $n$
the notation $[n]$ denotes the set
$\{1, \ldots, n\}$.
Since there are different notions of conditional R\'{e}nyi divergence in the literature,
we give a detailed description of the notion we use.
Fix distributions $P_{X},Q_{X}$ on the same alphabet $\mathcal{X}$.
The {\em relative entropy} and, for $s > 0$, the {\em R\'enyi divergence of order
$1+s$} are respectively defined as
\begin{align}
D(P_{X}\|Q_{X}) & :=\sum_{x\in\mathrm{supp}(P_{X})}P_{X}(x)\log\frac{P_{X}(x)}{Q_{X}(x)}\label{eq:-19-1}\\*
D_{1+s}(P_{X}\|Q_{X}) & :=\frac{1}{s}\log\sum_{x\in\mathrm{supp}(P_{X})}P_{X}(x)^{1+s}Q_{X}(x)^{-s}.\label{eq:-40}
\end{align}
These are standard notions, see e.g. \cite{Erven}.
The conditional versions are respectively defined as
\begin{align}
D(P_{Y|X}\|Q_{Y|X}|P_{X}) & :=D(P_{X}P_{Y|X}\|P_{X}Q_{Y|X})\\*
D_{1+s}(P_{Y|X}\|Q_{Y|X}|P_{X}) & :=D_{1+s}(P_{X}P_{Y|X}\|P_{X}Q_{Y|X}),
\end{align}
the first of these being of course standard.
It is known
that $D_1(P_{X}\|Q_{X}):=\lim_{s\to0}D_{1+s}(P_{X}\|Q_{X})=D(P_{X}\|Q_{X})$ so a special
case of the R\'enyi divergence (or the conditional version) is the usual
relative entropy (respectively the conditional version).
It can be checked that the data processing inequality for relative entropy extends to
the R\'{e}nyi divergence, i.e. for $s\ge 0$,
\[
D_{1+s}(P_{XY}\|Q_{XY}) \ge D_{1+s}(P_{X}\|Q_{X}).
\]
The entropy of a random variable $X$ on a finite alphabet
$\mathcal{X}$ with probability distribution $P_{X}$ can be written as
\[
H(X) := H(P_{X}) = \log |\mathcal{X}| - D(P_{X}\|U_{X}),
\]
where $|\mathcal{X}|$ denotes the cardinality of $\mathcal{X}$ and $U_{X}$ denotes the
uniform distribution on $\mathcal{X}$.
Thus for $s > 0$ the {\em R\'enyi entropy of order
$1+s$} is defined as
\[
H_{1+s}(X) := H_{1+s}(P_{X}) := \log |\mathcal{X}| - D_{1+s}(P_{X}\|U_{X}) =
- \frac{1}{s}\log\sum_{x\in\mathrm{supp}(P_{X})}P_{X}(x)^{1+s}.
\]
This is a standard notion.
Note that if $X$ and $Y$ are independent then for all $s > 0$ we have
\[
H_{1+s}(XY) = H_{1+s}(X) + H_{1+s}(Y).
\]
For the conditional versions, for conditional entropy we have
\[
H(Y|X) = H(XY) - H(X) = H(P_{Y|X}| P_{X}) = \log |\mathcal{Y}| - D(P_{Y|X}\|U_{Y}|P_{X}).
\]
Thus for $s > 0$ the {\em conditional R\'enyi entropy of order
$1+s$} is defined as
\[
H_{1+s}(Y|X) := H_{1+s}(P_{Y|X}| P_{X}) := \log |\mathcal{Y}| - D_{1+s}(P_{Y|X}\|U_{Y}|P_{X}) =
- \frac{1}{s}\log\sum_{x\in\mathrm{supp}(P_{X})}P_{X}(x) \sum_{y\in\mathrm{supp}(P_{Y})} P_{Y|X}(y|x)^{1+s}.
\]
It can be checked that we have
$\lim_{s\to0}H_{1+s}(X) = H(X)$
and
$\lim_{s\to0}H_{1+s}(Y|X)= H(Y|X)$.
With these definitions, as a caveat we note that while it is true that
\[
H(Y|X) = \sum_{x\in\mathrm{supp}(P_{X})} P_{X}(x) H(Y|X = x),
\]
where $H(Y|X = x)$ denotes the entropy of the probability distribution $(P_{Y|X}(y|x), y \in \mathcal{Y})$,
for $s > 0$ we have in general that
\[
H_{1+s}(Y|X) \neq \sum_{x\in\mathrm{supp}(P_{X})} P_{X}(x) H_{1+s}(Y|X = x),
\]
where we will use the notation $H_{1+s}(Y|X = x)$ to denote the R\'{e}nyi entropy of
the probability distribution $(P_{Y|X}(y|x), y \in \mathcal{Y})$ (similarly, for instance,
$H_{1+s}(Z|Y, X = x)$ will denote the conditional R\'{e}nyi entropy under the joint
probability distribution $(P_{YZ|X}(y,z|x), (y,z) \in \mathcal{Y} \times \mathcal{Z})$).
Note that
$\lim_{s\to0}H_{1+s}(Y|X=x) = H(Y|X=x)$.
Similarly the chain rule
does not hold for
R\'{e}nyi entropy, i.e. for $s > 0$
in general we have
\[
H_{1+s}(YZ|X) \neq H_{1+s}(Y|X) + H_{1+s}(Z|XY).
\]
On the other hand, if $Z$ is independent of $(X,Y)$ then we have
\[
H_{1+s}(YZ|X) = H_{1+s}(Y|X) + H_{1+s}(Z).
\]
In this document we do not need the R\'{e}nyi divergence and related notions for $s < 0$.
\section{The Point-to-point Case}
For the sequential channel synthesis problem,
in this section
we provide a single-letter characterization
of $\Gamma\left(\pi_{XY},P_{W}\right)$
in Theorem
\ref{thm:sequentialCS},
which is one of our main results.
Define
$\Psi: \mathbb{R} \to [0,+\infty]$
by
\begin{equation} \label{eq:psidef}
\Psi(t):=\min_{\substack{P_{UV},P_{B|XUV},P_{Y|BUV}:\\
H\left(U|V\right) \le H\left(BU|XYV\right)+t
}
}D\left(P_{Y|XV}\|\pi_{Y|X}|\pi_{X}P_{V}\right),
\end{equation}
where $B\in\mathcal{B}$, and all the entropies in \eqref{eq:psidef}
are evaluated at the joint distribution $\pi_{X}P_{UV}P_{B|XUV}P_{Y|BUV}$
and the distribution $P_{Y|XV}$ is also induced by this joint distribution.
Note that the minimum in \eqref{eq:psidef} is achieved
because a nonnegative lower semicontinuous function
achieves its minimum on a compact set.
Denote by $t_{\min}$ the infimum of $t\in \mathbb{R}$ such that
$\Psi(t)<+\infty$.
\begin{lem} \label{lem:psilem}
1) $\Psi(t)$ is convex and nonincreasing on $\mathbb{R}$. Moreover, $\Psi(t)$ is equal to $+\infty $ on $(-\infty,t_{\min})$, and continuous on $(t_{\min},+\infty)$. \\
2) A sufficient condition for $t_{\min}<0$ is the following assumption.
Assumption 1: $|\mathcal{B}|\ge 2$ and there is at least one $y$ such that $\pi_{Y|X}(y|x)>0$ for all $x$ such that $\pi_{X}(x)>0$.
\end{lem}
\begin{IEEEproof}
We first prove Statement 1). Since both the objective function and the constraint functions are linear in $P_V$ given $(P_{U|V},P_{B|XUV},P_{Y|BUV})$, $\Psi(t)$ is in fact a convex function. This can be shown by the standard argument: given two tuples of r.v.'s $(X_1,V_1,U_1,B_1,Y_1)$ and $(X_2,V_2,U_2,B_2,Y_2)$, we can define a new r.v. $(X,U,B,Y):=(X_J,U_J,B_J,Y_J)$ and $V:=(V_J,J)$, where $J \sim \mathrm{Bern}(p)$ is independent of $(V_1,V_2)$. Then, the resultant objective function and constraint functions are the averages (with respect to $\mathrm{Bern}(p)$) of those for $(X_1,V_1,U_1,B_1,Y_1)$ and $(X_2,V_2,U_2,B_2,Y_2)$.
By the convexity, $\Psi(t)$ is continuous on $(t_{\min},+\infty)$.
We next prove Statement 2). Choose $U,V$ to be constants, let $B \sim \mathrm{Unif}(\mathcal{B})$ be independent of $X$, and let $Y=y$ be constant as well (here $y$ is the element given in the lemma). Then this set of distributions is feasible if $t> -\log |\mathcal{B}|$, and the resultant value of the objective is finite. Hence, $t_{\min}<0$.
\end{IEEEproof}
Denote $\Delta\left(\pi_{XY},P_{W}\right):=\Psi(H(W))$.
The proof of the following theorem is provided in
Appendix \ref{sec:Proof-of-Theorem}.
\begin{thm}
\label{thm:sequentialCS}
Under Assumption 1, we have
\begin{equation}
\Gamma\left(\pi_{XY},P_{W}\right) = \Delta\left(\pi_{XY},P_{W}\right).\label{eq:-18-1}
\end{equation}
Furthermore, it suffices to restrict the cardinality of $\mathcal{U}$
and $\mathcal{V}$
in the calculation of
$\Delta\left(\pi_{XY},P_{W}\right)$
such that
$\left|\mathcal{V}\right|\leq 2$
and $\left|\mathcal{U}\right|\leq 2\left|\mathcal{X}\right|\left|\mathcal{Y}\right|$.
\end{thm}
\begin{rem}
Note that $\Delta\left(\pi_{XY},P_{W}\right)$ depends on $P_{W}$
only through its entropy $H\left(W\right)$.
\end{rem}
We next consider the case in which the stochastic encoder $P_{B_{k}|W^{k}X^{k}B^{k-1}}$
and decoder $P_{Y_{k}|W^{k}B^{k}Y^{k-1}}$ are respectively replaced
by $P_{B_{k}|X^{k}}$ and $P_{Y_{k}|B^{k}Y^{k-1}}$. In other words,
in this case, it is not allowed to extract common randomness for the
communication at the $k$-th epoch from the previous communication
bits $B^{k-1}$
and there is no externally provided common randomness.
In the following result we show that a symbol-by-symbol mapping suffices
to achieve the optimal KL divergence for this case, which we denote
by $\widetilde{\Gamma}_{0}\left(\pi_{XY}\right)$.
\begin{rem}
Note that $\widetilde{\Gamma}_{0}\left(\pi_{XY}\right)$ is a priori
no smaller than $\Gamma_{0}\left(\pi_{XY}\right)$, because the latter
allows for stochastic encoders of the form $P_{B_{k}|X^{k}B^{k-1}}$
which, with decoders of the form $P_{Y_{k}|B^{k}Y^{k-1}}$, allow
for the possibility of extracting common randomness from the communication.
\end{rem}
\begin{thm}
\label{thm:where--is}If the stochastic encoder $P_{B_{k}|W^{k}X^{k}B^{k-1}}$
and decoder $P_{Y_{k}|W^{k}B^{k}Y^{k-1}}$ are respectively replaced
by $P_{B_{k}|X^{k}}$ and $P_{Y_{k}|B^{k}Y^{k-1}}$, then
\[
\widetilde{\Gamma}_{0}\left(\pi_{XY}\right)=\widetilde{\Delta}\left(\pi_{XY}\right):=\inf_{P_{B|X},P_{Y|B}}D\left(P_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)
\]
where $B\in\mathcal{B}$ and $P_{Y|X}$ is induced by the joint distribution
$\pi_{X}P_{B|X}P_{Y|B}$.
\end{thm}
\begin{IEEEproof}
It is easy to see that
$\widetilde{\Gamma}_{0}\left(\pi_{XY}\right)\leq\widetilde{\Delta}\left(\pi_{XY}\right)$
since $\widetilde{\Delta}\left(\pi_{XY}\right)$ is achievable by
a communication scheme consisting of symbol-by-symbol mappings.
On the other hand,
\begin{align}
& D\left(P_{Y^{n}|X^{n}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right)\nonumber \\
& =D\left(\prod_{k=1}^{n}P_{Y_{k}|X^{k}Y^{k-1}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right)\nonumber \\
& =\sum_{k=1}^{n}D\left(P_{Y_{k}|X^{k}Y^{k-1}}\|\pi_{Y|X}|\pi_{X}^{k}P_{Y^{k-1}|X^{k}}\right)\nonumber \\
& \geq\sum_{k=1}^{n}D\left(P_{Y_{k}|X^{k}}\|\pi_{Y|X}|\pi_{X}^{k}\right)\label{eq:-4}\\
& \geq\sum_{k=1}^{n}\min_{x^{k-1}}D\left(P_{Y_{k}|X_{k},X^{k-1}=x^{k-1}}\|\pi_{Y|X}|\pi_{X}\right)\label{eq:-5}
\end{align}
where \eqref{eq:-4} follows
from the convexity of $D(p\|q)$ in the pair $(p,q)$, and \eqref{eq:-5}
follows since
\begin{align*}
D\left(P_{Y_{k}|X^{k}}\|\pi_{Y|X}|\pi_{X}^{k}\right) & =\mathbb{E}_{X^{k-1}\sim\pi_{X}^{k-1}}D\left(P_{Y_{k}|X_{k},X^{k-1}=x^{k-1}}\|\pi_{Y|X}|\pi_{X}\right)\\
& \geq\min_{x^{k-1}}D\left(P_{Y_{k}|X_{k},X^{k-1}=x^{k-1}}\|\pi_{Y|X}|\pi_{X}\right).
\end{align*}
It is easy to verify that (note that the following does not hold if
we consider encoder $P_{B_{k}|X^{k}B^{k-1}}$)
\begin{align}
P_{X^{k}B_{k}Y_{k}}\left(x^{k},b_{k},y_{k}\right) & =\sum_{b^{k-1},y^{k-1}}\pi_{X}^{k}\left(x^{k}\right)P_{B^{k-1}|X^{k-1}}\left(b^{k-1}|x^{k-1}\right)P_{Y^{k-1}|B^{k-1}X^{k-1}}(y^{k-1}|b^{k-1},x^{k-1}) \nonumber\\
& \qquad\times P_{B_{k}|X^{k}}\left(b_{k}|x^{k}\right)P_{Y_{k}|B^{k}Y^{k-1}}\left(y_{k}|b^{k},y^{k-1}\right) \label{eq:factor}\\
& =\pi_{X}^{k}\left(x^{k}\right)P_{B_{k}|X^{k}}\left(b_{k}|x^{k}\right)P_{Y_{k}|B_{k}X^{k-1}}\left(y_{k}|b_{k},x^{k-1}\right).
\end{align}
Here the last equality follows from the factorization inside the sum
in \eqref{eq:factor}, namely
\[
P_{X^{k}B^{k}Y^{k}}=\pi_{X}^{k}P_{B^{k-1}|X^{k-1}}P_{Y^{k-1}|B^{k-1}X^{k-1}}P_{B_{k}|X^{k}}P_{Y_{k}|B^{k}Y^{k-1}}.
\]
Taking the conditional distribution given $\left(X^{k-1},B_{k}\right)$ yields
\[
P_{Y^{k}B^{k-1}|X^{k}B_{k}}=P_{B^{k-1}|X^{k-1}}P_{Y^{k-1}|B^{k-1}X^{k-1}}P_{Y_{k}|B^{k}Y^{k-1}},
\]
whose RHS does not involve $x_{k}$. This implies the Markov chain
$\left(Y^{k},B^{k-1}\right)\leftrightarrow\left(X^{k-1},B_{k}\right)\leftrightarrow X_{k}$
and, in particular, $Y_{k}\leftrightarrow\left(B_{k},X^{k-1}\right)\leftrightarrow X_{k}$,
which gives the last equality above.
Let $\hat{x}^{k-1}$ be the optimal sequence that attains the minimum
in \eqref{eq:-5}. Then given $X^{k-1}=\hat{x}^{k-1}$,
\begin{align*}
P_{X_{k}B_{k}Y_{k}|X^{k-1}}\left(x_{k},b_{k},y_{k}|\hat{x}^{k-1}\right) & =\pi_{X}\left(x_{k}\right)P_{B_{k}|X^{k}}\left(b_{k}|x_{k},\hat{x}^{k-1}\right)P_{Y_{k}|B_{k}X^{k-1}}\left(y_{k}|b_{k},\hat{x}^{k-1}\right).
\end{align*}
By identifying $P_{B|X}=P_{B_{k}|X_{k},X^{k-1}=\hat{x}^{k-1}},P_{Y|B}=P_{Y_{k}|B_{k},X^{k-1}=\hat{x}^{k-1}}$,
we have
$\widetilde{\Gamma}_{0}\left(\pi_{XY}\right)\geq\widetilde{\Delta}\left(\pi_{XY}\right)$.
\end{IEEEproof}
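Before moving on, we illustrate $\widetilde{\Delta}\left(\pi_{XY}\right)$ numerically. The following Python sketch is purely illustrative and not part of the formal development; the target distribution, the alphabet sizes, and the search budget are all assumptions. It estimates $\widetilde{\Delta}\left(\pi_{XY}\right)$ on a toy example by a crude random search over the pair $\left(P_{B|X},P_{Y|B}\right)$.
\begin{verbatim}
# A crude numerical sketch (all values illustrative) for estimating
#   tilde-Delta(pi_XY) = inf_{P_{B|X}, P_{Y|B}} D(P_{Y|X} || pi_{Y|X} | pi_X)
# on small alphabets by random search over the two stochastic matrices.
import numpy as np

rng = np.random.default_rng(0)
nX, nB, nY = 3, 2, 3

# Assumed target joint pi_XY (uniform pi_X, near-identity channel).
pi_XY = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]]) / 3.0
pi_X = pi_XY.sum(axis=1)
pi_YgX = pi_XY / pi_X[:, None]          # target channel pi_{Y|X}

def cond_kl(P_YgX):
    """D(P_{Y|X} || pi_{Y|X} | pi_X), with the convention 0 log 0 = 0."""
    mask = P_YgX > 0
    terms = np.where(mask,
                     P_YgX * np.log(np.where(mask, P_YgX, 1.0) / pi_YgX),
                     0.0)
    return float(pi_X @ terms.sum(axis=1))

best = np.inf
for _ in range(20000):
    P_BgX = rng.dirichlet(np.ones(nB), size=nX)   # rows: P_{B|X=x}
    P_YgB = rng.dirichlet(np.ones(nY), size=nB)   # rows: P_{Y|B=b}
    best = min(best, cond_kl(P_BgX @ P_YgB))      # P_{Y|X} = P_{B|X} P_{Y|B}

print(f"crude estimate of tilde-Delta: {best:.4f}")
\end{verbatim}
Since $|\mathcal{B}|<\min\left(|\mathcal{X}|,|\mathcal{Y}|\right)$ in this toy example, the rows of any achievable $P_{Y|X}$ lie in the convex hull of the two rows of $P_{Y|B}$, so the target channel cannot be synthesized exactly and the estimate should be strictly positive.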
\section{The Broadcast Case}
We now consider the sequential channel synthesis problem over a noiseless
broadcast channel. Let $\mathcal{W}$, $\hat{\mathcal{W}}$, $\mathcal{X}$,
$\mathcal{Y}$, $\mathcal{Z}$ and $\mathcal{B}$ be finite sets.
Assume that $|\mathcal{B}| \ge 2$.
Assume that a sender Alice and two receivers Bob and Charles share
a common random sequence $W^{k}$; in addition to this, Alice and
Bob also share another common random sequence $\hat{W}^{k}$. Here
$\{W_{i}\}$ is an i.i.d. sequence of random variables taking values
in $\mathcal{W}$ with each $W_{i}\sim P_{W}$ and $\{\hat{W}_{i}\}$
is an i.i.d. sequence of random variables taking values in $\hat{\mathcal{W}}$
with each $\hat{W}_{i}\sim P_{\hat{W}}$.
There is also a sequence of random variables $\{X_{i}\}$ taking values
in $\mathcal{X}$, with $X_{i}\sim P_{X}$. We assume that $\{X_{i}\}$,
$\{W_{i}\}$ and $\{\hat{W}_{i}\}$ are mutually independent. The
sequence $\{X_{i}\}$ is called the source sequence
and is observed only by Alice.
At epoch $k$, upon observing the random sequences $\left(W^{k},\hat{W}^{k}\right)$,
the source sequence $X^{k}$, and previous communication random variables
$B^{k-1}$, Alice generates $B_{k}\in\mathcal{B}$ by using a random
mapping with conditional distribution $P_{B_{k}|W^{k}\hat{W}^{k}X^{k}B^{k-1}}$,
and then sends $B_{k}$ to Bob and Charles. Upon observing $W^{k},\hat{W}^{k}$,
$B^{k}$, and previous outputs $Y^{k-1}$, Bob generates $Y_{k}$
by using a random mapping with conditional distribution $P_{Y_{k}|W^{k}\hat{W}^{k}B^{k}Y^{k-1}}$.
Upon observing $W^{k}$, $B^{k}$, and the previous outputs $Z^{k-1}$,
Charles generates $Z_{k}$ by using a random mapping with conditional
distribution $P_{Z_{k}|W^{k}B^{k}Z^{k-1}}$. Given a target broadcast
channel $\pi_{YZ|X}$, the goal is for Alice, Bob, and Charles to
cooperate in this sequential manner to minimize the KL divergence
$D\left(P_{Y^{n}Z^{n}|X^{n}}\|\pi_{YZ|X}^{n}|\pi_{X}^{n}\right)$
between the synthesized joint distribution $\pi_{X}^{n}P_{Y^{n}Z^{n}|X^{n}}$
and the target joint distribution $\pi_{X}^{n}\pi_{YZ|X}^{n}$. Here
the broadcast channel from Alice to Bob and Charles that has been
synthesized is
\begin{align*}
P_{Y^{n}Z^{n}|X^{n}}\left(y^{n},z^{n}|x^{n}\right) & :=\sum_{w^{n},\hat{w}^{n},b^{n}}P_{W}^{n}\left(w^{n}\right)P_{\hat{W}}^{n}\left(\hat{w}^{n}\right)\prod_{k=1}^{n}P_{B_{k}|W^{k}\hat{W}^{k}X^{k}B^{k-1}}\left(b_{k}|w^{k},\hat{w}^{k},x^{k},b^{k-1}\right)\\
& \qquad\times\prod_{k=1}^{n}P_{Y_{k}|W^{k}\hat{W}^{k}B^{k}Y^{k-1}}\left(y_{k}|w^{k},\hat{w}^{k},b^{k},y^{k-1}\right)\prod_{k=1}^{n}P_{Z_{k}|W^{k}B^{k}Z^{k-1}}\left(z_{k}|w^{k},b^{k},z^{k-1}\right).
\end{align*}
We are interested in characterizing
\begin{equation}
\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right):=\lim_{n\to\infty}\inf_{\left\{ \left(P_{B_{k}|W^{k}\hat{W}^{k}X^{k}B^{k-1}},P_{Y_{k}|W^{k}\hat{W}^{k}B^{k}Y^{k-1}},P_{Z_{k}|W^{k}B^{k}Z^{k-1}}\right)\right\} _{k=1}^{n}}\frac{1}{n}D\left(P_{Y^{n}Z^{n}|X^{n}}\|\pi_{YZ|X}^{n}|\pi_{X}^{n}\right).\label{eq:-3-1}
\end{equation}
For this sequential broadcast channel synthesis problem, we prove
the following result. The proof is provided in Appendix \ref{sec:Proof-of-Theorem-broadcast}.
\begin{thm}
Assume
$|\mathcal{B}|\ge 2$ and there is at least one pair $(y,z)$ such that $\pi_{YZ|X}(y,z|x)>0$ for all $x$ such that $\pi_{X}(x)>0$.
Then we have
\label{thm:sequentialCS-broadcast-1}
\begin{equation}
\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)\leq\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)\leq\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)\label{eq:-18-1-4-2}
\end{equation}
where
\begin{equation}
\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right):=\min_{\substack{P_{U\hat{U}V},P_{B|XU\hat{U}V},P_{Y|BU\hat{U}V},P_{Z|BUV}:\\
H\left(U|V\right)\le H(W)+H\left(BU|XYZV\right),\\
H\left(U\hat{U}|V\right)\le H(\hat{W})+H(W)+H\left(BU\hat{U}|XYZV\right)
}
}D\left(P_{YZ|XV}\|\pi_{YZ|X}|\pi_{X}P_{V}\right),\label{eq:-18-1-4-2-1}
\end{equation}
and $\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ is defined
as the expression identical to $\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$
except that $I\left(B;\hat{U}|XYZUV\right)$ is added
to the LHS of the first constraint.
Here all the entropies are evaluated
at the joint distribution \[
\pi_{X}P_{U\hat{U}V}P_{B|XU\hat{U}V}P_{Y|BU\hat{U}V}P_{Z|BUV}
\]
and the distribution $P_{YZ|XV}$ is also induced by this joint distribution.
Furthermore, it suffices to restrict the cardinality of
$\mathcal{V}, \mathcal{U}$
and $\hat{\mathcal{U}}$
in the calculation of
$\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$
such that $\left|\mathcal{V}\right|\leq 3$, $\left|\mathcal{U}\right|\leq 3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$,
and $|\hat{\mathcal{U}}|\leq 3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|$. Similarly, it suffices to restrict the cardinality of
$\mathcal{V}, \mathcal{U}$
and $\hat{\mathcal{U}}$
in the calculation of
$\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$
such that $\left|\mathcal{V}\right|\leq 3$, $\left|\mathcal{U}\right|\leq 3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$,
and $|\hat{\mathcal{U}}|\leq 3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)(|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$.
\end{thm}
\section{The Interactive Communication Case}
We now consider the sequential channel synthesis problem over a noiseless
\emph{two-way} channel. Let $\{(S_{k},X_{k})\}$ be a memoryless source
with $(S_{k},X_{k})\sim\pi_{SX}$ for all $k$. At epoch $k$, upon
observing the common random sequence $W^{k}$, the source sequence
$S^{k}$, previous communication random variables $\left(A^{k-1},B^{k-1}\right)$, and the previous output $Y^{k-1}$,
Alice generates
$A_{k}\in\mathcal{A}$ by using a
random mapping $P_{A_{k}|S^{k}A^{k-1}B^{k-1}Y^{k-1}W^{k}}$, and then sends
it to Bob. At the same epoch, upon observing the common random sequence
$W^{k}$, the source sequence $X^{k}$, previous communication
random variables $\left(A^{k-1},B^{k-1}\right)$, and the previous output $Z^{k-1}$, Bob generates
$B_{k}\in\mathcal{B}$ by using a random mapping $P_{B_{k}|X^{k}A^{k-1}B^{k-1}Z^{k-1}W^{k}}$,
and then sends it to Alice.
Also at epoch $k$, upon observing $W^{k}$, $A^{k},B^{k}$, source
sequence $S^{k}$, and previous outputs $Y^{k-1}$, Alice generates
$Y_{k}$ by using a random mapping $P_{Y_{k}|A^{k}B^{k}S^{k}Y^{k-1}W^{k}}$.
Upon observing $W^{k}$, $A^{k},B^{k}$, source sequence $X^{k}$,
and previous outputs $Z^{k-1}$, Bob generates a r.v. $Z_{k}$ by
using a random mapping $P_{Z_{k}|A^{k}B^{k}X^{k}Z^{k-1}W^{k}}$. Given
a target channel $\pi_{YZ|SX}$,
Alice and Bob
cooperate
to minimize the KL divergence $D\left(P_{Y^{n}Z^{n}|S^{n}X^{n}}\|\pi_{YZ|SX}^{n}|\pi_{SX}^{n}\right)$
between the synthesized channel and the target channel. We are interested
in characterizing
\begin{equation}
\Gamma\left(\pi_{SXYZ},P_{W}\right):=\lim_{n\to\infty}\inf_{\left\{ \left(\substack{P_{A_{k}|S^{k}A^{k-1}B^{k-1}Y^{k-1}W^{k}},P_{B_{k}|X^{k}A^{k-1}B^{k-1}Z^{k-1}W^{k}},\\
P_{Y_{k}|A^{k}B^{k}S^{k}Y^{k-1}W^{k}},P_{Z_{k}|A^{k}B^{k}X^{k}Z^{k-1}W^{k}}
}
\right)\right\} _{k=1}^{n}}\frac{1}{n}D\left(P_{Y^{n}Z^{n}|S^{n}X^{n}}\|\pi_{YZ|SX}^{n}|\pi_{SX}^{n}\right).\label{eq:-3-1-5}
\end{equation}
Let
\[
\Delta\left(\pi_{SXYZ},P_{W}\right):=\min_{\substack{P_{UV},P_{A|SUV},P_{B|XUV},P_{Y|ABUV},P_{Z|ABUV}:\\
H\left(U|V\right)\leq H\left(ABU|SXYZV\right)+H\left(W\right)
}
}D\left(P_{YZ|SXV}\|\pi_{YZ|SX}|\pi_{SX}P_{V}\right),
\]
where $A\in\mathcal{A},B\in\mathcal{B}$ and all the entropies above
are evaluated at the joint distribution $\pi_{SX}P_{UV}P_{B|XUV}P_{A|SUV}P_{Y|ABUV}P_{Z|ABUV}$
and the distribution $P_{YZ|SXV}$ is also induced by this joint distribution.
Note that these expressions depend on $P_{W}$ only through $H(W)$.
For this interactive version of the sequential channel synthesis problem,
we prove the following result. The proof is provided in Appendix \ref{sec:Proof-of-Theorem-interaction}.
\begin{thm}
\label{thm:sequentialCS-interaction}
Assume that $|\mathcal{A}|, |\mathcal{B}|\ge 2$ and there is at least one $(y,z)$ such that $\pi_{YZ|SX}(y,z|s,x)>0$ for all $(s,x)$ such that $\pi_{SX}(s,x)>0$. We have
\begin{equation}
\Gamma\left(\pi_{SXYZ},P_{W}\right) = \Delta \left(\pi_{SXYZ},P_{W}\right).
\label{eq:-18-1-4-1}
\end{equation}
Furthermore,
in the calculation of
$\Delta\left(\pi_{SXYZ},P_{W}\right)$
it suffices to restrict the cardinality of $\mathcal{U}$
and $\mathcal{V}$ such that
$\left|\mathcal{V}\right|\leq 2$
and $\left|\mathcal{U}\right|\leq 2 \left|\mathcal{S}\right|\left|\mathcal{X}\right|\left|\mathcal{Y}\right|\left|\mathcal{Z}\right|$.
\end{thm}
\appendices{}
\section{\label{sec:Proof-of-Theorem}Proof of Theorem \ref{thm:sequentialCS}}
\subsection{Cardinality Bounds}
To prove the claimed cardinality bounds for $\Delta\left(\pi_{XY},P_{W}\right)$, it suffices to prove that the same
cardinality bounds hold for $\Psi(t)$ with $t\ge 0$.
Note that the constraint in \eqref{eq:psidef} can be rewritten as
$H(XY|V)-H(BXY|UV) \le t$. By the support lemma in \cite[Appendix C]{Gamal}, the cardinality of $\mathcal{V}$ can be upper bounded by
$2$, without changing the constraint function and the objective function (both of which are linear in $P_V$).
Applying the support lemma in \cite[Appendix C]{Gamal} again, for each $v$, we can restrict the size of the support of $P_{U|V=v}$ to be no larger than $|\mathcal{X}||\mathcal{Y}|$
without changing the linear functionals $P_{XY|V=v}$ and $H(BXY|U,V=v)$, and hence also without changing the constraint function and the objective function.
Therefore, the cardinality of $\mathcal{U}$ can be upper bounded by
$2|\mathcal{X}||\mathcal{Y}|$.
\subsection{Achievability}
To prove the achievability part, i.e., $\Gamma\left(\pi_{XY},P_{W}\right)\leq\Delta\left(\pi_{XY},P_{W}\right)$,
we first prove
\begin{equation}
\Gamma\left(\pi_{XY},P_{W}\right)\leq\overline{\Delta}\left(\pi_{XY},P_{W}\right):=\inf_{\substack{P_{U},P_{B|XU},P_{Y|BU}:\\
H\left(U\right) < H\left(BU|XY\right)+H\left(W\right)
}
}D\left(P_{Y|X}\|\pi_{Y|X}|\pi_{X}\right).\label{eq:-18-1-1}
\end{equation}
Let $\left(Q_{U},Q_{B|XU},Q_{Y|BU}\right)$ be a tuple
that satisfies the constraint in the expression on the RHS of
\eqref{eq:-18-1-1}.
For the achievability proof
we will adopt block-by-block codes. For brevity, for a sequence of
r.v.'s $\left\{ Z_{i}\right\} $, we denote $\mathbf{Z}:=Z^{N}$ and
$\mathbf{Z}_{k}:=Z_{\left(k-1\right)N+1}^{kN}$.
We will also use the notation $Z_{ki}$ for $Z_{(k-1)N+ i}$ for $k \ge 1$ and $1 \le i \le N$ when
$N$ is known from the context.
Let $\mathcal{C}:=\left\{ \mathbf{M}\left(\mathbf{b},\mathbf{w}\right):\left(\mathbf{b},\mathbf{w}\right)\in\mathcal{B}^{N}\times\mathcal{W}^{N}\right\} $
be a
random binning codebook where $\mathbf{M}\left(\mathbf{b},\mathbf{w}\right)\sim\mathrm{Unif}\left[1:e^{NR}\right]$
are generated independently. Let $\mathcal{C}_{k},k=1,2,...$ be independent
copies of $\mathcal{C}$. The codebook $\mathcal{C}_{k}$ will be
used to generate a nearly uniform r.v. from the previous block of
communication bits $\mathbf{B}_{k-1}$ and the common randomness $\mathbf{W}_{k-1}$.
Let $\hat{\mathcal{C}}:=\left\{ \mathbf{U}\left(m\right):m\in\left[1:e^{NR}\right]\right\} $
be another
random codebook, where $\mathbf{U}\left(i\right)\sim\widetilde{Q}_{\mathbf{U}}$
are generated independently with $\widetilde{Q}_{\mathbf{U}}$ denoting
the following truncated product distribution:
\[
\widetilde{Q}_{\mathbf{U}}=\frac{Q_{U}^{N}1_{\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)}}{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)\right)}.
\]
(Here $\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)$ denotes the set of $\epsilon$-typical sequences of length $N$
with respect to the marginal distribution $Q_{U}$.)
Let $\hat{\mathcal{C}}_{k},k=1,2,...$ be independent copies of $\hat{\mathcal{C}}$.
The codebook $\hat{\mathcal{C}}_{k}$ will be used to generate a nearly
i.i.d. r.v. from the output of $\mathcal{C}_{k}$.
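For concreteness, the two random codebook constructions can be mimicked numerically. The following Python sketch is a toy illustration; the block length, rate, alphabet sizes, distribution $Q_{U}$, and typicality parameter are all assumptions, and length-$N$ sequences are identified with integers for simplicity. It generates one copy of $\mathcal{C}$ and one copy of $\hat{\mathcal{C}}$, the latter via rejection sampling from the truncated product distribution $\widetilde{Q}_{\mathbf{U}}$.
\begin{verbatim}
# A toy sketch (illustrative assumptions throughout) of the two random
# codebooks: the binning codebook C and the codebook hat-C drawn from the
# truncated typical-set distribution tilde-Q_U.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, R = 6, 0.5                          # block length, binning rate (nats/symbol)
B_alpha, W_alpha, U_alpha = 2, 2, 2    # |B|, |W|, |U|
M_size = int(np.ceil(np.exp(N * R)))   # size of [1 : e^{NR}]
Q_U = np.array([0.3, 0.7])
eps = 0.2

# Binning codebook C: M(b, w) ~ Unif[1 : M_size], independently over (b, w);
# sequences are identified with integers in [0, |B|^N) and [0, |W|^N).
C = {bw: int(rng.integers(1, M_size + 1))
     for bw in itertools.product(range(B_alpha**N), range(W_alpha**N))}

def is_typical(u):
    """u in T_eps^{(N)}(Q_U): empirical frequencies within (1 +/- eps) Q_U."""
    freq = np.bincount(u, minlength=U_alpha) / N
    return bool(np.all(np.abs(freq - Q_U) <= eps * Q_U))

def sample_typical_u():
    """Rejection sampler for the truncated product distribution tilde-Q_U."""
    while True:
        u = rng.choice(U_alpha, size=N, p=Q_U)
        if is_typical(u):
            return tuple(u)

# Codebook hat-C: U(m) ~ tilde-Q_U, independently across m.
C_hat = {m: sample_typical_u() for m in range(1, M_size + 1)}
\end{verbatim}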
The codebook sequences $\left\{ \mathcal{C}_{k}\right\} ,\left\{ \hat{\mathcal{C}}_{k}\right\} $
are shared by both terminals.
We choose the rate $R$ in these two sequences of codebooks
such that
\[
I_{Q}\left(U;XY\right)<R<H(W)+H_{Q}\left(B|XYU\right).
\]
It can be checked that this is feasible because
$\left(Q_{U},Q_{B|XU},Q_{Y|BU}\right)$
satisfies the constraint in the expression on the RHS of
\eqref{eq:-18-1-1}.
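For small alphabets, this feasibility check can be carried out numerically. The following Python sketch (toy values; the tuple $\left(Q_{U},Q_{B|XU},Q_{Y|BU}\right)$, the source $\pi_{X}$, and $H(W)$ below are assumptions chosen purely for illustration) computes the two sides of the rate window and reports whether it is nonempty.
\begin{verbatim}
# A small sketch (assumed toy distributions) checking that the rate window
#   I_Q(U;XY) < R < H(W) + H_Q(B|XYU)
# is nonempty for a candidate tuple (Q_U, Q_{B|XU}, Q_{Y|BU}).
import numpy as np

def H(p):
    """Shannon entropy (nats) of a probability vector, ignoring zeros."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
nU, nX, nB, nY = 2, 2, 2, 2
pi_X = np.array([0.5, 0.5])
Q_U = np.array([0.5, 0.5])
H_W = np.log(2)                                      # one fair bit per symbol
Q_BgXU = rng.dirichlet(np.ones(nB), size=(nX, nU))   # Q_{B|X=x, U=u}
Q_YgBU = rng.dirichlet(np.ones(nY), size=(nB, nU))   # Q_{Y|B=b, U=u}

# Joint Q(u,x,b,y) = Q_U(u) pi_X(x) Q_{B|XU}(b|x,u) Q_{Y|BU}(y|b,u).
J = np.einsum('u,x,xub,buy->uxby', Q_U, pi_X, Q_BgXU, Q_YgBU)

P_U, P_XY, P_UXY = J.sum(axis=(1, 2, 3)), J.sum(axis=(0, 2)), J.sum(axis=2)
I_U_XY = H(P_U) + H(P_XY.ravel()) - H(P_UXY.ravel())
H_BgXYU = H(J.ravel()) - H(P_UXY.ravel())   # H(B|XYU) = H(UXBY) - H(UXY)

lo, hi = I_U_XY, H_W + H_BgXYU
print(f"I(U;XY) = {lo:.4f},  H(W)+H(B|XYU) = {hi:.4f},  nonempty: {lo < hi}")
\end{verbatim}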
We now describe our scheme in detail. Consider the following sequence
of block codes with each block consisting of $N$ symbols.
For the
first block (from epoch $1$ to epoch $N$),
the encoder
sends a sequence of i.i.d. uniform r.v.'s
$B_t \sim \mathrm{Unif}(\mathcal{B})$ to the decoder, where $\mathbf{B}_1$ is independent of $\mathbf{X}_1$.
The decoder generates
$\mathbf{Y}_1$ with a fixed distribution $\hat{Q}_{Y}^{N}$ where $\hat{Q}_{Y}$ is
an optimal distribution attaining $\Delta:=\min_{Q_{Y}}D\left(Q_{Y}\|\pi_{Y|X}|\pi_{X}\right)$. Note that $\Delta$ is finite by assumption. Furthermore, $\mathbf{M}_1,\mathbf{U}_1$ are set to be constant. Obviously, $\mathbf{B}_1, \mathbf{X}_1, \mathbf{Y}_1$ are independent of $\mathcal{C}_1,\hat{\mathcal{C}}_1$.
For the
$k$-th block
(from epoch $\left(k-1\right)N+1$ to epoch $kN$)
with $k\ge 2$, the encoder and decoder adopt the following strategy.
First the encoder and decoder extract common randomness $\mathbf{M}_{k}$
from the previous block of communication bits $\mathbf{B}_{k-1}$
and common randomness $\mathbf{W}_{k-1}$, by using random binning
based on the codebook $\mathcal{C}_{k}$. That is, the encoder and
decoder generate $\mathbf{M}_{k}=\mathbf{M}\left(\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$,
where $\mathbf{M}\left(\mathbf{b},\mathbf{w}\right)$ is the codeword
indexed by $\left(\mathbf{b},\mathbf{w}\right)$ in $\mathcal{C}_{k}$.
Next, the encoder and decoder generate $\mathbf{U}_{k}=\mathbf{U}\left(\mathbf{M}_{k}\right)$
based on the codebook $\hat{\mathcal{C}}_{k}$, where $\mathbf{U}\left(m\right)$
is the codeword indexed by $m$ in $\hat{\mathcal{C}}_{k}$. Then
by using $\left(\mathbf{X}_{k},\mathbf{U}_{k}\right)$, the encoder
generates $\mathbf{B}_{k}$ according to the product conditional distribution
$Q_{B|XU}^{N}$. In fact, the random binning code in the encoder forms
a privacy amplification code with $\left(\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$
as public sources and $\left(\mathbf{X}_{k-1},\mathbf{Y}_{k-1},\mathbf{U}_{k-1}\right)$
as private sources. (The target in privacy amplification is to
maximize the alphabet size of the output r.v. $\mathbf{M}_{k}$, generated
from the public sources, under the condition that $\mathbf{M}_{k}$
is nearly uniform and nearly independent of the private sources.)
At the decoder side, upon observing $\left(\mathbf{B}_{k},\mathbf{U}_{k}\right)$
the decoder generates $\mathbf{Y}_{k}$ according to the product conditional
distribution $Q_{Y|BU}^{N}$. Note that this corresponds to the channel
resolvability problem for the channel $Q_{XY|U}$ with $\mathbf{M}_{k}$
considered as the input. (The target in a channel resolvability
problem is to synthesize a target output distribution of a channel
over a block by inputting an input block that is a function of a
uniform r.v., usually one with the least alphabet size.)
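The per-block operations can be summarized by the following self-contained toy simulation (Python; all sizes and distributions are illustrative assumptions, and, as a simplification relative to the scheme above, the codewords $\mathbf{U}\left(m\right)$ are drawn i.i.d.\ uniformly rather than from $\widetilde{Q}_{\mathbf{U}}$).
\begin{verbatim}
# A self-contained toy run (illustrative assumptions throughout) of one block:
# extract M_k by random binning, set U_k = U(M_k), then sample B_k ~ Q_{B|XU}^N
# at the encoder and Y_k ~ Q_{Y|BU}^N at the decoder.
import numpy as np

rng = np.random.default_rng(3)
N, M_size = 4, 8                          # block length and |[1 : e^{NR}]| (toy)
nX = nB = nW = nU = nY = 2
pi_X = np.array([0.5, 0.5])
Q_BgXU = rng.dirichlet(np.ones(nB), size=(nX, nU))   # assumed Q_{B|XU}
Q_YgBU = rng.dirichlet(np.ones(nY), size=(nB, nU))   # assumed Q_{Y|BU}

def seq_index(seq, base):                 # identify a sequence with an integer
    return int(sum(int(s) * base**i for i, s in enumerate(seq)))

# Shared codebooks for block k (simplified versions of C_k and hat-C_k above).
C_k = rng.integers(0, M_size, size=(nB**N) * (nW**N))   # bin of each (b, w)
Chat_k = rng.integers(0, nU, size=(M_size, N))          # U(m), i.i.d. uniform

# Inputs to block k: previous bits, previous common randomness, fresh source.
B_prev = rng.integers(0, nB, size=N)
W_prev = rng.integers(0, nW, size=N)
X_k = rng.choice(nX, size=N, p=pi_X)

# Step 1: both terminals extract M_k = M(B_{k-1}, W_{k-1}) by random binning.
M_k = int(C_k[seq_index(B_prev, nB) * (nW**N) + seq_index(W_prev, nW)])
# Step 2: both terminals set U_k = U(M_k).
U_k = Chat_k[M_k]
# Step 3: the encoder samples B_k ~ Q_{B|XU}^N and sends it over the bit pipe.
B_k = np.array([rng.choice(nB, p=Q_BgXU[x, u]) for x, u in zip(X_k, U_k)])
# Step 4: the decoder samples Y_k ~ Q_{Y|BU}^N from the received bits.
Y_k = np.array([rng.choice(nY, p=Q_YgBU[b, u]) for b, u in zip(B_k, U_k)])
print("M_k =", M_k, " U_k =", U_k, " B_k =", B_k, " Y_k =", Y_k)
\end{verbatim}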
The distribution for the first $K$
blocks in this code can be expressed as
\begin{align}
P_{\mathcal{C}^{K}\hat{\mathcal{C}}^{K}\mathbf{W}^{K}\mathbf{M}^{K}\mathbf{U}^{K}\mathbf{X}^{K}\mathbf{B}^{K}\mathbf{Y}^{K}}=P_{\mathcal{C}}^{K}P_{\hat{\mathcal{C}}}^{K}P_{W}^{KN}\pi_{X}^{KN} (P_{\mathbf{M}_{1}}P_{\mathbf{U}_{1}}P_{\mathbf{B}_{1}}\hat{Q}_{Y}^{N}) \prod_{k=2}^{K} (P_{\mathbf{M}_{k}|\mathbf{B}_{k-1}\mathbf{W}_{k-1}\mathcal{C}_{k}}P_{\mathbf{U}_{k}|\mathbf{M}_{k}\hat{\mathcal{C}}_{k}}Q_{B|XU}^{N}Q_{Y|BU}^{N}),\label{eq:dist}
\end{align}
where $P_{\mathbf{M}_{1}},P_{\mathbf{U}_{1}}$
are some Dirac measures,
$P_{\mathbf{B}_{1}}$ is as described above,
$P_{\mathbf{M}_{k}|\mathbf{B}_{k-1}\mathbf{W}_{k-1}\mathcal{C}_{k}}$
corresponds to the deterministic function $\mathbf{M}_{k}=\mathbf{M}\left(\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$
with $\mathbf{M}\left(\mathbf{b},\mathbf{w}\right)$ denoting the
codeword indexed by $\left(\mathbf{b},\mathbf{w}\right)$ in $\mathcal{C}_{k}$,
and $P_{\mathbf{U}_{k}|\mathbf{M}_{k}\hat{\mathcal{C}}_{k}}$ corresponds
to the deterministic function $\mathbf{U}_{k}=\mathbf{U}\left(\mathbf{M}_{k}\right)$
with $\mathbf{U}\left(m\right)$ denoting the codeword indexed by
$m$ in $\hat{\mathcal{C}}_{k}$.
Although the code above is random (since the codebooks are random),
we next show that for this random code,
\[
\frac{1}{KN}D\left(P_{\mathbf{Y}^{K}|\mathbf{X}^{K}\mathcal{C}^{K}\hat{\mathcal{C}}^{K}}\|\pi_{Y|X}^{KN}|\pi_{X}^{KN}P_{\mathcal{C}}^{K}P_{\hat{\mathcal{C}}}^{K}\right)\to D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)
\]
as $K\to\infty$ and $N \to \infty$ along an appropriately chosen sequence,
which implies that there is a sequence of deterministic
codebooks $({c}^{K},\hat{{c}}^{K})$ satisfying
\[
\frac{1}{KN}D\left(P_{\mathbf{Y}^{K}|\mathbf{X}^{K},\mathcal{C}^{K}={c}^{K},\hat{\mathcal{C}}^{K}=\hat{{c}}^{K}}\|\pi_{Y|X}^{KN}|\pi_{X}^{KN}\right)\to
D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right),
\]
as $K \to \infty$ and $N \to \infty$
along the same sequence.
For the random code above, we have the following lemma.
\begin{lem}
\label{lem:For-this-code,} For the random code above,
\begin{align}
D\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}}P_{\mathcal{C}_{k}}\right) & \to0\label{eq:D_M}\\
D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right) & \to0\label{eq:D_Y}
\end{align}
uniformly for all
$k\ge 2$
as $N\to\infty$.
\end{lem}
The proof of this lemma is given in Appendix \ref{sec:Proof-of-Lemma}.
For the first $K$ blocks induced by the code above we have
\begin{align}
& D\left(P_{\mathbf{Y}^{K}|\mathbf{X}^{K}\mathcal{C}^{K}\hat{\mathcal{C}}^{K}}\|\pi_{Y|X}^{KN}|\pi_{X}^{KN}P_{\mathcal{C}}^{K}P_{\hat{\mathcal{C}}}^{K}\right)\nonumber \\
& =\sum_{\mathbf{x}^{K},\mathbf{y}^{K},c^{K},\hat{c}^{K}}P(c^{K})P(\hat{c}^{K})\pi(\mathbf{x}^{K})P(\mathbf{y}^{K}|\mathbf{x}^{K},c^{K},\hat{c}^{K})\log\frac{\prod_{k=1}^{K}P(\mathbf{y}_{k}|\mathbf{x}^{k},\mathbf{y}^{k-1},c^{k},\hat{c}^{k})}{\pi(\mathbf{y}^{K}|\mathbf{x}^{K})}\label{eq:-27}\\
& =\sum_{k=1}^{K}\sum_{\mathbf{x}^{k},\mathbf{y}^{k},c^{k},\hat{c}^{k}}P(c^{k})P(\hat{c}^{k})\pi(\mathbf{x}^{k})
P(\mathbf{y}^{k}|\mathbf{x}^{k},c^{k},\hat{c}^{k})
\left(\log\frac{P(\mathbf{y}_{k}|\mathbf{x}^{k},\mathbf{y}^{k-1},c^{k},\hat{c}^{k})}{P(\mathbf{y}_{k}|\mathbf{x}_{k},c^{k},\hat{c}^{k})}+\log\frac{P(\mathbf{y}_{k}|\mathbf{x}_{k},c^{k},\hat{c}^{k})}{\pi(\mathbf{y}_{k}|\mathbf{x}_{k})}\right)\\
& =\sum_{k=1}^{K}I\left(\mathbf{Y}_{k};\mathbf{X}^{k-1}\mathbf{Y}^{k-1}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}\right)+\sum_{k=1}^{K}D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|\pi_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right),\label{eq:-19}
\end{align}
where \eqref{eq:-27} follows since in our scheme, $\mathbf{Y}_{k}\leftrightarrow(\mathbf{X}^{k},\mathbf{Y}^{k-1},\mathcal{C}^{k},\hat{\mathcal{C}}^{k})\leftrightarrow(\mathbf{X}_{k+1}^{K},\mathcal{C}_{k+1}^{K},\hat{\mathcal{C}}_{k+1}^{K})$
under the distribution $P$ (which can be easily seen from the expression
of the joint distribution in \eqref{eq:dist}).
We first consider the first term in \eqref{eq:-19} for $k \ge 2$. From the expression
of the joint distribution in \eqref{eq:dist}, we have that for the
considered code, $\left(\mathbf{X}^{k-1},\mathbf{Y}^{k-1}\right)\leftrightarrow(\mathbf{M}_{k},\mathcal{C}^{k},\hat{\mathcal{C}}^{k})\leftrightarrow\left(\mathbf{X}_{k},\mathbf{Y}_{k}\right)$
forms a Markov chain, and so does $\left(\mathbf{X}^{k-2},\mathbf{Y}^{k-2},\mathcal{C}^{k-1},\hat{\mathcal{C}}^{k}\right)\leftrightarrow(\mathbf{U}_{k-1},\mathcal{C}_{k})\leftrightarrow\left(\mathbf{X}_{k-1},\mathbf{Y}_{k-1},\mathbf{M}_{k}\right)$.
More specifically, the second Markov chain follows since
\begin{align}
P_{\mathbf{W}_{k-1}\mathbf{B}_{k-1}\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{M}_{k}|\mathbf{U}_{k-1}\mathbf{X}^{k-2}\mathbf{Y}^{k-2}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}=P_{W}^{N}\pi_{X}^{N}Q_{B|XU}^{N}Q_{Y|BU}^{N}P_{\mathbf{M}_{k}|\mathbf{B}_{k-1}\mathbf{W}_{k-1}\mathcal{C}_{k}}.\label{eq:dist2}
\end{align}
Hence,
\begin{align}
I\left(\mathbf{Y}_{k};\mathbf{X}^{k-1}\mathbf{Y}^{k-1}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}\right) & \leq I\left(\mathbf{X}_{k}\mathbf{Y}_{k};\mathbf{X}^{k-1}\mathbf{Y}^{k-1}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k}\right)\nonumber \\
& \leq I\left(\mathbf{M}_{k};\mathbf{X}^{k-1}\mathbf{Y}^{k-1}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k}\right)\nonumber \\
& \leq I\left(\mathbf{M}_{k};\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}|\mathcal{C}_{k}\right).\label{eq:}
\end{align}
We have that
\begin{align}
& D\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}}P_{\mathcal{C}_{k}}\right)\nonumber \\
& =I\left(\mathbf{M}_{k};\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}|\mathcal{C}_{k}\right)+D\left(P_{\mathbf{M}_{k}|\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathcal{C}_{k}}\right)\\
& \geq I\left(\mathbf{M}_{k};\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}|\mathcal{C}_{k}\right),\label{eq:-1}
\end{align}
where the equality follows since $\log \frac{P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}}{\mathrm{Unif}\left[1:e^{NR}\right]}=\log \frac{P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}}{P_{\mathbf{M}_{k}| \mathcal{C}_{k}}}+\log \frac{P_{\mathbf{M}_{k}| \mathcal{C}_{k}}}{\mathrm{Unif}\left[1:e^{NR}\right]}$.
Hence combining \eqref{eq:D_M}, \eqref{eq:}, and \eqref{eq:-1}, for $k \ge 2$,
we have $I\left(\mathbf{Y}_{k};\mathbf{X}^{k-1}\mathbf{Y}^{k-1}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}\right)\to0$.
We next consider the second term in \eqref{eq:-19} for $k \ge 2$.
\begin{align*}
D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|\pi_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right) & =D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\\
& \qquad+\sum_{\mathbf{x}_{k},\mathbf{y}_{k},c_{k},\hat{c}_{k}}P(c_{k},\hat{c}_{k})\pi(\mathbf{x}_{k})P(\mathbf{y}_{k}|\mathbf{x}_{k},c^{k},\hat{c}^{k})\log\frac{Q(\mathbf{y}_{k}|\mathbf{x}_{k})}{\pi(\mathbf{y}_{k}|\mathbf{x}_{k})}.
\end{align*}
By \eqref{eq:D_Y}, $D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\to0$.
Denote $J\sim\mathrm{Unif}\left[1:N\right]$ as a random time index, which is independent of all other
r.v.'s involved in the system; here $\left(X_{J},Y_{J}\right)$ denotes the $J$-th symbol pair within block $k$. Observe that $\pi_{X}P_{Y_{J}|X_{J}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}$
and $\pi_{X}Q_{Y|X}$ are respectively the output distributions of
the channel $\left(\mathbf{X}_{k},\mathbf{Y}_{k}\right)\mapsto\left(X_{J},Y_{J}\right)$
with input distributions $\pi_{X}^{N}P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}$
and $\pi_{X}^{N}Q_{Y|X}^{N}$. Hence by the data processing inequality
concerning relative entropy, we have for $k \ge 2$,
\[
D\left(P_{Y_{J}|X_{J}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}|\pi_{X}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\leq D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\to0.
\]
By Pinsker's inequality, this further implies that $P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\pi_{X}P_{Y_{J}|X_{J}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}$
converges to $P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\pi_{X}Q_{Y|X}$
under the total variation distance, which further implies that
\begin{align*}
& \left|\frac{1}{N}\sum_{\mathbf{x}_{k},\mathbf{y}_{k},c_{k},\hat{c}_{k}}P(c_{k},\hat{c}_{k})\pi(\mathbf{x}_{k})P(\mathbf{y}_{k}|\mathbf{x}_{k},c_{k},\hat{c}_{k})\log\frac{Q(\mathbf{y}_{k}|\mathbf{x}_{k})}{\pi(\mathbf{y}_{k}|\mathbf{x}_{k})}-D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)\right|\\
& =\left|\sum_{x,y,c_{k},\hat{c}_{k}}P(c_{k},\hat{c}_{k})\pi_{X}(x)\left(P_{Y_{J}|X_{J}\mathcal{C}_{k}\hat{\mathcal{C}}_{k}}(y|x,c_{k},\hat{c}_{k})-Q_{Y|X}(y|x)\right)\log\frac{Q_{Y|X}(y|x)}{\pi_{Y|X}(y|x)}\right|\\
& \leq\sum_{x,y,c_{k},\hat{c}_{k}}P(c_{k},\hat{c}_{k})\pi_{X}(x)\left|P_{Y_{J}|X_{J}\mathcal{C}_{k}\hat{\mathcal{C}}_{k}}(y|x,c_{k},\hat{c}_{k})-Q_{Y|X}(y|x)\right|\left|\log\frac{Q_{Y|X}(y|x)}{\pi_{Y|X}(y|x)}\right|\\
& \leq\left|P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\pi_{X}P_{Y_{J}|X_{J}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}-P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\pi_{X}Q_{Y|X}\right|_{\mathrm{TV}}\times\max_{x,y}\left|\log\frac{Q_{Y|X}(y|x)}{\pi_{Y|X}(y|x)}\right|\\
& \to0.
\end{align*}
In the last inequality above, the max term is finite since $D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)$
is finite and $\pi_{X}$ is fully supported. Hence, for $k \ge 2$, $\frac{1}{N}D\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|\pi_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\to D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)$.
Hence, combining the two points above, and noting that the $k=1$ summands in both summations in \eqref{eq:-19} are finite, we have that $\frac{1}{KN}D\left(P_{\mathbf{Y}^{K}|\mathbf{X}^{K}\mathcal{C}^{K}\hat{\mathcal{C}}^{K}}\|\pi_{Y|X}^{KN}|\pi_{X}^{KN}P_{\mathcal{C}}^{K}P_{\hat{\mathcal{C}}}^{K}\right)\to D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right)$
as $K\to\infty$ and $N\to\infty$ along an appropriately chosen sequence, which implies that there is a sequence of deterministic
codebooks $({c}^{K},\hat{{c}}^{K})$ satisfying
\begin{equation}
\frac{1}{KN}D\left(P_{\mathbf{Y}^{K}|\mathbf{X}^{K},\mathcal{C}^{K}={c}^{K},\hat{\mathcal{C}}^{K}=\hat{{c}}^{K}}\|\pi_{Y|X}^{KN}|\pi_{X}^{KN}\right)\to D\left(Q_{Y|X}\|\pi_{Y|X}|\pi_{X}\right).\label{eq:-14}
\end{equation}
Hence, $\Gamma\left(\pi_{XY},P_{W}\right)\leq\overline{\Delta}\left(\pi_{XY},P_{W}\right)$.
We next proceed to prove $\Gamma\left(\pi_{XY},P_{W}\right)\leq\Delta\left(\pi_{XY},P_{W}\right)$.
Let $(P_{UV},P_{B|XUV},P_{Y|BUV})$ be a joint distribution such that $H\left(U|V\right) < H\left(BU|XYV\right)+ H(W)$.
Denote $ (Q_{U^m},Q_{B^m|X^m U^m},Q_{Y^m|B^m U^m}):=(P_{U|V}^m(\cdot|v^m),P_{B|XUV}^m(\cdot|\cdot, v^m),P_{Y|BUV}^m(\cdot|\cdot, v^m)) $ for some $v^m\in \mathcal{T}^{(m)}_{\epsilon'}(P_V),\; \epsilon'>0$.
We then extend the code above to the $m$-letter version by substituting
\begin{align}
(P_W, \pi_X,Q_{U},Q_{B|XU},Q_{Y|BU}) & \leftarrow (P_W^m, \pi_X ^m, Q_{U^m},Q_{B^m|X^m U^m},Q_{Y^m|B^m U^m})
\end{align}
into the code above.
In this $m$-letter code, the basic unit is the supersymbol, which consists of $m$ successive original letters. Even so, by definition, the random mappings $Q_{B^m|X^m U^m},Q_{Y^m|B^m U^m}$ still operate in a symbol-by-symbol manner, which means that the $m$-letter code
is also a feasible code for the single-letter scenario. In other words, the encoder and decoder of this $m$-letter code are still a special case of $(P_{B_{k}|W^{k}X^{k}B^{k-1}},P_{Y_{k}|W^{k}B^{k}Y^{k-1}})$.
Hence,
\begin{equation}
\Gamma\left(\pi_{XY},P_{W}\right)\leq \frac{1}{m} D\left(Q_{Y^m|X^m}\|\pi_{Y|X}^m|\pi_{X}^m\right)
\end{equation}
as long as $H_Q\left(U^m\right) < H_Q\left(B^m U^m|X^m Y^m \right)+H_Q\left(W^m\right)$. We now claim that, for sufficiently large $m$, this condition is in fact equivalent to $H\left(U|V\right) < H\left(BU|XYV\right)+ H(W)$, and moreover, $\frac{1}{m} D\left(Q_{Y^m|X^m}\|\pi_{Y|X}^m|\pi_{X}^m\right)=D\left(P_{Y|XV}\|\pi_{Y|X}|\pi_{X}P_V\right)+o(1)$. We next prove this claim.
By the conditional typicality lemma \cite{Gamal}, we have that with high probability,
\[
(W^m, X^m,U^m,B^m,Y^m) \sim P_W^m \pi_{X}^m P_{U|V}^m(\cdot|v^m)P_{B|XUV}^m(\cdot|\cdot, v^m)P_{Y|BUV}^m(\cdot|\cdot, v^m)
\]
is jointly $\epsilon$-typical with $v^m$ (with respect to the distribution $P_W \pi_X P_V P_{U|V} P_{B|XUV} P_{Y|BUV}$) for some $\epsilon>\epsilon'$ and sufficiently large $m$. Hence, $\frac{1}{m}H(U^m|V^m=v^m) = H(U|V) + o(1),\frac{1}{m}H(B^m U^m|X^m,Y^m,V^m=v^m) = H(
BU|XYV) + o(1), \frac{1}{m}H(W^m|V^m=v^m)=H(W)$, and $\frac{1}{m}D\left(P_{Y|XV}^m(\cdot|\cdot,v^m)\|\pi_{Y|X}^m|\pi_{X}^m\right)=D\left(P_{Y|XV}\|\pi_{Y|X}|\pi_{X}P_V\right)+o(1)$, where $o(1)$ denotes a generic term vanishing as $m\to \infty$.
This implies the claim above.
By the claim above, we have $\Gamma\left(\pi_{XY},P_{W}\right) \le \lim_{t\uparrow H\left(W\right)} \Psi(t)$.
Since $H(W) \neq t_{\min}$, $\Psi(t)$ is continuous at $t=H(W)$. We have $\Gamma\left(\pi_{XY},P_{W}\right)\leq \Psi(H\left(W\right))=\Delta\left(\pi_{XY},P_{W}\right)$.
This completes the proof of the achievability part.
\subsection{Converse}
We next consider the converse part. Observe that
\begin{align*}
D\left(P_{Y^{n}|X^{n}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right) & =D\left(\prod_{k=1}^{n}P_{Y_{k}|X^{k}Y^{k-1}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right)\\
& =\sum_{k=1}^{n}D\left(P_{Y_{k}|X^{k}Y^{k-1}}\|\pi_{Y|X}|\pi_{X}^{k}P_{Y^{k-1}|X^{k}}\right).
\end{align*}
Denote $K\sim\mathrm{Unif}\left[1:n\right]$ as a random time index,
which is independent of all other r.v.'s involved in the system. Define
$U:=\left(B^{K-1},W^{K}\right),V:=\left(X^{K-1},Y^{K-1},K\right),B:=B_{K},X:=X_{K},Y:=Y_{K}$.
Then
\begin{align}
\frac{1}{n}D\left(P_{Y^{n}|X^{n}}\|\pi_{Y|X}^{n}|\pi_{X}^{n}\right) & =D\left(P_{Y|XV}\|\pi_{Y|X}|\pi_{X}P_{V}\right).\label{eq:-8}
\end{align}
It is easy to verify that
\begin{align*}
P_{UVBXY}(u,v,b,x,y) & =P_{W}^{k}(w^{k})P_{K}(k)P_{X^{k-1}Y^{k-1}|W^{k}}\left(x^{k-1},y^{k-1}|w^{k}\right)P_{B^{k-1}|X^{k-1}Y^{k-1}W^{k}}\left(b^{k-1}|x^{k-1},y^{k-1},w^{k}\right)\\
& \qquad\times\pi_{X}(x)P_{B_{k}|X^{k}B^{k-1}W^{k}}\left(b_{k}|x^{k},b^{k-1},w^{k}\right)P_{Y_{k}|B^{k}Y^{k-1}W^{k}}\left(y_{k}|b^{k},y^{k-1},w^{k}\right)\\
& =P_{UV}\left(u,v\right)\pi_{X}(x)P_{B|XUV}\left(b|x,u,v\right)P_{Y|BUV}\left(y|b,u,v\right).
\end{align*}
Hence it remains to show $H\left(U|V\right)\leq H\left(BU|XYV\right)+H\left(W\right)$.
This can be easily verified as follows:
\begin{align}
& H\left(BU|XYV\right)-H\left(U|V\right)\nonumber \\
& =\frac{1}{n}\sum_{k=1}^{n}\left\{ H\left(B^{k}W^{k}|X^{k}Y^{k}\right)-H\left(B^{k-1}W^{k}|X^{k-1}Y^{k-1}\right)\right\} \label{eq:-38}\\
& =\frac{1}{n}\sum_{k=1}^{n}\left\{ H\left(B^{k}W^{k}|X^{k}Y^{k}\right)-H\left(B^{k-1}W^{k-1}|X^{k-1}Y^{k-1}\right)-H\left(W_{k}|X^{k-1}Y^{k-1}B^{k-1}W^{k-1}\right)\right\} \nonumber \\
& =\frac{1}{n}H\left(B^{n}W^{n}|X^{n}Y^{n}\right)-H\left(W\right)\label{eq:-25}\\
& \geq-H\left(W\right),\label{eq:-39}
\end{align}
where \eqref{eq:-25} follows since $W_{k}$ is independent of $X^{k-1}Y^{k-1}B^{k-1}W^{k-1}$
and has entropy $H\left(W\right)$.
\subsection{\label{sec:Proof-of-Lemma}Proof of Lemma \ref{lem:For-this-code,}}
We now prove Lemma \ref{lem:For-this-code,} by using a R\'enyi entropy
method. Recall that the rate $R$ is chosen such that
\begin{equation}
I_{Q}\left(U;XY\right)<R<H(W)+H_{Q}\left(B|XYU\right).\label{eq:-31}
\end{equation}
The condition can be relaxed to
\begin{align}
& \left(1+\epsilon\right)D_{1+s}\left(Q_{XY|U}\|Q_{XY}|Q_{U}\right)\label{eq:-34}\\
& <R \nonumber\\
& <\left(1-\epsilon\right)\sum_{u}Q_{U}\left(u\right)H_{1+s}\left(B|XY,U=u\right)+H_{1+s}(W)\label{eq:-33}
\end{align}
for some $\epsilon,s>0$. This relaxation is justified since both the
expressions in \eqref{eq:-34} and \eqref{eq:-33} are continuous in
$\epsilon$ and $s$, converge to the respective sides of \eqref{eq:-31}
as $\epsilon,s\downarrow0$ (the R\'enyi quantities of order $1+s$ reduce
to their Shannon counterparts as $s\downarrow0$), and we have
$H_{Q}\left(B|XYU\right) =
\sum_{u}Q_{U}\left(u\right)H\left(B|XY,U=u\right)$.
We first prove that if
the upper bound on $R$ given by
\eqref{eq:-33} holds, then we have \eqref{eq:D_M}.
To show this, we need the following lemma on one-shot privacy
amplification.
\begin{lem}
\cite[Equation (29)]{Hayashi11}
\label{lem:oneshotach-1} Consider a random mapping
$f_{\mathcal{C}}:\mathcal{X}\rightarrow\mathcal{M}:=\{1,\ldots,e^{R}\}$. We set
$\mathcal{C}=\left\{ M\left(x\right)\right\} _{x\in\mathcal{X}}$ with $M\left(x\right),x\in\mathcal{X}$
drawn independently for different $x$'s and according to the uniform
distribution $\mathrm{Unif}\left[1:e^{R}\right]$, and set $f_{\mathcal{C}}\left(x\right)=M\left(x\right)$.
This forms a random binning code. For this random code, we have for
$s\in(0,1]$ and any distribution $P_{XY}$,
\begin{align}
& e^{sD_{1+s}(P_{f_{\mathcal{C}}\left(X\right)|Y\mathcal{C}}\|\mathrm{Unif}\left[1:e^{R}\right]|P_{Y}P_{\mathcal{C}})}\nonumber \\
& \leq1+e^{-s\left(H_{1+s}\left(X|Y\right)-R\right)}.\label{eq:-123-1}
\end{align}
\end{lem}
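Although not needed for the proof, the bound \eqref{eq:-123-1} is easy to check by simulation. The following Python sketch (toy sizes; we take $s=1$, and the R\'enyi conventions spelled out in the comments are assumptions matching one common definition) estimates the LHS of \eqref{eq:-123-1} by Monte Carlo over random binning codebooks and compares it with the RHS.
\begin{verbatim}
# A Monte Carlo sanity check (toy sizes, s = 1) of the one-shot
# privacy-amplification bound above. Assumed conventions:
#   H_{1+s}(X|Y) = -(1/s) log sum_y P(y) sum_x P(x|y)^{1+s},
#   exp(s D_{1+s}(P_{M|Y,C} || Unif | P_Y P_C))
#     = E_C sum_y P(y) sum_m P(m|y,C)^{1+s} M^s.
import numpy as np

rng = np.random.default_rng(4)
nX, nY, M_size, s = 8, 2, 4, 1.0
R = np.log(M_size)

P_XY = rng.dirichlet(np.ones(nX * nY)).reshape(nX, nY)   # assumed source
P_Y = P_XY.sum(axis=0)
P_XgY = P_XY / P_Y                                       # columns: P_{X|Y=y}

H_cond = -np.log((P_Y * (P_XgY ** (1 + s)).sum(axis=0)).sum()) / s
rhs = 1.0 + np.exp(-s * (H_cond - R))

acc, trials = 0.0, 5000
for _ in range(trials):
    bins = rng.integers(0, M_size, size=nX)              # random binning f_C
    P_MgY = np.zeros((M_size, nY))
    for x in range(nX):
        P_MgY[bins[x]] += P_XgY[x]
    acc += (P_Y * (P_MgY ** (1 + s)).sum(axis=0)).sum() * M_size ** s
print(f"E-term = {acc/trials:.4f}  <=  bound = {rhs:.4f}: {acc/trials <= rhs}")
\end{verbatim}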
Note that the codebook in Lemma \ref{lem:oneshotach-1} is generated in the same way
as the codebook $\mathcal{C}_{k}$ in our scheme. By applying the
lemma above with substitution $X\leftarrow(\mathbf{B}_{k-1},\mathbf{W}_{k-1}),Y\leftarrow(\mathbf{X}_{k-1},\mathbf{Y}_{k-1},\mathbf{U}_{k-1})$,
we have
\begin{align}
& D_{1+s}\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}}P_{\mathcal{C}_{k}}\right)\nonumber \\
& \leq\frac{1}{s}\log\left[1+e^{-s\left(H_{1+s}\left(\mathbf{B}_{k-1}\mathbf{W}_{k-1}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\right)-NR\right)}\right]\nonumber \\
& \leq\frac{1}{s}e^{-s\left(H_{1+s}\left(\mathbf{B}_{k-1}\mathbf{W}_{k-1}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\right)-NR\right)}.\label{eq:-36}
\end{align}
Note that $\mathbf{W}_{k-1}$ is in fact independent of $(\mathbf{B}_{k-1},\mathbf{X}_{k-1},\mathbf{Y}_{k-1},\mathbf{U}_{k-1})$,
since in the first $k-1$ blocks, only $\mathbf{W}_{1},\mathbf{W}_{2},...,\mathbf{W}_{k-2}$
are used in the encoding process. Hence,
\begin{align}
H_{1+s}\left(\mathbf{B}_{k-1}\mathbf{W}_{k-1}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\right)=H_{1+s}\left(\mathbf{B}_{k-1}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\right)+NH_{1+s}(W).\label{eq:indpW}
\end{align}
On the other hand,
for $k\ge 2$,
\begin{align}
& \frac{1}{N}H_{1+s}\left(\mathbf{B}_{k-1}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\right)\nonumber \\
& =\frac{1}{sN}\log\left[\mathbb{E}_{\mathbf{U}_{k-1}}\sum_{\mathbf{x},\mathbf{y}}Q_{XY|U}^{N}\left(\mathbf{x},\mathbf{y}|\mathbf{U}_{k-1}\right)\sum_{\mathbf{b}}Q_{B|XYU}^{N}\left(\mathbf{b}|\mathbf{x},\mathbf{y},\mathbf{U}_{k-1}\right)^{1+s}\right]\label{eq:-29}\\
& =\frac{1}{sN}\log\left[\sum_{m}P_{\mathbf{M}_{k-1}}\left(m\right)\prod_{i=1}^{N}\left(\sum_{x,y}Q_{XY|U}(x,y|U_{i}\left(m\right))\sum_{b}Q_{B|XYU}(b|x,y,U_{i}\left(m\right))^{1+s}\right)\right]\nonumber \\
& =\frac{1}{sN}\log\left[\sum_{m}P_{\mathbf{M}_{k-1}}\left(m\right)e^{sN\sum_{u}T_{\mathbf{U}\left(m\right)}\left(u\right)H_{1+s}\left(B|XY,U=u\right)}\right]\label{eq:typeU}\\
& \geq\frac{1}{sN}\log\left[\sum_{m}P_{\mathbf{M}_{k-1}}\left(m\right)e^{\left(1-\epsilon\right)sN\sum_{u}Q_{U}\left(u\right)H_{1+s}\left(B|XY,U=u\right)}\right]\label{eq:typeave}\\
& =\left(1-\epsilon\right)\sum_{u}Q_{U}\left(u\right)H_{1+s}\left(B|XY,U=u\right),\label{eq:-35}
\end{align}
where $T_{\mathbf{U}\left(m\right)}$ in \eqref{eq:typeU} denotes
the empirical distribution of the sequence $\mathbf{U}\left(m\right)$,
and \eqref{eq:typeave} follows by combining the typical average lemma
on p. 26 of \cite{Gamal} and the fact that by the construction of
the codebook, all codewords $\mathbf{U}\left(m\right)$ come from
$\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)$.
In fact, for $k=2$, \eqref{eq:-35} still holds since in this case $\mathbf{B}_1$ is uniform and independent of $\mathbf{X}_1,\mathbf{Y}_1$ and $\mathbf{U}_1$ is set to a constant.
Substituting \eqref{eq:indpW}
and \eqref{eq:-35} into \eqref{eq:-36}, we have \eqref{eq:D_M},
i.e.,
\begin{equation}
D_{1+s}\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}}P_{\mathcal{C}_{k}}\right)\to0\label{eq:-2}
\end{equation}
uniformly for all $k\ge 2$ as $N\to\infty$.
We next prove that if
the inequality in \eqref{eq:-34} holds,
then
we have \eqref{eq:D_Y}. First, by the data processing inequality,
\begin{align*}
& D_{1+s}\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{U}_{k-1}}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}\right)\\
& \geq D_{1+s}\left(P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}\right).
\end{align*}
In fact, the LHS above is identical to the LHS of \eqref{eq:D_M}
(or \eqref{eq:-2}), since $\left(\mathbf{X}^{k-2},\mathbf{Y}^{k-2},\mathcal{C}^{k-1},\hat{\mathcal{C}}^{k}\right)\leftrightarrow(\mathbf{U}_{k-1},\mathcal{C}_{k})\leftrightarrow\left(\mathbf{X}_{k-1},\mathbf{Y}_{k-1},\mathbf{M}_{k}\right)$
holds under the distribution $P$ (see the reasoning around \eqref{eq:dist2}).
Combining this with \eqref{eq:D_M}, we have
\begin{align*}
D_{1+s}\left(P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}\right) & \to0
\end{align*}
uniformly for all $k\ge 2$ as $N\to\infty$. That is,
\begin{align}
\frac{1}{N}H_{1+s}\left(P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}|P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}\right) & \to R\label{eq:-13}\\
& >\left(1+\epsilon\right)D_{1+s}\left(Q_{XY|U}\|Q_{XY}|Q_{U}\right).\label{eq:-28}
\end{align}
Now we need the following lemma on one-shot channel resolvability.
This can be proved by a technique similar to that used in Lemma \ref{lem:oneshotach-2}, for which we have given a complete proof.
\begin{lem}
\cite[Lemma 1]{yu2019renyi}\label{lem:oneshotach} Consider a random mapping
$f_{\mathcal{C}}:\mathcal{W}\rightarrow\mathcal{X}$. We set $\mathcal{C}=\left\{ X\left(w\right)\right\} _{w\in\mathcal{W}}$
with $X\left(w\right),w\in\mathcal{W}$ drawn independently for different
$w$'s and according to the same distribution $P_{X}$, and set $f_{\mathcal{C}}\left(w\right)=X\left(w\right)$.
This forms a random code. For this random code, we have for $s\in(0,1]$
and any distributions $P_{W},P_{Y|X}$ and $Q_{Y}$,
\begin{align}
& e^{sD_{1+s}(P_{Y|\mathcal{C}}\|Q_{Y}|P_{\mathcal{C}})}\nonumber \\
& \leq e^{sD_{1+s}\left(P_{Y|X}\|Q_{Y}|P_{X}\right)-sH_{1+s}\left(P_{W}\right)}+e^{sD_{1+s}(P_{Y}\|Q_{Y})},\label{eq:-123}
\end{align}
where the distribution $P_{Y|\mathcal{C}}$ is induced by the ``true''
joint distribution $P_{\mathcal{C}}P_{W}P_{Y|X=f_{\mathcal{C}}(W)}$,
and the distribution $P_{Y}$ is induced by the ``ideal'' joint
distribution $P_{X}P_{Y|X}$.
\end{lem}
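A numerical sanity check analogous to the one after Lemma \ref{lem:oneshotach-1} is possible here as well. The following Python sketch (toy sizes, $s=1$; the reference distribution $Q_{Y}$ and all other values are assumptions) estimates the LHS of \eqref{eq:-123} by Monte Carlo over random codebooks and compares it with the RHS.
\begin{verbatim}
# A Monte Carlo sanity check (toy sizes, s = 1; conventions as in the
# privacy-amplification check above) of the one-shot resolvability bound.
import numpy as np

rng = np.random.default_rng(5)
nW, nX, nY, s = 3, 4, 3, 1.0
P_W = rng.dirichlet(np.ones(nW))
P_X = rng.dirichlet(np.ones(nX))
P_YgX = rng.dirichlet(np.ones(nY), size=nX)       # rows: P_{Y|X=x}
P_Y = P_X @ P_YgX                                 # "ideal" output distribution
Q_Y = rng.dirichlet(np.ones(nY)) + 0.05           # assumed reference output
Q_Y /= Q_Y.sum()

H2_W = -np.log((P_W ** 2).sum())                  # Renyi entropy, order 1+s = 2
rhs = (np.exp(-s * H2_W) * float(P_X @ (P_YgX ** 2 / Q_Y).sum(axis=1))
       + (P_Y ** 2 / Q_Y).sum())

acc, trials = 0.0, 5000
for _ in range(trials):
    X_of_w = rng.choice(nX, size=nW, p=P_X)       # codebook: X(w) ~ P_X, i.i.d.
    P_YgC = P_W @ P_YgX[X_of_w]                   # P_{Y|C} = sum_w P_W(w) P_{Y|X(w)}
    acc += (P_YgC ** 2 / Q_Y).sum()
print(f"E-term = {acc/trials:.4f}  <=  bound = {rhs:.4f}: {acc/trials <= rhs}")
\end{verbatim}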
Lemma \ref{lem:oneshotach} immediately implies the following conditional version.
\begin{lem}
\label{lem:oneshotach2} Under the same assumptions as in Lemma \ref{lem:oneshotach},
for $s\in(0,1]$ and any distributions $P_{AW}P_{B},P_{Y|XB}$ and
$Q_{Y|B}$, we have
\begin{align}
& e^{sD_{1+s}(P_{Y|AB\mathcal{C}}\|Q_{Y|B}|P_{A}P_{B}P_{\mathcal{C}})}\nonumber \\
& \leq e^{sD_{1+s}\left(P_{Y|XB}\|Q_{Y|B}|P_{X}P_{B}\right)-sH_{1+s}\left(P_{W|A}|P_{A}\right)}+e^{sD_{1+s}(P_{Y|B}\|Q_{Y|B}|P_{B})},\label{eq:-124}
\end{align}
where the distribution $P_{Y|AB\mathcal{C}}$ is induced by the ``true''
joint distribution $P_{\mathcal{C}}P_{AW}P_{B}P_{Y|B,X=f_{\mathcal{C}}(W)}$,
and the distribution $P_{Y|B}$ is induced by the ``ideal'' joint
distribution $P_{B}P_{X}P_{Y|XB}$.
\end{lem}
\begin{IEEEproof}[Proof of Lemma \ref{lem:oneshotach2}]
Applying Lemma \ref{lem:oneshotach} with substitution $P_{W}\leftarrow P_{W|A=a},P_{Y|X}\leftarrow P_{Y|X,B=b}$
and $Q_{Y}\leftarrow Q_{Y|B=b}$, we obtain that
\begin{align}
& e^{sD_{1+s}(P_{Y|A=a,B=b,\mathcal{C}}\|Q_{Y|B=b}|P_{\mathcal{C}})}\nonumber \\
& \leq e^{sD_{1+s}\left(P_{Y|X,B=b}\|Q_{Y|B=b}|P_{X}\right)-sH_{1+s}\left(P_{W|A=a}\right)}+e^{sD_{1+s}(P_{Y|B=b}\|Q_{Y|B=b})}.
\end{align}
Taking expectations with respect to $(A,B)\sim P_{A}P_{B}$ on both
sides above, we obtain \eqref{eq:-124}.
\end{IEEEproof}
Recall that
\[
\widetilde{Q}_{\mathbf{U}}=\frac{Q_{U}^{N}1_{\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)}}{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)\right)}.
\]
Note that the codebook in Lemmas \ref{lem:oneshotach} and \ref{lem:oneshotach2}
is generated in the same way as the codebook $\hat{\mathcal{C}}_{k}$
in our scheme. Applying Lemma \ref{lem:oneshotach2} with substitution
$A\leftarrow(\mathcal{C}^{k},\hat{\mathcal{C}}^{k-1}),B\leftarrow\mathbf{X}_{k},W\leftarrow\mathbf{M}_{k},X\leftarrow\mathbf{U}_{k},Y\leftarrow\mathbf{Y}_{k},\mathcal{C}\leftarrow\hat{\mathcal{C}}_{k}$
and the corresponding distributions $P_{AW}\leftarrow P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}},P_{B}\leftarrow\pi_{X}^{N},P_{X}\leftarrow\widetilde{Q}_{\mathbf{U}},P_{Y|XB}\leftarrow Q_{Y|UX}^{N},Q_{Y|B}\leftarrow Q_{Y|X}^{N}$
(which induces $P_{Y|AB\mathcal{C}}=P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}$),
we have
\begin{align}
& e^{sD_{1+s}\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)}\nonumber \\
& \leq e^{sD_{1+s}\left(\widetilde{Q}_{\mathbf{Y}|\mathbf{XU}}\|Q_{Y|X}^{N}|\pi_{X}^{N}\widetilde{Q}_{\mathbf{U}}\right)-sH_{1+s}\left(P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}|P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}\right)}+e^{sD_{1+s}(\widetilde{Q}_{\mathbf{Y}|\mathbf{X}}\|Q_{Y|X}^{N}|\pi_{X}^{N})},\label{eq:-123-2}
\end{align}
where $\widetilde{Q}_{\mathbf{Y}|\mathbf{XU}}:=Q_{Y|UX}^{N}$ and
$\widetilde{Q}_{\mathbf{Y}|\mathbf{X}}$ are induced by the ``ideal''
joint distribution
\begin{align}
\widetilde{Q}_{\mathbf{UXY}}:=\widetilde{Q}_{\mathbf{U}}\pi_{X}^{N}Q_{Y|XU}^{N}.\label{eq:Qtilde}
\end{align}
Note that here $P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k-1}P_{\mathbf{M}_{k}|\mathcal{C}^{k}\hat{\mathcal{C}}^{k-1}}$
and $P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}$
correspond to the ``true'' conditional distributions induced by
our scheme. Moreover, according to the process of encoding, $\mathbf{X}_{k},\hat{\mathcal{C}}_{k},(\mathcal{C}^{k},\hat{\mathcal{C}}^{k-1},\mathbf{M}_{k})$
are mutually independent.
On one hand, $\widetilde{Q}_{\mathbf{U}}$ is not far from the product
version $Q_{U}^{N}$, as shown in the following equations:
\begin{align}
& D_{1+s}(\widetilde{Q}_{\mathbf{U}}\|Q_{U}^{N})\nonumber \\
& =\frac{1}{s}\log\sum_{\mathbf{u}}\left(\frac{Q_{U}^{N}\left(\mathbf{u}\right)1\left\{ \mathbf{u}\in\mathcal{T}_{\epsilon}^{(N)}\right\} }{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\right)}\right)^{1+s}\left(Q_{U}^{N}\left(\mathbf{u}\right)\right)^{-s}\label{eq:-46}\\
& =\frac{1}{s}\log\sum_{\mathbf{u}\in\mathcal{T}_{\epsilon}^{(N)}}\left(\frac{1}{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\right)}\right)^{1+s}Q_{U}^{N}\left(\mathbf{u}\right)\\
& =\log\frac{1}{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\right)}\label{eq:-52}\\
& \rightarrow0,\label{eq:-15-2}
\end{align}
where \eqref{eq:-15-2} follows from the fact that $Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\right)\rightarrow1$.
By the data processing inequality
for R\'{e}nyi divergence
\cite{Erven} and by the definition
of the distribution $\widetilde{Q}$ in \eqref{eq:Qtilde}, we have
\begin{equation}
D_{1+s}(\widetilde{Q}_{\mathbf{Y}|\mathbf{X}}\|Q_{Y|X}^{N}|\pi_{X}^{N})\leq D_{1+s}(\widetilde{Q}_{\mathbf{YU}|\mathbf{X}}\|Q_{UY|X}^{N}|\pi_{X}^{N})=D_{1+s}(\widetilde{Q}_{\mathbf{U}}\|Q_{U}^{N}).
\end{equation}
Hence $D_{1+s}(\widetilde{Q}_{\mathbf{Y}|\mathbf{X}}\|Q_{Y|X}^{N}|\pi_{X}^{N})\rightarrow0$
as well.
On the other hand, by a derivation similar to the steps from \eqref{eq:-29} to
\eqref{eq:-35}, we have
\begin{align*}
& \frac{1}{N}D_{1+s}\left(\widetilde{Q}_{\mathbf{Y}|\mathbf{XU}}\|Q_{Y|X}^{N}|\pi_{X}^{N}\widetilde{Q}_{\mathbf{U}}\right)\leq\left(1+\epsilon\right)D_{1+s}\left(Q_{XY|U}\|Q_{XY}|Q_{U}\right),
\end{align*}
since $\widetilde{Q}_{\mathbf{Y}|\mathbf{XU}}=Q_{Y|XU}^{N}$ and every sequence $\mathbf{u}$ with $\widetilde{Q}_{\mathbf{U}}(\mathbf{u})>0$ has a type close to $Q_{U}$.
By \eqref{eq:-123-2}, $D_{1+s}\left(P_{\mathbf{Y}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\hat{\mathcal{C}}^{k}}\|Q_{Y|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\hat{\mathcal{C}}}^{k}\right)\to0$
since the conditions in \eqref{eq:-13} and \eqref{eq:-28} hold.
\section{\label{sec:Proof-of-Theorem-broadcast}Proof of Theorem \ref{thm:sequentialCS-broadcast-1}}
\subsection{Cardinality Bounds}
We first prove the cardinality bounds for
the calculation of
$\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$.
Note that the constraints in \eqref{eq:-18-1-4-2-1} can be rewritten as
$H(XYZ|V)-H(BXYZ|UV) \le H(W)$ and $H(XYZ|V)-H(BXYZ|U\hat{U}V) \le H(W)+H(\hat{W})$. By the support lemma in \cite[Appendix C]{Gamal}, the cardinality of $\mathcal{V}$ can be upper bounded by
$3$, without changing the constraints and the objective function.
Applying the support lemma in \cite[Appendix C]{Gamal} again, for each $v$, we can restrict the size of the support of $P_{U|V=v}$ to be no larger than $|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1$
without changing the linear functionals $P_{XYZ|V=v}$ and $H(BXYZ|U,V=v),H(BXYZ|\hat{U},U,V=v)$, and hence also without changing the constraints and the objective function.
Therefore, the cardinality of $\mathcal{U}$ can be upper bounded by
$3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$.
Applying the support lemma in \cite[Appendix C]{Gamal} again, for each $(u,v)$, we can restrict the size of the support of $P_{\hat{U}|U=u,V=v}$ to be no larger than $|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|$
without changing the linear functionals $P_{BXYZ|U=u,V=v}$ and $H(BXYZ|\hat{U},U=u,V=v)$, and hence also without changing the constraints and the objective function (since $P_{XYZ|V=v}$ remains unchanged as well).
Therefore, the cardinality of $\hat{\mathcal{U}}$ can be upper bounded by
$3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|$.
We next prove the cardinality bounds for
the calculation of
$\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$.
The constraints for this case can be rewritten as
$H(XYZ|V)-H(XYZ|UV)-H(B|XYZ\hat{U}UV) \le H(W)$ and $H(XYZ|V)-H(BXYZ|\hat{U}UV) \le H(W)+H(\hat{W})$. By the support lemma in \cite[Appendix C]{Gamal}, we can restrict $|\mathcal{V}| \le 3$. Applying the support lemma in \cite[Appendix C]{Gamal} again, for each $v$, we can restrict the size of the support of $P_{U|V=v}$ to be no larger than $|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1$
without changing the linear functionals $P_{XYZ|V=v}$ and $H(XYZ|U,V=v)-H(B|XYZ\hat{U}U,V=v),H(BXYZ|\hat{U}U,V=v)$, and hence also without changing the constraints and the objective function.
Therefore, the cardinality of $\mathcal{U}$ can be upper bounded by
$3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$.
Applying the support lemma in \cite[Appendix C]{Gamal} again, for each $(u,v)$, we can restrict the size of the support of $P_{\hat{U}|U=u,V=v}$ to be no larger than $|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1$
without changing the linear functionals $P_{BXYZ|U=u,V=v}$ and $H(B|XYZ\hat{U},U=u,V=v),H(BXYZ|\hat{U},U=u,V=v)$, and hence also without changing the constraints and the objective function.
Therefore, the cardinality of $\hat{\mathcal{U}}$ can be upper bounded by
$3(|\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)(|\mathcal{B}||\mathcal{X}||\mathcal{Y}||\mathcal{Z}|+1)$.
\subsection{Upper Bound }
We first prove the upper bound, i.e., $\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)\leq\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$,
by using a proof similar to that of Theorem \ref{thm:sequentialCS}.
In order to do this, we prove that
$\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)\leq\Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$,
where $\Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ is defined like
$\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ except that the
strict inequalities in the constraints are replaced by weak inequalities and
$\min$ is replaced by $\inf$.
This suffices because, under the assumption that
there is at least one pair $(y,z)$ such that $\pi_{YZ|X}(y,z|x)>0$ for all $x$ with $\pi_{X}(x)>0$,
we can show that
$\Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ equals
$\hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ by using an argument similar to that in Lemma \ref{lem:psilem}.
Let
$\overline{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$
be defined like
$\Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$
but with $V$ replaced with a constant.
Let
$\left(Q_{U\hat{U}},Q_{B|XU\hat{U}},Q_{Y|BU\hat{U}},Q_{Y|BU}\right)$
be a tuple
that satisfies the constraints
under the infimum in the definition of
$\overline{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$.
Let
\begin{align*}
\mathcal{C} & :=\left\{ \mathbf{M}\left(\mathbf{b},\mathbf{w}\right):\left(\mathbf{b},\mathbf{w}\right)\in\mathcal{B}^{N}\times\mathcal{W}^{N}\right\} \\
\mathcal{C}' & :=\left\{ \hat{\mathbf{M}}\left(\hat{\mathbf{w}}\right):\hat{\mathbf{w}}\in\hat{\mathcal{W}}^{N}\right\}
\end{align*}
be two random binning codebooks, where the bin indices $\mathbf{M}\left(\mathbf{b},\mathbf{w}\right)\sim\mathrm{Unif}\left[1:e^{NR}\right]$ and $\hat{\mathbf{M}}\left(\hat{\mathbf{w}}\right)\sim\mathrm{Unif}\left[1:e^{N\hat{R}}\right]$
are generated independently. Let $\mathcal{C}_{k},k=1,2,...$
be independent copies of $\mathcal{C}$ and $\mathcal{C}_{k}',k=1,2,...$
be independent copies of $\mathcal{C}'$. The codebook sequences $\left\{ \mathcal{C}_{k}\right\} ,\left\{ \mathcal{C}_{k}'\right\} $
are shared by all the terminals, Alice, Bob, and Charles (although
$\left\{ \mathcal{C}_{k}'\right\} $ will not be used by Charles).
Let
\[
\hat{\mathcal{C}}:=\left\{ \left(\mathbf{U}\left(m\right),\hat{\mathbf{U}}\left(m,\hat{m}\right)\right):m\in\left[1:e^{NR}\right],\hat{m}\in\left[1:e^{N\hat{R}}\right]\right\}
\]
be another random codebook where $\mathbf{U}\left(m\right)\sim\widetilde{Q}_{\mathbf{U}},\hat{\mathbf{U}}\left(m,\hat{m}\right)\sim\widetilde{Q}_{\hat{\mathbf{U}}|\mathbf{U}}\left(\cdot|\mathbf{U}\left(m\right)\right)$
are generated independently. Here $\widetilde{Q}_{\mathbf{U}}$ and
$\widetilde{Q}_{\hat{\mathbf{U}}|\mathbf{U}}$ are the following truncated
product distributions:
\begin{align*}
\widetilde{Q}_{\mathbf{U}} & =\frac{Q_{U}^{N}1_{\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)}}{Q_{U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U}\right)\right)},\\
\widetilde{Q}_{\hat{\mathbf{U}}|\mathbf{U}}\left(\cdot|\mathbf{u}\right) & =\frac{Q_{\hat{U}|U}^{N}\left(\cdot|\mathbf{u}\right)1_{\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U\hat{U}}|\mathbf{u}\right)}}{Q_{\hat{U}|U}^{N}\left(\mathcal{T}_{\epsilon}^{(N)}\left(Q_{U\hat{U}}|\mathbf{u}\right)|\mathbf{u}\right)},\forall\mathbf{u}\in\mathcal{U}^{N}.
\end{align*}
Let $\hat{\mathcal{C}}_{k},k=1,2,...$ be independent copies of $\hat{\mathcal{C}}$.
The codebook sequence $\left\{ \hat{\mathcal{C}}_{k}\right\} $ is
also shared by all the terminals (Alice, Bob, and Charles). We choose
rates $R,\hat{R}$ such that
\begin{align}
I_{Q}\left(U;XYZ\right) & <R<H(W)+H_{Q}\left(B|XYZU\hat{U}\right),\label{eq:-11}\\
\hat{R} & <H(\hat{W}),\label{eq:-12}\\
I_{Q}\left(U\hat{U};XYZ\right) & <R+\hat{R}.\nonumber
\end{align}
Such a pair $\left(R,\hat{R}\right)$ exists if and only if
\begin{align*}
I_{Q}\left(U;XYZ\right) & <H(W)+H_{Q}\left(B|XYZU\hat{U}\right),\\
I_{Q}\left(U\hat{U};XYZ\right) & <H(\hat{W})+H(W)+H_{Q}\left(B|XYZU\hat{U}\right),
\end{align*}
or equivalently,
\begin{align*}
H_{Q}\left(U\right) & < H(W)+H_{Q}\left(BU|XYZ\right)-I_{Q}\left(B;\hat{U}|XYZU\right),\\
H_{Q}\left(U\hat{U}\right) & < H(\hat{W})+H(W)+H_{Q}\left(BU\hat{U}|XYZ\right),
\end{align*}
which are
satisfied by the tuple
$\left(Q_{U\hat{U}},Q_{B|XU\hat{U}},Q_{Y|BU\hat{U}},Q_{Y|BU}\right)$
by assumption.
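The equivalence of the two forms can be verified directly via the chain rule:
\begin{align*}
I_{Q}\left(U;XYZ\right) & =H_{Q}\left(U\right)-H_{Q}\left(U|XYZ\right),\\
H_{Q}\left(U|XYZ\right)+H_{Q}\left(B|XYZU\hat{U}\right) & =H_{Q}\left(BU|XYZ\right)-I_{Q}\left(B;\hat{U}|XYZU\right),\\
H_{Q}\left(U\hat{U}|XYZ\right)+H_{Q}\left(B|XYZU\hat{U}\right) & =H_{Q}\left(BU\hat{U}|XYZ\right),
\end{align*}
and substituting these identities into the mutual information conditions yields the entropy conditions above.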
Consider the following sequence of superposition codes.
For the
first block (from epoch $1$ to epoch $N$),
Alice
sends a sequence of i.i.d. uniform r.v.'s
$B_t \sim \mathrm{Unif}(\mathcal{B})$ to Bob and Charles, where $\mathbf{B}_1$ is independent of $\mathbf{X}_1$.
Bob and Charles respectively generate
$\mathbf{Y}_1$ with a fixed distribution $\hat{Q}_{Y}^{N}$ and $\mathbf{Z}_1$ with a fixed distribution $\hat{Q}_{Z}^{N}$, where $(\hat{Q}_{Y},\hat{Q}_{Z})$ is
an optimal pair attaining $\Delta:=\min_{Q_{Y},Q_{Z}}D\left(Q_{Y}Q_{Z}\|\pi_{YZ|X}|\pi_{X}\right)$. Note that $\Delta$ is finite by assumption. Furthermore, $\mathbf{M}_1,\hat{\mathbf{M}}_{1},\mathbf{U}_1,\hat{\mathbf{U}}_{1}$ are set to be constant. Obviously, $\mathbf{B}_1, \mathbf{X}_1, \mathbf{Y}_1, \mathbf{Z}_1$ are independent of $\mathcal{C}_1,\mathcal{C}_1',\hat{\mathcal{C}}_1$.
For the
$k$-th block
(from epoch $\left(k-1\right)N+1$ to epoch $kN$)
with $k\ge 2$,
the encoder and decoder adopt the following strategy.
All the terminals (Alice, Bob, and Charles) extract common
randomness $\mathbf{M}_{k}$ from the previous block of communication
bits $\mathbf{B}_{k-1}$ and common randomness $\mathbf{W}_{k-1}$,
by using random binning based on $\mathcal{C}_{k}$. That is, they
generate $\mathbf{M}_{k}=\mathbf{M}\left(\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$,
where $\mathbf{M}\left(\mathbf{b},\mathbf{w}\right)$ is the codeword
indexed by $\left(\mathbf{b},\mathbf{w}\right)$ in $\mathcal{C}_{k}$.
Besides, Alice and Bob also generate $\hat{\mathbf{M}}_{k}=\hat{\mathbf{M}}\left(\hat{\mathbf{W}}_{k}\right)$,
where $\hat{\mathbf{M}}\left(\hat{\mathbf{w}}\right)$ is the codeword
indexed by $\hat{\mathbf{w}}$ in $\mathcal{C}_{k}'$. Next, Alice
and Bob generate $\left(\mathbf{U}_{k},\hat{\mathbf{U}}_{k}\right)=\left(\mathbf{U}\left(\mathbf{M}_{k}\right),\hat{\mathbf{U}}\left(\mathbf{M}_{k},\hat{\mathbf{M}}_{k}\right)\right)$,
where $\left(\mathbf{U}\left(m\right),\hat{\mathbf{U}}\left(m,\hat{m}\right)\right)$
is the codeword indexed by $\left(m,\hat{m}\right)$ in $\hat{\mathcal{C}}_{k}$.
Moreover, $\mathbf{U}_{k}$ is also available at Charles since he
knows $\mathbf{M}_{k}$. Then
by using $\left(\mathbf{X}_{k},\mathbf{U}_{k},
\hat{\mathbf{U}}_{k}\right)$,
the encoder Alice generates $\mathbf{B}_{k}$ by the product distribution
$Q_{B|XU\hat{U}}^{N}$. At the decoder sides, upon observing $\left(\mathbf{B}_{k},\mathbf{U}_{k},\hat{\mathbf{U}}_{k}\right)$
Bob generates $\mathbf{Y}_{k}$ by the product distribution $Q_{Y|BU\hat{U}}^{N}$,
and upon observing $\left(\mathbf{B}_{k},\mathbf{U}_{k}\right)$ Charles
generates $\mathbf{Z}_{k}$ by the product distribution $Q_{Z|BU}^{N}$.
\begin{lem}
\label{lem:For-this-code,-1}For this code,
\begin{align}
D\left(P_{\mathbf{M}_{k}\hat{\mathbf{M}}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}\mathcal{C}_{k}\mathcal{C}_{k}'}\|\mathrm{Unif}\left[1:e^{NR}\right]\mathrm{Unif}\left[1:e^{N\hat{R}}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}}P_{\mathcal{C}_{k}}P_{\mathcal{C}_{k}'}\right) & \to0\label{eq:D_M-1}\\
D\left(P_{\mathbf{Y}_{k}\mathbf{Z}_{k}|\mathbf{X}_{k}\mathcal{C}^{k}\mathcal{C}^{\prime k}\hat{\mathcal{C}}^{k}}\|Q_{YZ|X}^{N}|\pi_{X}^{N}P_{\mathcal{C}}^{k}P_{\mathcal{C}'}^{k}P_{\hat{\mathcal{C}}}^{k}\right) & \to0\label{eq:D_Y-1}
\end{align}
uniformly for all $k\ge 2$ as $N\to\infty$.
\end{lem}
The convergence in \eqref{eq:D_M-1} follows since on one hand,
\[
P_{\mathbf{M}_{k}\hat{\mathbf{M}}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}\mathcal{C}_{k}\mathcal{C}_{k}'}=P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}\mathcal{C}_{k}}P_{\hat{\mathbf{M}}_{k}|\mathcal{C}_{k}'}
\]
and hence, the divergence in \eqref{eq:D_M-1} can be written as
the sum of the following two divergences
\begin{align}
&D\left(P_{\mathbf{M}_{k}|\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}\mathcal{C}_{k}}\|\mathrm{Unif}\left[1:e^{NR}\right]|P_{\mathbf{X}_{k-1}\mathbf{Y}_{k-1}\mathbf{Z}^{k-1}\mathbf{U}_{k-1}\hat{\mathbf{U}}_{k-1}}P_{\mathcal{C}_{k}}\right) \label{eq:-9}\\
&D\left(P_{\hat{\mathbf{M}}_{k}|\mathcal{C}_{k}'}\|\mathrm{Unif}\left[1:e^{N\hat{R}}\right]|P_{\mathcal{C}_{k}'}\right), \label{eq:-10}
\end{align}
and on the other hand, by Lemma \ref{lem:oneshotach-1}, the divergences in \eqref{eq:-9} and \eqref{eq:-10} vanish as $N\to \infty$
once the upper bounds on $R,\hat{R}$ in \eqref{eq:-11} and \eqref{eq:-12}
hold.
In order to prove \eqref{eq:D_Y-1}, we need the following lemmas,
which are generalizations of Lemma \ref{lem:oneshotach}
and Lemma \ref{lem:oneshotach2}
respectively
to superposition
codes.
\begin{lem}
\label{lem:oneshotach-2} Let $P_{X\hat{X}}$ be a probability distribution. Consider
a random mapping $f_{\mathcal{C}}:\mathcal{W}\times\hat{\mathcal{W}}\rightarrow\mathcal{X}\times\hat{\mathcal{X}}$.
We set $\mathcal{C}=\left\{ \left(X\left(w\right),\hat{X}\left(w,\hat{w}\right)\right)\right\} _{w\in\mathcal{W},\hat{w}\in\hat{\mathcal{W}}}$,
where $X\left(w\right),w\in\mathcal{W}$ are drawn independently for different
$w$'s according to the same distribution $P_{X}$, and, given $w$,
$\hat{X}\left(w,\hat{w}\right),\hat{w}\in\hat{\mathcal{W}}$ are drawn
independently for different $\hat{w}$'s according to the same distribution
$P_{\hat{X}|X}\left(\cdot|X\left(w\right)\right)$; we set $f_{\mathcal{C}}\left(w,\hat{w}\right)=\left(X\left(w\right),\hat{X}\left(w,\hat{w}\right)\right)$.
This forms a random superposition code. For this code, we have for
$s\in(0,1]$ and any distributions $P_{W\hat{W}},P_{Y|X\hat{X}}$
and $Q_{Y}$,
\begin{align}
& e^{sD_{1+s}(P_{Y|\mathcal{C}}\|Q_{Y}|P_{\mathcal{C}})}\nonumber \\
& \leq e^{sD_{1+s}\left(P_{Y|X\hat{X}}\|Q_{Y}|P_{X\hat{X}}\right)-sH_{1+s}\left(P_{W\hat{W}}\right)}\nonumber \\
& \qquad+e^{sD_{1+s}\left(P_{Y|X}\|Q_{Y}|P_{X}\right)-sH_{1+s}\left(P_{W}\right)}+e^{sD_{1+s}(P_{Y}\|Q_{Y})},
\end{align}
where the distribution $P_{Y|\mathcal{C}}$ is induced by the ``true''
joint distribution $P_{\mathcal{C}}P_{W\hat{W}}P_{Y|(X,\hat{X})=f_{\mathcal{C}}(W,\hat{W})}$,
and the distribution $P_{Y}$ is induced by the ``ideal'' joint
distribution $P_{X\hat{X}}P_{Y|X\hat{X}}$.
\end{lem}
\begin{lem}
\label{lem:oneshotach-2-1} Let $P_{X\hat{X}}$ be a probability distribution.
Consider a random mapping $f_{\mathcal{C}}:\mathcal{W}\times\hat{\mathcal{W}}\rightarrow\mathcal{X}\times\hat{\mathcal{X}}$.
We set $\mathcal{C}=\left\{ \left(X\left(w\right),\hat{X}\left(w,\hat{w}\right)\right)\right\} _{w\in\mathcal{W},\hat{w}\in\hat{\mathcal{W}}}$,
where $X\left(w\right),w\in\mathcal{W}$ are drawn independently for different
$w$'s according to the same distribution $P_{X}$, and, given $w$,
$\hat{X}\left(w,\hat{w}\right),\hat{w}\in\hat{\mathcal{W}}$ are drawn
independently for different $\hat{w}$'s according to the same distribution
$P_{\hat{X}|X}\left(\cdot|X\left(w\right)\right)$; we set $f_{\mathcal{C}}\left(w,\hat{w}\right)=\left(X\left(w\right),\hat{X}\left(w,\hat{w}\right)\right)$.
This forms a random superposition code. For this code, we have for
$s\in(0,1]$ and any distributions $P_{AW\hat{W}}P_{B},P_{Y|X\hat{X}B}$,
and $Q_{Y|B}$,
\begin{align}
& e^{sD_{1+s}(P_{Y|AB\mathcal{C}}\|Q_{Y|B}|P_{A}P_{B}P_{\mathcal{C}})}\nonumber \\
& \leq e^{sD_{1+s}\left(P_{Y|X\hat{X}B}\|Q_{Y|B}|P_{X\hat{X}}P_{B}\right)-sH_{1+s}\left(P_{W\hat{W}|A}|P_{A}\right)}\nonumber \\
& \qquad+e^{sD_{1+s}\left(P_{Y|XB}\|Q_{Y|B}|P_{X}P_{B}\right)-sH_{1+s}\left(P_{W|A}|P_{A}\right)}+e^{sD_{1+s}(P_{Y|B}\|Q_{Y|B}|P_{B})},
\end{align}
where the distribution $P_{Y|AB\mathcal{C}}$ is induced by the ``true''
joint distribution $P_{\mathcal{C}}P_{AW\hat{W}}P_{B}P_{Y|B,(X,\hat{X})=f_{\mathcal{C}}(W,\hat{W})}$,
and the distributions $P_{Y|B}$ and $P_{Y|XB}$ are induced by the
``ideal'' joint distribution $P_{B}P_{X\hat{X}}P_{Y|X\hat{X}B}$.
\end{lem}
Lemma \ref{lem:oneshotach-2-1} can be seen as a conditional version of Lemma \ref{lem:oneshotach-2}. The proof of Lemma \ref{lem:oneshotach-2} is provided in Appendix
\ref{sec:Proof-of-Lemma-1}. The extension of Lemma \ref{lem:oneshotach-2}
to Lemma \ref{lem:oneshotach-2-1} follows similarly to the extension
of Lemma \ref{lem:oneshotach} to Lemma \ref{lem:oneshotach2}.
By proof steps similar to those of Lemma \ref{lem:For-this-code,}
except for replacing Lemma \ref{lem:oneshotach2} with Lemma \ref{lem:oneshotach-2-1},
one can prove \eqref{eq:D_Y-1}. Specifically, consider the following
substitution in Lemma \ref{lem:oneshotach-2-1}: $A\leftarrow(\mathcal{C}^{k},\mathcal{C}^{\prime k},\hat{\mathcal{C}}^{k-1}),B\leftarrow\mathbf{X}_{k},W\leftarrow\mathbf{M}_{k},\hat{W}\leftarrow\hat{\mathbf{M}}_{k},X\leftarrow\mathbf{U}_{k},\hat{X}\leftarrow\hat{\mathbf{U}}_{k},Y\leftarrow(\mathbf{Y}_{k},\mathbf{Z}_{k}),\mathcal{C}\leftarrow\hat{\mathcal{C}}_{k}$
and the corresponding distributions $P_{AW\hat{W}}\leftarrow P_{\mathcal{C}}^{k}P_{\mathcal{C}'}^{k}P_{\hat{\mathcal{C}}}^{k-1}P_{\mathbf{M}_{k}\hat{\mathbf{M}}_{k}|\mathcal{C}^{k}\mathcal{C}^{\prime k}\hat{\mathcal{C}}^{k-1}},P_{B}\leftarrow\pi_{X}^{N},P_{X}\leftarrow\widetilde{Q}_{\mathbf{U}},P_{\hat{X}|X}\leftarrow\widetilde{Q}_{\hat{\mathbf{U}}|\mathbf{U}},P_{Y|X\hat{X}B}\leftarrow Q_{Y|U\hat{U}X}^{N}Q_{Z|UX}^{N},Q_{Y|B}\leftarrow Q_{YZ|X}^{N}$.
Furthermore, by proof steps similar to those from \eqref{eq:-27}
to \eqref{eq:-14}, one can show
that $\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right) \le \overline{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$. The random variable $V$ can be added by an argument similar to that
given at the end of the achievability proof of Theorem \ref{thm:sequentialCS} to conclude that
$\Gamma\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right) \le \Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$.
Since $\Delta^{+}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right) = \hat{\Delta}\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$ under our assumptions, this
completes the proof of
the achievability part of Theorem
\ref{thm:sequentialCS-broadcast-1}. The detailed proofs are omitted here.
\subsection{Lower Bound }
The lower bound follows similarly to the converse in Theorem \ref{thm:sequentialCS}.
Denote $K\sim\mathrm{Unif}\left[1:n\right]$ as a random time index,
which is independent of all other r.v.'s involved in the system. Define
$U:=\left(B^{K-1},W^{K}\right),\hat{U}:=\hat{W}^{K},V:=\left(X^{K-1},Y^{K-1},Z^{K-1},K\right),B:=B_{K},X:=X_{K},Y:=Y_{K},Z:=Z_{K}$.
Then, following derivations similar to the ones for the converse of
Theorem \ref{thm:sequentialCS}, we have
\begin{align*}
\frac{1}{n}D\left(P_{Y^{n}Z^{n}|X^{n}}\|\pi_{YZ|X}^{n}|\pi_{X}^{n}\right) & =D\left(P_{YZ|XV}\|\pi_{YZ|X}|\pi_{X}P_{V}\right),\\
H\left(U|V\right) & \le H(W)+H\left(BU|XYZV\right),\\
H\left(U\hat{U}|V\right) & \le H(\hat{W})+H(W)+H\left(BU\hat{U}|XYZV\right).
\end{align*}
Moreover,
\begin{align*}
P_{U\hat{U}VBXYZ}(u,\hat{u},v,b,x,y,z) & =P_{W}^{k}(w^{k})P_{\hat{W}}^{k}(\hat{w}^{k})P_{K}(k)P\left(b^{k-1},x^{k-1},y^{k-1},z^{k-1}|w^{k},\hat{w}^{k}\right)\\
& \qquad\times\pi_{X}(x)P\left(b_{k}|x^{k},b^{k-1},w^{k},\hat{w}^{k}\right)P\left(y_{k}|b^{k},y^{k-1},w^{k},\hat{w}^{k}\right)P\left(z_{k}|b^{k},z^{k-1},w^{k}\right)\\
& =P_{U\hat{U}V}\left(u,\hat{u},v\right)\pi_{X}(x)P_{B|XU\hat{U}V}\left(b|x,u,\hat{u},v\right)P_{Y|BU\hat{U}V}\left(y|b,u,\hat{u},v\right)P_{Z|BUV}\left(z|b,u,v\right).
\end{align*}
Combining all the above yields the lower bound $\Delta\left(\pi_{XYZ},P_{W}P_{\hat{W}}\right)$.
\subsection{\label{sec:Proof-of-Lemma-1}Proof of Lemma \ref{lem:oneshotach-2}}
Observe that
\begin{align}
& e^{sD_{1+s}(P_{Y\mathcal{C}}\|Q_{Y}\times P_{\mathcal{C}})}\nonumber \\
& =\mathbb{E}_{\mathcal{C}}\sum_{y}P^{1+s}\left(y|\mathcal{C}\right)Q^{-s}\left(y\right)\\
& =\mathbb{E}_{\mathcal{C}}\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\biggl(P\left(w,\hat{w}\right)P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\nonumber \\
& \qquad+\sum_{\hat{w}'\neq\hat{w}}P(w,\hat{w}')P\left(y|f_{\mathcal{C}}\left(w,\hat{w}'\right)\right)+\sum_{w'\neq w}\sum_{\hat{w}'}P(w',\hat{w}')P\left(y|f_{\mathcal{C}}\left(w',\hat{w}'\right)\right)\biggr)^{s}Q^{-s}\left(y\right).\label{eq:-149}
\end{align}
Then, using the inequality $(a+b+c)^{s}\le a^{s}+b^{s}+c^{s}$ for
$a,b,c\ge0$ and $0<s\le1$,
we get
\begin{align}
& e^{sD_{1+s}(P_{Y\mathcal{C}}\|Q_{Y}\times P_{\mathcal{C}})}\leq L_{1}+L_{2}+L_{3},\label{eq:-150}
\end{align}
where
\begin{align}
& L_{1}:=\sum_{y}\sum_{w,\hat{w}}P^{1+s}\left(w,\hat{w}\right)\mathbb{E}_{\mathcal{C}}\left[P^{1+s}\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\right]Q^{-s}\left(y\right)\\
& L_{2}:=\mathbb{E}_{\mathcal{C}}\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\\
& \qquad\times\left(\sum_{\hat{w}'\neq\hat{w}}P(w,\hat{w}')P\left(y|f_{\mathcal{C}}\left(w,\hat{w}'\right)\right)\right)^{s}Q^{-s}\left(y\right)\\
& L_{3}:=\mathbb{E}_{\mathcal{C}}\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\nonumber \\
& \qquad\times\left(\sum_{w'\neq w}\sum_{\hat{w}'}P(w',\hat{w}')P\left(y|f_{\mathcal{C}}\left(w',\hat{w}'\right)\right)\right)^{s}Q^{-s}\left(y\right).
\end{align}
Furthermore, $L_{1},L_{2}$, and $L_{3}$ can be respectively expressed
or upper bounded as follows.
\begin{align}
L_{1} & =\sum_{y}\sum_{w,\hat{w}}P^{1+s}\left(w,\hat{w}\right)\sum_{x,\hat{x}}P\left(x,\hat{x}\right)P^{1+s}\left(y|x,\hat{x}\right)Q^{-s}\left(y\right)\\
& =e^{sD_{1+s}\left(P_{Y|X\hat{X}}\|Q_{Y}|P_{X\hat{X}}\right)-sH_{1+s}\left(W\hat{W}\right)},\label{eq:-3-3-1}
\end{align}
\begin{align}
L_{2} & =\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)\mathbb{E}_{X\left(w\right)}\mathbb{E}_{\hat{X}\left(w,\hat{w}\right)}\left[P_{Y|X\hat{X}}\left(y|X\left(w\right),\hat{X}\left(w,\hat{w}\right)\right)\right]\nonumber \\
& \qquad\times\mathbb{E}_{\left\{ \hat{X}\left(w,\hat{w}'\right):\hat{w}'\neq\hat{w}\right\} }\left(\sum_{\hat{w}'\neq\hat{w}}P(w,\hat{w}')P_{Y|X\hat{X}}\left(y|X\left(w\right),\hat{X}\left(w,\hat{w}'\right)\right)\right)^{s}Q^{-s}\left(y\right)\label{eq:-114-1-1}\\
& \leq\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)\mathbb{E}_{X\left(w\right)}\sum_{\hat{x}}P_{\hat{X}|X}\left(\hat{x}|X\left(w\right)\right)P_{Y|X\hat{X}}\left(y|X\left(w\right),\hat{x}\right)\\
& \qquad\times\left(\sum_{\hat{w}'}P(w,\hat{w}') \mathbb{E}_{ \hat{X}\left(w,\hat{w}'\right) } P_{Y|X\hat{X}}\left(y|X\left(w\right),\hat{X}\left(w,\hat{w}'\right)\right)\right)^{s}Q^{-s}\left(y\right)\\
& = \sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)\mathbb{E}_{X\left(w\right)}P_{Y|X}\left(y|X\left(w\right)\right)\nonumber \\
& \qquad\times\left(P(w)P_{Y|X}\left(y|X\left(w\right)\right)\right)^{s}Q^{-s}\left(y\right)\label{eq:-42-2-1}\\
& =\sum_{w}P\left(w\right)^{1+s}\sum_{y}\sum_{x}P\left(x\right)P\left(y|x\right)^{1+s}Q^{-s}\left(y\right)\nonumber \\
& =e^{sD_{1+s}\left(P_{Y|X}\|Q_{Y}|P_{X}\right)-sH_{1+s}\left(W\right)},\label{eq:-148-1}
\end{align}
and
\begin{align}
L_{3} & = \sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)\mathbb{E}_{\mathcal{C}}\left[P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\right]\nonumber \\
& \qquad\times\mathbb{E}_{\mathcal{C}}\left[\left(\sum_{w'\neq w}\sum_{\hat{w}'}P(w',\hat{w}')P\left(y|f_{\mathcal{C}}\left(w',\hat{w}'\right)\right)\right)^{s}\right]Q^{-s}\left(y\right)\label{eq:-114-1}\\
& \leq\sum_{y}\sum_{w,\hat{w}}P\left(w,\hat{w}\right)\mathbb{E}_{\mathcal{C}}\left[P\left(y|f_{\mathcal{C}}\left(w,\hat{w}\right)\right)\right]\nonumber \\
& \qquad\times\left(\sum_{w',\hat{w}'}P(w',\hat{w}')\mathbb{E}_{\mathcal{C}}\left[P\left(y|f_{\mathcal{C}}\left(w',\hat{w}'\right)\right)\right]\right)^{s}Q^{-s}\left(y\right)\label{eq:-42-2}\\
& =\sum_{y}\sum_{x,\hat{x}}P\left(x,\hat{x}\right)P\left(y|x,\hat{x}\right)\nonumber \\
& \qquad\times\left(\sum_{x,\hat{x}}P\left(x,\hat{x}\right)P\left(y|x,\hat{x}\right)\right)^{s}Q^{-s}\left(y\right)\\
& =\sum_{y}P^{1+s}\left(y\right)Q^{-s}\left(y\right)\\
& =e^{sD_{1+s}(P_{Y}\|Q_{Y})}.\label{eq:-148}
\end{align}
where \eqref{eq:-42-2} follows since $x\mapsto x^{s}$ is a concave function for $0 < s \le 1$ and we relax the summation $\sum_{w'\neq w}$ to $\sum_{w'}$.
\section{\label{sec:Proof-of-Theorem-interaction}Proof of Theorem \ref{thm:sequentialCS-interaction}}
The proof of the cardinality bound is similar to the one for Theorem \ref{thm:sequentialCS}. We next prove the equality in \eqref{eq:-18-1-4-1}.
\subsection{Achievability}
We first prove the achievability part, i.e., $\Gamma\left(\pi_{SXYZ},P_{W}\right)\leq\Delta\left(\pi_{SXYZ},P_{W}\right)$.
To this end, we first show that
\begin{equation}
\Gamma\left(\pi_{SXYZ},P_{W}\right)\leq\overline{\Delta}\left(\pi_{SXYZ},P_{W}\right):=\inf_{\substack{P_{U},P_{A|SU},P_{B|XU},P_{Y|ABU},P_{Z|ABU}:\\
H\left(U\right) < H\left(ABU|SXYZ\right)+H\left(W\right)
}
}D\left(P_{YZ|SX}\|\pi_{YZ|SX}|\pi_{SX}\right).\label{eq:-18-1-1-2}
\end{equation}
Let $\left(Q_{U},Q_{A|SU},Q_{B|XU},Q_{Y|ABU},Q_{Z|ABU}\right)$ be
any tuple of joint distributions that satisfies the constraints on the right-hand side of
\eqref{eq:-18-1-1-2}.
Both Alice and Bob adopt
a coding scheme as
in the point-to-point setting.
Specifically,
for the
first block,
Alice
sends a sequence of i.i.d. uniform r.v.'s
$A_t \sim \mathrm{Unif}(\mathcal{A})$ to Bob, and Bob
sends a sequence of i.i.d. uniform r.v.'s
$B_t \sim \mathrm{Unif}(\mathcal{B})$ to Alice, where $\mathbf{A}_1,\mathbf{B}_1$ are independent of
$\mathbf{S}_1,\mathbf{X}_1$.
Alice generates $\mathbf{Y}_1$ as a constant sequence equal to $y$ and
Bob generates $\mathbf{Z}_1$ as a constant sequence equal to $z$
where $(y,z)$ are such that $\pi_{YZ|SX}(y,z|s,x) > 0$ for all $(s,x)$
(the existence of such a pair $(y,z)$ was assumed in the statement of the theorem).
Note that $D(\delta_{(y,z)}\|\pi_{YZ|SX}|\pi_{SX})$ is finite, where
$\delta_{(y,z)}$ denotes the probability distribution concentrated at $(y,z)$.
Furthermore, $\mathbf{M}_1,\mathbf{U}_1$ are set to be constant.
For the $k$-th block with $k\ge 2$,
Alice and Bob individually
generate $\mathbf{M}_{k}=\mathbf{M}\left(\mathbf{A}_{k-1},\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$
and $\mathbf{U}_{k}=\mathbf{U}\left(\mathbf{M}_{k}\right)$ by using
the common codebooks, the previous communication bits $\mathbf{A}_{k-1},\mathbf{B}_{k-1}$,
and the common randomness $\mathbf{W}_{k-1}$. Then by using $\left(\mathbf{S}_{k},\mathbf{U}_{k}\right)$,
Alice generates $\mathbf{A}_{k}$ according to the product conditional
distribution $Q_{A|SU}^{N}$ and then sends it to Bob. By using $\left(\mathbf{X}_{k},\mathbf{U}_{k}\right)$,
Bob generates $\mathbf{B}_{k}$ according to the product conditional
distribution $Q_{B|XU}^{N}$ and then sends it to Alice. Upon observing
$\left(\mathbf{A}_{k},\mathbf{B}_{k},\mathbf{U}_{k}\right)$ Alice
generates $\mathbf{Y}_{k}$ according to the product conditional distribution
$Q_{Y|ABU}^{N}$. Upon observing $\left(\mathbf{A}_{k},\mathbf{B}_{k},\mathbf{U}_{k}\right)$
Bob generates $\mathbf{Z}_{k}$ according to the product conditional
distribution $Q_{Z|ABU}^{N}$.
The distribution synthesized by the code above is exactly the one
synthesized by the following code in the point-to-point setting. Consider
a new scenario in which Alice is a sender and Bob is a receiver.
Specifically,
for the
first block,
the encoder
sends a sequence of i.i.d. uniform r.v.'s
$(A_t,B_t) \sim \mathrm{Unif}(\mathcal{A}\times \mathcal{B})$ to the decoder, where $\mathbf{A}_1,\mathbf{B}_1$ are independent of $\mathbf{S}_1,\mathbf{X}_1$.
The decoder generates $\mathbf{Y}_1$ and $\mathbf{Z}_1$ as the constant
sequences equal to $y$ and $z$, respectively, as above.
For the $k$-th block with $k\ge 2$,
as in the
interactive setting above, Alice and Bob can individually generate
$\mathbf{M}_{k}=\mathbf{M}\left(\mathbf{A}_{k-1},\mathbf{B}_{k-1},\mathbf{W}_{k-1}\right)$
and $\mathbf{U}_{k}=\mathbf{U}\left(\mathbf{M}_{k}\right)$ by using
the common codebooks, the previous communication bits $\mathbf{A}_{k-1},\mathbf{B}_{k-1}$,
and the common randomness $\mathbf{W}_{k-1}$. Alice observes $\left(\mathbf{S}_{k},\mathbf{X}_{k}\right)$,
generates bits $\left(\mathbf{A}_{k},\mathbf{B}_{k}\right)$ according
to the distribution $Q_{A|SU}^{N}Q_{B|XU}^{N}$, and then sends these
bits to Bob. After receiving these bits, Bob generates $\left(\mathbf{Y}_{k},\mathbf{Z}_{k}\right)$
according to the product conditional distribution $Q_{Y|ABU}^{N}Q_{Z|ABU}^{N}$.
By
the achievability part of the proof of
Theorem \ref{thm:sequentialCS}, the KL divergence induced by
this code is
bounded above by the term in the infimum on the RHS of \eqref{eq:-18-1-1-2}
corresponding to the chosen tuple
$\left(Q_{U},Q_{A|SU},Q_{B|XU},Q_{Y|ABU},Q_{Z|ABU}\right)$. This proves
\eqref{eq:-18-1-1-2}.
The random variable $V$ can be added into the optimization
in the definition of $\overline{\Delta}\left(\pi_{SXYZ},P_{W}\right)$ by the argument
given at the end of the achievability proof of Theorem \ref{thm:sequentialCS}.
This shows that $\Gamma(\pi_{SXYZ},P_W) \le \Delta^+(\pi_{SXYZ},P_W)$, where
\[
\Delta^{+}\left(\pi_{SXYZ},P_{W}\right):=\inf_{\substack{P_{UV},P_{A|SUV},P_{B|XUV},P_{Y|ABUV},P_{Z|ABUV}:\\
H\left(U|V\right) < H\left(ABU|SXYZV\right)+H\left(W\right)
}
}D\left(P_{YZ|SXV}\|\pi_{YZ|SX}|\pi_{SX}P_{V}\right).
\]
But under the assumption that there is some
$(y,z)$ such that $\pi_{YZ|SX}(y,z|s,x) > 0$ for all $(s,x)$,
one can show by an argument similar to that of
Lemma \ref{lem:psilem} that $\Delta^{+}\left(\pi_{SXYZ},P_{W}\right) =
\Delta\left(\pi_{SXYZ},P_{W}\right)$.
\subsection{Converse}
We next consider the converse part. Observe that
\begin{align*}
D\left(P_{Y^{n}Z^{n}|S^{n}X^{n}}\|\pi_{YZ|SX}^{n}|\pi_{SX}^{n}\right) & =\sum_{k=1}^{n}D\left(P_{Y_{k}Z_{k}|S^{k}X^{k}Y^{k-1}Z^{k-1}}\|\pi_{YZ|SX}|\pi_{SX}^{k}P_{Y^{k-1}Z^{k-1}|S^{k}X^{k}}\right).
\end{align*}
Denote $K\sim\mathrm{Unif}\left[1:n\right]$ as a random time index,
which is independent of all other r.v.'s involved in the system. Define
$U:=\left(A^{K-1},B^{K-1},W^{K}\right),V:=\left(S^{K-1},X^{K-1},Y^{K-1},Z^{K-1},K\right),A:=A_{K},B:=B_{K},S:=S_{K},X:=X_{K},Y:=Y_{K},Z:=Z_{K}$.
Then
\begin{align*}
\frac{1}{n}D\left(P_{Y^{n}Z^{n}|S^{n}X^{n}}\|\pi_{YZ|SX}^{n}|\pi_{SX}^{n}\right) & =D\left(P_{YZ|SXV}\|\pi_{YZ|SX}|\pi_{SX}P_{V}\right).
\end{align*}
It is easy to verify that
\begin{align*}
P_{UVABSXYZ} & =P_{K}(k)P_{W}^{k}\pi_{SX}^{k-1}P_{A^{k-1}B^{k-1}Y^{k-1}Z^{k-1}|S^{k-1}X^{k-1}W^{k-1}}\\
& \qquad\times\pi_{SX}P_{A_{k}|S^{k}A^{k-1}B^{k-1}Y^{k-1}W^{k}}P_{B_{k}|X^{k}A^{k-1}B^{k-1}Z^{k-1}W^{k}}P_{Y_{k}|A^{k}B^{k}S^{k}Y^{k-1}W^{k}}P_{Z_{k}|A^{k}B^{k}X^{k}Z^{k-1}W^{k}}\\
& =P_{UV}\pi_{SX}P_{A|SUV}P_{B|XUV}P_{Y|ABSUV}P_{Z|ABXUV}.
\end{align*}
Hence it remains to show $H\left(U|V\right)\leq H\left(ABU|SXYZV\right)+H\left(W\right)$.
This can be easily verified similarly to \eqref{eq:-38}-\eqref{eq:-39}.
\bibliographystyle{unsrt}
\section{Introduction}\label{sintro}
The last few decades have seen tremendous applications of heterogeneous materials in the automotive industry and in civil, aerospace and mechanical engineering. These materials possess superior mechanical properties attributed to their unique architecture and complex microstructure. Most common among these materials are concrete, alloys, polymers, reinforced composites, etc. A primary assumption generally made for computational modeling of composite materials is that these materials are periodic at the microscopic scale and that the periodic microstructures can be approximated by representative volume elements (RVEs). To develop composite materials with unusual combinations of properties, it is crucial to understand the effects of various characteristics of the RVE (microstructure, constituent phase, volume fraction, etc.) on the macroscopic material properties.
For most composite design problems, effective material properties are used instead of taking all the constituents and microstructure into consideration. Much effort has been devoted to developing mathematical and/or numerical approaches for calculating the effective/homogenized material properties. The homogenization theory, which was originally developed to study partial differential equations (PDEs) with rapidly oscillating coefficients \cite{hornung2012homogenization}, has been widely used to describe the mechanics of the periodic microstructure of composites. Numerous homogenization approaches have been developed to calculate effective properties which can subsequently be used for macroscopic structural analysis. These approaches can be classified into three categories \cite{aboudi2012}:
(1) Analytical methods, e.g., the Voigt and Reuss model \cite{voigt, reuss1929};
(2) Semi-analytical methods, e.g., generalized method of cells (GMC) \cite{aboudi2004gmc}, self-consistent scheme (SCS) \cite{scs1968, scs1978}, Mori-Tanaka method \cite{mori1973};
(3) Numerical methods, e.g., finite element (FE) \cite{feyel1999fe2, feyel2000fe2, feyel2003fe2, miehe2002fe, smit1998fe, terada2001fe}, boundary element (BE) \cite{kaminski1999bem, okada2001bem}, fast Fourier transform (FFT) \cite{lee2011fft, eisenlohr2013fft}. Each of the aforementioned approaches has its pros and cons. For example, the Voigt and Reuss model provides quick but rough upper and lower bounds for various properties of a heterogeneous material; however, the gap between the bounds grows with the volume fraction (VF) of inclusions and the degree of phase contrast \cite{kanoute2009review}. Although the numerical methods involve complicated discretizations and expensive computations, they offer a possibility to deal with homogenization of materials with arbitrary microstructures and constitutive models. These methods have been shown to be effective in modeling multiscale material behavior in both linear \cite{kaminski1999bem, terada2001fe, yuan2008homo} and nonlinear \cite{miehe2002fe, feyel1999fe2, feyel2003fe2, feyel2000fe2, yuan2008homo, hain2008numerical, liu2014regularized, liu2016nonlocal} problems, given properly defined material constituents and microstructure. However, when it comes to iterative computational design of composites with desired properties, these numerical approaches are not suitable owing to the huge computational cost \cite{fritzen2018two} and the high-dimensional sample space \cite{olson1997computational, lookman2019active, fujii2001composite}.
With the recent prevalence of data science, many machine learning (ML) approaches have been applied to material modeling, analysis and design. A novel framework named materials knowledge systems (MKS) \cite{landi2010mks, fast2011mks, kalidindi2015mks} was formulated to exploit the merits of both analytical and numerical approaches. MKS has its theoretical roots in statistical continuum mechanics theory \cite{kroner1986statistical} in which the structure-property linkage of the material is expressed as a polynomial series sum. Each term of the series is a product of local microstructure-related statistics and their corresponding physics-related (or influence) coefficients \cite{landi2010mks} which reflect the underlying knowledge of the localization relationship. The core of MKS is employing the discrete Fourier transform (DFT) to calibrate these coefficients to the results obtained from finite element analysis (FEA). This framework is characterized by computational efficiency, its data-driven nature and remarkable accuracy in a variety of works \cite{landi2010mks, fast2011mks, kalidindi2015mks, yabansu2014mks, gupta2015mks}. There are also some other applications of ML approaches in computational materials and mechanics. Fritzen and Kunc \cite{fritzen2018two} proposed a two-stage data-driven homogenization approach for nonlinear solids. Lookman \emph{et al.} \cite{lookman2019active} employed an active learning approach to navigate the search space for identifying the candidates for guiding experiments or computations. The surrogate model and utility function are used for selecting among the unexplored data.
Traditional ML techniques rely largely on feature engineering, which is time-consuming and requires expert knowledge \cite{lecun2015deep}. Deep learning (DL) approaches have been developed to address this problem. Typical DL approaches, such as fully connected neural networks (FC-NN), convolutional neural networks (CNN) and long short-term memory (LSTM), can automatically find the most salient features to be learned. These approaches have demonstrated tremendous success in a variety of applications such as speech recognition, computer vision (CV), natural language processing (NLP), etc. They have turned out to excel at discovering the intricate structures within high-dimensional data \cite{lecun2015deep}. Some of the recent applications of DL approaches in material science include material classification \cite{zheng2016, Bell2015}, defect classification \cite{masci2012, cha2017deep, faghih2016deep}, microstructure identification \cite{azimi2018, chowdhury2016image}, microstructure reconstruction \cite{li2018transfer, li2018GAN}, composite strength prediction \cite{yeh1998modeling}, etc. In this paper, we are mostly concerned with the works employing DL to address multiscale problems of composites, particularly in the context of homogenization. For example, Lu \emph{et al.} \cite{lu2018data} adopted neural networks (NN) to establish a surrogate model for electric conduction homogenization. By substituting the RVE calculations with the data-driven model in multiscale modeling, a drastic saving of computational cost (of the order of $10^4$) was achieved compared with the FE$^2$ method \cite{feyel1999fe2}. Le \emph{et al.} \cite{le2015computational} proposed a decoupled computational homogenization approach for nonlinear elastic materials using NN to approximate the effective potential. Li \emph{et al.} \cite{li2018transfer} employed the transfer learning idea on CNN for microstructure reconstruction. Bhattacharjee and Matou\v{s} \cite{bhattacharjee2016nonlinear} performed both homogenization and localization on heterogeneous hyperelastic materials using a digital database and the manifold-based nonlinear reduced order model (MNROM). The mapping between the macroscopic loading conditions and the reduced space is realized through NN. Yang \emph{et al.} \cite{yang18gan} applied generative adversarial networks (GAN) to generate microstructures with desired material properties. Cang \emph{et al.} \cite{cang2017microstructure} implemented a convolutional deep belief network (CDBN) to automate a two-way conversion between microstructures and their lower-dimensional feature representations. Bostanabad \emph{et al.} \cite{bost2016} adopted a supervised learning approach to characterize and reconstruct the stochastic microstructure.
Most of the above studies are image-based and perform representation learning within a 2D space. To fully capture the salient features of the microstructure, the 3D geometry should be considered. Very recently, Yang \emph{et al.} \cite{yang2018dl} showed the potential of three-dimensional CNN (3D-CNN) for effective elastic modulus homogenization for composites and demonstrated its advantages over traditional sophisticated physics-inspired approaches. In this work, we leverage the capability of 3D-CNN and design a network architecture for predicting the effective material properties of composites with complex heterogeneous microstructure. In particular, we consider the composite material whose microstructure can be modeled as a two-phase (matrix/inclusion) representative volume element (RVE) with randomly distributed inclusions. A diverse group of RVEs, or virtual experiment samples, have been created with different inclusion VFs and spatial distributions, so that the sample space is large enough to include the intrinsic features of the material. Finite element analysis is then performed for each of the samples to obtain the effective moduli through linear homogenization. The geometric information of the RVEs has been pre-processed to a structured (Euclidean) grid that the 3D-CNN can accept. The networks are then trained, verified and tested on synthetic data. The salient features of the proposed 3D-CNN approach include: (1) It provides an end-to-end solution for predicting the effective material properties of the composites with high efficiency and good accuracy given the geometric information of the corresponding RVEs; (2) It is able to reproduce the probability distribution of the material properties for inputs characterized by uncertainty; and (3) Its transferability makes it extremely convenient when adding supplementary data or training a model for new datasets that come from different microstructure configurations. It is worth noting that the proposed 3D-CNN approach is more advantageous for heterogeneous materials with multiple constituents and extremely complex microstructure since it has demonstrated extraordinary ability in handling high-dimensional inputs \cite{ji3DCNN, maturana3dCNN, kamnitsas3dcnn, yang2018dl}.
The rest of the paper is organized as follows. Section \ref{method} describes the proposed methodology. Specifically, generation of the training dataset (based on 2000 RVEs) is presented in Section \ref{pre}. Some pre-processing procedures are given, including the conversion of the raw data into the input format of the 3D-CNN model, the computational homogenization approach to obtain the labels, and the rescaling of the labels. In Section \ref{3dcnnintro}, the basic concepts and mathematical operations involved in the 3D-CNN are briefly introduced. Section \ref{results} presents the numerical results. We first conduct a series of parametric tests on the hyperparameters of the 3D-CNN to find an optimal network architecture. Then a comparison between the 3D-CNN prediction and the FEA result is made with regard to accuracy and efficiency in Section \ref{discussion3D-CNN}. The benefits of the 3D-CNN approach over traditional FEM are discussed. The uncertainty quantification (UQ) is conducted in Section \ref{UQ} to evaluate the performance of the current 3D-CNN model on inputs with uncertainty. In Section \ref{transferlearning}, the transferability of the proposed 3D-CNN model to a dataset representing a different type of composite microstructure is investigated. Section \ref{conc} is devoted to the conclusions of the paper and an outlook on future work.
\section{Methodologies}\label{method}
\subsection{Generation of dataset and preprocessing}\label{pre}
In the present study, we consider particle reinforced composites, e.g., metal matrix composites, whose microstructure can be represented by a parametric two-phase RVE model with a matrix phase and a particle phase. We generate 2000 RVE samples with the volume fraction (VF) of inclusions ranging from $2\%$ to $28\%$ to establish the training data (see Fig. \ref{cloud_pt}(a)). The radius of each spherical inclusion follows a uniform distribution in the range of 0.05$\sim$0.1 mm, while the edge length of the cubic RVE is 1.0 mm. The spherical inclusions within the RVE are randomly distributed based on the Hierarchical Random Sequential Adsorption (HRSA) algorithm \cite{bai2014auto}, which achieves a user-defined desired VF. Generally, RVEs with low inclusion VF demonstrate greater randomness in terms of particle spatial distributions, resulting in significant randomness of the effective elastic moduli. To resolve this issue, we impose an exponential distribution on the number of samples with regard to the VF, as shown in Fig. \ref{vf_distribution}, so as to best cover the manifold of the relationship between random RVEs and the effective elastic properties. This practice is meant to better capture the spatial characteristics of the RVE during training.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./cloud_point_v2.pdf}
\caption{(a) Geometry of RVE and (b) Generated phase voxel (point cloud).}
\label{cloud_pt}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{./VF_distribution_2.pdf}
\caption{Distribution of number of samples with respect of the inclusion volume fraction.}
\label{vf_distribution}
\end{figure}
Preprocessing is required to convert the geometric data (or discretized mesh data) of RVEs into Euclidean grids, the input format that a 3D-CNN can take. We resample the phase information of RVEs within fixed Cartesian grids. In particular, these RVEs are converted to $101\times101\times101$ voxels, where the matrix phase is denoted by 0 and the inclusion phase by 1 (see Fig. \ref{cloud_pt}(b)). Given the center location and geometric information of all these inclusions, a level-set function is used to assign a binary phase value $p$ to the voxel with coordinate $(x, y, z)$, namely,
\begin{equation}
\mathit{p}(x,y,z)= \begin{cases}
1; & \text{if } \sqrt{(x-x_i)^2+(y-y_i)^2+(z-z_i)^2} < r_i,~\exists~i\in{\{1,2,...n\}}\\
0; & \text{otherwise}
\end{cases}
\label{levelset}
\end{equation}
where $n$ is the total number of inclusions; $x_i$, $y_i$, $z_i$ and $r_i$ are the center coordinates and radius of the $i$th spherical inclusion, respectively. It is noted that the resolution of 101 voxels per edge (voxel size of 0.01 mm) is selected in order to cover all the microstructural details within the RVEs, since the minimum radius of the spherical inclusions is 0.05 mm.
\begin{table}[t!]
\small
\centering
\caption{Material properties for RVE with spherical inclusions.}
\begin{tabular}{l c c}
\hline
\multicolumn{1}{l}{Materials} & \multicolumn{1}{l}{Young's modulus (GPa)} & \multicolumn{1}{l}{Poisson's ratio} \\ \hline
Matrix & 68.9 & 0.33 \\
Inclusion & 379.2 & 0.21 \\ \hline
\end{tabular}
\label{matprop}
\end{table}
The deep learning method falls into the category of supervised learning, in which the training data need to be labelled. In this paper, linear elastic materials are considered for both the matrix and inclusion phases. The material properties of each single phase used in this study are given in Table \ref{matprop}. Since the considered composite is assumed to be orthotropic, its constitutive tensor has 9 independent components, from which the following vector of effective material properties can be obtained:
\begin{equation}
\centering
\mathbf{y}=\begin{bmatrix}
E_{11}&E_{22}&E_{33}&G_{23}&G_{13}&G_{12}&\nu_{21}&\nu_{31}&\nu_{12}&\nu_{32}&\nu_{13}&\nu_{23}
\end{bmatrix}^T\label{matparams}
\end{equation}
where $\mathbf{y}$ denotes the label for each RVE sample; $E$'s, $G$'s and $\nu$'s denote the effective elastic moduli, shear moduli and Poisson's ratios, respectively, along different directions. The computational homogenization is conducted based on the framework of the classical mathematical homogenization theory \cite{guedes1990homo, yuan2008homo} via FEM. Specifically, the homogenized constitutive tensor can be calculated through averaging $\Sigma_{ij}^{mn}(\mathbf{\xi})$ over the entire volume $\Theta$ of the RVE, expressed as
\begin{equation}\label{eq_homo:1}
\overline{L}_{ijmn}=\cfrac{1}{|\Theta|}{\displaystyle\int_{\mathbf{\xi}\in\Theta}\Sigma_{ij}^{mn}(\mathbf{\xi})d\mathbf{\xi}}
\end{equation}
in which $\Sigma_{ij}^{mn}(\mathbf{\xi})$ is the stress influence function with regard to the fine-scale coordinate $\mathbf{\xi}$. It can be interpreted as the fine-scale stress induced by a unit overall strain $\epsilon_{mn}^{c}$. The implementation of numerical homogenization is achieved by solving an RVE (or unit cell) problem under periodic boundary conditions (PBCs) and unit thermal strain \cite{yuan2008homo}. The components of the constitutive tensor can then be obtained by averaging the stress field over the volume, given by
\begin{equation}\label{eq_homo:2}
\overline{L}_{ijmn}=\cfrac{1}{|\Theta|}{\displaystyle\int_{\mathbf{\xi}\in\Theta}\sigma_{ij}^{mn}(\mathbf{\xi})d\mathbf{\xi}}
\end{equation}
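In a discretized implementation, the integral in Eq. \eqref{eq_homo:2} reduces to a volume-weighted average of element stresses over the six unit-strain load cases. A schematic NumPy sketch is given below, assuming the six finite element solutions have already produced the element stress arrays (variable names are illustrative):
\begin{verbatim}
import numpy as np

def effective_stiffness(stress_cases, elem_vol):
    """stress_cases: (6, n_elem, 6) element stresses in Voigt notation,
    one set per unit-strain PBC load case; elem_vol: (n_elem,) volumes."""
    weights = elem_vol / elem_vol.sum()
    C_bar = np.empty((6, 6))
    for m in range(6):
        # column m is the volume-averaged stress under unit strain case m
        C_bar[:, m] = stress_cases[m].T @ weights
    return C_bar
\end{verbatim}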
The constitutive tensor $\underline{\underline{\mathbf{C}}}$ can be represented in the Voigt notation, written as
\begingroup
\renewcommand*{\arraystretch}{1.2}
\begin{equation}\label{c_mat}
\begin{bmatrix}
\sigma_{xx}\\ \sigma_{yy}\\ \sigma_{zz}\\ \sigma_{yz}\\ \sigma_{zx}\\ \sigma_{xy}
\end{bmatrix}
=\underbrace{
\begin{bmatrix}
C_{11} &C_{12} &C_{13} &0 &0 &0 \\
C_{12} &C_{22} &C_{23} &0 &0 &0 \\
C_{13} &C_{23} &C_{33} &0 &0 &0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{66}
\end{bmatrix} }_{\underline{\underline{\mathbf{C}}}}
\begin{bmatrix}
\epsilon_{xx}\\ \epsilon_{yy}\\ \epsilon_{zz}\\ \gamma_{yz}\\ \gamma_{zx}\\ \gamma_{xy}
\end{bmatrix}
\end{equation}
\endgroup
The inverse of $\underline{\underline{\mathbf{C}}}$ results in the compliance matrix $\underline{\underline{\mathbf{S}}}$ shown as follows, from which the vector of effective material properties can be calculated.
\begingroup
\renewcommand*{\arraystretch}{1.5}
\begin{equation}\label{s_mat}
\underline{\underline{\mathbf{S}}}=\underline{\underline{\mathbf{C}}}^{-1}=\begin{bmatrix}
\frac{1}{E_{11}} &-\frac{\nu_{21}}{E_{22}} &-\frac{\nu_{31}}{E_{33}} &0 &0 &0 \\
-\frac{\nu_{12}}{E_{11}} & \frac{1}{E_{22}} &-\frac{\nu_{32}}{E_{33}} &0 &0 &0 \\
-\frac{\nu_{13}}{E_{11}} & -\frac{\nu_{23}}{E_{22}} & \frac{1}{E_{33}} &0 &0 &0 \\
0 & 0 & 0 & \frac{1}{G_{23}} & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{G_{31}} & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{G_{12}}
\end{bmatrix}
\end{equation}
\endgroup
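Given $\underline{\underline{\mathbf{C}}}$, the entries of $\mathbf{y}$ in Eq. \eqref{matparams} follow directly from Eq. \eqref{s_mat}; a brief NumPy sketch (the remaining Poisson's ratios are extracted analogously):
\begin{verbatim}
import numpy as np

def engineering_constants(C):
    """Extract orthotropic engineering constants from the 6x6 matrix C."""
    S = np.linalg.inv(C)                  # compliance matrix, Eq. (s_mat)
    E11, E22, E33 = 1.0 / np.diag(S)[:3]
    G23, G31, G12 = 1.0 / np.diag(S)[3:]
    nu12 = -S[1, 0] * E11                 # since S[1,0] = -nu12/E11
    nu13 = -S[2, 0] * E11
    nu23 = -S[2, 1] * E22
    return E11, E22, E33, G23, G31, G12, nu12, nu13, nu23
\end{verbatim}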
The entire dataset is randomly divided into training, validation and testing sets with a ratio of 1400:300:300. The training set is used for learning the parameters (i.e., weights and biases) of the 3D-CNN (see Section \ref{3dcnnintro}) while the validation set is used to tune the hyperparameters (i.e., the architecture) of the 3D-CNN. The validation set is also adopted as a regularizer via early stopping, i.e., training is stopped when the loss function on the validation set increases, as this is a sign of overfitting to the training data set \cite{ripley1996pattern}. The testing set, which is kept unseen during the training process, serves to confirm and evaluate the actual predictive power of the trained deep learning model.
Since the RVEs in this paper are generated artificially, we can directly extract the microstructure information from the formatted data. However, how to obtain the phase information of samples from field measurements is an issue of interest. Nondestructive imaging techniques such as X-ray micro-tomography \cite{stienon2009xray, proudhon2007xray, Alp2020}, 3-D atom probe \cite{kelly2007atom} and automated serial sectioning \cite{spowart2006automated} have made it possible to capture 3D material microstructures. These imaging techniques are characterized by high resolution. For example, synchrotron radiation micro-tomography is able to sample microstructure with a resolution of 2048 voxels in each dimension \cite{betz2007imaging}. Therefore, it will be promising for field measurement techniques to be incorporated into the current framework with appropriate down-sampling of the microstructure data. Nevertheless, this is beyond the scope of the current study.
\subsection{3D convolutional neural network}\label{3dcnnintro}
The convolutional neural network (CNN or ConvNet) was originally proposed to solve computer vision problems. LeCun \emph{et al.} \cite{lecun1989} designed one of the very first CNNs to successfully recognize handwritten digits in the 1990s. At that time, the applications of CNNs were limited by the available computational power. In recent years, the CNN approach has been revived owing to huge advancements in computational hardware such as general-purpose graphics processing units (GPUs). The CNN differs from classical FC-NNs by its weight-sharing mechanism. In this study, we propose a 3D-CNN architecture (see Fig. \ref{CNN_ARCH}) for inferring homogenized/effective material properties (e.g., elastic moduli, shear moduli and Poisson's ratios) from given microstructure configurations (e.g., discretized distributions of material phases).
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./Architecture.pdf}
\caption{Proposed 3D-CNN architecture for effective properties prediction of heterogeneous materials.}
\label{CNN_ARCH}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./conv_op.pdf}
\caption{Convolution operation in the 3D-CNN model.}
\label{Conv_op}
\end{figure}
The 3D-CNN takes the preprocessed phase voxels as the input. Multiple subsequent convolutional layers, with 3D convolution filters and pooling operations, form the core of the CNN. As indicated in Fig. \ref{Conv_op}, a 3D filter scans over the phase voxels and applies the convolution operation (a tensor dot product) to produce a feature map. The weights and biases of each filter are trained to extract the salient features from the input. Stride, padding and filter size are a few common hyperparameters defining convolutional operations. Stride denotes the step size by which the filters move each time. For instance, a stride length of 1 means the filters scan the volume voxel by voxel. To preserve the spatial size of the output, it is convenient to pad the input with zero-valued voxels. For example, the input and output sizes in Fig. \ref{Conv_op} will be identical ($21\times21\times21$) if the convolution operations are conducted with a stride of 1 and 2-layer zero padding. Pooling layers are usually added between successive convolutional layers in the CNN. They progressively reduce the spatial size of the data by down-sampling the voxel values. Pooling operations may compute the maximum or average value within a volume. Fig. \ref{Pool_op} demonstrates how the max-pooling operation works with a volume size of $2\times2\times2$. Activation layers are employed to introduce nonlinearity into the CNN: each takes a single number and applies a fixed mathematical function to it. Some typical activation functions are the Rectified Linear Unit (ReLU) $f(x)=\max(0, x)$, the Sigmoid function $f(x)={\mathrm{1} }/{(\mathrm{1} + e^{-x})}$ and the tanh function $f(x)=\tanh(x)$. Among these nonlinear functions, ReLU (see Fig. \ref{relu}) is preferred and thus selected owing to its cheap arithmetic operations and excellent convergence properties under the stochastic gradient descent (SGD) algorithm compared with the Sigmoid or tanh functions. The output value $\gamma$ at position $(x, y, z)$ on the $j$th feature map in the $i$th 3D convolutional layer can be written as \cite{ji3DCNN}
\begin{equation}
\gamma_{j,xyz}^{(i)}={\rm ReLU}\left(b_{j}^{(i)}+\sum_{m=1}^{M^{(i-1)}}\sum_{p=0}^{P^{(i)}-1}\sum_{q=0}^{Q^{(i)}-1}\sum_{r=0}^{R^{(i)}-1}w_{jm,pqr}^{(i)}\gamma_{m,(x+p)(y+q)(z+r)}^{(i-1)} \right )
\end{equation}
where ${\rm ReLU(\cdot)}$ denotes the element-wise ReLU function; $b_{j}^{(i)}$ is the common bias for the $j$th feature map; $w_{jm,pqr}^{(i)}$ is the $(p, q, r)$th value of the 3D filter for the $j$th feature map at the $i$th layer associated with the $m$th feature map in the $(i-1)$th layer; $M^{(i-1)}$ is the number of feature maps at the $(i-1)$th layer; and $P^{(i)},~Q^{(i)}$ and $R^{(i)}$ denote the size of the 3D filter at the $i$th layer. In this paper, a constant filter size is used throughout the convolutional layers.
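A naive NumPy transcription of this layer equation for a single output feature map (stride 1, no padding) makes the index conventions explicit; it is purely illustrative and not the implementation used for training:
\begin{verbatim}
# Illustrative transcription of the convolutional layer equation for one
# output feature map (stride 1, no padding).
import numpy as np

def conv3d_relu(inputs, w, b):
    # inputs: (M, X, Y, Z) feature maps of layer i-1
    # w:      (M, P, Q, R) filter weights; b: scalar bias
    M, X, Y, Z = inputs.shape
    _, P, Q, R = w.shape
    out = np.zeros((X - P + 1, Y - Q + 1, Z - R + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                out[x, y, z] = b + np.sum(w * inputs[:, x:x+P, y:y+Q, z:z+R])
    return np.maximum(out, 0.0)     # element-wise ReLU
\end{verbatim}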
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{./Pool_op.pdf}
\caption{Max pooling operation in the 3D-CNN model.}
\label{Pool_op}
\end{figure}
FC layers are employed at the end of the 3D-CNN, where neurons between two neighboring layers are fully interconnected. The FC layers take the flattened tensor from the previous hidden layer as the input and map it to the desired output, which is exactly the vector of effective material properties of length 12 as shown in Eq. \eref{matparams}. The connection between two adjacent layers, here from the $(i-1)$th to the $i$th, can be expressed concisely in the form of tensor operations, given by
\begin{equation}
\boldsymbol{\gamma}^{(i)}={\rm \sigma}\left(\mathbf{{W}}^{(i)}\boldsymbol{\gamma}^{(i-1)} + \mathbf{b}^{(i)}\right)
\end{equation}
where $\boldsymbol{\gamma}^{(i-1)}$ and $\boldsymbol{\gamma}^{(i)}$ are the input and output for the $i$th layer; ${\rm \sigma(\cdot)}$ denotes the Sigmoid activation function acting element-wise; $\mathbf{W}^{(i)}$ and $\mathbf{b}^{(i)}$ are the weight matrix and bias vector between the $i$th and the $(i-1)$th FC layers. The weights and biases in the FC layers are also the trainable parameters of the 3D-CNN. The mean square error (MSE) between the 3D-CNN's prediction and the ground truth of the training dataset is adopted as the loss function, given by
\begin{equation}
\begin{aligned}
\mathcal{L}(\mathbf{W},\mathbf{b}|\mathbf{D})=\frac{1}{n}\sum_{k=1}^{n}\sum_{l=1}^{12}\left(\text{y}_{kl}^{truth}-\text{y}_{kl}^{pred}\right)^2
\end{aligned}
\label{mse}
\end{equation}
where $\mathbf{D}$ denotes the training data set \{$\mathbf{x}_k,\mathbf{y}_k$\}, $n$ denotes the total number of samples, and $l$ indexes the components of the effective properties vector. The optimal parameters $\{\mathbf{W^*}$,$\mathbf{b^*}\}$ can be obtained by minimizing the loss function, namely,
\begin{equation}
\{\mathbf{W^{*}},\mathbf{b^{*}}\} = \argmin_{\{\mathbf{W},\mathbf{b}\}} \left\{\mathcal{L}(\mathbf{W},\mathbf{b}|\mathbf{D})\right\}
\label{argmin}
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{./ReLU.pdf}
\caption{Rectified linear unit (ReLU) function as the activation function.}
\label{relu}
\end{figure}
A common issue facing DNN-based approaches is mitigating the overfitting brought about by their extraordinary approximation ability. Several treatments are considered in this paper. Firstly, it is noted that there is a scale difference between the outputs for the elastic (or shear) moduli and the Poisson's ratios, which might cause problems for the optimization. For example, an output variable with a large range of values could result in large error gradients, causing weight values to change dramatically and making the learning process unstable \cite{bishop1995neural}. Therefore, label rescaling is employed here to address this problem. The elastic (or shear) moduli and Poisson's ratios are scaled separately into the range of 0 to 1 in a min-max manner, e.g.,
\begin{equation}
\bar{\mathbf{y}}=\frac{\mathbf{y}-\min(\mathbf{y})}{\max(\mathbf{y})-\min(\mathbf{y})}
\label{minmaxscale}
\end{equation}
where $\mathbf{y}$ denotes the output component vector while $\bar{\mathbf{y}}$ is the corresponding scaled output. In addition to label rescaling, early stopping \cite{girosi1995regularization} and sample shuffling during training are adopted as regularizers to alleviate overfitting.
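For concreteness, assuming the labels are stored as an $(n, 12)$ array with the six moduli in the first columns and the six Poisson's ratios in the last (this grouping is our assumption), the separate rescaling of Eq. \eref{minmaxscale} can be sketched as:
\begin{verbatim}
# Sketch of the separate min-max rescaling; the column grouping is assumed.
import numpy as np

def rescale_labels(labels):                    # labels: (n, 12) array
    scaled = np.empty_like(labels, dtype=float)
    for cols in (slice(0, 6), slice(6, 12)):   # moduli, then ratios
        block = labels[:, cols]
        scaled[:, cols] = (block - block.min()) / (block.max() - block.min())
    return scaled
\end{verbatim}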
In this paper, a filter size of $5\times5\times5$, a stride length of 1 and no padding are configured for the convolutional layers. Max pooling with size $2\times2\times2$ is set for the pooling layers. The ReLU function is selected as the activation function due to the aforementioned merits. Other hyperparameters, such as the number of filters and the depth of the convolutional and FC layers, are selected through parametric tests in Section \ref{parameterictest}. An adaptive learning rate optimization algorithm, Adam \cite{kingma2014adam}, is used for training the 3D-CNN models.
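One plausible Keras realization of this configuration, for the Case 2 architecture selected in Section \ref{parameterictest}, is sketched below; the exact placement of the pooling layers and the linear output layer are our assumptions where the description leaves them open:
\begin{verbatim}
# Plausible Keras sketch of the Case 2 architecture; pooling placement
# and the linear output layer are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_3dcnn(input_shape=(101, 101, 101, 1), n_outputs=12):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv3D(16, kernel_size=5, activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(16, kernel_size=5, activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="sigmoid"),
        layers.Dense(32, activation="sigmoid"),
        layers.Dense(n_outputs),    # 12 rescaled effective properties
    ])
    model.compile(optimizer=keras.optimizers.Adam(), loss="mse")
    return model
\end{verbatim}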
\section{Results}\label{results}
In this section, the performance of the proposed 3D-CNN for heterogeneous material homogenization is evaluated. A series of parametric tests on the network hyperparameters (e.g., filter size, depth, width) of the 3D-CNN are conducted to find a suitable architecture for the current application. The trained 3D-CNN is then used to predict the effective properties on the testing dataset with 300 RVEs. The performance of the 3D-CNN is discussed based on a comparison between the model inference and the results produced by traditional FEA. Since the randomness of the inclusion distribution is a significant aspect of naturally occurring heterogeneous materials, uncertainty quantification is conducted on an independent dataset that imitates input with uncertainty. Finally, the transferability of the trained 3D-CNN model to a new dataset (for RVEs with different inclusion shapes) is examined. The proposed 3D-CNN architecture is implemented with the high-level neural networks API Keras \cite{2015keras} using Python 3.7. Our networks are trained on a platform equipped with an NVIDIA GeForce GTX 1080 Ti GPU and an Intel Core i9-7980XE CPU @ 2.60GHz.
\subsection{Design of the 3D-CNN architecture}\label{parameterictest}
A typical CNN involves dozens of hyperparameters that control the learning process of the network. These include the number of filters, filter size, learning rate, number of hidden layers, and batch size, just to name a few. The huge sample space makes it nearly impossible to find an optimal combination of hyperparameters. Therefore, the hyperparameters are usually searched in a trial-and-error manner within a small sample space. Fortunately, some rules of thumb for selecting the hyperparameters can be applied here. For example, the number of filters in a convolutional layer should reflect the richness of the characteristic features within the input; it usually depends on the number of samples and the complexity of the task \cite{krizhevsky2012imagenet}. The number of FC layers and neurons directly determines the total number of parameters (weights and biases) and thus affects the representational power of the network \cite{cybenko1989approximation}. Therefore, it is natural to select the hyperparameter combination based on the underlying physical and mathematical interpretation of the ``knowledge'' to be learned. In this section, we evaluate different 3D-CNN architectures with varying numbers of hidden layers and filters. The MSE on the validation dataset, defined in Eq. \eref{mse}, is used to measure the performance of each 3D-CNN architecture.
As mentioned in Section \ref{pre}, the Cartesian grid used to sample the RVE is of size $101\times101\times101$ so that the smallest inclusion, with radius equaling 0.05 mm, can be captured. In our design of the 3D-CNN architecture, we fix the filter size to be 5 in all three dimensions so that it is identical to the size of the smallest inclusion. The batch size during training is set to 25 according to the memory space available on the hardware. The trained model with the best performance, i.e., the lowest MSE, for each architecture after 1000 epochs is saved for later inference. This is the aforementioned technique commonly known as early stopping. Table \ref{performance} provides the configurations of each 3D-CNN architecture. The convolutional layer and fully connected layer are denoted by Conv($\cdot$) and FC($\cdot$) respectively. The values within the brackets of Conv($\cdot$) indicate the filter number and filter size. Similarly, the values within the brackets of FC($\cdot$) represent the number of neurons (width) in each layer. For example, Conv(32, 5) means the convolutional layer has 32 filters of size $5\times5\times5$, while FC(64, 32) means the FC layers are composed of two layers whose widths are 64 and 32 respectively.
The corresponding MSE of each architecture is listed in Table \ref{performance}. It can be inferred from Case 2 and Case 5 that increasing the number of filters in each convolutional layer does not necessarily improve the prediction performance. A large network width might cause overfitting on the training dataset. A similar situation arises when increasing the number of convolutional layers (e.g., Case 4) or FC layers (e.g., Case 7) on the basis of Case 2. Moreover, the comparison between Cases 1-3 demonstrates that two FC layers with 64 and 32 neurons, respectively, deliver the best prediction performance on the unseen validation dataset. Taking both the accuracy and efficiency of the listed architectures into account, the 3D-CNN architecture with the hyperparameters of Case 2 is employed in the remainder of this paper.
To check how the material phase information (input) is transformed through the multiple convolutional layers, a group of example feature map slices is visualized in Fig. \ref{featuremap}. The feature maps are 3D in the present 3D-CNN approach; for easier visualization, we only show slices of them. Typically, the colored area in a feature map is called the activated region, which represents the feature extracted from the input. In our application, the activated region reflects the microstructural characteristics that the convolutional filters capture. It can be seen that the first convolutional layer preserves most of the details in the original input. As we go deeper into the convolutional layers, the feature maps become abstract because they usually represent high-level characteristics that are less visually recognizable.
\begin{table}[t!]
\caption{Performance comparison of various 3D-CNN architectures.}
\small
\centering
\begin{tabular}{l p{12cm} l l l}
\hline
\multicolumn{1}{l}{No.} & \multicolumn{1}{l}{Model description} & \multicolumn{1}{l}{MSE}\\
\hline
1 & Conv(16,5)+Conv(16,5)+Conv(32,5)+FC(32$\times$ 16) & 2.82$\times$ $10^{-4}$ \\
2 & Conv(16,5)+Conv(16,5)+Conv(32,5)+FC(64$\times$ 32) & 2.79$\times$ $10^{-4}$ \\
3 & Conv(16,5)+Conv(16,5)+Conv(32,5)+FC(128$\times$ 64) & 2.89$\times$ $10^{-4}$ \\
4 & Conv(16,5)+Conv(16,5)+Conv(16,5)+Conv(32,5)+FC(64$\times$ 32) & 6.33$\times$ $10^{-4}$ \\
5 & Conv(16,5)+Conv(32,5)+Conv(32,5)+FC(64$\times$ 32) & 3.61$\times$ $10^{-4}$ \\
6 & Conv(16,5)+Conv(16,5)+Conv(16,5)+FC(64$\times$ 32) & 3.44$\times$ $10^{-4}$ \\
7 & Conv(16,5)+Conv(16,5)+Conv(32,5)+FC(64$\times$ 32$\times$ 32) & 2.86$\times$ $10^{-4}$ \\
\hline
\end{tabular}
\label{performance}
\end{table}
\subsection{Prediction of effective properties} \label{discussion3D-CNN}
In this part, the performance of the trained 3D-CNN model is evaluated on the testing dataset, which consists of 300 RVEs with the same VF range (e.g., 2\%-28\%). The prediction and ground truth (obtained through FEA) for the effective properties of each RVE sample are shown as scatter plots in Fig. \ref{predvstruth}. With the baseline given as a red line, we can see that the trained model gives accurate predictions for the 12 components of Young's modulus, shear modulus and Poisson's ratio. It is also observed that the prediction on samples with low VFs, e.g., the bottom-left part of the scatter plots for the moduli ($E$'s and $G$'s) and the upper-right part for the Poisson's ratios ($\nu$'s), performs just as well as that on samples with high VFs, even though larger randomness is present for RVEs with low VFs. Recall that, in Section \ref{pre}, an exponential distribution of sample number against VF was imposed while generating the datasets. As a result, the number of low-VF samples is much greater than the number of high-VF samples, which alleviates the issue of low-VF-induced uncertainty. To measure the prediction performance quantitatively, we calculate the mean absolute relative error (MARE) for each component, defined as
\begin{equation}
\begin{aligned}
\text{MARE}=\frac{1}{n} \sum_{i=1}^{n} \frac{|\hat{\text{y}}_i-\text{y}_i|}{|\text{y}_i|}
\end{aligned}
\label{mare}
\end{equation}
where $\hat{\text{y}}_i$ and $\text{y}_i$ are prediction and ground truth of the component for the $i$th test sample. The results are summarized in Table \ref{maretable}. It is seen that the MAREs, for all the 12 components, are below 0.55\%.
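In code, the MARE of a single component reduces to a one-line NumPy expression:
\begin{verbatim}
# Mean absolute relative error for one property component.
import numpy as np

def mare(y_pred, y_true):
    return np.mean(np.abs(y_pred - y_true) / np.abs(y_true))
\end{verbatim}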
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./FeatureMap_v2.pdf}
\caption{Visualization of the input slice and the feature map slice.}
\label{featuremap}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./Pred_vs_Truth_new.pdf}
\caption{Comparison between the 3D-CNN prediction and ground truth (FEA).}
\label{predvstruth}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.65\textwidth]{./Consumed_time.pdf}
\caption{Comparison of the computational time per RVE for 3D-CNN prediction and FEA.}
\label{timeconsumed}
\end{figure}
\begin{table}[t!]
\caption{MARE on the testing data set.}
\small
\centering
\begin{tabular}{l l l l l l l l l l l l l}
\hline
\multicolumn{1}{l}{~} & \multicolumn{1}{l}{\textit{$E_{11}$}} & \multicolumn{1}{l}{\textit{$E_{22}$}} & \multicolumn{1}{l}{\textit{$E_{33}$}} & \multicolumn{1}{l}{\textit{$G_{23}$}} & \multicolumn{1}{l}{\textit{$G_{13}$}} & \multicolumn{1}{l}{\textit{$G_{12}$}} & \multicolumn{1}{l}{$\nu_{21}$} & \multicolumn{1}{l}{$\nu_{31}$} & \multicolumn{1}{l}{$\nu_{12}$} & \multicolumn{1}{l}{$\nu_{32}$} & \multicolumn{1}{l}{$\nu_{13}$} & \multicolumn{1}{l}{$\nu_{23}$}\\
\hline
MARE (\%) & 0.45&0.42&0.47&0.48&0.50&0.53&0.22&0.23&0.24&0.22&0.25&0.22 \\
\hline
\end{tabular}
\label{maretable}
\end{table}
The efficiency of the proposed 3D-CNN approach is also evaluated by contrasting the computational time of 3D-CNN inference with that of finite element analysis (FEA), as shown in Fig. \ref{timeconsumed}. Note that inference is defined as the prediction operation on new input data by the trained 3D-CNN model. It is well known that GPU parallelization has been highly exploited in deep learning models in the context of both network training and inference. To make the comparison fair, however, we also collect the averaged CPU time consumed by the 3D-CNN by performing inference on the CPU. The hardware configurations are given at the beginning of Section \ref{results}. It is noted that the CPU time of FEA depends largely on the number of discrete elements of the RVE. In our test, the number of tetrahedral elements in the discretized RVEs increases from 7705 for VF=2.13\% to 26136 for VF=28.22\% to maintain a reliable discretization. We collect the averaged computational time of 10 different RVEs for each fixed VF. For the 3D-CNN inference, however, the computational time is theoretically independent of the VF since all the RVEs are sampled with $101\times101\times101$ voxels, so we collect the computational time for 300 RVEs covering all VFs. It is seen from Fig. \ref{timeconsumed} that the GPU-based 3D-CNN inference provides a 25$\times$ speedup for the low-VF samples and up to a 50$\times$ speedup for the highest VF. Even on the CPU, the 3D-CNN beats traditional FEA for VFs greater than 12\%.
Another aspect that cannot be neglected is the computational time for training the 3D-CNN model. For the training dataset with 1400 RVEs considered in this paper, it takes about 35 hours on the GPU to obtain a satisfactorily trained model. Nevertheless, this high computational demand for training is a one-off cost: once the model is trained, inference can be conducted on any upcoming new RVEs that fall into the ensemble. Even if a new RVE comes from another type of composite, the transferability of the trained 3D-CNN, discussed in Section \ref{transferlearning}, will largely reduce the time expense. We will verify that transfer learning makes the 3D-CNN extremely convenient for adding supplementary data or training a model on new datasets to account for new scenarios and enhance the generalizability of the trained model.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./Pred_E_nu.pdf}
\caption{Distribution of effective properties of the 3D-CNN prediction and FEA result for the dataset of VF=7\%: (a) $E_{11}$ (b) $E_{22}$ (c) $E_{33}$ (d) $G_{23}$ (e) $G_{13}$ (f) $G_{12}$ (g) $\nu_{21}$ (h) $\nu_{31}$ (i) $\nu_{12}$ (j) $\nu_{32}$ (k) $\nu_{13}$ (l) $\nu_{23}$. }
\label{comparison}
\end{figure}
\subsection{Uncertainty quantification}\label{UQ}
Modelling of natural composites is usually characterized by uncertainty. The uncertainty may come from measurement error, microstructural randomness, the mixture of materials and other natural (or artificial) sources. Predicting the effective properties in a probabilistic/statistical sense, such as obtaining the mean value and standard deviation (SD), would provide a better reference for engineering and designing materials.
Strictly speaking, the output of a trained 3D-CNN is deterministic for a given input. Therefore, the uncertainty of the 3D-CNN output is largely driven by the variance of the input. To verify that our 3D-CNN model is capable of preserving the uncertainty of the effective properties of the particle-reinforced composite, we manually introduce uncertainty into the dataset to be evaluated, in the framework of Monte Carlo simulation. In particular, we generate groups of RVE samples whose VFs follow Gaussian distributions (e.g., means of $7\%$, $14\%$ and $21\%$ for the three configurations, and an identical standard deviation of 0.7\%). In each configuration, 200 RVEs are generated. The details of the uncertainty quantification (UQ) dataset are listed in Table \ref{uq_para}.
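The target VFs of these samples can be drawn as in the following sketch (the random seed is ours):
\begin{verbatim}
# Sketch: draw 200 target VFs (in %) for each of the three configurations.
import numpy as np

rng = np.random.default_rng(1)
vf_targets = {mu: rng.normal(loc=mu, scale=0.7, size=200)
              for mu in (7.0, 14.0, 21.0)}
\end{verbatim}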
\begin{table}[t!]
\caption{VF parameters used for uncertainty quantification.}
\small
\centering
\begin{tabular}{l l l l l}
\hline
\multicolumn{1}{l}{Mean ($\mu$, \%)} & \multicolumn{1}{l}{SD ($\sigma$, \%)} & \multicolumn{1}{l}{Number of RVE Samples}\\
\hline
7 & 0.7 & 200 \\
14 & 0.7 & 200 \\
21 & 0.7 & 200 \\
\hline
\end{tabular}
\label{uq_para}
\end{table}
Fig. \ref{comparison} presents the predicted distributions of the modulus and Poisson's ratio components in comparison with the reference ground truth. These histograms are fitted by Gaussian distributions whose mean and standard deviation parameters are also listed. It can be seen that the trained 3D-CNN produces very satisfactory predictions of the probabilistic distributions: the errors in the mean values of all the components are less than 1\%, while the predicted standard deviations are also very close to, though slightly larger than, the ground truth values.
The predicted distributions of the effective properties for the three VF cases are shown in Fig. \ref{allprediction}. It is evident that the modulus components are positively correlated with the VF while the Poisson's ratio components show the opposite trend, in accordance with the Voigt/Reuss models \cite{voigt, reuss1929}. In short, the 3D-CNN's ability to reproduce the probabilistic distributions of the effective properties, together with its high computational efficiency (as discussed in Section \ref{discussion3D-CNN}), makes it a promising approach for the probabilistic design of engineering composites \cite{du2002efficient, chen2006probabilistic}.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{./AllPredDist_v2.pdf}
\caption{Distribution of predicted effective properties for RVEs with mean VF of 7\%, 14\% and 21\%: (a) $E_{11}$ (b) $E_{22}$ (c) $E_{33}$ (d) $G_{23}$ (e) $G_{13}$ (f) $G_{12}$ (g) $\nu_{21}$ (h) $\nu_{31}$ (i) $\nu_{12}$ (j) $\nu_{32}$ (k) $\nu_{13}$ (l) $\nu_{23}$.}
\label{allprediction}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[VF = 6.26\%]{
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{./VF=6.pdf}
\end{minipage}%
}%
\subfigure[VF = 14.67\%]{
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{./VF=14.pdf}
\end{minipage}%
}%
\hfill
\centering
\subfigure[VF = 20.79\%]{
\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{./VF=20.pdf}
\end{minipage}
}%
\caption{Sampled phase voxel for RVEs with ellipsoidal inclusions}\label{elliprve}
\end{figure}
\subsection{Transferability of the trained model} \label{transferlearning}
A major assumption required by many DL approaches is that the training data and future data must come from the same generator or source; in other words, they must be in the same feature space and follow the same distribution \cite{pan2009survey}. In many real-world applications, this assumption may not hold. In these cases, if the knowledge learned by the DL model can be transferred, the effort of retraining the model on new datasets is largely reduced. Transferability refers to the convenience of transferring the learned knowledge from a trained model to a different but related problem. Transfer learning is usually achieved by transferring the pre-trained model to a new model with additional trainable parameters relying on the new dataset of interest (e.g., adding additional layers to the trained network while fixing the transferred network parameters from the original model). The need for transfer learning arises when the acquired data can easily become outdated, or when the target data is intractable (or costly) to obtain but a less rich dataset is available.
\begin{figure}[t!]
\centering
\includegraphics[width=10cm]{./learning_curve.pdf}
\caption{Comparison of the learning curves.}
\label{learningcurve}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=16cm]{./TL_VS_TS_v3.pdf}
\caption{Comparison of the prediction performance for trained from scratch (TS) model and transfer learning (TL) model.}
\label{transferlearningperfor}
\end{figure}
To examine the transferability of the previously trained 3D-CNN model, we consider a new dataset of RVEs with ellipsoidal inclusions. The major and minor radii of the ellipsoids are randomly and independently generated within the interval $[0.05,0.1]$. The overall range of the VF is the same as in the previous dataset (e.g., 2\%-28\%). Following a similar procedure as in Section \ref{pre}, a much smaller dataset with only 320 samples is generated, with the sample number an exponential function of the VF. The entire dataset is divided into training, validation and testing sets with a ratio of 200:60:60. We transfer the trained 3D-CNN model with the Case 2 architecture of Table \ref{performance}, and establish a new 3D-CNN network by adding one additional convolutional layer before flattening, e.g., Conv(32, 5), and activating the trainable parameters in the last FC layer (see Fig. \ref{CNN_ARCH}). In this way, we aim to generalize the 3D-CNN trained on RVEs with spherical inclusions to the case of ellipsoidal inclusions (see Fig. \ref{elliprve} for examples). The transfer learning (TL) model fine-tuned on the new dataset is compared with a model trained from scratch (TS) with regard to the learning curve and prediction performance; a sketch of the TL setup follows below.
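A possible Keras sketch of this TL setup is given below. Since the added convolution changes the flattened dimension, the FC layers in this sketch are re-initialized and trained on the new data while the convolutional base is frozen; the layer slicing assumes the Case 2 sketch from Section \ref{3dcnnintro}:
\begin{verbatim}
# Possible TL sketch: freeze the trained convolutional base, insert
# Conv(32,5) before flattening, and train the (re-initialized) FC layers.
from tensorflow import keras
from tensorflow.keras import layers

def build_tl_model(base):               # base: trained Case 2 model
    conv_base = keras.Sequential(base.layers[:6])   # Conv/Pool blocks
    conv_base.trainable = False
    model = keras.Sequential([
        keras.Input(shape=(101, 101, 101, 1)),
        conv_base,
        layers.Conv3D(32, kernel_size=5, activation="relu"),  # added
        layers.Flatten(),
        layers.Dense(64, activation="sigmoid"),
        layers.Dense(32, activation="sigmoid"),
        layers.Dense(12),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
\end{verbatim}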
The learning curves for both models (TL \emph{vs.} TS) are shown in Fig. \ref{learningcurve}, where the $x$-axis denotes the epoch and the $y$-axis denotes the loss function value. It can be seen that the initial loss is much lower for the TL model, which indicates that the model transferred from spherical inclusions can already capture well the latent features of RVEs with ellipsoidal inclusions. The asymptote of the TS convergence curve is much higher than that of the TL model. Given a small training dataset, the TL model converges much faster, taking only dozens of epochs for the loss to decrease to $3.7\times10^{-4}$, which is close to our best model ($2.79\times10^{-4}$) discussed in Section \ref{parameterictest}. This demonstrates that we can successfully transfer the knowledge of, as well as fine-tune, a pre-trained 3D-CNN model to achieve good accuracy at a particularly low training expense. Therefore, transfer learning may help overcome problems such as a lack of data and the high computational cost of training a large model. These challenges are especially critical in field measurements, where rich RVE data are costly to obtain. The prediction performance of the TL and TS models is compared in Fig. \ref{transferlearningperfor}. It is evident that the TL model outperforms the TS model in both the bias and the variance of the effective properties. The averaged MAREs of the TL and TS models over all the components are 0.43\% and 1.36\%, respectively.
\section{Conclusions}\label{conc}
In this paper, a 3D-CNN approach is proposed for determining the effective/homogenized properties of heterogeneous materials. In particular, we consider RVEs reinforced by randomly distributed particle inclusions (e.g., spherical and ellipsoidal inclusions). The geometries of the RVEs are generated using the Hierarchical Random Sequential Adsorption (HRSA) algorithm \cite{bai2014auto} and labeled for training the 3D-CNN model via FEA-based linear homogenization. The proposed 3D-CNN architecture consists of multiple hidden 3D convolutional layers, pooling operations, flattening and FC layers. A parametric study of the network hyperparameters has been conducted to determine the network architecture with the best inference performance. The proposed approach was tested in a series of numerical experiments in the context of inference accuracy, computational efficiency, uncertainty quantification (UQ) ability and transferability. The results show the promising potential of the proposed approach to advance the efficient design and analysis of heterogeneous composite materials composed of representative microstructures.
It is worth mentioning that the comparison with the FEA results shows that the 3D-CNN model can reproduce the effective material properties with high accuracy (e.g., a maximum prediction error of around 0.5\%). The 3D-CNN also demonstrates advantages over traditional FEA regarding the computational efficiency of model inference, achieving a speed-up of 25$\times$ to 50$\times$ with GPU operation. In addition, the UQ study verifies that the trained 3D-CNN is capable of accurately predicting probabilistic distributions of the effective material properties, in the framework of Monte Carlo simulation, when uncertain inputs are provided.
In summary, the proposed 3D-CNN offers the following benefits: (1) It provides an end-to-end solution for predicting the effective material properties from 3D phase voxels, which can be obtained via parametric modeling or advanced imaging techniques such as X-ray micro-tomography and 3D atom probe; (2) It is able to reproduce the effective properties with high accuracy and computational efficiency, which would empower faster product design iteration or design optimization for composite materials; (3) The 3D-CNN model preserves the probabilistic distribution of the effective material properties when inputs with uncertainty are provided. This feature makes the 3D-CNN a promising approach for probabilistic engineering design; (4) The knowledge learned by the 3D-CNN model can be easily transferred to a different type of composite at a very low training expense, and a good prediction performance can still be achieved even on a new dataset of small size with the help of transfer learning. This particular characteristic becomes significant when RVE data are costly to obtain.
Nevertheless, some issues of interest regarding the 3D-CNN model remain to be studied in the future, including, for example: (1) investigating the universality of transfer learning on other heterogeneous materials such as fiber-reinforced or polymer composites; (2) extending the current 3D-CNN to model composites with nonlinear material properties (to this end, the load condition on each RVE must be considered as part of the input to the networks); (3) applying the trained model, or retraining a generative model, for microstructure generation with desired effective properties \cite{yang18gan, li2018GAN}.
\section*{Acknowledgement}
The authors would like to thank Dr. Hao Sun and Dr. Ruiyang Zhang, from the Department of Civil and Environmental Engineering at Northeastern University, for their constructive suggestions and comments on designing the proposed network.
\section*{Data Availability}
The datasets and computer codes are available upon request from the authors.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Problem Formulation}
Given a rooted tree $T = (V,E)$, let $L \subset V$ be the set of leaf nodes. We wish to find
a set $P \subseteq L$ which comprises a ``good'' placement. Let $f(P) \in \natnum^{\repfact + 1}$ be defined as:
$$f(P) := \langle p_0, p_1, ..., p_\repfact \rangle, ~~ p_i = |\{ v \in V \mid v \text{ has survival number } i \text{ w.r.t. } P\} |.$$
For our purposes, a good placement is
one in which $f(P)$ is lexicominimum. Note that optimizing for lexicominima prioritizes minimizing the survival frequency in ascending order of survival number. Intuitively, the number of replicas having survival number $i$ will be minimized before minimizing those having survival number $i+1$.
We first claim that any placement which lexicominimizes $f(P)$ must have \textit{balanced} nodes. This local property is key to obtaining a near-linear running time. Before giving the formal definition, we motivate the idea with an example.
Consider Figure \ref{fig-refa}, in which $P_1$ and $P_2$ consist of the leaves labeled $1$ and $2$ respectively. Upon computing $f(P_1)$ and $f(P_2)$, we find that $f(P_1) = \langle 2, 1, 3, 7 \rangle \lgeq \langle 1, 1, 4, 7 \rangle = f(P_2)$. We invite the reader to verify that $P_2$ is an optimal solution for this tree. Consider the state of affairs at the root of the tree. In $P_1$, all replicas are placed in the subtree rooted at node $a$. This causes node $a$ to have survival number 0, while in $P_2$, 1 replica is present on $a$ and 2 are present on $b$. This causes the root to be \textit{unbalanced} when $P_1$ is considered, while in $P_2$ the root is balanced.
A placement $P$ is said to be balanced if all nodes $v \in V$ are balanced. Let node $n$ have children indexed $1, ..., k$. Also, let the subtree rooted at the $i^{th}$ child of $n$ have $\ell_i$ leaves, and $r_i$ replicas placed on it in placement $P$. Node $n$ is said to be balanced iff:
$$\ell_j - r_j > 0,~ \ell_i - r_i > 0 \implies |r_i - r_j| \leq 1, \text{ and } $$
$$\ell_i - r_i = 0,~ \ell_j - r_j > 0,~ r_j \neq 0 \implies r_i \leq r_j.$$
This definition makes a distinction between ``filled'' nodes (for which $\ell_i - r_i = 0$) and ``unfilled'' nodes, for which $\ell_i - r_i > 0$.
Imagining the children of $n$ as bins of capacity $\ell_i$, the first condition requires that all unfilled bins be approximately level, while the second ensures that a filled bin never holds more replicas than any nonempty unfilled bin.
\section{Introduction}
With the surge towards the cloud, our websites, services and data are increasingly being hosted by third-party data centers. These data centers are often contractually obligated to ensure that data is rarely, if ever, unavailable. One cause of unavailability is co-occurring component failures, which can result in outages that affect millions of websites \cite{Ver:2013:Blog} and can cost millions of dollars in profits \cite{Ple:2013:Blog}. An extensive one-year study of availability in Google's cloud storage infrastructure showed that such failures are particularly harmful. Their study emphasizes that ``correlation among node failure dwarfs all other contributions to unavailability in our production environment" \cite{ForFra+:2010:OSDI}.
We believe that the correlation found among failure events arises due to dependencies among system components. Much effort has been made in the literature to produce quality statistical models of this correlation. In using such models, however, researchers do not take advantage of the fact that these dependencies can be explicitly modeled, since they are known to the system designers. In contrast, we propose a model wherein such dependencies are included, and demonstrate how an algorithm may make use of this information to optimize the placement of data replicas within the data center.
To achieve high availability, data centers typically store multiple replicas of data to tolerate the potential failure of system components. This gives rise to a \emph{placement problem}, which, broadly speaking, involves determining which subset of nodes in the system should store a copy of a given file so as to maximize a given objective function (\emph{e.g.}, reliability, communication cost, response time, or access time). While our focus is on replica placements, we note that our model could also be used to place replicas of other system entities which require high-availability, such as virtual machines and mission-critical tasks.
\begin{figure}[b]
\begin{subfigure}[b]{0.5\textwidth}
\input{informal}
\vspace{-0.5cm}
\caption{Scenario I} \label{fig:scenarioI}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\input{informalb}
\vspace{-0.5cm}
\caption{Scenario II} \label{fig:scenarioII}
\end{subfigure}
\caption{Two scenarios represented by directed trees.}
\label{fig:scenarios}
\end{figure}
In this work, we present a new model for causal dependencies among failures, and a novel algorithm for optimal replica placement in our model. An example model is given in Fig. \ref{fig:scenarios}, in which three identical replicas of the same block of data are distributed on servers in a data center. Each server receives power from a surge protector which is located on each server rack. In Scenario I, all replicas are located on nodes which share the same rack. In Scenario II, each replica is located on a separate rack. As can be seen from the diagram of Scenario I (Fig. \ref{fig:scenarioI}), a failure in the power supply unit (PSU) of a single rack could result in a situation where every replica of a data block is completely unavailable, whereas in Scenario II (Fig. \ref{fig:scenarioII}), three PSUs would need to fail in order to achieve the same result. In practice, Scenario I is avoided by ensuring that the replicas are placed on nodes which lie on separate racks. This heuristic is already part of known best practices. Our observation is that this simple heuristic can be suboptimal under certain conditions. For example, consider a failure in an aggregation switch which services multiple racks. Such a failure could impact the availability of every data replica stored on those racks. Moreover, this toy example only represents a small fraction of the number of events that could be modeled in a large data center.
While many approaches for replica placement have been proposed, our approach of modeling causal dependencies among failure events appears to be new. Other work on reliability in storage area networks has focused on objectives such as mean time to data loss \cite{DBLP:conficdcsLianCZ05,lian-chen-zhang}. These exemplify an approach towards correlated failure which we term ``measure-and-conquer''. In measure-and-conquer approaches, a measured degree of correlation is given as a parameter to the model. In contrast, we model explicit causal relations among failure events which we believe give rise to the correlation seen in practice. In \cite{DBLP:conficdcsLianCZ05} the authors consider high-availability replica placement, but are primarily focused on modeling the effects of repair time. Later work \cite{lian-chen-zhang} begins to take into account information concerning the network topology, which is a step towards our approach. Similar measure-and-conquer approaches are taken in \cite{ForFra+:2010:OSDI,BakWyl+:2002:TR, WeaMos+:2002:SRDS, NatYu+:2006:NSDI}. More recently, Pezoa and Hayat \cite{Pezoa} have presented a model in which spatially correlated failures are explicitly modeled. However, they consider the problem of task allocation, whereas we are focused on replica placement. In the databases community, work on replica placement primarily focuses on finding optimal placements in storage-area networks with regard to a particular distributed access model or mutual exclusion protocol \cite{HuJia+:2001:JPDC, SheWu:2001:TCS, ZhaWu+:2009:JPDC}. In general, much of the work from this community focuses on specialized communication networks and minimizing communication costs --- system models and goals which are substantially different from our own.
Recently, there has been a surge of interest in computer science concerning cascading failure in networks \cite{BluEas+:2011:FOCS, NieLui+:2014:IPL, KimDob:2010:TransRel, ZhuYan+:2014:TPDS}. While our model is most closely related to this work, the existing literature is primarily concerned with applications involving large graphs intended to capture the structure of the world-wide web, or power grids. The essence of all these models is captured in the \textit{threshold cascade model} \cite{BluEas+:2011:FOCS}. This model consists of a directed graph in which each node $v$ is associated with a threshold, $\ell(v) \in \natnum^+$. A node $v$ experiences a cascading failure if at least $\ell(v)$ of its incoming neighbors have failed. This model generalizes our own, wherein we pessimistically assume that $\ell(v) = 1$ for all nodes $v$. Current work in this area is focused on network design \cite{BluEas+:2011:FOCS}, exploring new models \cite{NieLui+:2014:IPL, KimDob:2010:TransRel}, and developing techniques for adversarial analysis \cite{ZhuYan+:2014:TPDS}. To our knowledge, no one has yet considered the problem of replica placement in such models.
\section{Model}\label{sec:model}
We model dependencies among failure events as a directed graph, where nodes represent failure events, and a directed edge from $u$ to $v$ indicates that the occurrence of failure event $u$ could trigger the occurrence of failure event $v$. We refer to this graph as the \emph{failure model}.
Given such a graph as input, we consider the problem of selecting nodes on which to store data replicas. Roughly, we define a \emph{placement problem} as the problem of selecting a subset of these vertices, hereafter referred to as a \emph{placement}, from the failure model so as to satisfy some safety criterion. In our application, only those vertices which represent storage servers are candidates to be part of a placement. We refer to such vertices as \emph{placement candidates}. Note that the graph also contains vertices representing other types of failure events, which may correspond to real-world hardware unsuitable for storage (such as a ToR switch), or even to abstract events which have no associated physical component. In most applications, the set of placement candidates forms a subset of the set of vertices.
More formally, let $E$ denote the set of failure events, and $C$ denote the set of placement candidates. We are interested in finding a \emph{placement} of size $\repfact$, which is defined to be a set $P \subseteq C$, with $|P| = \repfact$. Throughout this paper we will use $P$ to denote a placement, and $\repfact$ to denote its size. We consistently use $C$ to denote the set of placement candidates, and $E$ to denote the set of failure events.
Let $G = (V,A)$ be a directed graph with vertices in $V$ and edges in $A$. The vertices represent both events in $E$ and candidates in $C$, so let \mbox{$V = E \cup C$}. A directed edge between events $e_1$ and $e_2$ indicates that the occurrence of failure event $e_1$ can trigger the occurrence of failure event $e_2$. A directed edge between event $e$ and candidate $c$ indicates that the occurrence of event $e$ could compromise candidate $c$. We assume failures act transitively: that is, if a failure event occurs, all failure events reachable from it in $G$ also occur. This is a pessimistic assumption which leads to a conservative interpretation of failure.
We now define the notions of \textit{failure number} and \textit{failure aggregate}.
\begin{definition}\label{def-failure}
Let $e \in E$. The \textit{failure number} of event $e$, denoted $f(e,P)$, for a given placement $P$, is defined as the number of candidates in $P$ whose correct operation could be compromised by occurrence of event $e$. In particular, $$f(e,P) = | \{ p \in P \mid p \text{ is reachable from } e \text{ in } G \}|.$$
\end{definition}
As an example, node $u$ in Fig. \ref{fig:scenarios} has failure number $3$ in Scenario I, and failure number $1$ in Scenario II. The following property is an easy consequence of the above definition. A formal proof can be found in the appendix.
\begin{property}\label{lem-desc}
For any placement $P$ of replicas in tree $T$, if node $i$ has descendant $j$, then $f(j, P) \leq f(i, P)$.
\end{property}
The failure number captures a conservative criterion for a safe placement. Intuitively, we consider the worst case scenario, in which every candidate which \emph{could} fail due to an occurring event \emph{does} fail. Our goal is to find a placement which does not induce large failure numbers in any event.
To aggregate this idea across all events, we define \textit{failure aggregate}, a measure that accounts for the failure number of every event in the model.
\begin{definition}
The \emph{failure aggregate} of a placement $P$ is a vector in $\mathbb{N}^{\repfact+1}$, denoted $\vec{f}(P)$, where \mbox{$\vec{f}(P) := \langle p_\repfact, ..., p_1, p_0\rangle$}, and each $p_i := \left| \big\{ e \in E \mid f(e, P) = i\big\} \right|$.
\end{definition}
In Fig. \ref{fig:scenarios}, node $v$ has failure aggregate $\langle 2, 0, 0, 1 \rangle$ in Scenario I and failure aggregate $\langle 1, 0, 2, 0 \rangle$ in Scenario II. Failure aggregate is also computed in Fig. \ref{fig-balanced-survivalnums}.
In all of the problems considered in this paper, we are interested in optimizing $\vec{f}(P)$. When optimizing a vector quantity, we must choose a meaningful way to totally order the vectors. In the context of our problem, we find that ordering the vectors with regard to the \emph{lexicographic order} is both meaningful and convenient. The lexicographic order $\leq_L$ between $\vec{f}(P) = \langle p_\repfact, ..., p_1, p_0\rangle$ and $\vec{f}(P') = \langle p'_\repfact, ..., p'_1, p'_0\rangle$ is defined via the following formula:
$$\vec{f}(P) \leq_L \vec{f}(P') \iff \exists~ m \geq 0 ~\big[\, p_m \leq p'_m ~\wedge~ \forall ~i > m ~ (p_i = p'_i) \,\big]. $$
To see why this is desirable, consider a placement $P$ which lexicominimizes $\vec{f}(P)$ among all possible placements. Such a placement is guaranteed to minimize $p_\repfact$, i.e. the number of nodes which compromise \emph{all} of the entities in our placement. Further, among all solutions minimizing $p_\repfact$, $P$ also minimizes $p_{\repfact-1}$, the number of nodes compromising \emph{all but one} of the entities in $P$, and so on for $p_{\repfact-2}, p_{\repfact-3},..., p_{0}$. Clearly, the lexicographic order prioritizes minimizing the entries of the vector in an appealing manner.
Throughout the paper, any time a vector quantity is maximized or minimized, we are referring to the maximum or minimum value in the lexicographic order. We will also use $\vec{f}(P)$ to denote the failure aggregate, and $p_i$ to refer to the $i^{th}$ component of $\vec{f}(P)$, where $P$ can be inferred from context.
In the most general case, we could consider the following problem.
\begin{problem}\label{prob:additive-function}
Given graph $G = (V,A)$ with $V = C \,\cup\, E$, and positive integer $\repfact$ with $\repfact < |C|$, find a placement $P \subseteq C$ with $|P| = \repfact$ such that $\vec{f}(P)$ is lexicominimum.
\end{problem}
Problem \ref{prob:additive-function} is NP-hard to solve, even in the case where $G$ is a bipartite graph; in particular, a reduction from independent set can be shown. However, the problem is tractable for special classes of graphs, one of which is the case wherein the graph forms a directed, rooted tree with leaf set $L$ and $C = L$. Our main contribution in this paper is a fast algorithm for solving Problem \ref{prob:additive-function} in this case. We briefly mention a greedy algorithm which solves the problem in $O(n^2\repfact)$ time. However, since $n \gg \repfact$ in practice, our result of an $O(n + \repfact^2)$ algorithm is much preferred.
\subsection{An $O(n^2\repfact)$ Greedy Algorithm}
The greedy solution to this problem forms a partial placement $P'$, to which new replicas are added one at a time until $\repfact$ replicas have been placed overall. $P'$ starts out empty, and at each step the leaf $u$ which lexicominimizes $\vec{f}(P' \cup \{u\})$ is added to $P'$. This greedy algorithm correctly computes an optimal placement; however, its running time is $O(n^2\repfact)$ for a tree of unbounded degree. This running time comes about because each iteration requires checking $O(|L|)$ leaves for inclusion, and for each leaf $q$ which is checked, every node on the path from $q$ to the root must have its failure number computed. Both the length of a leaf-root path and the number of leaves can be bounded by $O(n)$ in the worst case, yielding the result.
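For concreteness, a direct Python transcription of this greedy procedure (our sketch; the tree is assumed to be given by parent pointers, with the root mapping to \texttt{None}) is:
\begin{verbatim}
# Sketch of the O(n^2 * rho) greedy algorithm on a rooted tree.
def greedy_placement(parent, leaves, rho):
    nodes = list(parent)                # parent: node -> parent (root: None)
    def aggregate(P):
        fail = {v: 0 for v in nodes}    # failure number of a node equals the
        for p in P:                     # number of replicas in its subtree
            v = p
            while v is not None:
                fail[v] += 1
                v = parent[v]
        vec = [0] * (rho + 1)
        for v in nodes:
            vec[fail[v]] += 1
        return vec[::-1]                # <p_rho, ..., p_0>; Python compares
                                        # lists lexicographically
    P = set()
    for _ in range(rho):                # add the lexicominimizing leaf
        P.add(min((u for u in leaves if u not in P),
                  key=lambda u: aggregate(P | {u})))
    return P
\end{verbatim}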
That the greedy algorithm works correctly is not immediately obvious. It can be shown via an exchange argument that each partial placement found by the greedy algorithm is a subset of some optimal placement. This is the content of Theorem 1 below.
To establish the correctness of the greedy algorithm, we first introduce some notation. For a placement $P$ and $S \subseteq V$, let $\vec{f}(S,P) = \langle g_\repfact, g_{\repfact - 1}, ..., g_1, g_0 \rangle$ where \mbox{$g_i := | \{ x \in S \mid f(x,P) = i \} |$}. Intuitively, $\vec{f}(S,P)$ gives the failure aggregate for all nodes in set $S \subseteq V$. We first establish the truth of two technical lemmas before stating and proving Theorem \ref{thm-greedy}.
We introduce notation for the set of nodes on the path from $u$ to $v$:
$$u \rightsquigarrow v := \{ x \in V \mid x \text{ is on the path from node $u$ to node $v$ } \}.$$
\begin{comment}
\begin{equation}
X \cap Y \neq \emptyset \implies \vec{f}(X \cup Y, E) = \vec{f}(X, E) + \vec{f}(Y, E)
\end{equation}
\begin{equation}
\vec{f}(E, P \cup \{x\}) = \vec{f}(E, P) - \vec{f}(r \rightsquigarrow x, P) + \vec{f}(r \rightsquigarrow x, P \cup \{x\})
\end{equation}
\begin{equation}
x \notin P \implies \reallywidehat{\vec{f}(r \rightsquigarrow x, P )} = \vec{f}(r \rightsquigarrow x, P \cup \{x\})
\end{equation}
Additionally, if $\vec{v} = \langle v_1, ..., v_k \rangle$ we define the vector shifted one index to the left as $\hat{\vec{v}} = \langle v_2, ..., v_k, 0 \rangle$. The following propositions are trivially obtained from these definitions, and are presented without proof.
\begin{equation}
\vec{v}_\repfact = 0 \implies \vec{v} \lleq \hat{\vec{v}}
\end{equation}
\begin{equation}
\hat{\vec{v}} + \hat{\vec{w}} = \reallywidehat{\vec{v} + \vec{w}}
\end{equation}
\begin{equation}
\forall \alpha \in \mathbb{R} : \alpha\hat{\vec{v}} = \reallywidehat{\alpha \vec{v}}
\end{equation}
\end{comment}
\begin{lemma}\label{lem-path-ineq}
Let $r$ be the root of a failure model given by a tree. Given $P \subseteq C$, $a,b \in C - P$. If $f(r\rightsquigarrow a, P) <_L f(r\rightsquigarrow b, P)$ then $f(P \cup \{a\}) <_L f(P \cup \{b\})$.
\end{lemma}
\begin{proof}
Suppose $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$. Let nodes on the paths from $r$ to $a$ and from $r$ to $b$ be labeled as follows:
$$r \rightarrow a_1 \rightarrow a_2 \rightarrow ... \rightarrow a_n \rightarrow a$$
$$r \rightarrow b_1 \rightarrow b_2 \rightarrow ... \rightarrow b_m \rightarrow b$$
We proceed in two cases.
In the first case, there is some $1 \leq i \leq \min (m,n)$ for which $f(a_i, P) < f(b_i, P)$. Let $i$ be the minimum such index, and let $f(b_i, P) = k$. Clearly, \mbox{$f(P \cup \{a\})_k < f(P\cup \{b\})_k$}, since $P \cup \{b\}$ counts $b_i$ as having failure number $k$ and $P \cup\{a\}$ does not. Moreover, since $f(a_\ell, P) = f(b_\ell, P)$ for all $\ell < i$, we have that for all $j > k$, $f(P \cup \{a\})_j = f(P \cup \{b\})_j$ by Property \ref{lem-desc}.
In the second case, $f(a_i, P) \geq f(b_i, P)$ for all $1 \leq i \leq \min(m,n)$.
In this case, if $f(a_i, P) > f(b_i, P)$ for some $i$, the only way we could have $f(r \rightsquigarrow a, P) <_L f(r\rightsquigarrow b, P)$ is if there is some $j > i$ with $f(a_j, P) < f(b_j, P)$, but this is a contradiction. Therefore, $f(a_i,P) = f(b_i,P)$ for all $1 \leq i \leq \min(m,n)$. So, we must also have $n \leq m$, since if $n > m$, we would have \mbox{$f(r \rightsquigarrow a, P) >_L f(r \rightsquigarrow b, P)$}. Moreover, since $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$, we must have that $n < m$, for if $n = m$, we would have $f(r \rightsquigarrow a, P) = f(r \rightsquigarrow b, P$), a contradiction.
We have just shown the existence of some node $b_{n+1}$, for which we must have that \mbox{$f(b_{n+1}, P ) \leq f(a_n, P)$}. Notice that the path $r \rightsquigarrow a$ does not have an $(n+1)^{st}$ node, so it is clear that if $f(b_{n+1}, P) = k$, then $f(P \cup \{a\})_k < f(P \cup \{b\})_k$. Finally, since $n < m$, we have by Property \ref{lem-desc} that $f(a_i, P) \leq f(a_n,P) \leq k$ for all $1 \leq i \leq n$. By an additional application of Property \ref{lem-desc}, it is easy to see that for all $j > k$, we have $f(P \cup \{a\})_j = f(P \cup \{b\})_j$. \qed
\end{proof}
From Lemma \ref{lem-path-ineq}, we obtain the following result as an easy Corollary.
\begin{corollary}\label{coro-iff}
Let $r$ be the root of a failure model given by a tree. Given $P \subseteq C$, $a,b \in C - P$. Then $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$ if and only if $f(P \cup \{a\}) \lleq f(P \cup \{b\})$.
\end{corollary}
\begin{proof}
Suppose $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$. If $f(r\rightsquigarrow a, P) = f(r\rightsquigarrow b, P)$, then since the only nodes which change failure number between placements $P$ and $P \cup \{a\}$ are those on the path $r \rightsquigarrow a$, and each of these nodes' failure numbers increases by $1$, we must have that $f(P \cup \{a\}) = f(P \cup \{b\})$, since the sequences of failure numbers on $r \rightsquigarrow a$ and $r \rightsquigarrow b$ are the same. If $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$, then the Corollary follows from Lemma \ref{lem-path-ineq}.
If instead $f(P \cup \{a\}) \lleq f(P\cup \{b\})$, and yet $f(r \rightsquigarrow a, P) >_L f(r \rightsquigarrow b, P)$, then by Lemma \ref{lem-path-ineq} we obtain that $f(P \cup \{a\}) >_L f(P \cup \{b\})$, a contradiction. \qed
\end{proof}
Given a node $u$ in a tree, let $L(u)$ be the set of all leaves which are descendants of $u$.
\begin{lemma}\label{lem-technical}
Given $P \subseteq C$, $a,b \in C$. Let $c$ be the least common ancestor of $a$ and $b$, and let $d$ be the child of $c$ on the path from $c$ to $a$. If \mbox{$f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$} and $X \subseteq C - \{a,b\}$ is a set for which $L(d) \cap X = \emptyset$, then
$$f(P \cup X \cup \{a\}) \lleq f(P \cup X \cup \{b\}).$$
\end{lemma}
\begin{proof}
We have that $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$. Consider $f(r\rightsquigarrow a, P \cup X)$ and $f(r\rightsquigarrow b, P \cup X)$. We wish to show that $f(r\rightsquigarrow a, P \cup X) \lleq f(r\rightsquigarrow b, P \cup X)$. Since $c$ is the least common ancestor of $a$ and $b$, it is clear that nodes on $r \rightsquigarrow c$ have equivalent failure numbers in both cases. Therefore it suffices to show that $f(c\rightsquigarrow a, P \cup X) \lleq f(c\rightsquigarrow b, P \cup X)$.
Note that since $L(d) \cap X = \emptyset$, we have that $f(c \rightsquigarrow a, P \cup X) = f(c \rightsquigarrow a, P)$. Moreover, since the addition of nodes in $X$ cannot cause failure numbers on the path $c \rightsquigarrow b$ to decrease, we must have that $f(c \rightsquigarrow b, P) \lleq f(c \rightsquigarrow b, P \cup X)$. Altogether, we have that
$$f(c \rightsquigarrow a, P \cup X) = f(c \rightsquigarrow a, P) \lleq f(c \rightsquigarrow b, P) \lleq f(c \rightsquigarrow b, P \cup X).$$
By applying Corollary \ref{coro-iff}, we obtain that $f(P \cup X \cup \{a\}) \lleq f(P \cup X \cup \{b\})$. \qed
\end{proof}
\begin{figure}[h]
\centering
\input{proof-figure-thm-1}
\caption{Named nodes used in Theorem \ref{thm-greedy}. The arrow labeled ``swap" illustrates the leaf nodes between which replicas are moved, and is not an edge of the graph.}\label{fig:thm-1}
\end{figure}
\begin{theorem}\label{thm-greedy}
Let $P_i$ be the partial placement from step $i$ of the greedy algorithm. Then there exists an optimal placement $P^\ast$, with $|P^\ast| = \repfact$ such that $P_i \subseteq P^\ast$.
\end{theorem}
\begin{proof}
The proof proceeds by induction on $i$. $P_0 = \emptyset$ is clearly a subset of any optimal solution. Given $P_i \subseteq P^\ast$ for some optimal solution $P^\ast$, we must show that there is an optimal solution $Q^\ast$ for which $P_{i+1} \subseteq Q^\ast$. Clearly, if $P_{i+1} \subseteq P^\ast$, then we are done, since $P^\ast$ is optimal. In the case where $P_{i+1} \not\subseteq P^\ast$ we must exhibit some optimal solution $Q^\ast$ for which $P_{i+1} \subseteq Q^\ast$. Let $u$ be the leaf which was added to $P_i$ to form $P_{i+1}$. Let $v$ be the leaf in $P^\ast - P_{i+1}$ which has the greatest-depth least common ancestor with $u$, where the depth of a node is given by its distance from the root (see Fig. \ref{fig:thm-1}). We set $Q^\ast = (P^\ast - \{v\}) \cup \{u\}$, and claim that $\vec{f}(Q^\ast) \lleq \vec{f}(P^\ast)$. Since $\vec{f}(P^\ast)$ is optimal, and $P_{i+1} \subseteq Q^\ast$ this will complete our proof.
Clearly, $f(a \rightsquigarrow u, P_i) \lleq f(a \rightsquigarrow v, P_i)$, where $a$ is the least common ancestor of $u$ and $v$ (see Fig. \ref{fig:thm-1}), since otherwise $f(r \rightsquigarrow u, P_i) >_L f(r \rightsquigarrow v, P_i)$, implying that $f(P_i \cup \{u\}) >_L f(P_i \cup \{v\})$, contradicting our use of a greedy algorithm.
Note that $u,v \notin (P^\ast - P_i - \{v\})$. Moreover, by choice of $v$, we have that $L(a) \cap (P^\ast - P_i - \{v\}) = \emptyset$, since the only nodes from $P^\ast$ in $L(a)$ must also be in $P_i$. To complete the proof, we apply Lemma \ref{lem-technical}, setting $X = P^\ast - P_i - \{v\}$. This choice of $X$ is made so as to yield the following equalities.
$$Q^\ast = (P^\ast - \{v\}) \cup \{u\} = P_i \cup (P^\ast - P_i - \{v\}) \cup \{u\}, $$
$$P^\ast = P_i \cup (P^\ast - P_i - \{v\}) \cup \{v\}. $$
By Lemma \ref{lem-technical}, we obtain inequality in the following formula,
$$f(Q^\ast) = f(P_i \cup (P^\ast - P_i - \{v\}) \cup \{u\}) \lleq f(P_i \cup (P^\ast - P_i - \{v\}) \cup \{v\}) = f(P^\ast).$$
This completes the proof. \qed
\end{proof}
\section{Balanced Placements}
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.09]{balanced-cex}
\put(0,58){$~~~2$}
\put(0,40){$1,2$}
\put(0,23){$1$}
\put(0,6){$1,2$}
\caption{Round-robin placement cannot guarantee optimality}\label{fig-cex}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\centering
\input{proof-figure-thm-2}
\caption{Nodes used in Theorem \ref {thm-balanced-sufficiency}.} \label{fig-proof-lca}
\end{minipage}\hfill
\end{figure}
Consider a round-robin placement in which the set of replicas placed at each node is distributed among its children, one replica per child, until all replicas have been placed. This process is then continued recursively at the children. Throughout the process, no child is given more replicas than its subtree has leaf nodes. This method has intuitive appeal, but it does not always compute an optimal placement, as can be seen from Fig. \ref{fig-cex}. Let placements $P_1$ and $P_2$ consist of the nodes labeled by $1$ and $2$ in Fig. \ref{fig-cex} respectively. Note that both outcomes are round-robin placements. A quick computation reveals that $\vec{f}(P_1) = \langle 1, 1, 7, 0 \rangle \neq \langle 1, 3, 3, 2 \rangle = \vec{f}(P_2)$. Since the placements have different failure aggregates, round-robin placement alone cannot guarantee optimality.
Key to our algorithm is the observation that any placement which lexicominimizes $\vec{f}(P)$ must be \textit{balanced}. If we imagine each child $c_i$ of $u$ as a bin of capacity $\ell_i$, balanced nodes are those in which all unfilled children are approximately ``level'', and no child is filled while children of smaller capacity remain unfilled. These ideas are formalized in the following definitions.
\begin{definition}
Let node $u$ have children indexed $1, ..., k$, and let the subtree rooted at the $i^{th}$ child of node $u$ have $\ell_i$ leaves, and $r_i$ replicas placed on it in placement $P$. A node for which $\ell_i - r_i = 0$ is said to be \emph{filled}. A node for which $\ell_i - r_i > 0$ is said to be \emph{unfilled}.
\end{definition}
\begin{definition}\label{def-balanced}
Node $u$ is said to be \emph{balanced} in placement $P$ iff:
$$ \ell_i - r_i > 0 \implies ~\forall\,j \in \{1,...,k\} ~ (r_i \geq r_j - 1 ) .$$
Placement $P$ is said to be \emph{balanced} if all nodes $v \in V$ are balanced.
\end{definition}
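As an illustration (ours, not part of the algorithm itself), the condition of Definition \ref{def-balanced} at a single node can be checked with the following Python sketch:
\begin{verbatim}
def node_is_balanced(leaves, replicas):
    """Check the balance condition at one node.

    leaves[i]   -- number of leaves l_i under child i
    replicas[i] -- number of replicas r_i placed under child i
    """
    for l_i, r_i in zip(leaves, replicas):
        if l_i - r_i > 0:  # child i is unfilled
            # an unfilled child may lag any other child by at most one
            if any(r_i < r_j - 1 for r_j in replicas):
                return False
    return True

assert node_is_balanced([2, 3], [2, 1])      # filled child; gap of one
assert not node_is_balanced([2, 3], [0, 3])  # unfilled child lags by three
\end{verbatim}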
To motivate a proof that lexicominimum placements must be balanced, consider Fig. \ref{fig-balanced-placement} in which $P_1$ and $P_2$ are sets containing leaf nodes labeled $1$ and $2$ respectively. Fig. \ref{fig-balanced-survivalnums} presents two copies of the same tree, but with failure numbers labeled according to $P_1$ and $P_2$. Upon computing $f(P_1)$ and $f(P_2)$, we find that $f(P_1) = \langle 2, 1, 3, 7 \rangle \lgeq \langle 1, 1, 4, 7 \rangle = f(P_2)$. Note that for placement $P_1$, the root of the tree is unbalanced; therefore $P_1$ is unbalanced. Note also that $P_2$ is balanced, since each of its nodes is balanced. We invite the reader to verify that $P_2$ is an optimal solution for this tree.
\begin{table}[tb]
\begin{minipage}[b]{0.33\textwidth}
\centering
\includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced}
\put(-88, -3){$1$}
\put(-71, -3){$1$}
\put(-67, 20){$1,2$}
\put(-46, 20){$2$}
\put(-12, 20){$2$}
\vspace{0.25cm}
\captionof{figure}{\strut Placements $P_1, P_2$ \label{fig-balanced-placement}}
\end{minipage}
\begin{minipage}[b]{0.67\textwidth}
\centering
\includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced}
\put(-50.5, 78){$3$}
\put(-71.5, 54){$3$}
\put(-29.5, 54){$0$}
\put(-80, 30.5){$2$}
\put(-63, 30.5){$1$}
\put(-46, 30.5){$0$}
\put(-29, 30.5){$0$}
\put(-12, 30.5){$0$}
\put(-88.5, 7){$1$}
\put(-71.5, 7){$1$}
\put(-46, 7){$0$}
\put(-29, 7){$0$}
\put(-12, 7){$0$}
\hspace{0.5cm}
\includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced}
\put(-50.5, 78){$3$}
\put(-71.5, 54){$1$}
\put(-29.5, 54){$2$}
\put(-80, 30.5){$0$}
\put(-63, 30.5){$1$}
\put(-46, 30.5){$1$}
\put(-29, 30.5){$0$}
\put(-12, 30.5){$1$}
\put(-88.5, 7){$0$}
\put(-71.5, 7){$0$}
\put(-46, 7){$0$}
\put(-29, 7){$0$}
\put(-12, 7){$0$}
\vspace{0.25cm}
\captionof{figure}{\strut Failure numbers for $P_1$ \textit{(left)} and $P_2$ \textit{(right)}.\label{fig-balanced-survivalnums}}
\end{minipage}
\end{table}
Our main result is that it is \textit{necessary} for an optimal placement to be balanced. The balanced property alone, however, is not sufficient to guarantee optimality: the two placements in Fig. \ref{fig-cex} are both balanced by definition, yet they have different failure aggregates. Nevertheless, Theorem \ref{thm-balanced-sufficiency} justifies discarding unbalanced solutions as suboptimal, and our algorithm exploits this property of optimal placements.
\begin{theorem}\label{thm-balanced-sufficiency}
Any placement $P$ in which $\vec{f}(P)$ is lexicominimum among all placements for a given tree must be balanced.
\end{theorem}
\begin{proof}
Suppose $P$ is not balanced, yet $\vec{f}(P)$ is lexicominimum among all placements. We proceed to a contradiction, as follows.
Let $u$ be an unbalanced node in $T$. Let $v$ be an unfilled child of $u$, and let $w$ be a child of $u$ with at least one replica such that $r_v < r_w - 1$. Since $v$ is unfilled, we can take one of the replicas placed on $w$ and place it on $v$. Let $q_w$ be the leaf node from which this replica is taken, and let $q_v$ be the leaf node on which this replica is placed (see Fig. \ref{fig-proof-lca}). Let \mbox{$P^\ast := (P - \{q_w\}) \cup \{q_v\}$}. We aim to show that $P^\ast$ is strictly better than $P$, contradicting the lexicominimality of $P$.
Let $\vec{f}(P) := \langle p_\repfact, ..., p_0 \rangle$, and $\vec{f}(P^\ast) := \langle p^\ast_\repfact, ..., p^\ast_0 \rangle$. For convenience, we let \mbox{$f(w, P) = m$}. To show that $\vec{f}(P^\ast) <_L \vec{f}(P)$, we aim to prove that $p^\ast_m < p_m$, and that for any $k$ with $\repfact \geq k > m$, that $p^\ast_k = p_k$. We will concentrate on proving the former, and afterwards show that the latter follows easily.
To prove $p^\ast_m < p_m$, observe that as a result of the swap, some nodes change failure number. These nodes all lie on the paths $v \rightsquigarrow q_v$ and $w \rightsquigarrow q_w$. Let $S^-$ (resp. $S^+$) be the set of nodes whose failure numbers change from $m$ (resp. change to $m$), as a result of the swap. Formally, we define
$$S^- := \{x \in V \mid f(x, P) = m, f(x, P^\ast) \neq m \}, $$
$$S^+ := \{x \in V \mid f(x, P) \neq m , f(x, P^\ast) = m \}.$$
By definition, $p^\ast_m = p_m - |S^-| + |S^+|$. We claim that $|S^-| \geq 1$ and $|S^+| = 0$, which yields $p^\ast_m < p_m$. To show $|S^-| \geq 1$, note that $f(w, P) = m$ by definition, and after the swap, the failure number of $w$ changes. Therefore, $|S^-| \geq 1$.
To show $|S^+| = 0$, we must prove that no node whose failure number is affected by the swap has failure number $m$ after the swap has occurred. We choose to show a stronger result: every such node's failure number must be strictly less than $m$. Let $s_v$ be an arbitrary node on the path $v \rightsquigarrow q_v$, and consider the failure number of $s_v$. As a result of the swap, one more replica is counted as failed in each node on this path, therefore $f(s_v, P^\ast) = f(s_v, P) + 1$. Likewise, let $s_w$ be an arbitrary node on path $w \rightsquigarrow q_w$. One less replica is counted as failed in each node on this path, so $f(s_w, P^\ast) = f(s_w, P) - 1$. We will show that $f(s_w, P^\ast) < m$, and $f(s_v, P^\ast) < m$.
First, note that for any $s_w$, by Property \ref{lem-desc} $f(s_w, P^\ast) \leq f(w, P^\ast) = m-1 < m$. Therefore, $f(s_w, P^\ast) < m$, as desired.
To show $f(s_v, P^\ast) < m$, note that by supposition $r_w - 1 > r_v$, and from this we immediately obtain $f(w, P) - 1 > f(v,P)$ by the definition of failure number. Now consider the nodes $s_v$, for which
$$f(s_v, P) \leq f(v,P) < f(w,P) - 1 = m - 1 \implies f(s_v, P^\ast) - 1 < m - 1,$$
where the first inequality is an application of Property \ref{lem-desc}, and the implication follows by substitution. Therefore $f(s_v, P^\ast) < m$ as desired.
Therefore, among all nodes whose failure numbers change as a result of the swap, none has failure number $m$ under $P^\ast$, so $|S^+| = 0$ as claimed. Moreover, since $f(s, P^\ast) < m$ for any node $s$ whose failure number changes as a result of the swap, we have also proven that $p_k = p^\ast_k$ for all $k$ with $\repfact \geq k > m$. This completes the proof. \qed
\end{proof}
\section{An $O(n\repfact)$ Algorithm}
Our algorithm considers only placements which are balanced. To place $\repfact$ replicas, we start by placing $\repfact$ replicas at the root of the tree, and then proceed to assign these replicas to children of the root. We then recursively carry out the same procedure on each of the children.
Before the recursive procedure begins, we obtain values of $\ell_i$ at each node by running breadth-first search as a preprocessing phase.
The recursive procedure is then executed in two consecutive phases.
During the \textit{divide} phase, the algorithm is tasked with allocating $r(u)$ replicas placed on node $u$ to the children of $u$. After the divide phase, some child nodes are filled, while others remain unfilled. To achieve balance, each unfilled child $c_i$ will have either $r(c_i)$ or $r(c_i) - 1$ replicas placed upon it. The value of $r(c_i)$ is computed for each $c_i$ as part of the divide phase. The algorithm is then recursively called on each unfilled node to obtain values of optimal solutions for their subtrees. Nodes which are filled require no further processing. The output of this call is a pair of optimal failure aggregates, one supposing $r(c_i)$ replicas are placed at $c_i$, the other supposing $r(c_i) -1$ are placed. Given these failure aggregates obtained from each child, the \textit{conquer} phase then chooses whether to place $r(c_i)$ or $r(c_i) - 1$ replicas on each unfilled child so as to achieve a lexicominimum failure aggregate for node $u$ overall. For ease of exposition, we describe an $O(n\repfact)$ version of our algorithm in this section, and prove it correct. In Section \ref{sec-improvements} we then discuss improvements which can be used to obtain an $O(n + \repfact^2)$ algorithm. Finally, we describe some tree transformations which can be used to obtain an $O(n + \repfact \log \repfact)$ algorithm in Section \ref{sec-best}.
\subsection{Divide Phase}\label{sec-divide}
When node $u$ is first considered, it receives at most two possible values for the number of replicas it could be asked to accommodate. Let these be the values $r(u)$ and $r(u) - 1$. Let $u$ have a list of children indexed $1,2,..., m$, with leaf capacities $\ell_i$ where $1 \leq i \leq m$. The divide phase determines which children will be filled and which will be unfilled. Filled children will have $\ell_i$ replicas placed on them in the optimal solution, while the number of replicas on the unfilled children is determined during the conquer phase.
The set of unfilled children can be determined (without sorting) in an iterative manner using an $O(m)$ time algorithm similar to that for the Fractional Knapsack problem. The main idea of the algorithm is as follows: in each iteration, at least one-half of the children whose status is currently unknown are assigned a filled/unfilled status. To determine which half, the median capacity child (with capacity $\ell_{med}$) is found using the selection algorithm. Based upon the number of replicas that have not been assigned to the filled nodes, either \begin{inparaenum}[a)]\item the set of children $c_i$ with $\ell_i \geq \ell_{med}$ are labeled as ``unfilled" or \item the set of children $c_i$ with $\ell_i \leq \ell_{med}$ are labeled as ``filled"\end{inparaenum}. The algorithm recurses on the remaining unlabeled children. Pseudocode for this algorithm can be found in Algorithm \ref{alg-get-filled}.
We briefly sketch the correctness of Algorithm 1. The following invariant holds after every execution of the while loop:
\begin{equation*}
\max(F)\cdot(|U| + |M|) < r - \sum_{c_i \in F} \ell_i \leq \min(U) \cdot |U| + \sum_{c_i \in M} \ell_i.
\end{equation*}
When $U = \emptyset$ or $F = \emptyset$ the invariant is not well-defined. These conditions are easy to test for: $U = \emptyset$ if and only if $\sum \ell_i = r(u)$, and $F = \emptyset$ if and only if $\ell_i > \floorfrac{r(u)}{|M|}$ for all $i$. Hence in what follows, we will work only with cases where $U \neq \emptyset$ and $F \neq \emptyset$. At the end of the algorithm, $M = \emptyset$, and the invariant reduces to the following
\begin{equation}\label{eqn-invariant-reduced}
\max(F) < \frac{r - \sum_{c_i \in F} \ell_i}{|U|} \leq \min(U).
\end{equation}
Equation \ref{eqn-invariant-reduced} indicates that the average number of replicas placed on the unfilled nodes lies between the maximum value of $F$ and the minimum value of $U$. From this, it is easy to see that the labeling is correct. Suppose that some filled child $c_i \in F$ has been incorrectly classified. This child contains at most $\ell_i - 1$ replicas, and yet is still unfilled. Moreover, to attain the average, some unfilled child must be assigned at least $\ceilfrac{r - \sum_{c_i \in F}\ell_i}{|U|}$ replicas. Taking the difference of the number of replicas assigned to these two unfilled nodes, we have
\begin{align*}
& \Big\lceil\dfrac{r - \sum_{c_i \in F}\ell_i}{|U|}\Big\rceil - \ell_i + 1 \\
\geq~~ & \Big\lceil\dfrac{r - \sum_{c_i \in F}\ell_i}{|U|}\Big\rceil - \max(F) + 1 ~\geq~ 2,
\end{align*}
where the first inequality uses $\ell_i \leq \max(F)$ and the second follows from (\ref{eqn-invariant-reduced}). A difference of two or more between two unfilled children violates the balanced placement property. Therefore, all children are correctly classified. This completes the proof sketch.
\begin{algorithm}[t]
\SetKwProg{Fn}{Function}{begin}{end}
\Fn{\getFilled{$M$, $r$}}{
$F \gets \emptyset$ ; $U \gets \emptyset$ \tcp*[r]{$F$ := filled children; $U$ := unfilled children}
\While{$M \neq \emptyset$}{
$\ell_{med} \gets \text{ median capacity of children in } M $ \;
$M_1 \gets \{c_i \in M \mid \ell_i < \ell_{med} \} $ \;
$M_2 \gets \{c_i \in M \mid \ell_i = \ell_{med} \} $ \;
$M_3 \gets \{c_i \in M \mid \ell_i > \ell_{med} \} $ \;
$x \gets r - \sum_{c_i \in F \cup M_1 \cup M_2} \ell_i$ \tcp*[r]{$x$ to be distributed among $M_3 \cup U$}
\uIf(\tcp*[f]{$M_1 \cup M_2$ guaranteed filled}){$x \geq \ell_{med} \cdot (|U| + |M_3|)$}{
$F \gets F\cup M_1 \cup M_2$ \; $M \gets M - (M_1 \cup M_2)$ \;
}\Else(\tcp*[f]{$M_2 \cup M_3$ guaranteed unfilled}) {
$U \gets U \cup M_2 \cup M_3$ \; $M \gets M - (M_2 \cup M_3)$ \;
}
}
\Return{($F$, $U$)} \tcp*[r]{return filled and unfilled children}
}
\caption{Determines filled and unfilled nodes}\label{alg-get-filled}
\end{algorithm}
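For concreteness, the following Python sketch (ours) mirrors Algorithm \ref{alg-get-filled}; for brevity it uses a sort-based median in place of true linear-time selection, so it does not attain the $O(m)$ bound claimed above:
\begin{verbatim}
from statistics import median_low

def get_filled(capacities, r):
    """Split children into filled / unfilled, as in Algorithm 1.

    capacities -- leaf counts l_i of the children of u
    r          -- number of replicas r(u) to distribute
    Returns the capacities of the filled and unfilled children;
    median_low stands in for true linear-time selection.
    """
    M, F, U = list(capacities), [], []
    while M:
        l_med = median_low(M)
        M1 = [l for l in M if l < l_med]
        M2 = [l for l in M if l == l_med]
        M3 = [l for l in M if l > l_med]
        # replicas left over after filling F, M1 and M2 to capacity
        x = r - sum(F) - sum(M1) - sum(M2)
        if x >= l_med * (len(U) + len(M3)):
            F += M1 + M2      # M1 and M2 guaranteed filled
            M = M3
        else:
            U += M2 + M3      # M2 and M3 guaranteed unfilled
            M = M1
    return F, U
\end{verbatim}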
Suppose we know that we only need to find placements of size $r(u)$ and $r(u) - 1$ for node $u$. Moreover, we know that in an optimal placement of size $r(u)$, each child $c_i$ only needs to accommodate either $r(c_i)$ or $r(c_i) - 1$ replicas. Suppose that optimal placements of size $r(c_i)$ and $r(c_i) - 1$ are available at each child $c_i$. Theorem \ref{thm-two-values} shows that these placements are all that is required to compute optimal placements of size $r(u)$ \emph{and also of size} $r(u) - 1$.
\begin{theorem}\label{thm-two-values}
In any case where $r(u)$ or $r(u) - 1$ replicas must be balanced among $k$ unfilled children, it suffices to consider placing either $\ceilfrac{r(u) - L}{k}$ or $\floorfrac{r(u)- L -1}{k}$ replicas at each unfilled child.
\end{theorem}
\begin{proof}
Let $s := r(u) - L$. Suppose $s \bmod k = 0$. If $s$ replicas are placed at $u$, then all unfilled children receive exactly $\frac{s}{k} ~(= \ceilfrac{s}{k})$ replicas. If $s - 1$ replicas are placed at $u$, one child gets $\frac{s}{k} - 1 = \floorfrac{s - 1}{k}$ replicas. If instead $s \bmod k > 0$, then the average number of replicas on each unfilled child is $\frac{s}{k} \notin \ints$. To attain this average using integer values, values both above and below $\frac{s}{k}$ are needed. However, since the unfilled children must be balanced, whatever values selected must have absolute difference at most 1. The only two integer values satisfying these requirements are $\ceilfrac{s}{k}$ and $\floorfrac{s}{k}$. But $\floorfrac{s}{k} = \floorfrac{s - 1}{k}$ when \mbox{$s \bmod k > 0$}. \qed
\end{proof}
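As a concrete instance of Theorem \ref{thm-two-values}, take $s = r(u) - L = 7$ replicas and $k = 3$ unfilled children: the two candidate loads are $\ceilfrac{7}{3} = 3$ and $\floorfrac{6}{3} = 2$, and indeed the balanced splits $(3,2,2)$ for $s = 7$ and $(2,2,2)$ for $s - 1 = 6$ use only these two values.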
\subsection{Conquer Phase}\label{sec-conquer}
Once the recursive call completes, we combine the results from each of the children to achieve the lexicographic minimum overall. Our task in this phase is to select $(r(u) - L) \bmod k$ unfilled children on which $\ceilfrac{r(u)-L}{k}$ replicas will be placed, and place $\floorfrac{r(u) - L - 1}{k}$ replicas on the remaining unfilled children. We need to do this in such a way that the resulting placement is lexicominimum. Recall also that we must return two values, one for $r(u)$ and another for $r(u) - 1$. We show how to obtain a solution in the $r(u) - 1$ case using a greedy algorithm. A solution for $r(u)$ can easily be obtained thereafter.
In this section, when two vectors are compared or summed, we are implicitly making use of an $O(\repfact)$ function for comparing two vectors of length $\repfact$ in lexicographic order.
Let $\vec{a}_i$ (respectively $\vec{b}_i$) represent the lexicominimum value of $\vec{f}(P)$ where $P$ is any placement of $\floorfrac{r(u)- L -1}{k}$ (respectively $\ceilfrac{r(u) - L}{k}$) replicas on child $i$. Recall that $\vec{a}_i, \vec{b}_i \in \natnum^{\repfact + 1}$, and are available as the result of the recursive call. We solve the optimization problem by encoding the decision to take $\vec{b}_i$ over $\vec{a}_i$ as a decision variable $x_i \in \{0,1\}$, for which either $x_i = 0$ if $\vec{a}_i$ is selected, or $x_i = 1$ if $\vec{b}_i$ is selected. The problem can then be described as an assignment of values to $x_i$ according to the following system of constraints, in which all arithmetic operations are performed point-wise.
\begin{equation}\label{eqn-greedy-constraints}
\min \displaystyle\sum_{i} \vec{a}_i + (\vec{b}_i - \vec{a}_i)x_i, ~~~ \text{subj. to: } \displaystyle\sum_{i}x_i = (r(u) - L) \bmod k.
\end{equation}
An assignment of $x_i$ which satisfies the requirements in (\ref{eqn-greedy-constraints}) can be found by computing $\vec{b}_i - \vec{a}_i$ for all $i$, and greedily assigning $x_i = 1$ to those $i$ which have the $(r(u) - L) \bmod k$ smallest values of $\vec{b}_i - \vec{a}_i$. This is formally stated as
\begin{theorem}\label{thm-greedy-system}
Let $\pi := (\pi_1,\pi_2,...,\pi_k)$ be a permutation of $\{1,2,...,k\}$ such that:
$$\vec{b}_{\pi_1} - \vec{a}_{\pi_1} \lleq \vec{b}_{\pi_2} - \vec{a}_{\pi_2} \lleq ... \lleq \vec{b}_{\pi_k} - \vec{a}_{\pi_k}~.$$
If vector $\vec{x} = \langle x_1, ..., x_k\rangle$ is defined according to the following rules: set $x_{\pi_i} = 1$ iff \mbox{$i < (r(u) - L) \bmod k$}, else $x_{\pi_i} = 0$, then $\vec{x}$ is an optimal solution to (\ref{eqn-greedy-constraints}).
\end{theorem}
The following Lemma greatly simplifies the proof of Theorem \ref{thm-greedy-system}.
\begin{lemma}\label{lem-logroup}
$\langle\ints^n, +\rangle$ forms a linearly-ordered group under $\lleq$. In particular, for any $\vec{x}, \vec{y}, \vec{z} \in \ints^n, \vec{x} \lleq \vec{y} \implies \vec{x} + \vec{z} \lleq \vec{y} + \vec{z}$.
\end{lemma}
A straightforward proof of Lemma \ref{lem-logroup} can be found in the appendix.
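As a concrete instance of Lemma \ref{lem-logroup}, take $\vec{x} = \langle 1, 5 \rangle$, $\vec{y} = \langle 2, 0 \rangle$ and $\vec{z} = \langle 3, 3 \rangle$: then $\vec{x} \lleq \vec{y}$, and indeed $\vec{x} + \vec{z} = \langle 4, 8 \rangle \lleq \langle 5, 3 \rangle = \vec{y} + \vec{z}$, since adding $\vec{z}$ to both sides leaves the deciding (first) coordinate comparison unchanged.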
\begin{proof}[Proof of Theorem \ref{thm-greedy-system}]
First, notice that a solution to (\ref{eqn-greedy-constraints}) which minimizes the quantity $\sum_i (\vec{b}_i - \vec{a}_i) x_i$ also minimizes the quantity $\sum_i \vec{a}_i + (\vec{b}_i - \vec{a}_i)x_i.$ It suffices to minimize the former quantity, which can be done by considering only those values of $(\vec{b}_i - \vec{a}_i)$ for which $x_i = 1$. For convenience, we consider $\vec{x}$ to be the characteristic vector of a set $S \subseteq \{1,...,k\}$. We show that no other set $S'$ can yield a characteristic vector $\vec{x}'$ which is strictly better than $\vec{x}$ as follows.
Let $\alpha := (r(u) - L) \bmod k$, and let $S := \{\pi_1, ..., \pi_{\alpha - 1} \}$ be the first $\alpha - 1$ entries of $\pi$ taken as a set. Suppose that there is some $S'$ representing a feasible assignment $\vec{x}'$ which is a strictly better solution than $\vec{x}$; that is, $S' \subseteq \{1, ..., k\}$ with $|S'| = \alpha - 1$ and $S' \neq S$. Since $S' \neq S$ and $|S'| = |S|$, we have that $S - S' \neq \emptyset$ and $S' - S \neq \emptyset$. Let $i \in S-S'$ and $j \in S' - S$. We claim that the placement $S^\ast = (S' - \{j\}) \cup \{i\}$ is at least as good. Specifically,
\begin{equation}\label{eqn-claim01}
\sum_{\ell \in S^\ast} (\vec{b}_\ell - \vec{a}_\ell) \lleq \sum_{m \in S'} (\vec{b}_m - \vec{a}_m)~.
\end{equation}
which implies that replacing a single element in $S'$ with one from $S$ does not cause the quantity minimized in (\ref{eqn-greedy-constraints}) to increase.
To prove (\ref{eqn-claim01}) note that \mbox{$j \notin S$ and $i \in S \implies (\vec{b}_i - \vec{a}_i) \lleq (\vec{b}_j - \vec{a}_j)$.} We now apply Lemma \ref{lem-logroup}, setting $\vec{x} = (\vec{b}_i - \vec{a}_i)$, $\vec{y} = (\vec{b}_j - \vec{a}_j)$, and \mbox{$\vec{z} = \sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell)$}. This yields
$$\sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_i - \vec{a}_i) \lleq
\sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_j - \vec{a}_j)~.$$
But since $S^\ast - \{i\} = S' - \{j\}$, we have that
\begin{equation}\label{eqn-claim02}
\sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_i - \vec{a}_i) \lleq
\sum_{m \in (S' - \{j\})} (\vec{b}_m - \vec{a}_m) + (\vec{b}_j - \vec{a}_j)~.
\end{equation}
Clearly, (\ref{eqn-claim02}) $\implies$ (\ref{eqn-claim01}), thereby proving (\ref{eqn-claim01}).
This shows that any solution which is not $S$ can be modified to swap in one extra member of $S$ without increasing the quantity minimized in (\ref{eqn-greedy-constraints}). By induction, it is possible to include every element from $S$, until $S$ itself is reached. Therefore, $\vec{x}$ is an optimal solution to (\ref{eqn-greedy-constraints}). \qed
\end{proof}
In the algorithm, we find an optimal solution to (\ref{eqn-greedy-constraints}) by assigning $\ceilfrac{r(u) - L}{k}$ replicas to those children where $i$ is such that \mbox{$1 \leq i < (r(u) - L) \bmod k$}, and $\floorfrac{r(u) - L - 1}{k}$ replicas to those remaining. To do this, we find the unfilled child having the $((r(u) - L) \bmod k)^{th}$ smallest value of $\vec{b}_i - \vec{a}_i$ using linear-time selection, and use the partition procedure from quicksort to find those children having values below the selected child. This takes time $O(k\repfact)$ at each node.
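For concreteness, this selection can be sketched in Python as follows (ours; plain sorting replaces linear-time selection, costing $O(k\repfact \log k)$ instead of $O(k\repfact)$):
\begin{verbatim}
def conquer_select(a_vecs, b_vecs, alpha):
    """Choose which unfilled children receive the larger load.

    a_vecs[i], b_vecs[i] -- optimal failure aggregates of child i for
                            the smaller / larger replica count, given
                            as equal-length lists, most significant
                            entry first
    alpha                -- number of children that must take b_i
    Returns the set of chosen child indices.
    """
    def diff(i):
        # b_i - a_i, computed entry-wise; Python compares lists
        # lexicographically, matching the ordering used in the text
        return [b - a for b, a in zip(b_vecs[i], a_vecs[i])]

    order = sorted(range(len(a_vecs)), key=diff)
    return set(order[:alpha])
\end{verbatim}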
At the end of the conquer phase, we compute and return the sum\footnote{In the mentioned sum we assume for notational convenience, that the vectors have been indexed in increasing order of $\vec{b}_i - \vec{a}_i$, although the algorithm performs no such sorting.}
\begin{equation}\label{eqn-recursive-sum}
\sum_{i \,<\, (r(u) - L )\bmod k}\vec{b}_i + \sum_{i \,\geq\, (r(u) - L) \bmod k} \vec{a}_i + \sum_{j \,:\, \text{filled}} \vec{f}(P_j) + \vec{1}_{r(u)-1},
\end{equation} where $P_j$ is the placement of $\ell_j$ replicas on child $j$ and $\vec{1}_{r(u)-1}$ is a vector of length $\rho$ having a one in entry $r(u)-1$ and zeroes everywhere else. The term $\vec{1}_{r(u)-1}$ accounts for the failure number of $u$. This sum gives the value of an optimal placement of size $r(u) - 1$. Note there are $k+1$ terms in the sum, each of which is a vector of length at most $\rho + 1$. Both computing the sum and performing the selection take $O(k\repfact)$ time at each node, yielding $O(n\repfact)$ time overall.
We have only focused upon computing the \textit{value} of the optimal solution. The solution itself can be recovered easily by storing the decisions made during the conquer phase at each node, and then combining them to output an optimal placement.
\section{An $O(n + \repfact^2)$ Algorithm}
\label{sec-improvements}
An $O(n + \repfact^2)$ running time can be achieved by an $O(n)$ divide phase, and an $O(\repfact^2)$ conquer phase. The divide phase already takes at most $O(n)$ time overall, so to achieve our goal, we concern ourselves with optimizing the conquer phase. The conquer phase can be improved upon by making two changes. First, we modify the vector representation used for return values. Second, we transform the structure of the tree to avoid pathological cases.
In the remainder of the paper, we will use array notation to refer to entries of vectors. For a vector $\vec{v}$, the $k^{th}$ entry of $\vec{v}$ is denoted $\vec{v}[k]$.
\subsubsection{Compact Vector Representation}
Observe that the maximum failure number returned from child $c_i$ is $r(c_i)$. This along with Property \ref{lem-desc} implies that the vector returned from $c_i$ will have a zero in indices $\repfact, \repfact-1, ..., r(c_i) +1$. To avoid wasting space, we modify the algorithm to return vectors of length only $r(c_i) + 1$. At each node, we then compute (\ref{eqn-recursive-sum}) by summing entries in increasing order of their index. Specifically, to compute $\vec{v}_1 + \vec{v}_2 + ... + \vec{v}_k$, where vector $\vec{v}_j$ has length $r(c_j) + 1$, we first allocate an empty vector $\vec{w}$, of size $\max_j r(c_j) + 1$, to store the result of the sum. Then, for each vector $\vec{v}_j$, we set $\vec{w}[i] \gets \vec{w}[i] + \vec{v}_j[i]$ for indices $i$ from $0$ up to $r(c_j)$. After all vectors have been processed, $\vec{w} = \vec{v}_1 + ... + \vec{v}_k$. This algorithm takes \mbox{$r(c_1) + ... + r(c_k) = O(r(u))$} time.
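As an illustration (ours; hypothetical names), the compact summation reads:
\begin{verbatim}
def sum_compact(vectors):
    """Sum failure-aggregate vectors of differing lengths.

    vectors[j][i] -- count of nodes with failure number i under
                     child j; vectors[j] has length r(c_j) + 1
    Total work is proportional to sum_j r(c_j), i.e. O(r(u)).
    """
    w = [0] * max(len(v) for v in vectors)
    for v in vectors:
        for i, count in enumerate(v):
            w[i] += count
    return w
\end{verbatim}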
Using smaller vectors also implies that the $((r(u) - L) \bmod k)^{th}$ best child is found in $O(r(u))$ time, since each unfilled child returns a vector of size at most $O(\frac{r(u)}{k})$, and there are only $k$ unfilled children to compare.
With these modifications the conquer phase takes $O(r(u))$ time at node $u$.
\subsubsection{Tree Transformations}
Note that for each $i$, nodes at depth $i$ have $O(\repfact)$ replicas placed on them in total. We can therefore achieve an $O(\repfact^2)$ time conquer phase overall by ensuring that the conquer phase only needs to occur in at most $O(\repfact)$ levels of the tree. To do this, we observe that when $r(u) = 1$, any leaf with minimum depth forms an optimal placement. Recursive calls can therefore be stopped once $r(u) = 1$.
To ensure that $r(u) = 1$ after $O(\repfact)$ levels, we contract paths on which all nodes have degree two into a single pseudonode during the preprocessing phase. The length of this contracted path is stored in the pseudonode, and is accounted for when computing the sum. This suffices to ensure $r(u)$ decreases by at least one at each level, yielding an $O(n + \repfact^2)$ algorithm.
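A minimal Python sketch of this contraction is given below (ours; the tree is represented by a hypothetical child-list dictionary, and each pseudonode records the length of the chain it replaces):
\begin{verbatim}
def contract_unary_paths(children, root):
    """Contract maximal degree-two paths into pseudonodes.

    children -- dict mapping each node to the list of its children
    Returns a new child map over pseudonodes (node, chain_length),
    together with the pseudonode standing for the root.
    """
    new_children = {}

    def walk(u):
        length = 0
        while len(children.get(u, [])) == 1:  # slide down the chain
            u = children[u][0]
            length += 1
        pseudo = (u, length)
        new_children[pseudo] = [walk(c) for c in children.get(u, [])]
        return pseudo

    return new_children, walk(root)
\end{verbatim}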
\section{An $O(n + \repfact \log \repfact)$ Algorithm}
\label{sec-best}
In this section, we extend ideas about tree transformation from the last section to develop an algorithm in which the conquer phase only needs to occur in at most $O(\log \repfact)$ levels. We achieve this by refining the tree transformations described in Section \ref{sec-improvements}.
To ensure that there are only $O(\log \repfact)$ levels in the tree, we transform the tree so as to guarantee that as the conquer phase proceeds down the tree, $r(u)$ decreases by at least a factor of two at each level. This happens automatically when there are two or more unfilled children at each node, since to balance the unfilled children, at most $\ceilfrac{r(u) - L}{2}$ replicas will be placed on each of them. Problems can therefore only arise when a tree has a path of nodes each of which has a single, unfilled child. We call such a path a \textit{degenerate chain}. By detecting and contracting all such degenerate chains, we can achieve an $O(\repfact \log \repfact)$ conquer phase.
Fig. \ref{fig-degenerate-unfilled-case} illustrates a degenerate chain. In this figure, each $T_i$ with $1 \leq i \leq t - 1$ is the set of all descendant nodes of $v_i$ which are filled. Thus, $v_1, ..., v_{t-1}$ each have only a single unfilled child (since each $v_i$ has $v_{i+1}$ as a child). In contrast, node $v_t$ has at least two unfilled children. It is easy to see that if the number of leaves in each $T_i$ is $O(1)$ then $t$, the length of the chain, can be as large as $O(\repfact)$. This would imply that there can be $O(\repfact)$ levels in the tree where the entire conquer phase is required. To remove degenerate chains, we contract nodes $v_1, ..., v_{t-1}$ into a single pseudonode $w$, as in Fig. \ref{fig-contracted-nodes}. However, we must take care to ensure that the pair of vectors which pseudonode $w$ returns takes into account contributions from the entire contracted structure. We will continue to use $v_i$ and $T_i$ throughout the remainder of this section to refer to nodes in a degenerate chain.
To find and contract degenerate chains, we add an additional phase, the \textit{transform} phase, which takes place between the divide and conquer phases. Recall that after the divide phase, the set of filled and unfilled children are available at each node. Finding nodes in a degenerate chain is therefore easily done via a breadth-first search. We next consider what information must be stored in the pseudonode, to ensure that correct results are maintained.
\begin{figure}
\begin{subfigure}{0.60\textwidth}
\centering
\input{degenerate-case}
\caption{A degenerate chain.}\label{fig-degenerate-unfilled-case}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\centering
\raisebox{1cm}{ \input{contracted-nodes} }
\caption{Contracted pseudonode.}\label{fig-contracted-nodes}
\end{subfigure}
\caption{Illustration of a degenerate chain in which each $v_i$ where $1 \leq i \leq t-1$ represents a node which has a single unfilled child. All filled descendants of node $v_i$ are collectively represented as $T_i$. In the figure on the right, nodes $v_1, ..., v_{t-1}$ have been contracted into pseudonode $w$.}
\end{figure}
Let $(\vec{a_w}, \vec{b_w})$ be the pair of values which will be returned by pseudonode $w$ at the end of the conquer phase. In order for the transformation to be correct, the vectors $(\vec{a_w}, \vec{b_w})$ must be the same as those which would have been returned at node $v_1$ had no transformation occurred. To ensure this, we must consider and include the contribution of each node in the set \mbox{$T_1 \cup ... \cup T_{t-1} \cup \{v_1, ..., v_{t-1}\}$}. It is easy to see that the failure numbers of nodes in $\{v_1, ..., v_{t-1}\}$ depend only upon whether $r(v_t)$ or $r(v_t) - 1$ replicas are placed on node $v_t$, while the filled nodes in sets $T_1, ..., T_{t-1}$ have no such dependency. Observe that if $r(v_t)$ replicas are placed on $v_t$, then $r(v_i)$ replicas are placed at each node $v_i$. If instead $r(v_t) - 1$ replicas are placed, then $r(v_i) - 1$ replicas are placed at each $v_i$. Since values of $r(v_i)$ are available at each node after the divide phase, enough information is present to contract the degenerate chain before the conquer phase is performed.
The remainder of this section focuses on the technical details needed to support our claim that the transform phase can be implemented in time \mbox{$O(n + \repfact \log \repfact)$} overall. Let $S_w := T_1 \cup ... \cup T_{t-1} \cup \{v_1, ..., v_{t-1}\}$, and let the contribution of nodes in $S_w$ to $\vec{a_w}$ and $\vec{b_w}$ be given by vectors $\vec{a}$ and $\vec{b}$ respectively. The transform phase is then tasked with computing $\vec{a}$ and $\vec{b}$, and contracting the degenerate chain. We will show that this can be done in time $O(|S_w| + r(v_1))$ for each pseudonode $w$.
Pseudocode for the transform phase is given in Algorithm \ref{alg-transf}. The transform phase is started at the root of the tree by invoking \transf{$root,false, \repfact$}. \transf is a modified recursive breadth-first search. As the recursion proceeds down the tree, each node is tested to see if it is part of a degenerate chain (lines \ref{line-bottom} and \ref{line-test2}). If a node is not part of a degenerate chain, the call continues on all unfilled children (line \ref{line-pass-on}). The first node ($v_1$) in a degenerate chain is marked by passing down $chain \gets true$ at lines \ref{line-mark-ru} and \ref{line-mark-rv1}. The value of $r(v_1)$ is also passed down to the bottom of the chain at lines \ref{line-mark-ru} and \ref{line-mark-rv1}. Once the bottom of the chain (node $v_t$) has been reached, the algorithm allocates memory for three vectors, $\vec{a}, \vec{b}$ and $\vec{f}$, each of size $r(v_1)+1$ (line \ref{line-alloc}). These vectors are then passed up through the entire degenerate chain (line \ref{line-return}), along with node $u$, whose use will be explained later. When a node $u$ in a degenerate chain receives $\vec{a}, \vec{b}$, and $\vec{f}$, $u$ adds its contribution to each vector (lines \ref{line-contribstart}-\ref{line-contribend}). The contribution of node $u$ consists of two parts. First, the contribution of the filled nodes is added to $\vec{f}$ by invoking a special \filled subroutine (see Algorithm \ref{alg-filled}) which computes the sum of the failure aggregates of each filled child of $u$ (lines \ref{line-contribstart}-\ref{line-filledend}). Note that \filled uses pass-by-reference semantics when passing in the value of $\vec{f}$. The contribution of node $u$ itself is then added, by summing the number of leaves in all of the filled children, and the number of replicas on the single unfilled child, $v$ (lines \ref{line-ustart}-\ref{line-contribend}). By the time that the recursion reaches the start of the chain on the way back up (line \ref{line-chainback-to-start}), all nodes have added their contribution, and the pseudonode is created and returned (line \ref{line-pseudonodecreate}).
\begin{algorithm}[t]
\SetKwFunction{transf}{Transform}\SetKwFunction{mkPseudo}{Make-Pseudonode}
\SetKwProg{Fn}{Function}{begin}{end}
\Fn{\transf{$u, chain, r(v_1)$}}{
\If{$u$ has two or more unfilled children}{ \label{line-bottom}
\ForEach{child $c_i$ unfilled}{
\label{line-pass-on}$(-, -, -, x) \gets $\transf{$c_i, false, \bot$} \;
$c_i \gets x$ \label{line-update}\;
}
\lIf{$chain = false$} { \Return{$(\bot, \bot, \bot, u)$} \label{line-vt}}
\lElse(\tcp*[f]{$3\cdot O(r(v_1))$ time}){ \Return{$(\vec{0}_{r(v_1) + 1}, \vec{0}_{r(v_1)+1}, \vec{0}_{r(v_1)+1}, u)$} \label{line-alloc} }
}
\If{$u$ has one unfilled child, $v$}{ \label{line-test2}
\If{$chain = false$} {
\tcp{pass $r(v)$ as max vector length}$(\vec{a},\vec{b},\vec{f},x) \gets$ \transf{$v, true, r(v)$} \label{line-mark-ru}\;
}\Else{
$(\vec{a},\vec{b},\vec{f},x) \gets$ \transf{$v, true, r(v_1)$} \label{line-mark-rv1}\;
}
\ForEach{filled child $c_i$} { \label{line-contribstart}
\filled{$c_i, \vec{f}$} \tcp*[r]{$O(n_i)$ time} \label{line-filledend}
}
$k \gets \sum_i \ell_i + r(v) - 1$\label{line-ustart}\;
$\vec{a}[k+1] \gets \vec{a}[k+1] + 1$\;
$\vec{b}[k] \gets \vec{b}[k] + 1$\label{line-contribend}\;
\If{$chain = false$}{ \label{line-chainback-to-start}
$x \gets $ \mkPseudo{$\vec{a}, \vec{b}, \vec{f}, x$} \label{line-pseudonodecreate}
}
\Return{$(\vec{a},\vec{b},\vec{f},x)$}\label{line-return}
}
}
\caption{Transform phase}\label{alg-transf}
\end{algorithm}
The transformation takes place as \transf is returned back up the tree. At the end of the degenerate chain, node $v_t$ is returned (lines \ref{line-vt}-\ref{line-alloc}), and this value is passed along the length of the entire chain (line \ref{line-return}), until reaching the beginning of the chain, where the pseudonode is created and returned (line \ref{line-pseudonodecreate}). When the beginning of the chain is reached, the parent of $v_1$ updates its reference (line \ref{line-update}) to refer to the newly created pseudonode. At line \ref{line-update} note that if $c_i$ was \textit{not} the beginning of a degenerate chain, $x = c_i$ and the assignment has no effect (see lines \ref{line-vt}-\ref{line-alloc}).
We provide pseudocode for the \filled and \mkPseudo subroutines in Algorithms \ref{alg-filled} and \ref{alg-mkpseudo}. The \mkPseudo subroutine runs in $O(1)$ time. It is easy to see that the \filled routine runs in $O(n_i)$ time, where $n_i$ is the number of nodes in the subtree rooted at child $c_i$. The \transf routine therefore takes $O(|T_i|)$ time to process a single node $v_i$. The time needed for \transf to process an entire degenerate chain is therefore $O(|S_w|) + 3\cdot O(r(v_1))$, where the $3\cdot O(r(v_1))$ term arises from allocating memory for vectors $\vec{a}$, $\vec{b}$ and $\vec{f}$ at the last node of the chain.
When we sum this time over all degenerate chains, we obtain a running time of $O(n + \repfact \log \repfact)$ for the transform phase. To reach this result, we examine the sum of $r(v_1)$ for all pseudonodes at level $i$. Since there are at most $\repfact$ replicas at each level $i$, this sum can be at most $O(\repfact)$ in any level. There are only $O(\log \repfact)$ levels where $r(u) > 1$ after degenerate chains have been contracted; thus, pseudonodes can only be present in the first $O(\log \repfact)$ levels of the tree. Therefore the $3\cdot O(r(v_1))$ term sums to $O(\repfact \log \repfact)$ overall. Since $|S_w|$ clearly sums to $O(n)$ overall, the transform phase takes at most $O(n + \repfact \log \repfact)$ time.
Finally, after the transformation has completed, we can ensure that the value of $r(u)$ decreases by a factor of two at each level. This implies that there are only $O(\log \repfact)$ levels where the conquer phase needs to be run in its entirety. Therefore, the conquer phase takes $O(\repfact \log \repfact)$ time overall. When combined with the $O(n)$ divide phase and the $O(n + \repfact \log \repfact)$ transform phase, this yields an $O(n + \repfact \log \repfact)$ algorithm for solving replica placement in a tree.
\begin{algorithm}[t]
\SetKwProg{Fn}{Function}{begin}{end}
\Fn{\filled{$u$, $\vec{f}$}}{
\eIf{$u$ is a leaf}{
$\vec{f}[0] \gets \vec{f}[0] + 1$\;
\Return \;
}{
\ForEach{child $c_i$}{
\filled{$c_i, \vec{f}$}
}
$a \gets \sum_i \ell_i$ \;
$\vec{f}[a] \gets \vec{f}[a] + 1$\;
\Return \;
}
}
\caption{Computes failure aggregate of filled nodes}\label{alg-filled}
\end{algorithm}
\begin{algorithm}[t]
\SetKwProg{Fn}{Function}{begin}{end}
\Fn{\mkPseudo{$\vec{a}$, $\vec{b}$, $\vec{f}$, $x$}}{
allocate a new node $node$\;
$node.\vec{a} \gets \vec{a} + \vec{f}$\;
$node.\vec{b} \gets \vec{b} + \vec{f}$\;
$node.child \gets x$\;
\Return $node$
}
\caption{Creates and returns a new pseudonode}\label{alg-mkpseudo}
\end{algorithm}
\section{Conclusion}\label{sec-conclusion}
In this paper, we formulate the replica placement problem and show that it can be solved by a greedy algorithm in $O(n^2 \repfact)$ time. In search of a faster algorithm, we prove that any optimal placement in a tree must be balanced. We then exploit this property to give an $O(n\repfact)$ algorithm for finding such an optimal placement. The running time of this algorithm is then improved, yielding an $O(n + \repfact \log \repfact)$ algorithm. An interesting next step would consist of proving a lower bound for this problem, and seeing how our algorithm compares. In future work we plan to consider replica placement on additional classes of graphs, such as special cases of bipartite graphs.
We would like to acknowledge insightful comments from S. Venkatesan and Balaji Raghavachari during meetings about results contained in this paper, as well as comments from Conner Davis on a draft version of this paper.
\section{Introduction} \label{s:intro}
From the deployment experiences of long term evolution (LTE) relays in operators' networks, the most powerful competitor for the relay is the repeater, which can be viewed as an amplify-and-forward relay. Compared with the relay specified in 3GPP Release 10 \cite{wg2010evolved}, repeaters have the advantage of low cost, despite raising the noise floor.
Another advantage is that deploying repeaters in the network does not require significant software or hardware updates to the core network and base stations, implying relatively low implementation complexity.
One of the most important application scenarios for relays is to provide wireless backhaul when there is no fiber connection or it is hard to deploy fiber.
However, based on the fiber coverage data in China published by the Ministry of Industry and Information Technology (MIIT) of China, by the year 2019, the fiber and LTE coverage has already reached 98\% across administrative villages throughout China.
This inevitably reduces the demand for relay deployment.
Another issue is that relays need a power supply, and it is often difficult to provide electric power in places where optical fiber is difficult to deploy.
In other application scenarios, for example, in hot spot areas such as stadiums or exhibition halls, where relays are expected to provide throughput enhancement, the difficulty of providing a power supply is no less than that of providing an optical fiber connection.
If the power is available, it would be easier for operators to deploy a node in the form of a small cell or Remote Radio Unit (RRU) than a relay to achieve both coverage and capacity enhancement, without making substantial changes to the network.
As a result, in addition to performance, power supply is one of the most important factors limiting the deployment of technologies such as relays and repeaters.
On the other hand, RIS is an emerging technique employing a metasurface to reflect the signal from the source node to the destination node without consuming any energy \cite{wu2019towards,zhang2020capacity,wu2019intelligent}.
Not only the spectral efficiency but also the energy efficiency can be improved through RIS, which can therefore be regarded as a technology largely immune to power supply limitations.
Considering its low cost and low power features \cite{subrt2012intelligent}, it may be possible to deploy RIS with passive or semi-passive control, free from wired connections.
However, the research comparing RIS with relays is still at an early stage \cite{bjornson2019intelligent,boulogeorgos2020performance,ntontin2020rate}.
In \cite{bjornson2019intelligent}, RIS-assisted single input single output (SISO) systems were compared with classic decode-and-forward (DF) relaying, ignoring the small scale fading.
RIS-assisted SISO systems were compared with amplify-and-forward (AF) relaying wireless systems in \cite{boulogeorgos2020performance}.
\cite{ntontin2020rate} considered the insertion losses and power consumption of the electronic components associated with the deployed nodes when evaluating energy efficiency.
In this paper, we compare RIS with its counterparts, FDR and HDR, in terms of system models, performance, and control.
The basic question we would like to answer is what the fundamental difference is between RIS and relays, from both theoretical and industrial viewpoints.
We first analyze the system models and end-to-end throughput maximization problem for RIS-aided, FDR-aided and HDR-aided multiple input multiple output (MIMO) systems.
Then an alternating weighted minimum mean square error (MMSE) algorithm is proposed for finding approximate solutions.
Some simulation results are provided to demonstrate its efficacy.
Finally, we discuss the comparisons of RIS and relays.
{\it Notation:}
$\mathbb{C}^{N \times M }$ represents the set of complex ${N \times M}$-matrix.
The conjugate transpose is denoted by $(\cdot)^H$.
$ \mathcal{CN}$(${\bm\mu}$, $\mathbf{\Sigma}$) denotes the circularly symmetric complex Gaussian distribution with mean ${\bm\mu}$ and covariance matrix $\mathbf{\Sigma}$.
$\mathrm{tr}(\cdot)$ represents the trace of matrix.
$\mathbf{I}_N $ denotes the $N\times N$ identity matrix.
$\mathbf{X} \succeq \mathbf{0}$ means that $\mathbf{X}$ is a positive semidefinite matrix.
\section{System Model and Problem Formulation} \label{s:model}
In this section, we describe the system models and formulate the optimization problems for MIMO communication system assisted by RIS, full duplex relay and half duplex relay, respectively.
\subsection{Reconfigurable Intelligent Surface}
Consider a RIS-aided MIMO communication system where the transmitter is equipped with $M$ antennas and the receiver is equipped with $N$ antennas as shown in Fig. \ref{fig:system_RIS}. The RIS is equipped with $K$ reflecting elements.
The channel between the transmitter to the receiver is denoted as $\mathbf{H}_d \in \mathbb{C}^{N \times M} $.
Denote $\mathbf{H}_1 \in \mathbb{C}^{K \times M} $ as the channel matrix from the transmitter to the RIS, $\mathbf{H}_2 \in \mathbb{C}^{N \times K} $ as the channel matrix from the RIS to the receiver.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{channel_model}
\end{center}
\caption{An illustration of a RIS-aided MIMO communication system. }
\label{fig:system_RIS}
\end{figure}
The transmitted signal ${\bm x} \in \mathbb{C}^{M \times 1}$ at the transmitter can be represented by
\begin{align} \label{eq:xvs}
{\bm x}= \mathbf{V} {\bm s},
\end{align}
where $\mathbf{V} \in \mathbb{C}^{M \times l} $ is the transmit beamforming matrix, ${\bm s} \in \mathbb{C}^{l \times 1}$ is the transmit data,
$l$ is the number of data streams, assuming $\mathbb{E}\{{\bm s}{\bm s}^H\}=\mathbf{I}$.
Consider the power constraint $\mathrm{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s$, where $P_s$ is the power budget of the transmitter.
The received signal ${\bm y} \in \mathbb{C}^{N \times 1}$ at the receiver is given by
\begin{align}
{\bm y}= (\mathbf{H}_d +\mathbf{H}_2 \mathbf{\Phi} \mathbf{H}_1) \mathbf{V} {\bm s} + {\bm n}_1,
\label{eq:y_ris}
\end{align}
where
$ {\bm n}_1 \sim \mathcal{CN}(0, \sigma_D^2 \mathbf{I}_N) $ is an additive white Gaussian noise (AWGN) vector with zero mean and covariance matrix $\sigma_D^2 \mathbf{I}_N$,
and the diagonal matrix $\mathbf{\Phi} = \text{diag} (\phi_1, \phi_2,...,\phi_K)$ denotes the RIS reflection coefficients.
The estimated signal $\hat{\bm s}$ at the receiver is given by
\begin{align}
\hat{\bm s} = \mathbf{U}^H {\bm y},
\end{align}
where $\mathbf{U} \in \mathbb{C}^{N \times l}$ is the receive beamforming matrix.
The throughput maximization of the RIS-aided MIMO communication system can be given by
\begin{subequations} \label{P1}
\begin{align}
\max \limits_{\mathbf{V},\mathbf{\Phi}}~ & \log \det \left( \mathbf{I}_N + \mathbf{H}\mathbf{V}\mathbf{V}^H \mathbf{H}^H \mathbf{R}_n^{-1} \right)\\
\text{s.t.} \quad
& \mathrm{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,\\
& | \phi_k|\leq 1,
\end{align}
\end{subequations}
where $\mathbf{H}=\mathbf{H}_d +\mathbf{H}_2 \mathbf{\Phi} \mathbf{H}_1$ and $\mathbf{R}_n = \sigma_D^2 \mathbf{I}_N$.
It can be seen that the objective function of problem \eqref{P1} involves the product of $\mathbf{\Phi}$ and $\mathbf{V}$ and is therefore nonconvex. Thus problem \eqref{P1} is a nonconvex optimization problem, whose globally optimal solution is hard to compute.
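As a quick sanity check on the model (ours; randomly drawn channels, unit noise power, and an arbitrary feasible pair $(\mathbf{\Phi}, \mathbf{V})$, so all names are illustrative), the objective of problem \eqref{P1} can be evaluated numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, K, l = 4, 4, 16, 2   # Tx/Rx antennas, RIS elements, streams
sigma2, P_s = 1.0, 1.0

cplx = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)
H_d, H_1, H_2 = cplx(N, M), cplx(K, M), cplx(N, K)

phi = np.exp(1j * rng.uniform(0, 2 * np.pi, K))    # |phi_k| = 1
V = cplx(M, l)
V *= np.sqrt(P_s / np.trace(V @ V.conj().T).real)  # tr(VV^H) = P_s

H = H_d + H_2 @ np.diag(phi) @ H_1                 # effective channel
rate = np.log2(np.linalg.det(
    np.eye(N) + H @ V @ V.conj().T @ H.conj().T / sigma2)).real
print(f"throughput for this (Phi, V): {rate:.2f} bit/s/Hz")
\end{verbatim}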
\subsection{Full Duplex Relay}
As shown in Fig. \ref{fig:system_FDR}, we consider a MIMO communication system assisted by a FDR that suffers from self-interference, where $\mathbf{H}_s \in \mathbb{C}^{L \times L}$ is the self-interference channel matrix.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{channel_model_FDR}
\end{center}
\caption{An illustration of a FDR-aided MIMO communication system. }
\label{fig:system_FDR}
\end{figure}
The transmitted signal ${\bm x} $ at the transmitter is the same as \eqref{eq:xvs}.
The transmitted signal ${\bm x}_r$ at the FDR is given by
\begin{align} \label{eq_x_r}
{\bm x}_r = \mathbf{G} (\mathbf{H}_1 {\bm x}+\mathbf{H_s} {\bm x}_r + {\bm n}_1),
\end{align}
where $\mathbf{G} \in \mathbb{C}^{L \times L} $ is the FDR transmit matrix.
According to \eqref{eq_x_r}, we can obtain
\begin{align} \label{eq_x_r_2}
{\bm x}_r = (\mathbf{I}_L- \mathbf{G}\mathbf{H}_s)^{-1}\mathbf{G}(\mathbf{H}_1 {\bm x}+{\bm n_1}).
\end{align}
Defining $\mathbf{F}=(\mathbf{I}_L-\mathbf{G}\mathbf{H}_s)^{-1}\mathbf{G}$,
the received signal ${\bm y}$ at the receiver is given by
\begin{align}
{\bm y}&=(\mathbf{H}_d+\mathbf{H}_2\mathbf{F}\mathbf{H}_1)\mathbf{V} { \bm s}+ \mathbf{H}_2\mathbf{F} {\bm n}_1+ {\bm n}_2.
\end{align}
The throughput maximization problem of the FDR-aided MIMO communication system is given by
\begin{subequations} \label{P_FDR}
\begin{align}
\max \limits_{\mathbf{V},\mathbf{F}}~ & \log\det\left(\mathbf{I}_N+ \mathbf{H}\mathbf{V}\mathbf{V}^H
\mathbf{H}^H\mathbf{R}_n^{-1}\right) \\
\text{s.t.} \quad
& \text{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,\\
& \text{tr}(\mathbf{F} \mathbf{D} \mathbf{F}^H)
\leq P_r,
\end{align}
\end{subequations}
where $\mathbf{R}_n=\sigma_R^2\mathbf{H}_2\mathbf{F}\mathbf{F}^H\mathbf{H}_2^H +\sigma_D^2\mathbf{I}_N$, and, for clarity, we define
$ \mathbf{H}=\mathbf{H}_d+ \mathbf{H}_2\mathbf{F}\mathbf{H}_1 $ and
$\mathbf{D}=\mathbf{H}_1\mathbf{V}\mathbf{V}^H\mathbf{H}_1^H +\sigma_R^2\mathbf{I}_L$.
\subsection{Half Duplex Relay}
For comparison, we consider the HDR-aided MIMO communication system shown in Fig. \ref{fig:system_HDR}.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{channel_model_HDR}
\end{center}
\caption{Illustration of a HDR-aided MIMO communication system. }
\label{fig:system_HDR}
\end{figure}
The received signal ${\bm y}$ is given by
\begin{align}
{\bm y}&=(\mathbf{H}_d+\mathbf{H}_2\mathbf{G}\mathbf{H}_1)\mathbf{V} { \bm s}+ \mathbf{H}_2\mathbf{G} {\bm n}_1+ {\bm n}_2,
\end{align}
where $\mathbf{G} \in \mathbb{C}^{L \times L} $ is the HDR transmit matrix.
For clarity, defining
$ \mathbf{H}=\mathbf{H}_d+ \mathbf{H}_2\mathbf{G}\mathbf{H}_1 $,
the corresponding throughput maximization problem is given by
\begin{subequations} \label{P_HDR}
\begin{align}
\max \limits_{\mathbf{V},\mathbf{G}}~ &\frac{1}{2}\log\det\left(\mathbf{I}_N+ \mathbf{H}\mathbf{V}\mathbf{V}^H\mathbf{H}^H\mathbf{R}_n^{-1}\right)\\
\text{s.t.} \quad
& \text{tr}(\mathbf{G}(\mathbf{H}_1 \mathbf{V}\mathbf{V}^H \mathbf{H}_1^H+\sigma_R^2\mathbf{I})\mathbf{G}^H)\leq2P_r, \\
&\text{tr}(\mathbf{V}\mathbf{V}^H)\leq2P_s,
\end{align}
\end{subequations}
where $\mathbf{R}_n=\sigma_R^2\mathbf{H}_2\mathbf{G}\mathbf{G}^H\mathbf{H}_2^H +\sigma_D^2\mathbf{I}_N$.
Since each end-to-end transmission requires two time slots, the objective function and constraints carry the factors $\frac{1}{2}$ and $2$, respectively \cite{kang2009capacity}.
\begin{remark}\label{remark_1}
From the system models, it can be observed that RIS can be regarded as a full-duplex MIMO relay without self-interference.
\end{remark}
\section{Alternating Weighted MMSE Approach }
\subsection{Reconfigurable Intelligent Surface}
Assuming the signal ${\bm s}$ is independent of the noise ${\bm n}_1$ in \eqref{eq:y_ris},
the MSE covariance matrix $\mathbf{E}$ can be represented by
\begin{align}
\mathbf{E}&=\mathbb{E}\{(\hat{{\bm s}}-{\bm s})(\hat{{\bm s}}-{\bm s})^H\} \nonumber \\
&=
(\mathbf{U}^H\mathbf{H}\mathbf{V-\mathbf{I}})
(\mathbf{U}^H\mathbf{H}\mathbf{V}-\mathbf{I})^H+\sigma_D^2 \mathbf{U}^H\mathbf{U}.
\end{align}
The MSE minimization problem is to solve
\begin{subequations} \label{P_MMSE}
\begin{align}
\min \limits_{\mathbf{V},\mathbf{\Phi},\mathbf{U}}~ & \mathrm{tr}(\mathbf{E}) \\
\text{s.t.} \quad
& \mathrm{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,\\
& | \phi_k|\leq 1.
\end{align}
\end{subequations}
Assuming that the transmit beamforming $\mathbf{V}$ and RIS reflection coefficients $\mathbf{\Phi}$ are fixed,
the MMSE receive beamforming matrix $\mathbf{U}$ is given by
\begin{align} \label{eq_U_mmse}
\mathbf{U}_{\text{mmse}}=(\mathbf{H}\mathbf{V}\mathbf{V}^H\mathbf{H}^H +\sigma_D^2\mathbf{I})^{-1}\mathbf{H}\mathbf{V}.
\end{align}
The corresponding MSE matrix $\mathbf{E}$ can be represented by
\begin{align} \label{eq_E_mmse}
\mathbf{E}_{\text{mmse}}&=\mathbf{I}-\mathbf{V}^H\mathbf{H}^H \mathbf{R}_n^{-1}
(\mathbf{H}\mathbf{V}\mathbf{V}^H\mathbf{H}^H\mathbf{R}_n^{-1} +\mathbf{I})^{-1}\mathbf{HV}
\end{align}
The weighted MSE minimization problem is given by
\begin{subequations} \label{P_WMMSE}
\begin{align}
\min \limits_{\mathbf{V},\mathbf{\Phi},\mathbf{U},\mathbf{W}}~ & \mathrm{tr}(\mathbf{WE}) - \log \det (\mathbf{W}) \label{P_WMMSE_obj}\\
\text{s.t.} \quad
& \mathrm{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,\\
& | \phi_k|\leq 1,
\end{align}
\end{subequations}
where $\mathbf{W}\succeq 0$ is a weight matrix.
\begin{lem} \label{lemma1}
The weighted MSE minimization problem \eqref{P_WMMSE} is equivalent to the throughput maximization problem \eqref{P1}.
\end{lem}
\begin{IEEEproof}
First, the optimal $\mathbf{U}$ and $\mathbf{E}$ of problem \eqref{P_WMMSE} are given in \eqref{eq_U_mmse} and \eqref{eq_E_mmse}, respectively.
When other variables are given, the objective function of problem \eqref{P_WMMSE} is convex with respect to (w.r.t) $\mathbf{W}$.
Therefore, we can obtain
$\mathbf{W}= \mathbf{E}_{\text{mmse}}^{-1}$ by checking the first-order optimality condition.
Substituting $\eqref{eq_U_mmse}$ and $\mathbf{W}$ into \eqref{P_WMMSE_obj},
we have
\begin{align}
\min \limits_{\mathbf{V},\mathbf{\Phi}}~ & - \log \det (\mathbf{E}_{\text{mmse}}^{-1})
\nonumber \\
=\max \limits_{\mathbf{V},\mathbf{\Phi}}~ & \log \det (\mathbf{E}_{\text{mmse}}^{-1})
\nonumber \\
\overset{\text{(a)}}{=} \max \limits_{\mathbf{V},\mathbf{\Phi}} ~ &
\log \det (\mathbf{I}+\mathbf{V}^H\mathbf{H}^H\mathbf{R}_n^{-1}\mathbf{HV}) \nonumber \\
\overset{\text{(b)}}{=} \max \limits_{\mathbf{V},\mathbf{\Phi}}~ & \log \det \left( \mathbf{I}_N + \mathbf{H}\mathbf{V}\mathbf{V}^H \mathbf{H}^H \mathbf{R}_n^{-1} \right),
\end{align}
where (a) comes from equality $\mathbf{I}-\mathbf{B}(\mathbf{AB}+\mathbf{I})^{-1}\mathbf{A}=(\mathbf{I}+\mathbf{BA})^{-1}$,
(b) comes from equality $\det(\mathbf{I+\mathbf{AB}})=\det(\mathbf{I+\mathbf{BA}})$.
\end{IEEEproof}
Since problem \eqref{P1} is nonconvex, it is difficult to tackle directly.
We therefore address problem \eqref{P_WMMSE} via alternating optimization.
The objective function of problem \eqref{P_WMMSE} is convex w.r.t each of the optimization variables
$\{\mathbf{V},\mathbf{\Phi},\mathbf{W},\mathbf{U}\}$.
When $\mathbf{V},\mathbf{\Phi}$ are given, the optimal $\mathbf{U}$ can be obtained from \eqref{eq_U_mmse}.
With \eqref{eq_E_mmse}, the weight matrix $\mathbf{W}$ is given by
\begin{align} \label{eq_W_mmse}
\mathbf{W}= \mathbf{E}_{\text{mmse}}^{-1}.
\end{align}
When $\mathbf{\Phi},\mathbf{U},\mathbf{W}$ are given, by substituting \eqref{eq_U_mmse} and \eqref{eq_E_mmse},
problem \eqref{P_WMMSE} can be reformulated as
\begin{subequations} \label{P_WMMSE_UWPhi}
\begin{align}
\min_{\mathbf{V}}~ & \text{tr}(\mathbf{V}^H\mathbf{H}^H\mathbf{U}\mathbf{W} \mathbf{U}^H\mathbf{H}\mathbf{V})
-\text{tr}(\mathbf{W}\mathbf{U}^H\mathbf{H}\mathbf{V}) \nonumber \\
& \quad -\text{tr}(\mathbf{W}\mathbf{V}^H\mathbf{H}^H\mathbf{U}) \\
\text{s.t.} ~&\text{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,
\end{align}
\end{subequations}
which is a convex problem and can be efficiently solved by off-the-shelf convex solvers (e.g. CVX and SeDuMi) \cite{boyd2004convex}.
When $\mathbf{U},\mathbf{W},\mathbf{V}$ are given,
the objective function can be reformulated as follows
\begin{align}\label{eq_f_phi}
f(\mathbf{\Phi})=
\text{tr} (\mathbf{W}
(\mathbf{U}^H\mathbf{H}_d\mathbf{V}\mathbf{V}^H\mathbf{H}_1^H \mathbf{\Phi}^H\mathbf{H}_2^H\mathbf{U}
+\mathbf{U}^H\mathbf{H}_2\mathbf{\Phi}\mathbf{H}_1\mathbf{V} \mathbf{V}^H\mathbf{H}_d^{H}\mathbf{U}
+\mathbf{U}^{H}\mathbf{H}_2\mathbf{\Phi}\mathbf{H}_1\mathbf{V} \mathbf{V}^H\mathbf{H}_1^{H}\mathbf{\Phi}^H\mathbf{H}_2^H\mathbf{U})
\nonumber\\
-\mathbf{W}(\mathbf{V}^H\mathbf{H}_1^H\mathbf{\Phi}^H \mathbf{H}_2^H\mathbf{U}+
\mathbf{U}^H\mathbf{H}_2\mathbf{\Phi}\mathbf{H}_1\mathbf{V})).
\end{align}
For a diagonal matrix $\mathbf{A}$, denote $\mathbf{a} = \text{diag} (\mathbf{A})$;
then the following identity holds for the Hermitian $\mathbf{B}$ and $\mathbf{C}$ arising here:
\begin{align} \label{eq:20210121_0045}
\text{tr}(\mathbf{A}^H \mathbf{B} \mathbf{A} \mathbf{C} )=\mathbf{a} ^H( \mathbf{C} \odot \mathbf{B}^T )\mathbf{a} ,
\end{align}
where $\odot$ denotes the Hadamard (element-wise) product.
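For readers implementing the algorithm, the following short Python snippet numerically verifies the identity \eqref{eq:20210121_0045}, together with the vectorized counterpart used later for the full-duplex relay. It is purely illustrative: the randomly generated Hermitian $\mathbf{B}$ and $\mathbf{C}$ are stand-ins for the matrices defined below.
\begin{verbatim}
# Illustrative check of tr(A^H B A C) = a^H (C o B^T) a (Hadamard form,
# valid for the Hermitian B, C arising here) and of the general
# vectorized form tr(A^H B A C) = vec(A)^H (C^T kron B) vec(A).
import numpy as np

rng = np.random.default_rng(0)
K = 5

def rand_hermitian(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return X @ X.conj().T            # Hermitian positive semidefinite

B, C = rand_hermitian(K), rand_hermitian(K)
a = np.exp(1j * rng.uniform(0, 2 * np.pi, K))   # unit-modulus diagonal
A = np.diag(a)
vecA = A.reshape(-1, order='F')                 # column-stacking vec()

lhs = np.trace(A.conj().T @ B @ A @ C)
rhs_hadamard = a.conj() @ (C * B.T) @ a
rhs_kron = vecA.conj() @ np.kron(C.T, B) @ vecA

assert np.allclose(lhs, rhs_hadamard) and np.allclose(lhs, rhs_kron)
\end{verbatim}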
With the aid of equality \eqref{eq:20210121_0045},
problem \eqref{P_WMMSE} then becomes
\begin{subequations} \label{P_WMMSE_UWV}
\begin{align}
\min_{{\bm \phi}}~& \ \ {\bm \phi}^H\mathbf{\Xi}{\bm \phi}+2\mathcal{R}\{{\bm \phi}^H\mathbf{b} ^*\} \\
\text{s.t.} & \ \ |{\bm \phi}_k|\leq 1, \\
\text{where}& \nonumber \\
\mathbf{\Xi} &=(\mathbf{H}_2^H\mathbf{U}\mathbf{W}\mathbf{U}^H\mathbf{H}_2)\odot
(\mathbf{H}_1\mathbf{V}\mathbf{V}^H\mathbf{H}_1^H)^T, \\
\mathbf{B}
&=\mathbf{H}_1\mathbf{V}(\mathbf{V}^H\mathbf{H}_d^H\mathbf{U}
-\mathbf{I})\mathbf{W}\mathbf{U}^H\mathbf{H}_2,\\
\mathbf{b} &=\text{diag}(\mathbf{B}),
\end{align}
\end{subequations}
and $\mathcal{R}\{\cdot\}$ denotes the real part of its argument.
Problem \eqref{P_WMMSE_UWV} is also a convex problem.
The alternating optimization procedure for obtaining an approximate solution to problem \eqref{P_WMMSE} is summarized in Algorithm \ref{alg1} below.
\begin{algorithm}[H]
\caption{Alternating optimization algorithm}
\label{alg1}
\begin{algorithmic}[1]
\STATE
Initialize a feasible $\{\mathbf{V}^1,\mathbf{\Phi}^1 \} $ to problem \eqref{P_WMMSE};
\REPEAT
\STATE
For given $\{\mathbf{V}^{i},\mathbf{\Phi}^{i} \}$, the optimal $\{\mathbf{U}^{i},\mathbf{W}^{i} \}$ are given by \eqref{eq_U_mmse} and \eqref{eq_W_mmse}, respectively.
\STATE
Solve problem \eqref{P_WMMSE_UWPhi} for given $\{\mathbf{\Phi}^{i},\mathbf{U}^{i}, \mathbf{W}^{i} \}$ by convex optimization, and denote the solution as $\mathbf{V}^{i+1}$.
\STATE
Solve problem \eqref{P_WMMSE_UWV} for given $\{\mathbf{U}^{i}, \mathbf{W}^{i},\mathbf{V}^{i+1} \}$ by convex optimization, and denote the solution as $\mathbf{\Phi}^{i+1} = \text{diag} ({\bm \phi})$.
\STATE
Set $i:=i+1$;
\UNTIL $| f(\mathbf{\Phi}^{i})-f(\mathbf{\Phi}^{i-1})| \leq \epsilon$.
\STATE
$\{\mathbf{V}^{i},\mathbf{\Phi}^{i},\mathbf{U}^{i-1},\mathbf{W}^{i-1}\}$ is the obtained approximate solution of problem \eqref{P_WMMSE}.
\end{algorithmic}
\end{algorithm}
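For concreteness, a minimal NumPy sketch of Algorithm \ref{alg1} is given below. The channel realizations, dimensions, and iteration counts are illustrative placeholders; the $\mathbf{V}$-step uses the standard bisection on the power-constraint multiplier, and the $\mathbf{\Phi}$-step uses projected gradient descent in place of a generic convex solver. It is a sketch of the procedure, not the exact implementation behind the numerical results.
\begin{verbatim}
# Minimal sketch of Algorithm 1 (RIS case). A fixed iteration count
# replaces the stopping rule |f(Phi^i) - f(Phi^{i-1})| <= eps.
import numpy as np

rng = np.random.default_rng(1)
M = N = 4; K = 16; Ps, sigma2 = 10.0, 1.0
cplx = lambda sh: (rng.standard_normal(sh) + 1j*rng.standard_normal(sh))/np.sqrt(2)
Hd, H1, H2 = cplx((N, M)), cplx((K, M)), cplx((N, K))
herm = lambda X: X.conj().T

def update_UW(H, V):
    # Closed-form MMSE receiver U and weight W = E_mmse^{-1}.
    U = np.linalg.solve(H @ V @ herm(V) @ herm(H) + sigma2*np.eye(N), H @ V)
    E = np.eye(V.shape[1]) - herm(U) @ H @ V
    return U, np.linalg.inv(E)

def update_V(H, U, W):
    # V(mu) = (H^H U W U^H H + mu I)^{-1} H^H U W; bisect mu >= 0
    # until tr(V V^H) <= Ps (KKT condition of the V-subproblem).
    A, B = herm(H) @ U @ W @ herm(U) @ H, herm(H) @ U @ W
    V_of = lambda mu: np.linalg.solve(A + mu*np.eye(M), B)
    if np.linalg.norm(V_of(0.0))**2 <= Ps:
        return V_of(0.0)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(V_of(hi))**2 > Ps:
        hi *= 2.0
    for _ in range(50):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(V_of(mid))**2 > Ps else (lo, mid)
    return V_of(hi)

def update_phi(phi, U, W, V):
    # Projected gradient for phi^H Xi phi + 2 Re(phi^H b^*), |phi_k| <= 1.
    Xi = (herm(H2) @ U @ W @ herm(U) @ H2) * (H1 @ V @ herm(V) @ herm(H1)).T
    Bm = H1 @ V @ (herm(V) @ herm(Hd) @ U - np.eye(V.shape[1])) @ W @ herm(U) @ H2
    b = np.diag(Bm)
    step = 1.0 / np.linalg.eigvalsh(Xi).max()
    for _ in range(200):
        phi = phi - step * (Xi @ phi + b.conj())
        phi = phi / np.maximum(1.0, np.abs(phi))  # project onto |phi_k| <= 1
    return phi

V, phi = np.sqrt(Ps/M)*np.eye(M, dtype=complex), np.ones(K, dtype=complex)
for _ in range(30):
    H = Hd + H2 @ np.diag(phi) @ H1               # effective channel
    U, W = update_UW(H, V)
    V = update_V(H, U, W)
    phi = update_phi(phi, U, W, V)
H = Hd + H2 @ np.diag(phi) @ H1
_, logdet = np.linalg.slogdet(np.eye(N) + H @ V @ herm(V) @ herm(H)/sigma2)
rate = logdet / np.log(2)                         # achievable rate (bps/Hz)
\end{verbatim}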
\begin{remark}\label{remark_2}
The solution generated by Algorithm \ref{alg1} is a stationary point of the weighted MSE minimization problem \eqref{P_WMMSE}.
The proof is omitted due to limited space;
a similar proof can be found in \cite{shi2011iteratively}.
\end{remark}
\subsection{Full Duplex Relay}
Similarly, the throughput maximization problem \eqref{P_FDR} is equivalent to the following weighted MSE minimization problem:
\begin{subequations} \label{P_WMMSE_FDR}
\begin{align}
\min \limits_{\mathbf{V},\mathbf{F},\mathbf{U},\mathbf{W}}~ & \mathrm{tr}(\mathbf{WE}) - \log \det (\mathbf{W}) \\
\text{s.t.} \quad
& \text{tr}(\mathbf{V}\mathbf{V}^H)\leq P_s,\\
& \text{tr}(\mathbf{F}\mathbf{D}\mathbf{F}^H)\leq P_r.
\end{align}
\end{subequations}
When $\mathbf{V},\mathbf{F}$ are given, the optimal $\mathbf{U}$, $\mathbf{W}$ can be obtained as
\begin{align}
\mathbf{U}_{\text{mmse}}&=
\left(\mathbf{H}\mathbf{V}\mathbf{V}^H\mathbf{H}^{\mathrm{H}}+
\mathbf{R}_n
\right)^{-1}\mathbf{H}\mathbf{V},\\
\mathbf{W}&=\mathbf{E}^{-1}_{\text{mmse}},
\end{align}
where $\mathbf{E}_{\text{mmse}}=\mathbf{I}-\mathbf{V}^H\mathbf{H}^H \mathbf{R}_n^{-1}
(\mathbf{H}\mathbf{V}\mathbf{V}^H\mathbf{H}^H\mathbf{R}_n^{-1} +\mathbf{I})^{-1}\mathbf{HV}$.
When $\mathbf{F},\mathbf{U},\mathbf{W}$ are given,
the problem \eqref{P_WMMSE_FDR} is convex w.r.t. the transmit beamforming $\mathbf{V}$.
When $\mathbf{U},\mathbf{W},\mathbf{V}$ are given,
the objective function of problem \eqref{P_WMMSE_FDR} can be written as follows
\begin{align}\label{eq_f_F}
g(\mathbf{F})=
\text{tr}(\mathbf{F}^H\mathbf{H}_2^H\mathbf{U}\mathbf{W}\mathbf{U}^H \mathbf{H}_2\mathbf{F}\mathbf{H}_1\mathbf{V}\mathbf{V}^H\mathbf{H}_1^H)+
\text{tr}
(\mathbf{F}^H\mathbf{H}_2^H\mathbf{U}\mathbf{W} (\mathbf{U}^H\mathbf{H}_d\mathbf{V}-\mathbf{I})\mathbf{V}^H \mathbf{H}_1^H)\nonumber\\
+\text{tr}
(\underbrace{\mathbf{H}_1\mathbf{V}(\mathbf{V}^H\mathbf{H}_d^H\mathbf{U} -\mathbf{I})\mathbf{W}\mathbf{U}^H\mathbf{H}_2}_{\mathbf{B}}\mathbf{F})
+\sigma_R^2\text{tr}(\mathbf{F}^H
\underbrace{\mathbf{H}_{2}^H\mathbf{U}\mathbf{W} \mathbf{U}^H\mathbf{H}_2}_{\mathbf{C}}\mathbf{F})
\nonumber \\
=\text{vec}(\mathbf{F})^{\mathrm{H}}\underbrace{ (\mathbf{H}_1\mathbf{V}\mathbf{V}^H\mathbf{H}_1^H)^T \otimes(\mathbf{H}_2^H\mathbf{U}\mathbf{W}\mathbf{U}^H \mathbf{H}_2)}_{\mathbf{A}}
\text{vec}(\mathbf{F})
+2\mathcal{R}\left\{
\underbrace{\text{vec}(\mathbf{B}^H)^H}_{\mathbf{b}^H}\text{vec}(\mathbf{F})
\right\}\nonumber\\
+\sigma_R^2\text{vec}(\mathbf{F})^H(\mathbf{I}_K \otimes\mathbf{C})\text{vec}(\mathbf{F}),
\end{align}
where $\otimes$ denotes the Kronecker product.
With the aid of the equality $\text{tr}(\mathbf{A}^H \mathbf{B} \mathbf{A} \mathbf{C}) = \text{vec}(\mathbf{A})^H (\mathbf{C}^T \otimes \mathbf{B})\text{vec}(\mathbf{A})$, problem \eqref{P_WMMSE_FDR} can be reformulated as
\begin{subequations} \label{P_WMMSE_FDR_UWV}
\begin{align}
\min_{\mathbf{F}}~ & \mathbf{f}^H\mathbf{\Xi}
\mathbf{f}+ 2\mathcal{R}(\mathbf{b}^H\mathbf{f})\\
\text{s.t.} \quad
&\mathbf{f}^H(\mathbf{I}\otimes\mathbf{D})\mathbf{f}\leq P_r,
\end{align}
\end{subequations}
in which $\mathbf{\Xi}=\mathbf{A}+\sigma_R^2(\mathbf{I}\otimes\mathbf{C})$,
$\mathbf{f}=\text{vec}(\mathbf{F})$.
It can be seen that problem \eqref{P_WMMSE_FDR_UWV} is a convex problem.
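A hedged sketch of this $\mathbf{F}$-step is given below: by the KKT conditions, the minimizer satisfies $(\mathbf{\Xi} + \mu(\mathbf{I}\otimes\mathbf{D}))\mathbf{f} = -\mathbf{b}$ for some multiplier $\mu \geq 0$, which can be found by bisection. The inputs are placeholders built as in the text, and $\mathbf{D}\succ 0$ is assumed.
\begin{verbatim}
# Sketch of the F-step of problem (P_WMMSE_FDR_UWV): stationarity gives
# (Xi + mu (I kron D)) f = -b, with mu >= 0 chosen by bisection so that
# the relay power constraint f^H (I kron D) f <= Pr is satisfied.
import numpy as np

def solve_F(Xi, b, D, Pr, iters=60):
    m = Xi.shape[0] // D.shape[0]
    Q = np.kron(np.eye(m), D)                    # I kron D (assumed > 0)
    f_of = lambda mu: np.linalg.solve(Xi + mu*Q, -b)
    power = lambda f: (f.conj() @ Q @ f).real
    if power(f_of(0.0)) <= Pr:
        return f_of(0.0)                         # constraint inactive
    lo, hi = 0.0, 1.0
    while power(f_of(hi)) > Pr:                  # power decreases in mu
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if power(f_of(mid)) > Pr else (lo, mid)
    return f_of(hi)
\end{verbatim}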
Similar to the discussion for problem \eqref{P_WMMSE},
the alternating optimization algorithm can be applied to handle problem \eqref{P_WMMSE_FDR}.
\subsection{Half Duplex Relay}
Problem \eqref{P_HDR} has the same structure as problem \eqref{P_FDR}.
Therefore, Algorithm \ref{alg1} can be used to solve problem \eqref{P_HDR} by replacing the variable $\mathbf{F}$ with $\mathbf{G}$.
\section{Numerical Results and Analysis}
In this section, numerical results are presented to validate the effectiveness of the proposed algorithm.
In addition, comparisons between RIS and relays are studied.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=0,width=0.44\textwidth]{distance}
\end{center}
\caption{The simulation setup. }
\label{fig:setup}
\end{figure}
\subsection{Numerical Results}
The simulation setup is shown in Fig. \ref{fig:setup}.
The locations of the source node and the destination node are set as
$(0,0)$ and $(d_{sd},0)$, respectively.
The location of the RIS (or FDR/HDR) is set as $(d_1,d_r)$.
We adopt the path loss model of the 3GPP Urban Micro (UMi) scenario \cite{access2010further}
(Table B.1.2.1-1) and Rayleigh fading as the small-scale fading.
We consider the line-of-sight (LOS) and non-LOS (NLOS) versions
of UMi, which are defined for distances of at least 10~m.
Default system parameters are set as:
$d_{sd}=100$~m,
$d_1=50$~m,
$d_r=10$~m,
$M=4$,
$N=4$,
$L=4$,
$K=200$,
$P_s=P_r=43$~dBm,
carrier frequency $f = 3$~GHz,
and bandwidth $B=100$~MHz.
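For reproducibility, the default configuration and the UMi path-loss expressions we use (our reading of Table B.1.2.1-1 in \cite{access2010further}; distances in meters, carrier frequency in GHz) can be summarized in Python as follows:
\begin{verbatim}
import numpy as np

cfg = dict(d_sd=100.0, d1=50.0, dr=10.0, M=4, N=4, L=4, K=200,
           Ps_dBm=43.0, Pr_dBm=43.0, fc_GHz=3.0, BW_Hz=100e6)

def pathloss_umi_los_dB(d, fc=cfg['fc_GHz']):      # valid for d >= 10 m
    return 22.0*np.log10(d) + 28.0 + 20.0*np.log10(fc)

def pathloss_umi_nlos_dB(d, fc=cfg['fc_GHz']):
    return 36.7*np.log10(d) + 22.7 + 26.0*np.log10(fc)

def rayleigh(shape, rng=np.random.default_rng(0)):
    # unit-variance Rayleigh small-scale fading coefficients
    return (rng.standard_normal(shape)
            + 1j*rng.standard_normal(shape)) / np.sqrt(2)
\end{verbatim}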
Fig. \ref{fig:rate_Num_antenna} plots the achievable rate versus the number of reflecting elements $K$ for different transmit powers $P_s$.
It can be seen that the achievable rate is increasing in $K$.
Moreover, the RIS performs better at higher $P_s$.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.56\textwidth]{fig_rate_Num_antenna}
\end{center}
\caption{Achievable rate versus the number of RIS reflecting elements. }
\label{fig:rate_Num_antenna}
\end{figure}
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{fig_rate_distance}
\end{center}
\caption{Achievable rate versus distance. }
\label{fig:rate_distance1}
\end{figure}
Fig. \ref{fig:rate_distance1} plots the achievable rate versus the distance $d_1$ for different $d_r$.
Energy efficiency is an important metric for operators, so we also investigate the energy-efficiency performance of both RIS and FDR/HDR.
Fig. \ref{fig:rate_distance2} plots the energy efficiency versus the distance $d_1$ for different $d_r$.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{fig_EE_distance}
\end{center}
\caption{Energy efficiency versus distance. }
\label{fig:rate_distance2}
\end{figure}
In Fig. \ref{fig:rate_distance1} and Fig. \ref{fig:rate_distance2}, both the transmit power $P_s$ at the source node and $P_r$ at the relay are set to 43~dBm.
From Fig. \ref{fig:rate_distance1} and Fig. \ref{fig:rate_distance2}, we make the following observations:
\begin{itemize}
\item In terms of achievable rate, FDR performs best, while the performance of RIS is on par with that of HDR.
\item In terms of energy efficiency, the performance of RIS is comparable to that of FDR.
Moreover, RIS can outperform FDR given a large number of reflecting elements and a good deployment location.
\item Deploying the RIS in the vicinity of the source node or the destination node achieves better performance.
\end{itemize}
\begin{remark}
According to the above observations, although the RIS cannot match the achievable rate of FDR, it does not fall behind FDR in terms of energy efficiency, which is an especially important performance metric from the perspective of an operator.
Considering that RIS also has advantages in both implementation and energy consumption, the technique is very attractive to operators.
\end{remark}
\subsection{Comparison between RIS and Relays}
In this section, we discuss the comparison between RIS and relays from different perspectives.
From the perspective of system models, RIS can be regarded as a full-duplex MIMO relay without self-interference.
In addition to the differences in system models, we have also seen performance differences across the cases studied above.
Finally, we consider the difference between RIS and relay in actual deployment and application methods from the perspective of operator.
Since the RIS can operate in a passive or semi-passive manner, it can be deployed in a more flexible way than a relay.
The associated loss of power and performance can be compensated by increasing the number of reflecting elements.
The most important feature of the RIS is the possibility of operating free from a wired power supply.
Only then does the flexible and on-demand deployment envisioned for the sixth generation (6G) network become possible.
The RIS then not only supplements coverage, but also provides on-demand service for users and traffic.
To achieve the above goal, the network needs to control the RIS in a wireless way.
New designs are required in terms of protocol architecture and control methods.
Furthermore, the designed solution must ensure low power consumption and low complexity.
Relays, in contrast, are not subject to such power restrictions.
Therefore, more complicated control and processing can be implemented at relays.
\section{Conclusion and Future Work}
In this paper, we proposed an alternating weighted MMSE algorithm to handle the throughput maximization problem by jointly optimizing the transmit beamforming and the reflecting coefficients of the RIS.
Moreover, the comparison between RIS and relays was investigated:
\begin{itemize}
\item From the perspective of system models, RIS can be regarded as a full-duplex MIMO relay without self-interference.
\item From the perspective of performance, RIS is comparable to HDR in spectral efficiency and can achieve the same or even better energy efficiency than FDR.
\item From the perspective of deployment requirement and controlling method, RIS can provide a low-cost and flexible solution which is free from wired power supply.
\end{itemize}
Our future work includes extensions to multi-user MIMO, system-level evaluation of RIS-aided networks, and system architecture designs that achieve low-power and low-complexity control of the RIS.
\section{Introduction}
Let $\pi$ be a unitary cuspidal automorphic representation for ${\rm GL}(2)$ over a number field $F$. We assume that it is not of solvable polyhedral type, which means that it does not correspond to an Artin representation of dihedral, tetrahedral, or octahedral type. Associated to a finite place $v$ where $\pi$ is unramified, we have the multiset of Satake parameters $\{\alpha_v (\pi), \beta_v (\pi)\}$ and their sum is called the Hecke eigenvalue $a_v(\pi)$ of $\pi$ at $v$.
One can ask about the distribution of the sequence $(a_v(\pi))_v$.
If one restricts to automorphic representations that correspond to holomorphic forms, then more is known. For example, the Sato-Tate conjecture has been proved for a wide range of Hilbert modular forms~\cite{ST}. In the general case, however, much less is known.
For example, in an appendix to~\cite{Sh94}, J.-P.~Serre asked if, for self-dual $\pi$, it can be shown that there are infinitely many Hecke eigenvalues greater than a given positive constant $c$ (and similarly, if there are infinitely many Hecke eigenvalues less than a given negative constant $c'$). An answer to this was provided by Theorem~1.2 of~\cite{Wa18} with $c = 0.905$ and $c' = -1.164$.
In the case when $\pi$ is non-self-dual, one can extend the question as follows: For what angle $\theta$ can it be shown that there are infinitely many Hecke eigenvalues in any sector of size $\theta$? Furthermore, given any such sector, for what $c$ do we have infinitely many Hecke eigenvalues greater than size $c$?
A consequence of Theorem~1.3 of~\cite{Wa18} is that this holds true for $\theta = \pi$ radians, with $c = 0.5$. In this paper, we will improve the value of $\theta$ to 2.63 radians and improve $c$ to $0.595$.
\begin{theorem}\label{t1}
Let $\pi$ be a non-self-dual unitary cuspidal automorphic representation for ${\rm GL}(2)/F$, where $F$ is a number field, that is not of solvable polyhedral type. Then, for any angle $\phi$ we have that the following set of places
\begin{align*}
\{v \mid {\rm arg}(a_v(\pi)) \in (\phi -1.314, \phi + 1.314)\}
\end{align*}
has positive upper Dirichlet density.
Furthermore, the subset of such places whose associated Hecke eigenvalue has a size of at least 0.595 also has positive upper Dirichlet density.
\end{theorem}
\section{Asymptotic properties of certain Dirichlet series}\label{dsec}
In this section, we assume that $\pi$ is a cuspidal automorphic representation for ${\rm GL}(2)/F$ that is not self-dual and not of solvable polyhedral type.
\begin{notation}
Denote by $X = X (\pi)$ the set of archimedean places as well as places at which $\pi$ is ramified.
Values of $k$ will be associated with our examination of the asymptotic behaviour of $$\sum_{v \not \in X} {\rm Re}(e ^{i \phi}a_v(\pi))^k {\rm N}v^{-s},$$
as $s \rightarrow 1^+$,
for $k = 3,4,6,8$, where $\phi$ is any fixed angle in $[0,2 \pi)$. Let $\omega$ be the central character of $\pi$ and denote the order of this character by $r$. Lastly, we will write $\ell (s) := \log (1/(s-1))$.
\end{notation}
We will repeatedly make use of the bounds towards the Ramanujan conjecture of Kim--Sarnak~\cite{Ki03} (in the rational case) and Blomer--Brumley~\cite{BB11} (for number fields). We will also need the functoriality results of Gelbart--Jacquet~\cite{GJ78}, Kim--Shahidi~\cite{KS00,KS02}, and Kim~\cite{Ki03}, regarding the symmetric square, cube, and fourth power lifts of cuspidal automorphic representations for {\rm GL}(2).
\subsection{$k = 3$}
We consider incomplete $L$-functions of the form $L^X(s, \pi ^m\times \overline{\pi}^n)$ where $m,n$ are non-negative integers and $m+n = 3$.
In the case $(m,n)= (2,1)$, making use of Clebsch--Gordan decompositions and the unitarity of $\pi$, we obtain
\begin{align*}
L^X(s, \pi \times \pi \times \overline{\pi}) = L^X(s, {\rm Sym}^3 \pi \otimes \omega ^{-1}) L^X(s, \pi)^2
\end{align*}
where $\omega$ is the central character of $\pi$. Taking logarithms and using the bounds towards the Ramanujan conjecture~\cite{Ki03,BB11} we obtain
\begin{align*}
\sum_{v \not \in X}\frac{a_v(\pi) ^2 \overline{a_v(\pi)}}{{\rm N}v^s} = O(1),
\end{align*}
as $s \rightarrow 1^+$.
Using a similar approach for the other cases, we see that the same asymptotic behaviour occurs for $\sum_{v \not \in X}a_v(\pi) ^m \overline{a_v(\pi)^n} {\rm N}v^{-s} $ for $(m,n) = (3,0), (1,2),$ and $(0,3)$.
Therefore, for any $\phi \in [0,2 \pi)$,
\begin{align*}
\sum_{v \not \in X}\frac{{\rm Re}(e ^{i \phi}a_v(\pi))^3}{{\rm N}v^s} = \frac{1}{2 ^3} &\left(\sum_{v \not \in X}\frac{e ^{3i \phi}a_v(\pi)^3}{{\rm N}v^s}
+3 \sum_{v \not \in X}\frac{e ^{i\phi}a_v(\pi)^2 \overline{a_v(\pi)}}{{\rm N}v^s} \right. \\
&+ \left. 3\sum_{v \not \in X}\frac{e ^{-i\phi}a_v(\pi) \overline{a_v(\pi)^2}}{{\rm N}v^s}
+ \sum_{v \not \in X}\frac{e ^{-3i \phi} \overline{a_v(\pi)^3}}{{\rm N}v^s}\right)\\
= O(1) &
\end{align*}
since each of the four series on the right-hand side is bounded as $s \rightarrow 1^+$.
\subsection{$k = 4$} \label{sseck4} Using the same approach as in the $k = 3$ case, we find that
\begin{align*}
L^X(s, \pi \times \pi \times \pi \times \pi) = L^X(s, {\rm Sym}^4 \pi)L^X(s, {\rm Sym}^2 \pi \otimes \omega)^3 L^X(s, \omega ^2)^2.
\end{align*}
Therefore, if $\pi$ has central character of order two,
$$\sum_{v \not \in X}a_v(\pi) ^4 {\rm N}v^{-s} = 2 \ell(s) + O(1)$$ as $s \rightarrow 1^+$. If not, then the series is bounded in that limit.
We also note
\begin{align*}
L^X(s, \pi \times \pi \times \pi \times \overline{\pi}) = L^X(s, {\rm Sym}^4 \pi \otimes \omega ^{-1})L^X(s, {\rm Sym}^2 \pi)^3 L^X(s, \omega)^2,
\end{align*}
which then implies
\begin{align*}
\sum_{v \not \in X}{a_v(\pi) ^3 \overline{a_v(\pi) }} {{\rm N}v^{-s}} = O(1)
\end{align*}
as $s \rightarrow 1^+$, since $\pi$ is not self-dual.
We similarly obtain
\begin{align*}
\sum_{v \not \in X} {a_v(\pi) ^2 \overline{a_v(\pi)^2 }}{ {\rm N}v^{-s}} = 2 \ell(s) + O(1),
\end{align*}
and we conclude
\begin{align*}
\sum_{v \not \in X}{{\rm Re}(e ^{i \phi}a_v(\pi))^4}{{\rm N}v^{-s}} = q_4 \cdot \ell(s) + O(1) ,
\end{align*}
where
\begin{align*}
q_4=q_4 (\pi,\phi) =
\begin{cases}
\frac{3 + \cos 4\phi}{4}, &\text{ if }r=2,\\
\frac 34, &\text{ if } r \geq 3.
\end{cases}
\end{align*}
\subsection{$k = 6$}\label{sseck6}
Note that the incomplete $L$-function
$L^X(s, \pi ^{\times m}\times \overline{\pi}^{\times n})$, for non-negative integers $m + n = 6$, can be expressed as
\begin{align}\label{eq1}
L^X(s, {\rm Sym}^3 \pi \times {\rm Sym}^3 \pi \otimes \omega ^{-n})L^X(s, {\rm Sym}^3 \pi \times \pi \otimes \omega ^{1-n})^4 L^X(s, \pi \times \pi \otimes \omega ^{2-n})^4
\end{align}
and also as
\begin{align}\label{eq2}
&L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^2 \pi \otimes \omega ^{-n})L^X(s, {\rm Sym}^4 \pi \otimes \omega ^{1-n}) \\ \notag
& \cdot L^X(s, ({\rm Sym}^2 \pi \otimes \omega ^{1-n})\times {\rm Sym}^2 \pi)^3 L^X(s, {\rm Sym}^2 \pi \otimes \omega ^{2-n})^5 L^X(s, \omega ^{3-n})^2.
\end{align}
The first and third $L$-functions in equation~\ref{eq1} are either invertible at $s=1$ or have a simple pole there. The second $L$-function is invertible at $s=1$. Therefore, equation~\ref{eq1} either is invertible at $s=1$, or has a pole of order 1, 4, or 5 there.
The third and fifth $L$-functions in~\ref{eq2} are either invertible at $s=1$ or have a simple pole there. The rest are invertible at $s=1$. Therefore, at $s=1$ equation~\ref{eq2} is either invertible there or has a pole of order 2, 3, or 5.
So $L^X(s, \pi ^{\times m}\times \overline{\pi}^{\times n})$ is either invertible at $s=1$ or has a pole of order 5. In the latter case, this holds if and only if $\omega ^{3-n}= 1$.
If $r = 2$, then the incomplete $L$-function $L^X(s, \pi ^{\times m}\times \overline{\pi}^{\times n})$ has a pole of order 5 exactly when $n = 1,3,5$. If $r = 3$, then this $L$-function has a pole of order 5 exactly when $n = 0,3,6$. If $r \geq 4$, then it has a pole of order 5 only when $n = 3$.
Denote by $\alpha_v (\pi)$ and $\beta_v (\pi)$ the Satake parameters of $\pi$ at $v$. Taking logarithms and applying the known bounds on the size of the Satake parameters, we obtain:
\begin{align*}
\sum_{v \not \in X}\sum_{t = 1,2} \frac{(\alpha_v(\pi) ^t + \beta_v(\pi) ^t)^6 \omega_v^{-tn}}{{\rm N}v^{st}}=
\begin{cases} 5 \ell(s) + O(1), & \text{ if } r = 2 \text{ and } n = 1,3,5, \\
& \text{ if } r = 3 \text{ and } n = 0,3,6, \\
& \text{ or if } r \geq 4 \text{ and } n = 3, \\
O(1), &\text{ if } r = 2 \text{ and } n = 0,2,4,6, \\
& \text{ if } r = 3 \text{ and } n = 1,2,4,5, \\
& \text{ or if } r \geq 4 \text{ and } n \neq 3. \end{cases}
\end{align*}
So for any angle $\phi \in [0,2 \pi)$,
\begin{align}\label{bigeq}
\frac{1}{2 ^6}{\sum_{n = 0}^{6}} \ ^6C_n \sum_{v \not \in X}\sum_{t = 1,2} \frac{(\alpha_v(\pi) ^t + \beta_v(\pi) ^t)^6 \omega_v^{-tn} }{{\rm N}v^{st}}e^{i (6-2n)\phi} \\ \notag
=\begin{cases}
\frac{5}{16}(3 \cos 4 \phi+ 5) \cdot \ell(s) + O(1), &\text{ if }r = 2, \\
\frac{5}{32}(\cos 6 \phi+ 10) \cdot \ell(s) + O(1), &\text{ if }r = 3, \\
\frac{25}{16} \cdot \ell(s) + O(1), &\text{ if }r \geq 4, \\
\end{cases}
\end{align}
as $s \rightarrow 1^+$.
We also note that the left-hand side of equation \ref{bigeq} above is equal to
\begin{align*}
\frac{1}{2 ^6} \sum_{v \not \in X} \sum_{t = 1 }^{ 2} \frac{(\alpha_v (\pi) ^t + \beta_v (\pi) ^t)^6}{{\rm N}v^{st}}(e^{i \phi} + \omega_v^{-t} e^{-i\phi})^6 .
\end{align*}
Since $$(\alpha_v (\pi) ^t + \beta_v (\pi) ^t)(e^{i \phi} + \omega_v^{-t} e^{-i \phi}) = (\alpha_v (\pi) ^t + \beta_v (\pi) ^t)e^{i \phi}+ (\overline{\alpha_v (\pi) ^t + \beta_v (\pi) ^t} )e^{-i \phi}$$
we know that
\begin{align*}
\sum_{v \not \in X} \frac{(\alpha_v (\pi) ^2 + \beta_v (\pi) ^2)^6}{{\rm N}v^{2s}}(e^{i \phi} + \omega_v^{-2} e ^{-i\phi})^6
\end{align*}
is non-negative.
We conclude
\begin{align}\label{k6eq}
\sum_{v \not \in X} \frac{({\rm Re}(a_v(\pi)e^{i\phi}))^6}{{\rm N}v^s} \leq q_6 \cdot \ell(s) + O(1),
\end{align}
where we can choose
\begin{align*}
q_6 = q_6(\pi) =
\begin{cases}
\frac 5 2, & \text{ if } r = 2, \\
\frac{55}{32}, & \text{ if }r = 3, \\
\frac{25}{16}, & \text{ if }r \geq 4.
\end{cases}
\end{align*}
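As a quick numerical illustration (not needed for the argument), the following snippet confirms that the values of $q_6$ above dominate the $\phi$-dependent coefficients in \eqref{bigeq} for every angle:
\begin{verbatim}
import numpy as np

phi = np.linspace(0.0, 2.0*np.pi, 100001)
assert np.max((5/16)*(3*np.cos(4*phi) + 5)) <= 5/2 + 1e-12    # r = 2
assert np.max((5/32)*(np.cos(6*phi) + 10)) <= 55/32 + 1e-12   # r = 3
\end{verbatim}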
\subsection{$k = 8$}\label{sseck8}
For non-negative integers $m + n = 8$, we have
\begin{align}\label{k8mneq}
&L^X(s, \pi ^{\times m} \times \overline{\pi}^{\times n}) \\ \notag
=& L^X(s, \pi ^{\times 8} \otimes \omega ^{-n}) \\ \notag
=&L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^4 \pi \otimes \omega ^{-n})L^X(s, {\rm Sym}^2 \pi \times {\rm Sym}^2 \pi \otimes \omega ^{2-n})^9
L^X(s, \omega ^{4-n})^4 \\ \notag
&\cdot L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^2 \pi \otimes \omega ^{1-n})^6
L^X(s, {\rm Sym}^4 \pi \otimes \omega ^{2-n})^4 L^X(s, {\rm Sym}^2 \pi \otimes \omega ^{3-n})^{12}.
\end{align}
$L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^4 \pi \otimes \omega ^{-n})$ has a simple pole at $s=1$ when $n = 4$. If it has a pole for other values of $n$, then either ${\rm Sym}^4 \pi$ admits a self-twist, or $\omega$ has order less than or equal to four.
Since there is no known characterisation of when ${\rm Sym}^4 \pi$ admits a self-twist, we examine different cases in terms of the possible order of the central character. If we assume that $L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^4 \pi \otimes \omega ^{-n})$ has a simple pole at $s=1$, then
$${\rm Sym}^4 \pi \otimes \omega ^{-n} \simeq \widetilde{{\rm Sym}^4 \pi}.$$ Considering the central characters of each side, we obtain $\omega ^{10-5n} = \omega ^{-10}$ and so $\omega$ has order dividing $(20-5n)$.
At this stage, we consider all the different possible pairs of values of $(r,n)$ for which the incomplete $L$-function $L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^4 \pi \otimes \omega ^{-n})$ may have a (simple) pole. We mention a few cases explicitly here:
If $r = 2$, then $L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^4 \pi \otimes \omega ^{-n})$ has a simple pole when $n$ is even and is invertible otherwise. If $r = 3$, then the $L$-function has a pole exactly when $n = 1,4,7$. If $r=4$, there is a pole exactly when $n = 0,4,8$, and if $r = 5$, we cannot rule out the existence of a pole for any value of $n$.
For $L^X(s, {\rm Sym}^2 \pi \times {\rm Sym}^2 \pi \otimes \omega ^{2-n})$, we note that Theorem 2.2.2 of~\cite{KS02} states that for non-dihedral $\pi$, the adjoint lift of $\pi$ admits a self-twist if and only if ${\rm Sym}^3 \pi$ is not cuspidal. However, we have assumed that $\pi$ is not of solvable polyhedral type which means that its symmetric cube lift must be cuspidal, so its adjoint lift, and thus its symmetric square lift, cannot admit a non-trivial self-twist. We now consider the cases of the different values of $r$: If $r = 2$, then the $L$-function has a pole when $n$ is even and is invertible otherwise. If $r = 3$, the $L$-function has a pole exactly when $n = 1,4,7$. If $r = 4$, the $L$-function has a pole exactly when $n = 0,4,8$. Lastly, if $r \geq 5$, then the $L$-function only has a pole when $n = 4$.
For $L^X(s, \omega ^{4-n})$, the analysis has the exact same outcomes as in the paragraph directly above.
Finally, we note that the last three $L$-functions in equation (\ref{k8mneq}), namely,
\begin{align*}
&L^X(s, {\rm Sym}^4 \pi \times {\rm Sym}^2 \pi \otimes \omega ^{1-n}),\\
&L^X(s, {\rm Sym}^4 \pi \otimes \omega ^{2-n}), \text{ and }\\
&L^X(s, {\rm Sym}^2 \pi \otimes \omega ^{3-n}),
\end{align*}
are all invertible at $s=1$.
We consider
\begin{align}\label{ank}
A(n,r):= \sum_{v \not \in X}\sum_{t = 1 }^{ 8} \frac{(\alpha_v(\pi) ^t + \beta_v(\pi) ^t)^8 \omega_v^{-tn}}{{\rm N}v^{st}}
\end{align}
If $n = 0$, then from the discussion above on the possible existence (and order) of poles at $s=1$ of the various $L$-functions, we find that equation \ref{ank} is bounded as $s \rightarrow 1^+$ when $r \neq 2,4,5,10,20$. In the case where $r = 2$ or $4$, we have $$A(n,r) = 14 \cdot \ell(s) + O(1),$$ and
in the case where $r = 5,10,$ or $20$, we have $$A(n,r) \leq \ell(s) + O(1).$$ We proceed similarly in considering other values of $n$ and $r$, recording the asymptotic behaviour of $A(n,r)$ in the table below:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline \textbf{n} & \textbf{r} & \textbf{A(n,r)} \\
\hline \hline 0 or 8 & $ 2,4$& $14 \ell(s) + O(1)$ \\
\hline & $ 5,10,20$& $\leq \ell(s) + O(1)$ \\
\hline & otherwise & $O(1)$ \\
\hline 1 or 7 & $3$ & $14 \ell(s) + O(1)$ \\
\hline & $5,15$ & $\leq \ell(s) + O(1)$ \\
\hline & otherwise & $O(1)$ \\
\hline 2 or 6 & $2$ & $14 \ell(s) + O(1)$ \\
\hline & $5,10$ & $\leq \ell(s) + O(1)$ \\
\hline & otherwise & $O(1)$ \\
\hline 3 or 5 & $5$ & $\leq \ell(s) + O(1)$ \\
\hline & otherwise & $O(1)$ \\
\hline 4 & all & $ 14 \ell(s) + O(1)$ \\
\hline
\end{tabular}
\end{center}
We can use the above to establish asymptotic bounds on
\begin{align*}
{\sum_{n = 0}^{8}} \sum_{v \not \in X}\sum_{t = 1 }^{ 8} \ ^8C_n \frac{(\alpha_v(\pi) ^t + \beta_v(\pi) ^t)^8 \omega_v^{-tn} }{{\rm N}v^{st}}e^{i (8-2n)\phi}.
\end{align*}
We scale the left-hand side of the expression above by $1/2^8$ and use positivity to obtain
\begin{align}
\sum_{v \not \in X} \frac{({\rm Re}(a_v(\pi) e^{i\phi}))^8}{{\rm N}v^s}\label{k8eq}
&\leq \sum_{v \not \in X} \sum_{t = 1 }^{ 8}
\frac{({\rm Re}(\alpha_v(\pi)^t e^{i\phi} + \beta_v(\pi)^t e^{i\phi}))^8}{{\rm N}v^{st}} \\
&\leq q_8 \cdot \ell(s) + O(1), \notag
\end{align}
as $s \rightarrow 1^+$, where
\begin{align*}
2^8 \cdot q_8=
\begin{cases}
1792 , & \text{ if } r = 2, \\
1204 , & \text{ if } r = 3, \\
1008 , & \text{ if } r = 4 ,\\
1166 , & \text{ if } r = 5, \\
1038 , & \text{ if } r= 10, \\
996 , & \text{ if } r= 15 ,\\
982 , & \text{ if } r = 20,\\
980 , &\text{ otherwise. }
\end{cases}
\end{align*}
\begin{remark}
These bounds appear to be best possible given current knowledge; in particular, there is no known characterisation for when a symmetric fourth power lift from GL(2) admits a self-twist (in contrast to, say, the symmetric square and cube cases, which are well-understood). For context, if we assumed the Ramanujan conjecture, then for any $r \geq 6$ with $r \neq 10,15,20$, the left-hand side of equation (\ref{k8eq}) would have a lower bound of $(980/2^8)\cdot \ell(s) + O(1)$.
\end{remark}
\section{Bounding subsets of Hecke eigenvalues}
First we recall that the upper and lower Dirichlet densities of a set $S$ of places of a number field $F$ are defined as
\begin{align*}
\overline{\delta}(S) = \limsup_{s \rightarrow 1^+} \frac{\sum_{v \in S}{\rm N}v^{-s}}{\log (1/(s-1))}
\end{align*}
and
\begin{align*}
\underline{\delta}(S) = \liminf_{s \rightarrow 1^+} \frac{\sum_{v \in S}{\rm N}v^{-s}}{\log (1/(s-1))},
\end{align*}
respectively, and note that these are equal if and only if the set has a Dirichlet density $\delta (S)$.
\subsection{Absolute value of Hecke eigenvalues}
The following lemma simply arises from adjusting the proof of Theorem~4.1 from~\cite{KS02}.
\begin{lemma} \label{kslem} Let $Q =r/s \geq 2$ be a rational number, where $r,s$ are positive integers. Then, for any unitary cuspidal automorphic representation $\pi$ for ${\rm GL}(2)$ over a number field,
we have
\begin{align*}
\overline{\delta}\{v \mid |a_v (\pi)| > Q\} \leq \frac{1}{1 + (Q ^2 -1)^2 + (Q ^4 -3 Q ^2 + 1)^2}.
\end{align*}
\end{lemma}
\begin{proof}
If $\pi$ is of solvable polyhedral type, then we know that it corresponds to an Artin representation~\cite{La80, Tu81} and therefore satisfies the Ramanujan conjecture, so the inequality holds.
If $\pi$ is not of solvable polyhedral type, we know that its adjoint and symmetric fourth power lifts are cuspidal. We construct the following isobaric automorphic representation
\begin{align*}
\eta = s^4 \alpha \cdot \textbf{1}\boxplus s^4\beta \cdot {\rm Ad}\pi \boxplus s^4 \gamma \cdot (\omega ^{-2}\otimes {\rm Sym}^4 \pi)
\end{align*}
where $\alpha,\beta,\gamma$ are non-negative integers whose values will be determined later.
Now
\begin{align*}
a_v(\eta) &= s^4\alpha + s^4\beta (|a_v (\pi)|^2 - 1) + s^4\gamma (|a_v (\pi)|^4 - 3 |a_v(\pi)|^2 +1).
\end{align*}
If $|a_v(\pi)| > Q \geq 2$, then $a_v(\eta) > s^4(\alpha + \beta (Q ^2 -1) + \gamma (Q ^4 -3 Q ^2 +1))$.
For some automorphic representation $\mu$ and non-negative real number $t$, define $T(\mu,t)$ to be the set of finite places $v$ at which $|a_v(\mu)| > t$. From~\cite{Ra97} we know that
\begin{align*}
\overline{\delta}(T(\mu,t)) \leq \frac{-{\rm ord}_{s=1}L(s,\mu \times \widetilde{\mu})}{t ^2}.
\end{align*}
Therefore, since $v \in T(\pi,Q) \Rightarrow v \in T(\eta, s^4\alpha + s^4\beta (Q ^2 -1) + s^4\gamma (Q ^4 -3 Q ^2 +1))$, we have
\begin{align*}
\overline{\delta}(T(\pi,Q)) \leq \frac{s^8(\alpha ^2 + \beta ^2 + \gamma^2)}{(s^4\alpha + s^4\beta (Q ^2 -1) + s^4\gamma (Q ^4 -3 Q ^2 +1)) ^2}\\
\end{align*}
Choose $\alpha = 1$, $\beta = Q ^2 -1$, and $\gamma = Q ^4 -3 Q^2 + 1$ to get
\begin{align*}
\overline{\delta}(T(\pi,Q)) \leq \frac{1}{1 + (Q ^2 -1)^2 + (Q ^4 -3 Q ^2 +1)^2}.
\end{align*}
\end{proof}
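The choice of $(\alpha,\beta,\gamma)$ above is the Cauchy--Schwarz equality case. The short numerical check below (illustrative only, with the common factor $s^4$ cancelled from the scale-invariant ratio) confirms that no other non-negative choice gives a smaller bound:
\begin{verbatim}
import numpy as np

def bound(Q, a, b, c):
    s = a + b*(Q**2 - 1) + c*(Q**4 - 3*Q**2 + 1)
    return (a*a + b*b + c*c) / s**2     # scale-invariant in (a, b, c)

Q = 2.5
opt = bound(Q, 1.0, Q**2 - 1, Q**4 - 3*Q**2 + 1)
rng = np.random.default_rng(0)
a, b, c = rng.uniform(0, 1, size=(3, 10**5))
assert opt <= bound(Q, a, b, c).min() + 1e-12
assert np.isclose(opt, 1/(1 + (Q**2-1)**2 + (Q**4-3*Q**2+1)**2))
\end{verbatim}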
\subsection{Case of central characters of order at least 6}
\label{mpf}
From here on, we assume that $\pi$ is non-self-dual and not of solvable polyhedral type, and we will fix an angle $\phi \in [0,2 \pi)$.
We will also make use of the notations $q_4,q_6,$ and $q_8$ from subsections \ref{sseck4}, \ref{sseck6}, and \ref{sseck8}, respectively. Later in the proof we will make the distinction between the cases $r < 6$ and $r \geq 6$.\\
Let
\begin{align*}
A = A (\pi) &:= \{v \not \in X\mid {\rm Re}(a_v(\pi)e^{-i \phi})>0 \},\\
B = B (\pi)&:= \{v \not \in X \mid {\rm Re}(a_v(\pi)e^{-i \phi}) \leq 0 \}.
\end{align*}
Given a set $S$ of finite places and a non-negative integer $t$,
we establish the notation
$${\rm ls} (S,t) :=
\limsup_{s \rightarrow 1^+} \left(\frac{\sum_{v \in S}{({\rm Re}(a_v (\pi)e^{-i \phi}))^t}{{\rm N}v^{-s}}}{\log (1/ (s-1))}\right)$$
and similarly
\begin{align*}
{\rm li} (S,t) :=
\liminf_{s \rightarrow 1^+} \left(\frac{\sum_{v \in S}{({\rm Re}(a_v (\pi)e^{-i \phi}))^t}{{\rm N}v^{-s}}}{\log (1/ (s-1))}\right).
\end{align*}
We also note the following identities that will be referred to later:\\
Given real-valued functions $f,g$ and a point $w \in \R$, we have
\begin{align}\label{lslem}
\limsup_{s \rightarrow w} (f(s) + g(s)) \geq \limsup_{s \rightarrow w} f(s) + \liminf_{s \rightarrow w} g(s) \geq \liminf_{s \rightarrow w} (f(s) + g(s)).
\end{align}
Furthermore, if $f$ and $g$ are non-negative functions, then
\begin{align}\label{lslem2}
\limsup_{s \rightarrow w} (f(s) \cdot g(s)) \leq \limsup_{s \rightarrow w} f(s) \cdot \limsup_{s \rightarrow w} g(s).
\end{align}
From subsection~\ref{sseck4} we have that ${\rm ls}(\Sigma_F-X,4)={\rm li}(\Sigma_F-X,4) = q_4$, where $\Sigma_F$ is the set of places of $F$. Applying equation~(\ref{lslem}), we have
\begin{align*}
{\rm li}(A,4)= q_4-{\rm ls} (B,4) .
\end{align*}
We set $d:={\rm ls} (B,4)$. Define
$$S = S (\beta) := \{v \in A \mid ({\rm Re}(a_v (\pi)e ^{-i \phi}))^4 > (q_4-d)\beta\},$$ for some constant $\beta \leq 1$, where we make the assumption that $\overline{\delta}(S) < 1/m$, for some constant $m$.
Note that
\begin{align*}
{\rm li}(A-S,4) \leq (q_4-d)\beta \cdot \underline{\delta}(A-S).
\end{align*}
Using equation~(\ref{lslem}),
\begin{align*}
{\rm li}(A-S,4) + {\rm ls} (S,4) &\geq {\rm li}(A, 4) = q_4 -d \\
{\rm ls} (S, 4) &\geq \left(q_4-d\right)(1- \beta \underline{\delta}(A-S)).
\end{align*}
Applying equations (\ref{k8eq}) and (\ref{lslem2}),
\begin{align*}
{\rm ls} (S, 4)^2 &\leq {\rm ls} (S, 8) \cdot {\rm ls} (S,0) \\
\left(q_4-d\right)^2(1- \beta \underline{\delta}(A-S))^2 & \leq q_8 \cdot \overline{\delta}(S)
\end{align*}
and from~(\ref{lslem}) we have $$\underline{\delta}(A-S) \leq \overline{\delta}(A) - \overline{\delta}(S),$$ so
\begin{align}\notag
\left(q_4-d\right)^2(1- \beta (\overline{\delta}(A) - \overline{\delta}(S)))^2 & \leq q_8 \cdot \overline{\delta}(S) \\
\left(q_4-d\right)^2(1- \beta (1 - \overline{\delta}(S)))^2 & \leq q_8 \cdot \overline{\delta}(S). \label{deseq1}
\end{align}
Now define
\begin{align*}
T = T(\alpha) := \left\{v \in A \Biggm| ({\rm Re}(a_v (\pi)e ^{-i \phi})) ^3 \geq \alpha d ^{5/4} \left(q_8 - \left(q_4-d\right)^2\right)^{-1/4}\right\}
\end{align*}
for some constant $\alpha \leq 1$, and we make the assumption that $\overline{\delta}(T) < 1/m$.
Note that
\begin{align*}
{\rm ls} (A-T,3) \leq \alpha d ^{5/4} \left(q_8 - \left(q_4-d\right)^2\right)^{-1/4} \overline{\delta}(A-T).
\end{align*}
Using the method from Section 3.1
of~\cite{Wa18} applied to our setting, we deduce
\begin{align*}
{\rm ls} (A-T,3) + {\rm ls} (T,3) \geq d ^{5/4} \left(q_8 - \left(q_4-d\right)^2\right)^{-1/4}.
\end{align*}
Combining the two equations above,
\begin{align*}
{\rm ls} (T,3) &\geq d ^{5/4} \left(q_8 - \left(q_4-d\right)^2\right)^{-1/4} (1- \alpha \overline{\delta}(A-T)),\\
{\rm ls} (T,3)^2 &\geq
\left(\frac{d ^{5/4}}{ \left(q_8 - \left(q_4-d\right)^2\right)^{1/4}}\right)^2 (1- \alpha)^2 .
\end{align*}
From equation (\ref{k6eq}), we have
\begin{align*}
{\rm ls} (T,3)^2 \leq {\rm ls} (T,6)\cdot {\rm ls} (T,0) \leq q_6 \cdot \overline{\delta}(T),
\end{align*}
and so
\begin{align}
\left(\frac{d ^{5/4}}{ \left(q_8 - \left(q_4-d\right)^2\right)^{1/4}}\right)^2 (1- \alpha)^2 \leq q_6 \cdot \overline{\delta}(T). \label{deseq2}
\end{align}
Given $\beta$, choose $\alpha$ such that
\begin{align*}
\left(\left(q_4-d\right)\beta \right)^{1/4} = \left(\alpha \frac{d ^{5/4}}{ \left(q_8 - \left(q_4-d\right)^2\right)^{1/4}}\right)^{1/3}
\end{align*}
We now specialize to the case $r \geq 6$,
so we may set $q_4 = 3/4$, $q_6 = 25/16$, and $q_8 = 519/128$.
If we choose $\alpha$ and $\beta$ such that the upper Dirichlet densities of the sets $S$ and $T$ are bounded above by $1/234$, then the equations~(\ref{deseq1}) and~(\ref{deseq2}) imply that ($\beta = 0.4906 \dots$, $d= 0.4934 \dots$) is a boundary case. Therefore, there is an upper Dirichlet density of at least $1/234$ for the set of places $v \in A$ such that $${\rm Re}(a_v (\pi)e ^{-i \phi}) > ((q_4-d)\beta)^{1/4} -\epsilon = 0.59566 \dots - \epsilon,$$
for any $\epsilon > 0$.
Recall that Lemma~\ref{kslem} states that for $Q \geq 2$ we have
\begin{align*}
\overline{\delta}\{v \mid |a_v (\pi)| > Q\} \leq \frac{1}{1 + (Q ^2 -1)^2 + (Q ^4 -3 Q ^2 + 1)^2}.
\end{align*}
The right-hand side is smaller than $1/234$ when $Q > 2.341$. This implies that there is a positive upper Dirichlet density of places $v$ where $a_v (\pi)$ lies in the region $$\{z \in \C \mid {\rm Re}(ze ^{-i \phi})>0.59566 , |z| \leq 2.341 \}.$$ Note that $\cos ^{-1} (0.59566 / 2.341) = 1.31352$ radians
(which is equal to $75.259$ degrees). This means that there is a positive upper Dirichlet density of places $v$ whose associated Hecke eigenvalue has argument in the interval $$(\phi -1.31353 , \phi + 1.31353).$$
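These constants are straightforward to reproduce numerically; for instance, the following snippet evaluates the bound of Lemma \ref{kslem} at $Q = 2.341$ and the opening angle of the region above:
\begin{verbatim}
import numpy as np

density_bound = lambda Q: 1/(1 + (Q**2 - 1)**2 + (Q**4 - 3*Q**2 + 1)**2)
print(1/density_bound(2.341))     # ~ 234.0, so the bound < 1/234 beyond
print(np.arccos(0.59566/2.341))   # ~ 1.31352 rad, i.e. ~ 75.259 degrees
\end{verbatim}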
\subsection{Case of central characters of order at most five}\label{loworder}
We now assume that the central character $\omega$ of the cuspidal automorphic representation $\pi$ is of order less than six.
We handle these cases separately since our bounds on the asymptotic behaviour of the various Dirichlet series from Section \ref{dsec} are weaker here, and relying only on the proof for the $r \geq 6$ case in the previous two subsections would lead to a weaker result.
At a finite place $v$ where $\pi$ is unramified, we have the associated multiset of Satake parameters $\{\alpha_v (\pi),\beta_v (\pi)\}$ where their product is equal to some (not necessarily primitive) $r$th root of unity $e^{i\mu }$, and their sum is equal to the Hecke eigenvalue $a_v(\pi)$. We write $\alpha_v (\pi) = \rho e ^{i \theta}$ and $\beta_v(\pi) = \rho ^{-1} e^{i (-\theta + \mu)}$, for some positive real number $\rho$ and some angle $\theta$.
Unitarity implies that
\begin{align}\label{unit}
\{\rho e ^{-i \theta}, \rho ^{-1} e ^{i (\theta - \mu) }\}= \{\rho ^{-1} e ^{-i \theta}, \rho e ^{i (\theta - \mu) }\}.
\end{align}
If $\rho = 1$, then
\begin{align*}
{\rm Re}(a_v(\pi)) = (1 + \cos \mu) \cos \theta + \sin \mu \sin \theta \\
{\rm Im} (a_v(\pi)) = (1- \cos \mu)\sin \theta + \sin \mu \cos \theta
\end{align*}
and
\begin{align*}
\frac{{\rm Im} (a_v(\pi))}{{\rm Re} (a_v(\pi))}= \frac{\sin \mu}{1 + \cos \mu}= \tan (\mu /2) ,
\end{align*}
so ${\rm arg}(a_v(\pi)) = \mu /2 + n\pi$, for some integer $n$.\\
If $\rho \neq 1$, then equation (\ref{unit}) implies $e^{-i \theta}= e^{i (\theta - \mu)}$, so $\theta = \mu /2 + n \pi$ for some integer $n$. This again means
\begin{align}\label{angleeq}
{\rm arg}(a_v(\pi)) = \mu /2 + n \pi.
\end{align}
We also want to apply the method of Subsection~\ref{mpf}. For each $r$ (and corresponding $q_4,q_6$ and $q_8$), we obtain a statement, for any angle $\phi \in [0,2 \pi)$, of the form
\begin{align}\label{deqn}
\overline{\delta}(\{v \mid {\rm Re}(a_v(\pi) e ^{-i \phi})> T(r)\})> 0.
\end{align}
For $r = 5$, we set $q_4 = 3/4 $, $q_6 = 25/16$, and $q_8 = 583/128$, and obtain $T(5)=0.679$.
For $r = 4$, we set $q_4 = 3/4$, $q_6 = 25/16$, and $q_8 = 504/128$, and get $T(4) = 0.684$.
For $r = 3$, set $q_4 = 3/4$, $q_6 = 55/32$, and $q_8 = 602/128$, obtaining $T(3) = 0.678$.\\
In the case of $r = 2$, we set $q_4 = (3 + \cos 4 \phi)/4 $, $q_6 = 5/2$, and $q_8 = 7$, and obtain, for $\cos 4 \phi \geq -0.785$,
\begin{align*}
\overline{\delta}(\{v \mid {\rm Re}(a_v(\pi) e ^{-i \phi})> 0.5956\})> 0.
\end{align*}
If $\cos 4 \phi < -0.785$, then we conclude that
\begin{align*}
\overline{\delta}(\{v \mid {\rm Re}(a_v(\pi) e ^{-i \phi})> 0.5723 \})> 0
\end{align*}
and use of basic geometry in this setting then implies that $|a_v(\pi)|> 0.702$.
Applying the results from the above equations \ref{angleeq} and \ref{deqn} for suitable values of $r$ and $\phi$, we find that any sector of angle greater than $144^\circ$ (i.e., 2.51 radians) must contain a positive upper Dirichlet density of Hecke eigenvalues of size greater than $0.5956$, which proves Theorem~\ref{t1} for $r \leq 5$.
\subsection{Acknowledgements} The author would like to thank Dinakar Ramakrishnan for suggesting this problem.
\section{Introduction}
\label{sec:intro}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{fig1.eps}
\caption{
Optical spectra of three nova-like variables:
RW Sex (top; Beuermann et al. 1992),
IX Vel (top middle; A. F. Pala \& B. T. Gaensicke, private communication)
and RW Tri in and out of eclipse (bottom two panels; Groot et al. 2004).
The data for RW Sex and RW Tri were digitized from the respective publications,
and the IX Vel spectrum was obtained using the XSHOOTER spectrograph
on the Very Large Telescope on 2014 October 10.
These systems have approximate inclinations of $30^\circ$, $60^\circ$ and $75^\circ$
(see section 5.4) respectively.
The trend of increasing Balmer line emission with inclination can be seen.
In RW Tri strong single-peaked emission in the Balmer lines is seen even
in eclipse, indicating that the lines may be formed in a spatially
extensive disk wind, and there is even a suggestion
of a (potentially wind-formed) recombination continuum in the eclipsed
spectrum. We have attempted to show each spectrum over a similar dynamic range.
}
\label{novalikes}
\end{figure*}
Cataclysmic variables (CVs) are systems in which a white dwarf
accretes matter from a donor star via Roche-lobe overflow. In
non-magnetic systems this accretion is mediated by a Keplerian disk
around the white dwarf (WD). Nova-like variables (NLs) are a subclass
of CVs in which the disk is always in a relatively
high-accretion-rate state ($\dot{M} \sim
10^{-8}$~M$_{\odot}$~yr$^{-1}$). This makes NLs an excellent
laboratory for studying the properties of steady-state accretion
disks.
It has been known for a long time that winds emanating from the
accretion disk are important in shaping the ultraviolet (UV) spectra
of high-state CVs \citep{heap1978, greensteinoke1982}. The most spectacular evidence for such
outflows are the P-Cygni-like profiles seen in UV resonance lines such as
C~\textsc{iv}~$\lambda1550$\ (see e.g. Cordova \& Mason
1982\nocite{cordova1982}). Considerable effort has been spent over the
years on understanding and modelling these UV features (e.g. Drew \&
Verbunt 1985\nocite{drewverbunt1985}; Mauche \& Raymond
1987\nocite{maucheraymond1987}; Drew 1987; Shlosman \& Vitello 1993; [hereafter
SV93]\nocite{SV93}; Knigge, Woods \& Drew 1995\nocite{KWD95};
Knigge \& Drew 1997\nocite{kd1997};
Knigge et al. 1997\nocite{knigge1997}; Long \& Knigge 2002 [hereafter LK02]\nocite{LK02},
Noebauer et al. 2010\nocite{noebauer};
Puebla et al. 2011\nocite{puebla2011}). The basic picture emerging from these efforts is
of a slowly accelerating, moderately collimated bipolar
outflow that carries away $\simeq 1\% - 10\%$ of the accreting
material. State-of-the-art simulations of line formation in this type
of disk wind can produce UV line profiles that are remarkably similar
to observations.
Much less is known about the effect of these outflows on the optical
spectra of high-state CVs. These spectra are typically characterized
by H and He emission lines superposed on a blue continuum. In many
cases, and particularly in the SW~Sex subclass of NLs
\citep{HSK86,DR95}, these lines are single-peaked. This is contrary to
theoretical expectations for lines formed in accretion disks, which
are predicted to be double-peaked \citep{smak1981, hornemarsh1986}.
{\em Low-state} CVs (dwarf novae in quiescence) do, in fact,
exhibit such double-peaked lines \citep{marshhorne1990}.
Murray \& Chiang (1996, 1997; hereafter referred to collectively as MC96)\nocite{MC96, MC97}
have shown that the presence of disk winds may
offer a natural explanation for the single-peaked optical emission lines in
high-state CVs, since they can strongly affect the radiative transfer
of line photons. Strong support for a significant wind contribution to the
optical emission lines comes from observations of eclipsing
systems. There, the single-peaked lines are often only weakly
eclipsed, and a significant fraction of the line flux remains visible
even near mid-eclipse \citep[e.g.][]{baptista2000,groot2004}.
This points to line formation in a spatially
extended region, such as a disk wind (see Fig.~\ref{novalikes}).
Further evidence for a wind contribution to the optical lines comes
from isolated observations of P-Cygni-like line profiles even in optical
lines, such as H$\alpha$\ and He \textsc{i} $\lambda5876$ \citep{patterson1996, RN98, kafka2004}.
Could disk winds also have an impact on the UV/optical {\em continuum}
of high-state CVs? This continuum is usually thought to be dominated
by the accretion disk and modelled by splitting the disk into
a set of concentric, optically thick, non-interacting annuli following
the standard $T_{eff}(R) \propto R^{-3/4}$ radial temperature
distribution \citep{shakurasunyaev1973}. In such
models, each annulus is taken to emit either as a blackbody or,
perhaps more realistically, as a stellar/disk atmosphere model
\citep{Schwarzenberg-Czerny1977,wade1984,wade1988}.
In the latter case, the local surface gravity, $\log{g}(R)$, is
assumed to be set solely by the accreting WD, since self-gravity is
negligible in CV disks.
Attempts to fit the observed spectral energy distributions (SEDs) of
high-state CVs with such models have met with mixed success. In
particular, the SEDs predicted by most stellar/disk atmosphere models
are too blue in the UV \citep{wade1988,long1991,long1994,knigge1998} and exhibit
stronger-than-observed Balmer jumps in absorption
\citep{wade1984,haug1987,ladous1989b,knigge1998}. One possible
explanation for these problems is that these models fail to capture
all of the relevant physics. Indeed, it has been argued that a
self-consistent treatment can
produce better agreement with observational data (e.g. Shaviv et
al. 1991; but see also Idan et al. 2010).
\nocite{idanshaviv2010} \nocite{shaviv1991}
However, an alternative explanation, suggested by Knigge et al.
(1998b; see also Hassall et al. 1985)\nocite{KLWB98,hassall},
is that recombination continuum emission from the base of the
disk wind might fill in the disk's
Balmer absorption edge and flatten the UV spectrum.
\nocite{groot2004}
\nocite{beuermann1990}
\nocite{beuermann1992}
\nocite{higginbottom2013}
Here, we carry out Monte Carlo radiative transfer simulations in
order to assess the likely impact of accretion disk winds on the
optical spectra of high-state CVs. More specifically, our goal is to
test whether disk winds of the type developed to account for the UV
resonance lines would also naturally produce significant amounts of
optical line and/or continuum emission. In order to achieve this, we
have implemented the `macro-atom' approach developed by Lucy
(2002, 2003) into the Monte Carlo ionization and radiative transfer
code described by LK02 (a process initiated by Sim et al. 2005; hereafter SDL05).
With this upgrade, the code is able to deal correctly with processes involving
excited levels, such as the recombination emission produced by CV
winds.
The remainder of this paper is organized as follows. In Section~2, we
briefly describe the code and the newly implemented macro-atom
approach. In Section~3, we describe the kinematics and geometry of our
disk wind model.
In Section~4, we present spectra simulated from the benchmark model
employed by LK02, and, in Section~5, we present a revised model
optimized for the optical waveband. In Section~6, we summarize our
findings.
\section{\sc{python}: A Monte Carlo Ionization and Radiative Transfer Code}
\textsc{Python}\ is a Monte Carlo ionization and radiative transfer code which
uses the Sobolev approximation to treat line transfer
\citep[e.g.][]{sobolev1957,sobolev1960,rybickihummer1978}.
The code has already been described extensively by LK02, SDL05 and Higginbottom et al. (2013; hereafter H13), so here we provide only a brief summary of its operation,
focusing particularly on new
aspects of our implementation of macro-atoms into the code.
\subsection{Basics}
\textsc{Python}\ operates in two distinct stages. First, the ionization state,
level populations and temperature structure are calculated. This is
done iteratively, by
propagating several populations of Monte Carlo energy quanta (`photons')
through a model wind. The geometric and kinematic properties of the
outflow are specified on a pre-defined spatial grid. In each of these
iterations (`ionization cycles'), the code records estimators that
characterize the radiation field in each grid cell. At the end
of each ionization cycle, a new electron temperature is calculated
that more closely balances heating and cooling in the
plasma. The radiative estimators and updated electron
temperature are then used to revise the ionization state of the wind,
and a new ionization cycle is started. The process is repeated until
heating and cooling are balanced throughout the wind.
This converged model is then used as the basis for the second set of
iterations (`spectral cycles'). In these, the emergent spectrum over
the desired spectral range is synthesized by tracking populations of
energy packets through the wind and computing the emergent spectra at
a number of user-specified viewing angles.
\textsc{Python}\ is designed to operate in a number of different
regimes, both in terms of the scale of the system and in terms of the
characteristics of the underlying radiation field.
It was originally developed by LK02 in order to model the UV spectra
of CVs with a simple biconical disk wind model. SDL05
\nocite{simmacro2005} used the code to model Brackett
and Pfund line profiles of H in young-stellar objects (YSOs). As part
of this effort, they implemented a `macro-atom' mode (see below) in
order to correctly treat H recombination lines with
\textsc{Python}. Finally, H13 used \textsc{Python}\ to model broad absorption line (BAL) QSOs. For
this application, an improved treatment of ionization was implemented,
so that the code is now capable of dealing with arbitrary
photo-ionizing SEDs, including non-thermal and multi-component ones.
\subsection{Ionization and Excitation: `Simple Atoms'}
\label{simpleatoms}
Prior to SDL05, the relative ionization fractions for all atomic
species were estimated via the modified Saha equation (Mazzali \&
Lucy 1993)
\begin{equation}
\frac{n_{j+1} n_e}{n_j} = W [\xi + W(1-\xi)]
\left(\frac{T_e}{T_R}\right)^{1/2}
\left(\frac{n_{j+1}n_e}{n_j}\right)^*_{T_R}. \label{ionization}
\end{equation}
Here, the `starred' term on the right represents abundances computed with
the Saha equation at temperature $T_R$, but using partition functions
from the dilute blackbody approximation.
$W$ is an effective dilution factor, $\xi$ is the
fraction of recombinations going directly to the ground state, and
$T_R$ and $T_e$ are the radiation and electron temperatures,
respectively. This simple ionization scheme produces reasonable
results when the photoionizing SED can be approximated by a dilute
blackbody. This is the case for high-state CVs. (As noted above, an
improved, but more complex treatment of ionization that is appropriate
for more complex SEDs is described in H13.)
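To make the scheme concrete, a minimal Python sketch of equation (\ref{ionization}) is given below. The Saha prefactor is the standard cgs constant; the inputs ($W$, $\xi$, $T_e$, $T_R$, ionization potential and partition functions) are placeholders that would be supplied by the wind model, and the example values are toy numbers only.
\begin{verbatim}
# Minimal sketch of the modified Saha correction, eq. (1).
import numpy as np

SAHA = 4.83e15      # 2 (2 pi m_e k / h^2)^{3/2}  [cm^-3 K^-3/2]
EV_OVER_K = 11604.5 # eV -> Kelvin

def lte_ion_ratio(T, chi_eV, z_up, z_lo):
    """LTE Saha ratio  n_{j+1} n_e / n_j  at temperature T."""
    return SAHA * (z_up/z_lo) * T**1.5 * np.exp(-chi_eV*EV_OVER_K/T)

def modified_saha_ratio(T_e, T_R, W, xi, chi_eV, z_up, z_lo):
    # dilution and temperature corrections applied to the T_R LTE ratio
    return W * (xi + W*(1.0 - xi)) * np.sqrt(T_e/T_R) \
           * lte_ion_ratio(T_R, chi_eV, z_up, z_lo)

# e.g. H ionization in a dilute 40 kK radiation field (toy numbers):
print(modified_saha_ratio(T_e=20e3, T_R=40e3, W=1e-2, xi=0.5,
                          chi_eV=13.6, z_up=1.0, z_lo=2.0))
\end{verbatim}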
Similarly, the relative excitation fractions within each ionization
stage of a given species were estimated via a modified (dilute) Boltzmann
equation,
\begin{equation}
\frac{n_{jk}}{n_j} = \frac{W g_k}{z_j(T_R)} \exp(-E_k/kT_R),
\end{equation}
where $n_{jk}$ is the population of level $k$ in ionic stage $j$,
$E_k$ is the energy difference between level $k$ and the ground state,
$g_k$ is the statistical weight of level $k$
and $z_j(T_R)$ is the partition function of ionic stage $j$.
Finally, \textsc{Python}\ originally modelled all bound-bound processes as transitions
within a simple two-level atom \cite[e.g.][]{mihalas}.
This framework was used for the treatment of line transfer and also
for the line heating and cooling calculations (see LK02).
The approximation works reasonably well for resonance
lines, such as C~\textsc{iv}~$\lambda1550$, in which the lower level is the ground state.
However, it is a poor approximation for many other
transitions, particularly those where the upper level
is primarily populated from above. Thus an improved method for
estimating excited level populations and simulating line transfer is
needed in order to model recombination lines and continua.
\subsection{Ionization and Excitation: Macro-Atoms}
Lucy (2002, 2003\nocite{lucy2002, lucy2003}; hereafter L02, L03)
has shown that it is possible to calculate the emissivity of a gas in
statistical equilibrium accurately by quantising matter into
`macro-atoms', and radiant and kinetic energy into indivisible energy
packets (r- and k- packets, respectively). His macro-atom scheme
allows for all possible transition paths from a given level and
provides a full non-local thermodynamic equilibrium (NLTE) solution
for the level populations based on Monte Carlo estimators. The macro-atom
technique has already been used to model Wolf-Rayet star
winds \citep{sim2004}, AGN disk winds \citep{simlong2008, tatum2012},
supernovae \citep{kromersim2009, kerzendorfsim} and YSOs (SDL05). A full
description of the approach can be found in L02 and L03.
Briefly, macro-atom NLTE level populations and ionization fractions
are calculated by solving the statistical equilbrium equations between
each pair of levels. In the framework of the Sobolev escape probability formalism (Rybicki \& Hummer 1978; L02; Sim 2004),
the bound-bound excitation rate, ${\cal R}_{lu}$, in an ion is given by
\begin{equation}
{\cal R}_{lu} = B_{lu} n_l J_{est} + C_{lu} n_l n_e,
\end{equation}
where $u$ and $l$ denote the upper and lower levels, $C$ represents the
collisional rate coefficients, and $B$ is the usual Einstein
coefficient. $J_{est}$ is the Monte Carlo estimator for the mean intensity
impinging on the Sobolev region, weighted by an angle-dependent escape probability,
given by \citep{sim2004}
\begin{equation}
J_{est} = \frac{c}{4 \pi \nu_0 V} \sum_{i} w_i \frac{1 - e^{-\tau_{s,i}}}{\tau_{s,i}} \frac{1}{(dv/ds)_i}.
\end{equation}
Here $w$ is the photon weight (in luminosity units), $\nu_0$
is the line frequency, $dv/ds$ is the velocity gradient and
$\tau_s$ is the Sobolev optical depth.
The sum is over all photons that come into resonance with the line,
and thus represents an integral over solid angle.
The corresponding de-excitation rate is then
\begin{equation}
{\cal R}_{ul} = \beta_{lu} A_{ul} n_u + B_{ul} n_u J_{est} +
C_{ul} n_u n_e,
\label{eq:nlte_rul}
\end{equation}
where $A$ is the usual Einstein coefficient.
The quantity $\beta_{lu}$ is the {\em angle-averaged} probability
that a given line photon will escape the Sobolev region.
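Schematically, the estimator $J_{est}$ above is accumulated during the ionization cycles each time a photon packet comes into resonance with the line. A minimal sketch follows; the packet weight, Sobolev optical depth, velocity gradient, line frequency and cell volume are all placeholders supplied by the transport loop, and this is an illustration rather than the code's actual data structures.
\begin{verbatim}
# Schematic accumulation of the Sobolev-weighted estimator J_est.
import math

C_LIGHT = 2.998e10   # cm / s

class JbarEstimator:
    def __init__(self, nu0, volume):
        self.nu0, self.volume, self.total = nu0, volume, 0.0

    def add_resonance(self, w, tau_s, dvds):
        # one term of the sum: w (1 - e^{-tau}) / tau * 1/(dv/ds)
        self.total += w * (1.0 - math.exp(-tau_s)) / tau_s / dvds

    def value(self):
        return C_LIGHT / (4.0*math.pi*self.nu0*self.volume) * self.total
\end{verbatim}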
In our implementation of the macro-atom approach, we also explicitly
take into account the photoionization and collisional ionization rates
between a lower level, $l$, and the continuum (or, in the case of ions
with more than one bound electron, the ground state of the upper ion),
$\kappa$,
\begin{equation}
{\cal R}_{l \kappa}= n_l \int_{\nu_0}^{\infty} \frac{ 4 \pi J_{\nu}
\sigma_{\nu}}{h \nu} d\nu + C_{l \kappa} n_l n_e.
\end{equation}
Here, $\sigma_{\nu}$ is the photoionization cross section, and $J_{\nu}$
is the mean intensity. The corresponding recombination rate is given
by
\begin{equation}
{\cal R}_{\kappa l} = \alpha_{\kappa l} n_{\kappa} n_e + C_{\kappa l}
n_\kappa n_e, \\
\end{equation}
where $\alpha_{\kappa l}$ is the radiative recombination coefficient
to level $l$. This treatment means that radiative and collisional
rates to and from all levels are considered when calculating both the
ionization state and the level populations, although we neglect
ionization directly to excited levels of the upper ion. The
\cite{vanregemorter} approximation is used for collisional
transitions. This means that collisions between radiatively
forbidden transitions are not taken into account when one
splits levels into $l$- and $s$-subshells, as well
as principal quantum number, $n$ (as we have done with He~\textsc{i};
see section~\ref{sec:data}). Although this approximation is, in general,
a poor one, the effect is second order in the physical
regime where recombination lines are formed in our models.
This is because bound-free processes are dominant in determining
level populations and emissivities. We have verified that this
is indeed the case in the He~\textsc{i} emission regions in our models.
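To illustrate how the populations follow from these rates, the sketch below assembles a rate matrix from pairwise total rates (the expressions above, evaluated with the Monte Carlo estimators) and solves the statistical equilibrium system with a number-conservation constraint; the toy rates are placeholders, not values from any converged model.
\begin{verbatim}
# Solve statistical equilibrium  sum_j n_j R_{ji} = n_i sum_j R_{ij}
# with the normalization sum_i n_i = 1.  R[i, j] is the total (radiative
# plus collisional) rate from level i to level j; values are placeholders.
import numpy as np

def solve_populations(R):
    nlev = R.shape[0]
    M = R.T.copy()                        # inflow into each level
    np.fill_diagonal(M, -R.sum(axis=1))   # minus total outflow
    M[-1, :] = 1.0                        # replace one row by sum(n) = 1
    rhs = np.zeros(nlev); rhs[-1] = 1.0
    return np.linalg.solve(M, rhs)

R = np.array([[0.0, 2.0, 0.5],            # toy 3-level example (s^-1)
              [1.0, 0.0, 0.3],
              [0.1, 0.4, 0.0]])
print(solve_populations(R))
\end{verbatim}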
\subsection{Ionization and Excitation: A Hybrid Approach}
SDL05 implemented a macro-atom treatment of H in \textsc{Python}\ and used
this to predict the observable properties of a pure H wind
model for YSOs. Our goal here is to simultaneously model the optical
and ultraviolet spectra of high-state CVs. Since the optical spectra
are dominated by H and He recombination lines, both of these species
need to be treated as macro-atoms. The UV spectra, on the other hand,
are dominated by resonance lines associated with metals. This means we
need to include these species in our models, but they can be treated
with our (much faster) simple-atom approach. We have therefore
implemented a hybrid ionization and excitation scheme into \textsc{Python}. Any
atomic species can now be treated either in our simple-atom
approximation or with the full macro-atom machinery. In our CV
models, we treat H and He as macro-atoms and all metals as
simple-atoms. Species treated with either method
are fully taken into account as sources of both bound-free opacity and line opacity,
and contribute to the heating and cooling balance of the plasma.
\subsection{Atomic Data}
\label{sec:data}
We generally use the same atomic data as H13, which is an updated
version of that described by LK02. In addition, we follow SDL05 in
treating H as a 20-level atom, where each level is defined by
the principal quantum number, $n$. For the macro-atom treatment of
He, we have added the additional level and line information required
from \textsc{Topbase} \citep{topbase2005}. He~\textsc{ii} is treated
in much the same way as H, but with 10 levels. He~\textsc{i} has
larger energy differentials between different l-subshells and triplet
and singlet states. Thus, we still include levels up to $n=10$, but
explicitly treat the $l$ and $s$ sub-orbitals as distinct levels
instead of assuming they are perfectly `l-mixed'. This allows us
to model the singlet and triplet He~\textsc{i} lines that are ubiquitous
in the optical spectra of CVs \citep[e.g.][]{dhillon1996}.
\subsection{Code Validation and Testing}
\textsc{Python}\ has been tested against a number of radiative transfer and
photoionization codes. LK02 and H13 conducted comparisons of
ionization balance with \textsc{Cloudy} \citep{cloudy2013}, demonstrating
excellent agreement. We have also carried out comparisons
of ionization and spectral synthesis with the supernova code
\textsc{Tardis}. \textsc{Tardis} is described by
\cite{kerzendorfsim}, and the spectral comparisons can be found
therein. For the effort reported here, we have additionally carried
out tests of the macro-atom scheme in \textsc{Python}. Fig.~\ref{tests} shows
two of these tests. In the top panel, we compare the Balmer series
emissivities as predicted by \textsc{Python}\ in the l-mixed Case~B limit against the
analytical calculations by \cite{seaton1959}. In the bottom panel, we
compare \textsc{Python}\ and \textsc{Tardis} predictions of He \textsc{i} level populations
for a particular test case. Agreement is excellent for both H and He.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig_caseb_tardis.eps}
\caption{
{\sl Top Panel:} `Case B' Balmer decrements computed
with \textsc{Python} compared to analytic calculations
by Seaton (1959). Both calculations assume $T_e=10,000$~K
(see Osterbrock 1989 for a discussion of this commonly used approximation).
{\sl Bottom Panel:} a comparison of He I level populations (the most complex ion we currently
treat as a macro-atom) between \textsc{Python}\ and \textsc{Tardis} models.
The calculation is conducted with physical parameters of $n_e=5.96\times10^4$~cm$^{-3}$,
$T_e=30,600$K, $T_R=43,482$K and $W=9.65\times10^{-5}$.
Considering that the two codes use different atomic data, and that \textsc{Tardis}, unlike \textsc{Python}, currently has a complete treatment of collisions in
radiatively forbidden transitions, the factor of
$<2$ agreement is encouraging.
}
\label{tests}
\end{figure}
\nocite{osterbrock}
\nocite{seaton1959}
\section{Describing the System and its Outflow}
\textsc{Python}\ includes several different kinematic models of accretion disk
winds, as well as different options for describing the physical and
radiative properties of the wind-driving system under
consideration. Most of these features have already been discussed by
LK02 and H13, so below we only briefly recount the key aspects of the
particular system and wind model used in the present study.
\subsection{Wind Geometry and Kinematics}
\label{kinematics}
We adopt the kinematic disk wind model developed by SV93.
A schematic of this model is shown in
Fig.~\ref{cartoon}. In this parametrization, a smooth, biconical
disk wind emanates from the accretion disk between radii $r_{min}$ and
$r_{max}$. The covering fraction of the outflow is also controlled by the
inner and outer opening angles of the wind, $\theta_{min}$ and
$\theta_{max}$, and the launch angle of the other streamlines is given
by
\begin{equation}
\theta(r_0) = \theta_{min} + (\theta_{max} - \theta_{min}) \left(\frac{r_0 - r_{min}}{r_{max} - r_{min}} \right)^{\gamma},
\label{theta}
\end{equation}
where $r_0$ is the launch radius of the streamline.
The poloidal (non-rotational) velocity field of the wind, $v_l$, is given by
\begin{equation}
v_l=v_0+\left[v_{\infty}(r_0)-v_0\right]\frac{\left(l/R_v\right)^{\alpha}}{\left(l/R_v\right)^{\alpha}+1},
\label{v_law}
\end{equation}
where $l$ is the poloidal distance along a particular wind
streamline. The terminal velocity along a streamline, $v_{\infty}$, is
set to a fixed multiple of $v_{esc}$, the escape velocity at the launch
point. The launch velocity from the disk surface, $v_0$, is assumed to
be constant (set to $6$~km~s$^{-1}$). Once the wind is launched, it
accelerates, reaching half of its terminal velocity at $l = R_v$. The
velocity law exponent $\alpha$ controls how quickly the wind
accelerates. Larger values of $\alpha$ cause the main region of
acceleration to occur close to $R_v$, whereas smaller values
correspond to fast acceleration close to the disk (see
Fig.~\ref{acc_law}). The rotational velocity $v_\phi$ is
Keplerian at the base of the streamline
and we assume conservation of specific angular momentum,
such that
\begin{equation}
v_\phi r = v_{k} r_0,
\label{v_phi}
\end{equation}
where $v_{k}=(GM_{WD}/r_0)^{1/2}$.
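
As a minimal sketch of this kinematic prescription (with no claim to
reproduce \textsc{Python}'s internals), the functions below evaluate
the launch angle, poloidal velocity law and rotational velocity for
the model A parameters of Table~\ref{wind_param}:
\begin{verbatim}
import numpy as np

G, Msun = 6.674e-8, 1.989e33               # cgs
M_WD, R_WD = 0.8 * Msun, 7e8
rmin, rmax = 4 * R_WD, 12 * R_WD
th_min, th_max = np.radians(20.0), np.radians(65.0)
gamma, alpha, Rv, f = 1.0, 1.5, 100 * R_WD, 3.0
v0 = 6e5                                   # launch speed (cm/s)

def theta(r0):    # streamline launch angle, eq. (theta)
    return th_min + (th_max - th_min) * ((r0 - rmin) / (rmax - rmin))**gamma

def v_l(l, r0):   # poloidal velocity law, eq. (v_law)
    vinf = f * np.sqrt(2 * G * M_WD / r0)  # 3 x escape velocity
    x = (l / Rv)**alpha
    return v0 + (vinf - v0) * x / (x + 1.0)

def v_phi(r, r0): # specific angular momentum conservation
    return np.sqrt(G * M_WD / r0) * r0 / r

r0 = 8 * R_WD
print(np.degrees(theta(r0)))               # 42.5 degrees
print(v_l(Rv, r0) / 1e5, "km/s")           # ~ half the terminal velocity
\end{verbatim}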
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fig2_cartoon.eps}
\caption{Cartoon illustrating the geometry and kinematics of the benchmark CV wind model.}
\label{cartoon}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{acc_law.eps}
\caption{
The adopted poloidal velocity law for various values of the
acceleration exponent, $\alpha$.
}
\label{acc_law}
\end{figure}
The density at position $(r,z)$ in the wind, $\rho(r,z)$, is
calculated from the mass continuity equation, yielding
\begin{equation}
\rho(r,z) = \frac{r_0}{r} \frac{dr_0}{dr} \frac{\phi(r_0)}{v_z(r,z)}.
\label{density}
\end{equation}
Here,
$v_z$ is the vertical velocity component and, following SV93,
$\phi(r_0)$ is the local mass-loss rate per unit area at $r_0$,
defined as
\begin{equation}
\phi(r_0) \propto \dot{M}_{wind} r_0^\lambda \cos [\theta(r_0)].
\label{mdot_local}
\end{equation}
We adopt $\lambda = 0$ and normalize $\phi(r_0)$ by
matching its integral over both sides of the disk
to the user-specified total mass-loss rate, $\dot{M}_{wind}$.
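
The normalization step can be made explicit with a short sketch
(assumed parameter values from Table~\ref{wind_param}; this is an
illustration, not our production code):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

R_WD = 7e8
rmin, rmax = 4 * R_WD, 12 * R_WD
th_min, th_max = np.radians(20.0), np.radians(65.0)
theta = lambda r0: th_min + (th_max - th_min) * (r0 - rmin) / (rmax - rmin)

Mdot_wind = 1e-9 * 1.989e33 / 3.156e7      # g/s
I, _ = quad(lambda r0: np.cos(theta(r0)) * 2 * np.pi * r0, rmin, rmax)
k = Mdot_wind / (2.0 * I)                  # factor 2: both disk surfaces
phi = lambda r0: k * np.cos(theta(r0))     # g/s/cm^2, lambda = 0 case
print(phi(8 * R_WD))
\end{verbatim}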
\subsection{Sources and Sinks of Radiation}
\label{radsources}
The net photon sources in our CV model are the accretion disk, the
WD and, in principle, a boundary layer with user-defined temperature
and luminosity. All of these radiating bodies are taken to be
optically thick, and photons striking them are assumed to be destroyed
instantaneously. The secondary star is not included as a radiation
source, but is included as an occulting body. This allows us to model
eclipses. Finally, emission from the wind itself is also accounted for, but
note that we assume the outflow to be in radiative equilibrium. Thus all
of the heating of the wind, as well as its emission, is ultimately
powered by the radiation field of the net photon sources in the
simulation. In the following sections, we will describe our treatment
of these system components in slightly more detail.
\subsubsection{Accretion Disk}
\textsc{Python}\ has some flexibility when treating the accretion
disk as a source of photons. The disk is broken down into annuli
such that each annulus contributes an equal amount to the bolometric
luminosity. We take the disk to be geometrically thin, but optically
thick, and thus adopt the temperature profile of a standard
\cite{shakurasunyaev1973} $\alpha$-disk. An annulus can then
be treated either as a blackbody with the corresponding effective
temperature or as a stellar atmosphere model with the appropriate
surface gravity and effective temperature. Here, we use blackbodies
during the ionization cycles and to compute our Monte Carlo
estimators. However, during the spectral synthesis stage of the
simulation we use stellar atmosphere models. This produces more
realistic model spectra and allows us to test if recombination
emission from the wind base can fill in the Balmer jump, which is
always in absorption in these models. Our synthetic stellar atmosphere
spectra are calculated with
\textsc{Synspec}\footnote{http://nova.astro.umd.edu/Synspec43/synspec.html}
from either Kurucz \citep{kurucz1991} atmospheres (for $T_{eff} \leq
50,000$~K) or from \textsc{TLUSTY} \citep{tlusty} models (for $T_{eff} > 50,000$~K).
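
The sketch below illustrates the equal-luminosity gridding for a
standard $\alpha$-disk with the model A parameters; it is a schematic
of the approach, not the actual gridding routine used in \textsc{Python}.
\begin{verbatim}
import numpy as np

G, Msun, sb = 6.674e-8, 1.989e33, 5.67e-5  # cgs
M, Rwd = 0.8 * Msun, 7e8
Mdot = 1e-8 * Msun / 3.156e7               # accretion rate (g/s)
Rout = 34.3 * Rwd

R = np.linspace(1.0001 * Rwd, Rout, 100000)
T4 = 3 * G * M * Mdot / (8 * np.pi * sb * R**3) * (1 - np.sqrt(Rwd / R))
dLdR = 2 * (2 * np.pi * R) * sb * T4       # both disk faces
L = np.cumsum(dLdR * np.gradient(R))       # cumulative luminosity

n_ann = 10                                 # assumed number of annuli
edges = np.interp(np.linspace(0, L[-1], n_ann + 1), L, R)
print(edges / Rwd)                         # boundaries in units of R_WD
\end{verbatim}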
\subsubsection{White Dwarf}
The WD at the center of the disk is always present as a spherical occulting
body with radius $R_{WD}$ in \textsc{Python}\ CV models, but it can also be included
as a source of radiation. In the models presented here, we treat the
WD as a blackbody radiator with temperature $T_{WD}$ and luminosity
$L_{WD} = 4\pi R_{WD}^2 \sigma T_{WD}^4$.
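For the Table~\ref{wind_param} values ($T_{WD}=40,000$~K,
$R_{WD}=7\times10^{8}$~cm), this evaluates to
$L_{WD} \approx 9\times10^{32}$~erg~s$^{-1}$, roughly $0.2~L_{\odot}$.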
\subsubsection{Boundary Layer}
It is possible to include radiation from a boundary layer (BL) between
the disk and the WD. In \textsc{Python}, the BL is described as
a blackbody with a user-specified effective temperature and
luminosity. In the models presented here, we have followed LK02 in setting
the BL luminosity to zero. However, we have confirmed that the addition of an isotropic
BL with $L_{BL} = 0.5 L_{acc}$ and temperatures in the range $80~{\rm
kK} \leq T_{BL} \leq 200~{\rm kK}$ would not change any of our main
conclusions.
\subsubsection{Secondary Star}
The donor star is included in the system as a pure radiation sink,
i.e. it does not emit photons, but absorbs any photons that strike its
surface. The secondary is assumed to be Roche-lobe filling, so its
shape and relative size are defined by setting the mass ratio of the system,
$q = M_2/M_{WD}$. The inclusion of the donor star as an occulting body
allows us to model eclipses of the disk and the wind. For this
purpose, we assume a circular orbit with a semi-major axis $a$ and
specify orbital phase such that $\Phi_{orb} = 0$ is the
inferior conjunction of the secondary (i.e. mid-eclipse for $i \simeq
90^o$).
\begin{table}
\centering
\begin{tabular}{p{2cm}p{2cm}p{2cm}}
\multicolumn{3}{l}{Model Parameters} \\
\hline Parameter & Model A & Model B \\
\hline \hline
$M_{WD}$ & $0.8~M_{\odot}$ & \\
$R_{WD}$ & $7\times10^{8}$~cm & \\
$T_{WD}$ & $40,000$~K & \\
$M_{2}$ & -& $0.6~M_{\odot}$ \\
$q$ &- & $0.75$ \\
$P_{orb}$ &- & $5.57$~hr \\
$a$ & -& $194.4~R_{WD}$ \\
$R_2$ & - & $69.0~R_{WD}$ \\
$\dot{M}_{acc}$ & $10^{-8}~M_{\odot}yr^{-1}$ &\\
$\dot{M}_{wind}$ & $10^{-9}~M_{\odot}yr^{-1}$ & \\
$r_{min}$ & $4~R_{WD}$ & \\
$r_{max}$ & $12~R_{WD}$ & \\
$r_{disk}$(max) & $34.3~R_{WD}$ & \\
$\theta_{min}$ & $20.0^{\circ}$ & \\
$\theta_{max}$ & $65.0^{\circ}$ & \\
$\gamma$ & $1$ & \\
$v_{\infty}$ & $3~v_{esc}$ & \\
$R_v$ & $100~R_{WD}$ & $142.9~R_{WD}$ \\
$\alpha$ & $1.5$ & $4$\\
\end{tabular}
\centering
\caption{
Parameters used for the geometry and kinematics of the benchmark
CV model (model A), which is optimized for the UV band, and a model
which is optimized for the optical band and described in section 5 (model B).
For model B, only parameters which are altered are given; otherwise the
model A parameter is used. $P_{orb}$ is the orbital period
(the value for RW Tri from Walker 1963 is adopted, see section 5.4) and
$R_2$ is the radius of a sphere with the volume of the secondary's Roche lobe.
Other quantities are defined in the text or Fig.~\ref{cartoon}.
Secondary star parameters are only quoted for
model B as we do not show eclipses with the
benchmark model (see section 5.4).
}
\label{wind_param}
\label{modelb_table}
\end{table}
\nocite{walker1963}
\section{A Benchmark Disk Wind Model}
\label{modela}
Our main goal is to test whether the type of disk wind model that has
been successful in explaining the UV spectra of CVs could also have a
significant impact on the optical continuum and emission line spectra
of these systems. In order to set a benchmark, we therefore begin by
investigating one of the fiducial CV wind models that was used by SV93
and LK02 to simulate the UV spectrum of a typical high-state
system. The specific parameters for this model (model A) are listed in
Table~1. A key point is that the wind mass-loss rate in this model is
set to 10$\%$ of the accretion rate through the disk. The inner edge of
the wind ($r_{min}$) is set to $4~R_{WD}$ following SV93.
The sensitivity to some of these parameters is briefly discussed in
section~5.
\subsection{Physical Structure and Ionization State}
\label{modela_ionization}
\begin{figure*}
\includegraphics[width=0.8\textwidth]{fig5.eps}
\caption{
The physical properties of the wind -- note the logarithmic scale.
Near the disk plane the wind is dense, with low poloidal velocities.
As the wind accelerates it becomes less dense
and more highly ionized. The dominant He ion
is almost always He III, apart from in a small
portion of the wind at the base, which is partially shielded
from the inner disk.
}
\label{wind}
\end{figure*}
Fig.~\ref{wind} shows the physical and ionization structure
of the benchmark disk wind model. The ionization parameter shown in the bottom
right panel is given by
\begin{equation}
U = \frac{4\pi}{n_H c}\int_{13.6{\rm{eV}}}^{\infty}\frac{{J_{\nu}d\nu}}{h\nu},
\end{equation}
\noindent where $n_H$ is the local number density of H, and $\nu$ denotes photon
frequency. The ionization parameter is a useful measure of the ionization state of a plasma,
as it evaluates the ratio of the number density of ionizing photons to the local
H density.
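
A quick numerical illustration of this quantity for a dilute blackbody
radiation field is given below; the dilution factor, temperature and
density are assumed values chosen only to land in the $\log U \sim 2$
regime discussed later, and are not read from our models.
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs
nu1 = 13.6 * 1.602e-12 / h                 # 13.6 eV threshold (Hz)
nu = np.linspace(nu1, 50 * nu1, 20000)

W, T, n_H = 0.01, 40000.0, 1e10            # assumed field and density
J_nu = W * (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

U = 4 * np.pi / (n_H * c) * np.trapz(J_nu / (h * nu), nu)
print("log U =", np.log10(U))
\end{verbatim}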
There is an obvious drop-off in density
and temperature with distance away from the disk, so any line
formation process that scales as $\rho^2$ -- i.e. recombination and
collisionally excited emission -- should be expected to operate
primarily in the dense base of the outflow. Moreover, a comparison of
the rotational and poloidal velocity fields shows that rotation
dominates in the near-disk regime, while outflow dominates further out
in the wind.
The ionization equation used in the `simple-atom' approach of
LK02 (see section~\ref{simpleatoms}) should be a reasonable approximation to
the photoionization equilibrium in the benchmark wind model. Even
though the macro-atom treatment of H and He does affect the
computation of the overall ionization equilibrium, we would expect the
resulting ionization state of the wind to be quite similar to that
found by LK02. The bottom panels in Fig.~\ref{wind} confirm that this
is the case. In particular, He is fully ionized
throughout most of the outflow, except for a small region near the
base of the wind, which is shielded from the photons produced by the
hot inner disk. In line with the results of LK02, we also find
that C\textsc{iv} is the dominant C ion throughout the wind,
resulting in a substantial absorbing column across a large range of
velocities. As we shall see, this produces the broad, deep and
blue-shifted C\textsc{iv}~$\lambda1550$ absorption line that
is usually the most prominent wind-formed feature in the UV spectra of
low-inclination nova-like CVs.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{modela_uv_opt.eps}
\caption{
UV (left) and optical (right) synthetic spectra for model A, our benchmark model,
computed at sightlines of 10, 27.5, 45, 62.5 and 80 degrees.
The inset plots show zoomed-in line profiles for
He~\textsc{ii}~$\lambda1640$\ and H$\alpha$. Double-peaked line emission can be seen in
He~\textsc{ii}~$\lambda1640$, He~\textsc{ii}~$\lambda4686$, H$\alpha$\ and some He I lines, but the
line emission is not always sufficient to overcome the absorption
cores from the stellar atmosphere models. The model
also produces a prominent He~\textsc{ii}~$\lambda3202$\ line at high inclinations.
}
\label{spec}
\end{figure*}
\subsection{Synthetic Spectra}
\label{modela_spectrum}
We begin by verifying that the benchmark model still produces UV
spectra that resemble those observed in CVs. We do expect this to be
the case, since the ionization state of the wind has not changed
significantly from that computed by LK02 (see section~\ref{modela_ionization}).
The left column of panels in Fig.~\ref{spec} shows that this expectation
is met: all of the strong metal resonance
lines -- notably N~\textsc{v}~$\lambda1240$,
Si~\textsc{iv}~$\lambda1400$ and C~\textsc{iv}~$\lambda1550$ --
are present and exhibit clear P-Cygni profiles
at intermediate inclinations. In addition, however, we now also find
that the wind produces significant Ly$\alpha$ and
He~\textsc{ii}~$\lambda1640$ emission lines.
Fig.~\ref{spec} (right-hand panel) and Fig.~\ref{spec_continuum}
show the corresponding optical spectra produced for
the benchmark model, and these do exhibit some emission lines
associated with H and He. We see a general trend from absorption lines to emission lines
with increasing inclination, as one might expect from our wind
geometry. This trend is consistent with observations, as can be seen
in Fig.~1. However, it is clear that this particular model
does not produce all of the lines seen in observations of high-state CVs.
The higher-order Balmer series lines are too weak
to overcome the intrinsic absorption from the disk atmosphere, and the wind
fails to produce any observable emission at low and intermediate inclinations.
This contrasts with the fact that emission lines are seen
in the optical spectra of (for example) V3885 Sgr \citep{hartley2005}
and IX Vel \citep[][see also Fig.~1]{beuermann1990}.
The emissivity of these recombination
features scales as $\rho^2$, meaning that they form almost entirely in the
dense base of the wind, just above the accretion disk. Here, the
velocity field of the wind is still dominated by rotation, rather than
outflow, which accounts for the double-peaked shape of the lines. In
principle, lines formed in this region can still be single peaked,
since the existence of a poloidal velocity {\em gradient} changes the
local escape probabilities (MC96). However, as
discussed further in section~5.3, the
radial velocity shear in our
models is not high enough for this radiative transfer effect
to dominate the line shapes.
The Balmer jump is in absorption at all inclinations for our benchmark
model. This is due to the stellar atmospheres we have used to
model the disk spectrum; it is not a result of photoabsorption in the
wind. In fact, the wind spectrum exhibits the Balmer jump in {\em
emission}, but this is not strong enough to overcome the intrinsic
absorption edge in the disk spectrum. This is illustrated in
Fig.~\ref{cont}, which shows the angle-integrated spectrum of the system,
i.e. the spectrum formed by all escaping photons, separated into the
disk and wind contributions. Even though the wind-formed Balmer
recombination continuum does not completely fill in the Balmer
absorption edge in this model, it does already contribute
significantly to the total spectrum. This suggests that modest changes
to the outflow kinematics might boost the wind continuum and produce
emergent spectra with weak or absent Balmer absorption edges.
\begin{figure}
\includegraphics[width=0.45\textwidth]{modela_opt_cont.eps}
\caption{Synthetic optical spectra from model A computed for
sightlines of 10, 27.5, 45, 62.5 and 80 degrees. In these plots
the flux is divided by a polynomial fit to the
underlying continuum redward of the Balmer edge, so that
line-to-continuum ratios and the true depth of the
Balmer jump can be shown.}
\label{spec_continuum}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{modela_escaping.eps}
\caption{Total packet-binned spectra across all viewing angles, in units
of monochromatic luminosity.
The thick black line shows the total
integrated escaping spectrum,
while the green line shows disk photons which escape without being reprocessed by
the wind. The red line shows the contributions from reprocessed photons.
Recombination continuum emission blueward of the Balmer
edge is already prominent relative to other wind continuum processes, but is not sufficient
to fill in the Balmer jump in this specific model.}
\label{cont}
\end{figure}
\newpage
\section{A Revised Model Optimized for Optical Wavelengths}
The benchmark model discussed in section~\ref{modela} was originally
designed to reproduce the wind-formed lines seen in the UV spectra of
high-state CVs. As we have seen, this model does produce some observable
optical emission. We can now attempt to construct a model that more closely
matches the observed optical spectra of CVs.
Specifically, we aim to assess whether a revised model can:
\begin{itemize}
\item account for all of the lines we see in optical spectra of CVs while preserving
the UV behaviour;
\item produce single-peaked Balmer emission lines;
\item generate enough of a wind-formed recombination continuum
to completely fill in the disk's Balmer absorption edge for
reasonable outflow parameters.
\end{itemize}
The emission measure of a plasma is proportional to the square of its density.
The simplest way to simultaneously affect the density in the wind (for fixed mass loss rate),
as well as the velocity gradients, is by modifying the poloidal velocity
law. Therefore, we focus on just two kinematic variables (section~\ref{kinematics}):
\begin{itemize}
\item the acceleration length, $R_v$, which controls the
distance over which the wind accelerates to $\frac{1}{2}~v_{\infty}$;
\item the acceleration exponent, $\alpha$, which controls the rate
at which the poloidal velocity changes near $R_v$.
\end{itemize}
The general behaviour we might expect is that outflows with denser
regions near the wind base -- i.e. winds with larger $R_{v}$ and/or
larger $\alpha$ -- will produce stronger optical emission signatures.
However, this behaviour may be moderated by the effect of the increasing
optical depth through this region, which can also affect the line profile shapes.
In addition, modifying $R_v$ also increases the emission {\em volume}.
Based on a preliminary exploration of models with different kinematics,
we adopt the parameters listed in table~\ref{modelb_table}
for our `optically optimized' model (model B).
\subsection{Synthetic Spectra}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{modelb_uv_opt.eps}
\caption{
UV (left) and optical (right) synthetic spectra for model B computed at
sightlines of 10, 27.5, 45, 62.5 and 80 degrees.
Model A is shown in grey for comparison.
The inset plots show zoomed-in line profiles for
He~\textsc{ii}~$\lambda1640$\ and H$\alpha$. The Balmer and He lines
are double-peaked, albeit with narrower profiles.
Strong He~\textsc{ii}~$\lambda4686$\ emission can be seen, as well as a trend
of a deeper Balmer jump with decreasing inclination.
}
\label{uvoptb}
\end{figure*}
Fig.~\ref{uvoptb} shows the UV and optical spectra for the
optically optimized model for the full range of inclinations.
As expected, the trend from absorption to emission
in the optical is again present, but in this revised model we produce emission
lines in the entire Balmer series at high inclinations, as well as the observed lines
in He~\textsc{ii} and He~\textsc{i}. This can be seen more clearly in the
continuum-normalized spectrum in Fig.~\ref{continuumb}.
Two other features are worth noting in the optical
spectrum. First, the collisionally excited Ca~{\sc ii} emission line at 3934~\AA\
becomes quite prominent in our densest models. Second, our model predicts a detectable
He~\textsc{ii} recombination line at 3202~\AA. This is the He
equivalent of Paschen~$\beta$ and should be expected in all systems that
feature a strong He~\textsc{ii}~$\lambda4686$ line (the He
equivalent of Paschen~$\alpha$).
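Indeed, in the hydrogenic approximation the He~\textsc{ii} wavelengths
follow from the H lines by $Z^2=4$ scaling:
$\lambda({\rm Pa}\beta)/4 \approx 12818/4~\mbox{\AA} \approx 3204$~\AA,
close to the $\lambda3202$ feature, and
$\lambda({\rm Pa}\alpha)/4 \approx 18751/4~\mbox{\AA} \approx 4688$~\AA.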
This line is somewhat unfamiliar observationally, because it
lies bluewards of the atmospheric cut-off, but
also redwards of most ultraviolet spectra.
Our models do not exhibit P-Cygni profiles in the optical lines.
This is perhaps not surprising. LK02 and SV93 originally designed such models
to reproduce the UV line profiles. Thus, most of the wind
has an ionization parameter of $\log U \sim 2$ (see Fig.~\ref{wind}).
This means H and He are fully ionized throughout
much of the wind and are successful in producing recombination features.
However, the line opacity throughout the wind is too
low to produce noticeable blue shifted absorption.
We suspect that the systems that exhibit such profiles must
possess a higher degree of ionization stratification, although the lack
of contemporary observations means it is not known for certain if the
P-Cygni profiles in UV resonance lines and optical H and He lines exist simultaneously.
Ionization stratification could be caused by a clumpy flow, in which the ionization state
changes due to small scale density fluctuations, or a stratification in density
and ionizing radiation field over larger scales.
Invoking clumpiness in these outflows is not an unreasonable
hypothesis. Theories of line-driven winds predict an unstable flow
\citep{macgregor1979,owockirybicki1984,owockirybicki1985}, and
simulations of CV disk winds also produce density inhomogeneities
\citep{proga1998,pkdh2002}.
Tentative evidence for clumping being directly related to P-Cygni optical lines
comes from the fact that \cite{prinja2000}
found the dwarf nova BZ Cam's outflow to be unsteady and highly mass-loaded in outburst,
based on observations of the UV resonance lines.
This system has also exhibited P-Cygni profiles in He~\textsc{i}~$\lambda5876$
and H$\alpha$\ when in a high-state \citep{patterson1996,RN98}.
The degree of ionization and density variation and
subsequent line opacities may be affected by our model parameters
and the specific parameterisation we have adopted.
In the UV, the model still produces all the observed lines,
and deep P-Cygni profiles are produced in the normal resonance lines,
as discussed in section 4.2. However, the UV spectra also
display what is perhaps the biggest problem with this revised model,
namely the strength of resonance line emission
at low and intermediate inclinations.
In order to generate strong optical wind signatures, we have adopted wind
parameters that lead to very high densities at the base of the wind
($n_e\sim10^{13}-10^{14}$~cm$^{-3}$). This produces
the desired optical recombination emission, but also increases the
role of collisional excitation in the formation of the UV resonance
lines. This explains the pronounced increase in the emission component
of the C\textsc{iv} $\lambda1550$ resonance line, for example, relative to
what was seen in the benchmark model (compare Figures~\ref{spec} and
\ref{uvoptb}). The strength of this component in the revised model
is probably somewhat too high to be consistent with UV observations
of high-state CVs (see e.g. Long et al. 1991, 1994; Noebauer et al. 2010).
\nocite{long1991,long1994, noebauer}
\subsection{Continuum Shape and the Balmer Jump}
The wind now also has a clear effect on the continuum shape,
as shown by Fig.~\ref{modelb_escape}. In fact, the majority of the
escaping spectrum has been reprocessed in some way by the wind,
either by electron scattering (the wind is now moderately Thomson-thick),
or by bound-free processes. This is demonstrated by the flatter spectral shape
and the slight He photoabsorption edge present in the optical spectrum
(marked in Fig.~\ref{continuumb}). This reprocessing is also
responsible for the change in continuum level between models A and B.
In addition, Figures~\ref{uvoptb}, \ref{continuumb}
and \ref{modelb_escape} clearly demonstrate that the wind produces
a recombination continuum sufficient to completely fill in the Balmer jump
at high inclinations.\footnote{Note that the apparent absorption feature
just redward of the Balmer jump in these models is artificial. It is
caused by residual line blanketing in the stellar atmospheres, which
our models cannot fill in since they employ a 20-level H atom.}
This might suggest that Balmer continuum emission from a wind can be important
in shaping the Balmer jump region, as
originally suggested by Knigge et al.
(1998b; see also Hassall et al. 1985)\nocite{KLWB98,hassall}.
It should be acknowledged, however,
that the Balmer jump in high-state CVs would naturally weaken at
high inclinations due to limb darkening effects \citep{ladous1989, ladous1989b}.
Although we include a simple limb darkening law which affects
the emergent flux at each inclination, we do not
include it as a {\em frequency dependent} opacity in our model.
As a result, the efficiency of filling in the Balmer jump
should really be judged at low and medium inclinations,
where, although prominent, the recombination continuum does
not overcome the disk atmosphere absorption.
In addition, this effect
could mean that any model which successfully fills in the
jump at low inclinations could lead to a Balmer jump
in emission at high inclinations.
In any case, to properly understand this phenomenon, a fully self-consistent
radiative transfer calculation of both the disk atmosphere
and connected wind is required.
\begin{figure}
\includegraphics[width=0.45\textwidth]{modelb_opt_cont.eps}
\caption{
Synthetic optical spectra from model B computed for
sightlines of 10, 27.5, 45, 62.5 and 80 degrees.
Model A is shown in grey for comparison.
In these plots the flux is divided by a polynomial fit to the
underlying continuum redward of the Balmer edge, so that
line-to-continuum ratios and the true depth of the
Balmer jump can be shown.
}
\label{continuumb}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{modelb_escaping.eps}
\caption{Total packet-binned spectra across all viewing angles, in units
of monochromatic luminosity.
The thick black line shows the total
integrated escaping spectrum,
while the green line shows disk photons which escape without being reprocessed by
the wind. The red line shows the contributions from reprocessed
photons.
In this denser model the reprocessed contribution is significant compared
to the escaping disk spectrum. The Balmer continuum emission is prominent, and
the wind has a clear effect on the overall spectral shape.}
\label{modelb_escape}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{mc.eps}
\caption{
H$\alpha$\ line profiles, normalized to 1, plotted in velocity space
for three models with varying kinematic
properties, computed at an inclination of $80^\circ$.
The benchmark model and the improved optical
model described in section 5 are labeled as A and B respectively,
and a third model (X), which has an increased acceleration length of
$R_v = 283.8~R_{WD}$ and $\alpha=4$, is also shown.
The $x$-axis limits correspond to the Keplerian velocity at
$4R_{WD}$, the inner edge of the wind.
We observe a narrowing of the lines, and a single-peaked line in model X.
This is not due to radial velocity shear (see section 5.3).
}
\label{halpha}
\end{figure}
\subsection{Line Profile Shapes: Producing Single-Peaked Emission}
Fig.~\ref{halpha} shows how the H$\alpha$ profile changes with the kinematics of the wind for
an inclination of $80^\circ$. The main prediction is that dense, slowly accelerating
wind models produce narrower emission lines. This is {\em not} due to radial
velocity shear. As stated by MC96, that mechanism can only work if poloidal
and rotational velocity gradients satisfy $(dv_l/dr)/(dv_\phi/dr) \gtrsim 1$; in
our models, this ratio is always $\lesssim 0.1$. Instead, the narrow lines predicted
by our denser wind models can be traced to the base of the outflow becoming optically
thick in the continuum, such that the line emission from the base of the wind
cannot escape to the observer. In such models, the `line photosphere'
(the $\tau \simeq 1$ surface of the line-forming region) moves outwards, towards larger
vertical and cylindrical distances. This reduces the predicted line widths, since the
rotational velocities -- which normally provide the main line broadening mechanism at
high inclination -- drop off as $1/r$. This is not to say that the MC96
mechanism could not be at work in CV winds. For example, it would be worth investigating
alternative prescriptions for the wind velocity field, as well as the possibility that the
outflows may be clumped. An inhomogeneous flow
(which has been predicted in CVs; see section 5.2)
might allow large radial velocity shears to exist while still
maintaining the high densities needed to produce the required level of emission.
However, such an investigation is beyond the scope of the present paper.
In our models, single-peaked line profiles are produced once the line forming region has been
pushed up to $\sim 10^{11}$~cm ($\sim150~R_{WD}$) above the disk plane.
This number may seem unrealistically large, but the vertical extent of
the emission region is actually not well constrained observationally.
In fact, multiple observations of eclipsing NLs show that the H$\alpha$
line is only moderately eclipsed compared to the continuum (e.g. Baptista et al. 2000;
Groot et al. 2004; see also section 5.4),
implying a significant vertical extent for the line-forming
region. This type of model should therefore not be ruled out {\em a priori},
but this specific model was not adopted as our optically optimized model
due to its unrealistically high continuum level in eclipse.
\subsection{Sensitivity to Parameters}
This revised model demonstrates that one can achieve a more
realistic optical spectrum by altering just two kinematic parameters.
However, it may also be possible to achieve this by modifying
other free parameters such as $\dot{M}_{wind}$, the opening angles of the wind and the
inner and outer launch radii. For example, increasing the mass loss rate of the wind
increases the amount of recombination emission (which scales as $\rho^2$),
as well as lowering the ionization parameter and increasing optical depth through the wind.
Wider launch radii and opening angles lead to a larger emitting volume, but this is moderated by a decrease in density
for a fixed mass-loss rate. We also note that the inner radius of $4~R_{WD}$ adopted by SV93
affects the emergent UV spectrum seen at inclinations $<\theta_{min}$ as
the inner disk is uncovered. This causes less absorption in the UV resonance lines,
but the effect on the optical spectrum is negligible.
We have verified this general behaviour, but
we suggest that future work should investigate the effect of these parameters in more detail
as well as incorporating a treatment of clumping.
If a wind really does produce the line and continuum emission seen in optical spectra of high-state CVs, then
understanding the true mass loss rate and geometry of the outflow is clearly important.
\subsection{Comparison to RW Tri}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{fig13.eps}
\caption{{\sl Top Panel:} In and out of eclipse spectra of the high
inclination NL RW Tri. {\sl Bottom Panel:} In and out of eclipse synthetic
spectra from model B.
The artificial `absorption' feature just redward of the Balmer jump
is caused for the reasons described in section 5.2.}
\label{rwtricomp}
\end{figure*}
Fig.~\ref{rwtricomp} shows a comparison of the predicted
out-of-eclipse and mid-eclipse spectra against observations of the
high-inclination nova-like RW~Tri. The inclination of RW Tri is
somewhat uncertain, with estimates including $70.5^\circ$
\citep{smak1995}, $75^\circ$ \citep{groot2004}, $80^\circ$
\citep{longmore1981} and $82^\circ$\citep{frankking1981}. Here, we
adopt $i = 80^\circ$, but our qualitative conclusions are not
particularly sensitive to this choice.
We follow LK02 in setting the value of $r_{disk}$ (the maximum radius of the accretion disk)
to $34.3~R_{WD}$. When compared to the semi-major axis of RW Tri,
this value is perhaps lower than one might
typically expect for NLs \citep{harropallinwarner1996}.
However, it is consistent
with values inferred by \cite{rutten1992}.
We emphasize that this model is in no sense a fit to this -- or any other -- data set.
The similarity between the synthetic and observed spectra is
striking. In particular, the revised model produces strong emission in
all the Balmer lines, with line-to-continuum ratios comparable to
those seen in RW Tri. Moreover, the line-to-continuum contrast
increases during eclipse, as expected for emission produced in a disk
wind. This trend is in line with the observations of RW~Tri, and it
has also been seen in other NLs, including members of the SW~Sex class
\citep{neustroev2011}. As noted in section 5.2, the majority
of the escaping radiation has been reprocessed by the wind in some way
(particularly the eclipsed light).
However, there are also interesting differences between the revised
model and the RW Tri data set. For example, the model exhibits
considerably stronger He~{\sc ii} features than the observations,
which suggests that the overall ionization state of the model is
somewhat too high. As discussed in section 5.3, the optical lines are
narrow, but double-peaked. This is in contrast to what is generally seen in observations
of NLs, although the relatively low resolution of the RW Tri
spectrum makes a specific comparison difficult. In order to demonstrate
the double-peaked nature of the narrower lines, we choose not to
smooth the synthesized data to the resolution of the RW Tri dataset.
If the data were smoothed, the H$\alpha$\ line would appear single-peaked.
\section{Conclusions}
We have investigated whether a disk wind model designed to reproduce
the UV spectra of high-state CVs would also have a significant effect
on the optical spectra of these systems. We find that this is indeed
the case. In particular, the model wind produces H and He
recombination lines, as well as a recombination continuum blueward of
the Balmer edge. We do not produce P-Cygni profiles
in the optical H and He lines,
which are seen in a small fraction of CV optical spectra.
Possible reasons for this are briefly discussed in section
5.2.
We have also constructed a revised benchmark model which is designed
to more closely match the optical spectra of high-state CVs. This
optically optimized model produces all the prominent optical lines in
and out of eclipse, and achieves reasonable verisimilitude with the
observed optical spectra of RW Tri. However, this model also has
significant shortcomings. In particular, it predicts
stronger-than-observed He~{\sc ii} lines in the optical region and too
much of a collisionally excited contribution to the UV resonance lines.
Based on this, we argue that recombination emission
from outflows with sufficiently high densities and/or optical depths
might produce the optical lines observed in CVs, and may also
fill in the Balmer absorption edge in the spectrum of the accretion disk,
thus accounting for the absence of a strong edge in observed CV spectra.
In section 5.3, we demonstrate that
although the double peaked lines narrow and
single-peaked emission can be formed in our densest models,
this is not due to the radial velocity shear mechanism proposed by MC96.
We suggest that `clumpy' line-driven winds or a different
wind parameterization may nevertheless allow this mechanism to work.
We also note the possibility that, as in our denser models,
the single-peaked lines are formed well above the disk, where
rotational velocities are lower.
It is not yet clear whether a wind model such as this can
explain all of the observed optical features of high-state CVs --
further effort is required on both the observational
and modelling fronts.
However, our work demonstrates that {\sl disk winds matter}. They are
not just responsible for creating the blue-shifted absorption and
P-Cygni profiles seen in the UV resonance lines of high-state CVs, but
can also have a strong effect on the optical appearance of these
systems. In fact, most of the optical features characteristic of CVs
are likely to be affected -- and possibly even dominated -- by their disk
winds. Given that optical spectroscopy plays the central role in
observational studies of CVs, it is critical to know
where and how these spectra are actually formed. We believe it is high
time for a renewed effort to understand the formation of spectra in
accretion disks and associated outflows.
\subsection*{Acknowledgements}
The work of JHM and CK is supported by the Science and Technology Facilities Council (STFC),
via studentships and a consolidated grant, respectively.
The work of NSH is supported by NASA under Astrophysics Theory Program grants NNX11AI96G and NNX14AK44G.
We would like to thank the anonymous referee for a helpful and constructive report, and
we are grateful to A.F. Palah and B.T. Gaensicke for the IX Vel XSHOOTER dataset.
We would also like to thank J.V. Hernandez Santisteban, S.W. Mangham and I. Hubeny for useful discussions.
We acknowledge the use of the IRIDIS High Performance Computing Facility,
and associated support services at the University of Southampton, in the completion of this work.
\section{1. Introduction}
The freedom of a general coordinate transformation allows a local Lorentz frame to be defined in a sufficiently small region around a point in curved spacetime. In this frame, also called an inertial frame, first derivatives of the metric vanish, and therefore the connections vanish, and gravitational forces on moving bodies vanish. Yet the coordinate freedom cannot remove all second derivatives of the metric, and so components of the Riemann tensor can be non-zero locally where the connections are zero. The lengthscale of the second derivatives determines the scale of the spacetime region over which first derivatives vanish and the local Lorentz frame holds.\cite{wbg},\cite{pw}
This local vanishing of gravitational forces, and the resulting local rectilinear trajectories of moving bodies in these frames, can be understood as an expression of the Equivalence Principle. Gravitational forces can be made to vanish locally. Correspondingly, it is impossible to localize gravitational field energy.\cite{mtw}
Now let us ask whether it is possible for matter to {\it locally} exchange momentum or energy with the gravitational field of the universe. According to a naive understanding of the Equivalence Principle, and the availability of local Lorentz frames, it might seem impossible for a body to exchange momentum or energy with the gravitational field of the universe, if such energy cannot be localized in the field. This article is to show how such exchange can in principle occur locally.
The gravitational field of the universe is the cosmological metric of general relativity. Since the discovery of dark energy at the turn of the century, the cosmological metric is well-constrained to behave as a flat Robertson-Walker metric, and the energy budgets of matter, radiation, and dark energy are well-constrained. The resulting cosmological model is called Lambda-Cold-Dark-Matter, where Lambda refers to dark energy.\cite{fth}
For bodies in motion against the cosmological metric, there is a drag force resulting from the expanding spacetime.\cite{crl},\cite{isl} This force is sometimes called Hubble drag, e.g.\cite{pck},\cite{ldr} and has been in models of galaxy cluster dynamics for decades\cite{P1},\cite{P2}.
We show that the Hubble drag force exists in the rest frame of a moving body. Although gravitational forces are coordinate dependent, the Hubble drag force exists in all frames, and there is an invariant expression for the
force.
We propose that Hubble drag can be understood as inductive rectilinear frame dragging. Rectilinear, or translational, frame-dragging has been considered to some extent for linear acceleration.\cite{ge},\cite{lb},\cite{fz},\cite{pk},\cite{pfr},\cite{pfs} However, Hubble drag would be the first case of rectilinear frame dragging proposed for an unaccelerated body.
Hubble drag can be understood as a type of frame-dragging because it arises from off-diagonal metric components, as in the Kerr metric around rotating bodies. Yet in the boosted frame, Hubble drag originates not from the gravito-magnetic field, but from the inductive part of the gravito-electric field. The gravito-magnetic field vanishes in the cosmological case because there are no spatial curls, and the Newtonian part of the gravito-electric field vanishes for the same reason. Therefore proposed gravito-magnetic invariants formed from contractions of the Riemann tensor \cite{cfi1} do not capture inductive gravito-electric effects.
\section{2. Force on a body moving in the isotropic frame}
The standard model of cosmology, the Lambda-Cold-Dark-Matter model, describes the metric of the universe in terms of the Robertson-Walker metric, with zero curvature \cite{fth}:
\begin{equation}
\label{rwm}
-c^2 d\tau^2 = -c^2 dt^2 + a^2(t) [dx^2 + dy^2 + dz^2]
\end{equation}
in terms of cosmological time coordinate $t$ and spatial coordinates $x,y,z$, where $c$ is the speed of light, and $a(t)$ is the cosmological scale factor. The scale factor at the present epoch $t_0$ is $a(t_0)=1$.
The Hubble constant is given in terms of the scale factor by
\begin{equation}
\label{hub}
H(t) \equiv {1 \over a}{da\over dt}
\end{equation}
The Hubble constant at the current epoch is $H(t_0) \equiv H_0 = (da/dt)\vert_{t_0}$, the cosmological value observed today.
The metric (\ref{rwm}) is the gravitational field of the universe. It has the form of being maximally symmetric in the spatial components, and independent of position. This form enforces the standard assumption of an isotropic and homogeneous universe. The metric (\ref{rwm}) is characterized by the single parameter $a(t)$, where $t$ is a cosmological time coordinate that goes to $t=0$ at the Big Bang. The exact functional dependence of $a(t)$ depends on the energy content of the universe, with distinct regimes for radiation-dominated, matter-dominated, and Lambda-dominated phases of evolution.
The form (\ref{rwm}) can be considered a cosmic standard coordinate system, and the time coordinate a cosmic standard time coordinate. The time coordinate can be determined in principle from local time-dependent cosmological measurements, such as the temperature change of the microwave background, or the acceleration/deceleration of the redshift.\cite{wbg2} The time coordinate of (\ref{rwm}) is the time measured by a local free-fall observer.
The motion of a free body in spacetime is described by the geodesic equation,
\begin{equation}
\label{ge}
{dU^\mu\over d\tau} + \Gamma^\mu_{\alpha\beta} U^\alpha U^\beta = 0
\end{equation}
where $c^2d\tau^2 = g_{\mu\nu} dx^\mu dx^\nu$, greek indices range over the 4 components of spacetime, and the 4-velocity of a body is
\begin{equation}
U^\mu = {dx^\mu\over d\tau}
\end{equation}
The connections $\Gamma^\mu_{\alpha\beta}$ are given in terms of the metric by the standard textbook formula. From a cosmological perspective, the relativistic force equation (\ref{ge}) is understood to apply to galaxies or clusters of galaxies, moving in the smoothed, cosmological metric (\ref{rwm}). Local gravitational interactions will always dominate cosmic gravitational influences.
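For reference, that standard formula is
\begin{equation}
\Gamma^\mu_{\alpha\beta} = {1\over 2} g^{\mu\nu} \left( \partial_\alpha g_{\beta\nu} + \partial_\beta g_{\alpha\nu} - \partial_\nu g_{\alpha\beta} \right),
\end{equation}
which is the form used to evaluate all of the connections below.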
For a body at rest in the cosmic rest frame of (\ref{rwm})
\begin{equation}
{\widetilde U}^\mu = (c, 0,0,0)
\end{equation}
The force equation is simply
\begin{equation}
\label{ger}
{d{\widetilde U}^\mu\over d\tau} + {\Gamma}^\mu_{tt} c^2 = 0
\end{equation}
The connections operative for a body at rest in (\ref{rwm}) are:
\begin{equation}
\label{rff}
{\Gamma}^\mu_{tt} = {1\over 2} g^{\mu t} \partial_t g_{tt} + g^{\mu k} \partial_t g_{tk} = 0
\end{equation}
Galaxies at rest in this frame are in free fall. They experience no forces and remain at rest. The time and space coordinates are co-moving.
However, the free-fall frames in which the metric has the form (\ref{rwm}) are not inertial frames, because the connections do not all vanish. The definition of an inertial frame is that {\it all} first derivatives of the metric vanish, even while second derivatives may be present. Therefore, all gravitational forces must vanish in true inertial frames. Yet in the cosmic rest frame (\ref{rwm}), first derivatives do not all vanish. In fact, first derivatives correspond to the Hubble constant. Therefore, a true inertial cosmic frame would stipulate zero Hubble constant, locally.
Let us consider the connections for the metric (\ref{rwm}). We will use small roman indices $i,j,k$ to denote spatial coordinates, and the index $t$ for the time coordinate. The metric (\ref{rwm}) components are
\begin{equation}
g_{tt}=-c^2 \quad,\quad g_{ij}=a^2(t)\ \delta_{ij} \quad,\quad g^{tt} = -c^{-2} \quad,\quad g^{ij} = a^{-2}(t)\ \delta^{ij}
\end{equation}
Then the connections are:
\begin{equation}
\label{c1}
\Gamma^i_{tt} = 0 \quad,\quad \Gamma^l_{ij} = 0 \quad,\quad \Gamma^t_{jt} = 0
\quad,\quad \Gamma^t_{tt} =0
\end{equation}
\begin{equation}
\label{c2}
\Gamma^i_{jt}= {1\over 2} g^{ik} ( {\partial_t g_{jk}} + {\cancelto{0}{\partial_j g_{tk}} - \cancelto{0}{\partial_k g_{tj}}} ) = H \delta^i_j
\end{equation}
\begin{equation}
\label{c3}
\Gamma^t_{ij}={1\over 2} g^{tt} ( \cancelto{0}{\partial_i g_{jt}} + \cancelto{0}{\partial_j g_{it}} - {\partial_t g_{ij}} ) = {a^2\over c^2} H\delta_{ij}
\end{equation}
The non-zero spatial connections have 1 time index and are proportional to the Hubble constant (\ref{hub}). They originate in the time dependence of the spatial components $g_{jk}(t)$.
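
These components are easy to verify symbolically; a minimal sketch
using \texttt{sympy} (an independent check, not part of any production
pipeline) is:
\begin{verbatim}
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
a = sp.Function('a')(t)
X = [t, x, y, z]
g = sp.diag(-c**2, a**2, a**2, a**2)   # flat Robertson-Walker metric
ginv = g.inv()

def Gamma(mu, al, be):                 # standard Christoffel formula
    return sp.simplify(sum(ginv[mu, nu] * (sp.diff(g[nu, al], X[be])
        + sp.diff(g[nu, be], X[al])
        - sp.diff(g[al, be], X[nu])) for nu in range(4)) / 2)

H = sp.diff(a, t) / a
print(sp.simplify(Gamma(1, 1, 0) - H))                 # Gamma^x_xt - H = 0
print(sp.simplify(Gamma(0, 1, 1) - a**2 * H / c**2))   # Gamma^t_xx check
\end{verbatim}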
Consider the forces on a body moving in the x-direction with speed $dx/dt \equiv v$ and with 4-velocity
\begin{equation}
\label{4vel}
U^\mu = (U^t, U^x, 0, 0) = \left( c{dt\over d\tau}, {dx\over d\tau}, 0, 0 \right) = {dt\over d\tau}(c, v,0,0)
\end{equation}
The energy and momentum effects from the cosmological metric are:
\begin{equation}
{dU^t\over d\tau} + \Gamma^t_{xx} (U^x)^2 = 0
\end{equation}
\begin{equation}
{dU^x\over d\tau} + 2\Gamma^x_{tx} U^t U^x = 0
\end{equation}
Using the connections (\ref{c1}), ({\ref{c2}), (\ref{c3}), these equations reduce to
\begin{equation}{
\label{hc}
{dU^t\over dt} = - a^2 H U^t {v^2\over c^2}
}\end{equation}
\begin{equation}{
\label{hd}
{dU^x\over dt} = - 2H U^x
}\end{equation}
The equation (\ref{hd}) masks a cubic velocity dependence which can be exposed by expanding (\ref{hd}) and using (\ref{hc}):
\begin{equation}
\label{hdu}
{dv\over dt} = -2Hv + {a^2 v^3\over c^2} H = -Hv\left(2-{a^2v^2\over c^2}\right) = -Hv\left(1+ {c^2\over (U^t)^2}\right) < 0
\end{equation}
where the last equality follows because
\begin{equation}
-c^2 = g_{\mu\nu} U^\mu U^\nu = -(U^t)^2 + a^2 (U^x)^2 = -(U^t)^2 (1 - a^2 v^2/c^2)
\end{equation}
which is in turn consistent with (\ref{hc}). This shows that the coordinate acceleration is negative-definite.
We can also calculate this force from an alternative form of the geodesic equation that is useful when the metric is independent of a coordinate. In this form, the change to each component of the covariant 4-velocity depends on the derivative of the metric with respect to the corresponding coordinate:
\begin{equation}
\label{cge}
{dU_\mu\over d\tau} = {1\over 2} U^\alpha U^\beta \partial_\mu g_{\alpha\beta}
\end{equation}
Since the metric (\ref{rwm}) is independent of position, the evolution of spatial velocity of a body is given simply from (\ref{cge}) by:
\begin{equation}
U_x = g_{x\mu} U^\mu = a^2 U^x = \text{constant}
\end{equation}
which implies:
\begin{equation}
\label{hd2}{
{dU^x\over dt} = - 2 H U^x
}\end{equation}
This is the same cosmological drag equation (\ref{hd}). Although it is written for a particular component here, the isotropy of the Hubble expansion means it will be experienced by an object with any rectilinear velocity. This allows us to understand Hubble drag in terms of conservation of momentum in the expanding universe.
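
This conservation law provides a simple numerical check of the drag
equation. The sketch below integrates $dU^x/dt = -2HU^x$ for an
assumed matter-dominated scale factor, $a(t) = (t/t_0)^{2/3}$, and
confirms that $a^2 U^x$ remains constant:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

t0 = 1.0
a = lambda t: (t / t0)**(2.0 / 3.0)     # assumed matter-dominated era
H = lambda t: 2.0 / (3.0 * t)           # H = (da/dt)/a

sol = solve_ivp(lambda t, U: -2.0 * H(t) * U, (t0, 10 * t0), [1.0],
                rtol=1e-10, dense_output=True)

for t in (1.0, 3.0, 10.0):
    Ux = sol.sol(t)[0]
    print(t, Ux, a(t)**2 * Ux)          # last column stays ~ 1
\end{verbatim}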
The Hubble drag force has an associated effect in the geodesic deviation equation
\begin{equation}
{d^2 S^\mu\over d\tau^2} = R^\mu_{\nu\rho\sigma} U^\nu U^\rho S^\sigma
\end{equation}
where $U^\nu$ is the 4-velocity of a moving body, and $S^\nu$ is a deviation vector separating observers with that 4-velocity.
The independent components of the Riemann tensor are
\begin{equation}
c^2 R^k_{ttj} = {\ddot a\over a} \delta^k_j \quad,\quad c^2 R^i_{jkl} = a^2 H^2 (\delta^i_k \delta_{jl} - \delta^i_l \delta_{jk} )
\end{equation}
and their non-zero permutations, which may differ by factors of $a^2$.
For the case (\ref{4vel}) of a body moving in the $x$ direction, there will be geodesic deviation along $x$:
\begin{eqnarray}
\label{gde}
{d^2 S^x\over d\tau^2} &=& (R^x_{tt\mu} U^t U^t + {R^x_{tx\mu}} U^t U^x + \cancelto{0}{R^x_{xt\mu}} U^x U^t + \cancelto{0}{R^x_{xx\mu}} U^x U^x )S^\mu /c^2 \nonumber \\ &=& R^x_{ttx} U^t U^t S^x/c^2 + {R^x_{txt}} U^t U^x S^t /c^2
= {\ddot a\over a} U^t (U^t S^x - U^x S^t)/c^2 \ne 0
\end{eqnarray}
This shows that the Hubble drag force corresponds to a non-vanishing effect in geodesic deviation, along the direction of motion of moving bodies. There is a well-known increasing spatial separation of bodies at rest accruing from the expansion of the universe. For moving bodies there is also acceleration along the direction of motion.
The equation (\ref{hd}) expresses the well-known effect from galaxy cluster dynamics \cite{P1},\cite{P2} that is sometimes known as Hubble drag, e.g.\cite{pck},\cite{ldr}. It implies that an observer at rest in the cosmic standard coordinate system would detect a slowing and a drag force on a moving body. The result (\ref{hd2}) from a conserved quantity is also well known.\cite{crl},\cite{isl}
The force is minute, as the characteristic time scale for this slow-down is the age of the universe:
\begin{equation}
\label{sz}
\Delta U \sim {\Delta t} H U \sim {\text{dynamical timescale}\over \text{age of universe}}\ U
\end{equation}
This Hubble drag force would be undetectable for terrestrial or planetary phenomena. It is relevant for super clusters or other objects that are in cosmic free fall over the age of the universe.
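For a concrete number: taking $H_0 \approx 70$~km~s$^{-1}$~Mpc$^{-1}
\approx 2.3\times10^{-18}$~s$^{-1}$, a moving body loses a fraction
$\Delta U/U \sim H_0 \Delta t \approx 7\times10^{-11}$ of its peculiar
velocity per year.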
The Hubble drag force effect is usually understood to apply to galaxy clusters,\cite{P1,P2} but mathematically it applies at a point in the smoothed background metric. There is a non-zero effect in the geodesic deviation (\ref{gde}) that validates the reality of the local gravitational interaction. Therefore these force effects should be understood as local gravitational phenomena arising from an interaction with the gravitational field of the universe.
Here is the interesting case of action of the gravitational field of the universe at a point, and on a body in uniform motion. It seems to imply that a local experiment on a small moving body deep in intergalactic space can in principle measure the gravitational field of the universe.
\section{3. Force in the rest frame of a moving body}
Now we show that the Hubble drag force manifests in the rest frame of a moving body, and therefore constitutes an invariant rest-frame force.
The isotropy of the cosmological metric (\ref{rwm}) selects a preferred cosmological frame. There is only one frame in which the metric is isotropic. Any coordinate transformation involving a boost at constant velocity will introduce off-diagonal components into the transformed isotropic metric. For a coordinate system not at rest in (\ref{rwm}), we can write the transformed metric as ${\widetilde g}_{\mu\nu}$, and the components of the connections that operate on a body at rest (\ref{rff}) in this frame are rewritten for ${\widetilde g}_{\mu\nu}$:
\begin{equation}
\label{rffg}
{\widetilde\Gamma}^\mu_{tt} = {1\over 2} {\widetilde g}^{\mu t} \partial_t {\widetilde g}_{tt} + {\widetilde g}^{\mu k} \partial_t {\widetilde g}_{tk} \ne 0
\end{equation}
In order to define the boosted frame metric ${\widetilde g}_{\mu\nu}$ from the isotropic metric (\ref{rwm}), we want to consider a velocity transformation with magnitude $v$ in the x-direction. It is clear that the spatial coordinates will be boosted by the velocity, but the time coordinate can be handled in several ways.
\begin{enumerate}
\item{}Boost to an inertial frame, where all first derivatives vanish, including the Hubble constant
\begin{equation}
{\widetilde t} = p(x^\mu, v) \quad ,\quad {\widetilde x}^i = q^i(x^\mu, v)
\end{equation}
where $p$ and $q^i$ are functions of the coordinates that eliminate all first derivatives of the metric.
\item{}Lorentz transformation to a boosted frame
\begin{equation}
\label{t2}
c{\widetilde t} = \gamma (ct + vx/c) \quad,\quad {\widetilde x}= \gamma (x + vt) \quad,\quad
{\widetilde y} = y \quad, \quad {\widetilde z} =z
\end{equation}
\item{}Galilean-type transformation to a boosted frame, retaining the cosmic standard time coordinate
\begin{equation}
\label{t3}
{\widetilde t} = t \quad,\quad {\widetilde x}= x + vt \quad,\quad
{\widetilde y} = y \quad, \quad {\widetilde z} =z
\end{equation}
\end{enumerate}
Although the transformation (\ref{t3}) has the form of a Galilean transformation, and although a Galilean transformation approximates a Lorentz transformation (\ref{t2}) at low velocity, the transformation (\ref{t3}) is valid and exact for relativistic velocities in curved space. The transformation (\ref{t3}) approximates (\ref{t2}) only against a background Minkowski spacetime. In our case, the reference frame of (\ref{rwm}) is not an inertial frame. In order for a boosted object to maintain the cosmic time coordinate in curved space, the transformed metric contains the information for the time dilation that would ordinarily be associated with the transformation (\ref{t2}). So the significance of the choice (\ref{t3}) is not as a non-relativistic approximation to (\ref{t2}), but as the cosmic time coordinate in a boosted frame.
Nonetheless, in practice, the three velocity-transformation cases become indistinguishable for terrestrial time scales and non-relativistic velocities. The first derivatives of the isotropic metric (\ref{rwm}) are quite small -- time derivatives are of order the inverse age of the universe -- and so the deviations from Minkowski space are all small. Therefore the cosmic rest frame, with a non-zero local Hubble constant, is approximately inertial to the extent that the Hubble constant can be neglected.
Let us therefore choose the transformation (\ref{t3}) that preserves the cosmic standard time coordinate and that approximates (\ref{t2}) when the metric is Minkowski.
The components of the transformation matrix for (\ref{t3}) are:
\begin{equation}
{\partial{\widetilde t}\over \partial t} = 1 \quad,\quad {c\partial{\widetilde t}\over \partial x} = 0 \quad,\quad
{\partial{\widetilde x}\over \partial t} = v \quad,\quad {\partial{\widetilde x}\over \partial x} = 1 \quad,\quad
{\partial{\widetilde y}\over \partial y} = 1 \quad,\quad {\partial{\widetilde z}\over \partial z} = 1
\end{equation}
The inverse transformation is:
\begin{equation}
t = {\widetilde t} \quad,\quad x= {\widetilde x} - v{\widetilde t} \quad,\quad
y={\widetilde y} \quad, \quad z={\widetilde z}
\end{equation}
The components of the inverse transformation matrix are:
\begin{equation}
{\partial t\over \partial {\widetilde t}} =1 \quad,\quad {c\partial t\over \partial {\widetilde x}}= 0\quad,\quad
{\partial x\over \partial {\widetilde t}} =-v \quad,\quad {\partial x\over \partial {\widetilde x}} = 1 \quad,\quad
{\partial y\over \partial {\widetilde y}} = 1 \quad,\quad {\partial z\over \partial {\widetilde z}} =1
\end{equation}
The transformed metric in the boosted frame is given by
\begin{equation}
{\widetilde g}_{\mu\nu} = {\partial x^\alpha\over\partial {\widetilde x}^\mu}{\partial x^\beta\over\partial {\widetilde x}^\nu} g_{\alpha\beta} = - c^2 {\partial t\over\partial {\widetilde x}^\mu}{\partial t\over\partial {\widetilde x}^\nu}
+ a^2 \delta_{ij} {\partial x^i\over\partial {\widetilde x}^\mu}{\partial x^j\over\partial {\widetilde x}^\nu}
\end{equation}
The components of the metric implicated in the rest-frame forces of (\ref{rffg}) are given by:
\begin{equation}
{\widetilde g}_{tt} = -1 + {a^2 v^2\over c^2} \quad ,\quad
{\widetilde g}_{xx} = a^{2}
\end{equation}
\begin{equation}
\label{gm}
{\widetilde g}_{tx} = -{v\over c}a^2
\end{equation}
The off-diagonal, gravito-magnetic metric components (\ref{gm}) depend on the non-Minkowski character of the isotropic metric (\ref{rwm}). For a truly Minkowskian metric, with no first derivatives, there are no off-diagonal components, because the Minkowski metric is invariant under a Lorentz transformation.
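As a consistency check, the boosted-frame components can be verified with a short symbolic computation. The sketch below (Python with SymPy) assumes the convention $x^0 = ct$, so that the flat Robertson-Walker metric is $\mathrm{diag}(-1, a^2, a^2, a^2)$ and $\beta = v/c$; it reproduces ${\widetilde g}_{tt}$ and the off-diagonal component (\ref{gm}).
\begin{verbatim}
# Symbolic check of the boosted-frame metric, assuming x^0 = ct so that
# g = diag(-1, a^2, a^2, a^2) and beta = v/c.
import sympy as sp

t, b = sp.symbols('t beta')
a = sp.Function('a')(t)
g = sp.diag(-1, a**2, a**2, a**2)   # flat Robertson-Walker metric

# Inverse of the Galilean-type transformation: x = x~ - beta*(c t~)
# Jacobian J[alpha, mu] = d x^alpha / d x~^mu
J = sp.Matrix([[1, 0, 0, 0],
               [-b, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
gt = sp.simplify(J.T * g * J)       # g~_{mu nu} = J^T g J
print(gt[0, 0])    # a(t)**2*beta**2 - 1 : matches g~_tt
print(gt[0, 1])    # -a(t)**2*beta      : the off-diagonal component g~_tx
\end{verbatim}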
The inverse transformed metric components are given by
\begin{equation}
{\widetilde g}^{\mu\nu} = {\partial {\widetilde x}^\mu\over\partial x^\alpha }{\partial {\widetilde x}^\nu\over\partial x^\beta } g^{\alpha\beta} =
- {1\over c^4}{\partial {\widetilde x}^\mu\over\partial t }{\partial {\widetilde x}^\nu\over\partial t }
+ a^{-2} \delta_{ij} {\partial {\widetilde x}^\mu\over\partial x^i }{\partial {\widetilde x}^\nu\over\partial x^j }
\end{equation}
The components of the inverse metric relevant to rest-frame forces are
\begin{equation}
{\widetilde g}^{tt} = -c^{-2} \quad ,\quad {\widetilde g}^{xx} = a^{-2} - {v^2\over c^2}
\end{equation}
\begin{equation}
{\widetilde g}^{tx} = - {v\over c}
\end{equation}
Therefore the rest frame connections (\ref{rffg}) are
\begin{equation}
{\widetilde \Gamma}^t_{tt} ={1\over 2} {\widetilde g}^{tt} {\partial_t {\widetilde g}_{tt}}
+ {\widetilde g}^{tk} {\partial_t {\widetilde g}_{tk}}
-{1\over 2} {\widetilde g}^{tk}\cancelto{0}{\partial_k {\widetilde g}_{tt}}
= H {a^2 v^2\over c^2}
\end{equation}
\begin{equation}
{\widetilde \Gamma}^i_{tt}
={1\over 2} {\widetilde g}^{it} {\partial_t {\widetilde g}_{tt}}
+ {\widetilde g}^{ik} {\partial_t {\widetilde g}_{tk}}
-{1\over 2} {\widetilde g}^{ik}\cancelto{0}{\partial_k {\widetilde g}_{tt}}
= - 2 {v\over c} H + {v^3\over c^3} a^2 H
\end{equation}
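These connection components can likewise be checked symbolically. The sketch below works in units $c = 1$ (so $v$ stands for $v/c$) and uses only the $(t,x)$ block of the boosted metric, together with homogeneity ($\partial_k = 0$); it reproduces both ${\widetilde\Gamma}^t_{tt}$ and ${\widetilde\Gamma}^x_{tt}$ above.
\begin{verbatim}
# Symbolic check of the boosted-frame connections, in units c = 1.
import sympy as sp

t, v = sp.symbols('t v')
a = sp.Function('a')(t)
H = a.diff(t) / a

gt = sp.Matrix([[-1 + a**2*v**2, -v*a**2],
                [-v*a**2,        a**2   ]])   # (t, x) block of g~
gi = gt.inv()

# Gamma^mu_tt = (1/2) g~^{mu nu} (2 d_t g~_{t nu} - d_nu g~_{tt});
# spatial derivatives vanish by homogeneity, so only d_t survives.
Gam = [sp.simplify(sum(
          sp.Rational(1, 2) * gi[m, n]
          * (2*sp.diff(gt[0, n], t) - (sp.diff(gt[0, 0], t) if n == 0 else 0))
          for n in range(2))) for m in range(2)]

print(sp.simplify(Gam[0] - H*a**2*v**2))               # 0: matches Gamma~^t_tt
print(sp.simplify(Gam[1] - (-2*v*H + v**3*a**2*H)))    # 0: matches Gamma~^x_tt
\end{verbatim}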
Just as for an object at rest in the isotropic frame (\ref{ge}), the force equation in the boosted frame for an object at rest is
\begin{equation}
{d{\widetilde U}^\mu\over d{\widetilde t}} + {\widetilde \Gamma}^\mu_{tt}c^2 = 0
\end{equation}
where the proper time is just the coordinate time ${\widetilde t}$, and instantaneously ${\widetilde U}^\mu = (c,0,0,0)$.
In the instantaneous rest frame of the body, any force will manifest in the spatial components of the acceleration, since
\begin{equation}
{d\over d\tau} ({\widetilde g}_{\mu\nu} {\widetilde U}^\mu {\widetilde U}^\nu) = 0 = 2 {\widetilde g}_{\mu\nu} {\widetilde U}^\mu {d{\widetilde U}^\nu\over d\tau}
\quad\rightarrow\quad {d{\widetilde U}^\nu\over d\tau} = \left(0, {d{\widetilde U}^x\over d\tau},0,0\right)
\end{equation}
This means that an acceleration exists in the rest frame of an accelerated body, and in that frame the acceleration 4-vector is purely spatial.
When we consider $d{\widetilde U}^x / d\tau$ in the instantaneous rest frame, we find a non-zero force component along the x-direction:
\begin{equation}\label{hd3}
-{d{\widetilde U}^x\over d{\widetilde t}} = -{\widetilde \Gamma}^x_{tt}c^2 \quad\rightarrow\quad
{d{\widetilde U}^x\over d{\widetilde t}} = -2 {v\over c}H + {v^3\over c^3}a^2 H
\end{equation}
The form of the rest-frame force (\ref{hd3}) is the same as (\ref{hdu}). The acceleration measured in the isotropic frame is the same as the force per unit mass the body feels in its rest frame. This shows that the force is frame-independent.
There are infinitely many reference frames in which Hubble drag arises as it does here, from the off-diagonal, 3-vector metric components ${\widetilde g}_{tk}$. There is only one frame in which it arises from isotropic metric components. Yet the frame-independence of the effect ensures that it can fairly be called frame dragging.
\begin{comment}
There is also a rest-frame energy effect, even though for test particle there is no internal energy.
\begin{equation}
{d{\widetilde U}^\mu\over d{\widetilde t}} = - a^2 v^2 H
\end{equation}
which is equivalent to (\ref{hc}). As before, applied to galaxy clusters, it must be understood as a reduction of kinetic energy density in a volume of space.
\end{comment}
\section{4. Inductive frame dragging}
Gravitomagnetism, frame dragging, and the Lense-Thirring effect are synonymous terms for the same underlying effect. They arise from the influence of off-diagonal metric elements on the motion of bodies. These effects are seen in both linear approximations and exact solutions of the Einstein equations. We briefly review both.
The Kerr metric is an exact solution for a stationary, axially symmetric spacetime around an object with mass $M$ and angular momentum $J$. The metric in Boyer-Lindquist coordinates can be approximated in a weak-field limit:
\begin{equation}
\label{kerr}
-c^2 d\tau^2 \simeq -\left( 1 - {2GM\over rc^2}\right) c^2dt^2 + \left( 1 + {2GM\over rc^2}\right)dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2) - {4GJ\over rc^3} \sin^2\theta\, d\phi\, cdt
\end{equation}
Frame-dragging effects arise from the off-diagonal components $g_{t\phi}$, which have their source in the angular momentum of the central gravitating object.
It is also useful to consider gravito-magnetic effects as they emerge in the geodesic equation written to linear order in metric perturbations $h_{\mu\nu}$, where $g_{\mu\nu}\simeq \eta_{\mu\nu} + h_{\mu\nu}$. To linear order in the test-particle speed, the geodesic equation then takes the form\cite{wi}:
\begin{equation}
\label{linmom}
{1\over m}{d p^i\over dt} \simeq
c^2 {E}^i_g + \epsilon_{ijk}c{v^j} B_g^k
- {v^j} \partial_t h_{ij} \quad ,\quad v\ll c
\end{equation}
where $p^i$ are the spatial momentum components of a body in motion, $m$ is rest mass, and we have defined natural gravito-electric and -magnetic fields:
\begin{equation}
\label{gravebfield}
E_g^i \equiv \partial_i h_{tt}/2 - \partial_t h_{ti} /c
\quad , \quad
B_g^i \equiv \epsilon_{ijk} \partial_j h^{tk}
\end{equation}
The linear force equation (\ref{linmom}) carries a striking analogy to the Lorentz force of electromagnetism, but there are also non-Lorentz, tensor effects. The gravito-magnetic effect is apparent, and its potentials are the off-diagonal metric components $h_{tk}$.
In the linear approximation, the off-diagonal components of (\ref{kerr}) in turn have their origin in static mass currents\cite{wi}:
\begin{equation}
\label{mc}
\nabla^2 h_{tk} \simeq -{16\pi G\over c^4} T_{tk}
\end{equation}
where $\nabla^2$ is the 3-space Laplacian and where the $T_{tk}$ are the time-space components of the energy-momentum sources. The units of $T_{tk}$ are momentum density or energy flux, and so they represent a mass current. Along with the transverse constraint that $\partial_k h^{tk}=0$, (\ref{mc}) governs two of the six physical degrees of freedom of the linear gravitational field.\cite{wi} Those degrees of freedom are governed by elliptic equations, with sources in the mass currents.
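For scale, the Green's-function solution of (\ref{mc}) can be evaluated for the simplest possible mass current. The sketch below takes a point mass $m$ in uniform motion, for which $T_{tk} \sim \rho c v_k$ and $h_{tk} = 4Gmv_k/(c^3 r)$; the numbers are illustrative assumptions only.
\begin{verbatim}
# Green's-function estimate of h_tk for a point mass current (toy values).
# nabla^2 h_tk = -(16 pi G/c^4) T_tk with T_tk ~ rho c v_k gives
# h_tk = 4 G m v_k / (c^3 r) for a point mass m moving at speed v.
G, c = 6.674e-11, 2.998e8      # SI units
m, v, r = 1.0, 100.0, 1.0      # kg, m/s, m -- assumed toy values
h_tx = 4 * G * m * v / (c**3 * r)
print(f"h_tx ~ {h_tx:.1e}")    # ~ 1e-33: a minute off-diagonal perturbation
\end{verbatim}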
Standard treatments of frame dragging utilize mass currents (\ref{mc}) originating in the rotation of astrophysical masses. There is a wide literature on this subject, but key works include \cite{trg}, \cite{ltg}, \cite{bc}, \cite{bc2}. See also \cite{pfr} and \cite{cfi1} for a review, and \cite{cfi2} for a measurement review. A cosmological metric has also been investigated as the boundary for a rotational geometry.\cite{kln},\cite{smd}.
There is some association of frame dragging with accelerated masses, since rotating mass currents are sustained via centripetal acceleration. Therefore, the small literature on rectilinear frame dragging\cite{ge,lb,fz,pk,pfr,pfs} is focused on linear acceleration. Yet we see from (\ref{mc}) that acceleration is not required for a mass flux to exist. Rectilinear frame dragging for uniform motion is suggested here for the first time.
Let us return to the connections in the geodesic equation (\ref{ge}). As seen in (\ref{c2}), the 3-space forces for a body in motion in the isotropic cosmological metric arise from:
\begin{equation}
\label{force2}
2\Gamma^i_{tj} = g^{ik} ( \underset{\text{Hubble drag}}{\partial_t g_{jk}} + \underset{\text{gravito-magnetic force}}{[\cancel{\partial_j g_{tk}} - \cancel{\partial_k g_{tj}}]} )
\end{equation}
The forces associated with the connections (\ref{force2}) are proportional to particle velocity, and we have identified the separate contributions of Hubble drag and gravito-magnetic forces.
On the other hand, for a homogeneous metric ${\widetilde g}_{\mu\nu}$ with off-diagonal components ${\widetilde g}_{tk}$, the 3-forces on a body at rest arise from:
\begin{equation}
\label{force}
2{\widetilde \Gamma}^i_{tt} = {\widetilde g}^{it} \partial_t {\widetilde g}_{tt}
+ \underset{\text{gravito-electric force}} {[\overset{\text{inductive}}{2{\widetilde g}^{ik} \partial_t {\widetilde g}_{tk}}- \overset{\text{Newtonian}}{ {\widetilde g}^{ik} \cancel{\partial_k {\widetilde g}_{tt}}} ]}
\end{equation}
These forces exist for bodies at rest, and contain the limit of Newtonian gravity as well as a gravito-electric induction force.
The inductive force effects were considered early by Einstein\cite{ein}, who coined the term ``inductive'', although he anticipated that the inductive effects would come from accelerated motion, not from the expanding spacetime.
For the homogeneous cosmological metric, spatial derivatives vanish. The Newtonian piece of the gravito-electric field (\ref{force}) vanishes, as does the entire gravito-magnetic field (\ref{force2}). Hubble drag in the isotropic frame is accounted for in the boosted frame by the inductive term in the gravito-electric field. Therefore, we call this effect ``inductive linear frame dragging''.
There are some important distinctions from frame-dragging induced by rotating mass currents. One is that a gravito-magnetic force in (\ref{linmom}) acts normal to particle velocity, and so does no work. The forces of Hubble drag and the inductive gravito-electric effect, by contrast, do work on a body, changing its energy.
Another distinction is that the Hubble drag force, or the inductive dragging, occurs at constant velocity. It does not require acceleration, as frame dragging from rotation does. Therefore we consider this the first proposed case of uniform-velocity frame dragging.
Any velocity transformation will introduce off-diagonal components into the metric, so there is some reasonable question as to whether such effects are real or are a coordinate artifact. Ref.~\cite{cfi1} suggested a scalar invariant for the existence of gravitomagnetism in terms of the Riemann tensor and the antisymmetric tensor:
\begin{equation}
\label{ci}
I = \epsilon^{\alpha\beta\sigma\rho}R_{\sigma\rho\gamma\delta} R_{\alpha\beta\mu\nu} g^{\mu\gamma}g^{\nu\delta}
\end{equation}
The quantity $I$ is zero for the metric (\ref{rwm}). However, the traditional scalars of the flat Robertson-Walker metric, the curvature scalar $R$ and the Kretschmann scalar $\mathscr{R}$, are non-zero:
\begin{equation}
c^2 R = 6 \left( {\bm\dot a^2\over a^2} + {\bm\ddot a\over a} \right)
\quad,\quad c^4 \mathscr{R} = 12 \left( {\bm\dot a^4\over a^4} + {\bm\ddot a^2\over a^2} \right)
\end{equation}
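The non-vanishing scalars can be reproduced with a direct symbolic computation. The sketch below builds the Christoffel symbols and the Ricci scalar for the flat Robertson-Walker metric in units $c = 1$; the Kretschmann scalar follows from the same Riemann components by full contraction.
\begin{verbatim}
# Symbolic check of the Ricci scalar for flat Robertson-Walker, c = 1.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
a = sp.Function('a')(t)
g = sp.diag(-1, a**2, a**2, a**2)
gi = g.inv()
n = 4

Gam = [[[sum(gi[l, s]*(sp.diff(g[s, i], X[j]) + sp.diff(g[s, j], X[i])
             - sp.diff(g[i, j], X[s])) for s in range(n))/2
         for j in range(n)] for i in range(n)] for l in range(n)]

def riem(l, i, j, k):   # R^l_{ijk}
    r = sp.diff(Gam[l][i][k], X[j]) - sp.diff(Gam[l][i][j], X[k])
    r += sum(Gam[l][j][s]*Gam[s][i][k] - Gam[l][k][s]*Gam[s][i][j]
             for s in range(n))
    return sp.simplify(r)

Ric = sp.Matrix(n, n, lambda i, k: sum(riem(l, i, l, k) for l in range(n)))
R = sp.simplify(sum(gi[i, k]*Ric[i, k] for i in range(n) for k in range(n)))
print(R)   # 6*(a''/a + a'^2/a^2), matching the quoted curvature scalar
\end{verbatim}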
There can be no traditional gravito-magnetic invariant involving contractions of Riemann for uniform-velocity frame dragging in the cosmological metric because rectilinear uniform motion is coordinate-dependent.
This is unlike the traditional gravito-magnetic invariants, or the Kerr metric itself. While the off-diagonal elements of the Kerr metric arise from mass in motion, the motion is compact, and cannot be transformed away with respect to the background stars. This is consistent with the understanding that the de Sitter precession arising from motion around a static mass source is fundamentally different from frame dragging or gravitomagnetism.\cite{cfi1,cfi3} Therefore the metric arising from the closed mass flux of a rotating central object can be described in terms of an invariant composed of contractions of the Riemann tensor.
The compactness of the matter current not only allows an invariant, but also implies acceleration, and establishes the link between gravitomagnetism and acceleration. Acceleration is not necessary for an inductive rectilinear gravito-electric effect, but the effect will be coordinate-dependent, in the manner of de Sitter precession.
This implies that there are dragging phenomena that are not contemplated in gravito-magnetic invariants of the Riemann tensor.
\section{5. Local coupling to the cosmological metric}
The Hubble drag force in principle acts on bodies great and small, and so represents a local momentum transfer from matter to the gravitational field of the universe. This locality of momentum lost by the matter is not mirrored by a localization of momentum deposited into the field, because gravitational field energy-momentum cannot be localized. The freedom of a general coordinate transformation allows construction of a local inertial frame in which all gravitational forces vanish, and so local gravitational energy-momentum must vanish.\cite{mtw}
Yet gravitational forces do work locally on matter, as can be seen from the covariant divergence of the Einstein equations \cite{wbgem}:
\begin{equation}
\nabla_\mu T^{\mu\nu} = g^{-1/2} \partial_\mu (g^{1/2} T^{\mu\nu}) + \Gamma^\nu_{\mu\lambda} T^{\mu\lambda} =0
\end{equation}
This is the expression of conservation of energy and momentum in a gravitational field. The last term represents the energy-momentum exchanged between the non-gravitational energy-momentum $T^{\mu\nu}$ and the gravitational field.
The energy-momentum of the gravitational field can be characterized by decomposing the metric into a Minkowski piece and a variable piece $f_{\mu\nu}$ that is not small:
\begin{equation}
g_{\mu\nu} \equiv \eta_{\mu\nu} + f_{\mu\nu}
\end{equation}
With this, the Einstein tensor can be decomposed into a term linear in $f_{\mu\nu}$ and terms of higher order:
\begin{equation}
G_{\mu\nu} \equiv G^{(1)}_{\mu\nu} + G^{(2+)}_{\mu\nu}
\end{equation}
Then the Einstein equations can be recast exactly:
\begin{equation}
G^{(1)}_{\mu\nu} = {8\pi G}T_{\mu\nu} -G^{(2+)}_{\mu\nu}
\end{equation}
This invites definition of the gravitational energy-momentum pseudotensor:
\begin{equation}
\Theta_{\mu\nu} \equiv (8\pi G)^{-1}[G_{\mu\nu} - G^{(1)}_{\mu\nu}]
\end{equation}
The pseudotensor $\Theta_{\mu\nu}$ has several properties that support its interpretation as the energy-momentum of the gravitational field \cite{wbgpt}. One is that the total energy-momentum of matter and the field is a constant:
\begin{equation}
\label{tem}
\partial_\nu [ \eta^{\nu\mu}\eta^{\lambda\kappa} ( T_{\mu\kappa} + \Theta_{\mu\kappa} ) ] \equiv \partial_\nu ( T^{\nu\kappa} + \Theta^{\nu\kappa} )= 0
\end{equation}
The pseudotensor $\Theta^{\nu\kappa}$ describes the gravitational field energy measured in any coordinate system. However, the pseudotensor is not a tensor, and is coordinate-dependent. It yields meaningful integrals of energy and momentum only if the fields asymptotically approach the Minkowski form.\cite{wbgpt}
Consideration of the volume integral of (\ref{tem}) allows identification of the energy-momentum 4-vector that locally describes the combined gravitational and non-gravitational contributions at every spacetime point:
\begin{equation}
\label{mom}
P^\mu = \int ( T^{\mu t} + \Theta^{\mu t} )d^3x
\end{equation}
where the $t$ index indicates the time component.
Since the field energy cannot be localized, the momentum absorbed by the gravitational field of the universe $\Theta^{j t}$ owing to rectilinear inductive frame dragging of matter $T^{j t}$ must be expressed as an integral of $\Theta^{j t}$ over a finite region, where roman indices indicate the 3 spatial coordinates. Therefore, from (\ref{mom}) the momentum lost by a body moving with velocity $v$ for a time $\Delta t$ would be absorbed into the field in a volume of size $\Delta x=v\Delta t$:
\begin{equation}
\label{mt}
\int T^{ti} d^3 x = -\int \Theta^{ti} d^3x = - v \int \Theta^{ti}\Delta t\ d^2x
\end{equation}
Cosmologically, $\Theta^{ti}=0$ for (\ref{rwm}), indicating that the gravitational field of the universe carries no net momentum. However, the cosmological gravitational field is integrated over all matter in the universe, with varying concentrations and in varying states of motion. The total gravitational field will include all these effects from source motion, including the effects in the field of local mass currents. However, the cosmological field will always naturally be described in the frame in which the net motion of the matter in the universe $T^{it}_U$ is zero, and the net motion of matter in the universe can always be determined from an integral over all space:
\begin{equation}
\int T_U^{it}d^3x \equiv \rho_U c^2 V^i
\end{equation}
The metric (\ref{rwm}) is then formulated in the frame for which $V^i=0$, composed as it is of all the disparate motion of all the matter in the universe.
Since matter can lose momentum into the gravitational field of the universe, it would seem that the gravitational field of the universe can be detected from dynamics. The Hubble constant can be measured by looking outward into space, at galactic redshift. It would appear that Hubble drag also allows such a determination by looking inward, at the motion of material bodies.
\section{6. Detectability of Hubble drag in the laboratory}
Cosmological redshift is given in terms of the scale factor by the simple textbook formula
\begin{equation}
\label{cr}
{\lambda_e\over a(t_e)} = {\lambda_r\over a(t_r)}
\end{equation}
where the subscripts indicate ``emission" and ``reception" of a photon, $t$ is the time, and $\lambda$ the wavelength. The Hubble expansion famously redshifts light, and the redshift increases with time. The galaxies can be considered beacons emitting light at known frequencies, and the cosmological redshift increases the separation between beacons at rest, and reddens their light. Since the momentum of a photon is inversely proportional to the wavelength, the reddening of (\ref{cr}) can be seen to correspond to the same momentum loss described by Hubble drag.
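As a minimal worked example of (\ref{cr}), suppose a line emitted at 500 nm when the scale factor was $a(t_e)=1$ is received when $a(t_r)=1.25$; the values are illustrative assumptions.
\begin{verbatim}
# Minimal illustration of the redshift relation lambda_e/a_e = lambda_r/a_r.
a_e, a_r = 1.0, 1.25        # assumed scale factors at emission and reception
lam_e = 500e-9              # emitted wavelength in meters (arbitrary line)
lam_r = lam_e * a_r / a_e   # received wavelength
z = lam_r / lam_e - 1
print(f"lambda_r = {lam_r*1e9:.0f} nm, redshift z = {z:.2f}")  # 625 nm, z = 0.25
\end{verbatim}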
We have also seen in (\ref{hd}) that the geodesic equation describes an analogous loss of momentum for massive bodies, not just for light. In fact, the same term in the geodesic equation accounts for the effect on both timelike and null trajectories. Yet due to the ``running-awayness" of the momentum loss associated with Hubble expansion -- the passive draining of momentum, instead of losing it to work elsewhere in the system -- there is a misconception that Hubble drag is not a ``real" force.
In principle, an experiment could be conducted to detect the Hubble expansion within the bounded space of a laboratory, without looking at the distant galaxies or at the cosmic microwave background. In addition to the well-known effect on redshift that accrues over time as the universe expands, there is also an effect on the Doppler shift arising from Hubble drag. At intermediate cosmological redshifts, there is an interplay of these effects before the magnitude of cosmological redshift overwhelms the Doppler shifts in local velocity effects. Of course, Hubble drag has been modeled already statistically in galaxy dynamics in terms of ``peculiar velocities"\cite{P2}, but we are talking here of detecting it on a single moving body.
Let us consider a hypothetical laboratory deep in intergalactic space, far from local gravitational sources.
Consider a test body carrying a beacon of known frequency, and receding along the line of sight. A beacon at rest in the frame of (\ref{rwm}) will have a redshift from the Hubble expansion, and this is the usual cosmological redshift. A beacon receding in the frame of (\ref{rwm}) will have an additional redshift from the Doppler effect. There is also a third redshift effect from the time dilation of the moving source that will always accompany its motion. Let us quantify this.
Let the radiation emitted from the beacon be at a fixed frequency corresponding to some narrow spectral line. The period of the radiation in the rest frame of the moving body can be written as a differential $d{\widetilde t}$, which is an invariant:
\begin{equation}
c^2 d{\widetilde t}^2 = -\eta_{\mu\nu} d{\widetilde x}^\mu d{\widetilde x}^\nu = -g_{\mu\nu} dx^\mu dx^\nu = -g_{\mu\nu}{dx^\mu\over dt} {dx^\nu \over dt} dt^2
\end{equation}
where $dt$ is the time in the frame of (\ref{rwm}).
In addition to this time dilation in curved space, the period $d t_o$ of the radiation received by an observer at rest in the cosmic isotropic frame will be shifted by the motion of the object along the line of sight:
\begin{equation}
\label{td}
d t_o = (1+a v_r/c)d t = (1+a v_r/c) \left( -{1\over c^2}\, g_{\mu\nu}{dx^\mu\over dt} {dx^\nu \over dt}\right)^{-1/2}d {\widetilde t}
\end{equation}
where $v_r$ is the velocity of the source along the radial direction from the observer, $a$ is the scale factor, and $g_{\mu\nu}$ is given by (\ref{rwm}). For this time dilation expression, the Hubble timescale is much longer than the period of the radiation, so the term in $g_{\mu\nu}$ can be evaluated at a single instant.
Let us now convert $(dx/dt) \rightarrow v$, and convert (\ref{td}) from period to frequency. Then $d t_o\rightarrow 1/f_o$ and $d{\widetilde t}\rightarrow 1/{\widetilde f}$, and the frequency shift of a beacon moving against the metric (\ref{rwm}) is
\begin{equation}
\label{dop}
{f_o\over {\widetilde f}} = {(1-a^2 v^2/c^2)^{1/2}\over (1+av_r/c)} \quad \underset{v_r = v}{\rightarrow}\quad
\left( {1-av/c}\over {1+av/c} \right)^{1/2}
\end{equation}
where the last expression follows because the beacon is receding from the observer along the line of sight. Note that the previous three expressions relate the frequency of radiation seen in two different coordinate systems co-located at the same spacetime point. We have not yet propagated the radiation from the moving source to a fixed observer.
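For orientation, the combined shift (\ref{dop}) can be evaluated numerically; the sketch below assumes $a = 1$ and a recession speed of $0.1c$.
\begin{verbatim}
# Numerical evaluation of the combined time-dilation and Doppler shift.
import math
a, beta = 1.0, 0.1                               # assumed a and v/c
ratio = math.sqrt((1 - a*beta) / (1 + a*beta))   # f_o / f~ for recession
print(f"f_o/f~ = {ratio:.4f}")                   # 0.9045: a net redshift
\end{verbatim}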
Now let us evaluate the time derivative of $f_o$ in (\ref{dop}):
\begin{equation}
\label{tdv}
{\bm\dot f_o\over {\widetilde f}} = -{f_o\over {\widetilde f}} { {(v\bm\dot a /c + a\bm\dot v/c)}\over (1-a^2v^2/c^2)}
\end{equation}
where the dot notation $\bm\dot f_o$, $\bm\dot a$, and $\bm\dot v$ indicates time derivatives.
There are two effects on the time derivative of the Doppler shift (\ref{tdv}). One is from the time derivative of the scale factor, and represents the effect of the Hubble expansion carrying bodies apart. The other is from the time derivative of the beacon velocity, presumed to be receding along the line of sight. These two effects operate oppositely on the redshift.
The scale factor increases with time, so its time derivative is positive. Yet $\bm\dot v < 0$ according to (\ref{hdu}), due to the slowing of Hubble drag. Putting (\ref{hdu}) into (\ref{tdv}) yields:
\begin{equation}
{\bm\dot f_o\over {\widetilde f}} = {{\bm\dot a v}\over {c}}{f_o\over {\widetilde f}} >0
\end{equation}
The Hubble drag effect dominates in the Doppler shift over the Hubble expansion effect, leading to a net positive time derivative of the Doppler-shifted frequency as measured at the beacon.
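The sign of this result can be checked numerically. The sketch below assumes the leading-order drag law $\bm\dot v \approx -2Hv$ read off from (\ref{hd3}), with $a \approx 1$ and a fiducial $H$; both are assumptions for illustration.
\begin{verbatim}
# Sign check of the Doppler-shift drift under Hubble drag (leading order).
H = 2.3e-18                 # assumed fiducial Hubble rate, 1/s
a, beta = 1.0, 0.1          # scale factor ~ 1, v/c = 0.1
beta_dot = a*(-2*H*beta) + (H*a)*beta       # a*vdot/c + adot*v/c, adot = H a
fdot_over_f = -beta_dot / (1 - (a*beta)**2)
print(f"fdot/f_o = {fdot_over_f:.2e} 1/s")  # > 0: the redshift is decreasing
\end{verbatim}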
This makes sense physically. A beacon receding at velocity $v$ at time $t=0$ will have its maximum Doppler shift according to (\ref{dop}). As Hubble drag works to decelerate the beacon, the reduction in speed against the isotropic background leads to a decrease of the redshift at time $t >0$ compared to $t=0$. Although the Hubble expansion is always working to increase redshift, the deceleration of Hubble drag produces a stronger effect that leads to a net bluing of the Doppler redshift. As Hubble drag brings the object to rest over the age of the universe, the locally-measured Doppler and time-dilation redshift effects vanish. This is indicated schematically in Figure 1.
\begin{figure*}
\includegraphics[width=0.7\textwidth]{Fig1.jpg}
\caption{Schematic diagram of the bluing of Doppler redshift accruing from Hubble drag. This diagram shows the time evolution of the locally-observed Doppler shift of a receding beacon. This frequency shift at the beacon includes effects from time dilation, recession, and Hubble expansion. Hubble drag will slow the velocity relative to the cosmic rest frame, manifesting as a decrease in locally-observed Doppler shift. \label{Fig1}}
\end{figure*}
In this way, by detecting the action of Hubble drag on the Doppler shift of a moving body, and the resulting frequency shift operating opposite to the cosmological redshift, it is in principle possible to determine cosmological parameters by measurements on material bodies, without looking at distant objects.
The local detection of cosmic expansion through the time derivative of the redshift invites consideration of M\"ossbauer-type experiments. The Pound-Rebka experiment used this technique and obtained a frequency measurement of gravitational redshift to a part in $10^{15}$.\cite{pr} They used controlled Doppler shifts to cancel out the gravitational redshift over a 22.5-meter vertical path.
The sensitivity of gravitational redshift detection can be compared with the necessary sensitivity of a Hubble drag experiment. In that case, the size of the effect (\ref{sz}) is the ratio of the dynamical timescale to the age of the universe. For a timescale of hours, the ratio is of order $10^{-14}$. The associated frequency shift is reduced by a factor of $v/c$, so that a beacon moving at 10\% of the speed of light would show a shift detectable at the Pound-Rebka resolution. Configuring a free-fall experiment that eliminates all gravitational effects except Hubble expansion might, however, be impractical in a terrestrial setting.
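The estimate can be stated numerically; the sketch below assumes a fiducial Hubble rate and a one-hour integration.
\begin{verbatim}
# Order-of-magnitude sketch of detectability at Pound-Rebka resolution.
H = 2.3e-18                    # assumed fiducial Hubble rate, 1/s
dt = 3600.0                    # dynamical timescale: one hour
beta = 0.1                     # v/c
frac_shift = H * dt * beta     # drag-induced fractional frequency drift
print(f"Delta f/f ~ {frac_shift:.1e}")   # ~ 8e-16, near the 1e-15 resolution
\end{verbatim}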
Nonetheless, a redshift observation of this type would be described by the time integral of radiation received from the decelerating source:
\begin{equation}
\label{ds}
\Delta f_o \equiv \int_{t_e}^{t_r} \bm\dot f_o(t') dt' = {\widetilde f}
\int_{t_e}^{t_r} {{\bm\dot a v(t')}\over {c}}
\left( {1-av/c}\over {1+av/c} \right)^{1/2} dt'
\end{equation}
where $v(t)$ is the solution to (\ref{hdu}), which depends on the scale factor $a(t)$, which in turn depends on the cosmological energy densities. Therefore, the solution to (\ref{hdu}) is a well-defined, albeit complex, function of the standard cosmological model. The cosmology-dependent Hubble drag $v(t)$ is then integrated along the line of sight in (\ref{ds}) to yield the total frequency shift measured by an observer stationary in the frame of (\ref{rwm}). As Hubble drag slows the beacon over the age of the universe, its locally-measured Doppler shift vanishes as shown in Fig.1, and the redshift seen by a fixed observer merges with the average Hubble flow measured with standard candles at rest in the frame of (\ref{rwm}).
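A rough numerical version of (\ref{ds}) is sketched below for a toy matter-dominated cosmology with $a \propto t^{2/3}$, using the drag law $v \propto a^{-2}$ that follows from the leading-order slowing $\bm\dot v = -2Hv$; all of these inputs are assumptions made for illustration.
\begin{verbatim}
# Toy quadrature of the integrated frequency shift over one hour.
import numpy as np

H0 = 2.3e-18                       # assumed fiducial Hubble rate, 1/s
t0 = 2.0 / (3.0 * H0)              # matter-dominated age, a ~ t^(2/3)
ts = np.linspace(t0, t0 + 3600.0, 1000)
a = (ts / t0) ** (2.0 / 3.0)       # a(t0) = 1
adot = (2.0 / 3.0) * a / ts
beta = 0.1 / a**2                  # v/c ~ a^-2 from vdot = -2 H v
integrand = adot * beta * np.sqrt((1 - a*beta) / (1 + a*beta))
delta_f_over_f = np.trapz(integrand, ts)
print(f"Delta f/f~ over one hour ~ {delta_f_over_f:.1e}")   # ~ 8e-16
\end{verbatim}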
This shows that the momentum transfer of Hubble drag is real and measurable, and in principle observable by looking inward at a laboratory configuration large enough to track a fast body for approximately an hour.
\section{7. Conclusions}
\begin{enumerate}
\item The Hubble drag force exists for motion with respect to the isotropic galactic free-fall frame because the isotropy of the cosmological metric picks out a preferred frame.
\item The energy and momentum lost by Hubble drag are dissipated into the gravitational field of the universe on cosmological timescales.
\item The effect of Hubble drag on Doppler shift is larger and opposite to that of the Hubble expansion, leading to a bluing of Doppler-redshifted objects over the age of the universe.
\item Since Hubble drag always operates on test bodies, it implies the cosmological metric can be measured in principle through laboratory-scale dynamical experiments. If resolution similar to the Pound-Rebka experiment were attainable, Hubble drag should be detectable in free-fall Doppler shifts over an integration time of order 1000 seconds for bodies with $v/c \sim 0.1$.
\item These considerations show there is a channel for exchange of energy and momentum between a test body and the gravitational field of the universe. This is stated mathematically in (\ref{mt}).
\item The isotropic cosmological metric acquires off-diagonal components in boosted frames. In these frames, the Hubble drag force can be considered inductive rectilinear frame-dragging. It arises from the inductive part of the gravito-electric force, and constitutes a type of frame-dragging that is not covered by usual gravito-magnetic and frame-dragging invariants.
\end{enumerate}
\vspace{6pt}
\section{8. Acknowledgements}
This research was funded by DARPA DSO under award number D19AC00020.
Yaroslav Balytski evaluated the gravito-magnetic scalar with tensor algebra software.
\section{9. References}